Clear Fracture

Big Data Engineer

Location

Remote restrictions apply

Salary Estimate

N/A

Seniority

N/A

Tech stacks

Data
Software Development
Database
+32

Visa

U.S. visa required

Permanent role
3 days ago

Software Engineering Focus / Data Modeling / AI & Agentic Systems

Clear Fracture is building AI-driven data integration systems that enable organizations to connect, transform, and reason over complex data using agentic workflows. Our platform operates across cloud and on-prem environments and is designed to support multi-tenant, production-scale use cases.

We are looking for a Data Engineer who operates as a software engineer first, with strong experience in data modeling and data systems. You will play a key role in building the core data layer that powers our agentic platform—designing schemas, implementing data services, and enabling reliable, scalable data flows.

In addition to building core data infrastructure, you will also develop real use cases on the platform itself, helping shape how users interact with data. This includes designing data interfaces, abstractions, and tooling that make it easier to understand, model, and work with data across the system.

This is not a traditional ETL-only role. You will write production code, design systems, and help define how data is represented, accessed, and understood across the platform.

Key Responsibilities

Data Modeling & System Design

  • Design and implement logical and physical data models for complex, evolving datasets.
  • Define schemas and access patterns that support multi-tenant usage and application-level workflows.
  • Balance normalization, performance, and flexibility across different storage systems.
  • Partner with product and engineering teams to translate requirements into scalable data designs.

Platform Use Cases & Data Interfaces

  • Develop real-world data use cases on top of the platform to validate and extend its capabilities.
  • Design and build data interfaces and abstractions that help users understand and work with data.
  • Contribute to systems such as:
    • Data glossaries
    • Semantic layers
    • Metadata and schema discovery tools
  • Help define how users explore, model, and interact with data within the platform.
  • Translate complex data structures into intuitive, usable representations.

Software Engineering for Data Systems

  • Build backend services and APIs that expose and operate on data models.
  • Implement data access layers that are reliable, maintainable, and performant.
  • Contribute to core application architecture where data and services intersect.
  • Write clean, testable, production-grade code.

Data Pipelines & Processing

  • Design and implement pipelines for ingesting, transforming, and validating data.
  • Support both batch and near-real-time processing workflows.
  • Build systems that handle structured, semi-structured, and unstructured data.
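As an illustration only (not part of the posting), the ingest-transform-validate shape described above can be sketched in a few lines of plain Python; all names here are hypothetical:

```python
def run_pipeline(raw_records):
    """Minimal batch pipeline sketch: ingest -> transform -> validate.

    Returns (clean_rows, rejected_rows) so bad records are quarantined
    rather than silently dropped.
    """
    clean, rejected = [], []
    for rec in raw_records:
        # Transform: normalize inconsistent column names from the source.
        row = {k.strip().lower(): v for k, v in rec.items()}
        # Validate: reject rows missing a key or with an impossible value.
        if row.get("id") is None or row.get("amount", 0) < 0:
            rejected.append(row)
        else:
            clean.append(row)
    return clean, rejected

clean, rejected = run_pipeline([
    {"ID ": 1, "Amount": 10.0},
    {"ID ": None, "Amount": 5.0},
])
```

A production version would swap the in-memory lists for the platform's storage and orchestration layers; the separation of transform and validate stages is the point of the sketch.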

AI & Agentic Workflow Integration

  • Enable data flows that support AI-driven and agent-based workflows.
  • Work with embeddings, context retrieval, and data representations used in modern AI systems.
  • Help design systems that make data accessible and useful for autonomous agents.
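The "context retrieval" mentioned above usually means ranking candidate documents by embedding similarity. A minimal, dependency-free sketch (toy vectors and names, purely illustrative):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, top_k=2):
    """Return the top_k doc ids most similar to the query embedding."""
    scored = sorted(corpus, key=lambda item: cosine(query_vec, item[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in scored[:top_k]]

# Toy corpus of (doc_id, embedding) pairs.
corpus = [
    ("orders_schema", [0.9, 0.1, 0.0]),
    ("billing_faq",   [0.1, 0.9, 0.1]),
    ("hr_policy",     [0.0, 0.1, 0.9]),
]
retrieve([1.0, 0.2, 0.0], corpus, top_k=1)  # -> ["orders_schema"]
```

In practice a vector database (Pinecone, ChromaDB, etc., as listed under preferred qualifications) replaces the linear scan, but the ranking principle is the same.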

Data Quality & Reliability

  • Implement validation, monitoring, and testing for data systems.
  • Ensure correctness, consistency, and observability of data pipelines and services.
  • Diagnose and resolve data-related issues in production environments.

Required Qualifications

  • Bachelor’s degree in Computer Science, Engineering, or related field, or equivalent practical experience.
  • 6+ years of professional experience in software engineering and/or data engineering roles.
  • Due to the nature of the work, U.S. Citizenship and the ability to obtain a Secret Clearance are required.
  • Strong programming skills in Python (or similar backend language).
  • Experience designing and implementing data models for production systems, with advanced knowledge of dimensional modeling topics such as slowly changing dimensions and entity-relationship diagrams.
  • Proficiency in SQL and experience with relational databases (e.g., PostgreSQL).
  • Experience building backend services or APIs that interact with data systems.
  • Experience designing and operating data pipelines (ETL/ELT).
  • Familiarity with NoSQL databases and different data storage paradigms.
  • Experience working with large datasets and performance optimization.
  • Experience with Docker and containerized development workflows.
  • Familiarity with Kubernetes-based environments.
  • Strong understanding of software engineering fundamentals (testing, version control, system design).
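For concreteness, a Type 2 slowly changing dimension (one of the dimensional modeling topics named above) versions rows instead of overwriting them: the current row is expired and a new row is appended. A minimal in-memory sketch, with illustrative field names:

```python
from datetime import date

def scd2_apply(dimension, updates, today):
    """Apply a Type 2 SCD update: expire the current row for each changed
    business key and append a new open-ended version.

    `dimension` rows are dicts with keys:
      business_key, attribute, valid_from, valid_to (None = current row).
    """
    for key, new_value in updates.items():
        current = next(
            (row for row in dimension
             if row["business_key"] == key and row["valid_to"] is None),
            None,
        )
        if current is not None and current["attribute"] == new_value:
            continue  # no change: keep the current version open
        if current is not None:
            current["valid_to"] = today  # expire the old version
        dimension.append({
            "business_key": key,
            "attribute": new_value,
            "valid_from": today,
            "valid_to": None,
        })
    return dimension

# Example: a customer moves from the EU region to the US.
dim = [{"business_key": "C1", "attribute": "EU",
        "valid_from": date(2024, 1, 1), "valid_to": None}]
scd2_apply(dim, {"C1": "US"}, date(2025, 6, 1))
```

After the update the dimension holds both versions, so historical queries can still join against the region that was valid at the time.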

Preferred Qualifications

  • Experience building multi-tenant data systems.
  • Familiarity with semantic layers, data catalogs, or data discovery systems.
  • Experience designing data-facing user interfaces or developer tooling.
  • Experience with streaming systems (e.g., Kafka or similar).
  • Experience with orchestration tools (e.g., Airflow, Dagster, Prefect).
  • Experience working with AI/ML data pipelines or agent-based systems.
  • Experience supporting on-prem or hybrid deployments.
  • Exposure to data governance, access control, and metadata systems.
  • Experience with cloud platforms (AWS, Azure, GCP).
  • Familiarity with vector databases (e.g., Pinecone, ChromaDB) and embedding-based retrieval.

What We Value

  • Engineering mindset: You approach data systems as software systems, not just pipelines.
  • Data intuition: You understand how to model real-world complexity into clear, usable structures.
  • Product thinking: You care about how users interact with and understand data, not just how it is stored.
  • Systems thinking: You see how data flows through services, APIs, and AI systems.
  • Ownership: You take responsibility for the reliability and usability of what you build.
  • Pragmatism: You balance ideal design with real-world constraints.
  • Collaboration: You work effectively across engineering disciplines.

