Arc Exclusive

Senior 3D Pipeline Engineer (Production Scaling & API Architecture) - Full-time - East Asia/EMEA

Location

Remote restrictions apply

Hourly rate

Min. experience

5+ years

Hours per week

40 hours

Duration

52 weeks

Required skills

Python, Computer Vision, Blender, PyTorch, RESTful API

Freelance job
Posted 13 hours ago
Actively recruiting / 10 applicants

We’re here to help you

Juliana Torrisi is in direct contact with the company and can answer any questions you may have.

Juliana Torrisi, Recruiter

Role Mission (First 90 Days):

We're hiring a "Real Killer" 3D Pipeline Engineer to transform our advanced 3D analysis system from a powerful prototype into a production-grade, API-ready platform.

In your first 90 days, you'll architect and implement the scaling infrastructure that takes our 8-phase pipeline (currently processing 500+ components with 72% accuracy) to handle enterprise workloads at web scale.

This means refactoring our Blender mega-script architecture for distributed processing, building robust APIs for our spatial intelligence system, and creating adapters for multiple 3D data formats (photogrammetric reconstructions, parametric CAD, gaussian splatting).

By day 90, we expect a demonstrable production system handling 10x our current load with sub-5-minute processing times, comprehensive API documentation, and at least one major enhancement to our computer vision pipeline (perhaps integrating DINO v3 for improved component segmentation or implementing real-time 3D format conversion).

Core Responsibilities

  • Production Pipeline Architecture: Transform our research-grade 3D pipeline into a battle-tested production system. You'll refactor our 12,000+ line codebase to support horizontal scaling, implement robust error recovery, and create comprehensive monitoring. This includes breaking our monolithic Blender mega-script into microservices, implementing job queuing for parallel processing, and ensuring our KD-tree spatial queries maintain O(log n) performance at scale. You'll own the entire pipeline from GLB ingestion through AI-powered component identification to final export.

  • 3D Format Mastery & Integration: Build universal adapters for diverse 3D data sources. Our clients throw everything at us – photogrammetric reconstructions from construction sites (with elevation maps, color data, depth channels), parametrically modeled CAD files from engineers, emerging formats like gaussian splatting from neural rendering. You'll create robust importers that normalize these into our pipeline, preserving critical metadata while extracting geometric signatures for our deduplication engine. This isn't just format conversion – it's intelligent data transformation that maintains semantic meaning across representations.

  • Computer Vision Enhancement: Level up our vision capabilities with state-of-the-art models. While our multi-provider ensemble (GPT-4V + Claude) works well, you'll integrate specialized models like DINO v3 for superior geometric understanding. This means implementing new vision backbones, creating training pipelines for domain-specific fine-tuning, and building A/B testing infrastructure to validate improvements. You'll work at the intersection of research and production – reading papers on Monday, prototyping midweek, and shipping to production by Friday.

  • Blender Python Wizardry: Our Blender integration is the heart of our geometric analysis. You'll extend our mega-script architecture to handle new challenges: dynamic scene composition for context rendering, procedural material generation for better component visibility, and advanced camera positioning algorithms for optimal viewpoint selection. This requires deep Blender Python expertise – not just using the API, but understanding its performance characteristics and working around its limitations. You'll optimize our rendering pipeline to maintain quality while drastically reducing computation time.

  • API Design & Developer Experience: Create the APIs that make our 3D intelligence accessible to the world. This means designing RESTful endpoints for job submission, implementing WebSocket connections for real-time progress updates, and creating SDKs that developers actually want to use. You'll handle everything from request validation to response streaming for large 3D datasets. The goal: any developer should be able to integrate our 3D analysis in under 30 minutes with beautiful documentation and helpful error messages.

  • Rapid Research Integration: Stay ahead of the curve by quickly evaluating and integrating new research. When a relevant paper drops (new 3D representation, better vision model, novel geometric algorithm), you'll assess its potential impact, build a proof-of-concept, and determine production feasibility within days, not months. This requires both deep technical understanding and pragmatic judgment – knowing when to adopt cutting-edge tech and when to stick with proven solutions.
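To make the job-queuing expectation concrete, here is a minimal sketch of fanning component work out to a pool of workers using only the standard library. It is illustrative, not the actual pipeline: `process_component` is a hypothetical stand-in for one unit of analysis work.

```python
import queue
import threading

def process_component(component_id):
    """Hypothetical stand-in for one unit of pipeline work."""
    return "component-%d:done" % component_id

def run_jobs(component_ids, num_workers=4):
    """Fan component jobs out to a pool of worker threads and collect results."""
    jobs = queue.Queue()
    results = []
    lock = threading.Lock()

    for cid in component_ids:
        jobs.put(cid)

    def worker():
        while True:
            try:
                cid = jobs.get_nowait()
            except queue.Empty:
                return  # queue drained, worker exits
            out = process_component(cid)
            with lock:  # results list is shared across workers
                results.append(out)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(results)
```

In production this shape usually maps onto a broker-backed queue (e.g. Redis, which is in the stack) rather than in-process threads, but the ingest/dispatch/collect structure is the same.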
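On the camera-positioning point: the core geometry of framing an object is simple even outside Blender. A minimal sketch, assuming we frame an object's bounding sphere so it exactly fits the camera's view cone (the sphere is tangent to the cone when sin(fov/2) equals radius/distance):

```python
import math

def framing_distance(radius, fov):
    """Camera distance at which a bounding sphere of the given radius exactly
    fits a view cone with the given full field of view (radians).

    The sphere is tangent to the cone when sin(fov / 2) == radius / distance.
    """
    return radius / math.sin(fov / 2)
```

In an actual bpy script this distance would be applied along the chosen view direction from the bounding-sphere center; here it is just the trigonometry that a viewpoint-selection algorithm builds on.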
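The request-validation side of the API work can be sketched in a few lines. The payload fields, the supported-format set, and the limits below are all hypothetical, chosen only to illustrate the "helpful error messages" goal:

```python
from dataclasses import dataclass

SUPPORTED_FORMATS = {"glb", "gltf", "obj", "ply"}  # illustrative subset

@dataclass
class JobRequest:
    """A hypothetical job-submission payload for a 3D analysis API."""
    asset_url: str
    input_format: str
    max_components: int = 5000

def validate(req):
    """Return a list of human-readable validation errors (empty when valid)."""
    errors = []
    if not req.asset_url.startswith(("http://", "https://")):
        errors.append("asset_url must be an http(s) URL")
    if req.input_format.lower() not in SUPPORTED_FORMATS:
        errors.append("unsupported format: %r" % req.input_format)
    if req.max_components <= 0:
        errors.append("max_components must be positive")
    return errors
```

Returning all errors at once, rather than failing on the first, is the kind of small design choice that makes an API pleasant to integrate in under 30 minutes.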

Required Experience

  • Blender Python Mastery: You're in the top 1% of Blender Python developers. You've written complex automation scripts, understand bpy's threading model, and can debug Blender's quirks in your sleep. You know when to use modifiers vs. direct mesh manipulation, how to optimize viewport performance, and have probably contributed to Blender add-ons. Show us a Blender script you're proud of – ideally something that processes hundreds of objects efficiently.

  • Computer Vision Implementation: Deep experience implementing vision models in production. You're fluent in PyTorch/TensorFlow, have fine-tuned models like DINO/SAM/CLIP, and understand the practical tradeoffs between accuracy and inference speed. You can explain vision transformers to a CEO and implement custom CUDA kernels when needed. Experience with 3D-specific vision tasks (point cloud processing, mesh analysis, multi-view synthesis) is crucial.

  • 3D Data Format Expertise: You've worked with the full spectrum of 3D data – from raw photogrammetry outputs to pristine CAD models. You understand coordinate systems, material representations, and the nightmare of 3D format standards. Experience with point clouds (LAS/LAZ/PCD), meshes (OBJ/PLY/glTF), and emerging representations (NeRF/Gaussian Splatting) is essential. You should be able to write a basic glTF parser from scratch.

  • Production Systems Architecture: You've scaled systems from prototype to production. This means experience with distributed processing, job queues, caching strategies, and API design. You understand how to profile Python code, when to drop to C++, and how to design systems that gracefully degrade under load. Experience with cloud services (especially Google Cloud) and modern DevOps practices is required.

  • Geometric Algorithm Implementation: Strong foundation in computational geometry. You can implement KD-trees, understand mesh simplification algorithms, and optimize spatial queries. You're comfortable with linear algebra, can debug transformation matrices, and understand concepts like manifold meshes and geometric hashing. This isn't just theory – you've implemented these algorithms in production code.
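The "basic glTF parser from scratch" bar above is concrete and checkable. A minimal sketch of the binary (GLB) container side, following the glTF 2.0 layout of a 12-byte header plus length-prefixed chunks:

```python
import json
import struct

GLB_MAGIC = 0x46546C67  # ASCII "glTF", little-endian

def parse_glb(data):
    """Parse a binary glTF (GLB) container and return its JSON chunk.

    GLB layout per the glTF 2.0 spec: a 12-byte header (magic, version,
    total length), then chunks of (length, type, payload).
    """
    magic, version, length = struct.unpack_from("<III", data, 0)
    if magic != GLB_MAGIC:
        raise ValueError("not a GLB file")
    offset = 12
    chunks = {}
    while offset < length:
        chunk_len, chunk_type = struct.unpack_from("<II", data, offset)
        offset += 8
        chunks[chunk_type] = data[offset:offset + chunk_len]
        offset += chunk_len
    return json.loads(chunks[0x4E4F534A])  # chunk type ASCII "JSON"
```

A full parser would also resolve the binary chunk against the JSON's buffer views and accessors; the header/chunk walk above is the from-scratch starting point.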
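Since KD-trees come up in both the responsibilities and the experience bar, here is a compact reference implementation of the build and nearest-neighbor query over 3D points. It is a textbook sketch, not production code (a real system would balance, batch, and likely use scipy's `cKDTree`):

```python
import math

def build_kdtree(points, depth=0):
    """Recursively build a KD-tree over 3D points; nodes are (point, left, right)."""
    if not points:
        return None
    axis = depth % 3  # cycle x, y, z as the splitting axis
    points = sorted(points, key=lambda p: p[axis])
    mid = len(points) // 2
    return (points[mid],
            build_kdtree(points[:mid], depth + 1),
            build_kdtree(points[mid + 1:], depth + 1))

def nearest(node, target, depth=0, best=None):
    """Return the tree point closest to target (O(log n) on balanced data)."""
    if node is None:
        return best
    point, left, right = node
    if best is None or math.dist(point, target) < math.dist(best, target):
        best = point
    axis = depth % 3
    near, far = (left, right) if target[axis] < point[axis] else (right, left)
    best = nearest(near, target, depth + 1, best)
    # Only descend the far side if the splitting plane could hide a closer point.
    if abs(target[axis] - point[axis]) < math.dist(best, target):
        best = nearest(far, target, depth + 1, best)
    return best
```

The plane-distance pruning in the last step is exactly what keeps spatial queries logarithmic as assemblies scale.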

Preferred Experience

  • 3D Reconstruction & Photogrammetry: Experience with the full photogrammetry pipeline – from image capture to mesh generation. You understand COLMAP, have used tools like Metashape or RealityCapture, and know how to handle challenging reconstruction scenarios. Bonus points for experience with depth sensors, structured light scanning, or LiDAR processing.

  • Neural 3D Representations: Hands-on experience with neural rendering techniques. You've implemented NeRF variants, understand Gaussian Splatting's advantages, and can explain why these matter for real applications. Experience with libraries like nerfstudio or gsplat implementations shows you're tracking the cutting edge.

  • Specialized Vision Models: You've gone beyond basic CNN classifiers. Experience with geometric deep learning (PointNet, DGCNN), self-supervised models (DINO, MAE), or 3D-aware architectures shows depth. If you've fine-tuned these models for domain-specific tasks or implemented custom architectures, we want to hear about it.

  • CAD/CAM Integration: Understanding of parametric modeling and CAD systems. Experience with APIs like Fusion 360, SolidWorks, or FreeCAD shows you understand how engineers create 3D data. Knowledge of STEP/IGES formats and B-rep vs. mesh representations indicates deep domain expertise.

  • Real-time 3D Processing: Experience with real-time constraints – game engines, AR/VR applications, or live 3D streaming. You understand LOD systems, occlusion culling, and GPU optimization. Three.js or Unity/Unreal experience shows you understand the full pipeline from processing to visualization.
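The LOD systems mentioned above reduce, at their core, to picking a mesh resolution from viewing distance. A minimal sketch with hypothetical distance thresholds (real engines typically use screen-space error rather than raw distance):

```python
def select_lod(distance, lod_ranges=(10.0, 50.0, 200.0)):
    """Pick a level-of-detail index from camera distance.

    LOD 0 is the full-resolution mesh; each successive level is a
    simplified version used when the object is farther away.
    """
    for lod, max_dist in enumerate(lod_ranges):
        if distance <= max_dist:
            return lod
    return len(lod_ranges)  # beyond the last range: lowest detail or impostor
```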

Ideal Signals of Excellence

  • Show Us Your Pipeline: The ultimate signal – you've built something similar before. Maybe it's a 3D processing pipeline for a robotics company, an automated CAD analysis tool, or a photogrammetry processing system. We want to see architecture diagrams, performance metrics, and lessons learned. A GitHub repo with a well-documented 3D processing project immediately puts you in the top tier.

  • Research to Production Track Record: Evidence that you can rapidly commercialize research ideas. Perhaps you implemented a paper's algorithm and made it 10x faster, or took an academic prototype and scaled it to handle real-world data. Show us PRs where you've integrated cutting-edge models into production systems, or blog posts explaining how you made research practical.

  • Performance Optimization War Stories: Tell us about the time you made something 10x faster. Maybe you vectorized a geometric algorithm, implemented a spatial index that changed everything, or found a clever way to parallelize Blender operations. We love engineers who profile first, optimize second, and can explain exactly why their solution worked.

  • Multi-format 3D Portfolio: Demonstrate experience across 3D formats with actual projects. Show us a photogrammetry project you processed, a CAD assembly you analyzed, or a neural radiance field you trained. Bonus points for tools or scripts that convert between formats while preserving semantic information.

  • Open Source Contributions: Contributions to relevant projects (Blender, Open3D, PyTorch3D, trimesh) show you're engaged with the community. Even better if you've released your own 3D processing tools. Quality matters more than quantity – one meaningful PR to Blender's Python API is worth a hundred trivial fixes.

Why This Role Matters

Our 3D pipeline is the beating heart of Spatial Support. While competitors offer basic 3D viewers or simple CAD tools, we're building true 3D intelligence – systems that understand assemblies like an experienced engineer, identify components with superhuman accuracy, and make spatial relationships queryable at web scale. Your work directly enables this vision.

Consider the impact: today, analyzing a complex assembly requires hours of manual inspection. Our current pipeline reduces this to minutes with 72% accuracy. Your mission is to make it seconds with 90%+ accuracy, handling any 3D format thrown at it. This isn't incremental improvement – it's the difference between a cool demo and a product that transforms industries.

By 2026, we envision our platform processing millions of 3D models, from construction site scans to manufacturing assemblies to medical devices. Every optimization you make, every format you support, every percentage point of accuracy you gain translates to real-world impact. You're not just scaling a pipeline – you're building the infrastructure for the next generation of 3D understanding.

The "It" Factor

The perfect candidate for this role is a rare breed – someone with the geometric intuition of a graphics programmer, the pragmatism of a systems engineer, and the vision of a researcher. You get genuinely excited about spatial data structures but equally passionate about API response times. You can spend Monday implementing a paper on neural signed distance fields and Friday optimizing database queries.

What sets you apart is your ability to navigate the entire stack. You're equally comfortable debugging a Blender Python script, profiling CUDA kernels, designing RESTful APIs, or explaining geometric algorithms on a whiteboard. You have opinions on coordinate systems and strong feelings about mesh topology. When you see a 3D model, you don't just see triangles – you see opportunities for optimization, challenges in segmentation, and potential for intelligence.

You're "commercially minded" but technically rigorous. You know that shipping beats perfection, but you also know when cutting corners will haunt you. You can make the hard calls: when to use the latest transformer model vs. a simple heuristic, when to process client-side vs. server-side, when to cache vs. compute.

Most importantly, you're excited by the challenge of making 3D data as queryable and intelligent as text data has become. Just as NLP transformed how we process documents, you want to transform how we process 3D worlds. If you've ever looked at a complex CAD assembly and thought "this should be as searchable as a database," you're our person.

Technical Brief Addendum

  • Current Scale: 500+ components per assembly, 72% accuracy, 2-5 minute processing
  • Target Scale: 5000+ components, 90% accuracy, <30 second processing
  • Tech Stack: Python 3.8+, Blender 3.0+, PyTorch, Redis, MongoDB, Google Cloud
  • Immediate Challenges: Distributed Blender processing, real-time format conversion, API rate limiting
  • Research Opportunities: DINO v3 integration, Gaussian Splatting pipelines, few-shot component learning
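One of the immediate challenges listed, API rate limiting, has a well-known shape. A token-bucket sketch, with the clock injectable for testing (rates and capacities here are placeholders, not the platform's actual limits):

```python
import time

class TokenBucket:
    """Token-bucket limiter: sustain `rate` requests/second, burst up to `capacity`."""

    def __init__(self, rate, capacity, now=time.monotonic):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity  # start full so initial bursts are allowed
        self.now = now
        self.last = now()

    def allow(self):
        """Consume one token if available; refill based on elapsed time."""
        current = self.now()
        self.tokens = min(self.capacity,
                          self.tokens + (current - self.last) * self.rate)
        self.last = current
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In the stack described above, the bucket state would typically live in Redis so limits hold across API replicas.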

We're not looking for someone who can do everything on day one. We're looking for someone who can identify what matters most, execute with excellence, and build systems that scale. If you're reading this and thinking "I could make this 10x better," please reach out. Include a 3D processing project you're proud of and let's build the future of 3D intelligence together.
