NVIDIA

Senior Applied AI Software Engineer, Distributed Inference Systems

Location

Remote restrictions apply

Salary Estimate

N/A

Seniority

Senior

Tech stacks

Python
Kubernetes
Rust
+21

Visa

U.S. visa required

Permanent role
2 days ago

NVIDIA Dynamo is an innovative, open-source platform focused on efficient, scalable inference for large language and reasoning models in distributed GPU environments. By applying advanced techniques in serving architecture, GPU resource management, and intelligent request handling, Dynamo achieves high-performance AI inference for demanding applications. Our team is addressing the most challenging issues in distributed AI infrastructure, and we're searching for engineers enthusiastic about building the next generation of scalable AI systems.

As a Senior Applied AI Software Engineer on the Dynamo project, you will address some of the most sophisticated and high-impact challenges in distributed inference, including:

  • Dynamo k8s Serving Platform: Build the Kubernetes deployment and workload management stack for Dynamo to facilitate inference deployments at scale. Identify bottlenecks and apply optimization techniques to fully use hardware capacity.
  • Scalability & Reliability: Develop robust, production-grade inference workload management systems that scale from a handful to thousands of GPUs, supporting a variety of LLM frameworks (e.g., TensorRT-LLM, vLLM, SGLang).
  • Disaggregated Serving: Architect and optimize the separation of prefill (context ingestion) and decode (token generation) phases across distinct GPU clusters to improve throughput and resource utilization. Contribute to embedding disaggregation for multi-modal models (Vision-Language models, Audio Language Models, Video Language Models).
  • Dynamic GPU Scheduling: Develop and refine Planner algorithms for real-time allocation and rebalancing of GPU resources based on fluctuating workloads and system bottlenecks, ensuring peak performance at scale.
  • Intelligent Routing: Enhance the smart routing system to efficiently direct inference requests to GPU worker replicas with relevant KV cache data, minimizing re-computation and latency for sophisticated, multi-step reasoning tasks.
  • Distributed KV Cache Management: Innovate in the management and transfer of large KV caches across heterogeneous memory and storage hierarchies, using the NVIDIA Optimized Transfer Library (NIXL) for low-latency, cost-effective data movement.
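To make the intelligent-routing idea above concrete, here is a minimal, hypothetical sketch in Python of KV-cache-aware request routing: pick the worker replica whose cached token prefix overlaps most with the incoming request, breaking ties by load. The function and field names are illustrative assumptions, not Dynamo's actual API.

```python
def shared_prefix_len(cached: list[int], request: list[int]) -> int:
    """Length of the common token prefix between a worker's cache and a request."""
    n = 0
    for a, b in zip(cached, request):
        if a != b:
            break
        n += 1
    return n

def route(request_tokens: list[int], workers: dict[str, dict]) -> str:
    """Choose the worker that can reuse the most KV cache for this request.

    `workers` maps worker id -> {"cached_tokens": list[int], "load": int}.
    """
    def score(wid: str) -> tuple[int, int]:
        w = workers[wid]
        # Maximize cache overlap first, then prefer the least-loaded worker.
        return (shared_prefix_len(w["cached_tokens"], request_tokens), -w["load"])
    return max(workers, key=score)

workers = {
    "gpu-0": {"cached_tokens": [1, 2, 3, 4], "load": 3},
    "gpu-1": {"cached_tokens": [1, 2, 9], "load": 1},
}
print(route([1, 2, 3, 4, 5], workers))  # gpu-0: reuses 4 cached tokens
```

A production router would additionally track cache eviction and weigh transfer cost, but the core trade-off — cache reuse versus load balance — is the same.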

What You'll Be Doing

  • Collaborate on the design and development of the Dynamo Kubernetes stack.
  • Introduce new features to the Dynamo Python SDK and Dynamo Rust Runtime Core Library.
  • Design, implement, and optimize distributed inference components in Rust and Python.
  • Contribute to the development of disaggregated serving for Dynamo-supported inference engines (vLLM, SGLang, TRT-LLM, llama.cpp, mistral.rs).
  • Improve intelligent routing and KV-cache management subsystems.
  • Contribute to open-source repositories, participate in code reviews, and assist with issue triage on GitHub.
  • Work closely with the community to address issues, capture feedback, and evolve the framework’s APIs and architecture.
  • Write clear documentation and contribute to user and developer guides.
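The disaggregated-serving work mentioned above separates prompt ingestion from token generation. The following is a toy sketch of that split, with the KV cache as the hand-off between stages; in Dynamo these stages run on distinct GPU pools, and the tensors here are stand-in values for illustration only.

```python
def prefill(prompt_tokens: list[int]) -> list[tuple[int, int]]:
    """Simulate context ingestion: build one (key, value) entry per prompt token."""
    return [(t, t * 2) for t in prompt_tokens]  # stand-in for real KV tensors

def decode(kv_cache: list[tuple[int, int]], max_new_tokens: int) -> list[int]:
    """Simulate token generation that reuses the transferred KV cache."""
    out = []
    last = kv_cache[-1][0]
    for _ in range(max_new_tokens):
        nxt = last + 1                  # stand-in for a real sampling step
        kv_cache.append((nxt, nxt * 2))  # decode extends the cache as it goes
        out.append(nxt)
        last = nxt
    return out

cache = prefill([10, 11, 12])  # would run on the prefill GPU pool
print(decode(cache, 3))        # would run on the decode GPU pool -> [13, 14, 15]
```

The design point is that prefill is compute-bound over the whole prompt while decode is memory-bandwidth-bound per token, so giving each phase its own cluster lets both be sized and scheduled independently.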

What We Need To See

  • BS/MS or higher in computer engineering, computer science, or a related engineering field (or equivalent experience).
  • 5+ years of proven experience in a related field.
  • Strong proficiency in systems programming (Rust and/or C++), with experience in Python for workflow and API development. Experience with Go for developing Kubernetes controllers and operators.
  • Deep understanding of distributed systems, parallel computing, and GPU architectures.
  • Experience with cloud-native deployment and container orchestration (Kubernetes, Docker).
  • Experience with large-scale inference serving, LLMs, or similar high-performance AI workloads.
  • Background with memory management, data transfer optimization, and multi-node orchestration.
  • Familiarity with open-source development workflows (GitHub, continuous integration and continuous deployment).
  • Excellent problem-solving and communication skills.

Ways To Stand Out From The Crowd

  • Prior contributions to open-source AI inference frameworks (e.g., vLLM, TensorRT-LLM, SGLang).
  • Experience with GPU resource scheduling, cache management, or high-performance networking.
  • Understanding of LLM-specific inference challenges, such as context window scaling and multi-model agentic workflows.

With highly competitive salaries and a comprehensive benefits package, NVIDIA is widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to outstanding growth, our special engineering teams are growing fast. If you're a creative and autonomous engineer with a genuine passion for technology, we want to hear from you!

The base salary range is 148,000 USD - 287,500 USD. Your base salary will be determined based on your location, experience, and the pay of employees in similar positions.

You will also be eligible for equity and benefits. NVIDIA accepts applications on an ongoing basis.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

JR1998487

About NVIDIA

👥10000-
📍Santa Clara, CA
🔗Website

NVIDIA benefits and support

🏥Health insurance
🌴Retirement pension
🌞Healthy living stipend
📕Learning stipend
🍼Maternity/paternity leave
⌚️Flexible working hours
📊Stock options
🗺Company retreat
