Lensa

Senior MLOps Engineer - OpenShift AI - Remote, Ireland

Location: Remote (restrictions apply)

Salary Estimate: N/A

Seniority: Senior

Tech stacks: Software Development, AI, Kubernetes, +30

Contract role · Posted 6 days ago

Lensa is the leading career site for job seekers at every stage of their career. Our client, Red Hat, is seeking a Senior MLOps Engineer. Apply via Lensa today!

Job Summary

Do you want to help shape the future of AI by building robust infrastructure and tools for developing trustworthy large language models and agentic workflows? We're seeking a software engineer who combines strong systems engineering skills with a passion for AI safety to develop frameworks that ensure AI systems behave reliably and align with human values.

The OpenShift AI team is looking for a Senior Software Engineer with Kubernetes and MLOps or LLMOps experience to join our rapidly growing engineering team. Our team’s focus is to make machine learning model deployment and monitoring seamless, scalable, and trustworthy across the hybrid cloud and the edge. This is an exciting opportunity to help build and shape the next generation of hybrid cloud MLOps platforms.

In this role, you will contribute as a technical infrastructure expert for the responsible AI features of the open source Open Data Hub (https://opendatahub.io/) project by actively participating in KServe (https://github.com/kserve), TrustyAI (https://github.com/trustyai-explainability), Kubeflow (https://github.com/kubeflow/), and several other open source communities. You will work as part of an evolving development team to rapidly design, secure, build, test, and release model serving, trustworthy AI, and model registry capabilities. This is primarily an individual contributor role: you will be a key contributor to trustworthy AI and MLOps/LLMOps upstream communities and collaborate closely with internal cross-functional development teams.
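For context on the model serving work mentioned above: in this stack, a deployed model is typically expressed as a KServe InferenceService custom resource running on Kubernetes/OpenShift. The snippet below is a minimal, illustrative sketch that creates one through the standard Kubernetes Python client; the namespace, resource name, and storage URI are hypothetical placeholders, not details from this posting.

  # Minimal sketch (assumptions: a reachable cluster with KServe installed;
  # the namespace, name, and storageUri below are placeholders).
  from kubernetes import client, config

  config.load_kube_config()  # use config.load_incluster_config() inside a cluster

  inference_service = {
      "apiVersion": "serving.kserve.io/v1beta1",
      "kind": "InferenceService",
      "metadata": {"name": "sklearn-iris", "namespace": "models"},
      "spec": {
          "predictor": {
              "model": {
                  "modelFormat": {"name": "sklearn"},
                  "storageUri": "gs://kfserving-examples/models/sklearn/1.0/model",
              }
          }
      },
  }

  # InferenceService is a custom resource, so it is created via CustomObjectsApi.
  client.CustomObjectsApi().create_namespaced_custom_object(
      group="serving.kserve.io",
      version="v1beta1",
      namespace="models",
      plural="inferenceservices",
      body=inference_service,
  )

Once the resource is reconciled, KServe exposes an inference endpoint for the model; monitoring and trustworthiness features then attach to that serving layer.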

Job Responsibilities

  • Lead the architecture and implementation of MLOps/LLMOps systems within OpenShift AI, establishing best practices for scalability, reliability, and maintainability while actively contributing to relevant open source communities
  • Design and develop robust, production-grade features focused on AI trustworthiness, including model monitoring, bias detection, and explainability frameworks that integrate seamlessly with OpenShift AI (see the fairness-metric sketch after this list)
  • Drive technical decision-making around system architecture, technology selection, and implementation strategies for key MLOps components, with a focus on open source technologies like KServe and TrustyAI
  • Define and implement technical standards for model deployment, monitoring, and validation pipelines, while mentoring team members on MLOps best practices and engineering excellence
  • Collaborate with product management to translate customer requirements into technical specifications, architect solutions that address scalability and performance challenges, and provide technical leadership in customer-facing discussions
  • Lead code reviews, architectural reviews, and technical documentation efforts to ensure high code quality and maintainable systems across distributed engineering teams
  • Identify and resolve complex technical challenges in production environments, particularly around model serving, scaling, and reliability in enterprise Kubernetes deployments
  • Partner with cross-functional teams to establish technical roadmaps, evaluate build-vs-buy decisions, and ensure alignment between engineering capabilities and product vision
  • Provide technical mentorship to team members, including code review feedback, architecture guidance, and career development support while fostering a culture of engineering excellence
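To make the bias detection responsibility above concrete, the sketch below computes statistical parity difference, one common group fairness metric that trustworthy-AI monitoring frameworks such as TrustyAI report. It is a generic illustration in plain NumPy, assuming binary predictions and a binary protected attribute; it is not TrustyAI's actual API.

  # Illustrative only: statistical parity difference (SPD).
  # SPD = P(favorable outcome | unprivileged group)
  #     - P(favorable outcome | privileged group); values near 0 suggest parity.
  # (Sign conventions vary between libraries.)
  import numpy as np

  def statistical_parity_difference(predictions, privileged):
      """predictions: 0/1 model outputs; privileged: 1 = privileged group member."""
      predictions = np.asarray(predictions)
      privileged = np.asarray(privileged)
      p_unprivileged = predictions[privileged == 0].mean()
      p_privileged = predictions[privileged == 1].mean()
      return p_unprivileged - p_privileged

  # Hypothetical batch of scored requests pulled from a serving log
  preds = [1, 0, 1, 1, 0, 1, 0, 0]
  group = [1, 1, 1, 1, 0, 0, 0, 0]
  print(statistical_parity_difference(preds, group))  # 0.25 - 0.75 = -0.5

In a production monitoring pipeline, a metric like this would be computed continuously over inference traffic and alerted on when it drifts past a configured threshold.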

Required Qualifications

  • 5+ years of software engineering experience, with at least 4 years focusing on ML/AI systems in production environments
  • Strong expertise in Python, with demonstrated experience building and deploying production ML systems
  • Deep understanding of Kubernetes and container orchestration, particularly in ML workload contexts
  • Extensive experience with MLOps tools and frameworks (e.g., KServe, Kubeflow, MLflow, or similar)
  • Track record of technical leadership in open source projects, including significant contributions and community engagement
  • Proven experience architecting and implementing large-scale distributed systems
  • Strong background in software engineering best practices, including CI/CD, testing, and monitoring
  • Experience mentoring engineers and driving technical decisions in a team environment

Preferred Qualifications

  • Experience with Red Hat OpenShift or similar enterprise Kubernetes platforms
  • Contributions to ML/AI open source projects, particularly in the MLOps space
  • Background in implementing ML model monitoring, explainability, or bias detection systems
  • Experience with LLM operations and deployment at scale
  • Public speaking experience at technical conferences
  • Advanced degree in Computer Science, Machine Learning, or related field
  • Experience working with distributed engineering teams across multiple time zones
  • Familiarity with AI governance and responsible AI practices

About Red Hat

Red Hat (https://www.redhat.com/) is the world’s leading provider of enterprise open source (https://www.redhat.com/en/about/open-source) software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Inclusion at Red Hat

Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)

Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.

