About The Company
The company is redefining the future of airport operations through cutting-edge autonomous technology. Its mission is to transform ground handling, the complex, fast-paced work of aircraft servicing, by delivering reliable, intelligent automation that improves safety, efficiency, and scalability. Backed by leading venture capital firms in aviation and autonomous systems, the company partners with the world's largest airlines and ground service providers to bring advanced autonomy to the tarmac.
Role Overview
We are seeking a highly skilled Senior Perception Engineer to lead the development of robust 3D object detection models that power our autonomous ground vehicles. You will play a critical role in shaping the perception stack, leveraging multi-modal sensor data (LiDAR, camera, and radar) to ensure precise and reliable object detection in dynamic airport environments.
You’ll be part of a highly collaborative team of experts, where your contributions will directly impact safety-critical systems deployed in real-world operations. This is a unique opportunity to work on complex perception challenges in a rapidly growing sector of autonomy.
Responsibilities
- Design and implement state-of-the-art 3D object detection models, including multi-modal approaches that fuse data from LiDAR, cameras, and radar sensors.
- Develop and maintain robust data pipelines for collection, preprocessing, and annotation to support model training and evaluation.
- Build and manage the lifecycle of a proprietary multi-modal dataset tailored for object detection and tracking in real-world ground handling scenarios.
- Train, validate, and deploy perception models to production, ensuring reliability and real-time performance in safety-critical environments.
- Conduct in-depth experimentation and benchmarking of cutting-edge models from the academic literature and open-source frameworks to evaluate their suitability for production use and improve system accuracy.
- Establish rigorous evaluation metrics and monitoring systems to track model performance across development and deployment phases.
- Collaborate cross-functionally with autonomy, systems, and software teams to integrate perception outputs into the broader autonomy stack.
- Mentor junior engineers, share best practices, and contribute to a strong engineering culture grounded in innovation, ownership, and excellence.
Qualifications
- Master’s or PhD in Computer Science, Robotics, Electrical Engineering, Artificial Intelligence, or a related technical field.
- Minimum of 5 years of experience developing and deploying 2D and 3D object detection systems in real-world or research environments.
- Deep expertise in computer vision and deep learning techniques, particularly related to 3D perception and multi-sensor fusion.
- Strong understanding of the full perception pipeline, including sensing, data preprocessing, object detection, tracking, and localization.
- Proficient in Python, with working knowledge of relevant frameworks and libraries such as PyTorch, TensorFlow, OpenCV, and ROS.
- Demonstrated ability to lead technical projects from design through deployment with minimal oversight.
Preferred Qualifications
- PhD in Computer Vision, Robotics, or a related field with a strong publication record.
- Publications in top-tier conferences such as CVPR, ICCV, NeurIPS, ICRA, IJCAI, or AAAI.
- Experience working with real-time perception systems in safety-critical or highly dynamic environments.
- Familiarity with automotive or robotic sensor suites, including calibration and synchronization of multi-modal data.
- Experience mentoring or leading small teams of engineers or researchers.