We are looking for an experienced Data Engineering Specialist to join a platform engineering team focused on high-throughput, event-driven systems. In this role, you'll design, build, and optimise large-scale data processing pipelines using Flink, Kafka, Java, and Azure. You'll deliver streaming solutions, deploy microservices on Kubernetes, and help shape future data integration strategies.
Key technologies:
- Stream processing: Flink (SQL, DataStream API)
- Messaging: Kafka
- Backend: Java, Quarkus framework
- Cloud: Azure (Event Hubs, Blob Storage)
- Deployment: Kubernetes, ArgoCD
Core responsibilities:
- Design and implement stream processing jobs to meet complex data processing requirements
- Build and maintain deployment pipelines for streaming and microservice workloads
- Configure checkpointing, savepointing, and integration with cloud storage
- Support infrastructure setup and operation within the Flink ecosystem
- Develop and manage data sinks, routing mechanisms, and connectors
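To give a flavour of the checkpointing and savepointing work above, here is a minimal sketch of the relevant Flink configuration, assuming checkpoint state is persisted to Azure Blob Storage; the storage account and container names are placeholders, not part of this posting:

```yaml
# Periodic checkpoints every 60 s with exactly-once processing guarantees
execution.checkpointing.interval: 60s
execution.checkpointing.mode: EXACTLY_ONCE

# Persist checkpoint state to Azure Blob Storage (placeholder account/container)
state.checkpoints.dir: abfss://checkpoints@mystorageaccount.dfs.core.windows.net/flink/checkpoints

# Default target directory for manually triggered savepoints
state.savepoints.dir: abfss://savepoints@mystorageaccount.dfs.core.windows.net/flink/savepoints
```

In practice this also requires the Flink Azure filesystem plugin on the cluster so the `abfss://` scheme resolves; the exact setup depends on the deployment.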
What we’re looking for:
- Strong hands-on experience with Flink and Kafka in production environments
- Solid Java development background; familiarity with Quarkus is beneficial
- Understanding of microservices, CQRS, Saga, and event-driven architecture patterns
- Experience with Kubernetes deployments and GitOps practices
- Problem-solving mindset and willingness to engage with both technical and business teams