Our client's Integrations team is looking for a Senior Backend Engineer with a focus on distributed systems to join their ranks. You will work on an existing but rapidly evolving microservice architecture that moves data into, out of, and within the client's platform at hyperscale. This is a chance to get in early and do things right, to solve problems with the least amount of code possible, and to answer hard questions with a scientific, data-driven approach.
What You’ll Do:
- Lead or participate in architectural discussions and decision-making. Debate and commit.
- This is a hands-on IC role: design and implement Golang microservices within the Integrations ecosystem, running on Kubernetes in GCP.
- Own entire data pipelines that span APIs (gRPC, REST), messaging systems (Google Pub/Sub, AWS SQS, Kafka), shared blob storage (S3, GCS), data warehouses (BigQuery, Cloudberry), and other storage modalities (NoSQL stores such as Bigtable).
- Work alongside our DevOps engineers to ensure we have the latest and greatest development instrumentation and production observability (tracing, dashboards, alerting, autoscaling, triage automation).
- Create and manage ETL/ELT workflows that transform billions of raw data points each day into quickly accessible information across our databases and data warehouses.
- Ensure our system delivers 99.999% reliability and uptime, with zero data loss. Identify risk vectors during the design stage, and implement solid mitigation strategies to ensure idempotency and transactionality as datasets flow through the pipeline.
- Perform scalability and stability analysis through an empirical approach: load testing, tracer-bullet tests in production, failover scenarios, etc.
- Work with both hard requirements and an open-ended future: satisfy today’s needs, but keep in mind where the platform is ultimately going.
What You’ll Have:
- 7+ years of experience in backend development for SaaS products.
- At least 5 years of experience working with distributed microservice systems, ideally in containerized environments (Docker, K8s, ECS).
- Hands-on experience with data warehouse technologies, plus familiarity with building data pipeline architectures and designing ETL flows.
- Experience with Golang, Rust, or a similar compiled language. Additional breadth with scripting languages (e.g., Python, Node.js) is a plus, but not required.
- Familiarity with modern distributed-system instrumentation and delivery tooling: Datadog, OpenTelemetry, GitHub Actions, ArgoCD, or their equivalents.
- B.S. degree in Computer Science preferred.
About You:
- You like to show, not tell. When debates arise, you quickly steer the conversation towards restating the problem at hand, defining success criteria, and testing all hypotheses with real data.
- You pick the right tool for each job. No shoehorning things in just because they’re familiar and comfortable.
- You actively keep yourself up to date on the latest technologies: you make prudent judgments about when they’re mature enough for prime time, and whether they’ll fit in with our stack.
- You feel responsible for the things you build, and you see them through.
- You love to solve hard problems, but you don’t invent them just to have something to solve.
- You cringe at the very idea of tech debt.