Overview
We're seeking a senior data engineer to design, build, and scale geospatial data pipelines and data storage powering hundreds of thousands of wildfire simulations. You'll work with a highly skilled, multidisciplinary team that will look to you to own the pipeline.
This is a high-impact role—you'll need to move quickly, make pragmatic technical choices and build for scale from day one. You'll be a core technical contributor with ownership over our data flow.
Your work will directly support data-driven wildfire mitigation strategies and help shape how wildfire risk is understood and addressed.
About XyloPlan
XyloPlan helps communities, fire agencies, real estate developers, and insurers reduce wildfire risk through a science-based platform that identifies and mitigates Fire Pathways™, the routes fast-moving fires are most likely to take. Co-founded by fire service professionals and technologists, our work supports smarter resource allocation, improved insurance availability, and greater community resilience across fire-prone landscapes.
Description
Responsibilities:
- Design, build, and manage scalable data pipelines and ETL tooling
- Design and manage databases and the data lake/warehouse
- Work within a multidisciplinary team to help produce high-quality, usable data
- Ensure accurate data lineage
- Develop monitoring solutions to identify and rapidly resolve issues
- Create and maintain internal documentation
- Write maintainable, well-tested Python and SQL code that other engineers can build on
Qualifications:
- 7+ years of production data engineering experience with demonstrated scale (TB+ datasets, high-volume batch processing, or equivalent)
- Strong track record building complex data pipelines from scratch—you've shipped production systems, not just maintained legacy code
- Expert-level Python and SQL for data transformation and pipeline logic
- Hands-on experience with distributed batch computing frameworks (Spark, Dask, Ray, or similar)
- Production experience with workflow orchestration tools (Airflow, Prefect, Dagster, or similar)
- Experience implementing data lake/lakehouse/warehouse patterns
- Experience with GCP services (BigQuery, Cloud Storage, Cloud SQL/Postgres, Dataproc, Cloud Run, GKE, or similar)
- Strong CI/CD, testing, and version control practices
- Excellent communication, collaboration, and teamwork abilities
- Excellent problem-solving and analytical skills
Job Type: Full-time
Pay: $140,000.00 - $180,000.00 per year
Benefits:
- Dental insurance
- Health insurance
- Paid time off
- Vision insurance
Work Location: Remote