Job Description
We are looking for an experienced Senior Data Engineer with a strong foundation in Python, SQL, and Snowflake, and hands-on expertise in BigQuery and Databricks. In this role, you will build and maintain scalable data pipelines and architecture to support analytics, data science, and business intelligence initiatives. You will work closely with cross-functional teams to drive data reliability, quality, and performance.
Responsibilities:
- Contribute to the design and architecture of a lakehouse solution, potentially leveraging technologies such as Iceberg, Snowflake, and BigQuery.
- Build and maintain robust ETL/ELT workflows using Python and SQL to handle structured and semi-structured data.
- Partner with data scientists and analysts to provide high-quality, accessible, and well-structured data.
- Ensure data quality, governance, security, and compliance across pipelines and data stores.
- Monitor, troubleshoot, and improve the performance of data systems and pipelines.
- Participate in code reviews and help establish engineering best practices.
- Mentor junior data engineers and support their technical development.
Qualifications:
- Degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience in data engineering, with at least 2 years working with BigQuery or Snowflake.
- Strong programming skills in Python for data processing and automation.
- Advanced proficiency in SQL for querying and transforming large datasets.
- Solid understanding of data modelling, warehousing, and performance optimization techniques.
- Proven experience in data cataloging and inventorying large-scale datasets.
- Hands-on experience implementing and working with Medallion architecture in data lakehouse environments.
- Experience with Iceberg, Data Mesh, or dbt is a plus.