Job Title: Software Engineer (SQL & ETL)
Duration: 6+ months contract (with possibility of extension)
Location: 100% remote
W2 role / no C2C
Overview:
An opportunity to design and optimize large-scale data processing pipelines on a high-impact migration project critical to the company's data infrastructure.
Responsibilities:
● Lead the migration from Redshift to Databricks, ensuring data integrity and minimal downtime.
● Work with key partners such as data platform, security, and network engineering to define and implement ETL pipelines and Databricks workflows for engineering teams and business intelligence.
● Understand, document, and communicate best practices for product engineering teams working in public cloud environments.
● Provide technical guidance and mentorship to junior engineers via code reviews and design docs.
● Contribute to improving the Databricks infrastructure stack by optimizing workflows and implementing best practices.
● Ensure BI dashboards, reporting pipelines, and financial data processing are successfully transitioned to Databricks.
● Establish ownership structures for ongoing data processing, quality assurance, and monitoring post-migration.
Requirements:
● AWS expertise in both development and production, with a strong understanding of cloud automation and best practices.
● 5+ years of SQL experience (Spark and Snowflake-specific SQL is a plus).
● Experience designing, building, and maintaining data processing systems at scale.
● Strong Python development skills with experience in data engineering and scripting.
● Proficiency in Terraform, at least at an operational level; experience deploying and managing Terraform as a service is a plus.
● Understanding of Service-Oriented Architecture (SOA) and best practices for distributed systems.
● 5+ years of experience with schema design, dimensional data modeling, and scalable data architecture.
● Proven track record of managing, communicating, and collaborating with internal teams on data platform plans and capabilities.