Tell us about your request
Describe the Data Engineering developer you're looking for.
Interview candidates
Receive vetted candidate profiles matching your engineering needs.
Hire Data Engineering developers
When ready, select a developer to hire.
Arc helps you hire senior Data Engineering developers in Canada. With 175 vetted Data Engineering programmers in Canada available for hire on a full-time basis, we have one of the largest networks of vetted talent. Our vetting process ensures that the senior Data Engineering developers and experts you hire in Canada are ones you can trust.
Meet Data Engineering developers with verified technical and communication skills who are ready to interview.
Hire Data Engineering developers in Canada who have been thoroughly vetted.
Hire a senior Data Engineering developer in 14 days for full-time employment.
Want to expand your Data Engineering development team in Canada? Here are some of our top remote senior Data Engineering developers in Canada for you to browse, interview, and hire.
An industrious, astute, and technically focused data engineer with hands-on experience building analytics tools that use the data pipeline to provide actionable insights. Able to deal well with ambiguity, prioritize needs, and deliver results in a dynamic environment. Passionate about finding solutions to challenging and complex data problems. Interested in working with Python, SQL, Airflow, dbt, AWS, Databricks, and Snowflake.
• Over 6 years of experience in data engineering and data pipeline design, development, and implementation as a senior data engineer and data developer.
• Strong experience in the Software Development Life Cycle (SDLC), including requirements analysis, design specification, and testing, in both Waterfall and Agile methodologies.
• Strong experience writing scripts with the Python, PySpark, and Spark APIs for data analysis.
• Experience with Azure Cloud, Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analysis Services, and big data technologies (Apache Spark).
• Experience with GCP Cloud Storage, BigQuery, Composer, Cloud Dataproc, Cloud SQL, Cloud Functions, and Cloud Pub/Sub.
• Worked on ETL migration by creating and deploying AWS Lambda functions to provide a serverless data pipeline that writes to the Glue Data Catalog and can be queried from Athena.
• Extensively used Python libraries including PySpark, pytest, PyMongo, pyexcel, psycopg, embedPy, NumPy, and Beautiful Soup.
• Migrated an existing on-premises application to GCP; used GCP services such as Cloud Dataflow and Dataproc for processing and storing small data sets.
• Hands-on experience with Spark Core, Spark SQL, and Spark Streaming, and with creating DataFrames in Spark with Scala.
• Experience with NoSQL databases: worked on table row-key design and on loading and retrieving data for real-time processing, with performance improvements based on data access patterns.
• Experience with Unix/Linux systems, shell scripting, and building data pipelines.
• Extensive experience with Hadoop architecture and its components, including HDFS, JobTracker, TaskTracker, NameNode, DataNode, and MapReduce concepts.
• Performed complex data analysis and provided critical reports to support various departments.
• Expertise in developing reports and dashboards using Power BI and Tableau visualizations.
• Experience building large-scale, highly available web applications; working knowledge of web services and other integration patterns.
• Experienced with version control systems such as Git, GitHub, CVS, and SVN to keep code versions and configurations organized; efficient at task tracking and issue management using JIRA.
• Good communication skills, a strong work ethic, and the ability to work efficiently in a team, with good leadership skills.
• Experience with continuous integration and automation using Jenkins.
• Experience building Docker images using an accelerator and scanning images with various scanning techniques.
I am a software engineer with 9 years of experience translating client requirements into end-to-end machine learning systems. Skilled in problem-solving, data modeling, and various technologies, including Python, Scala, and Git.
Ready to hire your ideal Data Engineering developers?
Arc has a large talent pool worldwide, spanning 190 countries and over 170 technologies.
Hire remote developers from the top 2% of talent in Canada to assist your engineering team and deliver your projects today.
Arc helps you build your team with our network of full-time and freelance Data Engineering developers worldwide, spanning 190 countries.
We assist you in assembling your ideal team of programmers in your preferred location and timezone.