Hire top senior developers in Canada with Arc
Arc has a large talent pool worldwide, spanning 190 countries and over 170 technologies.
Hire top 2% remote developers in Canada to assist your engineering team and deliver your projects today.
Your vision deserves a great software developer to bring it to life. Finding the right fit on your own is challenging, but Arc makes hiring the best freelance or full-time remote software developers in Canada easy. Save time by connecting directly with vetted developers who are ready to interview. Find the expert software developers you need today.
Data Engineering developer in Canada (UTC-4)
An industrious, astute, and technically focused data engineer with hands-on experience building analytics tools that use the data pipeline to deliver actionable insights. Able to deal well with ambiguity, prioritize needs, and deliver results in a dynamic environment. Passionate about solving challenging and complex data problems. Interested in working with Python, SQL, Airflow, dbt, AWS, Databricks, and Snowflake.
Vetted Data Engineering developer in Canada (UTC-4)
6+ years of experience in data engineering, data pipeline design, development, and implementation as a Sr. Data Engineer and Data Developer.
• Strong experience in the Software Development Life Cycle (SDLC), including requirements analysis, design specification, and testing, in both Waterfall and Agile methodologies.
• Strong experience writing scripts with the Python, PySpark, and Spark APIs for data analysis.
• Experience with Azure Cloud: Azure Data Factory, Azure Data Lake Storage, Azure Synapse Analytics, Azure Analysis Services, and big data technologies (Apache Spark).
• Experience with GCP: Cloud Storage, BigQuery, Composer, Cloud Dataproc, Cloud SQL, Cloud Functions, and Cloud Pub/Sub.
• Worked on ETL migration services by creating and deploying AWS Lambda functions to provide a serverless data pipeline that writes to the Glue catalog and can be queried from Athena.
• Extensive use of Python libraries including PySpark, Pytest, PyMongo, PyExcel, Psycopg, embedPy, NumPy, and Beautiful Soup.
• Migrated an existing on-premises application to GCP, using services like Cloud Dataflow and Dataproc for processing and storing small data sets.
• Hands-on experience with Spark Core, Spark SQL, and Spark Streaming, and with creating DataFrames in Spark with Scala.
• Experience with NoSQL databases: worked on table row key design and on loading and retrieving data for real-time processing, with performance improvements based on data access patterns.
• Experience with Unix/Linux systems, shell scripting, and building data pipelines.
• Extensive experience with Hadoop architecture and its components, such as HDFS, Job Tracker, Task Tracker, Name Node, Data Node, and MapReduce concepts.
• Performed complex data analysis and provided critical reports to support various departments.
• Expertise in developing reports and dashboards using Power BI and Tableau visualizations.
• Experience building large-scale, highly available web applications; working knowledge of web services and other integration patterns.
• Experienced with version control systems like Git, GitHub, CVS, and SVN to keep code versions and configurations organized; efficient task tracking and issue management using JIRA.
• Good communication skills, strong work ethic, and the ability to work efficiently in a team, with good leadership skills.
• Experience with continuous integration and automation using Jenkins.
• Experience building Docker images using accelerators and scanning images with various scanning techniques.
Data Engineering developer in Canada (UTC-7)
_I enjoy working as a Data Engineering Consultant in the cloud, building analytics workflows and discovering valuable insights that help solve problems for client businesses and other organizations._

I have a keen interest in ETL and ELT data pipelines, machine learning systems, analytics engineering, data warehousing, and cloud development operations. I am on a career path toward becoming a seasoned Data and Analytics Engineer with strong Machine Learning Operations (MLOps) and cloud computing skills. With an educational background in engineering technology and applied sciences, I have acquired a broad and rich skillset that spans data and machine learning engineering, software development, and cloud operations. I have worked on a number of engineering and cloud projects, both individually and as part of Agile development teams. My experience covers building data products in retail, energy, telco, banking and financial services, and HR analytics.
Vetted Data Engineering developer in Canada (UTC-5)
Strong verbal and written communication skills; strong networking and collaboration skills; good stress management and a fast learning ability. Highly skilled in collecting business requirements and translating them into technical specifications. Proven ability to deliver advanced analytics solutions with Databricks. Experienced with CI/CD tools (Git, Azure DevOps, GitLab, GitHub). 9 years of experience dedicated to data engineering. Databricks Certified Data Engineer Associate | Microsoft Certified Data Engineer Associate. Professional in building and deploying distributed machine learning and deep learning models; proficient in hyperparameter tuning. 4 years of experience dedicated to data science and machine learning, with a Data Science MicroMasters degree from the University of California San Diego ([edX.org](http://edx.org/)). Databricks Certified Machine Learning Associate. Provided consultations for a range of sectors, including government, manufacturing, energy, and transportation. Excellent coaching and training skills for junior team members. Agile software development. Proactive self-learner, continuously prepared to embrace new challenges such as generative AI.
Data Engineering developer in Canada (UTC-5)
Data Engineer with 5+ years of experience in data modeling, ETL/ELT, and SQL/SparkSQL. Skilled in Python scripting, Apache Airflow, and cloud platforms like AWS and Azure. Experienced with Databricks and building Lakehouse architectures for efficient data processing. Expertise in data governance, quality guidelines, and managing both relational (Snowflake, PostgreSQL) and NoSQL databases (MongoDB). Focused on delivering high-quality, reliable data for actionable business insights.
Vetted Data Engineering developer in Canada (UTC-5)
I am a software engineer with 9 years of experience in translating client requirements into end-to-end machine learning systems. Skilled in problem-solving, data modeling, and various technologies including Python, Scala, and Git.
Data Engineering developer in Canada (UTC-4)
Over two decades of experience in software design and development, specializing in data architecture: Snowflake, Databricks, Apache Spark, dbt, Cassandra, Hadoop, Kafka, MemSQL, Neo4j, Airflow.
• Strong experience migrating existing SQL-based systems to NoSQL systems (Cassandra, MemSQL, Neo4j) by re-hosting, re-factoring, and re-platforming.
• Expertise in designing and architecting data platforms and EDW solutions.
• Experience in time-series solutions.
• Extensive experience designing and implementing software and infrastructure architecture for large-scale enterprise applications using on-site and cloud technologies: AWS, GCP, and Azure.
• Vast experience in software design, research, development, team management, and client communications.
• Strong background using OOD/OOP/SOA methodologies.
• Excellent technical expertise in Python, Java/JEE, and Scala, as well as .NET technologies.
• Significant experience with IMDGs (in-memory data grids) and distributed object caches.
• Strong database skills in designing OLTP and OLAP systems using SQL.
• Well versed in designing and implementing Cassandra, MemSQL, and Hadoop.
• Professional exposure to the big data frameworks Apache Spark and Apache Kafka.
• Significant experience in containerization with Docker and Kubernetes.
• Well aware of development methodologies ranging from heavyweight approaches like Waterfall to modern lightweight agile methodologies like Scrum and Kanban.
Data Engineering developer in Canada (UTC-4)
I am a data engineer with experience across multiple domains. I have also participated in various national and international machine learning competitions.
Data Engineering developer in Canada (UTC+1)
Analytical and detail-oriented Data Analyst with years of experience transforming complex datasets into actionable insights. Proficient in SQL, Python, Excel, and various BI tools, with a strong ability to communicate findings to both technical and non-technical stakeholders. Skilled at problem-solving and committed to leveraging data to drive business decisions and improve operational efficiency.
Vetted Data Engineering developer in Canada (UTC-5)
Machine Learning Engineer with over 6 years of industry experience, including digital-focused roles at Rolls-Royce and as technical lead at East Africa's largest injection mould maker. Holds a Master's in Machine Learning from Georgia Tech. Specializes in deploying ML/AI models for predictive analysis and process optimization that solve complex business challenges. Please visit my portfolio at www.ksbcode.com.
Meet Data Engineering developers who are fully vetted for domain expertise and English fluency.
Stop reviewing 100s of resumes. View Data Engineering developers instantly with HireAI.
Get access to 450,000+ developers in 190 countries, saving up to 58% vs. traditional hiring.
Feel confident hiring Data Engineering developers with hands-on help from our team of expert recruiters.
Share with us your goals, budget, job details, and location preferences.
Connect directly with your best matches, fully vetted and highly responsive.
Decide who to hire, and we'll take care of the rest. Enjoy peace of mind with secure freelancer payments and compliant global hires via trusted EOR partners.
Ready to hire your ideal Data Engineering developers?
Get started
Arc helps you build your team with our network of full-time and freelance Data Engineering developers worldwide, spanning 190 countries.
We assist you in assembling your ideal team of programmers in your preferred location and timezone.