There are two different roles:
Job Description
Title - Big Data Scala Spark Developer - ADF
Exp - 4 to 8 years
Location - Pune, Chennai, Hyderabad
NP - Immediate to 30 Days
Role Overview
Job Description :
- Develop and maintain scalable data pipelines using Scala within Azure Data Factory environments.
- Design and implement data ingestion, processing, and transformation workflows leveraging ADF capabilities.
- Collaborate with cross-functional teams to understand data requirements and deliver efficient solutions.
- Ensure data quality, reliability, and performance optimization throughout the data lifecycle.
- Participate in code reviews, testing, and deployment activities to maintain high standards.
- Stay updated with emerging trends and best practices in Scala programming and Azure Data Factory.
- Document technical specifications, processes, and workflows for knowledge sharing and compliance.
Roles and Responsibilities :
- Lead the design and development of data integration solutions using Scala and Azure Data Factory.
- Analyze complex data sets and troubleshoot issues related to data pipelines and workflows.
- Guide and mentor junior team members on Scala coding standards and ADF best practices.
- Collaborate with stakeholders to gather requirements and translate them into technical designs.
- Optimize data workflows for performance, scalability, and cost efficiency within Azure environments.
- Ensure adherence to project timelines, quality standards, and organizational policies.
- Communicate effectively with onshore and offshore teams to coordinate project activities and resolve challenges.
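To illustrate the kind of pipeline work this role involves, here is a minimal sketch of a Scala Spark ingestion job of the sort an ADF pipeline might trigger. The storage paths, column names, and quality rules are hypothetical, not part of the role description:

```scala
import org.apache.spark.sql.{SparkSession, functions => F}

// Sketch of an ingestion job invoked by an ADF pipeline activity.
// Paths and column names below are illustrative assumptions.
object IngestionJobSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("adf-ingestion-sketch")
      .getOrCreate()

    // In practice the source path would be parameterised per pipeline run.
    val raw = spark.read
      .option("header", "true")
      .csv("abfss://raw@mystorageaccount.dfs.core.windows.net/orders/")

    // Simple quality gate: require a key, deduplicate, stamp ingestion time.
    val curated = raw
      .filter(F.col("order_id").isNotNull)
      .dropDuplicates("order_id")
      .withColumn("ingested_at", F.current_timestamp())

    curated.write
      .mode("overwrite")
      .parquet("abfss://curated@mystorageaccount.dfs.core.windows.net/orders/")

    spark.stop()
  }
}
```

In a real ADF setup such a job would typically be packaged as a JAR and run via a Databricks or HDInsight activity, with paths and run dates passed in as pipeline parameters.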
Title - Big Data Scala Spark + Databricks
Exp - 5 to 8 years
Location - Pune, Chennai, Hyderabad
NP - Immediate to 30 Days
Role Overview
Job Description :
- Develop and maintain scalable data pipelines and workflows using Scala and Databricks.
- Leverage Azure Data Factory to orchestrate and automate data movement and transformation processes.
- Collaborate with data engineers and architects to build robust data integration solutions.
- Optimize performance of data processing jobs and ensure data quality and integrity.
- Participate in the design and implementation of data lake and data warehouse solutions.
- Utilize best practices for coding, testing, and deployment within cloud environments.
- Monitor and troubleshoot data pipeline issues to ensure minimal downtime.
- Stay updated with emerging technologies and industry trends related to big data and cloud data platforms.
Roles and Responsibilities :
- Design, develop, and deploy end-to-end data pipelines on Databricks using Scala.
- Implement data ingestion and transformation workflows leveraging Azure Data Factory.
- Collaborate with cross-functional teams to gather requirements and translate them into technical solutions.
- Conduct code reviews and provide mentorship to junior team members.
- Ensure compliance with data governance and security standards.
- Analyze and resolve performance bottlenecks in data processing workflows.
- Document technical designs, processes, and best practices.
- Participate in sprint planning, daily standups, and retrospectives as part of Agile development.
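As a flavour of the Databricks work described above, here is a minimal sketch of an incremental upsert into a Delta Lake table using the Scala Delta API. The table name, mount path, and key column are hypothetical:

```scala
import io.delta.tables.DeltaTable
import org.apache.spark.sql.SparkSession

// Sketch of an incremental merge (upsert) on Databricks.
// "curated.customers", the landing path, and customer_id are assumptions.
object DeltaMergeSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("delta-merge-sketch")
      .getOrCreate()

    // New or changed records landed by an upstream ADF pipeline.
    val updates = spark.read.format("delta").load("/mnt/landing/customers")

    // Merge into the curated table: update existing keys, insert new ones.
    DeltaTable.forName(spark, "curated.customers")
      .as("t")
      .merge(updates.as("s"), "t.customer_id = s.customer_id")
      .whenMatched().updateAll()
      .whenNotMatched().insertAll()
      .execute()
  }
}
```

This merge pattern is the usual Delta Lake alternative to full-table overwrites and is a common topic in the certification listed below.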
Mandatory Certification (by role):
- Apache Spark - Databricks Certified Associate Developer for Apache Spark
- Data Scientist - Databricks Certified Data Scientist Professional
- PySpark - Databricks Certified Associate Developer for Apache Spark