Job Description:
We are looking for a candidate with 5+ years of experience in a Data Engineer role to join our growing team of analytics experts. The hire will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The Data Engineer will support the company on data initiatives and will ensure that optimal data delivery architecture is consistent throughout ongoing projects. Proficiency in both English and Mandarin is required to facilitate effective communication with our diverse team and international stakeholders.
Responsibilities:
Create and maintain optimal data pipeline architecture.
Assemble large, complex data sets that meet functional/non-functional business requirements.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using AWS ‘big data’ and SQL technologies.
Build analytics tools that utilize the data pipeline to provide actionable insights into customer behavior, operational efficiency and other key business performance metrics.
Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.
Keep our data separated and secure across national boundaries through multiple data centers and AWS regions.
Create data tools for analytics and data scientist team members that assist them in building and optimizing our product into an innovative industry leader.
Work with data and analytics experts to strive for greater functionality in our data systems.
Required Skills:
5+ years of hands-on experience with big data ecosystems and tools such as Databricks and Snowflake.
5+ years of hands-on experience with big data tools such as Spark, Hive, Hadoop, and EMR.
5+ years of hands-on experience with SQL and NoSQL databases.
3+ years of hands-on experience with AWS services such as EC2, ECS, MSK, RDS, and Redshift.
3+ years of Python development experience.
Hands-on experience building and optimizing ‘big data’ data pipelines, architectures and data sets.
Hands-on experience building and optimizing streaming/batch data pipelines.
Experience working on large-scale systems.
Strong analytical skills related to working with structured and unstructured datasets.
Successful history of manipulating, processing and extracting value from large disconnected datasets.
Knowledgeable about data modeling, data access, and data storage techniques.
Degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field; a Master's degree is a plus.
Fluency in spoken Mandarin is a must.
Preferred Qualifications:
Familiarity with indexing solutions such as Elasticsearch and Solr.
Familiarity with streaming systems and tools such as Spark Streaming, Kafka Streams, Flink, etc.
Familiarity with workflow management tools such as Airflow, Apache NiFi, etc.
Good understanding of machine learning, along with strong numerical and analytical skills.