Personal details

Meenansha S. - Remote

Timezone: Kolkata (UTC+05:30)

Summary

I am passionate about software development and about keeping up to date with new technologies. I have built solutions ranging from batch programs to real-time systems. Let me help you find solutions to your problems too.

I am also a mentor: I love teaching programming, especially Big Data programming. Mentoring accelerates my own learning, because there is always something you want to understand better when passing that knowledge on to others!

Work Experience

SSE
ABC | Dec 2022 - Present
Python
MySQL
Snowflake
Big Data Engineer
Optum Global Solutions | Jun 2018 - Present
Scala
MySQL
Big Data
Apache Spark
Apache Kafka
Apache Hive
Build distributed, reliable, and scalable data pipelines using Sqoop and Spark to acquire data from multiple OLTP databases and ingest it into a data lake rather than a traditional data warehouse.
1. Develop Oozie workflows to orchestrate and schedule the entire process.
2. Develop batch processes using Spring Batch to extract, transform, and load data from Hive tables into MongoDB.
3. Develop REST APIs using the FHIR framework to expose the data stored in MongoDB to clients.
4. Develop Jenkins pipelines to automate the deployment of Docker images to OpenShift, supporting both scheduled and unscheduled releases.
5. Use Sonar to ensure good unit test coverage and that the code meets the set quality standards.
6. Develop an automated testing framework using JUnit and Cucumber.

Personal Projects

DataLake for Team Loki
2018
Scala
Shell
Cucumber
HBase
Sonar
Apache Spark
Apache Kafka
Oozie
Apache Hive
1. Developed Oozie workflows to orchestrate and schedule the entire process.
2. Developed batch processes using Spring Batch to extract, transform, and load data from Hive tables into MongoDB.
3. Developed REST APIs using the FHIR framework to expose the data stored in MongoDB to clients.
4. Developed Jenkins pipelines to automate the deployment of Docker images to OpenShift, supporting both scheduled and unscheduled releases.
5. Used Sonar to ensure good unit test coverage and that the code met the set quality standards.
6. Developed an automated testing framework using JUnit and Cucumber.
Future Clearing Model for Lloyds
2016
Scala
Shell
Apache Spark
Apache Hive
1. Developed batch processes using Scala and Spark to ingest structured and semi-structured data (XML) into Hive tables.
2. Developed transformation logic using Scala and Spark to prepare data for provisioning to downstream clients.
3. Built a data validation tool to ensure data integrity.
4. Developed a data security layer to mask sensitive data in test environments.
5. Developed Oozie workflows to orchestrate and schedule the entire process.
6. Designed HBase tables to store all metadata related to the batch jobs.
7. Automated all manual processes using shell scripting.
8. Developed a centralized logging framework in which logs are written to Kafka topics and later stored in HDFS.