I am a Principal Software Engineer with expertise in Python, Rust, and JavaScript, and in data processing with Spark, Databricks, Hadoop, NumPy, and Pandas. I build web backends with Flask, REST APIs, SQLAlchemy, and GraphQL, and I am proficient with databases including Snowflake, Redshift, Postgres, and MongoDB. I have experience with AWS and Azure cloud services, and with Git and Jenkins for version control and CI/CD. My key contributions include architecting batch and streaming data pipelines, developing GraphQL resolvers, owning backend applications end to end, and improving the efficiency of data ingestion.
Architected a number of batch and streaming data pipelines from a variety of sources, using Delta Live Tables (DLT) and Auto Loader in AWS Databricks. Strong grasp of Kafka and its architecture. Developed GraphQL resolvers (AWS AppSync and Ariadne). Stepped up to own backend infrastructure involving Docker, Kubernetes, Argo, and kubectl for more than six months: created Argo DAGs, built GitLab CI/CD pipelines, and learned and implemented Docker-in-Docker services.
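For illustration, here is a minimal sketch of the kind of Auto Loader ingestion step such Databricks pipelines typically use. It is not taken from the actual projects: the bucket paths, schema/checkpoint locations, and the bronze.events table are hypothetical, and the Spark session is assumed to be the one a Databricks notebook or job provides.

```python
# Minimal Auto Loader sketch (PySpark on Databricks) -- illustrative only;
# paths and table names are hypothetical placeholders.
from pyspark.sql import SparkSession

# Databricks already provides `spark`; getOrCreate() keeps the sketch self-contained.
spark = SparkSession.builder.getOrCreate()

raw_events = (
    spark.readStream
    .format("cloudFiles")                      # Auto Loader source
    .option("cloudFiles.format", "json")       # incoming files are JSON
    .option("cloudFiles.schemaLocation", "s3://example-bucket/_schemas/events")
    .load("s3://example-bucket/raw/events/")
)

(
    raw_events.writeStream
    .format("delta")                           # land the stream as a Delta table
    .option("checkpointLocation", "s3://example-bucket/_checkpoints/events")
    .outputMode("append")
    .toTable("bronze.events")                  # hypothetical bronze-layer table
)
```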
Improved data ingestion efficiency by adopting a microservice architecture and asynchronous collection. Primarily designed and owned two distinct applications: (1) data pipelines in AWS Databricks, with Kafka and a front-end application as data sources, and (2) containerized Python backend applications built on Snowflake.
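As an illustration of the second application type, below is a minimal sketch of a containerized Python service querying Snowflake with snowflake-connector-python. The environment variable names, warehouse/database/schema, table, and query are assumptions for the example, not details from the original work.

```python
# Minimal sketch of a containerized Python Snowflake backend task.
# Connection details come from environment variables (typical in containers);
# the variable names, warehouse/database/schema, and query are hypothetical.
import os

import snowflake.connector


def fetch_recent_events(limit: int = 100):
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",   # hypothetical warehouse
        database="EVENTS_DB",       # hypothetical database
        schema="PUBLIC",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT event_id, event_type, created_at "
            "FROM events ORDER BY created_at DESC LIMIT %s",
            (limit,),
        )
        return cur.fetchall()
    finally:
        conn.close()


if __name__ == "__main__":
    for row in fetch_recent_events(10):
        print(row)
```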