Personal details

Vinay B. - Remote

Timezone: Kolkata (UTC+5:30)

Summary

Backend and Data Engineer who loves working with data and making systems scalable and reliable. Currently scaling the workspace provisioning service that powers advanced assessments through full-fledged in-browser IDEs, and integrating cloud engineering assessments and a collaborative cloud IDE built on open-source projects. Built ETL pipelines and Spark jobs to leverage data for business use cases. Well versed in Python, PySpark, Hadoop, AWS, the ELK stack, Ruby on Rails, DevOps, and building scalable systems.

Work Experience

Senior Software Engineer
HackerRank | Apr 2021 - Present
Skills: Redis, Google Cloud Platform, VMs, Microservices, Traefik, Go (Golang), Elastic Stack
- Leading the efforts on the editors architecture (cloud IDE), collaboration solutions for data science, and scaling the cloud assessments platform.
- Reduced cloud infrastructure cost by ~55% for the Developer Skills Platform through cluster changes.
- Document and plan tasks and action items with stakeholders (product and engineering teams).
- Set up monitoring and dashboards for usage and analytics for the Skills platform and cloud assessments.
- Improved the scale of the cloud engineering role by 500%.
Software Development Engineer II
HackerRank | Jan 2020 - Mar 2021
Skills: Amazon S3, Docker, DynamoDB, Apache Spark, Spark Streaming, EMR, Athena, Redash, Apache Hudi, AWS (Amazon Web Services)
- Documented, designed, and implemented the ingestion pipeline for data generated from the various data sources of HackerRank for Enterprise and HackerRank for Community.
- Wrote Spark jobs for data-processing pipelines to support use cases like Skill Rating, Benchmarking, and Developer Skill Profile.
- The system ingests data from more than 60 million submissions every year and provides insights in near real time, serving as the underlying module for all skills-based information presented across the products.
- Set up logging, alerts, and dashboards to monitor the data being ingested and processed.
- Optimised Spark jobs, reducing computation time for candidate packet data from 24 hours to ~2 minutes.
- Set up unit tests and Travis CI for the computational Spark jobs using pytest.
- Integrated Eclipse Theia with a CDN and built extensions to customise it; reduced IDE load time by ~50%.