Role Overview:
We are seeking a highly skilled Big Data Software Engineer to design, build, and operate scalable, secure, cloud-native data platforms at Sightview Software LLC. In this role, you will develop high-quality production code, design end-to-end data pipelines, and influence system architecture, data frameworks, and analytics capabilities across our products. You will serve as a subject-matter expert in big-data technologies, ETL/ELT, analytics dashboards, and AWS data services, partnering closely with engineering, operations, and business stakeholders to deliver reliable, analytics-ready data solutions that support informed decision-making and product innovation.
Job responsibilities
• Regularly provides technical guidance and direction to support the business and its technical teams, contractors, and vendors
• Develops secure and high-quality production code and reviews code written by others
• Drives decisions that influence the product design, application functionality, and technical operations and processes
• Serves as a function-wide subject matter expert in big data, ETL, analytics dashboards, and AWS data tooling
• Influences peers and project decision-makers to consider the use and application of leading-edge technologies
• Adds to the team culture of diversity, opportunity, inclusion, and respect
• Designs and develops end-to-end data pipelines using Python (PySpark), Scala, and AWS services
• Utilizes programming languages such as TypeScript; works with SQL and NoSQL databases; and leverages container orchestration services, including Kubernetes or ECS, along with a variety of AWS tools and services
• Defines and implements backup, recovery, and archiving strategies in partnership with Operations team members
• Generates advanced data models, versioned data sets, and data analytics dashboards in QuickSight
• Acts as Technical Lead for additional team members, including mentorship, task allocation, and pair programming
Required qualifications, capabilities, and skills
• Formal training or certification in software engineering concepts and 3+ years of applied experience
• Hands-on practical experience delivering system design, application development, testing, and operational stability
• Advanced proficiency in one or more programming languages: Python (PySpark), Scala, or TypeScript
• Hands-on practical experience developing Spark-based frameworks for end-to-end ETL, ELT, and reporting solutions using key components such as Spark SQL and Spark Streaming
• Experience with AWS cloud technologies, including S3, Redshift, and QuickSight
• Experience with relational and NoSQL databases
• Cloud implementation experience with AWS, including:
• AWS Data Services: Proficiency in Lake Formation, Glue ETL or EMR, S3, Glue Catalog, Athena, and Airflow or Lambda with Step Functions and EventBridge
• Data De/Serialization: Expertise in Parquet and Iceberg
• AWS Data Security: Strong understanding of security concepts such as Lake Formation, IAM, service roles, encryption, KMS, Secrets Manager, and Redshift data sharing
• Advanced knowledge of software applications and technical processes with considerable in-depth knowledge in one or more technical disciplines (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
• Ability to tackle design and functionality problems independently with little to no oversight
• Practical cloud-native experience
• Experience in Computer Science, Computer Engineering, Mathematics, or a related technical field
Preferred qualifications, capabilities, and skills
• Experience building data lakes, data platforms, and data frameworks, and designing and building Data-as-a-Service APIs
• In-depth knowledge of the US healthcare industry and its IT systems
Soft Skills and Cultural Fit
• Proactive problem-solving skills with a focus on delivering results
• Ability to navigate ambiguity and make data-driven decisions
• Collaborative mindset with a commitment to fostering a positive team culture