Location: BGC (Hybrid – 3x onsite per week)
Schedule: Midshift
Job Responsibilities:
- Design, build, and maintain data pipelines and ETL processes using Azure Synapse, Azure Data Factory, and Databricks.
- Develop scalable data solutions aligned with Medallion architecture, Delta Lake, and Data Lakehouse principles.
- Optimize data storage, processing, and retrieval for performance and cost efficiency.
- Collaborate with data analysts, scientists, and business teams to deliver high-quality data solutions.
- Ensure data quality, governance, and security across all processes.
- Implement CI/CD pipelines and monitor deployments using Azure DevOps.
- Write efficient code in Python and SQL for data transformation and integration.
- Support system troubleshooting, performance tuning, and documentation.
Qualifications:
- Bachelor's degree in Computer Science, IT, Engineering, or a related field.
- 5+ years of experience as a Data Engineer (mid to senior level).
- Strong hands-on expertise with:
  - Azure Synapse, Azure Data Factory, and Databricks
  - Azure DevOps for CI/CD and version control
  - Python and SQL for data engineering tasks
- Solid understanding of Medallion architecture, Delta Lake, and Data Lakehouse concepts.
- Proven experience building scalable and reliable data pipelines in the cloud.
- Strong problem-solving skills and ability to work in a hybrid setup.