Azure Databricks Data Engineer
We are looking for an experienced Azure Databricks Data Engineer with deep expertise in building scalable data pipelines, Delta Lake architectures, and cloud-based ingestion frameworks. If you have strong hands‑on experience with PySpark, Databricks, and modern data engineering patterns, we want to meet you!
What You’ll Do
- Design & develop scalable ETL pipelines using Apache Spark (PySpark) on Databricks
- Build modular, high‑performing data transformation logic in Python & SQL
- Architect Delta Lake (Bronze / Silver / Gold) layers with full lineage & audit controls
- Integrate data from REST APIs
- Implement cloud data security using Unity Catalog, ACLs, encryption, credential passthrough
- Orchestrate workflows using Databricks Workflows, Azure Data Factory, or Apache Airflow
Qualifications
- 8+ years of strong hands‑on experience in Databricks
- Expertise in PySpark, SQL, Data Lakehouse concepts & distributed computing
- Deep experience in building secure, scalable pipelines in Azure
- Strong understanding of Delta Lake architecture & data quality layers
- Experience with workflow orchestration (ADF, Airflow, Databricks Workflows)
- Strong communication, debugging, and problem‑solving skills

If you’re passionate about building enterprise‑grade data platforms and want to join a fast‑paced engineering environment, let’s connect!
Apply now or DM me for more details!