Job Description
This is a remote position.
Summary
We are looking for a highly skilled Data Engineer to design, build, and maintain robust data pipelines and warehouse solutions. The ideal candidate has strong experience in ETL development, data modeling (star schema), and modern data platforms such as Microsoft Fabric or Databricks.
Key Responsibilities:
Design and implement ETL pipelines for large-scale data processing using Python and PySpark.
Develop and maintain data models and schemas optimized for analytics and reporting.
Collaborate with cross-functional teams to define data requirements and integration strategies.
Optimize data performance, reliability, and scalability in Fabric or Databricks environments.
Participate in data modernization or migration projects, ensuring a seamless transition and system integrity.
Qualifications:
4–5 years of experience as a Data Engineer or in a similar role.
Proven expertise in Python, PySpark, and Microsoft Fabric or Databricks.
Strong understanding of ETL processes, data warehousing, and star schema design.
Experience with data transformation, integration, and performance optimization.
Background in data modernization or migration initiatives is preferred.
Top 3 Non-Negotiables:
Python
PySpark
Microsoft Fabric or Databricks
Data Engineer • Remote, Philippines