Position: Data Engineer (Databricks & AWS)

Company Overview

Citco is a global leader in financial services, delivering innovative solutions to some of the world's largest institutional clients.
We harness the power of data to drive operational efficiency and informed decision-making.
We are looking for a Data Engineer with strong Databricks expertise and AWS experience to contribute to mission-critical data initiatives.

Role Summary

As a Data Engineer, you will develop and maintain end-to-end data solutions on Databricks (Spark, Delta Lake, MLflow, etc.)
while working with core AWS services (S3, Glue, Lambda, etc.).
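To make the stack concrete, here is a minimal PySpark sketch of the kind of batch ingestion step described above. The bucket paths, table location, and column names are hypothetical placeholders, not Citco systems:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # On Databricks a SparkSession named `spark` is provided; getOrCreate()
    # returns it (or builds a local session outside Databricks).
    spark = SparkSession.builder.appName("trade-ingest").getOrCreate()

    # Read raw CSV files landed in S3 (hypothetical bucket and prefix).
    raw = (spark.read
           .option("header", "true")
           .option("inferSchema", "true")
           .csv("s3://example-raw-bucket/trades/"))

    # Light cleanup: normalize a column name and stamp the load time.
    cleaned = (raw
               .withColumnRenamed("TradeID", "trade_id")
               .withColumn("loaded_at", F.current_timestamp()))

    # Write as a Delta table; Delta Lake's transaction log provides the
    # ACID guarantees and versioning referenced in this posting.
    (cleaned.write
     .format("delta")
     .mode("overwrite")
     .save("s3://example-curated-bucket/delta/trades"))

On Databricks, a table written this way can then be queried from Databricks SQL or registered in the metastore.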
You will work within a technical team, implementing best practices in performance, security, and scalability.
This role requires a solid understanding of Databricks and experience with cloud-based data platforms.

Key Responsibilities

1. Databricks Platform & Development
- Implement Databricks Lakehouse solutions using Delta Lake for ACID transactions and data versioning
- Utilize Databricks SQL Analytics for querying and report generation
- Support cluster management and Spark job optimization
- Develop Structured Streaming pipelines for data ingestion and processing (see the first sketch under Illustrative Examples below)
- Use Databricks Repos, notebooks, and job scheduling for development workflows

2. AWS Cloud Integration
- Work with the Databricks and AWS S3 integration for data lake storage
- Build ETL/ELT pipelines using the AWS Glue catalog, AWS Lambda, and AWS Step Functions (see the second sketch under Illustrative Examples below)
- Configure networking settings for secure data access
- Support infrastructure deployment using AWS CloudFormation or Terraform

3. Data Pipeline & Workflow Development
- Create scalable ETL frameworks using Spark (Python/Scala)
- Participate in workflow orchestration and CI/CD implementation
- Develop Delta Live Tables pipelines for data ingestion and transformations
- Support MLflow integration for data lineage and reproducibility

4. Performance & Optimization
- Implement Spark job optimizations such as caching, partitioning, and join strategies (see the third sketch under Illustrative Examples below)
- Support cluster configuration for optimal performance
- Optimize data processing for large-scale datasets

5. Security & Governance
- Apply Unity Catalog features for governance and access control
- Follow compliance requirements and security policies
- Implement IAM best practices

6. Team Collaboration
- Participate in code reviews and knowledge-sharing sessions
- Work within an Agile/Scrum development framework
- Collaborate with team members and stakeholders

7. Monitoring & Maintenance
- Help implement monitoring solutions for pipeline performance
- Support alert system setup and maintenance
- Ensure data quality and reliability standards

Qualifications

1. Educational Background
- Bachelor's degree in Computer Science, Data Science, Engineering, or equivalent experience

2. Technical Experience
- Databricks Experience: 2+ years of hands-on Databricks (Spark) experience
- AWS Knowledge: Experience with AWS S3, Glue, Lambda, and basic security practices
- Programming Skills: Strong proficiency in Python (PySpark) and SQL
- Data Warehousing: Understanding of RDBMS and data modeling concepts
- Infrastructure: Familiarity with infrastructure-as-code concepts
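Illustrative Examples

The sketches below illustrate the kind of work the responsibilities above involve. They are minimal, hedged examples: all bucket names, paths, table names, and identifiers are hypothetical. First, a Structured Streaming ingestion pipeline using Databricks Auto Loader (the cloudFiles source is Databricks-specific; on open-source Spark you would read with format("json") and an explicit schema instead):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("event-stream-ingest").getOrCreate()

    # Incrementally pick up JSON files as they arrive in S3 via Auto Loader.
    events = (spark.readStream
              .format("cloudFiles")
              .option("cloudFiles.format", "json")
              .option("cloudFiles.schemaLocation",
                      "s3://example-bucket/_schemas/events")
              .load("s3://example-raw-bucket/events/"))

    # Append to a Delta table; the checkpoint location gives exactly-once
    # processing across restarts.
    query = (events.writeStream
             .format("delta")
             .option("checkpointLocation",
                     "s3://example-curated-bucket/_checkpoints/events")
             .outputMode("append")
             .trigger(availableNow=True)  # drain available input, then stop
             .start("s3://example-curated-bucket/delta/events"))

    query.awaitTermination()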
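Second, one common Databricks-and-AWS integration pattern: an AWS Lambda function that triggers a Databricks job when a file lands in S3. The environment variable names and job ID are hypothetical; the endpoint is the standard Databricks Jobs REST API (run-now):

    import json
    import os
    import urllib.request

    def lambda_handler(event, context):
        """Triggered by an S3 event notification; starts a Databricks job.

        DATABRICKS_HOST, DATABRICKS_TOKEN, and DATABRICKS_JOB_ID are
        hypothetical environment variables; in practice the token would
        come from AWS Secrets Manager rather than plain configuration.
        """
        host = os.environ["DATABRICKS_HOST"]   # e.g. https://<workspace>.cloud.databricks.com
        token = os.environ["DATABRICKS_TOKEN"]
        job_id = int(os.environ["DATABRICKS_JOB_ID"])

        body = json.dumps({"job_id": job_id}).encode("utf-8")
        req = urllib.request.Request(
            url=f"{host}/api/2.1/jobs/run-now",   # Databricks Jobs API
            data=body,
            headers={"Authorization": f"Bearer {token}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return {"statusCode": resp.status,
                    "body": resp.read().decode("utf-8")}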
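Third, a sketch of routine Spark optimizations: broadcasting a small dimension table so the large side of a join is never shuffled, caching a reused DataFrame, and partitioning output on a common filter column. Table paths and column names are again placeholders:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("optimization-demo").getOrCreate()

    trades = spark.read.format("delta").load(
        "s3://example-curated-bucket/delta/trades")
    accounts = spark.read.format("delta").load(
        "s3://example-curated-bucket/delta/accounts")

    # Broadcast join: copy the small dimension table to every executor so
    # the large fact table avoids a shuffle.
    enriched = trades.join(F.broadcast(accounts),
                           on="account_id", how="left")

    # Cache a DataFrame that several downstream steps reuse.
    enriched.cache()

    daily = (enriched.groupBy("trade_date")
             .agg(F.sum("notional").alias("total_notional")))
    daily.show()

    # Partition the output on a column that common queries filter by.
    (enriched.write
     .format("delta")
     .mode("overwrite")
     .partitionBy("trade_date")
     .save("s3://example-curated-bucket/delta/trades_enriched"))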
Location: Philippines