Job Description
This is a remote position.
Summary:
Own our data stack end-to-end. Build and maintain reliable pipelines into BigQuery, enforce data quality / monitoring, and deliver clean models that power marketing, finance (unit economics), and ops dashboards. Full-time; contractor-to-hire acceptable.
Requirements:
1. Deep expertise in Google BigQuery — advanced query writing, optimization, and performance tuning.
2. Strong command of SQL for building complex analytical queries and data transformations.
3. Understanding of data warehousing architecture and ETL (Extract, Transform, Load) workflows.
4. Experience with Google Cloud Platform (GCP), especially services such as Cloud Storage, Cloud Functions, Dataflow, and Pub/Sub.
5. Proficiency in database administration (DBA) tasks.
6. Ability to set up and monitor data pipelines, handle errors, and ensure data integrity.
7. Experience in query cost optimization and managing resource utilization within BigQuery.
8. Familiarity with data governance, security, and compliance best practices.
Secondary Technical Skills:
1. Python for ETL and scripting.
2. Experience with data visualization tools such as Looker Studio, Power BI, or Tableau.
3. Understanding of API integration (REST, JSON, OAuth) for data exchange between platforms.
4. Familiarity with version control systems (e.g., Git) and CI/CD pipelines (nice to have).
5. Experience integrating AI services.
Analytical & Functional Skills:
1. Strong analytical mindset with the ability to interpret large datasets.
2. Excellent skills in data validation, cleaning, and transformation.
3. Capability to translate business needs into efficient data models and queries.
4. Experience troubleshooting data performance and pipeline issues.
Soft Skills:
1. Strong attention to detail and accuracy in data handling.
2. Excellent problem-solving and documentation abilities.
3. Effective communication and collaboration with technical and non-technical teams.
4. Ability to work independently with minimal supervision and manage multiple priorities.
How to apply:
Send a short note, your resume, a GitHub link, and (if available) a portfolio.
Mandatory screening questions (answer briefly):
1. Describe a BigQuery cost or performance issue you solved. Include the bottleneck and the fix.
2. You see a discrepancy between a platform’s dashboard metrics and your warehouse table. Walk through your triage steps.
3. What’s the most complex data pipeline you’ve built and what was it for?
Requirements
SQL, Python, Google BigQuery, Data Warehousing Architecture, ETL, Google Cloud Platform, GCP, Database Administration, DBA, Query Cost Optimization, Cloud Storage, Cloud Functions, Dataflow, Pub/Sub, Data Governance, Looker Studio, Power BI, Tableau, API Integration, Version Control Systems
Data Engineer • National Capital Region (NCR), 00, ph