Trigyn has a contractual opportunity for a Hadoop Data Engineer. This resource will be working remotely.
Job Description:
We are looking for an experienced Hadoop Data Engineer with 4-5 years of hands-on Big Data experience to develop and maintain scalable data processing solutions on the Hadoop platform.
Key Responsibilities:
- Design, build, and maintain robust data pipelines and ETL workflows using Hadoop technologies.
- Develop and optimize large-scale data processing jobs using Apache Spark.
- Manage and process structured data in HDFS and Hive.
- Ensure data quality, reliability, and performance tuning across data pipelines.
- Work closely with analytics, reporting, and downstream application teams.
- Support batch and near real-time ingestion using Sqoop or Kafka.