
Big Data Developer

Acme Services
Noida · 4-8 LPA · Posted 18 Jul 2025
FULL TIME
Hive
Sqoop
Spark
Big Data Hadoop
Shell Scripting

Job Description

We are actively seeking a highly skilled Big Data Developer with 3-7 years of strong Hadoop/Big Data experience and expertise in Enterprise Data Warehousing to join our client's team through Acme Services. This pivotal role requires extensive hands-on experience with big data platforms, including Hadoop, Spark, and Hive. The ideal candidate will have sound knowledge of Hive/HQL and be proficient in writing complex Hive queries and jobs. Experience with data loading tools such as Sqoop is essential, along with a good understanding of UNIX/Linux and shell scripting. Familiarity with a programming language such as Java or Python is a valuable asset, as is experience designing dimensions and facts for Data Warehouse (DWH) setup and architecture.

Key Responsibilities

  • Big Data Platform Development: Leverage extensive experience on big data platforms, including Hadoop, Spark, and Hive, to design, develop, and maintain robust data solutions.
  • Spark & Hive Expertise: Utilize hands-on experience with Spark for data processing and analytics. Apply sound knowledge of Hive/HQL and write complex Hive queries/jobs for data extraction and transformation.
  • Data Loading & Ingestion: Employ data loading tools like Sqoop for efficient data transfer between relational databases and Hadoop.
  • Data Warehousing Design (Desirable): Apply experience in designing dimensions and facts for DWH setup and architecture, ensuring optimal data models for enterprise reporting and analytics.
  • Scripting & Automation: Demonstrate a good understanding of UNIX/LINUX and Shell Scripting to automate processes and manage big data environments.
  • Programming (Good to Have): Apply knowledge of a programming language such as Java or Python for developing big data applications and scripts.
  • Problem Solving: Analyze complex data requirements, troubleshoot issues, and optimize big data processes for performance and scalability.
  • Collaboration: Work effectively within a team, collaborating with data architects, analysts, and other developers to deliver comprehensive data solutions.

Skills

  • Extensive experience on big data platforms including Hadoop, Spark, Hive.
  • Hands-on experience with Spark.
  • Sound knowledge of Hive/HQL.
  • Hands-on experience writing complex Hive queries/jobs.
  • Experience with data loading tools such as Sqoop.
  • Good understanding of UNIX/Linux and shell scripting.
  • Strong analytical and problem-solving abilities.

Qualifications

  • 3-7 years of strong Hadoop/Big Data experience, including Enterprise Data Warehousing experience.
  • Experience in designing dimensions and facts for DWH setup and architecture (desirable).
  • Experience with a programming language (e.g., Java/Python) is a plus.
