
Data Engineer – PySpark / ETL | Pune | JCI

Mindteck (India) Limited
Pune | 4–14 LPA | Posted 12 Mar 2026
FULL TIME
Skills: ETL (Extract, Transform, Load), Spark, Scala, Kafka

Job Description

Client: JCI

Location: Pune

Experience: 5 – 8 Years

Budget: Up to 14 LPA

Interview Mode: Face-to-Face Interview (Mandatory)


Key Responsibilities

  • Design, develop, and test software solutions.
  • Work on data processing and transformation using ETL frameworks.
  • Develop and manage data pipelines for data manipulation and integration.
  • Work with data warehousing technologies and big data ecosystems.


Required Skills

Experience

  • 5 to 8 years of relevant software design, development, and testing experience.
  • Product development experience preferred.

ETL Tools

  • Experience with ETL (Extract, Transform, Load) tools and frameworks such as Spark.
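As a rough illustration of the extract–transform–load pattern these frameworks implement at scale, here is a minimal stdlib-only Python sketch (no Spark required; file paths, field names, and the cleaning rules are hypothetical):

```python
import csv

def extract(path):
    """Extract: read raw rows from a CSV source."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows):
    """Transform: normalize names, cast amounts, drop non-positive records."""
    cleaned = []
    for row in rows:
        amount = float(row["amount"])
        if amount > 0:  # hypothetical rule: discard refunds/invalid amounts
            cleaned.append({"name": row["name"].strip().title(),
                            "amount": amount})
    return cleaned

def load(rows, path):
    """Load: write cleaned rows to a target CSV."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["name", "amount"])
        writer.writeheader()
        writer.writerows(rows)
```

In Spark, the same three stages map onto `spark.read`, DataFrame transformations, and `DataFrame.write`, with the work distributed across a cluster.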

Programming

  • Proficiency in PySpark, Python, Scala, Java, and SQL for data manipulation.
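SQL-based data manipulation of the kind this role calls for can be sketched with the stdlib `sqlite3` module (the `orders` table and its columns are made up for illustration; in PySpark the same query would run via `spark.sql` over a registered view):

```python
import sqlite3

# In-memory database standing in for a warehouse table (hypothetical schema).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?)",
    [("east", 100.0), ("east", 50.0), ("west", 75.0)],
)

def revenue_by_region(conn):
    """Aggregate revenue per region -- a typical transformation step."""
    cur = conn.execute(
        "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
    )
    return cur.fetchall()
```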

Database Technologies

  • Familiarity with PostgreSQL and Cloud SQL.

Data Warehouse

  • Understanding of data warehouse concepts.
  • Experience with technologies such as Snowflake and Hive.

Streaming

  • Familiarity with Kafka and Event Hub.

Big Data

  • Must understand the Hadoop ecosystem.

Good to Have

  • Understanding of Azure Cloud
  • Spring Framework
  • Microsoft Fabric

Important

  • Candidate must be available for Face-to-Face interview.