Azure Databricks Data Engineer

Pradeepit Consulting Services
Bangalore · 5-12 LPA · Posted 22 Jul 2025
FULL TIME
Azure Databricks
PySpark
Python Programming
ETL Development
Data Warehousing

Job Description

Pradeepit Consulting Services is actively seeking an experienced Azure Databricks Data Engineer to join our client's dynamic global distribution team. This role demands a professional with a strong background in data warehousing/engineering and expertise in Azure Databricks, PySpark, and SQL. You will be instrumental in designing, developing, and optimizing robust, scalable big data solutions, particularly for cloud migration projects.

Key Responsibilities

  • Big Data Solution Design & Implementation: Design and implement Hadoop big data solutions in alignment with business needs and project schedules.
  • ETL Development & Optimization: Code, test, and document new or modified data systems to create robust and scalable applications for data analytics. This includes developing and maintaining ETL pipelines using Azure Databricks, PySpark, and Azure Data Factory (ADF).
  • Data Consistency & Collaboration: Work with other Big Data developers to ensure all data solutions are consistent. Partner with the business community to understand requirements and deliver user training.
  • Technology Research & Innovation: Perform technology and product research to better define requirements, resolve important issues, and improve the overall capability of the analytics technology stack. Evaluate and provide feedback on future technologies and new releases/upgrades.
  • Analytical Solutions Support: Support Big Data and batch/real-time analytical solutions leveraging transformational technologies.
  • Project Leadership: Work on multiple projects as a technical team member, or lead user-requirement analysis, application design and development, testing, build-automation tooling, and research/incubation of new technologies and frameworks.
  • Cloud & Methodologies: Build solutions with public cloud providers such as Azure, leveraging experience with agile or other rapid application development methodologies and tools like Bitbucket, Jira, and Confluence.

Skills

  • Hands-on experience with the Databricks stack.
  • Expertise in Data Engineering technologies (e.g., Spark, Hadoop, Kafka).
  • Proficiency in Streaming technologies.
  • Hands-on experience in Python and SQL.
  • Expertise in implementing Data Warehousing solutions.
  • Expertise in any ETL tool (e.g., SSIS, Redwood).
  • Good understanding of submitting jobs via Workflows, the API, and the CLI.
  • Strong experience in Azure Databricks, PySpark, and SQL (Mandatory).
  • Hands-on experience in Azure Data Factory (ADF) (Mandatory).
  • Project exposure to Cloud migration (Mandatory).
  • Strong analytical and problem-solving skills.
  • Ability to work independently and take ownership of tasks.

Qualifications

  • Bachelor's degree in Information Technology, Computer Science, or a related field.
