Senior Data Operations Engineer

Medtronic
Pune · 4-7 LPA · Posted 24 Jun 2025
Full Time
GitHub
DevOps
AWS Services
Scripting Languages

Job Description

As a Senior Data Operations Engineer at Medtronic's Global Diabetes Capability Center in Pune, you will be instrumental in transforming diabetes management by ensuring the robust, scalable, and secure operation of our data pipelines. You'll be responsible for developing and maintaining automated data workflows, optimizing ETL/ELT processes, and implementing CI/CD pipelines across various cloud platforms. Your expertise will directly contribute to our mission of reducing the burden of living with diabetes through innovative solutions and technologies.

A Day in the Life

You will ensure operational excellence and data integrity by:

  • Managing large data projects or processes that span collaborative teams both within and beyond Digital Technology.
  • Developing and maintaining robust, scalable data pipelines using platforms and tools such as GitHub, AWS, Databricks, and Azure.
  • Developing and optimizing ETL/ELT processes to ensure efficient data flow between various systems and platforms.
  • Implementing CI/CD pipelines for data workflows, ensuring seamless integration and deployment of data solutions.
  • Automating data quality checks, monitoring, and alerting systems to proactively maintain data integrity and reliability.
  • Collaborating with Data Scientists, Data Engineers, and other stakeholders to understand data requirements and implement appropriate operational solutions.
  • Optimizing data storage and processing for cost-effectiveness and performance across various Cloud platforms.
  • Implementing and maintaining data security and compliance measures across all platforms.
  • Implementing and managing automated workflows using GitHub Actions for code integration, testing, and deployment of data pipelines and related tools.
  • Designing and maintaining GitLab CI/CD pipelines to automate build, test, and deployment processes for data engineering projects, ensuring consistency across environments.
  • Operating autonomously to define, describe, diagram, and document the role and interaction of high-level technological and human components that combine to provide cost-effective and innovative data solutions.
  • Promoting, guiding, and governing good architectural practice through well-defined technology patterns and architecture mentorship.

Required Knowledge and Experience

  • 6+ years of experience in DevOps or DataOps roles, with a strong focus on data pipeline automation.
  • Strong proficiency in at least one scripting language (e.g., Python, Bash) and one infrastructure-as-code tool (e.g., Terraform, CloudFormation).
  • Extensive experience with AWS Services such as S3, RDS, EC2, Lambda, SNS, SQS, Glue, Redshift, Kinesis, MSK, and CloudWatch.
  • Experience with container orchestration platforms (e.g., Kubernetes, ECS) and CI/CD tools (e.g., GitHub, GitHub Actions, GitLab, GitLab CI).
  • Several years of experience working with Databricks, including Delta Lake, Spark, and MLflow.
  • Familiarity with data governance and compliance requirements (e.g., GDPR).
  • Excellent problem-solving skills and the ability to optimize complex data workflows.
  • Experience with real-time data operation technologies (e.g., Kafka, Kinesis).
  • Knowledge of Machine Learning Operations (MLOps) and experience integrating ML models into data pipelines.
  • Familiarity with data visualization tools (e.g., Power BI) and their integration with data platforms.

Nice to Haves

  • Certifications in relevant cloud platforms (AWS Certified DevOps Engineer, Azure DevOps Engineer, Databricks Certified Engineer).
  • Experience with graph databases and data lineage tools.
  • Contribution to open-source projects or data engineering communities.
