Mercedes-Benz
SRE - Observability Expert
Bangalore ₹5-9 LPA Posted 1 Apr 2026
FULL TIME
Apache Spark
Kubernetes
IAM
Cloud Computing
Job Description
Key Responsibilities:
- Develop and maintain infrastructure-as-code using Terraform to deploy and manage Kubernetes (AKS) clusters and Databricks environments.
- Build, operate, and optimize data pipelines using tools like Azure Data Factory, AWS Data Pipeline, Apache Spark, and Databricks.
- Implement streaming and batch processing systems (Kafka, Apache Storm, Spark Streaming, Apache Flink, Kappa architecture).
- Design storage strategies considering data processing, accessibility, availability, and cloud cost optimization.
- Perform data transformation using Kafka Streams (KStreams), KSQL, and the Processor API.
- Manage data ingestion and distribution via connectors such as Event Hubs, Kafka topics, ADLS Gen2, and REST APIs.
- Deploy and maintain open-source data stack including Airflow, Druid, Kafka, OpenSearch, and Superset.
- Build high-performance APIs using FastAPI and manage automation with Python scripting.
- Handle managed cloud services, IAM, auto-scaling, high availability, elasticity, and networking.
- Implement federated access, multi-tenancy, and self-service tooling for teams.
- Manage data catalogs per topic/domain to align with services and use cases.
- Apply DevSecOps practices, including Day 2 operations monitoring with Datadog.
- Research and integrate emerging technologies to continuously evolve the data platform.
- Work effectively in Agile Scrum environments.
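To give a feel for the transformation work described above, here is a minimal sketch of a KStreams-style map/filter pipeline in plain Python. It is illustrative only: the event fields (`service`, `latency_ms`), the `parse_event`/`enrich` helpers, and the 500 ms severity threshold are assumptions for the example, not details from the role.

```python
# Illustrative sketch: a stream transformation in the spirit of Kafka Streams,
# written against an in-memory list instead of real Kafka topics.
import json


def parse_event(raw: str) -> dict:
    """Deserialize a raw JSON record value (hypothetical event shape)."""
    return json.loads(raw)


def enrich(event: dict) -> dict:
    """Map step: derive a severity field from an assumed latency metric."""
    event["severity"] = "high" if event.get("latency_ms", 0) > 500 else "normal"
    return event


def transform(stream):
    """Filter out malformed records, then map each survivor through enrich()."""
    for raw in stream:
        try:
            event = parse_event(raw)
        except json.JSONDecodeError:
            continue  # drop malformed records, as a stream filter would
        yield enrich(event)


raw_stream = [
    '{"service": "checkout", "latency_ms": 120}',
    "not-json",
    '{"service": "search", "latency_ms": 900}',
]
results = list(transform(raw_stream))
# The malformed record is dropped; the two valid events are enriched.
```

In a production pipeline the same map/filter logic would run inside a Kafka Streams topology (or KSQL query) consuming from and producing to topics, rather than over a Python list.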