
Senior AWS Data Engineer

Sightspectrum
Bangalore · 4-7 LPA · Posted 17 Jun 2025
Full Time
PySpark
SQL
Shell Scripting
AWS
Python

Job Description

Must-Have Qualifications:

  • AWS Expertise: Strong hands-on experience with AWS data services including Glue, Redshift, Athena, S3, Lake Formation, Kinesis, Lambda, Step Functions, EMR, and CloudWatch.
  • ETL/ELT Engineering: Deep proficiency in designing robust ETL/ELT pipelines with AWS Glue (PySpark/Scala), Python, dbt, or other automation frameworks.
  • Data Modeling: Advanced knowledge of dimensional (Star/Snowflake) and normalized data modeling, optimized for Redshift and S3-based lakehouses.
  • Programming Skills: Proficient in Python, SQL, and PySpark, with automation and scripting skills for data workflows.
  • Architecture Leadership: Demonstrated experience leading large-scale AWS data engineering projects across teams and domains.
  • Pre-sales & Consulting: Proven experience working with clients, responding to technical RFPs, and designing cloud-native data solutions.
  • Advanced PySpark Expertise: Deep hands-on experience writing optimized PySpark code for distributed data processing, including transformation pipelines using DataFrames, RDDs, and Spark SQL, with a strong grasp of lazy evaluation, the Catalyst optimizer, and the Tungsten execution engine.
  • Performance Tuning & Partitioning: Proven ability to debug and optimize Spark jobs through custom partitioning strategies, broadcast joins, caching, and checkpointing, with proficiency in tuning executor memory and shuffle configurations and in using the Spark UI for performance diagnostics on large-scale (multi-terabyte) workloads.
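To make the "custom partitioning strategies" bullet concrete for candidates preparing for the interview, here is a minimal, framework-free Python sketch of key salting, the same idea behind salted joins and custom partitioners used to fix skew in Spark. All names, the partition count, and the record mix are illustrative assumptions, not part of the job description.

```python
import hashlib
from collections import Counter

def partition_for(key, num_partitions, salt=None):
    """Deterministically map a key (optionally salted) to a partition id,
    analogous to how a hash partitioner assigns rows to Spark partitions."""
    salted = f"{key}#{salt}" if salt is not None else key
    digest = hashlib.md5(salted.encode()).hexdigest()
    return int(digest, 16) % num_partitions

# A skewed workload: one "hot" key dominates, as in a skewed join/groupBy.
records = ["hot_key"] * 900 + [f"key_{i}" for i in range(100)]

# Naive hash partitioning: every copy of the hot key lands in one partition.
naive = Counter(partition_for(k, 8) for k in records)

# Salted partitioning: spread the hot key across 8 salt values so the
# work is distributed instead of overloading a single executor task.
salted = Counter(
    partition_for(k, 8, salt=i % 8 if k == "hot_key" else None)
    for i, k in enumerate(records)
)

print("naive max partition size:", max(naive.values()))
print("salted max partition size:", max(salted.values()))
```

In Spark itself the same effect is achieved by appending a salt column before a join or aggregation (and replicating the small side accordingly), or by supplying a custom partitioner on RDDs; for small dimension tables, a broadcast join avoids the shuffle entirely.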
