Tata Consultancy Services Limited
AWS Data Engineer - IMMEDIATE JOINERS ONLY
Bangalore | Posted 12 Apr 2024
FULL TIME
OLAP
Glue
Hive
ETL
Apache Spark
Job Description
TCS Hiring – IMMEDIATE JOINERS ONLY! Virtual Drive – AWS Data Engineer
Role: AWS Data Engineer
Required Technical Skills: AWS, PySpark, Redshift, Glue, ETL
Experience Range: 5 to 8 Years
Location: BANGALORE / CHENNAI / HYDERABAD / PUNE / DELHI / GURGAON / MUMBAI / KOLKATA / INDORE
Mode: Virtual (Online/Teams)
Virtual Drive Date: 20th April 2024 (Saturday)
Must-Have:
• Strong hands-on experience in Python programming and PySpark
• Experience using AWS services (Redshift, Glue, EMR, S3, and Lambda)
• Experience working with Apache Spark and the Hadoop ecosystem
• Experience writing and optimizing SQL for data manipulation
• Good exposure to scheduling tools; Airflow preferred
• Data warehouse experience with AWS Redshift or Hive (mandatory)
• Experience in implementing security measures for data protection
• Expertise in building and testing complex data pipelines for ETL processes (batch and near real time)
• Ability to produce readable documentation for all components developed
• Knowledge of database technologies for OLTP and OLAP workloads
Good-To-Have:
• Good understanding of data warehouses and data lakes
• Familiarity with ETL tools such as Netezza or Informatica
• Experience working with NoSQL databases like DynamoDB or MongoDB
• Exposure to additional AWS services (Step Functions, Athena)
• Exposure to data modelling
• Familiarity with Investment Banking domain
Please keep the following handy for the interview:
• Government ID proof (Original)
Regards,
Venkatesh K
TCS – TAG HR Team