TalentAQ

Data Engineer (AWS)

Data Engineering · Full Time · 6+ years · Hyderabad, Telangana

Required Skills

AWS
S3
Glue
Lambda
EMR
Redshift
Python
PySpark
SQL
PL/SQL
Data Modeling
ETL
Agile
Scrum

Job Description

We are looking for an experienced Data Engineer (AWS) to join our team. The ideal candidate will have strong expertise in cloud-based data engineering, with a focus on AWS, Python, and PySpark, along with a solid understanding of SQL and PL/SQL.

Key Responsibilities

  • Design, build, and maintain scalable data pipelines on AWS.
  • Work with structured and unstructured datasets, ensuring data quality and consistency.
  • Develop and optimize ETL processes using Python and PySpark (see the sketch after this list).
  • Collaborate with data scientists, analysts, and business teams to understand requirements and deliver solutions.
  • Implement best practices for data storage, transformation, and integration in cloud environments.
  • Ensure data security, compliance, and governance across systems.
  • Monitor, troubleshoot, and optimize data workflows for performance and efficiency.
  • Stay up to date with emerging cloud and big data technologies to continuously improve solutions.
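
By way of illustration, the ETL responsibility above might look like the following minimal PySpark sketch: read raw events from S3, apply basic quality rules, and write partitioned Parquet back out. Every bucket name, path, and column name here is a hypothetical placeholder, not part of the role description.

```python
# Minimal PySpark ETL sketch. All bucket names, paths, and column
# names below are hypothetical placeholders, not a real pipeline.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw JSON events from S3 (hypothetical bucket/prefix).
raw = spark.read.json("s3://example-raw-bucket/orders/2024/")

# Transform: deduplicate, enforce a basic quality rule, derive a partition key.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)                       # basic quality rule
       .withColumn("order_date", F.to_date("created_at"))  # partition key
)

# Load: write partitioned Parquet to the curated zone (hypothetical path).
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))

spark.stop()
```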

Skills & Qualifications

  • Strong expertise in AWS services such as S3, Glue, Lambda, EMR, and Redshift (see the Lambda-to-Glue sketch after this list).
  • Proficiency in Python for data engineering tasks.
  • Hands-on experience with PySpark for large-scale data processing.
  • Working knowledge of SQL and PL/SQL.
  • Experience in data modeling, ETL design, and performance optimization.
  • Familiarity with Agile/Scrum methodologies.
  • Excellent problem-solving and communication skills.
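
As a second illustration of the AWS services listed above, serverless pipelines commonly pair Lambda with Glue: an S3 event notification invokes a Lambda function, which starts a Glue job against the newly arrived object. The sketch below is a minimal example of that pattern; the Glue job name and argument keys are hypothetical.

```python
# Hypothetical Lambda handler that starts a Glue job when a new
# object lands in S3. The job name and argument keys are placeholders.
import boto3
from urllib.parse import unquote_plus

glue = boto3.client("glue")

def handler(event, context):
    # S3 put-event notifications carry the bucket and key under "Records";
    # object keys arrive URL-encoded, so decode before use.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = unquote_plus(record["object"]["key"])

    # Start the (hypothetical) Glue job, passing the new object's
    # location through custom job arguments.
    response = glue.start_job_run(
        JobName="orders-etl-job",  # hypothetical job name
        Arguments={"--input_path": f"s3://{bucket}/{key}"},
    )
    return {"JobRunId": response["JobRunId"]}
```

In a real deployment the function would be subscribed to S3 put events on the raw bucket, so each new file automatically triggers the downstream Glue ETL run.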
