Description:
What We're Looking For

Essential:
- 5+ years of data engineering experience with a focus on AWS
- Strong proficiency in Python and SQL
- Hands-on experience with big data technologies (Hadoop, Spark)
- Knowledge of AWS data services and best practices
- Experience with Infrastructure as Code (Terraform or CloudFormation)
- Strong software engineering fundamentals
- Bachelor's or Master's degree in Computer Science, Engineering, IT, or related field

Nice to Have:
- AWS or Databricks certifications (Data Engineer, Solutions Architect, Machine Learning)
- Experience with lakehouse technologies (Hudi, Iceberg, Delta Lake)
- Multi-cloud experience (AWS + Azure)
- Docker/containers and CI/CD pipeline experience
- Open-source project contributions
What You'll Be Doing
As an AWS Data Engineer, you'll be responsible for building and operating sophisticated data pipelines primarily using AWS Cloud services. The role encompasses everything from data collection and storage to processing and analysis, with a strong emphasis on:
- Designing and implementing robust ETL processes
- Working with big data technologies like Spark and Hadoop
- Building real-time and batch data pipelines
- Leveraging AWS services including Kinesis, Glue, EMR, Redshift, and Athena
- Ensuring data security and implementing best practices
- Orchestrating complex data workflows
Why Join This Team?
- Engineering-First Culture: A place where technical excellence is valued and your ideas matter
- Competitive Package: Attractive salary, bonus, and incentive structure
- Growth Opportunities:
Posted 09 Oct 2025 on gumtree.co.za