Description:
Key Responsibilities:

Data Engineering & Migration Leadership
- Design and deliver large-scale data migration strategies supporting identity transformation across multi-petabyte datasets
- Develop robust data lineage mapping and validation frameworks to ensure accuracy and integrity
- Build scalable batch and streaming data transformation pipelines with parallel orchestration
- Lead JDK and Apache Spark upgrades while maintaining production stability
- Implement monitoring and observability to ensure data pipeline performance and system health
- Create comprehensive testing and validation frameworks for complex migration scenarios

Technical Leadership & Engineering Excellence
- Own and provide technical leadership for critical system components
- Mentor engineers through pair programming, code reviews, and collaborative delivery
- Lead technical design discussions aligned with enterprise standards
- Drive delivery momentum by proactively identifying and removing blockers
- Collaborate cross-functionally to ensure strong technical alignment and successful outcomes

Job Experience and Skills Required:
- Minimum 8 years' experience in data engineering or related engineering roles
- Bachelor's or Master's degree in Computer Science, Engineering, or equivalent practical experience
- Proven expertise in large-scale data migration and ETL pipeline development
- Advanced proficiency in Scala and Java, building with Gradle or Maven
- Deep experience with Apache Spark and distributed storage systems (HDFS, S3, GCS)
- Production experience with cloud data platforms including Amazon Web Services, Google Cloud Platform, Microsoft Azure, or Databricks
- Hands-on experience with Amazon EKS and containerised workloads
- Strong understanding of data privacy, security, and compliance principles (GDPR preferred)
- Excellent distributed systems design, analytical, and problem-solving skills

Apply now!