Overview
We are looking for a Big Data Engineer to design, develop, and maintain our organization's big data infrastructure. The ideal candidate will have a strong background in data warehousing, ETL (extract, transform, load) processes, and distributed computing frameworks. This role requires expertise in handling large datasets and building scalable data pipelines.
Key Responsibilities
- Design and implement big data solutions using Hadoop, Spark, and related technologies.
- Develop ETL processes to ingest, transform, and load data (see the sketch after this list).
- Monitor and optimize the performance of data pipelines.
- Collaborate with data scientists and analysts to support their data needs.
- Ensure data quality and security.
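
To make the day-to-day work concrete, here is a minimal sketch of the kind of ETL pipeline this role builds, assuming PySpark; the bucket paths, column names, and filter logic are hypothetical placeholders, not references to our actual systems.

```python
# Minimal PySpark ETL sketch: ingest raw CSV, transform, and load as Parquet.
# All paths and column names below are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Ingest: read raw events from a (hypothetical) landing zone.
raw = spark.read.option("header", True).csv("s3://example-bucket/landing/events/")

# Transform: parse timestamps, drop malformed rows, derive a partition column.
events = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .dropna(subset=["user_id", "event_ts"])
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: write partitioned Parquet to the warehouse layer.
(events.write
       .mode("overwrite")
       .partitionBy("event_date")
       .parquet("s3://example-bucket/warehouse/events/"))

spark.stop()
```

In practice, pipelines like this run under an orchestrator and are instrumented so their performance can be monitored and optimized, as described above.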
Required Skills
- Proficiency in Hadoop, Spark, and related big data technologies.
- Experience with data warehousing and ETL processes.
- Strong programming skills in Java or Python.
- Knowledge of cloud-based data platforms (e.g., AWS, Azure).