Job Description
Title: Data Engineer II (Remote, India/Portugal)
Location Requirement: Candidates must be located in India or Portugal
Overview:
Join the Data Platform team of the world's #1 travel platform, supporting over 390M monthly users and petabytes of data.
Key Responsibilities:
Build and maintain scalable ETL pipelines using Snowflake, Databricks, and AWS
Curate and deliver reliable, validated datasets across multiple departments
Write optimized SQL and implement monitoring on critical data flows
Partner with cross-functional teams, including analytics, CRM, and ML
Resolve pipeline failures and implement anomaly detection
Ensure enterprise-level data quality, lineage, and discoverability
Required Skills & Experience:
4+ years in data engineering or backend software development
Expert in SQL and data modeling
Strong Python, Java, or Scala coding skills
Experience with Snowflake, Databricks, and AWS (S3, Lambda)
Background in relational and NoSQL databases (e.g., Postgres)
Familiar with the Linux shell and systems administration
Solid grasp of data warehouse concepts and real-time processing
Excellent troubleshooting, documentation, and QA mindset
Nice-to-Have Skills:
Experience with data mesh architecture and governance frameworks
Hands-on with real-time streaming or event-driven pipelines
Previous contributions to observability and monitoring for data platforms
Perks & Benefits:
Remote work from anywhere in India or Portugal
Opportunities to contribute to a globally impactful product
Sponsored training & certifications (company pays for two attempts)
English lessons with native speakers
Team-building events and milestone celebrations
Tech & non-tech communities for professional support
Work equipment and support for home office