Data Engineer
Who we are
At Frontiers, our purpose is simple yet ambitious: to make science open. We believe open science empowers the global scientific community to accelerate discovery and develop the solutions needed for healthy lives on a healthy planet.
We are one of the world's largest and most influential open-access research publishers. Every article we publish is peer-reviewed and quality-certified, ensuring research is accessible to everyone, everywhere. To date, Frontiers research has been viewed over 4 billion times, demonstrating the real-world impact of science without barriers.
Joining Frontiers means being part of a global, mission-driven organization at the intersection of science, technology, and innovation - working alongside passionate colleagues who care deeply about advancing knowledge for the benefit of society.
To learn more about our impact and culture, please watch this video:
https://www.youtube.com/watch?v=jLJ7ZO3wOW4
About the Role:
We're looking for a Data Engineer to design, build, and optimize our data platform using Airflow, Kubernetes, Google BigQuery, and dbt. This role combines hands-on development with architectural responsibility - ensuring scalability and reliability across our infrastructure.
Core Technical Focus Areas
Data Engineering & Orchestration
- Experience in designing, building, and maintaining complex data pipelines using Airflow
- Strong understanding of ETL/ELT workflows and modern data engineering principles
- Proven experience deploying and operating workloads on Kubernetes (e.g. Deployments, CronJobs, configs, and basic troubleshooting) in a production environment
- Experience with BigQuery or another cloud data warehouse for large-scale data processing
Cloud Infrastructure
- Hands-on experience with Google Cloud and/or Azure
- Experience applying DevOps practices - including Docker, container registries, infrastructure as code with Terraform, and CI/CD - in a production context.
Requirements
- 3-5 years of experience in data engineering
- Proficiency with Airflow and Kubernetes
- Advanced Python and SQL skills
- Collaborative and pragmatic mindset
- Based in or open to relocation to Madrid
Programming & Development Practices
- Advanced Python and SQL skills, including data manipulation libraries
- Strong engineering discipline - testing, version control, CI/CD, and clean code principles
- Familiarity with data validation, logging, and monitoring best practices
Desired Additional Experience
- Experience working with data quality frameworks and implementing data validation processes
- Familiarity with data catalog management tools and metadata governance practices
Collaboration & Soft Skills
- Clear communication across technical and non-technical audiences
- Adaptable and organized under fast-paced conditions
Benefits
- We prioritise office presence and emphasise in-person collaboration, but also offer appropriate adjustments where needed, in line with company policy
- Extra wellbeing days on top of your annual leave allowance
- Up to 3 paid volunteering days each year
- 24/7 confidential Employee Assistance Programme (wellbeing, mental health, legal & financial support)
- Learning & development support via the Frontiers Learning Hub
- Competitive local benefits, dependent on country (e.g. healthcare and pension/retirement provision)
Frontiers actively embraces diversity and is a safe and welcoming workplace. Recruitment is free from discrimination - including based on race, national or ethnic origin, age, religion, disability, sex, gender identity or sexual orientation. With employees from more than 50 different nations, our diversity creates vibrant teams and constantly challenges us to appreciate multiple perspectives.
Application managed by Frontiers