Introduction
- 36 hours per week
- Start date: as soon as possible
- End date: 31 December 2026
- Extension is possible.
- Hybrid work.
- ZZP (self-employed freelance contracting) is not allowed.
- Candidates must live in the Netherlands.
Function
You will be joining a data engineering team building and maintaining a modular ETL framework that ingests, transforms, and delivers financial data to downstream reporting and risk systems, using Apache Spark and Azure Databricks. The codebase is modern Python, follows clean architecture principles, and is backed by a full unit and integration test suite. ETL processes are orchestrated via Apache Airflow or Azure Durable Functions (C#).
Your primary activity will be onboarding new financial data sources into the framework — covering schema definition, transformation logic, testing, and end-to-end deployment across development, acceptance, and production environments.
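To give a feel for what onboarding a source involves, here is a minimal, framework-agnostic sketch in plain Python of the pattern described above: declare a schema, attach a transformation, and validate rows against it. All names here (`SourceDefinition`, `normalise_trades`, the column names) are illustrative assumptions, not the actual framework's API, which in practice would be built on PySpark/Databricks.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative only: a hypothetical per-source definition,
# not the real framework's API.
@dataclass(frozen=True)
class SourceDefinition:
    name: str
    schema: dict[str, type]  # column name -> expected Python type
    transform: Callable[[list[dict]], list[dict]]

def validate(row: dict, schema: dict[str, type]) -> bool:
    """Check that a raw row matches the declared schema."""
    return all(isinstance(row.get(col), typ) for col, typ in schema.items())

def normalise_trades(rows: list[dict]) -> list[dict]:
    """Example transformation: drop zero-quantity rows, derive a notional."""
    return [
        {**r, "notional": r["quantity"] * r["price"]}
        for r in rows
        if r["quantity"] > 0
    ]

trades_source = SourceDefinition(
    name="trades_feed",
    schema={"trade_id": str, "quantity": int, "price": float},
    transform=normalise_trades,
)

raw = [
    {"trade_id": "T1", "quantity": 10, "price": 99.5},
    {"trade_id": "T2", "quantity": 0, "price": 100.0},  # dropped by transform
]
valid = [r for r in raw if validate(r, trades_source.schema)]
result = trades_source.transform(valid)
# result: a single row for T1 with a derived "notional" of 995.0
```

Testing, deployment across development/acceptance/production, and orchestration (Airflow or Durable Functions) then wrap around definitions like this one.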
By the end of the assignment, you will have independently onboarded at least two new sources, each passing integration tests and deployed to production, while maintaining high code quality and transparency and actively contributing to team quality standards and to future-state architecture (DMF) activities.
Requirements
• Proficiency in Python with a deep understanding of software development principles (at least 5 years of experience)
• Strong knowledge of ETL best practices and data engineering methodologies
• Experience with Apache Spark/PySpark and distributed computing (preferably Azure Databricks)
• Experience with SQL and relational databases
• Experience with CI/CD and Azure DevOps
Nice-to-have:
• Experience with .NET (C#) – used in supporting components and legacy system integrations
• Experience working with data integration and orchestration tools (e.g. Azure Durable Functions, Apache Airflow, Azure Data Factory)
Information
Jobs A2Z-CM +31(0)20-3337629