As a Senior Data Engineer, I am a creative problem solver with strong business and analytical skills. I excel at collaborating with stakeholders to gather and align on business requirements for implementing data platforms and data warehouses.
My expertise lies in designing and developing ETL pipelines using Azure Data Factory, Synapse, Databricks, dbt, and Airflow. Additionally, I am proficient in developing and maintaining data models, and I have experience with SQL, Python, PySpark, and Spark SQL.
I am a self-motivated team player with excellent communication skills and a quick grasp of new concepts. I thrive under pressure, and my adaptability makes me a valuable asset to any team. I enjoy engaging with the business and constantly look for ways to improve myself and the team.
Jan 2025 – Present
Designed a scalable and secure data integration solution in Azure, including high-level architecture in ArchiMate. Developed Synapse pipelines to integrate ERP systems and external APIs, advised on data cleansing and SCD Type 2, and conducted workshops to transfer knowledge to internal staff.
Jun 2024 – Present
Contributed to the transition towards a data-driven organization. Built ELT pipelines in Azure Synapse Analytics, developed Python/PySpark notebooks for data transformations, created monitoring dashboards in Grafana, deployed Azure resources via Terraform, and set up CI/CD pipelines in GitHub Actions. Mentored and trained internal employees through workshops and documentation.
Nov 2023 – May 2024
Designed and implemented a data integration solution on the Azure data platform. Exposed REST APIs in Synapse, developed ETL pipelines, transformed data using Python and PySpark, and implemented SCD Type 2 for historical data management. Supported decision-making with cleansed datasets and provided workshops and documentation to internal staff.
May 2022 – Nov 2023
Worked as a consultant on multiple Azure Data Engineering projects for clients including Port of Rotterdam and Transavia.
Jan 2023 – Nov 2023
Migrated SAP BW data to Azure Databricks using Synapse pipelines and Airflow DAGs. Applied a medallion architecture with Data Vault 2.0 and dbt. Developed Python automation scripts to generate ERDs and run data quality checks.
May 2022 – Dec 2022
Developed ETL pipelines in Azure Data Factory, Airflow DAGs, and PySpark scripts in Databricks for social media and operational data. Built dbt models and a serverless Azure Functions API for data integration. Delivered Power BI data models and workshops for the internal team.
Apr 2019 – Apr 2022
Worked as a consultant delivering data engineering solutions for clients including ASML and Shimano Europe.
Nov 2019 – Apr 2022
Developed Azure APIs for real-time and batch data exchange, implemented CI/CD in Azure DevOps, and improved security with OAuth2, JWT, and client certificates. Built Azure Functions for B2C integrations with Adobe Campaign and provided monitoring and documentation.
Apr 2019 – Apr 2022
Developed new features for the Alert & Health Monitoring application in Splunk Enterprise. Created KPIs, dashboards, and ITSI alerts, and wrote Python and Bash scripts to collect data via REST APIs. Worked in Agile SAFe teams and presented deliverables in ART demos.
Graduated Apr 2019
Specialized in data-driven business and IT management. Completed minors in Big Data and Cyber Security.
I help organizations become data-driven by designing and implementing secure, scalable Azure data platforms. From data engineering and architecture to DevOps and consulting, I enable actionable insights and better decision-making.
Design and build ELT/ETL pipelines in Azure Synapse Analytics and Databricks. Transform raw data into clean, structured datasets using Python, PySpark, and SQL for analytics and machine learning use cases.
Implement CI/CD pipelines for Azure Data Factory, Synapse, and Databricks using Azure DevOps and GitHub Actions. Automate workflows with Airflow, monitor data pipelines with Grafana, and ensure data quality and governance.
Support organizations with data strategy, solution design, and hands-on workshops. Train internal teams, document solutions, and ensure smooth handovers to enable self-sufficient data teams.