
Resume

My professional background and qualifications

Adrian Villanueva Martinez

Senior Software Engineer

Senior Software Engineer with deep expertise in cloud-native data platforms, data engineering, and MLOps. Based in Tokyo, with experience delivering large-scale, high-impact systems across organizations in Europe and Japan. Skilled in designing resilient infrastructure, building developer SDKs, and driving company-wide platform adoption.

Skills & Technologies

Python, TypeScript, Rust, Java, SQL, Go, Bash, C, Next.js, React, FastAPI, Apache Airflow, PySpark, AWS, Azure, GCP, Databricks, Docker, Terraform, Kubernetes, MLflow, AWS Athena, SQL-based databases, Redis, OpenTelemetry, Prometheus, Grafana, GitHub Actions, Jenkins, ArgoCD, Linux (Ubuntu, CentOS), Windows Server

Professional Experience

Data Engineer, Woven by Toyota

2024-08 - Present

Tokyo, Japan

  • Built a scalable, cloud-native data mesh platform on AWS and Databricks, adopted company-wide to enable governed, high-quality data sharing across domains
  • Developed a multi-language Kafka ingestion SDK (Rust core with Python, Java, TypeScript, and Go bindings), deployed org-wide for real-time ingestion across heterogeneous systems
  • Designed and implemented CI/CD pipelines for ML and data workflows in Databricks, integrating deployment and lifecycle tracking with MLflow
  • Improved platform resilience through automated data reconciliation, OpenTelemetry instrumentation, and enforcement of data contracts
  • Led development of self-service capabilities, including automated provisioning of Kafka topics, access control groups, and data product registration, reducing onboarding friction across the organization
  • Created platform documentation, naming conventions, and onboarding guides to support self-serve adoption by engineers, analysts, and ML practitioners

Data Platform Engineer, Albert Heijn

2022-03 - 2024-06

Amsterdam, Netherlands

  • Managed a company-wide data platform built on Azure and Databricks, supporting large-scale batch and real-time pipelines
  • Built reusable Terraform modules and automated infrastructure provisioning to reduce deployment time and enforce compliance
  • Created observability tooling using Python and Kusto to ensure data quality and regulatory compliance across data products
  • Implemented CI/CD with GitHub Actions and ArgoCD, automating deployment of services and pipelines to Kubernetes clusters
  • Worked closely with analysts and data scientists to productionize ML pipelines and deploy feature engineering workflows

Data Engineer, Dashmote

2021-09 - 2022-03

Amsterdam, Netherlands

  • Migrated legacy Airflow pipelines to PySpark for scalable data processing and analytics workflows
  • Optimized Docker images and CI pipelines with multi-stage builds, cutting build times and reducing cloud costs
  • Designed and deployed a governed data lake on S3 with robust schema management and access controls
  • Collaborated with data scientists to automate training data pipelines and streamline model experimentation

Software Developer (Faas Tech), Ernst & Young (EY)

2019-06 - 2020-06

Madrid, Spain

  • Built ETL pipelines in Python, SQL, and Java to automate financial reporting workflows across global client accounts
  • Trained and deployed ML models for forecasting and risk scoring in regulated environments
  • Built and deployed full-stack data visualization tools on Linux for internal analytics teams
  • Participated in project planning with clients, translating business goals into deliverable data products

Education

Bachelor's in Computer Science, Universidad Europea de Madrid

2015 - 2020

Languages

Spanish: Native
English: Professional
Japanese: Basic
Dutch: Basic