
RESEARCH

Domain Adaptation

Classical machine learning usually assumes that training and test data are drawn from the same distribution. In practice, however, this assumption is often violated, as data may vary across environments, tasks, or acquisition settings.

Domain adaptation addresses this distribution shift by transferring knowledge from a source domain to a related but different target domain. My research approaches this problem through Optimal Transport, which provides a principled geometric framework for comparing distributions and studying how they can be aligned while preserving their underlying structure.
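As a minimal illustration of the idea (a sketch only, not the hierarchical or incremental formulations developed in the publications below), entropic optimal transport can be used to align source samples with a shifted target domain. The function and variable names here are purely illustrative:

```python
import numpy as np

def sinkhorn(a, b, C, reg=0.5, n_iters=200):
    """Entropic optimal transport via Sinkhorn iterations.
    a, b: source/target marginal weights; C: pairwise cost matrix."""
    K = np.exp(-C / reg)                # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iters):
        v = b / (K.T @ u)               # rescale to match target marginal
        u = a / (K @ v)                 # rescale to match source marginal
    return u[:, None] * K * v[None, :]  # transport plan

# Toy example: 1-D source samples vs. a mean-shifted target domain.
rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(5, 1))   # source domain
Xt = rng.normal(3.0, 1.0, size=(5, 1))   # shifted target domain
C = (Xs - Xt.T) ** 2                     # squared-Euclidean cost
a = np.full(5, 1 / 5)                    # uniform source weights
b = np.full(5, 1 / 5)                    # uniform target weights
P = sinkhorn(a, b, C)

# Barycentric mapping: transport each source point toward the target.
Xs_mapped = (P @ Xt) / P.sum(axis=1, keepdims=True)
```

The transport plan `P` couples the two empirical distributions, and the barycentric mapping moves each source point to a weighted average of the target points it is coupled with, so the mapped samples land on the target domain while the relative geometry of the source is preserved.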

Related Publications

Journal
Theoretical Guarantees for Domain Adaptation with Hierarchical Optimal Transport
El Hamri, Bennani, Falih
Machine Learning Journal, 2025

Conference
McCann’s Interpolation for Gradual Domain Adaptation on the Wasserstein Geodesic
El Hamri, Falih, Rozenholc
IJCNN 2025 — IEEE International Joint Conference on Neural Networks

Conference
Hierarchical Representation for Multi-Source Domain Adaptation Using Wasserstein Barycenter
El Hamri, Falih, Rozenholc
ICMLA 2024 — International Conference on Machine Learning and Applications (IEEE)

Journal
Incremental Confidence Sampling with Optimal Transport for Domain Adaptation
El Hamri, Bennani, Falih
International Journal of Neural Systems, 2024 (World Scientific)

Journal
Hierarchical Optimal Transport for Unsupervised Domain Adaptation
El Hamri, Bennani, Falih
Machine Learning Journal, 2022

Conference
Incremental Unsupervised Domain Adaptation Through Optimal Transport
El Hamri, Bennani, Falih
WCCI / IJCNN 2022 — IEEE World Congress on Computational Intelligence

Conference
When Domain Adaptation Meets Semi-Supervised Learning Through Optimal Transport
El Hamri, Bennani, Falih
AIAI 2022 — IFIP International Conference on Artificial Intelligence Applications & Innovations (Springer)