Theoretical Guarantees for Domain Adaptation with Hierarchical Optimal Transport
Classical machine learning usually assumes that training and test data are drawn from the same distribution. In practice, however, this assumption is often violated, as data may vary across environments, tasks, or acquisition settings.
Domain adaptation addresses this distribution shift by seeking ways to transfer knowledge from a source domain to a related but different target domain. In my research, I approach this question through Optimal Transport, which provides a principled geometric framework for comparing probability distributions and aligning them while preserving their underlying structure.