Contrastive learning is like teaching by comparison: these are similar, those are different, until the AI learns the patterns on its own. The goal is for the AI to develop a useful sense of similarity that generalizes to new data and new tasks.
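To make the idea concrete, here is a minimal sketch of one popular contrastive objective, the InfoNCE (NT-Xent) loss, written in PyTorch. It is an illustration of the general recipe, not code from the publications below; the function name, temperature, and batch are placeholders.

```python
# Minimal InfoNCE (NT-Xent) sketch, assuming PyTorch and a batch of paired
# "views" (two augmentations of each example). Names here are illustrative.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same examples."""
    z1 = F.normalize(z1, dim=1)  # unit-norm embeddings -> cosine similarity
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature    # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0))  # matched pairs sit on the diagonal
    # Pull matched pairs together, push mismatched pairs apart.
    return F.cross_entropy(logits, targets)

# Example: random embeddings stand in for an encoder's output.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2))
```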
This approach has driven major advances in computer vision and language models, but many questions remain. For example, researchers are still trying to understand puzzling phenomena like neural collapse, where the hidden representations of the data converge to a single degenerate structure, so the model effectively learns that everything is the same.
We study these puzzling phenomena, trying to understand why they happen and how to fix them.
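One simple way to see collapse numerically is to look at the singular values of a batch of embeddings: if nearly all of them vanish, the representations have flattened onto a degenerate structure. The sketch below is a hypothetical diagnostic in the same PyTorch setting, not our method; the tolerance and the rank-one toy example are assumptions for illustration.

```python
# Rough collapse diagnostic, assuming PyTorch: if the embedding matrix has
# very few non-negligible singular values, the representations have
# degenerated toward a low-dimensional (in the extreme, single-point) structure.
import torch

def effective_rank(z: torch.Tensor, tol: float = 1e-3) -> int:
    """Count singular values above tol * largest, on centered embeddings."""
    z = z - z.mean(dim=0)  # remove the shared mean direction
    s = torch.linalg.svdvals(z)
    return int((s > tol * s[0]).sum())

healthy = torch.randn(256, 128)                         # spread-out embeddings
collapsed = torch.randn(256, 1) @ torch.randn(1, 128)   # everything on one line
print(effective_rank(healthy), effective_rank(collapsed))  # e.g. 128 vs 1
```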
Publications

- Huanran Li, M. Nguyen, and D. Pimentel-Alarcón. "Semi-Supervised Contrastive Learning with Orthonormal Prototypes." Under review, 2025.
- Huanran Li and D. Pimentel-Alarcón. "Subspace Clustering on Incomplete Data with Self-Supervised Contrastive Learning." Under review, 2025.
- Huanran Li and D. Pimentel-Alarcón. "Deep Fusion: Capturing Dependencies in Contrastive Learning via Transformer Projection Heads." IEEE International Symposium on Information Theory (ISIT), 2025. [Link]