Researcher Profile
Chris Hallacy
Vision-language pretraining (CLIP)
Co-author, CLIP
Co-authored CLIP: a core reference for contrastive multimodal pretraining.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last updated: March 20, 2026
Official and External Links
Known For
The ideas, systems, and research directions that make this person worth knowing.
01. Vision-language pretraining (CLIP)
02. Learning Transferable Visual Models From Natural Language Supervision
03. OpenAI
04. CLIP
05. Vision-Language
06. Multimodal
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Co-authored CLIP: a core reference for contrastive multimodal pretraining.
Important because several of today's foundation-model playbooks trace back to work he helped drive, especially generative pretraining and multimodal transfer.
Co-authored the original DALL·E paper: zero-shot text-to-image generation.