Researcher Profile
Hugo Touvron
Open-weight foundation models (LLaMA)
Researcher behind DINO and Meta's LLaMA model line
One of the clearer bridge figures between the vision-transformer era and the open-weight LLaMA era: his public paper trail runs from influential self-supervised vision work through the first LLaMA release to Llama 2 and Code Llama.
Organizations
Labs
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
DINO and self-supervised vision transformers
02
LLaMA and Llama 2
03
Code Llama
04
Open-weight foundation models (LLaMA)
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Present across multiple major generations of the LLaMA family, which makes his page a stable thread through Meta's open-model program rather than a one-paper author stub.
Important for the open-weight frontier-model story because her paper trail runs through both the original LLaMA releases and the early Mistral efficiency push.
Important for the code-model branch of Meta's open-weight ecosystem, especially where general-purpose LLaMA work turns into stronger code-specialized systems.
Useful to follow for the scaling and productization layer of the LLaMA line, especially as it moved from the first paper into the broader Llama 3 release wave.