One of the most important newer names to track in alignment-flavored language-model work: he sits directly on the line from DPO into newer attempts to turn language models into better optimizers and agents.
Researcher Profile
Editor reviewed
Stefano Ermon
Direct preference optimization (DPO)
Associate professor of computer science at Stanford
A high-signal researcher for the probabilistic and generative-modeling side of modern AI, and an important bridge into the Stanford preference-optimization cluster that helped make DPO mainstream.
Organizations
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Generative modeling
02
Diffusion and probabilistic methods
03
Direct preference optimization (DPO)
04
Direct Preference Optimization: Your Language Model is Secretly a Reward Model
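For quick orientation on the DPO idea named above: DPO trains a policy directly on preference pairs, without fitting a separate reward model, by treating the policy's log-probability ratio against a frozen reference model as an implicit reward. A minimal sketch of the per-pair loss, in plain Python with illustrative variable names of my own choosing (not drawn from any particular codebase):

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Each argument is the summed log-probability of a whole response:
    pi_*  -- under the policy being trained
    ref_* -- under the frozen reference model
    beta scales how strongly the loss penalizes drift from the reference.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response over the rejected one, relative to the reference.
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # Negative log-sigmoid of the margin: small when the policy already
    # ranks the chosen response clearly above the rejected one.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# A policy that agrees with the human preference incurs a lower loss
# than one that ranks the responses the other way around.
agree = dpo_loss(pi_chosen=-4.0, pi_rejected=-9.0,
                 ref_chosen=-6.0, ref_rejected=-6.0)
disagree = dpo_loss(pi_chosen=-9.0, pi_rejected=-4.0,
                    ref_chosen=-6.0, ref_rejected=-6.0)
```

This is only the scalar objective; real implementations batch it over token-level log-probabilities from two model forward passes.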
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify the claims in this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
One of the clearest people to follow for the overlap between modern robotics, meta-learning, and preference-optimization-era alignment research.
A useful person to follow for the bridge between reinforcement-learning instincts and later alignment methods like DPO, especially where preference optimization is treated as a core learning problem rather than a bolt-on finetuning trick.
A high-signal name for the current alignment toolkit, especially if you want to understand how preference optimization connects back to broader language-model adaptation work.
One of the clearest people to follow if you care about scaling laws, training efficiency, and the systems choices that quietly shape frontier-model progress.
A strong person to follow for the point where machine learning research starts shaping the compute stack itself, especially in chip placement and systems-aware optimization.