
Researcher Profile

Editor reviewed

Eric Mitchell

Direct preference optimization (DPO)

Stanford researcher on preference optimization and language-model adaptation

A high-signal name in the current alignment toolkit, especially if you want to understand how preference optimization connects to broader language-model adaptation work.

Organizations

Stanford University

About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Known For

The ideas, systems, and research directions that make this person worth knowing.

01

Direct preference optimization (DPO)

02

Language-model adaptation

03

Alignment methods that avoid heavier RLHF pipelines

04

Direct Preference Optimization: Your Language Model is Secretly a Reward Model
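To make the "avoids heavier RLHF pipelines" point concrete: DPO replaces the reward-model-plus-RL loop with a single supervised objective over preference pairs. Below is a minimal per-example sketch of that loss, written from the published formula rather than any official implementation; the function name, argument names, and default `beta` are illustrative assumptions, and a real training setup would compute these log-probabilities with a policy model and a frozen reference model in batches.

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected,
             beta=0.1):
    """Per-example DPO loss.

    Inputs are summed token log-probabilities of the chosen and
    rejected responses under the trainable policy and under a
    frozen reference model. beta scales the implicit reward.
    """
    # Implicit reward margin: how much more the policy prefers the
    # chosen response than the rejected one, relative to the reference.
    logits = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    # -log(sigmoid(logits)) written as log1p(exp(-logits))
    return math.log1p(math.exp(-logits))
```

When the policy matches the reference exactly, the margin is zero and the loss is `log 2`; widening the margin in favor of the chosen response drives the loss toward zero, which is the sense in which the language model acts as its own reward model.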

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.