Researcher Profile

Rafael Rafailov

Direct preference optimization (DPO)

Stanford researcher on preference optimization and agentic reasoning

Rafailov is one of the most important newer names to track in alignment-focused language-model research: his work runs directly from DPO into newer attempts to turn language models into better optimizers and agents.

Organizations

Stanford University

About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Known For

The ideas, systems, and research directions that make this person worth knowing.

01

Direct preference optimization (DPO)

02

Preference-optimization variants

03

Agentic reasoning and optimization
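For orientation, the DPO objective behind the first item above can be sketched in a few lines. This is a minimal illustration assuming per-response log-probabilities are already computed; the function name, argument names, and `beta` default are ours, not from any official implementation. DPO scores a preference pair by the policy's log-probability margin over a frozen reference model and applies a logistic loss to it.

```python
import math

def dpo_loss(pi_logp_chosen, pi_logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Per-example DPO loss: -log sigmoid(beta * (policy margin - reference margin)).

    Each argument is the summed log-probability of a full response
    under the trainable policy (pi_*) or the frozen reference (ref_*).
    """
    margin = beta * ((pi_logp_chosen - ref_logp_chosen)
                     - (pi_logp_rejected - ref_logp_rejected))
    # -log(sigmoid(m)) rewritten as log(1 + exp(-m)) for numerical stability
    return math.log1p(math.exp(-margin))
```

When the policy and reference agree (zero margin), the loss is log 2; it shrinks as the policy assigns relatively more probability to the chosen response than the reference does, which is the sense in which the language model "is secretly a reward model."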

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al., 2023)

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.