Researcher Profile

Archit Sharma

Direct preference optimization (DPO)

Stanford researcher on preference optimization and autonomous RL

A useful researcher to follow for the bridge between reinforcement-learning methods and alignment techniques like DPO, especially where preference optimization is treated as a core learning problem rather than a bolt-on fine-tuning trick.
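The core idea behind DPO is to optimize a policy directly on preference pairs, using the log-probability ratio against a frozen reference model as an implicit reward. Below is a minimal sketch of that loss in PyTorch, assuming per-sequence log-probabilities are precomputed; the function and argument names are illustrative, not taken from any particular codebase.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Each input is a 1-D tensor of summed per-token log-probabilities for a
    # batch of (prompt, response) pairs; beta scales the implicit KL penalty
    # toward the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between preferred and dispreferred responses under
    # a Bradley-Terry preference model: -log sigmoid(r_chosen - r_rejected).
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()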

Organizations

Stanford University
University of Iowa

About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Known For

The ideas, systems, and research directions that make this person worth knowing.

01

Direct preference optimization (DPO)

02

Autonomous reinforcement learning

03

Evaluating AI feedback for alignment

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

Direct Preference Optimization: Your Language Model is Secretly a Reward Model (Rafailov et al., NeurIPS 2023)

Signature Works

Additional papers, projects, or repositories that round out the research picture.

Supporting Sources

Additional links that help verify and flesh out this profile.

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.