Researcher Profile
Dawn Drain
Alignment researcher at Anthropic
This profile covers the seam between Anthropic’s earlier alignment papers and its later audit-oriented safety work, where interpretability and evaluation begin to feed into deployment practice.
Organizations
Labs
Anthropic
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Official and External Links
Known For
The ideas, systems, and research directions that make this person worth knowing.
01. Assistant alignment research
02. Scaling and interpretability analysis
03. Audit-oriented safety work
04. Alignment via AI feedback (Constitutional AI)
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Constitutional AI: Harmlessness from AI Feedback
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A good person to follow for the evaluation-heavy side of Anthropic alignment work, especially where early assistant training later feeds into work on reasoning faithfulness and model-written evaluations.
Useful for the evaluation-heavy side of Anthropic’s research, especially where the lab moved from RLHF and Constitutional AI into broader behavior discovery.
One of the earlier Anthropic contributors worth tracking for the path from RLHF assistant training into Constitutional AI and later model evaluation work.
Useful for the arc from early RLHF assistant work into the later evaluation-heavy safety layer Anthropic built on top of it.
A useful profile for the operational side of alignment work, especially where RL systems and evaluation loops have to be robust enough to support day-to-day model development.