A good person to follow for the evaluation-heavy side of Anthropic's alignment work, especially where early assistant training later fed into work on reasoning faithfulness and model-written evaluations.
Researcher Profile
Zac Hatfield-Dodds
Alignment via AI feedback (Constitutional AI)
Contributor to Anthropic's evaluation and reasoning-faithfulness work
Worth tracking for the practical evaluation layer around frontier models, especially where safety claims have to survive contact with real tests and faithful-reasoning checks.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Helpful and harmless assistant training
02 Model-written evaluations
03 Question decomposition
04 Alignment via AI feedback (Constitutional AI)
05 Constitutional AI: Harmlessness from AI Feedback
06 Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Worth following for the thread inside Anthropic that connects early assistant training to more explicit work on reasoning faithfulness and evaluation.
A useful page if you care about the harder question of whether a model’s visible chain of reasoning is actually faithful, not just plausible-looking.
Useful for the seam between Anthropic’s earlier alignment papers and its later audit-oriented safety work, where interpretability and evaluation start feeding into deployment practice.
Useful for the evaluation-heavy side of Anthropic’s research, especially where the lab moved from RLHF and Constitutional AI into broader behavior discovery.
One of the earlier Anthropic contributors worth tracking for the path from RLHF assistant training into Constitutional AI and later model evaluation work.