A useful page for the more evaluation-heavy side of Anthropic’s alignment program, especially where constitutional methods, model-written evals, and faithfulness checks start to connect.
Researcher Profile
Dustin Li
Alignment and evaluation researcher at Anthropic
Worth tracking for the newer evaluation thread at Anthropic, especially where failure-mode discovery and faithfulness measurement extend beyond the original RLHF papers.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
1. Model-written evaluations
2. Reasoning-faithfulness measurement
3. Failure-mode discovery in aligned models
4. Alignment via AI feedback (Constitutional AI)
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Constitutional AI: Harmlessness from AI Feedback
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A useful profile for the people building Anthropic’s evaluation stack, especially the model-written-evals line that tries to surface behaviors faster than hand-built test sets can.
Important because he is right at the center of the model-written-evals line, which became one of Anthropic’s clearest attempts to discover behaviors faster than manual evaluation can.
Worth following for how Anthropic moved from assistant training into more explicit evaluation work around model behavior, red-teaming, and chain-of-thought faithfulness.
A good person to follow for the part of alignment work that becomes concrete measurement: model-written tests, chain-of-thought faithfulness, and behavior-shaping methods that can actually be audited.
Useful for the attack-and-evaluation side of alignment work, especially long-context jailbreak research and the measurement work that turns safety concerns into concrete tests.