Useful for the attack-and-evaluation side of alignment work, especially long-context jailbreak research and the measurement work that turns safety concerns into concrete tests.
Researcher Profile
Editor reviewed
Ethan Perez
Alignment via AI feedback (Constitutional AI)
Alignment and evaluation researcher at Anthropic
Important because he sits near the boundary between alignment theory and concrete failure-mode discovery, especially jailbreaks, preference training, and behavior evaluations.
Organizations
Labs
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Official And External Links
Known For
The ideas, systems, research directions, and key papers that make this person worth knowing.
01
Constitutional AI
02
Jailbreak and robustness research
03
Behavior discovery and safety evaluation
04
Constitutional AI: Harmlessness from AI Feedback
05
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links used to verify and round out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Important for understanding how Anthropic’s assistant-training stack evolved from early RLHF into Constitutional AI and later robustness work around jailbreaks and behavior control.
A high-signal person to follow for the part of alignment research that asks whether a model’s stated reasoning can actually be trusted and measured.
Worth tracking for the newer evaluation thread at Anthropic, especially where failure-mode discovery and faithfulness measurement extend beyond the original RLHF papers.
A useful profile for anyone tracking the builders of Anthropic’s evaluation stack, especially the model-written-evals line that tries to surface behaviors faster than hand-built test sets can.
Important because he is right at the center of the model-written-evals line, which became one of Anthropic’s clearest attempts to discover behaviors faster than manual evaluation can.