A good person to follow for the part of alignment work that turns into concrete measurement: model-written tests, chain-of-thought faithfulness, and behavior-shaping methods that can actually be audited.
Researcher Profile
Danny Hernandez
Alignment via AI feedback (Constitutional AI)
Alignment and evaluations researcher at Anthropic
A strong person to follow for how Anthropic moved from assistant training into more explicit evaluation work around model behavior, red-teaming, and chain-of-thought faithfulness.
Organizations
Labs
Anthropic
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed: March 18, 2026
Official and External Links
Known For
The ideas, systems, and research directions that make this person worth knowing.
01. Helpful and harmless assistant training
02. Model-written evaluations
03. Faithfulness and safety measurement
04. Alignment via AI feedback (Constitutional AI)
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Constitutional AI: Harmlessness from AI Feedback
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Worth following for the evaluation side of Anthropic’s alignment program, especially where model-written tests and public-input methods become practical tooling rather than just ideas.
Useful for the attack-and-evaluation side of alignment work, especially long-context jailbreak research and the measurement work that turns safety concerns into concrete tests.
A good person to follow for the evaluation-heavy side of Anthropic's alignment work, especially where early assistant training later feeds into reasoning-faithfulness and model-written testing.
Worth tracking for the newer evaluation thread at Anthropic, especially where failure-mode discovery and faithfulness measurement extend beyond the original RLHF papers.
Worth following for the thread inside Anthropic that connects assistant training to more explicit work on reasoning faithfulness and evaluation.