Researcher Profile
Neel Nanda
Training helpful, harmless assistants via RLHF
Co-author, Helpful & Harmless RLHF
A strong person to follow for how Anthropic moved from assistant training into more explicit evaluation work around model behavior, red-teaming, and chain-of-thought faithfulness.
Co-authored an early RLHF recipe for training assistants to be both helpful and harmless.
Labs
Anthropic
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last updated
March 20, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Training helpful, harmless assistants via RLHF
02
RLHF
03
Alignment
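For context on item 01: the recipe behind this work pairs a reward model trained on human preference comparisons with RL fine-tuning of the assistant against that reward. Below is a minimal, self-contained sketch of the preference-modeling half only; the toy bag-of-embeddings encoder, the synthetic comparison tensors, and every hyperparameter are illustrative assumptions, not the paper's code.

```python
# Toy sketch of preference-based reward modeling (the first stage of RLHF):
# train a scalar reward so human-preferred responses score higher than
# rejected ones, via a Bradley-Terry pairwise loss. All shapes and the
# bag-of-embeddings encoder are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyRewardModel(nn.Module):
    def __init__(self, vocab_size=1000, dim=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.head = nn.Linear(dim, 1)

    def forward(self, token_ids):                   # (batch, seq_len)
        pooled = self.embed(token_ids).mean(dim=1)  # crude sequence summary
        return self.head(pooled).squeeze(-1)        # scalar reward per sequence

rm = ToyRewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)

# Synthetic comparison data: each row pairs a "chosen" and a "rejected"
# response to the same prompt, standing in for human preference labels.
chosen = torch.randint(0, 1000, (8, 16))
rejected = torch.randint(0, 1000, (8, 16))

for _ in range(100):
    # Pairwise loss: push the chosen response's score above the rejected one's.
    loss = -F.logsigmoid(rm(chosen) - rm(rejected)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# In the full recipe, this learned scalar reward then drives RL fine-tuning
# of the assistant policy (e.g., PPO with a penalty for drifting from the
# pretrained reference model).
```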
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback (Bai et al., 2022)
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Worth following for the evaluation side of Anthropic’s alignment program, especially where model-written tests and public-input methods become practical tooling rather than just ideas.
One of the clearest people to follow if you care about scaling laws, training efficiency, and the systems choices that quietly shape frontier-model progress.
Important for understanding how Anthropic’s assistant-training stack evolved from early RLHF into Constitutional AI and later robustness work around jailbreaks and behavior control.
A good person to follow for the part of alignment work that becomes concrete measurement: model-written tests, chain-of-thought faithfulness, and behavior-shaping methods that can actually be audited.
Useful for the attack-and-evaluation side of alignment work, especially long-context jailbreak research and the measurement work that turns safety concerns into concrete tests.