One of the clearest people to follow if you want the mechanistic-interpretability thread at Anthropic rather than only its safety-policy surface.
Researcher Profile
Editor reviewed
Nelson Elhage
Mechanistic interpretability researcher at Anthropic
One of the most important people to follow for mechanistic interpretability and transformer-circuits-style attempts to reverse engineer how large language models work.
Organizations
Anthropic (Labs)
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Transformer circuits
02
Induction heads and in-context learning (see the first sketch after this list)
03
Toy models of superposition (see the second sketch after this list)
04
Alignment via AI feedback (Constitutional AI)
05
Constitutional AI: Harmlessness from AI Feedback
06
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
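To make item 02 concrete, here is a minimal sketch, written for this profile rather than taken from Anthropic's code, of the copying rule that induction heads implement, as described in "In-context Learning and Induction Heads": when a token A reappears, predict whatever token followed it last time ([A][B] ... [A] -> [B]). The function name and example sequence are invented for illustration.

# Illustrative sketch of the induction-head copying rule, in plain Python.
def induction_predictions(tokens):
    """Return {position: predicted next token} under the rule
    [A][B] ... [A] -> [B]: find the most recent earlier occurrence of the
    current token and predict the token that followed it."""
    last_seen = {}     # token -> most recent earlier position
    predictions = {}
    for t, tok in enumerate(tokens):
        if tok in last_seen and last_seen[tok] + 1 < len(tokens):
            predictions[t] = tokens[last_seen[tok] + 1]
        last_seen[tok] = t
    return predictions

# On a repeated random sequence the rule predicts the second half exactly,
# which is the behavioral signature used to identify induction heads.
print(induction_predictions([3, 7, 1, 4, 3, 7, 1, 4]))  # {4: 7, 5: 1, 6: 4, 7: 3}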
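Likewise for item 03, a minimal sketch of the setup studied in "Toy Models of Superposition", assuming PyTorch is available; the hyperparameters are invented for illustration and the paper's per-feature importance weighting is omitted. Sparse features are squeezed through a low-dimensional bottleneck and reconstructed as ReLU(W^T W x + b); with enough sparsity the model ends up representing more features than it has hidden dimensions.

import torch

# Illustrative sketch; sizes and learning rate are not taken from the paper.
n_features, n_hidden, sparsity = 20, 5, 0.95
W = torch.nn.Parameter(torch.randn(n_hidden, n_features) * 0.1)
b = torch.nn.Parameter(torch.zeros(n_features))
opt = torch.optim.Adam([W, b], lr=1e-2)

for step in range(5_000):
    # Synthetic sparse data: each feature is active with probability 1 - sparsity.
    mask = (torch.rand(1024, n_features) > sparsity).float()
    x = torch.rand(1024, n_features) * mask
    x_hat = torch.relu(x @ W.T @ W + b)  # down-project to 5 dims, project back, ReLU
    loss = ((x - x_hat) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Columns of W with substantial norm correspond to represented features; at high
# sparsity the count typically exceeds n_hidden, i.e. features are in superposition.
print((W.norm(dim=0) > 0.5).sum().item(), "of", n_features, "features represented")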
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
One of the clearest people to follow if you care about scaling laws, training efficiency, and the systems choices that quietly shape frontier-model progress.
One of the earlier Anthropic contributors worth tracking if you care about the transition from RLHF-style assistant training into scaling and evaluation work.
Useful for the seam between Anthropic’s earlier alignment papers and its later audit-oriented safety work, where interpretability and evaluation start feeding into deployment practice.
A strong person to follow for how Anthropic moved from assistant training into more explicit evaluation work around model behavior, red-teaming, and chain-of-thought faithfulness.
Worth following for the evaluation side of Anthropic’s alignment program, especially where model-written tests and public-input methods become practical tooling rather than just ideas.