Researcher Profile
Nicholas Joseph
Alignment via AI feedback (Constitutional AI)
Contributor to Anthropic's pretraining and evaluation work
A good person to follow for the evaluation-heavy side of Anthropic's alignment work, especially where early assistant training later feeds into reasoning-faithfulness checks, red-teaming, and model-written testing. Also important for the less visible infrastructure side of Anthropic, where pretraining, assistant tuning, and reasoning-faithfulness work meet.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Helpful and harmless assistant training
02 Constitutional AI (alignment via AI feedback; a sketch follows this list)
03 Reasoning-faithfulness work
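The Constitutional AI entry above names a concrete procedure: the model drafts a response, critiques its own draft against a written principle, then revises, and the revised responses feed supervised fine-tuning (a later phase uses AI preference labels for RL). Below is a minimal sketch of that critique-and-revision loop, assuming a hypothetical generate() completion call; the principle strings are paraphrases, not the paper's wording.

# Minimal sketch (not Anthropic's code) of the critique-and-revision
# loop from "Constitutional AI: Harmlessness from AI Feedback".
# `generate` is a hypothetical stand-in for any chat-model completion
# call; the principles below are paraphrased examples.
import random

PRINCIPLES = [
    "Identify ways the response is harmful, unethical, or dishonest.",
    "Identify ways the response could be more helpful and harmless.",
]

def generate(prompt: str) -> str:
    """Placeholder for a real model call (e.g. an LLM API request)."""
    raise NotImplementedError("wire this to an actual model")

def critique_and_revise(prompt: str, n_rounds: int = 2) -> str:
    """Self-critique a draft against randomly drawn principles, then
    revise; revised outputs become supervised fine-tuning data."""
    response = generate(prompt)
    for _ in range(n_rounds):
        principle = random.choice(PRINCIPLES)
        critique = generate(
            f"Prompt: {prompt}\nResponse: {response}\n"
            f"Critique request: {principle}"
        )
        response = generate(
            f"Prompt: {prompt}\nResponse: {response}\n"
            f"Critique: {critique}\n"
            "Revision request: rewrite the response to address the critique."
        )
    return response

In the paper's second phase, the model compares pairs of responses against the constitution, and those AI-generated preference labels train the preference model used for reinforcement learning (RLAIF) in place of human comparisons.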
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Constitutional AI: Harmlessness from AI Feedback
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.