Researcher Profile
Editor reviewed
Kamile Lukosiute
Alignment and evaluation researcher at Anthropic
A strong researcher to follow for the evaluation-heavy side of Anthropic's alignment work, especially where behavior discovery, reasoning faithfulness, and concrete safety testing come together. Model-written tests and more faithful reasoning traces run through this work, making model behavior easier to inspect.
Organizations
Labs: Anthropic
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Model-written evaluations
02 Reasoning faithfulness
03 Alignment evaluation methods
04 Alignment via AI feedback (Constitutional AI)
05 Constitutional AI: Harmlessness from AI Feedback
06 Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A good person to follow for the evaluation-heavy side of Anthropic alignment work, especially the thread where early assistant training feeds into later reasoning-faithfulness and model-written testing.
Worth following for the thread inside Anthropic that connects assistant training to more explicit work on reasoning faithfulness and evaluation.
A useful page if you care about the harder question of whether a model’s visible chain of reasoning is actually faithful, not just plausible-looking.
Worth tracking for the practical evaluation layer around frontier models, especially where safety claims have to survive contact with real tests and faithful-reasoning checks.
A high-signal person to follow for the part of alignment research that asks whether a model’s stated reasoning can actually be trusted and measured.