A good person to follow for the part of evaluation work that goes beyond leaderboard scores and asks how models generalize across cultures, languages, and shifting social context.
Researcher Profile
Kyle McDonell
Editor reviewed
Open-source LLMs (EleutherAI)
Open-model evaluation and governance contributor
Worth tracking if you care about the seam between open-model benchmarking and the harder question of what frontier systems should actually be evaluated for.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 LM Evaluation Harness (see the usage sketch after this list)
02 Open-model benchmark infrastructure
03 Technical AI governance collaboration
04 Open-source LLMs (EleutherAI)
05 GPT-NeoX (GitHub)
06 EleutherAI (GitHub)
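The first item above, the LM Evaluation Harness, is the piece of infrastructure most readers will actually touch. Below is a minimal sketch of what a run looks like, assuming the harness's Python entry point (lm_eval.simple_evaluate); the model and task names are illustrative placeholders, not choices drawn from this profile.

```python
# Minimal sketch of an LM Evaluation Harness run, assuming the lm_eval Python API.
# The model and task below are illustrative; swap in whatever you actually want to measure.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face transformers backend
    model_args="pretrained=EleutherAI/pythia-160m",  # illustrative small open model
    tasks=["lambada_openai"],                        # illustrative benchmark task
    num_fewshot=0,                                   # zero-shot evaluation
    batch_size=8,
)

# results["results"] maps each task name to its metric scores.
print(results["results"])
```

In recent versions the same run is also exposed as an lm_eval command-line tool, which is how most published open-model comparisons invoke the harness.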
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
One of the quieter but still important contributors in the open-data and open-evaluation lineage behind The Pile, GPT-NeoX, and later benchmarking infrastructure.
A useful person to follow for the evaluation layer of open models, especially where benchmark infrastructure and RLHF tooling become reusable community assets rather than one-off lab code.
Important if you care about the European sovereign-AI track, especially the attempt to build multilingual, explainable, and compliance-conscious frontier systems outside the US lab stack.
A worthwhile long-tail open-model page because it captures one of the quieter GPT-NeoX contributors with an explicit EleutherAI paper trail instead of leaving the profile as a generic coauthor stub.
Useful to follow if you care about the practical evaluation layer of open models, especially where benchmark tooling and reproducible comparisons actually shape what the ecosystem measures.