Researcher Profile
Editor reviewed
Laria Reynolds
Open-source LLMs (EleutherAI)
Cultural benchmarking and evaluation contributor at EleutherAI
A good person to follow for the part of evaluation work that goes beyond leaderboard scores and asks how models generalize across cultures, languages, and shifting social context.
Organizations
Labs
EleutherAI
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Official and External Links
Known For
The ideas, systems, and research directions that make this person worth knowing.
01. LM Evaluation Harness
02. Cross-cultural reasoning benchmarks
03. Open-model evaluation infrastructure
04. Open-source LLMs (EleutherAI)
05. GPT-NeoX (GitHub)
06. EleutherAI (GitHub)
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A useful person to follow for the evaluation layer of open models, especially where benchmark infrastructure and RLHF tooling become reusable community assets rather than one-off lab code.
One of the quieter but still important contributors in the open-data and open-evaluation lineage behind The Pile, GPT-NeoX, and later benchmarking infrastructure.
A worthwhile long-tail open-model profile: it documents one of the quieter GPT-NeoX contributors with an explicit EleutherAI paper trail rather than leaving them as a generic coauthor stub.
Useful to follow if you care about the practical evaluation layer of open models, especially where benchmark tooling and reproducible comparisons actually shape what the ecosystem measures.
A useful person to track for the evaluation side of AI risk work, especially where open-model benchmarking meets the question of which measurements are actually trustworthy enough to inform decisions.