Researcher Profile
Editor reviewed
Ari Holtzman
Assistant professor at the University of Chicago working on generation and evaluation
Important because he helped define how people think about language-model decoding quality: nucleus sampling and the degeneration analysis behind it keep showing up wherever practical generation behavior matters more than benchmark theater.
Organizations
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 20, 2026
Official And External Links
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Nucleus sampling (see the sampling sketch after this list)
02
Generation quality and degeneration analysis ("The Curious Case of Neural Text Degeneration")
03
QLoRA: efficient finetuning of quantized LLMs (see the finetuning sketch after this list)
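For orientation, here is a minimal NumPy sketch of nucleus (top-p) sampling, following the idea in "The Curious Case of Neural Text Degeneration": keep the smallest set of top tokens whose cumulative probability reaches p, renormalize, and sample from that set. The function name and example logits are illustrative, not taken from any particular codebase.

import numpy as np

def nucleus_sample(logits, p=0.9, rng=None):
    # Sample a token id from the smallest set of tokens whose
    # cumulative probability reaches p (the "nucleus").
    rng = rng or np.random.default_rng()
    logits = np.asarray(logits, dtype=np.float64)
    # Softmax over the vocabulary.
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    # Sort token ids by probability, highest first.
    order = np.argsort(probs)[::-1]
    # Smallest prefix of the sorted list whose mass reaches p.
    cutoff = np.searchsorted(np.cumsum(probs[order]), p) + 1
    nucleus = order[:cutoff]
    # Renormalize within the nucleus and sample.
    nucleus_probs = probs[nucleus] / probs[nucleus].sum()
    return rng.choice(nucleus, p=nucleus_probs)

# Example: with p=0.9, low-probability tail tokens are never sampled.
token = nucleus_sample([2.0, 1.0, 0.5, -1.0], p=0.9)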
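And a sketch of the QLoRA recipe itself, assuming the Hugging Face transformers and peft libraries: load the base model with 4-bit NF4 quantization plus double quantization, then train small low-rank adapters on top of the frozen quantized weights. The checkpoint name, target modules, and hyperparameters below are placeholders, not the paper's exact settings.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization with double quantization, as in QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "base-model-name",  # placeholder: any causal LM checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)

# Freeze the 4-bit base model; train only the low-rank adapters.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # module names vary by architecture
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()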
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A core person to know for making serious language-model finetuning and inference feasible on smaller hardware, especially through quantization and optimizer tooling that working builders actually use.
Co-authored QLoRA: made high-quality finetuning feasible on modest hardware.
A good person to follow if you care about what deployment-minded safety work looks like inside a frontier lab, especially around moderation, image systems, and system-card style evaluation.
A useful person to follow if you want to understand the engineering side of frontier language models, especially the line running from Codex and GPT-style systems into later open-weight and product-facing deployments.
A strong person to follow if you care about open-weight language models and retrieval-heavy NLP systems, especially the line from RoBERTa and RAG into LLaMA-era model development.
A strong profile for the line from classic semantic parsing into modern tool use, retrieval, and language-model adaptation at scale.