Researcher Profile
Luke Zettlemoyer
Meta AI research manager, FAIR Seattle site lead, and University of Washington professor
A strong profile for tracing the line from classic semantic parsing into modern tool use, retrieval, and language-model adaptation at scale.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 20, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Toolformer
02
Retrieval and entity-linking systems
03
Bridging classic NLP structure with modern large-model practice
04
QLoRA: Efficient Finetuning of Quantized LLMs (see the sketch after this list)
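The QLoRA item above is the most hands-on entry in this list: the recipe freezes a 4-bit quantized base model and trains only small low-rank adapters on top of it. Below is a minimal sketch using the Hugging Face transformers, peft, and bitsandbytes libraries; the checkpoint name, target modules, and hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# A QLoRA-style setup sketched with Hugging Face transformers, peft, and
# bitsandbytes. The checkpoint name, target modules, and hyperparameters
# below are illustrative assumptions, not the paper's exact configuration.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with 4-bit NormalFloat (NF4) storage plus double
# quantization, the weight format introduced in the QLoRA paper.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # assumption: any causal-LM checkpoint works here
    quantization_config=bnb_config,
    device_map="auto",
)
model = prepare_model_for_kbit_training(model)

# Freeze the quantized base weights and attach small trainable low-rank
# adapters; only these adapter matrices receive gradients during finetuning.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative subset of attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```

Because gradients flow only through the adapters while the base weights stay in 4-bit storage, the memory footprint is dominated by the quantized model itself, which is what makes finetuning on a single consumer GPU plausible.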
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A core person to know for making serious language-model finetuning and inference feasible on smaller hardware, especially through quantization and optimizer tooling that working builders actually use.
Co-authored QLoRA, which made high-quality finetuning feasible on modest hardware.
A useful person to follow for the evaluation layer of open models, especially where benchmark infrastructure and RLHF tooling become reusable community assets rather than one-off lab code.
Worth knowing because his work links earlier dense-retrieval research to later MRKL and Jamba systems, which makes his page a good bridge between classic NLP retrieval and newer hybrid LLM stacks.
An especially valuable page for understanding how AI systems get judged in practice, because it puts human evaluation and rubric design at the center rather than treating them as an afterthought to model building.
Important because he helped define how people think about language-model decoding quality, and his work keeps showing up where practical generation behavior matters more than benchmark theater.