Important because he helped make serious finetuning of large language models feasible on modest hardware, and his work keeps showing up where practical adaptation matters more than benchmark theater.
Researcher Profile
Artidoro Pagnoni
Efficient finetuning of quantized LLMs
Co-author, QLoRA
Co-authored QLoRA, which made high-quality finetuning of large language models feasible on modest hardware by training low-rank adapters on top of a frozen 4-bit-quantized base model.
Topics
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last updated
March 20, 2026
Best First Clicks
Official And External Links
Known For
The ideas, systems, and research directions that make this person worth knowing.
01  Efficient finetuning of quantized LLMs
02  QLoRA: Efficient Finetuning of Quantized LLMs
03  QLoRA
04  Finetuning
05  Quantization
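The first three entries above all point at the same technique, so a concrete sketch may help. The following is a minimal illustration of a QLoRA-style finetuning setup, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the checkpoint name and hyperparameters are placeholders chosen for illustration, not the configuration reported in the paper.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model with its weights frozen and quantized to 4-bit NF4,
# with double quantization and bfloat16 compute, as popularized by QLoRA.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder checkpoint, not from the paper
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# Attach small low-rank adapters; only these parameters receive gradients,
# so the memory cost of finetuning stays close to the 4-bit inference cost.
lora = LoraConfig(
    r=16,                                 # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative subset of projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)
model.print_trainable_parameters()  # typically well under 1% of all parameters

The resulting model can be handed to an ordinary training loop or trainer; the point of the recipe is that the large quantized base stays frozen while the small adapters are trained in higher precision.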
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A strong profile for the line from classic semantic parsing into modern tool use, retrieval, and language-model adaptation at scale.
A core person to know for making serious language-model finetuning and inference feasible on smaller hardware, especially through quantization and optimizer tooling that working builders actually use.
A high-signal person to study if you care about the practical mechanics of adapting large models, especially where scaling theory turns into techniques that actually spread across the industry.
Useful because his work spans the older machine-comprehension era at Microsoft and the later LoRA-style adaptation line that became core infrastructure for modern finetuning.
A useful profile for the path from parameter-efficient finetuning into newer agent-safety work, especially if you want people whose contributions span both model customization and tool-using systems security.