A high-signal researcher to study if you care about the practical mechanics of adapting large models, especially the point where scaling theory turns into techniques that actually spread across the industry.
Researcher Profile
Lu Wang
Parameter-efficient finetuning
Co-author, LoRA
Co-authored LoRA, one of the core techniques behind modern finetuning pipelines.
Topics
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last updated
March 20, 2026
Best First Clicks
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Parameter-efficient finetuning
02 LoRA: Low-Rank Adaptation of Large Language Models
03 LoRA
04 Finetuning
05 Adaptation
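The topics above all circle one idea: LoRA adapts a frozen pretrained weight matrix by training only a small low-rank update. A minimal sketch of that idea follows; the shapes, rank, and scaling factor here are illustrative assumptions, not details from this profile or the paper's reference implementation.

```python
import numpy as np

# LoRA sketch: keep the pretrained weight W frozen and train only a
# low-rank update B @ A. With rank r << min(d_in, d_out), the trainable
# parameter count drops from d_out * d_in to r * (d_in + d_out).
rng = np.random.default_rng(0)
d_in, d_out, r = 64, 64, 4                  # illustrative sizes

W = rng.standard_normal((d_out, d_in))      # pretrained weight, frozen
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x, alpha=8.0):
    """Adapted forward pass: W x + (alpha / r) * B A x."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# Because B starts at zero, the adapted model initially matches the
# frozen model exactly; training then moves only A and B.
assert np.allclose(lora_forward(x), W @ x)
```

Zero-initializing B is what makes the adapted network start as an exact copy of the pretrained one, so finetuning begins from the base model's behavior rather than a perturbed version of it.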
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Useful because his work spans the older machine-comprehension era at Microsoft and the later LoRA-style adaptation line that became core infrastructure for modern finetuning.
A useful profile for the path from parameter-efficient finetuning into newer agent-safety work, especially if you want people whose contributions span both model customization and the security of tool-using systems.
One of the clearer people to follow if you want the bridge between deep-learning theory, practical adaptation methods like LoRA, and broader attempts to explain how large language models actually work.
A useful profile for the seam between deep-learning theory and practical large-model methods, especially if you want someone whose work spans convergence theory, small-language-model data design, and LoRA.