Useful because his work spans the older machine-comprehension era at Microsoft and the later LoRA-style adaptation line that became core infrastructure for modern finetuning.
Researcher Profile
Weizhu Chen
Parameter-efficient finetuning
Microsoft researcher across reading systems, NLU, and low-rank adaptation
A good person to know for the longer Microsoft line that runs from machine-comprehension systems into more recent adaptation work like LoRA and MTL-LoRA.
Organizations
Topics
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 LoRA
02 FusionNet and ReasoNet
03 Long-running Microsoft NLU research
04 Parameter-efficient finetuning
05 LoRA: Low-Rank Adaptation of Large Language Models
06 Finetuning
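As a quick orientation to the signature technique listed above: LoRA freezes the pretrained weight matrix and learns only a low-rank update, which is why it became core infrastructure for cheap finetuning. The sketch below is an illustrative numpy toy (dimensions, init scale, and the `lora_forward` helper are made up for the example), not the authors' implementation.

```python
import numpy as np

# Illustrative sketch of LoRA's core idea: keep the pretrained weight W
# frozen and learn a low-rank update delta_W = B @ A, so the effective
# weight is W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 64, 64, 4, 8     # r << d: far fewer trainable params

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable, small random init
B = np.zeros((d_out, r))                 # trainable, zero init => update is 0 at start

def lora_forward(x):
    """Forward pass: frozen path plus scaled low-rank update."""
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.normal(size=(2, d_in))
y = lora_forward(x)
print(y.shape)                           # (2, 64)

# Trainable parameters: r * (d_in + d_out) vs d_in * d_out for full finetuning.
print(A.size + B.size, W.size)           # 512 vs 4096
```

Because `B` starts at zero, the adapted model exactly matches the frozen model at initialization; only the small `A`/`B` matrices are updated during training.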
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A high-signal person to study if you care about the practical mechanics of adapting large models, especially where scaling theory turns into techniques that actually spread across the industry.
A useful profile for the path from parameter-efficient finetuning into newer agent-safety work, especially if you want people whose contributions span both model customization and tool-using systems security.
One of the clearer people to follow if you want the bridge between deep-learning theory, practical adaptation methods like LoRA, and broader attempts to explain how large language models actually work.
A useful profile for the seam between deep-learning theory and practical large-model methods, especially if you want someone whose work spans convergence theory, small-language-model data design, and LoRA.
Co-authored LoRA: one of the core techniques behind modern finetuning pipelines.