
Researcher Profile


Weizhu Chen

Parameter-efficient finetuning

Microsoft researcher spanning machine reading comprehension, natural-language understanding (NLU), and low-rank adaptation

A good person to know for the long-running Microsoft line of work that runs from machine-comprehension systems such as ReasoNet and FusionNet into more recent adaptation work like LoRA and MTL-LoRA.
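
For orientation, here is a minimal, illustrative sketch of the low-rank adaptation idea behind LoRA: the pretrained weight matrix is frozen and a small trainable product of two low-rank matrices is added on top, so only a small fraction of parameters is finetuned. The class name, dimensions, and initialization details below are assumptions made for this sketch, not taken from the LoRA paper or its released code.

```python
# Minimal LoRA-style linear layer sketch (numpy only; names and shapes are
# illustrative assumptions, not the reference implementation).
import numpy as np

class LoRALinear:
    def __init__(self, d_in, d_out, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Pretrained weight: kept frozen during finetuning.
        self.W = rng.normal(size=(d_out, d_in)) / np.sqrt(d_in)
        # Low-rank adapters: the only trainable parameters (r << min(d_in, d_out)).
        self.A = rng.normal(size=(r, d_in)) * 0.01
        self.B = np.zeros((d_out, r))  # zero init, so training starts from W
        self.scale = alpha / r

    def forward(self, x):
        # Effective weight is W + scale * B @ A; only A and B would receive gradients.
        return x @ (self.W + self.scale * self.B @ self.A).T

layer = LoRALinear(d_in=512, d_out=512, r=8)
x = np.ones((2, 512))
print(layer.forward(x).shape)  # (2, 512)

# Trainable share: (r*d_in + d_out*r) adapter parameters vs d_out*d_in in W.
trainable = 8 * 512 + 512 * 8
total = 512 * 512
print(f"trainable share ~ {trainable / total:.1%}")  # ~ 3.1%
```

With rank 8 on a 512x512 weight, the adapters hold roughly 3% of the original parameter count, which is the practical point of parameter-efficient finetuning: the frozen base model is shared, and only the small adapter matrices are stored and trained per task.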

Organizations

Microsoft

About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Known For

The ideas, systems, and research directions that make this person worth knowing.

01 LoRA
02 FusionNet and ReasoNet
03 Long-running Microsoft NLU research
04 Parameter-efficient finetuning
05 LoRA: Low-Rank Adaptation of Large Language Models
06 Finetuning

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

Signature Works

Additional papers, projects, or repositories that help flesh out the profile.

Supporting Sources

Additional links that help verify and flesh out this profile.

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.

Edward J. Hu
Parameter-efficient finetuning · Shared canonical source · 3 sources

A high-signal person to study if you care about the practical mechanics of adapting large models, especially where scaling theory turns into techniques that actually spread across the industry.

Zeyuan Allen-Zhu
Parameter-efficient finetuning · Shared canonical source · 3 sources

One of the clearer people to follow if you want the bridge between deep-learning theory, practical adaptation methods like LoRA, and broader attempts to explain how large language models actually work.

Yuanzhi Li
Parameter-efficient finetuning · Shared canonical source · 3 sources

A useful profile for the seam between deep-learning theory and practical large-model methods, especially if you want someone whose work spans convergence theory, small-language-model data design, and LoRA.