Researcher Profile

Yelong Shen

Parameter-efficient finetuning

Microsoft researcher spanning machine reading comprehension and low-rank model adaptation

Worth knowing because his work spans Microsoft's earlier machine-comprehension era and the later LoRA line of low-rank adaptation, which became core infrastructure for modern finetuning.

Organizations

Microsoft

About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Known For

The ideas, systems, and research directions that make this person worth knowing.

01

LoRA and low-rank adaptation

02

Machine-comprehension systems at Microsoft

03

Multi-task adaptation methods for large models

04

Parameter-efficient finetuning

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

LoRA: Low-Rank Adaptation of Large Language Models
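The core idea of that paper is compact enough to sketch. Below is a minimal, illustrative PyTorch version, not any official implementation; class and parameter names are hypothetical. The pretrained weight W is frozen, and a trainable rank-r product B @ A, scaled by alpha/r with B initialized to zero (so the adapter starts as a no-op, as in the paper), is added alongside it. Finetuning then updates only the two small factors.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative linear layer with a LoRA-style low-rank update."""

    def __init__(self, in_features: int, out_features: int,
                 r: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight (random here purely for illustration).
        self.weight = nn.Parameter(
            torch.randn(out_features, in_features), requires_grad=False)
        # Low-rank factors: A gets a small random init, B starts at zero,
        # so B @ A contributes nothing before training.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        base = x @ self.weight.T                       # frozen path
        update = x @ self.lora_A.T @ self.lora_B.T     # rank-r path
        return base + self.scaling * update

layer = LoRALinear(768, 768, r=8)
y = layer(torch.randn(4, 768))  # only lora_A and lora_B receive gradients
```

Because the base weight never changes, the trained factors can later be merged (W + scaling * B @ A) for inference at zero extra cost, which is a large part of why the method spread.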

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.

Shared canonical source

Edward J. Hu

Parameter-efficient finetuning

3 sources

A high-signal researcher to follow if you care about the practical mechanics of adapting large models, especially where scaling theory turns into techniques that actually spread across the industry.

Shared canonical source

Zeyuan Allen-Zhu

Parameter-efficient finetuning

3 sources

One of the clearest people to follow if you want the bridge between deep-learning theory, practical adaptation methods like LoRA, and broader attempts to explain how large language models actually work.

Shared canonical source

Yuanzhi Li

Parameter-efficient finetuning

3 sources

A useful profile at the seam between deep-learning theory and practical large-model methods, especially if you want someone whose work spans convergence theory, small-language-model data design, and LoRA.