
Researcher Profile

Chunyuan Li

Visual instruction tuning (LLaVA)

Researcher at Microsoft

Co-authored Visual Instruction Tuning, the widely cited recipe behind LLaVA-style multimodal assistants.

Organizations

Microsoft

About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Last updated

March 20, 2026


Known For

The ideas, systems, and research directions that make this person worth knowing.

01 Visual instruction tuning (LLaVA)

02 Visual Instruction Tuning

03 LLaVA (GitHub)

04 LLaVA

05 Multimodal

06 Vision-language

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

Signature Works

Additional papers, projects, or repositories that help flesh out the profile.

Supporting Sources

Further links that corroborate and round out the details above.

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.

Shared topics

Connor Leahy

Open models, governance, communication

4 sources

Leahy is an important bridge figure between open-weight language-model communities and the modern alignment debate, and is worth reading if you want to understand how frontier capability, openness, and control arguments collide in practice.

Start Here: Conjecture