Researcher Profile
Yonghao Zhuang
LLM-as-a-judge evaluation (MT-Bench)
Co-author, MT-Bench
Co-authored MT-Bench / LLM-as-a-judge: a widely used template for scalable multi-turn evaluation.
Topics
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last updated
March 20, 2026
Best First Clicks
Known For
The ideas, systems, and research directions that make this person worth knowing.
01. LLM-as-a-judge evaluation (MT-Bench)
02. Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena
03. Evaluation
04. LMSys
05. LLM-as-a-judge
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Co-authored MT-Bench / LLM-as-a-judge: a widely used template for scalable multi-turn evaluation.
Co-authored Chatbot Arena: a high-impact human-preference evaluation platform for LLMs.