One of the central Google researchers to follow for the arc from large-scale language modeling to instruction tuning, multilingual systems, and practical model scaling.
Researcher Profile
Melvin Johnson
Researcher on machine translation and natural language processing at Google
A strong person to study for multilingual systems and instruction tuning, especially where translation, speech, and large-model post-training intersect.
Organizations
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Multilingual neural machine translation
02
Cross-lingual speech and language evaluation
03
Instruction tuning work around Gemini
04
Gemini (multimodal foundation models)
05
Gemini: A Family of Highly Capable Multimodal Models
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
One of the more useful people to study for the Gemini era, because his work spans both the text core of multimodal frontier models and the optimization techniques that make those systems cheaper and more stable to train.
A high-signal researcher for understanding the modern scaling playbook, especially around compute-optimal training, retrieval-augmented language models, and the text side of Gemini-era multimodal systems.
A good researcher to follow for the infrastructure side of frontier language models, especially mixture-of-experts scaling, instruction tuning, and the data systems that make very large models usable.
A high-signal reinforcement-learning researcher whose work sits on the path from AlphaGo-era planning systems to Gemini-era reasoning and post-training techniques.
A strong researcher to follow for grounded language and retrieval-heavy systems, especially if you want to understand how language models stay useful as the world changes around them.