Karen Simonyan matters for the DeepMind large-model lineage because his work sits inside the sequence from compute-optimal scaling into Gemini, not only the headline launch moment.
Researcher Profile
Karen Simonyan
Compute-optimal scaling for LLM training
Google DeepMind researcher spanning vision and frontier multimodal systems
A foundational vision researcher (co-author of the VGG networks) who also figures in DeepMind's more recent language-model lineage, making him a useful bridge between classic deep-learning milestones and the Gemini era.
Organizations
Labs
DeepMind
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
VGG-style deep vision models
02
Frontier multimodal systems at DeepMind
03
Large-model scaling work
04
Compute-optimal scaling for LLM training ("Training Compute-Optimal Large Language Models", the Chinchilla paper)
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
One of the clearest people to follow for the sequence from retrieval-augmented language models to compute-optimal scaling and then into Gemini.
Worth tracking for the DeepMind thread that links large-model scaling research to the multimodal Gemini stack, rather than treating those as separate eras.
A useful profile for the core DeepMind contributor layer behind Chinchilla, Gopher, and Gemini rather than only the more public faces of those systems.
A useful profile for a DeepMind researcher who helped carry the lab's language-model program from scaling-law work into Gemini rather than appearing only on the final product layer.
A useful page for the DeepMind work that connected large-language-model scaling to the multimodal Gemini push, with a clearer safety-and-evaluation flavor than many purely scaling-focused pages.