Researcher Profile
Editor reviewed
George van den Driessche
Compute-optimal scaling for LLM training
Contributor to Google DeepMind's Gemini and large-language-model program
A useful page for the long-running DeepMind contributor layer behind large-model training, especially across the Gopher, Chinchilla, and Gemini sequence, and for the thread that links large-model scaling research to the multimodal Gemini stack rather than treating those as separate eras.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Gemini
02 Chinchilla and compute-optimal scaling (sketched below)
03 Gopher-era large-language-model work
04 Compute-optimal scaling for LLM training
05 Training Compute-Optimal Large Language Models
06 DeepMind
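Several of the entries above point at one result, so a minimal sketch may help orient. Assuming the two common approximations from Training Compute-Optimal Large Language Models (training FLOPs C ≈ 6ND for a model with N parameters trained on D tokens, and an optimal budget split of roughly 20 tokens per parameter), a compute budget pins down model and data size together. The function name and the default ratio below are illustrative assumptions, not the paper's fitted estimator.

```python
# A minimal sketch of Chinchilla-style compute-optimal sizing, assuming
# the usual approximations: training FLOPs C ~= 6 * N * D, and an optimal
# ratio of roughly 20 training tokens per parameter. The ratio is an
# assumption here, not the paper's fitted estimator.

def compute_optimal_size(flops_budget: float, tokens_per_param: float = 20.0):
    """Split a FLOPs budget into (params, tokens).

    Solves C = 6 * N * D with D = tokens_per_param * N, giving
    N = sqrt(C / (6 * tokens_per_param)) and D = tokens_per_param * N.
    """
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens


if __name__ == "__main__":
    # Chinchilla's published budget (~5.76e23 FLOPs) recovers roughly
    # 70B parameters and 1.4T tokens, matching the reported model.
    n, d = compute_optimal_size(5.76e23)
    print(f"params ~ {n:.3g}, tokens ~ {d:.3g}")
```

This ratio is the paper's headline result: for the same compute budget, the smaller Chinchilla (70B parameters, trained on far more tokens) outperformed the larger Gopher (280B parameters, trained on fewer tokens).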
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Training Compute-Optimal Large Language Models (Hoffmann et al., 2022), listed under Known For above.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A useful profile for the core DeepMind contributor layer behind Chinchilla, Gopher, and Gemini rather than only the more public faces of those systems.
A useful profile for the DeepMind researchers who helped carry the lab’s language-model program from scaling-law work into Gemini rather than appearing only on the final product layer.
A useful page for the DeepMind work that connected large-language-model scaling to the multimodal Gemini push, with a clearer safety-and-evaluation flavor than many purely scaling-focused pages.
A useful profile for the DeepMind researchers who sat inside the core language-model program as it moved from scaling-law analysis into the Gemini family.
A useful page for the less visible engineering and research work behind DeepMind’s modern language-model stack, especially across the jump from Gopher and Chinchilla into Gemini.