A high-signal researcher on the latency and systems side of modern language models, especially where clever decoding techniques turn frontier models into usable products.
Researcher Profile
Yossi Matias
Faster LLM inference via speculative decoding
Vice President at Google and Head of Google Research
Important because his profile sits at the intersection of field-level research leadership and concrete systems work, such as speculative decoding, that directly changed how modern LLM inference gets deployed.
Organizations
Topics
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Leadership of Google Research
02
Speculative decoding for faster LLM inference ("Fast Inference from Transformers via Speculative Decoding")
03
Gemma-era foundation-model work
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
An important related profile because he is one of the named authors of speculative decoding, a technique that became part of the mainstream conversation about making large-model inference materially faster without changing model outputs.
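The claim above, that speculative decoding speeds up generation without changing outputs, is easiest to see in miniature. The sketch below is a hedged illustration, assuming two hypothetical callables, draft_model and target_model, that each take a token prefix and return a NumPy probability vector over the vocabulary; it is not the paper's implementation.

```python
# A minimal sketch of one speculative-decoding step, assuming hypothetical
# draft_model(prefix) and target_model(prefix) callables that return NumPy
# probability vectors over the vocabulary. Illustration only, not the
# authors' implementation.
import numpy as np


def speculative_decode_step(prefix, draft_model, target_model, k=4, rng=None):
    """Propose k tokens with a cheap draft model, then verify them against the
    target model so the accepted tokens follow the target distribution exactly."""
    rng = rng or np.random.default_rng()

    # 1. Draft phase: sample k candidate tokens autoregressively from the small model.
    ctx, drafted, draft_probs = list(prefix), [], []
    for _ in range(k):
        p = draft_model(ctx)                      # probability vector over the vocab
        tok = int(rng.choice(len(p), p=p))
        drafted.append(tok)
        draft_probs.append(p)
        ctx.append(tok)

    # 2. Verify phase: score every drafted position with the target model
    # (in a real system this is a single batched forward pass).
    target_probs = [target_model(list(prefix) + drafted[:i]) for i in range(k + 1)]

    # 3. Accept/reject: accept drafted token i with probability min(1, q(tok) / p(tok)).
    accepted = []
    for i, tok in enumerate(drafted):
        q, p = target_probs[i][tok], draft_probs[i][tok]
        if rng.random() < min(1.0, q / p):
            accepted.append(tok)
            continue
        # On rejection, resample from the renormalized residual max(0, q - p);
        # this correction keeps the overall output distribution identical to
        # sampling from the target model alone.
        residual = np.maximum(target_probs[i] - draft_probs[i], 0.0)
        residual /= residual.sum()
        accepted.append(int(rng.choice(len(residual), p=residual)))
        return accepted

    # All drafts accepted: take one extra token from the target model's last position.
    accepted.append(int(rng.choice(len(target_probs[k]), p=target_probs[k])))
    return accepted
```

The rejection-and-resample correction is what preserves the target model's output distribution; the speedup comes from verifying several drafted tokens with the large model at once instead of generating them one at a time.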
Co-authored DeepSpeed Inference: practical inference optimizations for serving large transformer models.