A foundational transformer researcher whose work still matters because it connects the original architecture shift to later efforts on efficiency, scaling, and sequence modeling infrastructure.
Researcher Profile
Jakob Uszkoreit
Transformer co-author and early Google Research sequence-model researcher
A high-signal person to follow for the research arc from early transformer work into later sequence, vision, and multimodal model design.
Organizations
Google Research, Inceptive (co-founder)
Topics
Transformers, Foundational, Computer Vision
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Attention Is All You Need
02 Image Transformer and Music Transformer
03 Sequence-model architecture work beyond the original transformer paper
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Supporting Sources
Additional links that help verify the details of this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A foundational figure in modern sequence modeling whose work on the Transformer changed the technical direction of language and multimodal systems.
One of the most important architecture-level thinkers in modern AI, with influence spanning Transformers, efficient scaling, and mixture-of-experts systems.
A foundational transformer co-author who is now worth following for a very different reason: he is one of the few people trying to build a serious frontier lab around alternatives to the default scaling path.
A foundational figure in the transformer era who also matters because he helped carry that research lineage into one of the most important enterprise language-model companies.
One of the clearest people to study if you want the throughline from early neural sequence models to transformers, efficient long-context variants, and modern reasoning systems.