A high-signal researcher for the post-attention design space, especially if you care about the line of work trying to make linear-attention and delta-rule models actually competitive in real language-model systems.
Researcher Profile
Yoon Kim
Linear transformers via the delta rule
Associate professor at MIT working on natural language processing and machine learning
A useful researcher to study for tracing the line from classic neural NLP into today's efficient large-model work, with papers that span early sentence-classification models, character-aware language modeling, and current sequence-model efficiency research.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed: March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01. Neural NLP and sentence classification
02. Character-aware language models
03. Efficient training and deployment of large-scale models
04. Linear transformers via the delta rule (see the sketch after this list)
05. Parallelizing Linear Transformers with the Delta Rule over Sequence Length
06. DeltaNet
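For orientation, the delta rule behind items 04 through 06 can be stated compactly: the model keeps a fast-weight memory matrix S and, at each step, overwrites a beta-weighted fraction of the value currently stored under the incoming key, S_t = S_{t-1} + beta_t (v_t - S_{t-1} k_t) k_t^T, then reads with the query, o_t = S_t q_t. The sketch below is a minimal sequential reading of that recurrence under stated assumptions, not the paper's method; the cited work's contribution is a chunkwise-parallel reformulation of this recurrence over sequence length, and the function name, tensor shapes, and plain PyTorch loop here are illustrative choices.

    import torch

    def delta_rule_recurrence(q, k, v, beta):
        # Illustrative sketch, not the paper's implementation.
        # q, k, v: (T, d) queries/keys/values; beta: (T,) write strengths in [0, 1].
        # Keys are assumed L2-normalized, as in DeltaNet-style models.
        T, d = q.shape
        S = torch.zeros(d, d)                # fast-weight memory matrix
        outs = []
        for t in range(T):
            v_old = S @ k[t]                 # value currently stored under key k[t]
            # Delta rule: replace a beta-weighted part of the old value with the new one.
            S = S + beta[t] * torch.outer(v[t] - v_old, k[t])
            outs.append(S @ q[t])            # read the memory with the query
        return torch.stack(outs)

The erase term v_old is what separates this from the purely additive update of vanilla linear attention; with beta_t = 1 the write fully replaces the value stored under k_t, and with beta_t = 0 the step leaves the memory untouched.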
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Worth knowing because he is one of the recurring names in the recent MIT line of work on linear-attention alternatives, especially where hardware-efficient training meets practical long-context sequence modeling.
Worth surfacing because he is lead author of the Gated Slot Attention paper, one of the clearer attempts to push the RWKV-adjacent efficient-sequence line toward stronger memory and retrieval behavior rather than stopping at architecture novelty.
Useful because his work links two strands that usually get discussed separately: efficient sequence-model architectures on one side and multimodal alignment work on the other.
One of the more useful people to follow for the systems side of modern model building, especially where better kernels and sequence methods translate directly into frontier-model training and inference speed.
Worth following because he brings a real theory background into the model-systems layer, especially where structured linear algebra and sequence methods end up mattering for practical modern architectures.