Researcher Profile
Yu Zhang
Editor reviewed
Linear transformers via the delta rule
Researcher at Soochow University working on efficient linear-time sequence modeling
Worth surfacing because he is lead author of the Gated Slot Attention paper, one of the clearer attempts to push the RWKV-adjacent efficient-sequence line toward stronger memory and retrieval behavior rather than stopping at architectural novelty.
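To make the memory-and-retrieval point concrete, here is a minimal NumPy sketch of the kind of bounded-slot recurrence Gated Slot Attention is built around: a fixed number of key/value slots updated through data-dependent forget gates and read with softmax attention over the slots. The names, shapes, and per-slot gating here are illustrative simplifications, not the paper's implementation, which uses a chunkwise, hardware-efficient parallel form.

```python
import numpy as np

def gated_slot_attention_step(q_t, k_t, v_t, alpha_t, K_mem, V_mem):
    """One recurrent step of a simplified gated-slot-attention cell.

    K_mem, V_mem: (m, d) bounded slot memories; q_t, k_t, v_t: (d,) token
    projections; alpha_t: (m,) forget gates in (0, 1). All illustrative.
    """
    # Gated write: each slot decays by its gate and absorbs the new token.
    K_mem = alpha_t[:, None] * K_mem + (1 - alpha_t)[:, None] * k_t
    V_mem = alpha_t[:, None] * V_mem + (1 - alpha_t)[:, None] * v_t
    # Softmax attention over the m slots instead of the full history,
    # so per-token cost is constant in sequence length.
    scores = K_mem @ q_t
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    o_t = V_mem.T @ weights
    return o_t, K_mem, V_mem
```

Because the slot count m is fixed, the state carried between tokens stays constant-size, which is what lets this family keep linear-time decoding while still doing softmax-style retrieval.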
Organizations
Soochow University
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Gated Slot Attention
02
Linear-time sequence modeling
03
Bridging academic sequence-model research with practical FLA (flash-linear-attention) tooling
04
DeltaNet and the delta rule for linear transformers, from "Parallelizing Linear Transformers with the Delta Rule over Sequence Length" (see the sketch after this list)
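For item 04, a minimal NumPy sketch of the sequential delta-rule recurrence that DeltaNet builds on, assuming a single head and scalar write strengths beta_t; the paper's contribution is a chunkwise reformulation that parallelizes this loop over the sequence length, and the real models add feature maps and normalization this sketch omits.

```python
import numpy as np

def delta_rule_recurrence(q, k, v, beta):
    """Reference (sequential) delta-rule update for one linear-attention head.

    q, k, v: (T, d) queries, keys, values; beta: (T,) write strengths in (0, 1).
    S is a d x d associative memory mapping keys to values; the delta rule
    corrects what is stored under k[t] instead of only accumulating, which
    is what gives this family its stronger recall behavior.
    """
    T, d = q.shape
    S = np.zeros((d, d))
    out = np.zeros((T, d))
    for t in range(T):
        pred = S @ k[t]                                # value currently recalled for k[t]
        S = S + beta[t] * np.outer(v[t] - pred, k[t])  # delta-rule correction
        out[t] = S @ q[t]                              # read the memory with the query
    return out
```

The same update can be written as S_t = S_{t-1}(I - beta_t k_t k_t^T) + beta_t v_t k_t^T, the matrix form the sequence-length parallelization operates on.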
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A high-signal researcher for the post-attention design space, especially if you care about the line of work trying to make linear-attention and delta-rule models actually competitive in real language-model systems.
A good page to have because he is one of the recurring names in the recent MIT line of work on linear-attention alternatives, especially where hardware-efficient training meets practical long-context sequence modeling.
Useful because his work links two strands that usually get discussed separately: efficient sequence-model architectures on one side and multimodal alignment work on the other.
A useful researcher for tracing the line from classic neural NLP into today's efficient large-model work, with papers that span early sentence models, character-aware language modeling, and current sequence-model efficiency research.
A strong RWKV page to have because he recurs across the original RWKV paper, Eagle and Finch, and Gated Slot Attention, which makes him one of the clearer repeat contributors to this whole sequence-model line rather than a one-off coauthor.