Researcher Profile
Peng Zhou
RWKV and efficient sequence modeling
Researcher at LuxiTech working on the RWKV model family and linear-time sequence models
A strong RWKV page to have because he recurs across the original RWKV paper, Eagle/Finch, Gated Slot Attention, and RWKV-7, which makes him one of the clearer repeat contributors who stayed with this sequence-model line as it evolved rather than a one-off coauthor.
Organizations
LuxiTech
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Repeated contributions across the RWKV family
02
Linear-time sequence modeling and recurrent alternatives to Transformers (see the sketch after this list)
03
Practical model work spanning RWKV and Gated Slot Attention
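To make the "linear-time" item above concrete: RWKV-style models replace quadratic self-attention with a recurrent state that is updated once per token, so cost grows linearly with sequence length. What follows is a minimal illustrative sketch in Python of that general idea, not the actual RWKV WKV kernel; the function name, the scalar decay, and the toy shapes are all hypothetical choices for illustration.

import numpy as np

def linear_recurrence(keys, values, decay=0.9):
    """Toy linear-time sequence mixer: one state update per token,
    so cost is O(T * d) rather than attention's O(T^2 * d).

    Illustrative sketch of the idea behind RWKV-style recurrences,
    not the actual RWKV WKV kernel (which uses learned, per-channel
    decays and a normalizing term)."""
    T, d = values.shape
    state = np.zeros(d)              # running summary of the past
    outputs = np.empty_like(values)
    for t in range(T):
        # Blend the new token into the state with exponential decay;
        # no pairwise token-to-token comparison is ever formed.
        state = decay * state + keys[t] * values[t]
        outputs[t] = state           # output depends only on the state
    return outputs

# Usage: 16 tokens with 8 channels; runtime grows linearly with T.
rng = np.random.default_rng(0)
out = linear_recurrence(rng.normal(size=(16, 8)), rng.normal(size=(16, 8)))
print(out.shape)  # (16, 8)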
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
RWKV: Reinventing RNNs for the Transformer Era
RWKV (project)
Supporting Sources
Additional links that help verify and flesh out this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Important in the long tail because he is another contributor whose work spans both the RWKV sequence-model thread and the Polish PLLuM effort, which makes his page more informative than a generic single-paper profile.
A good page to surface because it connects two otherwise separate maps: the open RWKV sequence-model line and the newer Polish-language model ecosystem around PLLuM.
A strong page to keep because he links the early RWKV work to the later Wrocław-centered PLLuM effort, which makes him one of the clearer continuity threads between open sequence models and Polish-language LLM development.
Useful because he connects an earlier line of conversational-AI work at Nextremer with later authorship on both the original RWKV paper and Eagle/Finch, which makes this page more than a stray coauthor stub.
Worth tracking because he is one of the contributors who stays with the RWKV line from the original paper through Eagle/Finch, GoldFinch, and into RWKV-7, which is exactly the kind of repeated authorship signal that makes these long-tail pages valuable.