Probably the strongest page in this batch because his work spans the original RWKV paper, Eagle/Finch-adjacent work, and later efficient-language-model papers such as SpikeGPT and Gated Slot Attention, rather than ending at a single coauthor credit.
Researcher Profile
Qihang Zhao
RWKV and efficient sequence modeling
Researcher contributing to RWKV, Eagle/Finch, and RWKV-inspired efficient language models
Useful because his work connects the main RWKV sequence-model line with the RWKV-inspired SpikeGPT branch, making the page more informative than a single coauthor record.
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Original RWKV authorship
02 Eagle and Finch
03 SpikeGPT and efficient language modeling inspired by RWKV
04 RWKV and efficient sequence modeling
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
RWKV: Reinventing RNNs for the Transformer Era
RWKV (project)
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A useful RWKV page because he is present on the original paper, Eagle/Finch, and RWKV-7, making him part of the smaller set of contributors who stayed with the architecture as it evolved rather than only appearing at launch.
Important in the long tail because he is another contributor whose work spans both the RWKV sequence-model thread and the Polish PLLuM effort, which makes his page more informative than a generic single-paper profile.
A good page to surface because it connects two otherwise separate maps: the open RWKV sequence-model line and the newer Polish-language model ecosystem around PLLuM.
A strong page to keep because he links the early RWKV work to the later Wrocław-centered PLLuM effort, which makes him one of the clearer continuity threads between open sequence models and Polish-language LLM development.
Useful because he connects an earlier line of conversational-AI work at Nextremer with later authorship on both the original RWKV paper and Eagle/Finch, which makes this page more than a stray coauthor stub.