Worth tracking if you care about alternatives to the standard transformer playbook, especially the line of work trying to keep strong language-model performance while making inference and memory use much cheaper.
Researcher Profile
Editor reviewed
Qinghua Zhou
RWKV and efficient sequence modeling
Research Associate at King’s College London working on trustworthy and computational AI
Worth keeping because it turns an otherwise ghostlike RWKV byline into a real researcher page with a visible current program in trustworthy AI, neural-network theory, and high-dimensional learning.
Organizations
King’s College London
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Original RWKV authorship
02 Trustworthy and computational AI
03 Theory-oriented work on learning in high dimensions
04 RWKV and efficient sequence modeling
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
RWKV: Reinventing RNNs for the Transformer Era
RWKV (project)
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A distinctive page because his work bridges open-sequence-model experimentation with applied machine learning for molecules, proteins, and structural biology, and he appears on multiple RWKV-family papers, including the hybrid GoldFinch branch, rather than only the first release.
A strong open-model and data-centric page because his work sits close to the infrastructure that made OLMo and Dolma useful to the broader research community, rather than just another benchmark-driven model release.
Co-authored RWKV: Reinventing RNNs for the Transformer Era.
Useful because it turns an otherwise thin RWKV byline into a real systems profile: after the original paper, his public work tracks toward large-scale pretraining infrastructure, pipeline parallelism, and systems support for frontier-scale models.
Co-authored RWKV: Reinventing RNNs for the Transformer Era.