Researcher Profile

Mike Lewis

Researcher behind BART, retrieval-augmented generation, and long-context language-model work

A key researcher to study for the modern NLP stack: his work spans denoising pretraining (BART), retrieval-augmented generation, and later long-context inference techniques, rather than a single phase of the language-model pipeline.

Also Known As

Michael Lewis

Organizations

Meta

About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Known For

The ideas, systems, and research directions that make this person worth knowing.

01 BART and sequence-to-sequence denoising pretraining

02 Retrieval-augmented generation

03 Long-context language-model inference

04 Streaming and long-context stability via attention sinks ("Efficient Streaming Language Models with Attention Sinks"; sketched below)
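Attention sinks come up repeatedly in this profile, so a minimal sketch of the underlying cache policy may help orient a first read. This is an illustrative Python sketch of the idea in "Efficient Streaming Language Models with Attention Sinks" (StreamingLLM), not the authors' implementation; the class name, default sizes, and opaque `kv_entry` objects are assumptions made for this example.

```python
# Illustrative sketch of an attention-sink KV-cache policy in the spirit of
# "Efficient Streaming Language Models with Attention Sinks" (StreamingLLM).
# Not the authors' code: names, defaults, and kv_entry representation are
# assumptions for this example.

from collections import deque

class SinkKVCache:
    """Pin the first `num_sinks` tokens and keep a sliding window of the
    most recent `window` tokens; everything in between is evicted."""

    def __init__(self, num_sinks: int = 4, window: int = 1024):
        self.num_sinks = num_sinks
        self.sinks: list = []                       # earliest tokens, never evicted
        self.recent: deque = deque(maxlen=window)   # oldest entry drops automatically

    def append(self, kv_entry) -> None:
        if len(self.sinks) < self.num_sinks:
            self.sinks.append(kv_entry)             # still filling the sink slots
        else:
            self.recent.append(kv_entry)            # rolling window of recent context

    def visible(self) -> list:
        # KV entries the next attention step may attend to.
        return self.sinks + list(self.recent)
```

The detail that matters: naive sliding-window attention evicts the earliest tokens, which streaming language models learn to use as a "sink" for surplus attention mass, so quality degrades once those tokens are gone. Pinning a handful of initial tokens alongside the recent window keeps generation stable over effectively unbounded streams.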

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

Supporting Sources

Additional links that help verify and flesh out this profile.

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.

Song Han

Shared canonical source: streaming + long-context stability (attention sinks). 4 sources.

One of the clearest researchers to follow for efficient AI systems, especially the line of work that makes large models smaller, faster, and easier to deploy without giving up too much quality.

Start Here: Song Han at MIT