
Researcher Profile


Naman Goyal

Open-weight foundation models (LLaMA)

Founding team member at Thinking Machines Lab

A strong person to follow if you care about open-weight language models and retrieval-heavy NLP systems, especially the line from RoBERTa and RAG into LLaMA-era model development.

Organizations

Thinking Machines Lab


About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Known For

The ideas, systems, and research directions that make this person worth knowing.

01

Open-weight foundation models

02

Retrieval-augmented generation

03

Large-scale language-model pretraining

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

Signature Works

Additional papers, projects, or repositories that help flesh out the profile.

Supporting Sources

Additional links that help verify the details in this profile.

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.

Shared canonical source

Hugo Touvron

Open-weight foundation models (LLaMA)

3 sources

One of the cleaner bridge figures between the vision-transformer era and the open-weight LLaMA era: his public paper trail runs from influential self-supervised vision work into the first LLaMA release, Llama 2, and Code Llama.

Shared canonical source

Thibaut Lavril

Open-weight foundation models (LLaMA)

4 sources

Worth following because he sits on both sides of a major shift in open models: he appears on Meta's Llama 2 paper and then on Mistral 7B and Mixtral, which makes him part of the early handoff from the first LLaMA wave into Mistral's open-weight model line.

Start Here

Mistral AI