Researcher Profile
Úlfar Erlingsson
Training-data extraction and privacy risks
Co-author, Training Data Extraction
Co-authored Extracting Training Data from Large Language Models: a core paper on memorization and extraction risk.
Topics
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last updated
March 20, 2026
Known For
The ideas, systems, and research directions that make this person worth knowing.
01
Training-data extraction and privacy risks
02
Extracting Training Data from Large Language Models
03
Security
04
Privacy
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
Co-authored Extracting Training Data from Large Language Models: a core paper on memorization and extraction risk.
One of the most useful people to study if you care about what deployed models get wrong under pressure, especially around extraction, adversarial behavior, and practical security failures.
One of the clearest researchers to study for the GPT-3 era, especially around few-shot learning, scaling behavior, and what larger language models started making possible in practice.