One of the central figures of the deep-learning revival, especially for work on distributed representations and the research culture that produced an entire generation of modern AI leaders.
Researcher Profile
Yoshua Bengio
Deep learning, representation learning, safety
Professor at Université de Montréal and founder of Mila
A foundational deep-learning researcher whose influence spans representation learning, institution building, and the long-running effort to connect frontier AI progress with public-interest concerns.
Organizations
Université de Montréal · Mila
About This Page
This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.
Last reviewed
March 18, 2026
Official and External Links
Known For
The ideas, systems, and research directions that make this person worth knowing.
01 Representation learning
02 Institution building around Mila and Montreal
03 Connecting frontier AI progress with safety and governance questions
04 Deep Learning (textbook, with Ian Goodfellow and Aaron Courville)
Start Here
Canonical papers, project pages, or repositories that anchor this profile.
Signature Works
Additional papers, projects, or repositories that help flesh out the profile.
Related Researchers
People worth exploring next because they share topics, labs, or source material with this profile.
A foundational researcher in generative modeling and adversarial robustness whose work changed both how models are trained and how their failure modes are studied.
A foundational deep-learning figure whose influence spans convolutional networks, representation learning, and long-running arguments about what capable AI systems should optimize for next.
One of the clearest interpreters of neural-network internals, especially in the line of work that turned interpretability into a concrete research agenda rather than a vague aspiration.
A foundational thinker in oversight, reward modeling, and delegation-style alignment ideas that influenced much of the modern post-training conversation.
One of the most useful people to study if you care about what deployed models get wrong under pressure, especially around extraction, adversarial behavior, and practical security failures.