
Researcher Profile

Editor reviewed

Phillip Wallis

Parameter-efficient finetuning

Google researcher working on low-rank adaptation and agent safety

A useful profile for tracing the path from parameter-efficient finetuning into newer agent-safety work, especially if you want researchers whose contributions span both model customization and the security of tool-using systems.

Organizations

Google · Google DeepMind

About This Page

This profile is meant to help you get oriented quickly: why this researcher matters, what to read first, and where to explore next.

Known For

The ideas, systems, and research directions that make this person worth knowing.

01 LoRA (Low-Rank Adaptation of Large Language Models)

02 Agent safety and indirect prompt injection defenses

03 Practical safety methods for tool-using models

04 Parameter-efficient finetuning
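The core idea behind LoRA, listed above, is to freeze the pretrained weight matrix and train only a low-rank update on top of it. A minimal sketch of that idea (illustrative shapes and values, not the authors' implementation; `d` and `r` are hypothetical dimensions):

```python
import numpy as np

# LoRA sketch: instead of updating a full d x d weight W, train two small
# factors A (r x d) and B (d x r) with r << d, and use W + B @ A at inference.
d, r = 64, 4                      # hypothetical model dim and LoRA rank
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))       # frozen pretrained weight
A = rng.normal(size=(r, d)) * 0.01
B = np.zeros((d, r))              # B starts at zero, so the update is a no-op at init

x = rng.normal(size=(1, d))
h = x @ (W + B @ A).T             # equals x @ W.T until B is trained

# Trainable parameters drop from d*d to 2*d*r:
print(d * d, 2 * d * r)           # 4096 vs 512
```

Because `B` is initialized to zero, the adapted model starts out exactly equal to the base model; only the `2*d*r` adapter parameters are updated during finetuning.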

Start Here

Canonical papers, project pages, or repositories that anchor this profile.

Supporting Sources

Additional links that help verify and flesh out this profile.

Related Researchers

People worth exploring next because they share topics, labs, or source material with this profile.

Shared canonical source

Edward J. Hu

Parameter-efficient finetuning

3 sources

A high-signal person to study if you care about the practical mechanics of adapting large models, especially where scaling theory turns into techniques that actually spread across the industry.

Shared canonical source

Zeyuan Allen-Zhu

Parameter-efficient finetuning

3 sources

One of the clearer people to follow if you want the bridge between deep-learning theory, practical adaptation methods like LoRA, and broader attempts to explain how large language models actually work.

Shared canonical source

Yuanzhi Li

Parameter-efficient finetuning

3 sources

A useful profile for the seam between deep-learning theory and practical large-model methods, especially if you want someone whose work spans convergence theory, small-language-model data design, and LoRA.