Controlling Language Models
Lisa Li (Stanford University)
Colloquium
Tuesday, March 4, 2025, 3:30 pm
Gates Center (CSE2), G20 | Amazon Auditorium
Abstract
Controlling language models is key to unlocking their full potential and making them useful for downstream tasks. Successfully deploying these models often requires both task-specific customization and rigorous auditing of their behavior. In this talk, I will begin by introducing a customization method called Prefix-Tuning, which adapts language models by updating only 0.1% of their parameters. Next, I will address the need for robust auditing by presenting a Frank-Wolfe-inspired algorithm for red-teaming language models, which provides a principled framework for discovering diverse failure modes. Finally, I will rethink the root cause of these control challenges and propose a new generative model for text, called Diffusion-LM, which is controllable by design.
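To make the first method concrete: the core idea of Prefix-Tuning is to freeze the pretrained language model entirely and train only a small set of continuous prefix vectors that every attention layer attends to, which is how the parameter count stays near 0.1%. Below is a minimal sketch of this idea in PyTorch using Hugging Face's GPT-2. It is illustrative only, not the reference implementation from the talk; the names (prefix_len, forward_with_prefix) and the prefix shapes are assumptions, and the tuple-based past_key_values format shown is GPT-2's legacy cache convention, which newer transformers versions may deprecate.

```python
# A minimal sketch of the prefix-tuning idea (assumed details, not the
# paper's reference code): freeze GPT-2, train only per-layer prefix
# key/value vectors that are injected through past_key_values.
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel, GPT2Tokenizer

model = GPT2LMHeadModel.from_pretrained("gpt2")
for p in model.parameters():
    p.requires_grad = False  # the pretrained LM stays frozen

cfg = model.config
prefix_len = 10  # number of trainable prefix positions (illustrative choice)
head_dim = cfg.n_embd // cfg.n_head

# Trainable key/value vectors for each layer, head, and prefix position.
# Shapes follow GPT-2's legacy past_key_values convention:
# one (batch, n_head, prefix_len, head_dim) key and value pair per layer.
prefix_keys = nn.Parameter(
    0.02 * torch.randn(cfg.n_layer, cfg.n_head, prefix_len, head_dim))
prefix_values = nn.Parameter(
    0.02 * torch.randn(cfg.n_layer, cfg.n_head, prefix_len, head_dim))

def forward_with_prefix(input_ids, labels):
    batch = input_ids.size(0)
    past = tuple(
        (prefix_keys[i].unsqueeze(0).expand(batch, -1, -1, -1),
         prefix_values[i].unsqueeze(0).expand(batch, -1, -1, -1))
        for i in range(cfg.n_layer)
    )
    # The attention mask must cover the prefix positions plus real tokens.
    attn_mask = torch.ones(batch, prefix_len + input_ids.size(1))
    return model(input_ids=input_ids, past_key_values=past,
                 attention_mask=attn_mask, labels=labels)

# Only the prefix parameters are optimized.
optimizer = torch.optim.AdamW([prefix_keys, prefix_values], lr=5e-4)
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
batch = tokenizer("Translate the table to text:", return_tensors="pt")
loss = forward_with_prefix(batch["input_ids"], labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```

In this sketch the optimizer never sees the frozen GPT-2 weights, so only the two prefix tensors are updated, matching the abstract's point that customization can happen through a tiny fraction of the parameters.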
Bio
Xiang Lisa Li is a PhD candidate at Stanford University, where she is advised by Percy Liang and Tatsunori Hashimoto. Her research focuses on developing methods to make language models more capable and controllable. Lisa is supported by a Two Sigma PhD Fellowship and a Stanford Graduate Fellowship, and she is the recipient of an EMNLP Best Paper Award.
This talk will be streamed live on our YouTube channel; the link will be available there one hour before the event.