Former Group Members

  • Yoav Artzi (PhD, 2015; Situated Understanding and Learning of Natural Language; Assistant Professor, Cornell)
  • Terra Blevins (PhD, 2024; Multilingual Language Models: Analysis and Algorithms; Assistant Professor, Northeastern University)
  • Danqi Chen (Postdoc, 2019; Assistant Professor, Princeton University)
  • Eunsol Choi (PhD, 2019; Learning to Understand Entities in Text; Co-advised with Yejin Choi; Assistant Professor, University of Texas at Austin)
  • Chris Clark (PhD, 2020; Training Models to Ignore Dataset Bias; Research Scientist, Allen Institute for Artificial Intelligence)
  • Tim Dettmers (PhD, 2024; Accessible Foundation Models: Systems, Algorithms, and Science; Assistant Professor, Carnegie Mellon University)
  • Nicholas FitzGerald (PhD, 2018; Neural Models for Large-Scale Semantic Role Labelling; Google Research, NYC)
  • Dan Garrette (Postdoc, 2016; Google Research, NYC)
  • Suchin Gururangan (PhD, 2024; Data-Centric Methods for Decentralizing Large Language Models; Co-advised with Noah Smith; Meta)
  • Luheng He (PhD, 2018; Annotating and Modeling Shallow Semantics Directly from Text; Google Research, Mountain View)
  • Raphael Hoffmann (PhD, 2012; Interactive Learning of Relation Extractors with Weak Supervision; Co-advised with Dan Weld; Stealth startup)
  • Ari Holtzman (PhD, 2023; Interpretation Errors: Extracting Functionality from Generative Models of Language by Understanding Them Better; Assistant Professor, University of Chicago)
  • Srinivasan Iyer (PhD, 2019; Learning to Map Natural Language to General Purpose Source Code; Co-advised with Alvin Cheung; Research Scientist, Facebook)
  • Mohit Iyyer (AI2 Postdoc, 2018; Assistant Professor, UMass Amherst)
  • Mandar Joshi (PhD, 2021; How to Train Your Self-Supervised NLP Model: Investigating Pre-training Objectives, Data, and Scale; Co-advised with Dan Weld)
  • Chloé Kiddon (PhD, 2016; Learning to Interpret and Generate Instructional Recipes; Co-advised with Yejin Choi; Google, Seattle)
  • Yannis Konstas (Postdoc, 2017; Lecturer, Heriot-Watt University)
  • Tom Kwiatkowski (Postdoc, 2014; Google Research, NYC)
  • Kenton Lee (PhD, 2017; Span-based Neural Structured Prediction; Google Research, Seattle)
  • Omer Levy (Postdoc, 2018; Facebook AI Research; Lecturer, Tel Aviv University)
  • Mike Lewis (Postdoc, 2016; Facebook AI Research)
  • Victoria Lin (PhD, 2024; Towards Large Language Models for Everyone: Instruction Following, Knowledge Retrieval and Multilingualism; Research Scientist, Meta)
  • Cynthia Matuszek (PhD, 2014; Talking to Robots: Learning to Ground Human Language in Perception and Execution; Co-advised with Dieter Fox; Assistant Professor, UMBC)
  • Julian Michael (PhD, 2022; Building Blocks for Data-Driven Theories of Language Understanding; Postdoc, NYU)
  • Sewon Min (PhD, 2024; Rethinking Data Use in Large Language Models; Co-advised with Hannaneh Hajishirzi; Assistant Professor, UC Berkeley)
  • Bhargavi Paranjape (PhD, 2024; Towards Reliability and Interactive Debugging for Large Language Models; Co-advised with Hannaneh Hajishirzi; Research Scientist, Meta)
  • Gabriel Schubiner (MS, 2016; Distance Metrics for Learning Dialogue Systems; Google, Seattle)
  • Sameer Singh (Postdoc, 2016; Assistant Professor, UC Irvine)
  • Gabriel Stanovsky (Postdoc, 2020; Lecturer, Hebrew University)
  • Jesse Thomason (Postdoc, 2020; Assistant Professor, University of Southern California)
  • Adrienne Wang (PhD, 2015; CCG Grammar Induction for Accurate and Efficient Parsing)
  • Mark Yatskar (PhD, 2017; Natural Language as a Scaffold for Visual Recognition; Co-advised with Ali Farhadi; Assistant Professor, University of Pennsylvania)
  • Victor Zhong (PhD, 2023; Reading to Learn; Assistant Professor, University of Waterloo)