Representation learning, inference, and reasoning
Fernando Pereira (Google AI)
Taskar Memorial Lecture 2020
Thursday, February 27, 2020, 3:30 pm
Abstract
Advances in deep learning have led to a golden age of increasingly rich models of language, with large experimental gains on practical language understanding tasks. However, these gains have come at the expense of earlier structured learning and inference methods, such as the IBM machine-translation models and their corresponding alignments, which provided crisp inferential links between inputs and outputs in language understanding tasks. As a result, the new methods lack accountability for their decisions: they are prone to ignoring or making up important information, as I will show with examples from question answering and summarization. I will conclude by suggesting research directions for making learning and accountable inference more compatible.
Bio
Fernando Pereira is VP and Engineering Fellow at Google, where he leads research and development in natural language understanding and machine learning. His previous positions include chair of the Computer and Information Science department at the University of Pennsylvania, head of the Machine Learning and Information Retrieval department at AT&T Labs, and research and management positions at SRI International. He received a Ph.D. in Artificial Intelligence from the University of Edinburgh in 1982, and has over 120 research publications on computational linguistics, machine learning, bioinformatics, speech recognition, and logic programming, as well as several patents. He was elected AAAI Fellow in 1991 for contributions to computational linguistics and logic programming, ACM Fellow in 2010 for contributions to machine learning models of natural language and biological sequences, and ACL Fellow for contributions to sequence modeling, finite-state methods, and dependency and deductive parsing; he was elected to the American Philosophical Society in 2019. He was president of the Association for Computational Linguistics in 1993.