Semantic parsers map natural language sentences to rich, logical representations of their underlying meaning. However, to date, they have been developed primarily for natural language database query applications using learning algorithms that require carefully annotated training data. This project aims to learn robust semantic parsers for a number of new application domains, including robotics interfaces and other spoken dialog systems, with little or no manually annotated training data.
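To make the task concrete, the toy sketch below shows the kind of mapping a semantic parser performs, from a sentence to an executable logical form. The example utterances, predicate names, and the lookup-based "parser" are illustrative assumptions for exposition only, not this project's actual grammar or meaning representation.

```python
def parse(utterance: str) -> str:
    """Toy lookup-based 'parser': maps a sentence to a logical form."""
    # Hypothetical sentence -> logical-form pairs (illustrative only).
    toy_grammar = {
        "what states border texas":
            "answer(state(next_to(const(texas))))",
        "bring me the red mug":
            "bring(object(mug, color(red)), speaker)",
    }
    return toy_grammar.get(utterance.lower().rstrip("?.! "), "unknown")

print(parse("What states border Texas?"))
# -> answer(state(next_to(const(texas))))
```

A learned semantic parser replaces the hand-written lookup table with a model trained from data; the difficulty addressed here is that such training data is normally expensive, carefully annotated sentence/logical-form pairs.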
Such settings allow learning from interactions, where a well-defined goal lets the system engage in remediation when it is confused, such as asking for clarification, a rewording, or additional explanation. The user’s response to such requests provides a strong, if often indirect, signal that can be used to learn to avoid the original confusion in the future. In this project, we are developing ways to automatically learn semantic parsers from this type of interactive feedback. We believe that this style of learning will contribute to the long-term goal of building self-improving systems that continually learn from their mistakes, with little or no human intervention.
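The sketch below illustrates one way such an interactive-feedback loop might work: when the parser is unsure, the system asks the user to rephrase, and a confident parse of the rephrasing becomes an indirect label for the original utterance. The ToyParser, its confidence scores, and the threshold are illustrative assumptions, not the project's actual system.

```python
class ToyParser:
    def __init__(self):
        # Utterance -> logical-form pairs the parser already handles (toy data).
        self.known = {"go to the kitchen": "goto(room(kitchen))"}

    def parse(self, utterance):
        utterance = utterance.lower().strip("?.! ")
        if utterance in self.known:
            return self.known[utterance], 1.0   # confident parse
        return None, 0.0                        # no parse; low confidence

    def train(self, utterance, logical_form):
        # Retraining is simulated by memorizing the newly labeled pair.
        self.known[utterance.lower().strip("?.! ")] = logical_form


def interact(utterance, parser, ask_user, threshold=0.5):
    parse, conf = parser.parse(utterance)
    if conf >= threshold:
        return parse
    # Remediation: ask for a rewording instead of failing silently.
    rewording = ask_user("Sorry, I didn't understand. Could you rephrase?")
    parse, conf = parser.parse(rewording)
    if conf >= threshold:
        # Indirect supervision: pair the original wording with the parse
        # recovered from the successful rewording, then retrain.
        parser.train(utterance, parse)
        return parse
    return None


parser = ToyParser()
print(interact("head over to the kitchen", parser,
               ask_user=lambda prompt: "go to the kitchen"))
# -> goto(room(kitchen)); the original wording is now learned as well.
```

The key point of the sketch is that no one ever annotates "head over to the kitchen" by hand: the label is inferred from the user's cooperative response to the clarification request, which is the style of indirect supervision this project aims to exploit.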