Learning Semantics Workshop at NIPS 2011

Invited Talk: Towards More Human-like Machine Learning of Word Meanings by Josh Tenenbaum

Josh Tenenbaum is a Professor in the Department of Brain and Cognitive Sciences at the Massachusetts Institute of Technology. He and his colleagues in the Computational Cognitive Science group study one of the most basic and distinctively human aspects of cognition: the ability to learn so much about the world, rapidly and flexibly.

Abstract: How can we build machines that learn the meanings of words more like the way that human children do? I will talk about several challenges and how we are beginning to address them using sophisticated probabilistic models:

- Children can learn words from minimal data, often just one or a few positive examples (one-shot learning).
- Children learn to learn: they acquire powerful inductive biases for new word meanings in the course of learning their first words.
- Children can learn words for abstract concepts, or types of concepts, that have little or no direct perceptual correlate.
- Children's language can be highly context-sensitive, with parameters of word meaning that must be computed anew for each context rather than simply stored.
- Children learn function words: words whose meanings are expressed purely in how they compose with the meanings of other words.
- Children learn whole systems of words together, in mutually constraining ways, such as color terms, number words, or spatial prepositions.
- Children learn word meanings that not only describe the world but can be used for reasoning, including causal and counterfactual reasoning.

Bayesian learning defined over appropriately structured representations (hierarchical probabilistic models, generative process models, and compositional probabilistic languages) provides a basis for beginning to address these challenges.
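To make the one-shot-learning point concrete, here is a minimal sketch of Bayesian word learning with the "size principle" likelihood from Tenenbaum's published framework (e.g., Xu & Tenenbaum's work on word learning as Bayesian inference). This is not the talk's actual model; the toy hypothesis space, object names, and uniform prior are illustrative assumptions.

```python
# Minimal sketch of Bayesian one-shot word learning (illustrative, not the
# talk's actual model). Hypotheses are candidate word extensions (sets of
# objects); the "size principle" likelihood favors the smallest hypothesis
# consistent with the observed positive examples.

from fractions import Fraction

# Hypothetical toy taxonomy: each hypothesis is a candidate meaning for a
# new word, given as the set of objects it would apply to.
hypotheses = {
    "dalmatians": {"dalmatian1", "dalmatian2"},
    "dogs":       {"dalmatian1", "dalmatian2", "terrier", "poodle"},
    "animals":    {"dalmatian1", "dalmatian2", "terrier", "poodle",
                   "cat", "pig", "cow", "horse"},
}

# Uniform prior over hypotheses; a real model would use a richer,
# possibly learned hierarchical prior ("learning to learn").
prior = {h: Fraction(1, len(hypotheses)) for h in hypotheses}

def posterior(examples):
    """P(h | examples) under the size principle: each positive example is
    assumed to be sampled uniformly from the word's true extension, so
    P(examples | h) = (1/|h|)^n if all examples fall in h, else 0."""
    n = len(examples)
    scores = {}
    for h, extension in hypotheses.items():
        if all(x in extension for x in examples):
            scores[h] = prior[h] * Fraction(1, len(extension)) ** n
        else:
            scores[h] = Fraction(0)
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

# One example leaves the meaning ambiguous across nested hypotheses;
# a few examples from the same narrow class sharply favor it.
print(posterior(["dalmatian1"]))
print(posterior(["dalmatian1", "dalmatian2", "dalmatian1"]))
```

With a single labeled object the posterior hedges across the nested candidate meanings, but after just a few examples drawn from the same narrow class, the size principle concentrates belief on the most specific consistent hypothesis, echoing the rapid generalization from minimal data described in the abstract.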