Building machines that learn and think like humans
Recent successes in computer vision, natural language processing and other areas of artificial intelligence have been largely driven by methods for sophisticated pattern recognition -- most prominently deep neural networks. But human intelligence is more than just pattern recognition. We can see this at work even in young children: in some ways, a six-month-old baby is more intelligent than any AI system yet built. The heart of human common sense is our ability to model the world -- to explain and understand what we see, to imagine things we could see but haven't yet, and to plan actions to make these things real -- along with our ability to build new models rapidly and flexibly as we learn more about the world. I will talk about prospects for reverse-engineering these capacities at the core of human intelligence, and for using what we learn to advance AI. In particular, I will introduce basic concepts of probabilistic programs and program induction, which, together with tools from deep learning and modern video game engines, provide an approach to making machines learn and think in more human-like ways.
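To give a concrete flavor of the probabilistic-programming idea above: a probabilistic program expresses a generative world model as ordinary code with random choices, and inference runs that program "backwards" to explain observed data. The Python sketch below is illustrative only (it is not drawn from the talk; the rainy-day scenario and all names are invented for the example), using brute-force rejection sampling for inference:

    import random

    def flip(p):
        """Bernoulli draw -- the basic random primitive of a probabilistic program."""
        return random.random() < p

    def generative_model():
        """A world model written as a program: latent causes generate observations."""
        rain = flip(0.2)               # prior: it rains 20% of the time
        sprinkler = flip(0.1)          # prior: the sprinkler runs 10% of the time
        grass_wet = rain or sprinkler  # deterministic consequence of the causes
        return rain, grass_wet

    def infer_rain_given_wet_grass(n_samples=100_000):
        """Invert the model by rejection sampling: run it forward many times,
        keep only the runs consistent with the observation (wet grass), and
        estimate P(rain | grass wet) from the survivors."""
        kept = [rain for rain, wet in
                (generative_model() for _ in range(n_samples)) if wet]
        return sum(kept) / len(kept)

    print(f"P(rain | grass wet) ~ {infer_rain_given_wet_grass():.2f}")  # ~0.71

Real probabilistic-programming languages from this research tradition, such as Church and WebPPL, replace the brute-force rejection loop with far more efficient inference engines, but the core idea -- the model is a program, and inference conditions that program on observed data -- is the same.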
Bio:
Josh Tenenbaum is Professor of Computational Cognitive Science in the Department of Brain and Cognitive Sciences at MIT, and a member of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and Center for Brains, Minds and Machines (CBMM). His twin goals are to reverse-engineer distinctively human aspects of intelligence, and to use what we learn to build more human-like intelligence in machines. His scientific work currently focuses on two areas: describing the structure, content, and development of people's core intuitive theories, especially intuitive physics and intuitive psychology; and understanding how people are able to learn and generalize new concepts, models, theories, and tasks from very few examples -- often called "one-shot learning". On the AI side, he and his group have developed widely used models for nonlinear dimensionality reduction, probabilistic programming, and Bayesian approaches to unsupervised learning, program induction, and discovering the structural form of data. He and his students have received best paper or best student paper awards at many conferences (CogSci, CVPR, NIPS, UAI, RLDM, ICDL, SPP), and he is the recipient of the Howard Crosby Warren Medal from the Society of Experimental Psychologists, the Distinguished Scientific Award for Early Career Contribution to Psychology from the American Psychological Association, and the Troland Research Award from the National Academy of Sciences.