A Google TechTalk, presented by Yoshua Bengio, 2021/06/07

ABSTRACT: Consider the setting where we can iteratively query an experimental bench with candidate solutions to a problem (such as candidate molecules for use as drugs or new materials), and the outcomes of these experiments (measuring a scalar usefulness for each candidate) can be used as additional data when generating the next batch of candidate solutions. Imagine also that the space of solutions is combinatorial, i.e., very high-dimensional, and that the unknown usefulness function of the experimental bench has many modes. Many interesting machine learning research topics and questions intersect in this setting, involving reinforcement learning, bandits, exploration, generative models, active learning, epistemic uncertainty, Bayesian approaches, Markov chain Monte Carlo (MCMC), graph neural nets, and out-of-distribution generalization. I will briefly summarize recent results stimulated by these goals, in particular on how to estimate epistemic uncertainty in a calibrated way and how to train a generative policy that samples with probability proportional to the reward function rather than maximizing the expected return, thus producing much more diversity for exploring the space of molecules.

About the speaker: Yoshua Bengio is a Full Professor in the Department of Computer Science and Operations Research at Université de Montréal, as well as the Founder and Scientific Director of Mila and the Scientific Director of IVADO. Considered one of the world's leaders in artificial intelligence and deep learning, he is a co-recipient of the 2018 A.M. Turing Award, known as the Nobel Prize of computing, with Geoffrey Hinton and Yann LeCun. He is a Fellow of both the Royal Society of London and the Royal Society of Canada, an Officer of the Order of Canada, and a Canada CIFAR AI Chair.
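To illustrate the core idea of sampling with probability proportional to the reward (rather than maximizing it), here is a minimal toy sketch over a small discrete candidate set. The candidate names and reward values are invented for illustration; the talk's actual method learns a generative policy over a combinatorial space, which this enumeration-based sketch does not attempt to reproduce.

```python
import random
from collections import Counter

# Invented toy rewards for four hypothetical molecule candidates.
rewards = {"mol_A": 8.0, "mol_B": 4.0, "mol_C": 2.0, "mol_D": 2.0}

def sample_proportional(rewards, rng):
    """Draw one candidate x with P(x) = R(x) / sum_y R(y)."""
    total = sum(rewards.values())
    r = rng.random() * total
    acc = 0.0
    for x, rx in rewards.items():
        acc += rx
        if r <= acc:
            return x
    return x  # guard against floating-point edge cases

rng = random.Random(0)
counts = Counter(sample_proportional(rewards, rng) for _ in range(16000))

# A reward-maximizing (greedy) policy would return mol_A every time;
# reward-proportional sampling visits every mode, with the best mode
# appearing most often (here, roughly half the time: 8 / 16).
greedy = max(rewards, key=rewards.get)
```

The contrast is the point of the talk's training objective: the greedy policy collapses onto a single mode, while the proportional sampler keeps exploring all high-reward regions, which is what yields diverse candidate batches.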