Irina Rish is an Associate Professor in the Computer Science and Operations Research Department at the Université de Montréal (UdeM) and a core faculty member of Mila - Quebec AI Institute. She holds the Canada Excellence Research Chair (CERC) in Autonomous AI and a Canadian Institute for Advanced Research (CIFAR) Canada AI Chair. She received her MSc and PhD in AI from the University of California, Irvine, and an MSc in Applied Mathematics from the Moscow Gubkin Institute. Dr. Rish's research focuses on machine learning, neural data analysis, and neuroscience-inspired AI. Before joining UdeM and Mila in 2019, she was a research scientist at the IBM T.J. Watson Research Center, where she worked on various projects at the intersection of neuroscience and AI and led the Neuro-AI challenge. She received multiple IBM awards, including the IBM Eminence & Excellence Award and IBM Outstanding Innovation Award (2018), the IBM Outstanding Technical Achievement Award (2017), and the IBM Research Accomplishment Award (2009). Dr. Rish holds 64 patents and has published over 80 research papers in peer-reviewed conferences and journals, several book chapters, three edited books, and a monograph on sparse modeling.
Conference: Remembering The Bitter Lesson: Is Scale "All You Need" for Achieving Artificial General Intelligence (AGI)?
Saturday, May 7, 2022, 10:00–10:45 — Amphi rouge
Modern AI systems have achieved impressive results in many specific domains, from image and speech recognition to natural language processing and mastering complex games such as chess and Go. However, they often remain inflexible, fragile, and narrow: unable to continually adapt to a wide range of changing environments and novel tasks without "catastrophically forgetting" what they have learned before, to infer the higher-order abstractions that allow systematic generalization to out-of-distribution data, or to achieve the level of robustness necessary to "survive" various perturbations in their environment — a natural property of most biological intelligent systems, and a necessary property for successfully deploying AI systems in real-life applications. In this talk, we will provide a brief overview of some approaches towards making AI more general and robust. Furthermore, we will briefly discuss the role of scale and summarize recent advances in training large-scale unsupervised models, such as GPT-3, CLIP, and DALL·E, which demonstrate remarkable improvements in generalization to novel data and tasks. We also emphasize the importance of developing an empirical science of AI behaviors, and focus on the rapidly expanding field of neural scaling laws, which allow us to better compare and extrapolate the behavior of various algorithms and models as data, model size, and computational resources increase.
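To make the idea of a neural scaling law concrete, the sketch below fits the power-law-plus-constant form commonly used in the scaling-law literature, L(N) = a·N^(−b) + c, to loss measurements and extrapolates to a larger model size. The data here is synthetic and the specific parameter values are illustrative assumptions, not results from the talk:

```python
import numpy as np
from scipy.optimize import curve_fit

# Power-law form commonly used in the scaling-law literature:
# test loss L(N) = a * N**(-b) + c, where N is model size (parameters),
# b is the scaling exponent, and c is the irreducible loss floor.
def power_law(n, a, b, c):
    return a * n ** (-b) + c

# Synthetic, illustrative measurements: loss at several model sizes
# (noise-free for clarity; real runs would be noisy).
sizes = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
losses = power_law(sizes, a=50.0, b=0.3, c=1.7)

# Fit the curve, then extrapolate to a model size not yet trained.
(a, b, c), _ = curve_fit(power_law, sizes, losses, p0=[10.0, 0.5, 1.0])
predicted = power_law(1e11, a, b, c)
print(f"fitted exponent b = {b:.3f}, predicted loss at 1e11 params = {predicted:.3f}")
```

Fitting such curves on a sweep of small models is what lets researchers compare algorithms and forecast the returns from further scaling before committing the compute for a large training run.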