Lifelong Learning: Towards Broad and Robust AI

Jul 18, 2020

About

Modern AI systems have achieved impressive results in many specific domains, from image and speech recognition to natural language processing and mastering complex games such as chess and Go. However, they remain largely inflexible, fragile, and narrow: unlike most biological intelligent systems, they cannot continually adapt to a wide range of changing environments and novel tasks without "catastrophically forgetting" what they have learned before, infer higher-order abstractions that allow systematic generalization to out-of-distribution data, or achieve the robustness necessary to "survive" various perturbations in their environment.

In this talk, we provide a brief overview of advances in the field of continual learning (CL) [1], which aims to push AI from "narrow" to "broad". These advances range from unsupervised, adaptive ("neurogenetic") architectural adaptations [2] to a recent general supervised CL framework for quickly solving new, out-of-distribution tasks while quickly remembering previous ones; this framework unifies continual, meta-, meta-continual, and continual-meta learning, and introduces Continual-MAML, an online extension of the popular MAML algorithm [3]. (Minimal code sketches of replay-based continual learning and of the MAML inner/outer loop appear after the references below.)

We then turn to the most challenging setting, continual reinforcement learning (RL), characterized by dynamic, non-stationary environments, and discuss open problems and challenges in bridging the gap between the current state of continual RL and better incremental reinforcement learners that can function in increasingly realistic, human-like learning environments [4].

Next, we address the problem of robust representation learning, i.e., extracting features invariant to various stochastic and/or adversarial perturbations of the environment. This is a common goal across continual, meta-, and transfer learning, as well as adversarial robustness, out-of-distribution generalization, self-supervised learning, and related subfields. As an example, our recent Adversarial Feature Desensitization (AFD) approach [5] trains a feature extractor network to generate representations that are both predictive and robust to input perturbations (e.g., adversarial attacks), and demonstrates a significant improvement over the state of the art despite its relative simplicity: feature robustness is enforced via an additional adversarial decoder with a GAN-like objective that attempts to discriminate between original and perturbed inputs (an illustrative sketch of this objective also appears below).

Finally, we conclude the talk with a discussion of several directions for future work, including drawing inspiration (e.g., inductive biases) from neuroscience [6], in order to develop truly broad and robust lifelong-learning AI systems.

Related work:

[1] de Lange et al. (2019). A Continual Learning Survey: Defying Forgetting in Classification Tasks. https://arxiv.org/abs/1909.08383
[2] Garg et al. (2017). Neurogenesis-Inspired Dictionary Learning: Online Model Adaptation in a Changing World. IJCAI 2017. https://arxiv.org/abs/1701.06106
[3] Caccia et al. (2020). Online Fast Adaptation and Knowledge Accumulation: A New Approach to Continual Learning. Submitted. https://arxiv.org/abs/2003.05856
[4] Khetarpal et al. (2020). Towards Continual Reinforcement Learning: A Review and Perspectives. In preparation.
[5] Bashivan et al. (2020). Adversarial Feature Desensitization. Submitted. https://arxiv.org/abs/2006.04621
[6] Sinz et al. (2019). Engineering a Less Artificial Intelligence. Neuron. https://xaqlab.com/wp-content/uploads/2019/09/LessArtificialIntelligence.pdf
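To make the forgetting problem concrete, here is a minimal sketch of one common mitigation, experience replay, where a bounded buffer of past examples is interleaved with each new batch. The model, buffer size, and reservoir insertion are illustrative assumptions, not the specific methods surveyed in [1].

```python
# Replay-based continual learning, a minimal sketch (illustrative assumptions
# throughout: tiny MLP, reservoir buffer, plain SGD).
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.01)

BUFFER_SIZE = 1000
buffer, seen = [], 0  # bounded reservoir of past (x, y) examples

def reservoir_add(xi, yi):
    """Keep the buffer a uniform sample of the whole stream seen so far."""
    global seen
    seen += 1
    if len(buffer) < BUFFER_SIZE:
        buffer.append((xi, yi))
    else:
        j = random.randrange(seen)
        if j < BUFFER_SIZE:
            buffer[j] = (xi, yi)

def train_step(x, y):
    """One online step on a new batch, mixed with replayed past examples."""
    xs, ys = x, y
    if buffer:
        bx, by = zip(*random.sample(buffer, min(len(buffer), len(x))))
        xs = torch.cat([x, torch.stack(bx)])
        ys = torch.cat([y, torch.stack(by)])
    loss = F.cross_entropy(model(xs), ys)
    opt.zero_grad()
    loss.backward()
    opt.step()
    for xi, yi in zip(x, y):  # only the new examples enter the buffer
        reservoir_add(xi.detach(), yi)
```

Running train_step over a stream of tasks interleaves old and new data at every update, which is one standard way to reduce catastrophic forgetting without storing the full history.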
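Since Continual-MAML builds on MAML [3], the following sketch shows the basic MAML inner/outer loop on toy sinusoid regression. The sample_task() helper and the tiny network are hypothetical; this is plain (second-order) MAML, not the Continual-MAML extension presented in the talk.

```python
# MAML inner/outer loop, a minimal sketch on toy sinusoid regression.
# sample_task() and the network are illustrative assumptions, not the talk's setup.
import torch
import torch.nn.functional as F

def init(*shape):
    return (0.1 * torch.randn(*shape)).requires_grad_()

# Explicit parameter tensors, so the inner loop can run a functional
# forward pass with adapted copies of the meta-parameters.
params = [init(40, 1), torch.zeros(40, requires_grad=True),
          init(1, 40), torch.zeros(1, requires_grad=True)]

def forward(p, x):
    h = torch.relu(x @ p[0].t() + p[1])
    return h @ p[2].t() + p[3]

def sample_task(n=10):
    """Hypothetical task generator: sinusoids with random amplitude/phase."""
    amp, phase = 0.1 + 4.9 * torch.rand(1), 3.14 * torch.rand(1)
    x = 10 * torch.rand(2 * n, 1) - 5
    y = amp * torch.sin(x + phase)
    return x[:n], y[:n], x[n:], y[n:]

meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr = 0.01

for step in range(1000):
    xs, ys, xq, yq = sample_task()
    # Inner loop: one adaptation step on the support set; create_graph=True
    # lets the meta-gradient flow through this step (second-order MAML).
    grads = torch.autograd.grad(F.mse_loss(forward(params, xs), ys),
                                params, create_graph=True)
    adapted = [p - inner_lr * g for p, g in zip(params, grads)]
    # Outer loop: the meta-loss is the adapted model's error on the query set.
    meta_loss = F.mse_loss(forward(adapted, xq), yq)
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

The key design point is that the outer update differentiates through the inner adaptation step, so the meta-parameters are trained to be easy to fine-tune; Continual-MAML [3] extends this idea to an online, non-stationary task stream.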
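Because AFD [5] is described above only at a high level, the sketch below illustrates the general GAN-like idea: a discriminator tries to tell apart features of clean versus perturbed inputs, while the feature extractor is trained to be both predictive and indistinguishable across the two views. Having the discriminator act on the extracted features is one plausible reading of the objective; the architecture, the FGSM attack used as the perturbation, and the loss weighting are all assumptions, not the paper's exact setup.

```python
# AFD-style training sketch (illustrative assumptions: tiny MLP, FGSM attack,
# equal loss weights; not the exact setup of Bashivan et al. [5]).
import torch
import torch.nn as nn
import torch.nn.functional as F

feat = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU())     # feature extractor
clf = nn.Linear(256, 10)                                               # task head
disc = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 1))  # clean vs. perturbed

opt_f = torch.optim.Adam(list(feat.parameters()) + list(clf.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def fgsm(x, y, eps=0.1):
    """One-step FGSM perturbation of the task loss (a stand-in attack)."""
    x = x.clone().requires_grad_()
    loss = F.cross_entropy(clf(feat(x)), y)
    g, = torch.autograd.grad(loss, x)
    return (x + eps * g.sign()).detach()

def train_step(x, y):
    x_adv = fgsm(x, y)
    # 1) Discriminator: label features of clean inputs 1, perturbed inputs 0.
    d_loss = (bce(disc(feat(x).detach()), torch.ones(len(x), 1)) +
              bce(disc(feat(x_adv).detach()), torch.zeros(len(x), 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()
    # 2) Feature extractor + classifier: stay predictive on both views and
    #    fool the discriminator on the perturbed view (GAN-like objective).
    task = F.cross_entropy(clf(feat(x)), y) + F.cross_entropy(clf(feat(x_adv)), y)
    fool = bce(disc(feat(x_adv)), torch.ones(len(x), 1))
    opt_f.zero_grad()
    (task + fool).backward()
    opt_f.step()
```

The alternating updates mirror standard GAN training: as the discriminator gets better at spotting perturbed inputs from their features, the extractor is pushed toward representations that are desensitized to the perturbation.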

About ICML 2020

The International Conference on Machine Learning (ICML) is the premier gathering of professionals dedicated to the advancement of the branch of artificial intelligence known as machine learning. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning used in closely related areas like artificial intelligence, statistics and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. ICML is one of the fastest growing artificial intelligence conferences in the world. Participants at ICML span a wide range of backgrounds, from academic and industrial researchers, to entrepreneurs and engineers, to graduate students and postdocs.
