Aryan Mokhtari, The University of Texas at Austin
Talk Title: “Representation Learning with Model Agnostic Meta-Learning”
Abstract: Recent empirical evidence has led to the conventional wisdom that gradient-based meta-learning (GBML) methods perform well at few-shot learning because they learn an expressive data representation that is shared across tasks. However, the mechanics of GBML have remained largely mysterious from a theoretical perspective. In this talk, we show that GBML methods, such as Model-Agnostic Meta-Learning (MAML), are capable of learning a common representation among a set of given tasks in the well-known multi-task linear representation learning setting. Our analysis reveals that the driving force causing GBML methods to learn the underlying representation is that they adapt the final layer of their model, which harnesses the underlying task diversity to improve the representation in all directions of interest.
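The mechanism described in the abstract can be illustrated with a minimal numerical sketch. The code below is an assumption-laden toy, not the speaker's method: it sets up the multi-task linear representation learning setting (tasks share a ground-truth subspace `B_star`, each with its own head `w_t`) and runs a first-order, ANIL-style variant of MAML in which the inner loop adapts only the final layer (the head) while the outer loop updates the shared representation. All names, dimensions, and step sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_tasks = 20, 3, 10  # ambient dim, representation dim, number of tasks

# Hypothetical ground truth: a shared representation B_star (d x k)
# and a task-specific head w_t (k,) per task
B_star, _ = np.linalg.qr(rng.standard_normal((d, k)))
heads = [rng.standard_normal(k) for _ in range(n_tasks)]

def task_batch(t, n=32):
    """Sample a batch of (x, y) pairs from task t: y = <B_star w_t, x>."""
    X = rng.standard_normal((n, d))
    y = X @ B_star @ heads[t]
    return X, y

B = 0.1 * rng.standard_normal((d, k))  # learned representation
alpha, beta = 0.05, 0.02               # inner / outer step sizes

for step in range(2000):
    t = int(rng.integers(n_tasks))
    # Inner loop: adapt only the final layer (head) from a fresh init
    X, y = task_batch(t)
    w = np.zeros(k)
    grad_w = B.T @ X.T @ (X @ B @ w - y) / len(y)
    w = w - alpha * grad_w
    # Outer loop: first-order update of the representation B on the
    # post-adaptation loss (adapted head w treated as fixed)
    Xq, yq = task_batch(t)
    resid = Xq @ B @ w - yq
    grad_B = (Xq.T @ resid[:, None]) @ w[None, :] / len(yq)
    B = B - beta * grad_B

# Alignment between the learned and true subspaces (1.0 = perfectly aligned)
Q, _ = np.linalg.qr(B)
align = np.linalg.norm(B_star.T @ Q, 2)
```

Because each sampled task pulls the adapted head in a different direction, the outer updates to `B` receive signal along every direction spanned by the task heads; this is the sense in which task diversity improves the representation "in all directions of interest."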