Paper Title

Modeling and Optimization Trade-off in Meta-learning

Paper Authors

Katelyn Gao, Ozan Sener

Paper Abstract

By searching for shared inductive biases across tasks, meta-learning promises to accelerate learning on novel tasks, but at the cost of solving a complex bilevel optimization problem. We introduce and rigorously define the trade-off between accurate modeling and optimization ease in meta-learning. At one end, classic meta-learning algorithms account for the structure of meta-learning but solve a complex optimization problem, while at the other end, domain randomized search (otherwise known as joint training) ignores the structure of meta-learning and solves a single-level optimization problem. Taking MAML as the representative meta-learning algorithm, we theoretically characterize the trade-off for general non-convex risk functions as well as linear regression, for which we are able to provide explicit bounds on the errors associated with modeling and optimization. We also empirically study this trade-off for meta-reinforcement learning benchmarks.
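As a concrete sketch of the two ends of this trade-off, the contrasted objectives can be written out in their standard forms (the per-task risk \mathcal{L}_i, shared parameters \theta, and inner step size \alpha are the usual notation for these formulations, not quoted from the paper).

Joint training (domain randomized search), a single-level problem over the average per-task risk:

    \min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \mathcal{L}_i(\theta)

MAML, a bilevel problem whose outer objective evaluates each task's risk after a one-step inner-loop gradient adaptation:

    \min_{\theta} \; \frac{1}{n} \sum_{i=1}^{n} \mathcal{L}_i\!\left(\theta - \alpha \nabla_{\theta} \mathcal{L}_i(\theta)\right)

The first objective ignores the adaptation structure but is easier to optimize; the second models adaptation explicitly but requires differentiating through the inner gradient step.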
