Paper Title
FiT: Parameter Efficient Few-shot Transfer Learning for Personalized and Federated Image Classification
Paper Authors
Paper Abstract
Modern deep learning systems are increasingly deployed in situations such as personalization and federated learning where it is necessary to support i) learning on small amounts of data, and ii) communication-efficient distributed training protocols. In this work, we develop FiLM Transfer (FiT), which fulfills these requirements in the image classification setting by combining ideas from transfer learning (fixed pretrained backbones and fine-tuned FiLM adapter layers) and meta-learning (automatically configured Naive Bayes classifiers and episodic training) to yield parameter-efficient models with superior classification accuracy at low-shot. The resulting parameter efficiency is key for enabling few-shot learning, inexpensive model updates for personalization, and communication-efficient federated learning. We experiment with FiT on a wide range of downstream datasets and show that it achieves better classification accuracy than the leading Big Transfer (BiT) algorithm at low-shot and achieves state-of-the-art accuracy on the challenging VTAB-1k benchmark, with fewer than 1% of the updateable parameters. Finally, we demonstrate the parameter efficiency and superior accuracy of FiT in distributed low-shot applications including model personalization and federated learning, where model update size is an important performance metric.
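The abstract's key architectural idea is that the pretrained backbone stays frozen and only small FiLM adapter layers are fine-tuned, which is what makes the model updates cheap to communicate. As a rough illustration of what a FiLM layer looks like, the PyTorch sketch below implements feature-wise linear modulation with one learned scale and one learned shift per channel; the class name, initialization, and usage are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class FiLM(nn.Module):
    """Feature-wise Linear Modulation (FiLM) adapter layer (illustrative sketch).

    Applies a learned per-channel scale (gamma) and shift (beta) to a feature
    map, leaving the frozen backbone's weights untouched.
    """

    def __init__(self, num_channels: int):
        super().__init__()
        # Initialize to the identity transform (gamma=1, beta=0) so the frozen
        # backbone's features pass through unchanged before fine-tuning.
        self.gamma = nn.Parameter(torch.ones(num_channels))
        self.beta = nn.Parameter(torch.zeros(num_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x has shape (batch, channels, height, width); broadcast the
        # per-channel parameters across the spatial dimensions.
        return self.gamma.view(1, -1, 1, 1) * x + self.beta.view(1, -1, 1, 1)


# Minimal usage: modulate a batch of backbone feature maps.
features = torch.randn(8, 64, 14, 14)   # (batch, channels, H, W)
film = FiLM(num_channels=64)            # only 2 * 64 = 128 trainable parameters
print(film(features).shape)             # torch.Size([8, 64, 14, 14])
```

Because each adapter contributes only two parameters per channel, the set of trainable parameters is a tiny fraction of the backbone, consistent with the abstract's claim of fewer than 1% updateable parameters on VTAB-1k.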