Paper Title
Wisdom of Committees: An Overlooked Approach To Faster and More Accurate Models
Paper Authors
Paper Abstract
Committee-based models (ensembles or cascades) construct models by combining existing pre-trained ones. While ensembles and cascades are well-known techniques that were proposed before deep learning, they are not considered a core building block of deep model architectures and are rarely used as points of comparison in recent literature on developing efficient models. In this work, we go back to basics and conduct a comprehensive analysis of the efficiency of committee-based models. We find that even the simplest method for building committees from existing, independently pre-trained models can match or exceed the accuracy of state-of-the-art models while being drastically more efficient. These simple committee-based models also outperform sophisticated neural architecture search methods (e.g., BigNAS). These findings hold true for several tasks, including image classification, video classification, and semantic segmentation, and various architecture families, such as ViT, EfficientNet, ResNet, MobileNetV2, and X3D. Our results show that an EfficientNet cascade can achieve a 5.4x speedup over EfficientNet-B7 and a ViT cascade can achieve a 2.3x speedup over ViT-L-384 while being equally accurate.
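To make the two committee constructions described in the abstract concrete, below is a minimal PyTorch sketch of a confidence-gated two-model cascade and a softmax-averaging ensemble built from off-the-shelf, independently pre-trained classifiers. The model choices (MobileNetV2 and ResNet-50 from torchvision) and the confidence threshold of 0.8 are illustrative assumptions, not the paper's exact configuration; in practice the threshold would be tuned on a validation set.

```python
import torch
import torchvision.models as models

# Two independently pre-trained classifiers (illustrative choices; the paper
# evaluates families such as EfficientNet, ViT, ResNet, MobileNetV2, and X3D).
small = models.mobilenet_v2(weights="IMAGENET1K_V1").eval()
large = models.resnet50(weights="IMAGENET1K_V1").eval()

CONFIDENCE_THRESHOLD = 0.8  # assumed value; tuned on a validation set in practice


@torch.no_grad()
def cascade_predict(x):
    """Cascade: run the small model first, and fall back to the large model
    only for inputs whose top-1 softmax confidence is below the threshold."""
    probs = torch.softmax(small(x), dim=-1)
    conf, pred = probs.max(dim=-1)
    uncertain = conf < CONFIDENCE_THRESHOLD
    if uncertain.any():
        # Re-run only the uncertain examples through the larger model.
        probs_large = torch.softmax(large(x[uncertain]), dim=-1)
        pred[uncertain] = probs_large.argmax(dim=-1)
    return pred


@torch.no_grad()
def ensemble_predict(x):
    """Ensemble: average the softmax outputs of both models for every input."""
    probs = (torch.softmax(small(x), dim=-1) + torch.softmax(large(x), dim=-1)) / 2
    return probs.argmax(dim=-1)


# Usage: a batch of ImageNet-normalized images of shape (N, 3, 224, 224).
x = torch.randn(4, 3, 224, 224)
print(cascade_predict(x), ensemble_predict(x))
```

The speedup of the cascade comes from the gating step: easy inputs exit after the cheap model, so the expensive model runs only on the uncertain subset, while the ensemble always pays for both models but typically gains accuracy.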