Paper Title
Improving Model Training via Self-learned Label Representations
Paper Authors
Paper Abstract
Modern neural network architectures have shown remarkable success in several large-scale classification and prediction tasks. Part of the success of these architectures is their flexibility in transforming data from raw input representations (e.g., pixels for vision tasks, or text for natural language processing tasks) to a one-hot output encoding. While much work has focused on studying how the input gets transformed into the one-hot encoding, very little work has examined the effectiveness of the one-hot labels themselves. In this work, we demonstrate that more sophisticated label representations are better for classification than the usual one-hot encoding. We propose the Learning with Adaptive Labels (LwAL) algorithm, which learns the label representation simultaneously while training for the classification task. These learned labels can significantly cut down on training time (usually by more than 50%) while often achieving better test accuracies. Our algorithm introduces negligible additional parameters and has minimal computational overhead. Along with improved training times, our learned labels are semantically meaningful and can reveal hierarchical relationships that may be present in the data.
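To make the joint-training idea concrete, below is a minimal PyTorch sketch of the general pattern the abstract describes: treating the per-class label vectors as learnable parameters that are updated alongside the network weights. This is an illustrative assumption, not the paper's actual LwAL procedure; the class names, dimensions, and the dot-product/cross-entropy loss are placeholders chosen for brevity.

```python
# Hypothetical sketch of jointly learning label representations with a
# classifier. All names and choices here are illustrative assumptions,
# not the LwAL algorithm from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointLabelClassifier(nn.Module):
    def __init__(self, in_dim: int, emb_dim: int, num_classes: int):
        super().__init__()
        # Encoder maps inputs into the same space as the label embeddings.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
        )
        # Learnable label representations: one emb_dim vector per class.
        # Only num_classes * emb_dim extra parameters, hence "negligible".
        self.labels = nn.Parameter(torch.randn(num_classes, emb_dim))

    def forward(self, x):
        z = self.encoder(x)          # (batch, emb_dim)
        # Logits = similarity between sample and label embeddings.
        return z @ self.labels.t()   # (batch, num_classes)

# Toy usage with random data standing in for a real dataset.
model = JointLabelClassifier(in_dim=32, emb_dim=16, num_classes=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(64, 32), torch.randint(0, 10, (64,))
for _ in range(5):
    opt.zero_grad()
    # Gradients flow into the encoder AND the label embeddings.
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    opt.step()
```

Because the label vectors themselves receive gradients, classes the network finds similar can drift toward nearby embeddings, which is one way learned labels could expose hierarchical structure in the data, as the abstract suggests.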