Paper Title

AutoLR: Layer-wise Pruning and Auto-tuning of Learning Rates in Fine-tuning of Deep Networks

Authors

Youngmin Ro, Jin Young Choi

Abstract

Existing fine-tuning methods use a single learning rate for all layers. In this paper, we first observe that the layer-wise weight variations induced by fine-tuning with a single learning rate do not match the well-known notion that lower layers extract general features while higher layers extract specific features. Based on this observation, we propose an algorithm that improves fine-tuning performance and reduces network complexity through layer-wise pruning and auto-tuning of layer-wise learning rates. The effectiveness of the proposed algorithm is verified by achieving state-of-the-art performance on the image retrieval benchmark datasets (CUB-200, Cars-196, Stanford Online Products, and In-Shop). Code is available at https://github.com/youngminPIL/AutoLR.
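The central mechanism, a distinct learning rate per layer, maps naturally onto optimizer parameter groups. Below is a minimal sketch in PyTorch, assuming a torchvision ResNet-50 backbone; the base_lr and growth values and the geometric spacing are hypothetical placeholders for illustration, not the authors' auto-tuned schedule, which is in the linked repository.

```python
import torch
import torchvision

# Sketch: one optimizer parameter group per top-level layer of the backbone,
# with smaller learning rates for lower layers (general features) and larger
# rates for higher layers (specific features).
model = torchvision.models.resnet50()

base_lr, growth = 1e-4, 1.5  # hypothetical values, not AutoLR's tuned rates

# Keep only child modules that actually hold trainable parameters.
layers = [
    (name, list(m.parameters()))
    for name, m in model.named_children()
    if any(p.requires_grad for p in m.parameters())
]
param_groups = [
    {"params": params, "lr": base_lr * growth ** depth}
    for depth, (name, params) in enumerate(layers)
]
optimizer = torch.optim.SGD(param_groups, momentum=0.9)

for (name, _), group in zip(layers, optimizer.param_groups):
    print(f"{name}: lr = {group['lr']:.2e}")
```

Note that this sketch fixes the rates once at construction time; AutoLR additionally prunes layers and re-tunes the layer-wise rates during training.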
