Paper Title
BNAS-v2: Memory-efficient and Performance-collapse-prevented Broad Neural Architecture Search
Paper Authors
Paper Abstract
In this paper, we propose BNAS-v2 to further improve the efficiency of NAS while embodying both superiorities of BCNN simultaneously. To mitigate the unfair training issue of BNAS, we employ a continuous relaxation strategy that makes each edge of a cell in BCNN relevant to all candidate operations, yielding an over-parameterized BCNN. Specifically, the continuous relaxation strategy relaxes the choice of a candidate operation into a softmax over all predefined operations. Consequently, BNAS-v2 uses a gradient-based optimization algorithm to simultaneously update every possible path of the over-parameterized BCNN, rather than a single sampled path as in BNAS. However, continuous relaxation leads to another issue, named performance collapse, in which weight-free operations are prone to be selected by the search strategy. To address this issue, two solutions are given: 1) we propose the Confident Learning Rate (CLR), which weights architecture-weight updates by the confidence of the gradient, a confidence that increases with the training time of the over-parameterized BCNN; 2) we introduce the combination of partial channel connections and edge normalization, which also further improves memory efficiency. Moreover, we denote differentiable BNAS (i.e., BNAS with continuous relaxation) as BNAS-D, BNAS-D with CLR as BNAS-v2-CLR, and partially connected BNAS-D as BNAS-v2-PC. Experimental results on CIFAR-10 and ImageNet show that 1) BNAS-v2 delivers state-of-the-art search efficiency on both CIFAR-10 (0.05 GPU days, 4x faster than BNAS) and ImageNet (0.19 GPU days); and 2) the proposed CLR effectively alleviates the performance collapse issue in both BNAS-D and the vanilla differentiable NAS framework.
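The two mechanisms named in the abstract can be illustrated with a short sketch. The PyTorch snippet below is a minimal, hypothetical illustration, not the paper's implementation: MixedEdge shows the continuous relaxation, i.e. a softmax over architecture weights mixing all candidate operations on one edge of the over-parameterized BCNN, and confident_lr sketches the CLR idea of an architecture-weight step size that grows with training time. The candidate operation set, the class and function names, and the linear ramp schedule are assumptions made only for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical candidate operation set; the actual BNAS-v2 search space is defined in the paper.
CANDIDATE_OPS = {
    'max_pool_3x3': lambda C: nn.MaxPool2d(3, stride=1, padding=1),
    'skip_connect': lambda C: nn.Identity(),
    'conv_3x3':     lambda C: nn.Conv2d(C, C, 3, stride=1, padding=1, bias=False),
    'conv_5x5':     lambda C: nn.Conv2d(C, C, 5, stride=1, padding=2, bias=False),
}


class MixedEdge(nn.Module):
    """One edge of an over-parameterized cell: a softmax-weighted mixture of all candidate ops."""

    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList(build(channels) for build in CANDIDATE_OPS.values())
        # Architecture weights (alpha), one scalar per candidate operation on this edge.
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        # Continuous relaxation: the discrete choice of one operation is relaxed
        # into a softmax over all predefined operations, so gradients flow to
        # every possible path simultaneously.
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))


def confident_lr(base_lr, epoch, total_epochs):
    """Sketch of a Confident Learning Rate: the step size applied to architecture
    weights grows with the training time of the over-parameterized BCNN, so early,
    unreliable gradients have less influence on the architecture. The exact schedule
    of BNAS-v2-CLR is given in the paper; a linear ramp is assumed here."""
    return base_lr * (epoch + 1) / total_epochs
```

Because every candidate path stays differentiable in the mixture, one backward pass updates all architecture weights jointly, which is what lets BNAS-v2 avoid the single-path sampling (and its unfair training) used by BNAS.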