Paper Title


DCNAS: Densely Connected Neural Architecture Search for Semantic Image Segmentation

Paper Authors

Xiong Zhang, Hongmin Xu, Hong Mo, Jianchao Tan, Cheng Yang, Lei Wang, Wenqi Ren

Paper Abstract


Neural Architecture Search (NAS) has shown great potential in automatically designing scalable network architectures for dense image prediction. However, existing NAS algorithms usually compromise on a restricted search space and search on a proxy task to meet achievable computational demands. To allow as wide as possible network architectures and avoid the gap between the target and proxy datasets, we propose a Densely Connected NAS (DCNAS) framework, which directly searches for the optimal network structure for the multi-scale representation of visual information over a large-scale target dataset. Specifically, by connecting cells with each other using learnable weights, we introduce a densely connected search space that covers an abundance of mainstream network designs. Moreover, by combining both path-level and channel-level sampling strategies, we design a fusion module to reduce the memory consumption of the ample search space. We demonstrate that the architecture obtained from our DCNAS algorithm achieves state-of-the-art performance on public semantic image segmentation benchmarks, including 84.3% on Cityscapes and 86.9% on PASCAL VOC 2012. We also retain leading performance when evaluating the architecture on the more challenging ADE20K and Pascal Context datasets.
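The core ideas in the abstract — fusing features from all predecessor cells through learnable connection weights, with path-level sampling to cut memory during search — can be illustrated with a minimal sketch. Note this is an illustrative toy in pure Python under our own assumptions (the function and parameter names `fuse_inputs`, `arch_weights`, and `sample_prob` are hypothetical), not the authors' implementation:

```python
import math
import random

def softmax(weights):
    # Normalize learnable connection weights into a convex combination.
    m = max(weights)
    exps = [math.exp(w - m) for w in weights]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_inputs(predecessor_feats, arch_weights, sample_prob=0.5, rng=None):
    """Fuse feature vectors from all predecessor cells.

    Path-level sampling: each incoming connection is kept with
    probability `sample_prob`, so only a subset of paths is held in
    memory at any search step. The kept paths are then combined with
    softmax-normalized learnable weights (a dense, differentiable
    connection pattern once training converges).
    """
    rng = rng or random.Random()
    kept = [i for i in range(len(predecessor_feats)) if rng.random() < sample_prob]
    if not kept:
        # Always keep at least one path so the cell receives input.
        kept = [len(predecessor_feats) - 1]
    w = softmax([arch_weights[i] for i in kept])
    dim = len(predecessor_feats[0])
    # Weighted sum of the sampled predecessor features, dimension-wise.
    return [sum(w[k] * predecessor_feats[i][d] for k, i in enumerate(kept))
            for d in range(dim)]
```

In the real search, `arch_weights` would be trainable parameters updated jointly with network weights, and the features would be multi-scale tensors rather than flat vectors; the channel-level sampling mentioned in the abstract would additionally subsample feature channels within each kept path.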
