Paper Title
Deep Hashing with Hash-Consistent Large Margin Proxy Embeddings
Paper Authors
Paper Abstract
Image hash codes are produced by binarizing the embeddings of convolutional neural networks (CNNs) trained for either classification or retrieval. While proxy embeddings achieve good performance on both tasks, they are non-trivial to binarize, due to a rotational ambiguity that encourages non-binary embeddings. The use of a fixed set of proxies (weights of the CNN classification layer) is proposed to eliminate this ambiguity, and a procedure to design proxy sets that are nearly optimal for both classification and hashing is introduced. The resulting hash-consistent large margin (HCLM) proxies are shown to encourage saturation of hashing units, thus guaranteeing a small binarization error, while producing highly discriminative hash codes. A semantic extension (sHCLM), aimed to improve hashing performance in a transfer scenario, is also proposed. Extensive experiments show that sHCLM embeddings achieve significant improvements over state-of-the-art hashing procedures on several small and large datasets, both within and beyond the set of training classes.
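The binarization step described above can be illustrated with a minimal sketch: a real-valued embedding is thresholded elementwise to yield a binary hash code. The embedding values and function name below are hypothetical, chosen only for illustration; the paper's actual pipeline uses embeddings from a trained CNN.

```python
import numpy as np

def binarize(embedding):
    """Map a real-valued embedding to a {0, 1} hash code by thresholding at 0.

    Hypothetical helper for illustration: saturated units (values far from 0)
    incur a small binarization error, which is what the HCLM proxies encourage.
    """
    return (np.asarray(embedding) >= 0).astype(np.uint8)

emb = np.array([0.9, -0.8, 0.1, -1.2])  # hypothetical CNN embedding
code = binarize(emb)
print(code.tolist())  # [1, 0, 1, 0]
```

Retrieval then compares such codes by Hamming distance, which is where near-binary (saturated) embeddings pay off: the thresholding discards little information.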