Paper Title
Cluster-and-Conquer: When Randomness Meets Graph Locality
Paper Authors
Abstract
K-Nearest-Neighbors (KNN) graphs are central to many emblematic data mining and machine-learning applications. Some of the most efficient KNN graph algorithms are incremental and local: they start from a random graph, which they incrementally improve by traversing neighbors-of-neighbors links. Paradoxically, this random start is also one of the key weaknesses of these algorithms: nodes are initially connected to dissimilar neighbors that lie far away according to the similarity metric. As a result, incremental algorithms must first laboriously explore spurious potential neighbors before they can identify similar nodes and start converging. In this paper, we remove this drawback with Cluster-and-Conquer (C² for short). Cluster-and-Conquer boosts the starting configuration of greedy algorithms thanks to a novel lightweight clustering mechanism, dubbed FastRandomHash. FastRandomHash leverages randomness and recursion to pre-cluster similar nodes at a very low cost. Our extensive evaluation on real datasets shows that Cluster-and-Conquer significantly outperforms existing approaches, including LSH, yielding speed-ups of up to 4.42× while incurring only a negligible loss in terms of KNN quality.
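The abstract describes FastRandomHash only at a high level: use randomness and recursion to cheaply pre-cluster similar nodes, then seed each node's initial neighbor list from its cluster rather than from a uniformly random graph. The Python sketch below illustrates this general idea with a random-pivot recursive split; the function names, the pivot-and-median splitting rule, and all parameters are illustrative assumptions, not the paper's actual FastRandomHash algorithm.

```python
import random
import numpy as np

def random_split_cluster(points, ids, max_size, rng):
    """Recursively pre-cluster node ids: pick a random pivot point and
    split items by whether they lie closer to it than the median distance.
    Illustrative sketch only -- not the paper's FastRandomHash."""
    if len(ids) <= max_size:
        return [ids]
    pivot = points[rng.choice(ids)]
    dists = np.linalg.norm(points[ids] - pivot, axis=1)
    median = np.median(dists)
    near = [i for i, d in zip(ids, dists) if d <= median]
    far  = [i for i, d in zip(ids, dists) if d > median]
    if not near or not far:          # degenerate split: stop recursing
        return [ids]
    return (random_split_cluster(points, near, max_size, rng)
            + random_split_cluster(points, far, max_size, rng))

def init_knn_candidates(points, k, max_cluster=32, seed=0):
    """Seed each node's k candidate neighbors from its pre-cluster,
    giving a greedy KNN-descent algorithm a better-than-random start."""
    rng = random.Random(seed)
    clusters = random_split_cluster(points, list(range(len(points))),
                                    max_cluster, rng)
    candidates = {}
    for cluster in clusters:
        for i in cluster:
            others = [j for j in cluster if j != i]
            candidates[i] = rng.sample(others, min(k, len(others)))
    return candidates
```

In this sketch the pre-clustering costs only distance computations against random pivots, so it stays cheap while already grouping nearby nodes; a neighbors-of-neighbors refinement pass would then converge from these candidate lists instead of from arbitrary edges.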