Paper title
Learning-Augmented Maximum Flow
Paper authors
Paper abstract
We propose a framework for speeding up maximum flow computation by using predictions. A prediction is a flow, i.e., an assignment of non-negative flow values to edges, which satisfies the flow conservation property but does not necessarily respect the edge capacities of the actual instance (since these were unknown at the time of learning). We present an algorithm that, given an $m$-edge flow network and a predicted flow, computes a maximum flow in $O(m\eta)$ time, where $\eta$ is the $\ell_1$ error of the prediction, i.e., the sum over the edges of the absolute difference between the predicted and optimal flow values. Moreover, we prove that, given oracle access to a distribution over flow networks, it is possible to efficiently PAC-learn a prediction minimizing the expected $\ell_1$ error over that distribution. Our results fit into the recent line of research on learning-augmented algorithms, which aims to improve on the worst-case bounds of classical algorithms by using predictions, e.g., machine-learned from previous similar instances. So far, the main focus in this area has been on improving competitive ratios for online problems. Following Dinitz et al. (NeurIPS 2021), our results are among the first to improve the running time of an offline problem.
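To make the two key notions in the abstract concrete, here is a minimal Python sketch (the edge-dictionary representation and function names are our own illustration, not from the paper) that checks the flow-conservation property of a predicted flow and computes its $\ell_1$ error $\eta$ against an optimal flow:

```python
from collections import defaultdict

def conserves_flow(edges, flow, source, sink):
    """Check flow conservation: at every vertex other than the source
    and the sink, total inflow equals total outflow."""
    net = defaultdict(float)
    for (u, v) in edges:
        f = flow.get((u, v), 0.0)
        net[u] -= f  # flow leaving u
        net[v] += f  # flow entering v
    return all(abs(net[x]) < 1e-9 for x in net if x not in (source, sink))

def l1_error(flow_pred, flow_opt, edges):
    """The prediction error eta: sum over edges of the absolute
    difference between predicted and optimal flow values."""
    return sum(abs(flow_pred.get(e, 0.0) - flow_opt.get(e, 0.0)) for e in edges)

# Toy instance: a single s -> a -> t path; the prediction routes 2 units
# where the optimal flow routes 3, so eta = |3-2| + |3-2| = 2.
edges = [("s", "a"), ("a", "t")]
pred = {("s", "a"): 2.0, ("a", "t"): 2.0}
opt = {("s", "a"): 3.0, ("a", "t"): 3.0}
print(conserves_flow(edges, pred, "s", "t"))  # True: conservation holds at a
print(l1_error(pred, opt, edges))             # 2.0
```

Note that the prediction here is conservation-feasible but capacity-oblivious, exactly as the abstract describes: feasibility with respect to capacities is only restored by the algorithm once the actual instance is revealed.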