Paper Title

Strategyproof Scheduling with Predictions

Paper Authors

Eric Balkanski, Vasilis Gkatzelis, Xizhi Tan

Paper Abstract

In their seminal paper that initiated the field of algorithmic mechanism design, \citet{NR99} studied the problem of designing strategyproof mechanisms for scheduling jobs on unrelated machines aiming to minimize the makespan. They provided a strategyproof mechanism that achieves an $n$-approximation and they made the bold conjecture that this is the best approximation achievable by any deterministic strategyproof scheduling mechanism. After more than two decades and several efforts, $n$ remains the best known approximation and very recent work by \citet{CKK21} has been able to prove an $\Omega(\sqrt{n})$ approximation lower bound for all deterministic strategyproof mechanisms. This strong negative result, however, heavily depends on the fact that the performance of these mechanisms is evaluated using worst-case analysis. To overcome such overly pessimistic, and often uninformative, worst-case bounds, a surge of recent work has focused on the ``learning-augmented framework'', whose goal is to leverage machine-learned predictions to obtain improved approximations when these predictions are accurate (consistency), while also achieving near-optimal worst-case approximations even when the predictions are arbitrarily wrong (robustness). In this work, we study the classic strategic scheduling problem of~\citet{NR99} using the learning-augmented framework and give a deterministic polynomial-time strategyproof mechanism that is $6$-consistent and $2n$-robust. We thus achieve the ``best of both worlds'': an $O(1)$ consistency and an $O(n)$ robustness that asymptotically matches the best-known approximation. We then extend this result to provide more general worst-case approximation guarantees as a function of the prediction error. Finally, we complement our positive results by showing that any $1$-consistent deterministic strategyproof mechanism has unbounded robustness.
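The abstract references the classic $n$-approximate strategyproof mechanism of \citet{NR99}: the VCG ("min-work") mechanism, which assigns each job to the machine reporting the smallest processing time. As a minimal illustrative sketch (this is the known baseline mechanism, not the paper's new $6$-consistent, $2n$-robust mechanism; function names here are ours), the Python snippet below implements the min-work allocation and the makespan objective, and constructs an instance where its makespan is a factor of roughly $n$ away from optimal.

```python
import numpy as np

def vcg_min_work_allocation(times: np.ndarray) -> list[int]:
    """Assign each job to the machine reporting the smallest processing
    time. This is the classic VCG ("min-work") mechanism of Nisan and
    Ronen: it is strategyproof and achieves an n-approximation of the
    optimal makespan.

    times[i, j] = reported time for machine i to process job j.
    """
    return [int(np.argmin(times[:, j])) for j in range(times.shape[1])]

def makespan(times: np.ndarray, allocation: list[int]) -> float:
    """Makespan: the largest total load assigned to any single machine."""
    loads = np.zeros(times.shape[0])
    for job, machine in enumerate(allocation):
        loads[machine] += times[machine, job]
    return float(loads.max())

# A tight instance: n machines, n jobs; every job takes 1 on machine 0
# and 1 + eps elsewhere. Min-work piles all n jobs onto machine 0
# (makespan n), while spreading the jobs achieves makespan 1 + eps,
# so the approximation ratio is roughly n.
n, eps = 4, 0.01
times = np.full((n, n), 1.0 + eps)
times[0, :] = 1.0
alloc = vcg_min_work_allocation(times)
print(makespan(times, alloc))  # 4.0: all jobs land on machine 0
print(1 + eps)                 # ~1.01: the optimal makespan
```

Worst-case instances like this one are exactly what motivates the learning-augmented framework studied in the paper: with an accurate prediction of the true processing times, a mechanism can avoid such pile-ups (consistency) while still guaranteeing an $O(n)$ approximation when the prediction is arbitrarily wrong (robustness).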
