Paper Title
Competitive Algorithms for Block-Aware Caching
Paper Authors
Paper Abstract
We study the block-aware caching problem, a generalization of classic caching in which fetching (or evicting) multiple pages from the same block simultaneously incurs the same cost as fetching (or evicting) just one page from that block. Given a cache of size $k$ and a sequence of requests to $n$ pages partitioned into given blocks of size $\beta \leq k$, the goal is to minimize the total cost of fetches to (or evictions from) the cache. We show the following results:

$\bullet$ For the eviction cost model, we give an $O(\log k)$-approximate offline algorithm, a $k$-competitive deterministic online algorithm, and an $O(\log^2 k)$-competitive randomized online algorithm.

$\bullet$ For the fetching cost model, we show an integrality gap of $\Omega(\beta)$ for the natural LP relaxation of the problem, and an $\Omega(\beta + \log k)$ lower bound for randomized online algorithms. The strategy of ignoring the block structure and running a classical paging algorithm trivially achieves an $O(\beta)$ approximation and an $O(\beta \log k)$ competitive ratio for the offline and randomized online settings, respectively.

$\bullet$ For both the fetching and the eviction cost model, we show improved bounds for the $(h,k)$-bicriteria version of the problem. In particular, when $k = 2h$, we match the performance of classical caching algorithms up to constant factors.

Our results establish a separation between the tractability of the fetching and eviction cost models, which is interesting since fetching and eviction costs are the same up to an additive term in classic caching. Previous work studied only deterministic online algorithms for the fetching cost model, and only when $k > h$. Our insight is to relax the block-aware caching problem to a submodular covering LP. The main technical challenge is to maintain a competitive fractional solution, and to round it with bounded loss, as the constraints of this LP are revealed online.
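To make the cost accounting concrete, below is a minimal sketch (not the paper's algorithm) of the fetching cost model: a classic LRU policy that ignores blocks and pays one fetch operation per miss, versus a naive block-batching heuristic that, on a miss, fetches the missing pages of the requested page's block in a single operation of cost 1. The functions `lru_ignore_blocks` and `lru_with_block_prefetch`, the data layout, and the toy instance are all hypothetical constructions for intuition; on this instance the batching policy pays roughly a factor $\beta$ less, consistent with the $O(\beta)$ baseline mentioned in the abstract.

```python
from collections import OrderedDict

def lru_ignore_blocks(requests, k):
    """Classic LRU that ignores blocks: each miss fetches exactly one page,
    so its block-aware fetch cost equals its number of misses."""
    cache = OrderedDict()      # page -> None, ordered from least to most recent
    fetch_ops = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)
            continue
        if len(cache) >= k:
            cache.popitem(last=False)   # evict the least recently used page
        cache[p] = None
        fetch_ops += 1                  # one fetched page, one block touched: cost 1
    return fetch_ops

def lru_with_block_prefetch(requests, block_of, pages_by_block, k):
    """On a miss, fetch every not-yet-cached page of the requested page's block
    in one batched operation; the batch touches a single block, so it costs 1.
    Assumes block size beta <= k, as in the problem statement."""
    cache = OrderedDict()
    fetch_ops = 0
    for p in requests:
        if p in cache:
            cache.move_to_end(p)
            continue
        batch = [q for q in pages_by_block[block_of[p]] if q not in cache]
        while len(cache) + len(batch) > k:
            cache.popitem(last=False)   # make room in LRU order
        for q in batch:
            cache[q] = None
        fetch_ops += 1                  # whole batch is one block: cost 1
    return fetch_ops

if __name__ == "__main__":
    # Toy instance: 6 pages in 2 blocks of size beta = 3, cache of size k = 3,
    # requests alternating between the two blocks.
    block_of = {p: p // 3 for p in range(6)}
    pages_by_block = {0: [0, 1, 2], 1: [3, 4, 5]}
    reqs = [0, 1, 2, 3, 4, 5] * 4
    print("block-oblivious LRU fetch cost:", lru_ignore_blocks(reqs, 3))            # 24
    print("block-batched LRU fetch cost:  ",
          lru_with_block_prefetch(reqs, block_of, pages_by_block, 3))               # 8
```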