Paper Title
DEAP Cache: Deep Eviction Admission and Prefetching for Cache
Paper Authors
Paper Abstract
Recent approaches that learn policies to improve caching target just one of the prefetching, admission, and eviction processes. In contrast, we propose an end-to-end pipeline to learn all three policies using machine learning. We also take inspiration from the success of pretraining on large corpora to learn specialized embeddings for the task. We model prefetching as a sequence prediction task based on past misses. Following previous work suggesting that frequency and recency are the two orthogonal, fundamental attributes for caching, we use an online reinforcement learning technique to learn the optimal policy distribution between two orthogonal eviction strategies based on them. While previous approaches used the past as an indicator of the future, we instead explicitly model future frequency and recency in a multi-task fashion alongside prefetching, leveraging the ability of deep networks to capture future trends and using them to learn eviction and admission. We also model the distribution of the data in an online fashion using Kernel Density Estimation, to deal with the problem of caching non-stationary data. We present our approach as a "proof of concept" of learning all three components of cache strategies using machine learning, and leave improvements for practical deployment to future work.
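To make the eviction idea concrete, the following is a minimal toy sketch (not the paper's DEAP networks) of learning a policy distribution between two orthogonal eviction strategies, LRU (recency) and LFU (frequency): the cache samples which policy evicts, and multiplicatively down-weights a policy whenever a key it evicted is requested again (a regret signal). All class and method names here are hypothetical illustrations.

```python
# Hypothetical sketch: online regret-based mixing of LRU and LFU eviction.
import random
from collections import OrderedDict, defaultdict

class HybridCache:
    def __init__(self, capacity, lr=0.3, seed=0):
        self.capacity = capacity
        self.store = OrderedDict()      # key order tracks recency
        self.freq = defaultdict(int)    # lifetime access counts
        self.weights = [1.0, 1.0]       # [LRU, LFU] policy weights
        self.evicted_by = {}            # victim key -> policy arm that evicted it
        self.lr = lr
        self.rng = random.Random(seed)
        self.hits = self.misses = 0

    def _evict(self):
        # Sample an eviction policy from the current weight distribution.
        p_lru = self.weights[0] / sum(self.weights)
        if self.rng.random() < p_lru:
            victim, arm = next(iter(self.store)), 0                       # least recently used
        else:
            victim, arm = min(self.store, key=self.freq.__getitem__), 1   # least frequently used
        del self.store[victim]
        self.evicted_by[victim] = arm

    def access(self, key):
        self.freq[key] += 1
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)   # refresh recency on a hit
            return True
        self.misses += 1
        if key in self.evicted_by:
            # Regret: the policy that evicted this key made a mistake.
            self.weights[self.evicted_by.pop(key)] *= (1.0 - self.lr)
        if len(self.store) >= self.capacity:
            self._evict()
        self.store[key] = True
        return False
```

This regret-style update is one simple way to realize "learning the optimal policy distribution" online; the paper's approach additionally conditions on learned predictions of future frequency and recency rather than on past statistics alone.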