Paper Title

Automation of application-level caching in a seamless way

Paper Authors

Jhonny Mertz and Ingrid Nunes

Abstract

Meeting performance and scalability requirements while delivering services is a critical issue in web applications. Recently, the latency and cost of Internet-based services have been encouraging the use of application-level caching to continue satisfying users' demands and to improve the scalability and availability of origin servers. Application-level caching, in which developers manually control cached content, has been adopted when traditional forms of caching are insufficient to meet such requirements. Despite its popularity, this level of caching is typically addressed in an ad hoc way, given that it depends on specific details of the application. Furthermore, it forces application developers to reason about a crosscutting concern, which is unrelated to the application business logic. As a result, application-level caching is a time-consuming and error-prone task, becoming a common source of bugs. Among all the issues involved with application-level caching, the decision of what should be cached must frequently be adjusted to cope with the application evolution and usage, making it a challenging task. In this paper, we introduce an automated caching approach to automatically identify application-level cache content at runtime by monitoring system execution and adaptively managing caching decisions. Our approach is implemented as a framework that can be seamlessly integrated into new and existing web applications. In addition to reducing the effort required from developers to develop a caching solution, an empirical evaluation showed that our approach significantly speeds up applications and improves hit ratios, with improvements ranging from 2.78% to 17.18%.
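To make the abstract's premise concrete, the sketch below shows the manual application-level caching pattern (often called cache-aside) that the paper identifies as time-consuming and error-prone: the developer must hand-write the lookup, the population on a miss, and the expiration policy for every cached call site. This is a minimal illustrative example, not the paper's framework; all names (`SimpleCache`, `get_product`, `fetch_from_db`) are hypothetical.

```python
import time

class SimpleCache:
    """A tiny in-memory cache with time-based expiration and hit/miss counters."""

    def __init__(self, ttl_seconds=60):
        self.ttl = ttl_seconds
        self.store = {}   # key -> (value, insertion timestamp)
        self.hits = 0
        self.misses = 0

    def get(self, key):
        entry = self.store.get(key)
        if entry is not None:
            value, ts = entry
            if time.time() - ts < self.ttl:
                self.hits += 1
                return value
            del self.store[key]  # expired entry: evict and fall through to a miss
        self.misses += 1
        return None

    def put(self, key, value):
        self.store[key] = (value, time.time())

    def hit_ratio(self):
        total = self.hits + self.misses
        return self.hits / total if total else 0.0

cache = SimpleCache(ttl_seconds=30)

def fetch_from_db(product_id):
    # Placeholder for an expensive query against the origin server.
    return {"id": product_id, "name": "product-%d" % product_id}

def get_product(product_id):
    # The crosscutting concern described in the abstract: at each call site
    # the developer manually decides what to cache, builds the key, checks
    # the cache, and repopulates it on a miss.
    key = "product:%d" % product_id
    value = cache.get(key)
    if value is None:
        value = fetch_from_db(product_id)
        cache.put(key, value)
    return value
```

Repeating this boilerplate across an application, and keeping the what-to-cache decisions in sync with evolving usage patterns, is precisely the burden the proposed approach automates by monitoring execution at runtime.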
