Paper Title

Continuous Prefetch for Interactive Data Applications

Paper Authors

Haneen Mohammed, Ziyun Wei, Eugene Wu, Ravi Netravali

Paper Abstract

Interactive data visualization and exploration (DVE) applications are often network-bottlenecked due to bursty request patterns, large response sizes, and heterogeneous deployments over a range of networks and devices. This makes it difficult to ensure consistently low response times (< 100ms). Khameleon is a framework for DVE applications that uses a novel combination of prefetching and response tuning to dynamically trade off response quality for low latency. Khameleon exploits DVE's approximation tolerance: immediate lower-quality responses are preferable to waiting for complete results. To this end, Khameleon progressively encodes responses, and runs a server-side scheduler that proactively streams portions of responses using available bandwidth to maximize the user's perceived interactivity. The scheduler involves a complex optimization based on available resources, predicted user interactions, and response quality levels; yet, decisions must also be real-time. To overcome this, Khameleon uses a fast greedy approximation which closely mimics the optimal approach. Using image exploration and visualization applications with real user interaction traces, we show that across a wide range of network and client resource conditions, Khameleon outperforms classic prefetching approaches that benefit from perfect prediction models: response latencies with Khameleon are never higher, and typically 2 to 3 orders of magnitude lower, while response quality remains within 50%-80%.
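To make the scheduling idea concrete, below is a minimal Python sketch of a greedy, utility-per-byte prefetch scheduler in the spirit of what the abstract describes. It is an illustration only, not Khameleon's actual implementation: the names (Candidate, greedy_schedule), the per-block utility values, and the probability-weighted score are assumptions layered onto the abstract's description of progressively encoded responses, predicted interactions, and a fixed bandwidth budget.

```python
# Hypothetical sketch of a greedy prefetch scheduler for progressively
# encoded responses. Not the paper's algorithm; names and scoring are assumed.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Candidate:
    """A predicted future request whose response is progressively encoded."""
    request_id: str
    probability: float            # predicted likelihood the user issues this request
    block_sizes: List[int]        # bytes per progressive block, in decode order
    block_utilities: List[float]  # quality gained by each additional block
    sent: int = 0                 # number of blocks already scheduled


def greedy_schedule(candidates: List[Candidate], budget_bytes: int) -> List[Tuple[str, int]]:
    """Greedily fill the bandwidth budget with the blocks offering the highest
    expected quality gain per byte; return (request_id, block_index) in send order."""
    plan: List[Tuple[str, int]] = []
    remaining = budget_bytes
    while remaining > 0:
        best, best_score = None, 0.0
        for c in candidates:
            if c.sent >= len(c.block_sizes):
                continue  # response fully scheduled
            size = c.block_sizes[c.sent]
            if size > remaining:
                continue  # next block does not fit in the remaining budget
            # expected quality gain per byte for this candidate's next block
            score = c.probability * c.block_utilities[c.sent] / size
            if score > best_score:
                best, best_score = c, score
        if best is None:
            break  # nothing else fits or adds utility
        plan.append((best.request_id, best.sent))
        remaining -= best.block_sizes[best.sent]
        best.sent += 1
    return plan


if __name__ == "__main__":
    # Hypothetical example: two predicted pan/zoom targets, one far likelier.
    candidates = [
        Candidate("tile_A", 0.7, [20_000, 40_000, 80_000], [0.5, 0.3, 0.2]),
        Candidate("tile_B", 0.3, [20_000, 40_000, 80_000], [0.5, 0.3, 0.2]),
    ]
    print(greedy_schedule(candidates, budget_bytes=120_000))
```

In this toy example the scheduler interleaves low-quality blocks for both predicted requests before spending bytes on higher-quality refinements of either, which matches the abstract's intuition that an immediate lower-quality response beats waiting for a complete one; the greedy per-byte scoring is a stand-in for the joint optimization the paper solves approximately.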
