Paper Title
Generalization Bounds on Multi-Kernel Learning with Mixed Datasets
Paper Authors
Paper Abstract
This paper presents novel generalization bounds for the multi-kernel learning problem. Motivated by applications in sensor networks and spatio-temporal models, we assume a mixed dataset in which each sample is drawn from a finite pool of Markov chains. Our bounds for learning kernels admit an $O(\sqrt{\log m})$ dependency on the number of base kernels and an $O(1/\sqrt{n})$ dependency on the number of training samples. However, compared with existing generalization bounds for multi-kernel learning with i.i.d. datasets, additional $O(1/\sqrt{n})$ terms appear to compensate for the dependency among samples.
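To give a feel for the stated rates, the sketch below numerically evaluates a hypothetical dominant term of the form $\sqrt{\log m}/\sqrt{n}$ (the function name `bound_dominant_term` is ours, not from the paper). It illustrates the abstract's claim that the bound grows only as $O(\sqrt{\log m})$ in the number of base kernels $m$ while shrinking as $O(1/\sqrt{n})$ in the number of training samples $n$; this is an illustration of the scaling only, not the paper's actual bound, which contains additional terms and constants.

```python
import math

def bound_dominant_term(m: int, n: int) -> float:
    """Illustrative dominant term sqrt(log m) / sqrt(n) of a
    generalization bound with O(sqrt(log m)) kernel dependency
    and O(1/sqrt(n)) sample dependency (assumed form, not the
    paper's exact expression)."""
    return math.sqrt(math.log(m)) / math.sqrt(n)

# Growing the kernel pool 100x barely moves the term,
# while quadrupling n halves it.
for m in (10, 100, 1000):
    print(f"m={m:5d}, n=10000 -> {bound_dominant_term(m, 10_000):.4f}")
print(f"m=  100, n=40000 -> {bound_dominant_term(100, 40_000):.4f}")
```

The mild $\sqrt{\log m}$ growth is what makes it feasible to learn over large dictionaries of base kernels without the bound deteriorating.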