Paper Title
Dynamic Knowledge Distillation for Black-box Hypothesis Transfer Learning
Paper Authors
Paper Abstract
In real-world applications such as healthcare, it is usually difficult to build a machine learning prediction model that works universally well across different institutions. At the same time, the available model is often proprietary, i.e., neither the model parameters nor the data set used for model training is accessible. Consequently, leveraging the knowledge hidden in the available model (a.k.a. the hypothesis) and adapting it to a local data set becomes extremely challenging. Motivated by this situation, in this paper we address a specific case within the hypothesis transfer learning framework, in which 1) the source hypothesis is a black-box model and 2) the source domain data is unavailable. In particular, we introduce a novel algorithm called dynamic knowledge distillation for hypothesis transfer learning (dkdHTL). In this method, we use knowledge distillation with an instance-wise weighting mechanism to adaptively transfer the "dark" knowledge from the source hypothesis to the target domain. The weighting coefficients of the distillation loss and the standard loss are determined by the consistency between the predicted probability of the source hypothesis and the target ground-truth label. Empirical results on both transfer learning benchmark datasets and a healthcare dataset demonstrate the effectiveness of our method.
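The abstract describes a per-instance combination of a distillation term and a standard supervised term, weighted by how well the black-box source hypothesis agrees with the target ground-truth label. Below is a minimal PyTorch-style sketch of such a loss under stated assumptions: the function name dkd_htl_loss, the use of the source probability assigned to the true label as the consistency weight, and the temperature handling are illustrative choices, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def dkd_htl_loss(student_logits, teacher_probs, labels, temperature=2.0):
    """Hypothetical sketch of an instance-wise, dynamically weighted
    distillation loss in the spirit of dkdHTL (not the authors' code).

    student_logits: (N, C) logits of the target-domain model being trained.
    teacher_probs:  (N, C) predicted probabilities from the black-box source
                    hypothesis (only its output probabilities are accessible).
    labels:         (N,) ground-truth target-domain labels.
    """
    # Per-instance consistency: probability the black-box source hypothesis
    # assigns to the true target label (assumed weighting scheme).
    consistency = teacher_probs.gather(1, labels.unsqueeze(1)).squeeze(1)  # (N,)

    # Standard supervised loss on the target ground truth.
    ce = F.cross_entropy(student_logits, labels, reduction="none")  # (N,)

    # Distillation loss against the (temperature-softened) teacher output.
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    soft_teacher = F.softmax(torch.log(teacher_probs + 1e-8) / temperature, dim=1)
    kd = F.kl_div(log_p_student, soft_teacher, reduction="none").sum(dim=1)  # (N,)

    # Instance-wise weighting: the more the source hypothesis agrees with the
    # ground truth, the more weight the distillation term receives.
    loss = (consistency * kd * temperature ** 2 + (1.0 - consistency) * ce).mean()
    return loss

# Illustrative usage with random tensors:
# logits = torch.randn(8, 3, requires_grad=True)
# teacher = torch.softmax(torch.randn(8, 3), dim=1)
# y = torch.randint(0, 3, (8,))
# dkd_htl_loss(logits, teacher, y).backward()
```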