Paper Title
$β$-CapsNet: Learning Disentangled Representation for CapsNet by Information Bottleneck
Paper Authors
Paper Abstract
We present a framework for learning disentangled representations in CapsNet via an information bottleneck constraint that distills information into a compact form and encourages the network to learn an interpretable, factorized capsule. In our $β$-CapsNet framework, the hyperparameter $β$ trades off disentanglement against other tasks, and variational inference is used to convert the information bottleneck term into a KL divergence that is approximated as a constraint on the mean of the capsule. For supervised learning, a class-independent mask vector is used to capture the types of variation synthetically, irrespective of the image class; we carry out extensive quantitative and qualitative experiments by tuning the parameter $β$ to characterize the relationship between disentanglement, reconstruction, and classification performance. Furthermore, an unsupervised $β$-CapsNet and the corresponding dynamic routing algorithm are proposed for learning disentangled capsules in an unsupervised manner. Extensive empirical evaluations suggest that our $β$-CapsNet achieves state-of-the-art disentanglement performance compared to CapsNet and various baselines on several complex datasets in both supervised and unsupervised settings.
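(A hedged sketch, not taken from the paper itself: based on the abstract's description of a $β$-weighted information bottleneck term converted into a KL divergence, the training objective presumably takes a $β$-VAE-style form, where $\mathcal{L}_{\mathrm{task}}$ denotes the assumed CapsNet task loss (classification plus reconstruction), $q(z \mid x)$ an assumed capsule posterior, and $p(z)$ its prior; these symbols are assumptions introduced only for illustration.)

$$\mathcal{L} = \mathcal{L}_{\mathrm{task}} + β \, D_{\mathrm{KL}}\big(q(z \mid x)\,\|\,p(z)\big)$$

Under this reading, larger $β$ tightens the bottleneck and favors disentanglement, at some cost to reconstruction and classification performance, which is consistent with the trade-off the abstract describes.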