Paper Title
Weakly Supervised Invariant Representation Learning Via Disentangling Known and Unknown Nuisance Factors
Paper Authors
Paper Abstract
Disentangled and invariant representations are two critical goals of representation learning, and many approaches have been proposed to achieve one or the other. However, these two goals are actually complementary, so we propose a framework that accomplishes both simultaneously. We introduce a weakly supervised signal to learn a disentangled representation consisting of three splits that contain predictive, known nuisance, and unknown nuisance information, respectively. Furthermore, we incorporate a contrastive method to enforce representation invariance. Experiments show that the proposed method outperforms state-of-the-art (SOTA) methods on four standard benchmarks, and that it achieves better adversarial defense than other methods that do not use adversarial training.
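The following is a minimal sketch, not the authors' implementation, of the high-level idea described in the abstract: an encoder output is partitioned into predictive, known-nuisance, and unknown-nuisance splits, and an InfoNCE-style contrastive loss encourages the predictive split to be invariant across two views of the same input. All class names, dimensions, and hyperparameters below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SplitEncoder(nn.Module):
    """Hypothetical encoder whose latent code is split into three parts."""
    def __init__(self, in_dim=784, pred_dim=32, known_dim=8, unknown_dim=24):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, pred_dim + known_dim + unknown_dim),
        )
        self.dims = (pred_dim, known_dim, unknown_dim)

    def forward(self, x):
        z = self.backbone(x)
        # Partition the latent code into the three splits named in the abstract.
        z_pred, z_known, z_unknown = torch.split(z, self.dims, dim=1)
        return z_pred, z_known, z_unknown

def contrastive_invariance_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss: the predictive splits of two views of the same
    sample are pulled together; other samples in the batch act as negatives."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature          # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage sketch: random tensors stand in for two augmented views of a batch.
encoder = SplitEncoder()
x1, x2 = torch.randn(16, 784), torch.randn(16, 784)
z1_pred, _, _ = encoder(x1)
z2_pred, _, _ = encoder(x2)
loss = contrastive_invariance_loss(z1_pred, z2_pred)
loss.backward()
```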