Paper Title
On Pitfalls of Identifiability in Unsupervised Learning. A Note on: "Desiderata for Representation Learning: A Causal Perspective"
Paper Authors
Paper Abstract
Model identifiability is a desirable property in the context of unsupervised representation learning. In absence thereof, different models may be observationally indistinguishable while yielding representations that are nontrivially related to one another, thus making the recovery of a ground truth generative model fundamentally impossible, as often shown through suitably constructed counterexamples. In this note, we discuss one such construction, illustrating a potential failure case of an identifiability result presented in "Desiderata for Representation Learning: A Causal Perspective" by Wang & Jordan (2021). The construction is based on the theory of nonlinear independent component analysis. We comment on implications of this and other counterexamples for identifiable representation learning.
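The abstract's central point can be illustrated with a toy example. The sketch below is not the nonlinear ICA construction discussed in the note; it is a simpler, classical linear Gaussian analogue (my own illustrative assumption): two generative models whose latents differ by a rotation produce identically distributed observations, so the ground-truth latents cannot be recovered from the data alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Model A: observed x = z, with standard Gaussian latents z.
z_a = rng.standard_normal((n, 2))
x_a = z_a

# Model B: observed x = R z, where R is an orthogonal rotation.
# The standard Gaussian is rotation-invariant, so x is distributed
# exactly as in Model A, even though the latents differ by R.
theta = np.pi / 4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
z_b = rng.standard_normal((n, 2))
x_b = z_b @ R.T

# Observationally indistinguishable: the empirical second moments agree,
# yet inverting each model's mixing recovers latents that differ by a
# nontrivial rotation -- the ground truth is not identifiable.
print(np.allclose(np.cov(x_a.T), np.cov(x_b.T), atol=0.05))
```

The nonlinear ICA counterexamples referenced in the note generalize this idea: measure-preserving nonlinear transformations of the latent space can play the role that the rotation plays here.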