Paper Title
Out of Distribution Reasoning by Weakly-Supervised Disentangled Logic Variational Autoencoder
Paper Authors
Paper Abstract
Out-of-distribution (OOD) detection, i.e., finding test samples drawn from a distribution different from that of the training set, and reasoning about such samples (OOD reasoning) are necessary to ensure the safety of results generated by machine learning models. Recently, there have been promising results for OOD detection in the latent space of variational autoencoders (VAEs). However, without disentanglement, VAEs cannot perform OOD reasoning. Disentanglement ensures a one-to-one mapping between the generative factors of OOD (e.g., rain in image data) and the latent variables to which they are encoded. Previous literature has focused on weakly-supervised disentanglement of simple datasets with known and independent generative factors; in practice, however, full disentanglement through weak supervision is impossible for complex datasets, such as Carla, with unknown and abstract generative factors. We therefore propose an OOD reasoning framework that learns a partially disentangled VAE to reason about complex datasets. Our framework consists of three steps: partitioning the data based on observed generative factors, training a VAE as a logic tensor network that satisfies disentanglement rules, and performing run-time OOD reasoning. We evaluate our approach on the Carla dataset and compare the results against three state-of-the-art methods, finding that our framework outperforms them in terms of both disentanglement and end-to-end OOD reasoning.
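To make the run-time step concrete, the sketch below illustrates one common way OOD reasoning can work in the latent space of a partially disentangled VAE: each monitored generative factor (e.g., rain) has a dedicated latent dimension, and a factor is flagged as OOD when the encoder's posterior on that dimension diverges from the prior beyond a calibrated threshold. This is a minimal assumed sketch, not the paper's implementation; the function names, the factor-to-dimension map, and the thresholds are all illustrative.

```python
# Hedged sketch of latent-space OOD reasoning with a partially
# disentangled VAE. Assumes the encoder outputs a diagonal Gaussian
# posterior (mu, logvar) per image; all names/values are illustrative.
import numpy as np

def kl_to_prior(mu, logvar):
    """Per-dimension KL( N(mu, sigma^2) || N(0, 1) ), closed form."""
    return 0.5 * (np.exp(logvar) + mu**2 - 1.0 - logvar)

def ood_reasoning(mu, logvar, factor_dims, thresholds):
    """For each generative factor, average the KL over its dedicated
    latent dimensions and flag it OOD if the score exceeds a
    calibrated threshold. Returns {factor: (score, is_ood)}."""
    kl = kl_to_prior(mu, logvar)
    report = {}
    for factor, dims in factor_dims.items():
        score = float(kl[dims].mean())
        report[factor] = (score, score > thresholds[factor])
    return report

# Toy encoder outputs for one test image (synthetic, not real data):
# dimension 1 (assumed to encode "rain") is shifted far from the prior.
mu = np.array([0.1, 3.0, -0.2])
logvar = np.array([0.0, 0.0, 0.0])
factor_dims = {"rain": [1], "brightness": [2]}   # assumed mapping
thresholds = {"rain": 1.0, "brightness": 1.0}    # assumed calibration

report = ood_reasoning(mu, logvar, factor_dims, thresholds)
print(report)  # "rain" flagged OOD; "brightness" in-distribution
```

In a real deployment the thresholds would be calibrated on in-distribution validation data, and the factor-to-dimension mapping would come from the weakly-supervised disentanglement step rather than being written by hand.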