Paper Title
ModSelect: Automatic Modality Selection for Synthetic-to-Real Domain Generalization
Authors
Abstract
Modality selection is an important step when designing multimodal systems, especially in the case of cross-domain activity recognition, as certain modalities are more robust to domain shift than others. However, selecting only the modalities which have a positive contribution requires a systematic approach. We tackle this problem by proposing an unsupervised modality selection method (ModSelect), which does not require any ground-truth labels. We determine the correlation between the predictions of multiple unimodal classifiers and the domain discrepancy between their embeddings. Then, we systematically compute modality selection thresholds, which select only modalities with a high correlation and low domain discrepancy. We show in our experiments that our method ModSelect chooses only modalities with positive contributions and consistently improves the performance on a Synthetic-to-Real domain adaptation benchmark, narrowing the domain gap.
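The selection rule described above (keep modalities whose unimodal predictions correlate highly and whose embeddings show low domain discrepancy) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the dictionary inputs, and in particular the choice of the per-metric mean as the threshold are assumptions made here for demonstration, whereas the paper computes its thresholds systematically.

```python
# Hedged sketch of the modality-selection rule from the abstract.
# ASSUMPTION: thresholds are taken as the mean over modalities;
# the actual thresholding procedure in the paper may differ.

def select_modalities(correlation, discrepancy):
    """Keep modalities with high correlation and low domain discrepancy.

    correlation: dict, modality name -> correlation score of its
                 unimodal classifier's predictions (higher is better).
    discrepancy: dict, modality name -> domain discrepancy between its
                 source and target embeddings (lower is better).
    """
    corr_thresh = sum(correlation.values()) / len(correlation)
    disc_thresh = sum(discrepancy.values()) / len(discrepancy)
    return [m for m in correlation
            if correlation[m] >= corr_thresh
            and discrepancy[m] <= disc_thresh]

# Hypothetical example values for three modalities
corr = {"rgb": 0.8, "depth": 0.7, "flow": 0.3}
disc = {"rgb": 0.2, "depth": 0.4, "flow": 0.9}
print(select_modalities(corr, disc))  # prints ['rgb', 'depth']
```

With these toy numbers, "flow" is dropped because its correlation falls below the threshold, matching the intuition that modalities which are less robust under domain shift should be excluded.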