Paper Title
On self-supervised multi-modal representation learning: An application to Alzheimer's disease
Paper Authors
Paper Abstract
Introspection of deep supervised predictive models trained on functional and structural brain imaging may uncover novel markers of Alzheimer's disease (AD). However, supervised training is prone to learning from spurious features (shortcut learning), impairing its value in the discovery process. Deep unsupervised and, more recently, contrastive self-supervised approaches, which are not biased toward classification, are better candidates for this task. Their multimodal variants specifically offer additional regularization via modality interactions. In this paper, we introduce a way to exhaustively consider multimodal architectures for contrastive self-supervised fusion of fMRI and MRI of AD patients and controls. We show that this multimodal fusion yields representations that improve downstream classification results for both modalities. We investigate the fused self-supervised features projected into the brain space and introduce a numerically stable way to do so.
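The contrastive self-supervised fusion described above pairs embeddings of the two modalities (fMRI and MRI) from the same subject as positives and embeddings from different subjects as negatives. A common objective for this setup is a symmetric InfoNCE loss, sketched below as a minimal NumPy illustration; the function name, temperature value, and use of NumPy are assumptions for demonstration, not the paper's exact implementation:

```python
import numpy as np

def info_nce(z_a, z_b, temperature=0.1):
    """Symmetric InfoNCE loss between two modality embeddings.

    z_a, z_b: (batch, dim) L2-normalized embeddings, e.g. from an fMRI
    encoder and an MRI encoder, where matching rows come from the same
    subject (positive pairs) and all other rows act as negatives.
    This is a generic CLIP-style sketch, not the paper's exact loss.
    """
    # Cosine-similarity matrix, sharpened by the temperature
    logits = z_a @ z_b.T / temperature
    # Positives lie on the diagonal: row i in A matches row i in B
    labels = np.arange(len(z_a))

    def xent(l):
        # Numerically stable log-softmax cross-entropy per row
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the A->B and B->A directions for symmetry
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing this loss pulls the two modalities' embeddings of the same subject together while pushing apart embeddings of different subjects, which is the regularizing "modality interaction" the abstract refers to.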