Paper Title

MAEEG: Masked Auto-encoder for EEG Representation Learning

Paper Authors

Hsiang-Yun Sherry Chien, Hanlin Goh, Christopher M. Sandino, Joseph Y. Cheng

Paper Abstract

Decoding information from bio-signals such as EEG using machine learning has been a challenge due to small datasets and the difficulty of obtaining labels. We propose a reconstruction-based self-supervised learning model, the masked auto-encoder for EEG (MAEEG), which learns EEG representations by learning to reconstruct masked EEG features using a transformer architecture. We found that MAEEG can learn representations that significantly improve sleep stage classification (~5% accuracy increase) when only a small number of labels are given. We also found that the input sample length and the masking strategy used during reconstruction-based SSL pretraining have a large effect on downstream model performance. Specifically, learning to reconstruct a larger proportion of the signal, masked in a more concentrated way, results in better performance on sleep classification. Our findings provide insight into how reconstruction-based SSL could help representation learning for EEG.
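The abstract describes the core pretraining idea: encode raw EEG into a feature sequence, replace a proportion of the features with a learned mask token, and train a transformer to reconstruct the masked features. The PyTorch sketch below illustrates that loop under stated assumptions; the module sizes, convolutional front end, mask scheme, and loss are illustrative choices, not the authors' released implementation (the paper reports that the mask ratio and how concentrated the masking is both matter for downstream accuracy).

```python
# Minimal sketch of MAEEG-style reconstruction-based SSL pretraining.
# All names, shapes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class MaskedEEGAutoencoder(nn.Module):
    def __init__(self, n_channels=2, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        # Convolutional front end turns raw EEG into a feature sequence.
        self.encoder_conv = nn.Conv1d(n_channels, d_model,
                                      kernel_size=16, stride=8)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, n_layers)
        # Learned token that stands in for masked feature vectors.
        self.mask_token = nn.Parameter(torch.zeros(d_model))
        self.decoder = nn.Linear(d_model, d_model)

    def forward(self, x, mask_ratio=0.5):
        feats = self.encoder_conv(x).transpose(1, 2)  # (B, T, d_model)
        target = feats.detach()                       # reconstruction target
        B, T, _ = feats.shape
        # Randomly mask a proportion of time steps; the ratio and the
        # contiguity of the mask are the knobs the paper studies.
        mask = torch.rand(B, T, device=x.device) < mask_ratio
        feats = torch.where(mask.unsqueeze(-1),
                            self.mask_token.expand(B, T, -1), feats)
        recon = self.decoder(self.transformer(feats))
        # Reconstruction loss is computed on masked positions only.
        return ((recon - target) ** 2)[mask].mean()

model = MaskedEEGAutoencoder()
eeg = torch.randn(8, 2, 3000)  # batch of two-channel, 3000-sample windows
print(model(eeg).item())       # pretraining loss for one step
```

After pretraining, the convolutional encoder and transformer would be kept and a small classification head fine-tuned on the limited labeled data, which is the regime where the paper reports the ~5% accuracy gain on sleep staging.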
