Paper Title

Toward reliable signals decoding for electroencephalogram: A benchmark study to EEGNeX

Paper Authors

Xia Chen, Xiangbin Teng, Han Chen, Yafeng Pan, Philipp Geyer

Paper Abstract

This study examines the efficacy of various neural network (NN) models in interpreting mental constructs via electroencephalogram (EEG) signals. Through the assessment of 16 prevalent NN models and their variants across four brain-computer interface (BCI) paradigms, we gauged their information representation capability. Rooted in comprehensive literature review findings, we proposed EEGNeX, a novel, purely ConvNet-based architecture. We pitted it against both existing cutting-edge strategies and the Mother of All BCI Benchmarks (MOABB) involving 11 distinct EEG motor imagination (MI) classification tasks and revealed that EEGNeX surpasses other state-of-the-art methods. Notably, it shows up to 2.1%-8.5% improvement in the classification accuracy in different scenarios with statistical significance (p < 0.05) compared to its competitors. This study not only provides deeper insights into designing efficient NN models for EEG data but also lays groundwork for future explorations into the relationship between bioelectric brain signals and NN architectures. For the benefit of broader scientific collaboration, we have made all benchmark models, including EEGNeX, publicly available at (https://github.com/chenxiachan/EEGNeX).
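
For context, the motor-imagery (MI) benchmark data referenced in the abstract are distributed through MOABB's public Python API. The snippet below is a minimal, illustrative sketch, not the authors' pipeline: it assumes MOABB is installed and uses its BNCI2014001 dataset class (the class name may differ slightly across MOABB releases); it only loads epoched MI trials that could then be passed to any of the benchmarked models.

```python
# Minimal sketch: fetching a motor-imagery dataset via MOABB.
# Assumptions: moabb is installed; the dataset class name may vary by release.
from moabb.datasets import BNCI2014001
from moabb.paradigms import MotorImagery

dataset = BNCI2014001()               # 4-class MI dataset (BCI Competition IV 2a)
paradigm = MotorImagery(n_classes=4)  # defines epoching/filtering for the MI paradigm

# X: (n_trials, n_channels, n_times); labels: class names; meta: per-trial metadata
X, labels, meta = paradigm.get_data(dataset=dataset, subjects=[1])
print(X.shape, set(labels))

# X and labels could then be fed to a benchmarked model such as the EEGNeX
# implementation from https://github.com/chenxiachan/EEGNeX (not shown here).
```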
