Title
A Saliency based Feature Fusion Model for EEG Emotion Estimation
Authors
Abstract
Among the different modalities for assessing emotion, the electroencephalogram (EEG), which records the electrical activity of the brain, has yielded promising results over the last decade. Emotion estimation from EEG could help in the diagnosis or rehabilitation of certain diseases. In this paper, we propose a dual model that considers two different representations of EEG feature maps: 1) a sequence-based representation of EEG band power, and 2) an image-based representation of the feature vectors. We also propose an innovative method that combines the information based on a saliency analysis of the image-based model, promoting joint learning of both model parts. The model has been evaluated on four publicly available datasets: SEED-IV, SEED, DEAP and MPED. The results outperform state-of-the-art approaches on three of the four datasets, with a lower standard deviation that reflects higher stability. For the sake of reproducibility, the code and models proposed in this paper are available at https://github.com/VDelv/Emotion-EEG.
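The dual-model idea in the abstract (a sequence branch over band powers, an image branch over feature maps, and a saliency-driven fusion of the two) can be sketched as follows. This is a minimal illustrative toy, not the authors' implementation: the branch functions, shapes, and the finite-difference saliency proxy are all assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def seq_branch(x_seq, w):
    """Sequence branch (hypothetical stand-in): mean-pool the EEG
    band-power sequence over time, then apply a linear projection."""
    return np.tanh(x_seq.mean(axis=0) @ w)          # (d,) feature vector

def img_branch(x_img, k):
    """Image branch (CNN stand-in): elementwise weighting of the
    feature-map image followed by a global average."""
    return np.tanh(float((x_img * k).mean()))       # scalar score

def saliency_map(x_img, k, eps=1e-4):
    """Saliency proxy: finite-difference |d score / d pixel| of the
    image branch, one perturbation per pixel."""
    base = img_branch(x_img, k)
    sal = np.zeros_like(x_img)
    for idx in np.ndindex(x_img.shape):
        bumped = x_img.copy()
        bumped[idx] += eps
        sal[idx] = abs(img_branch(bumped, k) - base) / eps
    return sal

def fuse(x_seq, x_img, w, k):
    """Combine the two branches with a weight derived from the
    image branch's mean saliency (the fusion rule is illustrative)."""
    sal = saliency_map(x_img, k)
    alpha = sal.mean() / (sal.mean() + 1.0)          # weight in (0, 1)
    f_seq = seq_branch(x_seq, w).mean()
    f_img = img_branch(x_img, k)
    return (1 - alpha) * f_seq + alpha * f_img, sal

# Toy inputs: 10 time steps of 5 band powers, and an 8x8 feature image.
x_seq = rng.standard_normal((10, 5))
w = rng.standard_normal((5, 4))
x_img = rng.standard_normal((8, 8))
k = rng.standard_normal((8, 8))

score, sal = fuse(x_seq, x_img, w, k)
```

In the paper, the saliency analysis comes from the trained image-based model itself; here a finite-difference gradient stands in for it so the sketch stays self-contained.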