Paper Title
K-EmoCon, a multimodal sensor dataset for continuous emotion recognition in naturalistic conversations
Paper Authors
Paper Abstract
Recognizing emotions during social interactions has many potential applications with the popularization of low-cost mobile sensors, but a challenge remains due to the lack of naturalistic affective interaction data. Most existing emotion datasets do not support studying idiosyncratic emotions arising in the wild, as they were collected in constrained environments. Studying emotions in the context of social interactions therefore requires a novel dataset, and K-EmoCon is such a multimodal dataset with comprehensive annotations of continuous emotions during naturalistic conversations. The dataset contains multimodal measurements, including audiovisual recordings, EEG, and peripheral physiological signals, acquired with off-the-shelf devices from 16 sessions of approximately 10-minute-long paired debates on a social issue. Distinct from previous datasets, it includes emotion annotations from all three available perspectives: self, debate partner, and external observers. Raters annotated emotional displays at 5-second intervals while viewing the debate footage, in terms of arousal-valence and 18 additional categorical emotions. The resulting K-EmoCon is the first publicly available emotion dataset to accommodate the multiperspective assessment of emotions during social interactions.
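As a minimal sketch of how a dataset structured this way might be consumed, the Python snippet below pairs each 5-second emotion annotation with the matching window of a physiological signal. The file names and column names (the "seconds", "value", "arousal", and "valence" columns, and the CSV layout itself) are illustrative assumptions for this sketch, not the dataset's documented schema.

import pandas as pd

WINDOW_SEC = 5  # annotations were given once per 5 seconds of footage

def load_annotated_windows(signal_csv, annot_csv):
    """Pair each 5-second annotation with the matching signal window.

    Assumed (hypothetical) schemas:
      signal_csv: columns "seconds" (time since debate start) and "value"
      annot_csv:  columns "seconds", "arousal", "valence"
    """
    signal = pd.read_csv(signal_csv)
    annots = pd.read_csv(annot_csv)
    pairs = []
    for _, row in annots.iterrows():
        # Take the signal samples falling in the 5 seconds before this annotation.
        start, end = row["seconds"] - WINDOW_SEC, row["seconds"]
        window = signal[(signal["seconds"] >= start) & (signal["seconds"] < end)]
        if not window.empty:
            pairs.append((window, row))  # features from window, labels from row
    return pairs

# Hypothetical usage: one participant's peripheral signal and self-reported labels.
# pairs = load_annotated_windows("p01_eda.csv", "p01_self_annotations.csv")

Because the dataset provides the same 5-second annotation grid from self, partner, and external-observer perspectives, the same pairing routine could be run once per perspective to study how the three label sources agree or diverge.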