Paper Title
Capturing cross-session neural population variability through self-supervised identification of consistent neuron ensembles
Paper Authors
Paper Abstract
Decoding stimuli or behaviour from recorded neural activity is a common approach to interrogate brain function in research, and an essential part of brain-computer and brain-machine interfaces. Reliable decoding even from small neural populations is possible because high-dimensional neural population activity typically occupies low-dimensional manifolds that are discoverable with suitable latent variable models. Over time, however, drifts in the activity of individual neurons and instabilities in neural recording devices can be substantial, making stable decoding over days and weeks impractical. While this drift cannot be predicted at the level of individual neurons, population-level variations over consecutive recording sessions, such as differing sets of neurons and varying permutations of consistent neurons in the recorded data, may be learnable when the underlying manifold is stable over time. Classifying consistent versus unfamiliar neurons across sessions, and accounting for deviations in the ordering of consistent neurons across recorded datasets, may then maintain decoding performance. In this work, we show that self-supervised training of a deep neural network can be used to compensate for this inter-session variability. As a result, a sequential autoencoding model can maintain state-of-the-art behaviour decoding performance for completely unseen recording sessions several days into the future. Our approach only requires a single recording session for training the model, and is a step towards reliable, recalibration-free brain-computer interfaces.
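To make the decoding setup concrete, below is a minimal sketch of a sequential autoencoder that compresses neural population activity into low-dimensional latents and reads out behaviour from them. This is not the authors' implementation: the layer choices, dimensions, and loss weighting are illustrative assumptions, and the paper's key self-supervised step of classifying consistent versus unfamiliar neurons and re-ordering them across sessions is not shown here.

```python
# Hedged sketch: a sequential autoencoder for behaviour decoding from neural activity.
# All names and dimensions are assumptions for illustration, not taken from the paper.
import torch
import torch.nn as nn


class SequentialAutoencoder(nn.Module):
    def __init__(self, n_neurons: int, latent_dim: int = 16, behaviour_dim: int = 2):
        super().__init__()
        # Encoder RNN: maps spike-count sequences (time x neurons) to latent trajectories.
        self.encoder = nn.GRU(n_neurons, latent_dim, batch_first=True)
        # Reconstruction head: self-supervised objective on the neural activity itself.
        self.reconstruct = nn.Linear(latent_dim, n_neurons)
        # Behaviour readout from the same latents (e.g. 2D cursor or hand velocity).
        self.readout = nn.Linear(latent_dim, behaviour_dim)

    def forward(self, spikes: torch.Tensor):
        latents, _ = self.encoder(spikes)           # (batch, time, latent_dim)
        return self.reconstruct(latents), self.readout(latents)


# Toy usage: one training session with 80 neurons, 100 time bins, 2D behaviour.
model = SequentialAutoencoder(n_neurons=80)
spikes = torch.randn(32, 100, 80)                   # surrogate neural data
behaviour = torch.randn(32, 100, 2)                 # surrogate behaviour traces
recon, pred = model(spikes)
loss = nn.functional.mse_loss(recon, spikes) + nn.functional.mse_loss(pred, behaviour)
loss.backward()
```

In this reading, decoding from an unseen future session would additionally require mapping that session's neurons onto the training session's input ordering, which is the cross-session matching problem the abstract describes.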