Paper Title
Error Correction Code Transformer
Paper Authors
Paper Abstract
Error correction codes are a major component of the communication physical layer, ensuring the reliable transfer of data over noisy channels. Recently, neural decoders were shown to outperform classical decoding techniques. However, existing neural approaches exhibit strong overfitting due to exponential training complexity, or a restrictive inductive bias due to their reliance on Belief Propagation. Transformers have meanwhile become the method of choice in many applications thanks to their ability to represent complex interactions between elements. In this work, we propose to extend, for the first time, the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths. We embed each channel output element into a high-dimensional space for a better representation of the bit information, which is then processed separately. This element-wise processing allows the analysis of the channel output reliability, while the algebraic code structure and the interactions between bits are inserted into the model via an adapted masked self-attention module. The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
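To make the masked self-attention idea concrete, below is a minimal PyTorch sketch, not the authors' implementation, of one plausible way to derive an attention mask from a parity-check matrix H of an (n, k) linear code: tokens for bits and parity checks attend to each other only when they participate in a common check. The helper name build_code_mask, the bipartite bit/check token layout, and the Hamming(7,4) matrix are illustrative assumptions; the paper's exact construction may differ.

```python
import torch
import torch.nn as nn

def build_code_mask(H: torch.Tensor) -> torch.Tensor:
    """Hypothetical sketch: derive a boolean self-attention mask from a
    parity-check matrix H of shape (n - k, n). True means "may attend"."""
    n_checks, n_bits = H.shape
    size = n_bits + n_checks                    # one token per bit and per check
    mask = torch.eye(size, dtype=torch.bool)    # every token attends to itself
    # a bit token and a check token may attend to each other iff H[j, i] == 1
    mask[:n_bits, n_bits:] = H.t().bool()
    mask[n_bits:, :n_bits] = H.bool()
    # two bit tokens may attend to each other iff they share a parity check
    shared = (H.t().float() @ H.float()) > 0
    mask[:n_bits, :n_bits] |= shared
    return mask

# Usage with an illustrative Hamming(7,4) parity-check matrix
H = torch.tensor([[1, 1, 0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])
mask = build_code_mask(H)
attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
x = torch.randn(1, mask.shape[0], 32)            # one embedding per bit/check token
out, _ = attn(x, x, x, attn_mask=~mask)          # True in attn_mask blocks attention
```

The key design point the abstract alludes to is that the channel output reliabilities are handled by the per-element embeddings, while all knowledge of the specific code enters only through a mask of this kind, which is what lets the same architecture handle arbitrary linear codes and block lengths.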