Paper Title

Feedback is Good, Active Feedback is Better: Block Attention Active Feedback Codes

Paper Authors

Emre Ozfatura, Yulin Shao, Amin Ghazanfari, Alberto Perotti, Branislav Popovic, Deniz Gunduz

Paper Abstract

Deep neural network (DNN)-assisted channel coding designs, such as low-complexity neural decoders for existing codes or end-to-end neural-network-based auto-encoder designs, have recently gained interest due to their improved performance and flexibility, particularly for communication scenarios in which high-performing structured code designs do not exist. Communication in the presence of feedback is one such scenario, and practical code design for feedback channels has remained an open challenge in coding theory for decades. Recently, DNN-based designs have shown impressive results in exploiting feedback. In particular, generalized block attention feedback (GBAF) codes, which utilize the popular transformer architecture, achieved a significant improvement in block error rate (BLER) performance. However, previous works have focused mainly on passive feedback, where the transmitter observes a noisy version of the signal at the receiver. In this work, we show that GBAF codes can also be used for channels with active feedback. We implement a pair of transformer architectures, one at the transmitter and one at the receiver, which interact with each other sequentially and achieve a new state-of-the-art BLER performance, especially in the low-SNR regime.
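
To make the sequential transmitter-receiver interaction concrete, below is a minimal PyTorch sketch of an active-feedback round-by-round loop with two small transformers. This is a toy illustration of the interaction pattern described in the abstract only, not the paper's GBAF architecture or training setup; all dimensions, SNR values, and names (InteractionModel, awgn, K, N) are illustrative assumptions.

```python
# Toy sketch of active-feedback coding: two transformers take turns over noisy
# channels. Hyperparameters and module design are assumptions for illustration.
import torch
import torch.nn as nn

class InteractionModel(nn.Module):
    """One transformer that maps the symbols seen so far to the next symbol."""
    def __init__(self, d_in, d_model=32, nhead=4):
        super().__init__()
        self.embed = nn.Linear(d_in, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)   # emit one channel symbol per round

    def forward(self, x):                   # x: (batch, rounds_so_far, d_in)
        h = self.encoder(self.embed(x))
        return self.head(h[:, -1])          # symbol for the current round

def awgn(x, snr_db):
    """Additive white Gaussian noise at the given SNR (unit power assumed;
    a real system would also enforce a transmit power constraint)."""
    sigma = 10 ** (-snr_db / 20)
    return x + sigma * torch.randn_like(x)

# Hypothetical sizes: K message bits sent over N interaction rounds.
K, N, batch = 6, 9, 4
bits = torch.randint(0, 2, (batch, K)).float()

tx = InteractionModel(d_in=K + 1)   # sees message bits + latest feedback symbol
rx = InteractionModel(d_in=1)       # sees the sequence of received symbols

tx_hist = torch.zeros(batch, 0, K + 1)
rx_hist = torch.zeros(batch, 0, 1)
fb = torch.zeros(batch, 1)          # receiver's initial (empty) feedback
for _ in range(N):
    # Transmitter turn: encode bits + feedback so far into the next coded symbol.
    tx_in = torch.cat([bits, fb], dim=1).unsqueeze(1)
    tx_hist = torch.cat([tx_hist, tx_in], dim=1)
    c = tx(tx_hist)
    y = awgn(c, snr_db=1.0)          # noisy forward channel
    # Receiver turn: *actively* compute the feedback symbol with its own
    # network, rather than echoing y back (which would be passive feedback).
    rx_hist = torch.cat([rx_hist, y.unsqueeze(1)], dim=1)
    fb = awgn(rx(rx_hist), snr_db=10.0)  # noisy feedback channel
```

In the passive-feedback setting, the last line would simply be fb = awgn(y, ...), i.e., the transmitter observing a noisy echo of the channel output; here the receiver's own transformer shapes the feedback, which is what "active" refers to. To complete the sketch as an end-to-end auto-encoder, one would attach a decoder head to rx_hist and train both networks jointly, e.g., with cross-entropy against bits; none of this reflects the paper's actual hyperparameters.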
