Paper Title
GANBERT: Generative Adversarial Networks with Bidirectional Encoder Representations from Transformers for MRI to PET synthesis
Paper Authors
Paper Abstract
Synthesizing medical images, such as PET, is a challenging task because the intensity range is much wider and denser than in photographs and digital renderings, and is often heavily biased toward zero. Above all, intensity values in PET have absolute significance and are used to compute parameters that are reproducible across the population. Yet much manual adjustment is usually required in pre-/post-processing when synthesizing PET images, because their intensity ranges can vary widely, e.g., between -100 and 1000 in floating-point values. To overcome these challenges, we adopt the Bidirectional Encoder Representations from Transformers (BERT) algorithm, which has had great success in natural language processing (NLP): wide-range floating-point intensity values are represented as integers ranging from 0 to 10000, resembling a dictionary of natural-language vocabulary. BERT is then trained to predict a proportion of masked values in the images, and its "next sentence prediction (NSP)" head acts as the GAN discriminator. Our proposed approach is able to generate PET images from MRI images across a wide intensity range, with no manual adjustments in pre-/post-processing. It is a method that can scale and is ready to deploy.
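The core mechanism described in the abstract is quantizing wide-range floating-point PET intensities into an integer vocabulary of 0 to 10000, which BERT then treats like word tokens, masking a proportion of them for prediction. Below is a minimal sketch of what such a step might look like; the clipping range, rounding scheme, mask rate, and helper names (intensities_to_tokens, tokens_to_intensities, MASK_ID) are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

# Hypothetical quantization sketch: floating-point intensities (assumed to lie
# roughly in [-100, 1000]) are mapped to integer "tokens" in [0, 10000],
# analogous to a word vocabulary in NLP.
VOCAB_SIZE = 10001                               # integer tokens 0..10000
INTENSITY_MIN, INTENSITY_MAX = -100.0, 1000.0    # assumed clipping range

def intensities_to_tokens(values: np.ndarray) -> np.ndarray:
    """Quantize floating-point intensities into integer tokens 0..10000."""
    clipped = np.clip(values, INTENSITY_MIN, INTENSITY_MAX)
    scaled = (clipped - INTENSITY_MIN) / (INTENSITY_MAX - INTENSITY_MIN)
    return np.round(scaled * (VOCAB_SIZE - 1)).astype(np.int64)

def tokens_to_intensities(tokens: np.ndarray) -> np.ndarray:
    """Invert the quantization back to approximate floating-point values."""
    scaled = tokens.astype(np.float64) / (VOCAB_SIZE - 1)
    return scaled * (INTENSITY_MAX - INTENSITY_MIN) + INTENSITY_MIN

# Example: a few PET-like intensity values round-trip through the vocabulary.
values = np.array([-100.0, 0.0, 42.5, 999.9])
tokens = intensities_to_tokens(values)
print(tokens)                        # [   0  909 1295 9999]
print(tokens_to_intensities(tokens))

# BERT-style masking sketch: a proportion of tokens is replaced by a reserved
# [MASK] id for the model to predict (15% rate and MASK_ID are assumptions).
MASK_ID = VOCAB_SIZE
rng = np.random.default_rng(0)
masked_tokens = np.where(rng.random(tokens.shape) < 0.15, MASK_ID, tokens)
print(masked_tokens)
```

The quantization is lossy by construction (round-trip values are only approximate), but it turns image synthesis into a discrete token-prediction problem that BERT-style masked modeling can handle directly.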