Title
A Self-Supervised Approach to Reconstruction in Sparse X-Ray Computed Tomography
Authors
Abstract
Computed tomography has propelled scientific advances in fields from biology to materials science. This technology allows for the elucidation of 3-dimensional internal structure by the attenuation of X-rays through an object at different rotations relative to the beam. By imaging 2-dimensional projections, a 3-dimensional object can be reconstructed through a computational algorithm. Imaging at a greater number of rotation angles allows for improved reconstruction. However, taking more measurements increases the X-ray dose and may cause sample damage. Deep neural networks have been used to transform sparse 2-dimensional projection measurements into a 3-dimensional reconstruction by training on a dataset of known similar objects. However, obtaining high-quality object reconstructions for the training dataset requires high-dose X-ray measurements that can destroy or alter the specimen before imaging is complete. This becomes a chicken-and-egg problem: high-quality reconstructions cannot be generated without deep learning, and the deep neural network cannot be trained without the reconstructions. This work develops and validates a self-supervised probabilistic deep learning technique, the physics-informed variational autoencoder, to solve this problem. A dataset consisting solely of sparse projection measurements from each object is used to jointly reconstruct all objects of the set. This approach has the potential to allow visualization of fragile samples with X-ray computed tomography. We release our code for reproducing our results at https://github.com/vganapati/CT_PVAE.
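To make the sparse-view problem concrete, the following is a minimal sketch (not the paper's physics-informed variational autoencoder) of a discrete parallel-beam forward model and an unfiltered backprojection in NumPy/SciPy. All function and variable names here are illustrative; it only demonstrates that reconstructing from few rotation angles leaves streak artifacts that a denser angle set averages out.

```python
import numpy as np
from scipy.ndimage import rotate

def project(img, angles_deg):
    """Discrete parallel-beam Radon transform: rotate the image to each
    angle and sum along rows, giving one 1-D projection per angle."""
    return np.stack([
        rotate(img, a, reshape=False, order=1).sum(axis=0)
        for a in angles_deg
    ])

def backproject(sino, angles_deg, size):
    """Crude (unfiltered) backprojection: smear each 1-D projection
    across the grid, rotate it back into place, and average."""
    recon = np.zeros((size, size))
    for a, p in zip(angles_deg, sino):
        smear = np.tile(p, (size, 1))  # constant along the beam direction
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon / len(angles_deg)

def corr(a, b):
    """Pearson correlation between two images, as a rough quality score."""
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

# A tiny phantom: one bright off-center square.
size = 64
phantom = np.zeros((size, size))
phantom[20:30, 35:45] = 1.0

# Dense vs. sparse angle sets over the half-circle.
dense_angles = np.linspace(0.0, 180.0, 60, endpoint=False)
sparse_angles = np.linspace(0.0, 180.0, 6, endpoint=False)

recon_dense = backproject(project(phantom, dense_angles), dense_angles, size)
recon_sparse = backproject(project(phantom, sparse_angles), sparse_angles, size)
```

Comparing `corr(recon_dense, phantom)` against `corr(recon_sparse, phantom)` shows the dense-angle reconstruction tracking the true object more closely, which is the measurement-versus-dose trade-off the self-supervised approach aims to relax.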