Paper Title


Strong Gravitational Lensing Parameter Estimation with Vision Transformer

Paper Authors

Huang, Kuan-Wei, Chen, Geoff Chih-Fan, Chang, Po-Wen, Lin, Sheng-Chieh, Hsu, Chia-Jung, Thengane, Vishal, Lin, Joshua Yao-Yu

Paper Abstract


Quantifying the parameters and corresponding uncertainties of hundreds of strongly lensed quasar systems holds the key to resolving one of the most important scientific questions: the Hubble constant ($H_{0}$) tension. The commonly used Markov chain Monte Carlo (MCMC) method has been too time-consuming to achieve this goal, yet recent work has shown that convolutional neural networks (CNNs) can be an alternative with seven orders of magnitude improvement in speed. With 31,200 simulated strongly lensed quasar images, we explore the use of the Vision Transformer (ViT) for simulated strong gravitational lensing for the first time. We show that ViT can reach results competitive with CNNs, and is particularly good at certain lensing parameters, including the most important mass-related parameters such as the lens center $\theta_{1}$ and $\theta_{2}$, the ellipticities $e_1$ and $e_2$, and the radial power-law slope $\gamma'$. With this promising preliminary result, we believe the ViT (or attention-based) network architecture can be an important tool for strong lensing science for the next generation of surveys. Our code and data are open-sourced at \url{https://github.com/kuanweih/strong_lensing_vit_resnet}.
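To make the setup concrete, the sketch below frames lens parameter estimation as image regression with a ViT backbone. This is a minimal illustration assuming PyTorch/torchvision, not the authors' implementation: the ViT-B/16 backbone, the five-parameter output (a subset of the quantities named in the abstract), the MSE loss, and the optimizer settings are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of lens parameter
# estimation as image regression with a Vision Transformer backbone.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

# Illustrative subset of lens parameters: theta_1, theta_2, e_1, e_2, gamma'
N_PARAMS = 5

# ViT-B/16 backbone; replace its classification head with a regression head.
model = vit_b_16(weights=None)
in_features = model.heads.head.in_features
model.heads.head = nn.Linear(in_features, N_PARAMS)

criterion = nn.MSELoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

# One training step on a dummy batch standing in for simulated
# lensed-quasar images (torchvision's ViT expects 3x224x224 inputs).
images = torch.randn(8, 3, 224, 224)
targets = torch.randn(8, N_PARAMS)  # ground-truth lens parameters

preds = model(images)
loss = criterion(preds, targets)
loss.backward()
optimizer.step()
```

In practice one would train such a regressor on the simulated image set and compare its per-parameter accuracy against a CNN baseline, which is the kind of comparison the abstract reports.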
