Title


Test of Artificial Neural Networks in Likelihood-free Cosmological Constraints: A Comparison of IMNN and DAE

Authors

Jie-Feng Chen, Yu-Chen Wang, Tingting Zhang, Tong-Jie Zhang

Abstract


In constraining cosmological parameters with observational Hubble data (OHD) and type Ia supernova data, the combination of the Masked Autoregressive Flow (MAF) and the Denoising Autoencoder (DAE) performs well. This combination extracts features from OHD with the DAE and estimates the posterior distribution of cosmological parameters with the MAF. We ask whether a better tool can be found to compress large data sets so as to obtain better results when constraining the cosmological parameters. The Information Maximising Neural Network (IMNN), a simulation-based machine learning technique, was proposed earlier; in a series of numerical examples, it was shown to find optimal, non-linear summaries robustly. In this work, we mainly compare the dimensionality-reduction capabilities of the IMNN and the DAE. We use the IMNN and the DAE to compress the data into different dimensions and set different learning rates for the MAF to calculate the posterior. Meanwhile, the training data and mock OHD are generated with a simple Gaussian likelihood, the spatially flat ΛCDM model, and the real OHD data. To avoid the complex calculation involved in comparing the posteriors directly, we set different criteria to compare the IMNN and the DAE.
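The abstract's mock-OHD generation step can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it assumes the spatially flat ΛCDM expansion rate H(z) = H0 √(Ωm(1+z)³ + 1−Ωm) and independent Gaussian errors; the redshift grid, fiducial parameters (H0 = 70, Ωm = 0.3), and the 5% error level are placeholders rather than the real OHD catalogue.

```python
import numpy as np

def hubble_flat_lcdm(z, H0=70.0, Omega_m=0.3):
    """H(z) in km/s/Mpc for a spatially flat LambdaCDM model."""
    return H0 * np.sqrt(Omega_m * (1.0 + z) ** 3 + (1.0 - Omega_m))

def mock_ohd(z, sigma, H0=70.0, Omega_m=0.3, rng=None):
    """Draw one mock OHD realisation with independent Gaussian errors sigma."""
    rng = np.random.default_rng(rng)
    return hubble_flat_lcdm(z, H0, Omega_m) + rng.normal(0.0, sigma)

# Placeholder redshift grid and ~5% error bars for the sketch
z = np.linspace(0.1, 2.0, 31)
sigma = 0.05 * hubble_flat_lcdm(z)
H_mock = mock_ohd(z, sigma, rng=42)   # one simulated H(z) data vector
```

Repeating `mock_ohd` with different parameter values and random seeds yields the simulation sets that a compressor (IMNN or DAE) is trained on before the MAF estimates the posterior from the compressed summaries.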
