Paper Title

How is model-related uncertainty quantified and reported in different disciplines?

Paper Authors

Simmonds, Emily G., Adjei, Kwaku Peprah, Andersen, Christoffer Wold, Aspheim, Janne Cathrin Hetle, Battistin, Claudia, Bulso, Nicola, Christensen, Hannah, Cretois, Benjamin, Cubero, Ryan, Davidovich, Ivan A., Dickel, Lisa, Dunn, Benjamin, Dunn-Sigouin, Etienne, Dyrstad, Karin, Einum, Sigurd, Giglio, Donata, Gjerlow, Haakon, Godefroidt, Amelie, Gonzalez-Gil, Ricardo, Cogno, Soledad Gonzalo, Grosse, Fabian, Halloran, Paul, Jensen, Mari F., Kennedy, John James, Langsaether, Peter Egge, Laverick, Jack H., Lederberger, Debora, Li, Camille, Mandeville, Elizabeth, Mandeville, Caitlin, Moe, Espen, Schroder, Tobias Navarro, Nunan, David, Parada, Jorge Sicacha, Simpson, Melanie Rae, Skarstein, Emma Sofie, Spensberger, Clemens, Stevens, Richard, Subramanian, Aneesh, Svendsen, Lea, Theisen, Ole Magnus, Watret, Connor, O'Hara, Robert B.

Abstract

How do we know how much we know? Quantifying uncertainty associated with our modelling work is the only way we can answer how much we know about any phenomenon. With quantitative science now highly influential in the public sphere and the results from models translating into action, we must support our conclusions with sufficient rigour to produce useful, reproducible results. Incomplete consideration of model-based uncertainties can lead to false conclusions with real world impacts. Despite these potentially damaging consequences, uncertainty consideration is incomplete both within and across scientific fields. We take a unique interdisciplinary approach and conduct a systematic audit of model-related uncertainty quantification from seven scientific fields, spanning the biological, physical, and social sciences. Our results show no single field is achieving complete consideration of model uncertainties, but together we can fill the gaps. We propose opportunities to improve the quantification of uncertainty through use of a source framework for uncertainty consideration, model type specific guidelines, improved presentation, and shared best practice. We also identify shared outstanding challenges (uncertainty in input data, balancing trade-offs, error propagation, and defining how much uncertainty is required). Finally, we make nine concrete recommendations for current practice (following good practice guidelines and an uncertainty checklist, presenting uncertainty numerically, and propagating model-related uncertainty into conclusions), future research priorities (uncertainty in input data, quantifying uncertainty in complex models, and the importance of missing uncertainty in different contexts), and general research standards across the sciences (transparency about study limitations and dedicated uncertainty sections of manuscripts).
