Paper Title
From calibration to parameter learning: Harnessing the scaling effects of big data in geoscientific modeling
Paper Authors
Paper Abstract
The behaviors and skills of models in many geosciences (e.g., hydrology and ecosystem sciences) strongly depend on spatially varying parameters that need calibration. A well-calibrated model can reasonably propagate information from observations to unobserved variables via model physics, but traditional calibration is highly inefficient and results in non-unique solutions. Here we propose a novel differentiable parameter learning (dPL) framework that efficiently learns a global mapping between inputs (and optionally responses) and parameters. Crucially, dPL exhibits beneficial scaling curves not previously demonstrated to geoscientists: as training data increases, dPL achieves better performance, more physical coherence, and better generalizability (across space and to uncalibrated variables), all with orders-of-magnitude lower computational cost. We demonstrate examples trained on soil moisture and streamflow data, where dPL drastically outperformed existing evolutionary and regionalization methods, or required only ~12.5% of the training data to achieve similar performance. The generic scheme promotes the integration of deep learning and process-based models, without mandating reimplementation.
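The core idea of dPL — a network that maps basin attributes to process-model parameters, trained end-to-end against observations through the differentiable process model — can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the "process model" is a one-parameter linear reservoir, the parameter network is a single linear layer with a sigmoid, and gradients are written out by hand rather than via an autodiff framework; none of this reflects the paper's actual models or code.

```python
import numpy as np

# Toy differentiable parameter learning (dPL) sketch.
# Hypothetical setup: a linear-reservoir "process model" q = k * storage,
# and a parameter network g(attrs) -> k trained end-to-end on observed q.

rng = np.random.default_rng(0)

n_basins, n_attr = 50, 3
attrs = rng.normal(size=(n_basins, n_attr))       # static basin attributes
true_w = np.array([0.5, -0.3, 0.2])               # hidden attribute->parameter link
k_true = 1 / (1 + np.exp(-(attrs @ true_w)))      # "true" parameter per basin
storage = rng.uniform(1.0, 2.0, size=n_basins)    # fixed forcing/state
q_obs = k_true * storage                          # synthetic observations

w = np.zeros(n_attr)                              # parameter-network weights
lr = 0.5
for _ in range(500):
    k = 1 / (1 + np.exp(-(attrs @ w)))            # network predicts parameter in (0, 1)
    q = k * storage                               # differentiable process model forward
    err = q - q_obs
    # chain rule: loss -> process model -> sigmoid -> network weights
    grad = attrs.T @ (err * storage * k * (1 - k)) / n_basins
    w -= lr * grad

mse = float(np.mean((k - k_true) ** 2))           # parameter-recovery error
print(f"parameter MSE: {mse:.6f}")
```

Because the loss is backpropagated *through* the process model into the network weights, the learned mapping generalizes across basins — the scaling behavior the abstract highlights — rather than fitting each basin's parameter independently as traditional calibration does.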