Paper Title
Leakage and the Reproducibility Crisis in ML-based Science
Paper Authors
Paper Abstract
The use of machine learning (ML) methods for prediction and forecasting has become widespread across the quantitative sciences. However, ML-based science suffers from many known methodological pitfalls, including data leakage. In this paper, we systematically investigate reproducibility issues in ML-based science. We show that data leakage is indeed a widespread problem and has led to severe reproducibility failures. Specifically, through a survey of literature in research communities that have adopted ML methods, we identify 17 fields in which errors have been found, collectively affecting 329 papers and in some cases leading to wildly overoptimistic conclusions. Based on our survey, we present a fine-grained taxonomy of 8 types of leakage that range from textbook errors to open research problems. We argue for fundamental methodological changes to ML-based science so that cases of leakage can be caught before publication. To that end, we propose model info sheets for reporting scientific claims based on ML models that would address all types of leakage identified in our survey. To investigate the impact of reproducibility errors and the efficacy of model info sheets, we undertake a reproducibility study in a field where complex ML models are believed to vastly outperform older statistical models such as Logistic Regression (LR): civil war prediction. We find that all papers claiming the superior performance of complex ML models compared to LR models fail to reproduce due to data leakage, and that complex ML models do not perform substantively better than decades-old LR models. While none of these errors could have been caught by reading the papers, model info sheets would enable the detection of leakage in each case.
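The abstract describes leakage only abstractly. As a concrete illustration, the following is a minimal, hypothetical sketch of one "textbook" leakage type from the kind of taxonomy the paper describes: preprocessing fit on the full dataset before the train/test split, contrasted with a leak-free pipeline. The scikit-learn usage and the synthetic data are our assumptions for illustration; this is not code from the paper, and it does not claim to reproduce the specific errors found in the surveyed studies.

```python
# Sketch of a textbook leakage error: an imputer fit on the full dataset
# sees held-out rows, so test-set statistics inform training. Contrast
# with a leak-free protocol that splits first and fits all preprocessing
# inside a pipeline. Synthetic data; purely illustrative.
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + rng.normal(scale=2.0, size=1000) > 0).astype(int)
X[rng.random(X.shape) < 0.2] = np.nan  # inject missing values

# Leaky protocol: the imputer is fit before the split, on all rows.
X_imputed = SimpleImputer(strategy="mean").fit_transform(X)
Xtr, Xte, ytr, yte = train_test_split(X_imputed, y, random_state=0)
leaky = LogisticRegression().fit(Xtr, ytr)

# Leak-free protocol: split first, then fit preprocessing and model
# together in a pipeline so they only ever see the training rows.
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clean = make_pipeline(SimpleImputer(strategy="mean"),
                      LogisticRegression()).fit(Xtr, ytr)

print("leaky test accuracy:", leaky.score(Xte, yte))
print("clean test accuracy:", clean.score(Xte, yte))
```

Both protocols can report similar headline numbers on simple data, which is precisely why such errors are invisible when only reading a paper; the model info sheets the authors propose are aimed at making this kind of protocol detail explicit so that leakage can be detected before publication.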