Paired evaluation of machine-learning models characterizes effects of confounders and outliers.
Nariya, Maulik K; Mills, Caitlin E; Sorger, Peter K; Sokolov, Artem.
  • Nariya MK; Laboratory of Systems Pharmacology, Harvard Program in Therapeutic Science, Harvard Medical School, Boston, MA 02115, USA.
  • Mills CE; Department of Systems Biology, Harvard Medical School, Boston, MA 02115, USA.
  • Sorger PK; Laboratory of Systems Pharmacology, Harvard Program in Therapeutic Science, Harvard Medical School, Boston, MA 02115, USA.
  • Sokolov A; Laboratory of Systems Pharmacology, Harvard Program in Therapeutic Science, Harvard Medical School, Boston, MA 02115, USA.
Patterns (N Y); 4(8): 100791, 2023 Aug 11.
Article in En | MEDLINE | ID: mdl-37602225
The true accuracy of a machine-learning model is a population-level statistic that cannot be observed directly. In practice, predictor performance is estimated against one or more test datasets, and the accuracy of this estimate strongly depends on how well the test sets represent all possible unseen datasets. Here we describe paired evaluation as a simple, robust approach for evaluating performance of machine-learning models in small-sample biological and clinical studies. We use the method to evaluate predictors of drug response in breast cancer cell lines and of disease severity in patients with Alzheimer's disease, demonstrating that the choice of test data can cause estimates of performance to vary by as much as 20%. We show that paired evaluation makes it possible to identify outliers, improve the accuracy of performance estimates in the presence of known confounders, and assign statistical significance when comparing machine-learning models.
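The abstract describes paired evaluation: scoring competing models on identical resampled test sets so that per-split score differences can be tested for statistical significance. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names, the 80/20 resampling scheme, and the choice of a two-sided sign test are all assumptions made for the example.

```python
import math
import random

def paired_eval(model_a, model_b, data, n_splits=50, seed=0):
    """Score two models on identical resampled train/test splits.

    model_a, model_b: hypothetical callables (train, test) -> score in [0, 1].
    data: list of samples. Returns a list of (score_a, score_b) pairs,
    one per split, so both models always see the same test data.
    """
    rng = random.Random(seed)
    scores = []
    for _ in range(n_splits):
        shuffled = data[:]
        rng.shuffle(shuffled)
        cut = int(0.8 * len(shuffled))  # assumed 80/20 split for illustration
        train, test = shuffled[:cut], shuffled[cut:]
        scores.append((model_a(train, test), model_b(train, test)))
    return scores

def sign_test(scores):
    """Two-sided sign test on the paired score differences.

    Ties are dropped; the p-value comes from the binomial distribution
    under the null hypothesis that each model wins a split with p = 0.5.
    """
    wins = sum(1 for a, b in scores if a > b)
    losses = sum(1 for a, b in scores if a < b)
    n = wins + losses
    if n == 0:
        return 1.0  # models tied on every split
    k = min(wins, losses)
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

Because both models are scored on the same splits, between-split variability (including split-level confounders) cancels in the paired differences, which is what lends the comparison its statistical power relative to comparing two independent averages.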
Full text: 1 Database: MEDLINE Study type: Prognostic_studies Language: En Year: 2023 Document type: Article