1. Alzheimers Dement; 19(5): 2135-2149, 2023 May.
Article in English | MEDLINE | ID: mdl-36735865

ABSTRACT

INTRODUCTION: Machine learning research into automated dementia diagnosis is becoming increasingly popular but has so far had limited clinical impact. A key challenge is building robust and generalizable models whose decisions can be reliably explained. Some models are designed to be inherently "interpretable," whereas post hoc "explainability" methods can be applied to other models.

METHODS: Here we sought to summarize the state of the art of interpretable machine learning for dementia.

RESULTS: We identified 92 studies using PubMed, Web of Science, and Scopus. The studies demonstrate promising classification performance but vary in their validation procedures and reporting standards and rely heavily on popular data sets.

DISCUSSION: Future work should involve clinicians in validating explanation methods and drawing conclusive inferences about dementia-related disease pathology. Critically analyzing model explanations also requires an understanding of the interpretability methods themselves. Patient-specific explanations are also required to demonstrate the benefit of interpretable machine learning in clinical practice.
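The abstract distinguishes inherently interpretable models from post hoc explainability methods. The following is a minimal illustrative sketch (not drawn from any of the reviewed studies) of that contrast using scikit-learn: a logistic regression, whose coefficients are directly readable, versus permutation importance applied after training to a black-box classifier. The feature names and data are synthetic placeholders, not real dementia data.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular dementia data set (hypothetical feature names).
X, y = make_classification(n_samples=500, n_features=6, n_informative=3, random_state=0)
feature_names = ["mmse", "hippocampal_vol", "age", "education_yrs", "apoe4", "gait_speed"]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Inherently interpretable: coefficients map directly onto feature contributions.
interpretable = LogisticRegression(max_iter=1000).fit(X_train, y_train)
for name, coef in zip(feature_names, interpretable.coef_[0]):
    print(f"{name}: coefficient {coef:+.3f}")

# Post hoc explainability: explain a black-box model after it has been trained.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(black_box, X_test, y_test, n_repeats=10, random_state=0)
for name, imp in zip(feature_names, result.importances_mean):
    print(f"{name}: permutation importance {imp:.3f}")
```

Note that this sketch yields only population-level feature importances; the patient-specific explanations called for in the abstract would require instance-level methods.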


Subject(s)
Dementia, Machine Learning, Humans, Research Design, Dementia/diagnosis