ABSTRACT
Introduction: Artificial intelligence (AI) systems leveraging speech and language changes could support timely detection of Alzheimer's disease (AD). Methods: The AMYPRED study (NCT04828122) recruited 133 subjects with an established amyloid beta (Aβ) biomarker (66 Aβ+, 67 Aβ-) and clinical status (71 cognitively unimpaired [CU], 62 mild cognitive impairment [MCI] or mild AD). Daily story recall tasks were administered via smartphones and analyzed with an AI system to predict MCI/mild AD and Aβ positivity. Results: Eighty-six percent of participants (115/133) completed remote assessments. The AI system predicted MCI/mild AD (area under the curve [AUC] = 0.85 ±0.07) but not Aβ (AUC = 0.62 ±0.11) in the full sample, and predicted Aβ in clinical subsamples (MCI/mild AD: AUC = 0.78 ±0.14; CU: AUC = 0.74 ±0.13) on short story variants (immediate recall). Long stories and delayed retellings delivered broadly similar results. Discussion: Speech-based testing offers simple and accessible screening for early-stage AD.
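The abstract reports each AUC with an uncertainty band, but does not state how those intervals were obtained. As an illustration only, a bootstrap over participants is one common way to attach such a band to an AUC; the sketch below uses synthetic labels and placeholder scores, not the study's data.

```python
# Minimal sketch (not the study's pipeline): AUC plus a bootstrap interval for
# a binary screening classifier, given per-participant scores and labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = np.array([0] * 67 + [1] * 66)                     # e.g. CU vs MCI/mild AD reference labels
y_score = y_true * 0.6 + rng.normal(0.0, 0.5, size=133)    # placeholder model scores

auc = roc_auc_score(y_true, y_score)

# Bootstrap resampling to express uncertainty (an assumption; the study's
# exact interval method is not reported in the abstract).
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if len(np.unique(y_true[idx])) == 2:                   # need both classes in the resample
        boot.append(roc_auc_score(y_true[idx], y_score[idx]))

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC = {auc:.2f} (95% bootstrap CI {lo:.2f}-{hi:.2f})")
```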
ABSTRACT
BACKGROUND: Story recall is a simple and sensitive cognitive test that is commonly used to measure changes in episodic memory function in early Alzheimer disease (AD). Recent advances in digital technology and natural language processing methods make this test a candidate for automated administration and scoring. Multiple parallel test stimuli are required for higher-frequency disease monitoring. OBJECTIVE: This study aims to develop and validate a remote and fully automated story recall task, suitable for longitudinal assessment, in a population of older adults with and without mild cognitive impairment (MCI) or mild AD. METHODS: The "Amyloid Prediction in Early Stage Alzheimer's disease" (AMYPRED) studies recruited participants in the United Kingdom (AMYPRED-UK: NCT04828122) and the United States (AMYPRED-US: NCT04928976). Participants were asked to complete optional daily self-administered assessments remotely on their smart devices over 7 to 8 days. Assessments included immediate and delayed recall of 3 stories from the Automatic Story Recall Task (ASRT), a test with multiple parallel stimuli (18 short stories and 18 long stories) balanced for key linguistic and discourse metrics. Verbal responses were recorded and securely transferred from participants' personal devices, then automatically transcribed and scored using text similarity metrics between the source text and the retelling to derive a generalized match score. Group differences in adherence and task performance were examined using logistic and linear mixed models, respectively. Correlational analysis examined the parallel-forms reliability of ASRTs and convergent validity with cognitive tests (Logical Memory Test and Preclinical Alzheimer's Cognitive Composite with semantic processing). Acceptability and usability data were obtained using a remotely administered questionnaire. RESULTS: Of the 200 participants recruited in the AMYPRED studies, 151 (75.5%) engaged in optional remote assessments: 78 cognitively unimpaired (CU) and 73 with MCI or mild AD. Adherence to daily assessment was moderate and did not decline over time, but was higher in CU participants (ASRTs were completed each day by 73/106, 68.9%, of participants with MCI or mild AD and 78/94, 83%, of CU participants). Participants reported favorable task usability: infrequent technical problems, easy use of the app, and broad interest in the tasks. Task performance improved modestly across the week and was better for immediate recall. Generalized match scores were lower in participants with MCI or mild AD (Cohen d=1.54). Parallel-forms reliability of ASRT stories was moderate to strong for immediate recall (mean rho=0.73, range 0.56-0.88) and delayed recall (mean rho=0.73, range 0.54-0.86). The ASRTs showed moderate convergent validity with established cognitive tests. CONCLUSIONS: The unsupervised, self-administered ASRT task is sensitive to cognitive impairments in MCI and mild AD. The task showed good usability, high parallel-forms reliability, and high convergent validity with established cognitive tests. Remote, low-cost, low-burden, and automatically scored speech assessments could support diagnostic screening, health care, and treatment monitoring.
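The ASRT scoring step compares each retelling with its source story using text similarity metrics. As a hedged illustration of that idea only (the study's actual similarity measures behind the generalized match score are not specified in the abstract), the sketch below scores a toy retelling against a toy source with TF-IDF cosine similarity.

```python
# Illustrative stand-in for a source-vs-retelling similarity score; the real
# generalized match score may use different text-pair measures.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def match_score(source_text: str, retelling: str) -> float:
    """Return a 0-1 similarity between the source story and the retelling."""
    vec = TfidfVectorizer(ngram_range=(1, 2), lowercase=True)
    tfidf = vec.fit_transform([source_text, retelling])
    return float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

source = "The farmer drove his cart to the market at dawn to sell apples."
recall = "A farmer took a cart full of apples to sell at the market early."
print(round(match_score(source, recall), 2))
```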
ABSTRACT
Early detection of Alzheimer's disease is required to identify patients suitable for disease-modifying medications and to improve access to non-pharmacological preventative interventions. Prior research shows detectable changes in speech in Alzheimer's dementia and its clinical precursors. The current study assesses whether a fully automated speech-based artificial intelligence system can detect cognitive impairment and amyloid beta positivity, which characterize early stages of Alzheimer's disease. Two hundred participants (age 54-85, mean 70.6; 114 female, 86 male) from sister studies in the UK (NCT04828122) and the USA (NCT04928976) completed the same assessments and were combined in the current analyses. Participants were recruited from prior clinical trials where amyloid beta status (97 amyloid positive, 103 amyloid negative, as established via PET or CSF test) and clinical diagnostic status (94 cognitively unimpaired, 106 with mild cognitive impairment or mild Alzheimer's disease) were known. The automatic story recall task was administered during supervised in-person or telemedicine assessments, where participants were asked to recall stories immediately and after a brief delay. An artificial intelligence text-pair evaluation model produced vector-based outputs from the original story text and the recorded and transcribed participant recalls, quantifying differences between them. Vector-based representations were fed into logistic regression models, trained with tournament leave-pair-out cross-validation analysis, to predict amyloid beta status (primary endpoint) and mild cognitive impairment and amyloid beta status in diagnostic subgroups (secondary endpoints). Predictions were assessed by the area under the receiver operating characteristic curve for the test result in comparison with reference standards (diagnostic and amyloid status). Simulation analysis evaluated two potential benefits of speech-based screening: (i) mild cognitive impairment screening in primary care compared with the Mini-Mental State Exam, and (ii) pre-screening prior to PET scanning when identifying an amyloid positive sample. Speech-based screening predicted amyloid beta positivity (area under the curve = 0.77) and mild cognitive impairment or mild Alzheimer's disease (area under the curve = 0.83) in the full sample, and predicted amyloid beta in subsamples (mild cognitive impairment or mild Alzheimer's disease: area under the curve = 0.82; cognitively unimpaired: area under the curve = 0.71). Simulation analyses indicated that in primary care, speech-based screening could modestly improve detection of mild cognitive impairment (+8.5%) while reducing false positives (-59.1%). Furthermore, speech-based amyloid pre-screening was estimated to reduce the number of PET scans required by 35.3% and 35.5% in individuals with mild cognitive impairment and cognitively unimpaired individuals, respectively. Speech-based assessment offers accessible and scalable screening for mild cognitive impairment and amyloid beta positivity.
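To make the evaluation approach concrete, the sketch below illustrates tournament leave-pair-out cross-validation with a logistic regression classifier: every (amyloid-positive, amyloid-negative) pair is held out in turn, the model is refit on the remaining cases, and the fraction of correctly ordered pairs gives the AUC estimate. The feature matrix is a random placeholder standing in for the text-pair model's vector outputs, not the study's data or exact code.

```python
# Hedged sketch of leave-pair-out cross-validation for an AUC estimate.
import numpy as np
from itertools import product
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
y = np.array([0] * 20 + [1] * 20)                  # 0 = amyloid negative, 1 = amyloid positive
X = rng.normal(size=(40, 16)) + y[:, None] * 0.5   # placeholder text-pair feature vectors

pos_idx = np.flatnonzero(y == 1)
neg_idx = np.flatnonzero(y == 0)
pairs = list(product(pos_idx, neg_idx))

wins = 0.0
for i, j in pairs:
    train = np.setdiff1d(np.arange(len(y)), [i, j])            # leave the pair out
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    p_pos = clf.predict_proba(X[[i]])[0, 1]
    p_neg = clf.predict_proba(X[[j]])[0, 1]
    wins += 1.0 if p_pos > p_neg else (0.5 if p_pos == p_neg else 0.0)

print(f"Leave-pair-out AUC estimate: {wins / len(pairs):.2f}")
```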
ABSTRACT
Clinical and pathological stage are defining parameters in oncology that guide a patient's treatment options and inform prognosis. Pathology reports contain a wealth of staging information that is not stored in structured form in most electronic health records (EHRs). Therefore, we evaluated three supervised machine learning methods (Support Vector Machine, Decision Trees, Gradient Boosting) to classify free-text pathology reports for prostate cancer into T, N, and M stage groups.
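As a minimal sketch of this setup (toy report snippets and labels, not study data), the three classifier families named above can be placed behind a shared bag-of-words featurization and compared on the same text-to-stage mapping, here shown for T stage only:

```python
# Baseline sketch: TF-IDF features over report text, compared across the three
# classifier families mentioned in the abstract. Reports and labels are toy
# examples for illustration.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import GradientBoostingClassifier

reports = [
    "tumor confined to prostate, involves one lobe",
    "extracapsular extension with seminal vesicle invasion",
    "tumor fixed to pelvic sidewall",
    "organ-confined adenocarcinoma, both lobes involved",
]
t_stage = ["T2", "T3", "T4", "T2"]   # toy T-stage group labels

for clf in (LinearSVC(), DecisionTreeClassifier(), GradientBoostingClassifier()):
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(reports, t_stage)
    print(type(clf).__name__, model.predict(["tumor extends through the capsule"]))
```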