1.
J Alzheimers Dis ; 92(3): 875-886, 2023.
Article En | MEDLINE | ID: mdl-36847001

BACKGROUND: Early identification of the different stages of cognitive impairment is important for providing timely intervention and care for the elderly. OBJECTIVE: This study aimed to examine the ability of artificial intelligence (AI) technology to distinguish participants with mild cognitive impairment (MCI) from those with mild to moderate dementia based on automated video analysis. METHODS: A total of 95 participants were recruited (MCI, 41; mild to moderate dementia, 54). Videos were captured during administration of the Short Portable Mental Status Questionnaire, and visual and aural features were extracted from them. Deep learning models were then constructed for the binary differentiation of MCI and mild to moderate dementia. Correlation analyses between the predicted Mini-Mental State Examination and Cognitive Abilities Screening Instrument scores and the ground truth were also performed. RESULTS: Deep learning models combining visual and aural features discriminated MCI from mild to moderate dementia with an area under the curve (AUC) of 77.0% and an accuracy of 76.0%. The AUC and accuracy increased to 93.0% and 88.0%, respectively, when participants with depression and anxiety were excluded. Significant moderate correlations were observed between predicted cognitive function and the ground truth, and the correlations were strong when depression and anxiety were excluded. Interestingly, females, but not males, exhibited this correlation. CONCLUSION: The study showed that video-based deep learning models can differentiate participants with MCI from those with mild to moderate dementia and can predict cognitive function. This approach may offer a cost-effective and easily applicable method for the early detection of cognitive impairment.
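The abstract reports classifier performance as AUC and accuracy but does not publish the study's code. As a minimal self-contained sketch of what those two metrics measure for a binary MCI-vs-dementia classifier (the labels and scores below are hypothetical, not the study's data):

```python
def auc_score(labels, scores):
    # Rank-based AUC: the probability that a randomly chosen positive case
    # receives a higher score than a randomly chosen negative case.
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def accuracy(labels, scores, threshold=0.5):
    # Fraction of cases whose thresholded prediction matches the label.
    preds = [1 if s >= threshold else 0 for s in scores]
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# Hypothetical model outputs: label 1 = mild to moderate dementia, 0 = MCI
labels = [0, 0, 1, 1, 1, 0]
scores = [0.2, 0.6, 0.9, 0.7, 0.4, 0.3]

print(auc_score(labels, scores))   # pairwise ranking quality
print(accuracy(labels, scores))    # thresholded correctness
```

Note that AUC is threshold-free while accuracy depends on the chosen cutoff, which is why the two figures reported in the abstract (77.0% vs. 76.0%) need not coincide.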


Cognitive Dysfunction, Dementia, Humans, Female, Aged, Dementia/diagnosis, Dementia/psychology, Artificial Intelligence, Neuropsychological Tests, Cognitive Dysfunction/diagnosis, Cognitive Dysfunction/psychology, Cognition
2.
Nutrients ; 14(16)2022 Aug 12.
Article En | MEDLINE | ID: mdl-36014819

Background and aims: Digital food viewing is a vital skill for connecting dieticians to e-health. The aim of this study was to integrate a novel pedagogical framework combining interactive three-dimensional (3-D) and two-dimensional (2-D) food models into a formal dietetic training course. The level of agreement between the digital food models (first semester) and the effectiveness of educational integration of the digital food models during the school closure due to coronavirus disease 2019 (COVID-19) (second semester) were evaluated. Methods: In total, 65 second-year undergraduate dietetic students were enrolled in a nutritional practicum course at the School of Nutrition and Health Sciences, Taipei Medical University (Taipei, Taiwan). The 3-D food models were created using Agisoft Metashape. Students' digital food viewing skills and their receptiveness towards integrating digital food models were evaluated. Results: In the first semester, no statistical differences were observed between 2-D and 3-D food viewing skills in food identification (2-D: 89% vs. 3-D: 85%) or quantification (within a ±10% difference in total calories; 2-D: 19.4% vs. 3-D: 19.3%). A Spearman correlation analysis showed moderate to strong correlations of estimated total calories (0.69~0.93; all p values < 0.05) between the 3-D and 2-D models. Further analysis showed that students who struggled to master both 2-D and 3-D food viewing skills had lower estimation accuracies than those who did not (equal performers: 28% vs. unequal performers: 16%, p = 0.041), and interactive 3-D models may have helped them perform better than 2-D models. In the second semester, digital food viewing skills significantly improved (food identification: 91.5%; quantification: 42.9%), even among students who had struggled to perform digital food viewing skills equally in the first semester (equal performers: 44% vs. unequal performers: 40%).
Conclusion: Although repeated training greatly enhanced students' digital food viewing skills, a tailored training program may be needed to master both 2-D and 3-D digital food viewing skills. Future studies are needed to evaluate the effectiveness of digital food models for future "eHealth" care.
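The abstract's agreement analysis rests on Spearman correlations between calorie estimates from the 2-D and 3-D models. As a rough illustration of that statistic (the study's analysis code is not published; the calorie values below are hypothetical), Spearman's rho is simply the Pearson correlation computed on ranks:

```python
def spearman_rho(x, y):
    # Spearman's rho: Pearson correlation of the ranks of x and y,
    # with tied values assigned their average rank.
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1  # extend over a run of tied values
            avg = (i + j) / 2 + 1
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical calorie estimates of the same dishes from 2-D photos vs. 3-D models
cal_2d = [250, 480, 310, 600, 150]
cal_3d = [260, 330, 450, 640, 170]

print(spearman_rho(cal_2d, cal_3d))
```

Because it operates on ranks, the statistic captures whether the two model formats order the dishes consistently by calories, regardless of any systematic offset in the absolute estimates, which is why it suits this agreement question better than a raw Pearson correlation would.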


COVID-19, Simulation Training, COVID-19/epidemiology, Humans, Nutritional Status, Pilot Projects, Portion Size
...