1.
Cortex; 145: 264-272, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34775263

ABSTRACT

Previous research suggests oral and written language can act as barometers of an individual's cognitive function, potentially providing a screening tool for the earliest stages of Alzheimer's disease (AD) and other forms of dementia. Idea density is a measure of the rate at which ideas, or elementary predications, are expressed, and may provide an ideal measure for the early detection of deficits in language. Previous research has shown that when no restrictions are set on the topic of the idea, a decrease in propositional idea density (PID) is associated with an increased risk of developing AD; however, this work has been limited by moderate sample sizes and manual transcription. Technological advances have enabled the automated calculation of PID with tools such as the Computerized Propositional Idea Density Rater (CPIDR). We delivered an online autobiographical writing task to older adult Australians from ISLAND (Island Study Linking Ageing and Neurodegenerative Disease). Using CPIDRv5, we analysed text files (range 10-1180 words) provided by 3316 ISLAND participants (n = 853 males [25.7%], n = 2463 females [74.3%]); linear regression models were fitted in R. In total, over 358,957 words across the 3316 written autobiographical responses were analysed. Mean PID was higher in females (53.5 [±3.69]) than in males (52.6 [±4.50]). Both advancing age and being male were significantly associated with a decrease in PID (p < .001). Automated methods of language analysis hold great promise for the early detection of subtle deficits in language capacity. Although our effect sizes were small, PID may be a sensitive measure of deficits in language in ageing individuals and can be collected at scale using online methods of data capture.
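For illustration only, the short Python sketch below approximates propositional idea density as the number of propositional words (verbs, adjectives, adverbs, prepositions, conjunctions) per 100 words, derived from part-of-speech tags. This is a hedged, simplified stand-in for CPIDRv5, which applies many additional counting and readjustment rules; the spaCy model name and the example sentence are assumptions made for the demo, not taken from the study.

# Rough PID approximation from POS tags (not CPIDRv5).
import spacy

# Assumes the small English model is installed:
#   python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

# Word classes commonly counted as propositions in POS-based idea density measures.
PROPOSITIONAL_POS = {"VERB", "AUX", "ADJ", "ADV", "ADP", "CCONJ", "SCONJ"}

def approximate_pid(text: str) -> float:
    """Return an approximate idea density: propositional words per 100 words."""
    doc = nlp(text)
    words = [tok for tok in doc if tok.is_alpha]
    if not words:
        return 0.0
    propositions = [tok for tok in words if tok.pos_ in PROPOSITIONAL_POS]
    return 100.0 * len(propositions) / len(words)

# Hypothetical autobiographical sentence used purely as input for the demo.
print(round(approximate_pid(
    "When I was young we lived on a small farm near the coast."), 1))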


Subjects
Alzheimer Disease, Neurodegenerative Diseases, Aged, Aging, Alzheimer Disease/diagnosis, Australia, Female, Humans, Language, Male
2.
Sensors (Basel); 21(7), 2021 Mar 27.
Article in English | MEDLINE | ID: mdl-33801739

ABSTRACT

Emotion recognition plays an important role in human-computer interaction. Recent studies have focused on video emotion recognition in the wild and have run into difficulties related to occlusion, illumination, complex behavior over time, and auditory cues. State-of-the-art methods use multiple modalities, such as frame-level, spatiotemporal, and audio approaches. However, such methods have difficulty exploiting long-term dependencies in temporal information, capturing contextual information, and integrating multi-modal information. In this paper, we introduce a flexible multi-modal system for video-based emotion recognition in the wild. Our system tracks and votes on significant faces corresponding to persons of interest in a video to classify seven basic emotions. The key contribution of this study is the use of face feature extraction with context-aware and statistical information for emotion recognition. We also build two model architectures to effectively exploit long-term dependencies in temporal information: a temporal-pyramid model and a spatiotemporal model with a "Conv2D+LSTM+3DCNN+Classify" architecture. Finally, we propose a best-selection ensemble, which selects the best combination of the spatiotemporal and temporal-pyramid models to maximise accuracy in classifying the seven basic emotions. In our experiments, we benchmark the system on the AFEW dataset and achieve high accuracy.
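As an illustration of the Conv2D+LSTM portion of the spatiotemporal model described above, the hedged PyTorch sketch below encodes each frame with a small 2D CNN and aggregates frame features over time with an LSTM before classifying seven emotions. Layer sizes, frame count, and input resolution are assumptions chosen for the demo, not the authors' configuration, and the 3DCNN branch and ensemble step are omitted.

# Minimal Conv2D + LSTM video emotion classifier (illustrative, not the authors' code).
import torch
import torch.nn as nn

NUM_EMOTIONS = 7  # seven basic emotions, as in the AFEW label set

class Conv2DLSTMClassifier(nn.Module):
    def __init__(self, hidden_size=128):
        super().__init__()
        # Per-frame 2D convolutional feature extractor.
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
            nn.Flatten(),  # -> 32 * 4 * 4 = 512 features per frame
        )
        # LSTM aggregates frame features over time (long-term temporal dependencies).
        self.lstm = nn.LSTM(input_size=512, hidden_size=hidden_size, batch_first=True)
        self.classifier = nn.Linear(hidden_size, NUM_EMOTIONS)

    def forward(self, clips):
        # clips: (batch, time, channels, height, width)
        b, t, c, h, w = clips.shape
        frame_feats = self.frame_encoder(clips.view(b * t, c, h, w)).view(b, t, -1)
        _, (h_n, _) = self.lstm(frame_feats)
        return self.classifier(h_n[-1])  # logits over the 7 emotions

# Example: a batch of 2 clips, 16 frames each, 64x64 RGB face crops.
model = Conv2DLSTMClassifier()
logits = model(torch.randn(2, 16, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 7])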


Subjects
Awareness, Emotions, Humans, Photic Stimulation, Physical Therapy Modalities