ABSTRACT
OBJECTIVE: Maintaining attention underlies many aspects of cognition and becomes compromised early in neurodegenerative diseases such as Alzheimer's disease (AD). The consistency of attention can be indexed by reaction time (RT) variability. Previous work has focused on measuring such fluctuations during in-clinic testing, but recent developments in remote, smartphone-based cognitive assessment make it possible to test whether these fluctuations in attention are evident in naturalistic settings and whether they are sensitive to traditional clinical and cognitive markers of AD. METHOD: Three hundred and seventy older adults (aged 75.8 ± 5.8 years) completed a week of remote daily testing on the Ambulatory Research in Cognition (ARC) smartphone platform and also completed clinical, genetic, and conventional in-clinic cognitive assessments. RT variability was assessed in a brief (20-40 second) processing speed task using two measures of variability computed on correct-trial RTs: the Coefficient of Variation (CoV) and the Root Mean Squared Successive Difference (RMSSD). RESULTS: Symptomatic participants showed greater variability than cognitively normal participants. Among cognitively normal participants, APOE ε4 carriers exhibited greater variability than noncarriers. Both CoV and RMSSD showed significant, and similar, correlations with several in-clinic cognitive composites. Finally, both RT variability measures significantly mediated the relationship between APOE ε4 status and several in-clinic cognitive composites. CONCLUSIONS: Attentional fluctuations over 20-40 seconds, assessed in daily life, are sensitive to clinical status and genetic risk for AD. RT variability appears to be an important predictor of cognitive deficits during the preclinical disease stage.
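Both variability indices named in the abstract have standard definitions: CoV is the standard deviation of the RTs divided by their mean, and RMSSD is the square root of the mean of squared successive RT differences. A minimal Python sketch of these computations follows; it assumes correct-trial RTs are collected into an array, and the function and variable names are illustrative rather than taken from the ARC platform.

```python
import numpy as np

def rt_variability(rts):
    """Compute trial-to-trial RT variability indices for one session.

    rts: reaction times on correct trials, in presentation order.
    Returns (CoV, RMSSD):
      CoV   = SD(RT) / mean(RT)                      (scale-free dispersion)
      RMSSD = sqrt(mean of squared successive RT differences)
    """
    rts = np.asarray(rts, dtype=float)
    cov = rts.std(ddof=1) / rts.mean()
    rmssd = np.sqrt(np.mean(np.diff(rts) ** 2))
    return cov, rmssd

# Illustrative usage with made-up RTs in seconds (not study data):
cov, rmssd = rt_variability([0.52, 0.61, 0.48, 0.70, 0.55])
print(f"CoV={cov:.3f}, RMSSD={rmssd:.3f}")
```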
Subjects
Alzheimer Disease; Reaction Time; Humans; Alzheimer Disease/physiopathology; Alzheimer Disease/genetics; Aged; Male; Female; Reaction Time/physiology; Aged, 80 and over; Neuropsychological Tests; Apolipoprotein E4/genetics; Smartphone; Attention/physiology
ABSTRACT
Facial expression recognition (FER) has received increasing attention. However, multiple factors (e.g., uneven illumination, facial deflection, occlusion, and the subjectivity of annotations in image datasets) can degrade the performance of traditional FER methods. We therefore propose a novel Hybrid Domain Consistency Network (HDCNet) based on a feature-constraint method that combines spatial-domain and channel-domain consistency. Specifically, first, HDCNet mines latent attention-consistency features (as opposed to hand-crafted features such as HOG and SIFT) as effective supervision information by comparing an original sample image with its augmented facial expression image. Second, HDCNet extracts expression-related features in the spatial and channel domains and constrains them to be consistent through a mixed-domain consistency loss function; because this loss is built on attention-consistency constraints, it requires no additional labels. Third, the network weights are learned by optimizing the classification network together with the mixed-domain consistency loss. Finally, experiments on the public RAF-DB and AffectNet benchmark datasets show that the proposed HDCNet improves classification accuracy by 0.3-3.84% over existing methods.
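The abstract does not give the consistency loss in closed form. As a rough, generic illustration of attention-consistency regularization between an original image and its augmented copy (not the official HDCNet implementation), the PyTorch sketch below compares a spatial attention map and a channel attention vector derived from the backbone features of both views; the pooling-based attention terms and the MSE penalty are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F

def hybrid_consistency_loss(feat_orig, feat_aug):
    """Illustrative hybrid-domain consistency loss (assumed form, not HDCNet's code).

    feat_orig, feat_aug: backbone feature maps of shape (B, C, H, W) from an
    original image and its augmented copy. The spatial term compares
    channel-averaged, L2-normalized spatial attention maps; the channel term
    compares globally pooled, L2-normalized channel attention vectors.
    No labels are needed for this term.
    """
    # Spatial attention: average over channels, flatten, normalize per image.
    sp_o = F.normalize(feat_orig.mean(dim=1).flatten(1), dim=1)
    sp_a = F.normalize(feat_aug.mean(dim=1).flatten(1), dim=1)
    # Channel attention: global average pooling over H and W, normalize per image.
    ch_o = F.normalize(feat_orig.mean(dim=(2, 3)), dim=1)
    ch_a = F.normalize(feat_aug.mean(dim=(2, 3)), dim=1)
    return F.mse_loss(sp_o, sp_a) + F.mse_loss(ch_o, ch_a)
```

In training, a loss of this kind would typically be added to the standard classification cross-entropy so that the consistency constraint shapes the features without requiring extra annotations.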