ABSTRACT
The paper proposes an ecological and portable protocol for the large-scale collection of reading data in children with high-functioning autism spectrum disorder (ASD), based on recording the finger movements of a subject reading a text displayed on a tablet touchscreen. Capitalizing on recent evidence that the movements of a finger pointing to a scene or text during visual exploration or reading may approximate eye fixations, we focus on recognition of written content and function words, pace of reading, and accuracy in reading comprehension. The analysis showed significant differences between typically developing and ASD children, with the latter group exhibiting greater variation in levels of reading ability, slower developmental pace in reading speed, less accurate comprehension, greater dependency on word length and word frequency, weaker evidence of prediction-based processing, and a monotonous, steady reading pace with reduced attention to weak punctuation. Finger-tracking patterns provide evidence that ASD readers may fail to integrate single-word processing into larger syntactic structures, and lend support to the hypothesis of an impaired use of contextual information to predict upcoming stimuli, suggesting that difficulties in perception may arise as difficulties in prediction.
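The dependency of reading times on word length and word frequency reported above can be illustrated with a simple regression. The sketch below is not the authors' analysis: all data are synthetic, and the linear model, coefficients, and variable names are illustrative assumptions. It merely shows the kind of per-word dwell-time regression such a dependency claim rests on.

```python
# Illustrative sketch (not the authors' analysis): regressing per-word finger
# dwell time on word length and log frequency. All data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
n_words = 200
length = rng.integers(2, 12, size=n_words).astype(float)  # characters per word
log_freq = rng.normal(loc=3.0, scale=1.0, size=n_words)   # log word frequency

# Simulate dwell times: longer and rarer words take longer to read.
dwell_ms = 180 + 15 * length - 20 * log_freq + rng.normal(scale=25, size=n_words)

# Ordinary least squares: dwell ~ intercept + length + log_freq
X = np.column_stack([np.ones(n_words), length, log_freq])
beta, *_ = np.linalg.lstsq(X, dwell_ms, rcond=None)
print(f"intercept={beta[0]:.1f} ms, length effect={beta[1]:.1f} ms/char, "
      f"frequency effect={beta[2]:.1f} ms/log-unit")
```

A stronger dependency in ASD readers would surface as larger magnitudes of the length and frequency coefficients in the ASD group than in the typically developing group.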
ABSTRACT
A typical consequence of stroke in the right hemisphere is unilateral spatial neglect. Distinct forms of neglect have been described, such as space-based (egocentric) and object-based (allocentric) neglect. However, the relationship between these two forms of neglect, as well as their neural substrates, is still far from understood. Here, we further explore this issue by using voxel-based lesion-symptom mapping (VLSM) analyses on a large sample of early subacute right-stroke patients assessed with the Apples Cancellation Test, a sensitive test that simultaneously measures both egocentric and allocentric neglect. Behaviourally, we found no correlation between egocentric and allocentric performance, indicating independent mechanisms supporting the two forms of neglect. This was confirmed by the VLSM analysis, which revealed a link between damage to the superior longitudinal fasciculus and left egocentric neglect. By contrast, no association was found between brain damage and left allocentric neglect. These results indicate a higher probability of observing egocentric neglect as a consequence of white matter damage in the superior longitudinal fasciculus, while allocentric neglect appears more "globally" related to the whole lesion map. Overall, these findings on early subacute right-stroke patients highlight the role played by white matter integrity in sustaining attention-related operations within an egocentric frame of reference.
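The core of a VLSM analysis is a voxel-wise comparison of behavioural scores between patients with and without a lesion at each voxel. The following is a minimal sketch of that idea under stated assumptions: binary lesion maps, a continuous neglect score, a plain t-test, and synthetic data are all illustrative; a real VLSM pipeline adds permutation-based multiple-comparison correction and covariates such as lesion volume.

```python
# Minimal VLSM sketch: for each voxel, compare the behavioural score of
# patients lesioned vs spared at that voxel. Data and thresholds are
# illustrative assumptions, not the study's actual pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_patients, n_voxels = 80, 500

lesions = rng.random((n_patients, n_voxels)) < 0.3  # binary lesion maps
score = rng.normal(size=n_patients)                  # e.g., egocentric asymmetry
# Make the first 10 voxels truly symptom-related, for illustration only.
score = score + lesions[:, :10].sum(axis=1) * 0.8

t_map = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned, spared = score[lesions[:, v]], score[~lesions[:, v]]
    if lesioned.size >= 5 and spared.size >= 5:      # minimum-overlap criterion
        t_map[v] = stats.ttest_ind(lesioned, spared).statistic

print("peak t statistic:", np.nanmax(t_map))
```

Voxels surviving correction form the statistical map that, in the study, localized egocentric neglect to the superior longitudinal fasciculus.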
Subject(s)
Perceptual Disorders, Stroke, White Matter, Functional Laterality, Humans, Magnetic Resonance Imaging, Neuropsychological Tests, Perceptual Disorders/diagnostic imaging, Perceptual Disorders/etiology, Space Perception, Stroke/diagnostic imaging, White Matter/diagnostic imaging
ABSTRACT
Current research in the emotion recognition field is exploring the possibility of merging information from physiological signals, behavioural data, and speech. Electrodermal activity (EDA) is amongst the main psychophysiological arousal indicators. Nonetheless, it is quite difficult to analyze in ecological scenarios, for instance when the subject is speaking. On the other hand, speech carries relevant information about a subject's emotional state, and its potential in the field of affective computing is still to be fully exploited. In this work, we aim to explore the possibility of merging information from EDA and speech to improve the recognition of human arousal level during the pronunciation of single affective words. Unlike the majority of studies in the literature, we focus on the speaker's arousal rather than the emotion conveyed by the spoken word. Specifically, a support vector machine with a recursive feature elimination strategy (SVM-RFE) is trained and tested on three datasets, i.e., using the two channels (speech and EDA) separately and then jointly. The results show that merging EDA and speech information significantly improves on the marginal classifier (+11.64%). The six features selected by the RFE procedure will be used for the development of a future multivariate model of emotions.
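The three-dataset comparison described above (EDA alone, speech alone, and the two merged) can be sketched with scikit-learn's standard SVM-RFE tooling. Everything below is an illustrative assumption: the feature counts, sample sizes, and data are synthetic stand-ins, not the study's features or results; only the overall scheme (linear SVM, RFE selection, per-channel and joint evaluation) follows the abstract.

```python
# Hedged sketch of the SVM-RFE scheme from the abstract. Feature sets and
# data are synthetic placeholders, not the authors' actual features.
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_samples = 120

# Synthetic stand-ins for the two channels and their merge.
X_eda = rng.normal(size=(n_samples, 8))      # e.g., phasic/tonic EDA features
X_speech = rng.normal(size=(n_samples, 12))  # e.g., prosodic/spectral features
X_merged = np.hstack([X_eda, X_speech])
y = rng.integers(0, 2, size=n_samples)       # binary arousal level (low/high)

def rfe_svm_accuracy(X, y, n_features=6):
    """Cross-validated accuracy of a linear SVM after RFE feature selection."""
    model = make_pipeline(
        StandardScaler(),
        RFE(SVC(kernel="linear"), n_features_to_select=n_features),
        SVC(kernel="linear"),
    )
    return cross_val_score(model, X, y, cv=5).mean()

for name, X in [("EDA", X_eda), ("speech", X_speech), ("merged", X_merged)]:
    print(f"{name}: accuracy = {rfe_svm_accuracy(X, y):.3f}")
```

With real affective-word recordings in place of the synthetic arrays, the comparison of the three accuracies is what supports the reported gain of the merged classifier over the marginal ones; `n_features=6` mirrors the six features retained by the RFE procedure.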