Results 1 - 20 of 40
1.
Cochrane Database Syst Rev ; 9: CD013853, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39319863

ABSTRACT

BACKGROUND: Dementia and mild cognitive impairment are significant contributors to disability and dependency in older adults. Current treatments for managing these conditions are limited. Exergaming, a novel technology-driven intervention combining physical exercise with cognitive tasks, is a potential therapeutic approach. OBJECTIVES: To assess the effects of exergaming interventions on physical and cognitive outcomes, and activities of daily living, in people with dementia and mild cognitive impairment. SEARCH METHODS: On 22 December 2023, we searched the Cochrane Dementia and Cognitive Improvement Group's register, MEDLINE (Ovid SP), Embase (Ovid SP), PsycINFO (Ovid SP), CINAHL (EBSCOhost), Web of Science Core Collection (Clarivate), LILACS (BIREME), ClinicalTrials.gov, and the WHO (World Health Organization) meta-register, the International Clinical Trials Registry Portal. SELECTION CRITERIA: We included randomised controlled trials (RCTs) that recruited individuals diagnosed with dementia or mild cognitive impairment (MCI). Exergaming interventions required participants to engage in physical activity of at least moderate intensity, and used immersive or non-immersive virtual reality (VR) technology and real-time interaction. We planned to classify comparators as inactive control group (e.g. no treatment, waiting list), active control group (e.g. standard treatment, non-specific active control), or alternative treatment (e.g. physical activity, computerised cognitive training). Outcomes were to be measured using validated instruments. DATA COLLECTION AND ANALYSIS: Two review authors independently selected studies for inclusion, extracted data, assessed the risk of bias using the Cochrane risk of bias tool RoB 2, and assessed the certainty of the evidence using GRADE. We consulted a third author if required. Where possible, we pooled outcome data using a fixed-effect or random-effects model.
We expressed treatment effects as standardised mean differences (SMDs) for continuous outcomes and as risk ratios (RRs) for dichotomous outcomes, along with 95% confidence intervals (CIs). When data could not be pooled, we presented a narrative synthesis. MAIN RESULTS: We included 11 studies published between 2014 and 2023. Six of these studies were pre-registered. Seven studies involved 308 participants with mild cognitive impairment, and five studies included 228 individuals with dementia. One of the studies presented data for both MCI and dementia separately. Most comparisons exhibited a high risk of bias or some concerns of bias. We have only low or very low certainty about all the results presented below.

Effects of exergaming interventions for people with dementia

Compared to a control group: Exergaming may improve global cognitive functioning at the end of treatment, but the evidence is very uncertain (SMD 1.47, 95% CI 1.04 to 1.90; 2 studies, 113 participants). The evidence is very uncertain about the effects of exergaming at the end of treatment on global physical functioning (SMD -0.20, 95% CI -0.57 to 0.17; 2 studies, 113 participants) or activities of daily living (ADL) (SMD -0.28, 95% CI -0.65 to 0.09; 2 studies, 113 participants). The evidence is very uncertain about adverse effects due to the small sample size and no events. Findings are based on two studies (113 participants), but data could not be pooled; both studies reported no adverse reactions linked to the intervention or control group.

Compared to an alternative treatment group: At the end of treatment, the evidence is very uncertain about the effects of exergaming on global physical functioning (SMD 0.14, 95% CI -0.30 to 0.58; 2 studies, 85 participants) or global cognitive functioning (SMD 0.11, 95% CI -0.33 to 0.55; 2 studies, 85 participants). For ADL, only one study was available (n = 67), which provided low-certainty evidence of little to no difference between exergaming and exercise. The evidence is very uncertain about adverse effects of exergaming compared with alternative treatment (RR 7.50, 95% CI 0.41 to 136.52; 2 studies, 2/85 participants).

Effects of exergaming interventions for people with mild cognitive impairment (MCI)

Compared to a control group: Exergaming may improve global cognitive functioning at the end of treatment for people with MCI, but the evidence is very uncertain (SMD 0.79, 95% CI 0.05 to 1.53; 2 studies, 34 participants). The evidence is very uncertain about the effects of exergaming at the end of treatment on global physical functioning (SMD 0.27, 95% CI -0.41 to 0.94; 2 studies, 34 participants) and ADL (SMD 0.51, 95% CI -0.01 to 1.03; 2 studies, 60 participants). The evidence is very uncertain about the effects of exergaming on adverse effects due to a small sample size and no events (0/14 participants). Findings are based on one study.

Compared to an alternative treatment group: The evidence is very uncertain about global physical functioning at the end of treatment; only one study was included (n = 45). For global cognitive functioning, we included four studies (n = 235 participants), but due to considerable heterogeneity (I² = 96%), we could not pool results. The evidence is very uncertain about the effects of exergaming on global cognitive functioning. No study evaluated ADL outcomes. The evidence is very uncertain about adverse effects of exergaming due to the small sample size and no events (n = 123 participants). Findings are based on one study.

AUTHORS' CONCLUSIONS: Overall, the evidence is very uncertain about the effects of exergaming on global physical and cognitive functioning, and ADL. There may be an improvement in global cognitive functioning at the end of treatment for both people with dementia and people with MCI, but the evidence is very uncertain. The potential benefit is observed only when exergaming is compared with a control intervention (e.g.
usual care, listening to music, health education), and not when compared with an alternative treatment with a specific effect, such as physical activity (e.g. standing and sitting exercises or cycling). The evidence is very uncertain about the effects of exergaming on adverse effects. All sessions took place in a controlled and supervised environment. Therefore, we do not know if exergaming can be safely used in a home environment, unsupervised.
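The pooled effects above (SMDs with 95% CIs) come from standard inverse-variance meta-analysis. As an illustrative sketch only, using hypothetical study values rather than the review's data, a fixed-effect pooling of study-level SMDs could look like this:

```python
import math

def pool_fixed_effect(smds, ses):
    """Fixed-effect inverse-variance pooling of standardised mean
    differences (SMDs). Each study is weighted by 1/SE^2."""
    weights = [1.0 / se**2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    # 95% confidence interval via the normal approximation
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci

# Hypothetical two-study example (not data from the review)
pooled, (lo, hi) = pool_fixed_effect([1.2, 1.6], [0.30, 0.25])
```

A random-effects model would additionally widen each study's variance by a between-study heterogeneity term (tau²) before weighting.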


Subject(s)
Activities of Daily Living, Cognitive Dysfunction, Dementia, Exercise Therapy, Randomized Controlled Trials as Topic, Video Games, Humans, Cognitive Dysfunction/therapy, Dementia/therapy, Dementia/psychology, Aged, Exercise Therapy/methods, Bias, Quality of Life, Exercise, Cognition, Aged, 80 and over
2.
Autism Res ; 17(5): 1041-1052, 2024 May.
Article in English | MEDLINE | ID: mdl-38661256

ABSTRACT

Research has shown that children on the autism spectrum and adults with high levels of autistic traits are less sensitive to audiovisual asynchrony compared to their neurotypical peers. However, this evidence has been limited to simultaneity judgments (SJ), which require participants to consider the timing of two cues together. Given evidence of partly divergent perceptual and neural mechanisms involved in making temporal order judgments (TOJ) and SJ, and given that SJ require a more global type of processing which may be impaired in autistic individuals, here we ask whether the observed differences in audiovisual temporal processing are task- and stimulus-specific. We examined the ability to detect audiovisual asynchrony in a group of 26 autistic adult males and a group of age- and IQ-matched neurotypical males. Participants were presented with beep-flash, point-light drumming, and face-voice displays with varying degrees of asynchrony and asked to make SJ and TOJ. The results indicated that autistic participants were less able to detect audiovisual asynchrony compared to the control group, but this effect was specific to SJ and to more complex social stimuli (e.g., face-voice) with stronger semantic correspondence between the cues, requiring a more global type of processing. This indicates that audiovisual temporal processing is not generally different in autistic individuals and that a similar level of performance could be achieved by using a more local type of processing, thus informing multisensory integration theory as well as multisensory training aimed at aiding perceptual abilities in this population.


Subject(s)
Auditory Perception, Autistic Disorder, Judgment, Visual Perception, Humans, Male, Judgment/physiology, Adult, Visual Perception/physiology, Auditory Perception/physiology, Young Adult, Autistic Disorder/physiopathology, Photic Stimulation/methods, Cues, Acoustic Stimulation/methods, Time Perception/physiology, Adolescent
3.
Virtual Real ; 27(3): 2043-2057, 2023.
Article in English | MEDLINE | ID: mdl-37614716

ABSTRACT

Research has shown that high trait anxiety can alter multisensory processing of threat cues (by amplifying integration of angry faces and voices); however, it remains unknown whether differences in multisensory processing play a role in the psychological response to trauma. This study examined the relationship between multisensory emotion processing and intrusive memories over seven days following exposure to an analogue trauma in a sample of 55 healthy young adults. We used an adapted version of the trauma film paradigm, where scenes showing a car accident trauma were presented using virtual reality, rather than a conventional 2D film. Multisensory processing was assessed prior to the trauma simulation using a forced choice emotion recognition paradigm with happy, sad and angry voice-only, face-only, audiovisual congruent (face and voice expressed matching emotions) and audiovisual incongruent expressions (face and voice expressed different emotions). We found that increased accuracy in recognising anger (but not happiness and sadness) in the audiovisual condition relative to the voice- and face-only conditions was associated with more intrusions following VR trauma. Despite previous results linking trait anxiety and intrusion development, no significant influence of trait anxiety on intrusion frequency was observed. Enhanced integration of threat-related information (i.e. angry faces and voices) could lead to overly threatening appraisals of stressful life events and result in greater intrusion development after trauma. Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00784-1.

4.
Virtual Real ; : 1-17, 2023 Mar 24.
Article in English | MEDLINE | ID: mdl-37360806

ABSTRACT

Attention is the ability to actively process specific information within one's environment over longer periods of time while disregarding other details. Attention is an important process that contributes to overall cognitive performance, from performing basic everyday tasks to complex work activities. The use of virtual reality (VR) allows the study of attention processes in realistic environments using ecological tasks. To date, research has focused on the efficacy of VR attention tasks in detecting attention impairment, while the impact of the combination of variables such as mental workload, presence and simulator sickness on both self-reported usability and objective attention task performance in immersive VR has not been examined. The current study tested 87 participants on an attention task in a virtual aquarium using a cross-sectional design. The VR task followed the continuous performance test paradigm, where participants had to respond to correct targets and ignore non-targets over 18 min. Performance was measured using three outcomes: omission errors (failing to respond to correct targets), commission errors (incorrectly responding to non-targets) and reaction time to correct targets. Measures of self-reported usability, mental workload, presence and simulator sickness were collected. The results showed that only presence and simulator sickness had a significant impact on usability. For performance outcomes, simulator sickness was significantly but weakly associated with omission errors, and not with reaction time or commission errors. Mental workload and presence did not significantly predict performance. Our results suggest that usability is more likely to be negatively impacted by simulator sickness and lack of presence than performance is, and that usability and attention performance are linked. They highlight the importance of considering factors such as presence and simulator sickness in attention tasks, as these variables can impact usability.
Supplementary Information: The online version contains supplementary material available at 10.1007/s10055-023-00782-3.

5.
PLoS One ; 18(3): e0280390, 2023.
Article in English | MEDLINE | ID: mdl-36928040

ABSTRACT

Users' emotions may influence the formation of presence in virtual reality (VR). Users' expectations, state of arousal and personality may also moderate the relationship between emotions and presence. An interoceptive predictive coding model of conscious presence (IPCM) considers presence as a product of the match between predictions of interoceptive emotional states and the actual states evoked by an experience (Seth et al. 2012). The present paper aims to test this model's applicability to VR for both high-arousal and low-arousal emotions. The moderating effect of personality traits on the creation of presence is also investigated. Results show that user expectations about emotional states in VR have an impact on presence; however, this relationship is moderated by the intensity of the emotion, with only high-arousal emotions showing an effect. Additionally, users' personality traits moderated the relationship between emotions and presence. A refined model is proposed that predicts presence in VR by weighting emotions according to their level of arousal and by considering the impact of personality traits.


Subject(s)
Emotions, Virtual Reality, Emotions/physiology, Personality, Arousal, Euphoria
6.
Sci Rep ; 12(1): 20087, 2022 11 22.
Article in English | MEDLINE | ID: mdl-36418441

ABSTRACT

Music involves different senses and is emotional in nature, and musicians show enhanced detection of audio-visual temporal discrepancies and enhanced emotion recognition compared to non-musicians. However, whether musical training produces these enhanced abilities or whether they are innate in musicians remains unclear. Thirty-one adult participants were randomly assigned to a music training, music listening, or control group, all of whom completed a one-hour session per week for 11 weeks. The music training group received piano training, the music listening group listened to the same music, and the control group did their homework. Measures of audio-visual temporal discrepancy detection, facial expression recognition, autistic traits, depression, anxiety, stress and mood were completed and compared from the beginning to the end of training. ANOVA results revealed that only the music training group showed a significant improvement in detection of audio-visual temporal discrepancies compared to the other groups, for both stimuli (flash-beep and face-voice). However, music training did not improve emotion recognition from facial expressions compared to the control group, while it did reduce levels of depression, stress and anxiety compared to baseline. This RCT provides the first evidence of a causal effect of music training on improved audio-visual perception that goes beyond the music domain.


Subject(s)
Music, Time Perception, Adult, Humans, Music/psychology, Acoustic Stimulation, Visual Perception, Auditory Perception
7.
J Alzheimers Dis ; 88(4): 1341-1370, 2022.
Article in English | MEDLINE | ID: mdl-35811514

ABSTRACT

BACKGROUND: Mild cognitive impairment (MCI) and dementia result in cognitive decline which can negatively impact everyday functional abilities and quality of life. Virtual reality (VR) interventions could benefit the cognitive abilities of people with MCI and dementia, but evidence is inconclusive. OBJECTIVE: To investigate the efficacy of VR training on global and domain-specific cognition, activities of daily living and quality of life, and to explore the influence of a priori moderators (e.g., immersion type, training type) on the effects of VR training. Adverse effects of VR training were also considered. METHODS: A systematic literature search was conducted on all major databases for randomized controlled trials. Two separate meta-analyses were performed, one on studies with people with MCI and one on studies with people with dementia. RESULTS: Sixteen studies with people with MCI and four studies with people with dementia were included in the respective meta-analyses. Results showed moderate to large effects of VR training on global cognition, attention, memory, and construction and motor performance in people with MCI. Immersion and training type were found to be significant moderators of the effect of VR training on global cognition. For people with dementia, results showed moderate to large improvements after VR training on global cognition, memory, and executive function, but a subgroup analysis was not possible. CONCLUSION: Our findings suggest that VR training is an effective treatment for both people with MCI and people with dementia. These results contribute to the establishment of practical guidelines for VR interventions for patients with cognitive decline.


Subject(s)
Cognitive Dysfunction, Dementia, Virtual Reality, Activities of Daily Living, Cognition, Cognitive Dysfunction/therapy, Dementia/therapy, Humans, Quality of Life, Randomized Controlled Trials as Topic
8.
J Exp Psychol Hum Percept Perform ; 48(9): 926-942, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35862072

ABSTRACT

The tongue is an incredibly complex sensory organ, yet little is known about its tactile capacities compared to the hands. In particular, the tongue receives almost no visual input during development and so may be calibrated differently compared to other tactile senses for spatial tasks. Using a cueing task, via an electro-tactile display, we examined how a tactile cue (to the tongue) or an auditory cue can affect the orientation of attention to electro-tactile targets presented to one of four regions on the tongue. We observed that response accuracy was generally low for the same modality condition, especially at the back of the tongue. This implies that spatial localization ability is diminished either because the tongue is less calibrated by the visual modality or because of its position and orientation inside the body. However, when cues were provided cross-modally, target identification at the back of the tongue seemed to improve. Our findings suggest that, while the brain relies on a general mechanism for spatial (and tactile) attention, the surface of the tongue may not have clear access to these representations of space when solely provided via electro-tactile feedback but can be directed by other sensory modalities. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Cues, Touch, Hand, Humans, Reaction Time, Tongue
9.
Sci Rep ; 12(1): 7401, 2022 05 05.
Article in English | MEDLINE | ID: mdl-35513403

ABSTRACT

Individuals are increasingly relying on GPS devices to orient and find their way in their environment, and research has pointed to a negative impact of navigational systems on spatial memory. We used immersive virtual reality (IVR) to examine whether an audio-visual navigational aid can counteract the negative impact of visual-only or auditory-only GPS systems. We also examined the effect of spatial representation preferences and abilities when using different GPS systems. Thirty-four participants completed an IVR driving game including four GPS conditions (no GPS; audio GPS; visual GPS; audio-visual GPS). After driving one of the routes in one of the four GPS conditions, participants were asked to drive to a target landmark they had previously encountered. The audio-visual GPS condition returned more accurate performance than the visual and no-GPS conditions. General orientation ability predicted the distance to the target landmark for the visual and the audio-visual GPS conditions, while landmark preference predicted performance in the audio GPS condition. Finally, the variability in end distance to the target landmark was significantly reduced in the audio-visual GPS condition when compared to the visual and audio GPS conditions. These findings support theories of spatial cognition and inform the optimisation of GPS designs.


Subject(s)
Automobile Driving, Virtual Reality, Cognition, Humans
10.
J Behav Ther Exp Psychiatry ; 74: 101693, 2022 03.
Article in English | MEDLINE | ID: mdl-34563795

ABSTRACT

BACKGROUND: Emotion perception is essential to human interaction and relies on effective integration of emotional cues across sensory modalities. Despite initial evidence for anxiety-related biases in multisensory processing of emotional information, there is no research to date that directly addresses whether the mechanism of multisensory integration is altered by anxiety. Here, we compared audiovisual integration of emotional cues between individuals with low vs. high trait anxiety. METHODS: Participants were 62 young adults who were assessed on their ability to quickly and accurately identify happy, angry and sad emotions from dynamic visual-only, audio-only and audiovisual face and voice displays. RESULTS: The results revealed that individuals in the high anxiety group were more likely to integrate angry faces and voices in a statistically optimal fashion, as predicted by the Maximum Likelihood Estimation model, compared to low anxiety individuals. This means that high anxiety individuals achieved higher precision in correctly recognising anger from angry audiovisual stimuli compared to angry face or voice-only stimuli, and compared to low anxiety individuals. LIMITATIONS: We tested a higher proportion of females, and although this does reflect the higher prevalence of clinical anxiety among females in the general population, potential sex differences in multisensory mechanisms due to anxiety should be examined in future studies. CONCLUSIONS: Individuals with high trait anxiety have multisensory mechanisms that are especially fine-tuned for processing threat-related emotions. This bias may exhaust capacity for processing of other emotional stimuli and lead to overly negative evaluations of social interactions.
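The "statistically optimal" integration tested in this study follows the standard Maximum Likelihood Estimation account, in which each cue is weighted by its reliability (inverse variance) and the combined estimate is more precise than either cue alone. A minimal sketch of that prediction, using made-up variances rather than the study's data:

```python
def mle_combination(var_a, var_v):
    """Maximum-likelihood cue combination: weight each cue by its
    relative reliability (1/variance) and predict the variance of
    the combined audiovisual estimate."""
    rel_a, rel_v = 1.0 / var_a, 1.0 / var_v
    w_a = rel_a / (rel_a + rel_v)   # weight on the auditory cue
    w_v = 1.0 - w_a                 # weight on the visual cue
    var_av = (var_a * var_v) / (var_a + var_v)
    return w_a, w_v, var_av

# Hypothetical unimodal variances (not values from the paper)
w_a, w_v, var_av = mle_combination(var_a=4.0, var_v=1.0)
# var_av is below both unimodal variances: the signature of
# statistically optimal integration that the study tested for.
```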


Subject(s)
Cues, Voice, Anger, Anxiety/psychology, Emotions, Facial Expression, Female, Humans, Male, Young Adult
11.
Behav Brain Res ; 410: 113346, 2021 07 23.
Article in English | MEDLINE | ID: mdl-33964354

ABSTRACT

In everyday life, information from multiple senses is integrated for a holistic understanding of emotion. Despite evidence of atypical multisensory perception in populations with socio-emotional difficulties (e.g., autistic individuals), little research to date has examined how anxiety impacts on multisensory emotion perception. Here we examined whether the level of trait anxiety in a sample of 56 healthy adults affected audiovisual processing of emotion for three types of stimuli: dynamic faces and voices, body motion and dialogues of two interacting agents, and circles and tones. Participants judged emotion from four types of displays - audio-only, visual-only, audiovisual congruent (e.g., angry face and angry voice) and audiovisual incongruent (e.g., angry face and happy voice) - as happy or angry, as quickly as possible. In one task, participants based their emotional judgements on information in one modality while ignoring information in the other, and in a second task they based their judgements on their overall impressions of the stimuli. The results showed that the higher trait anxiety group prioritized the processing of angry cues when combining faces and voices that portrayed conflicting emotions. Individuals in this group were also more likely to benefit from combining congruent face and voice cues when recognizing anger. The multisensory effects of anxiety were found to be independent of the effects of autistic traits. The observed effects of trait anxiety on multisensory processing of emotion may serve to maintain anxiety by increasing sensitivity to social-threat and thus contributing to interpersonal difficulties.


Subject(s)
Anxiety/physiopathology, Autism Spectrum Disorder/physiopathology, Emotions/physiology, Personality/physiology, Social Perception, Speech Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Cues, Facial Recognition/physiology, Female, Humans, Male, Young Adult
12.
Behav Brain Res ; 397: 112922, 2021 01 15.
Article in English | MEDLINE | ID: mdl-32971196

ABSTRACT

During self-guided movements, we optimise performance by combining sensory and self-motion cues optimally, based on their reliability. Discrepancies between such cues, and problems in combining them, are suggested to underlie some pain conditions. Therefore, we examined whether visuomotor integration is altered in twenty-two participants with upper or lower limb complex regional pain syndrome (CRPS) compared to twenty-four controls. Participants located targets that appeared in the unaffected (CRPS) / dominant (controls) or affected (CRPS) / non-dominant (controls) side of space, using the hand of their unaffected/dominant or affected/non-dominant side of the body. For each side of space and each hand, participants located the target using visual information and no movement (vision-only condition), an unseen pointing movement (self-motion-only condition), or a visually guided pointing movement (visuomotor condition). In all four space-by-hand conditions, controls reduced their variability in the visuomotor condition compared to the vision-only and self-motion-only conditions, in line with a model prediction for optimal integration. Participants with CRPS showed similar evidence of cue combination in two of the four conditions. However, they had better-than-optimal integration for the unaffected hand in the affected space. Furthermore, they did not integrate optimally for the hand of the affected side of the body in unaffected space, but instead relied on the visual information. Our results suggest that people with CRPS can optimally integrate visual and self-motion cues under some conditions, despite lower reliability of self-motion cues, and use different strategies from controls.


Subject(s)
Chronic Pain/physiopathology, Complex Regional Pain Syndromes/physiopathology, Hand/physiopathology, Kinesthesis/physiology, Motor Activity/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Adult, Conflict, Psychological, Cues, Female, Humans, Male, Middle Aged, Sensorimotor Cortex/physiopathology, Young Adult
13.
Dev Sci ; 24(1): e13001, 2021 01.
Article in English | MEDLINE | ID: mdl-32506580

ABSTRACT

Integrating different senses to reduce sensory uncertainty and increase perceptual precision can have an important compensatory function for individuals with visual impairment and blindness. However, how visual impairment and blindness impact the development of optimal multisensory integration in the remaining senses is currently unknown. Here we first examined how audio-haptic integration develops and changes across the life span in 92 sighted (blindfolded) individuals between 7 and 70 years of age. We used a child-friendly task in which participants had to discriminate different object sizes by touching them and/or listening to them. We assessed whether audio-haptic performance resulted in a reduction of perceptual uncertainty compared to auditory-only and haptic-only performance, as predicted by the maximum-likelihood estimation model. We then compared how this ability develops in 28 children and adults with different levels of visual experience, focussing on low-vision individuals and blind individuals who lost their sight at different ages during development. Our results show that in sighted individuals, adult-like audio-haptic integration develops around 13-15 years of age and remains stable until late adulthood. While early-blind individuals, even at the youngest ages, integrate audio-haptic information in an optimal fashion, late-blind individuals do not. Optimal integration in low-vision individuals follows a similar developmental trajectory as that of sighted individuals. These findings demonstrate that visual experience is not necessary for optimal audio-haptic integration to emerge, but that consistency of sensory information across development is key for the functional outcome of optimal multisensory integration.


Subject(s)
Touch Perception, Visually Impaired Persons, Adult, Auditory Perception, Blindness, Child, Humans, Touch
14.
Front Psychol ; 11: 1443, 2020.
Article in English | MEDLINE | ID: mdl-32754082

ABSTRACT

Human adults can optimally combine vision with self-motion to facilitate navigation. In the absence of visual input (e.g., dark environments and visual impairments), sensory substitution devices (SSDs), such as The vOICe or BrainPort, which translate visual information into auditory or tactile information, could be used to increase navigation precision when integrated with each other or with self-motion. In Experiment 1, we compared and assessed The vOICe and BrainPort together in an aerial maps task performed by a group of sighted participants. In Experiment 2, we examined whether sighted individuals and a group of visually impaired (VI) individuals could benefit from using The vOICe, with and without self-motion, to accurately navigate a three-dimensional (3D) environment. In both studies, 3D motion tracking data were used to determine the level of precision with which participants performed two different tasks (an egocentric and an allocentric task) in three different conditions (two unisensory conditions and one multisensory condition). In Experiment 1, we found no benefit of using the devices together. In Experiment 2, sighted performance with The vOICe was almost as good as that with self-motion despite a short training period, although we found no benefit (reduction in variability) of using The vOICe and self-motion in combination compared to the two in isolation. In contrast, the group of VI participants did benefit from combining The vOICe and self-motion despite the low number of trials. Finally, while both groups became more accurate in their use of The vOICe with increased trials, only the VI group showed an increased level of accuracy in the combined condition. Our findings highlight how exploiting non-visual multisensory integration to develop new assistive technologies could be key to helping blind and VI persons, especially given their difficulty in attaining allocentric information.

15.
J Exp Psychol Hum Percept Perform ; 46(10): 1105-1117, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32718153

ABSTRACT

The brain's ability to integrate information from the different senses is essential for decreasing sensory uncertainty and ultimately limiting errors. Temporal correspondence is one of the key processes that determines whether information from different senses will be integrated and is influenced by both experience- and task-dependent mechanisms in adults. Here we investigated the development of both task- and experience-dependent temporal mechanisms by testing 7-8-year-old children, 10-11-year-old children, and adults in two tasks (simultaneity judgment, temporal order judgment) using audiovisual stimuli with differing degrees of association based on prior experience (low for beep-flash vs. high for face-voice). By fitting an independent channels model to the data, we found that while the experience-dependent mechanism of audiovisual simultaneity perception is already adult-like in 10-11-year-old children, the task-dependent mechanism is still not. These results indicate that differing maturation rates of experience-dependent and task-dependent mechanisms underlie the development of multisensory integration. Understanding this development has important implications for clinical and educational interventions. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
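The independent channels model fitted in this study treats the audio and visual signals as arriving through separate channels with stochastic latencies. In one common simplified form (an illustrative assumption here, not the paper's exact parameterisation), the arrival-time difference is Gaussian and "simultaneous" is reported whenever it falls within a decision criterion:

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_simultaneous(soa, tau, sigma, criterion):
    """Probability of a 'simultaneous' response at a given stimulus
    onset asynchrony (soa, in ms). tau is a latency offset between
    the two channels, sigma the SD of the arrival-time difference;
    'simultaneous' is reported when |difference| <= criterion."""
    mu = soa + tau
    return phi((criterion - mu) / sigma) - phi((-criterion - mu) / sigma)

# Illustrative parameter values, not fitted values from the paper
p_sync = p_simultaneous(soa=0.0, tau=0.0, sigma=50.0, criterion=100.0)
p_async = p_simultaneous(soa=300.0, tau=0.0, sigma=50.0, criterion=100.0)
```

Under this sketch, experience-dependent differences (e.g., beep-flash vs. face-voice) would surface in sigma and tau, while task-dependent differences (SJ vs. TOJ) would surface in the decision criterion.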


Subject(s)
Auditory Perception/physiology , Human Development/physiology , Psychomotor Performance/physiology , Time Perception/physiology , Visual Perception/physiology , Adult , Child , Female , Humans , Male , Young Adult
16.
Brain Res ; 1723: 146381, 2019 11 15.
Article in English | MEDLINE | ID: mdl-31419429

ABSTRACT

To increase perceptual precision, the adult brain dynamically combines redundant information from different senses depending on its reliability. During object size estimation, for example, visual, auditory and haptic information can be integrated to increase the precision of the final size estimate. Young children, however, do not integrate sensory information optimally and instead rely on active touch. Whether this early haptic dominance is reflected in age-related differences in neural mechanisms, and whether it is driven by changes in bottom-up perceptual or top-down attentional processes, has not yet been investigated. Here, we recorded event-related potentials from a group of adults and children aged 5-7 years during an object size perception task using auditory, visual and haptic information. Multisensory information was presented either congruently (conveying the same information) or incongruently (conveying conflicting information). No behavioral responses were required from participants. When haptic size information was available via actively tapping the objects, response amplitudes over the mid-parietal area were significantly reduced by information congruency in children, but not in adults, between 190-250 ms and 310-370 ms. These findings indicate that during object size perception only children's brain activity is modulated by active touch, supporting a neural maturational shift from sensory dominance in early childhood to optimal multisensory benefit in adulthood.


Subject(s)
Size Perception/physiology , Touch Perception/physiology , Touch/physiology , Adult , Age Factors , Attention/physiology , Brain/physiology , Brain Mapping , Child , Child, Preschool , Evoked Potentials/physiology , Female , Humans , Male , Reproducibility of Results , Visual Perception/physiology
17.
Front Psychol ; 9: 2168, 2018.
Article in English | MEDLINE | ID: mdl-30473677

ABSTRACT

Long-term music training has been shown to affect various cognitive and perceptual abilities. However, it is less well known whether it also affects the perception of emotion from music, especially purely rhythmic music. Hence, we asked a group of 16 non-musicians, 16 musicians with no drumming experience, and 16 drummers to judge the level of expressiveness, the valence (positive or negative), and the category of emotion perceived from 96 drumming improvisation clips (audio-only, video-only, and audio-video) that varied in several musical features (e.g., genre, tempo, complexity, drummer's expressiveness, and drummer's style). Our results show that the level and type of music training influence the expressiveness, valence, and emotion perceived from solo drumming improvisation. Non-musicians, non-drummer musicians, and drummers were affected differently by changes in some characteristics of the performance; for example, musicians (with and without drumming experience) gave greater weight to the visual performance than non-musicians when making their emotional judgments. These findings suggest that, besides influencing several cognitive and perceptual abilities, music training also affects how we perceive emotion from music.

18.
Front Hum Neurosci ; 12: 274, 2018.
Article in English | MEDLINE | ID: mdl-30018545

ABSTRACT

Multisensory processing is a core perceptual capability, and the need to understand its neural bases poses a fundamental problem in the study of brain function. Both synchrony and temporal order judgments are commonly used to investigate synchrony perception between different sensory cues and multisensory perception in general. However, extensive behavioral evidence indicates that these tasks do not measure identical perceptual processes. Here we used functional magnetic resonance imaging to investigate how behavioral differences between the tasks are instantiated as neural differences. As these neural differences could manifest at the sustained (task/state-related) and/or transient (event-related) level of processing, a mixed block/event-related design was used to investigate neural responses at both time-scales. Clear differences in both sustained and transient BOLD responses were observed between the two tasks, consistent with behavioral differences indeed arising from overlapping but divergent neural mechanisms. Temporal order judgments, but not synchrony judgments, required transient activation in several left-hemisphere regions, which may reflect increased task demands caused by an extra stage of processing. Our results highlight that multisensory integration mechanisms can be task dependent, which, in particular, has implications for the study of atypical temporal processing in clinical populations.

19.
Exp Brain Res ; 236(7): 1869-1880, 2018 Jul.
Article in English | MEDLINE | ID: mdl-29687204

ABSTRACT

To overcome differences in physical transmission time and neural processing, the brain adaptively recalibrates the point of simultaneity between auditory and visual signals by adapting to audiovisual asynchronies. Here, we examine whether this prolonged recalibration of passively sensed visual and auditory signals is affected by naturally occurring multisensory training known to enhance audiovisual perceptual accuracy. Hence, we asked groups of drummers, non-drummer musicians, and non-musicians to judge the audiovisual simultaneity of musical and non-musical audiovisual events, before and after adaptation to two fixed audiovisual asynchronies. We found that recalibration for musicians and drummers was in the opposite direction (sound leading vision) to that of non-musicians (vision leading sound), and changed with both increased music training and increased perceptual accuracy (i.e., the ability to detect asynchrony). Our findings demonstrate that long-term musical training reshapes the way humans adaptively recalibrate simultaneity between auditory and visual signals.


Subject(s)
Auditory Perception/physiology , Music , Teaching , Visual Perception/physiology , Acoustic Stimulation , Adaptation, Physiological , Adult , Female , Humans , Judgment , Male , Photic Stimulation , Psychophysics , Reaction Time/physiology , Time Factors , Young Adult
20.
Sci Rep ; 6: 29163, 2016 07 06.
Article in English | MEDLINE | ID: mdl-27381183

ABSTRACT

Human adults can optimally integrate visual and non-visual self-motion cues when navigating, while children up to 8 years old cannot. Whether older children can is unknown, limiting our understanding of how our internal multisensory representation of space develops. Eighteen adults and fifteen 10- to 11-year-old children were guided along a two-legged path in darkness (self-motion only), in a virtual room (visual + self-motion), or were shown a pre-recorded walk in the virtual room while standing still (visual only). Participants then reproduced the path in darkness. We obtained a measure of the dispersion of the end points (variable error) and of their distance from the correct end point (constant error). Only children reduced their variable error when recalling the path in the visual + self-motion condition, indicating combination of these cues. Adults showed a constant error in the combined condition intermediate between those for the single cues, indicative of cue competition, which may explain the lack of near-optimal integration in this group. This suggests that later in childhood humans can gain from optimally integrating spatial cues that, in the same situation, are kept separate in adulthood.
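The two behavioural measures named here can be computed directly from the recorded 2D end points: variable error as the dispersion of end points around their own centroid, and constant error as the distance of that centroid from the correct end point. A minimal sketch under those assumptions (function and variable names are ours, not from the study):

```python
import numpy as np

def variable_and_constant_error(endpoints, target):
    """endpoints: (n, 2) array of reproduced path end points;
    target: (2,) coordinates of the correct end point."""
    endpoints = np.asarray(endpoints, dtype=float)
    centroid = endpoints.mean(axis=0)
    # Variable error: RMS distance of end points from their own centroid
    ve = float(np.sqrt(((endpoints - centroid) ** 2).sum(axis=1).mean()))
    # Constant error: distance of the mean end point from the target
    ce = float(np.linalg.norm(centroid - np.asarray(target, dtype=float)))
    return ve, ce
```

On this reading, cue combination shows up as a lower variable error in the visual + self-motion condition, while a combined-condition constant error lying between the two unisensory constant errors is the signature of cue competition.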


Subject(s)
Aging/physiology , Motion Perception , Space Perception , Vision, Ocular/physiology , Adult , Age Factors , Child , Female , Humans , Male