Results 1 - 20 of 25
1.
Trends Hear ; 27: 23312165231182289, 2023.
Article in English | MEDLINE | ID: mdl-37611181

ABSTRACT

Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues, resulting from cochlear implants (CI) or unilateral hearing loss (uHL), allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing (NH, N = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH participants listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual attention orienting can be preserved in bilateral CI users and uHL patients to a greater extent than in unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone, to capture to what extent it may enable or impede typical interactions with the multisensory environment.
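In this family of paradigms, the orienting measure is typically the reaction-time benefit for visual targets appearing on the same side as a task-irrelevant sound. The sketch below is illustrative only, under that standard definition; the function name and the data are assumptions, not taken from the study.

```python
import numpy as np

def orienting_effect_ms(rt_valid_s, rt_invalid_s):
    """Audio-visual attentional orienting effect: mean reaction time on trials
    where the lateralized sound and the visual target appeared on opposite sides
    (invalid) minus trials where they appeared on the same side (valid).
    Positive values indicate that the sound captured visual attention."""
    return (np.mean(rt_invalid_s) - np.mean(rt_valid_s)) * 1000.0

# Illustrative reaction times (seconds), not data from the study.
rt_valid = [0.41, 0.39, 0.44, 0.40, 0.42]
rt_invalid = [0.46, 0.45, 0.48, 0.47, 0.44]
print(f"Orienting effect: {orienting_effect_ms(rt_valid, rt_invalid):.0f} ms")
```

An effect near zero would indicate that the sound no longer orients visual attention, as reported on average for the unilateral CI group.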


Subject(s)
Cochlear Implantation , Cochlear Implants , Deafness , Hearing Loss, Unilateral , Hearing Loss , Sound Localization , Speech Perception , Adult , Humans , Cues , Hearing , Cochlear Implantation/methods
2.
Eur Arch Otorhinolaryngol ; 280(8): 3661-3672, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36905419

ABSTRACT

BACKGROUND AND PURPOSE: Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills. Evidence that training these abilities in UCI users is possible remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS: Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing-to-sound task and in an audio-visual attention orienting task, before and after each training. The study is registered at clinicaltrials.gov (NCT04183348). RESULTS: During the Spatial VR training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial training than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS: Our results showed that sound localization in UCI users improves during a Spatial training, with benefits that also extend to a non-trained sound localization task (generalization). These findings hold potential for novel rehabilitation procedures in clinical contexts.


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Speech Perception , Humans , Hearing , Cochlear Implantation/methods , Hearing Tests/methods
3.
Eur Arch Otorhinolaryngol ; 280(8): 3557-3566, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36763152

ABSTRACT

PURPOSE: To develop and validate a new questionnaire, the Kid-SSQ, for the rapid screening of hearing abilities in children with hearing impairment, aged 7-17 years. METHODS: The questionnaire was constructed from two existing, validated versions of the 'Speech, Spatial and Qualities of Hearing' (SSQ) questionnaire (pediatric form and adult short form). The 12 selected items included auditory aspects from three subscales: speech perception, spatial hearing, and qualities of hearing. This new short form was then validated in 154 children with cochlear implants (100 bilaterally and 54 unilaterally implanted children). Construct validity was assessed by testing relationships between Kid-SSQ scores and objective clinical parameters (e.g., age at test, pure-tone audiometry (PTA) threshold, speech reception threshold (SRT), and duration of binaural experience). RESULTS: Completion time was acceptable for use with children (less than 10 min) and the non-response rate was less than 1%. Good internal consistency was obtained (Cronbach's α = 0.78), with a stable internal structure corresponding to the three intended subscales. External validity showed the specificity of each subscale: speech subscale scores were significantly predicted (r = 0.32, p < 0.001) by both the 2 kHz PTA threshold (β = 0.33, p < 0.001) and the SRT (β = -0.23, p < 0.001). Children with more binaural experience showed significantly higher scores on the spatial subscale than children with less binaural experience (F(1,98) = 5.1, p < 0.03), and the qualities of hearing subscale scores significantly depended on both age and SRT (r = 0.32, p < 0.001). CONCLUSIONS: The Kid-SSQ is a robust and clinically useful questionnaire for self-assessment of difficulties in various auditory domains.
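The internal-consistency figure reported above (Cronbach's α = 0.78) follows the standard definition. As a minimal sketch of that computation, assuming a respondents-by-items score matrix (variable names and data are illustrative, not the Kid-SSQ dataset):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Illustrative 5 respondents x 4 items (0-10 ratings), not real questionnaire data.
example = [[7, 8, 6, 7], [5, 6, 5, 6], [9, 9, 8, 9], [4, 5, 4, 5], [6, 7, 6, 6]]
print(f"alpha = {cronbach_alpha(example):.2f}")
```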


Subject(s)
Cochlear Implants , Hearing Loss , Speech Perception , Adult , Humans , Child , Speech , Hearing Loss/diagnosis , Hearing/physiology , Surveys and Questionnaires , Speech Perception/physiology , Audiometry, Pure-Tone
4.
Ear Hear ; 44(1): 61-76, 2023.
Article in English | MEDLINE | ID: mdl-35943235

ABSTRACT

OBJECTIVE: The aim of this study was to evaluate the feasibility of a virtual reality-based spatial hearing training protocol in bilateral cochlear implant (CI) users and to provide pilot data on the impact of this training on different qualities of hearing. DESIGN: Twelve bilateral CI adults aged between 19 and 69 years followed an intensive 10-week rehabilitation program comprising eight virtual reality training sessions (two per week) interspersed with several evaluation sessions (2 weeks before training started, after four and eight training sessions, and 1 month after the end of training). During each 45-minute training session, participants localized a sound source whose position varied in azimuth and/or in elevation. At the start of each trial, CI users received no information about sound location, but after each response, feedback was given to enable error correction. Participants were divided into two groups: a multisensory feedback group (audiovisual spatial cue) and a unisensory group (visual spatial cue) that only received feedback in a wholly intact sensory modality. Training benefits were measured at each evaluation point using three tests: 3D sound localization in virtual reality, the French Matrix test, and the Speech, Spatial and other Qualities of Hearing questionnaire. RESULTS: The training was well accepted and all participants attended the whole rehabilitation program. Four training sessions spread across 2 weeks were insufficient to induce significant performance changes, whereas performance on all three tests improved after eight training sessions. Front-back confusions decreased from 32% to 14.1% (p = 0.017); the speech recognition threshold improved from 1.5 dB to -0.7 dB signal-to-noise ratio (p = 0.029), and eight CI users successfully achieved a negative signal-to-noise ratio. One month after the end of structured training, these performance improvements were still present, and quality of life was significantly improved for both self-reports of sound localization (from 5.3 to 6.7, p = 0.015) and speech understanding (from 5.2 to 5.9, p = 0.048). CONCLUSIONS: This pilot study shows the feasibility and potential clinical relevance of this type of intervention involving an immersive sensory environment and could pave the way for more systematic rehabilitation programs after cochlear implantation.


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Speech Perception , Adult , Humans , Infant, Newborn , Cochlear Implantation/methods , Pilot Projects , Quality of Life , Speech Perception/physiology , Hearing/physiology
5.
Ear Hear ; 44(1): 189-198, 2023.
Article in English | MEDLINE | ID: mdl-35982520

ABSTRACT

OBJECTIVES: We assessed if spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks. DESIGN: In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS: During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by greater reduction of sound localization error in azimuth and more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS: Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way to novel rehabilitation procedures in clinical contexts.
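The crossover logic described above amounts to a within-participant difference of differences: each person's error reduction after the spatial training is contrasted with their error reduction after the control training. A minimal sketch of that contrast is shown below; the array names and values are hypothetical, not the authors' analysis code.

```python
import numpy as np

def crossover_training_benefit(pre_spatial, post_spatial, pre_control, post_control):
    """Per-participant contrast for a crossover design: reduction in absolute
    azimuth localization error (degrees) after the spatial training minus the
    reduction after the non-spatial control training. Positive values mean the
    spatial training helped more than the control."""
    spatial_gain = np.asarray(pre_spatial, float) - np.asarray(post_spatial, float)
    control_gain = np.asarray(pre_control, float) - np.asarray(post_control, float)
    return spatial_gain - control_gain

# Illustrative per-participant mean errors (degrees), not data from the study.
benefit = crossover_training_benefit([32, 28, 40], [21, 22, 30], [31, 29, 38], [29, 27, 37])
print(benefit)  # array([9., 4., 9.])
```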


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Humans , Auditory Perception/physiology , Cochlear Implantation/methods , Hearing/physiology , Hearing Tests/methods , Sound Localization/physiology , Cross-Over Studies
6.
PLoS One ; 17(4): e0263509, 2022.
Article in English | MEDLINE | ID: mdl-35421095

ABSTRACT

Localising sounds means having the ability to process auditory cues deriving from the interplay among sound waves, the head and the ears. When auditory cues change because of temporary or permanent hearing loss, sound localization becomes difficult and uncertain. The brain can adapt to altered auditory cues throughout life, and multisensory training can promote the relearning of spatial hearing skills. Here, we study the training potential of sound-oriented motor behaviour to test whether training based on manual actions toward sounds can induce learning effects that generalize to different auditory spatial tasks. We assessed spatial hearing relearning in normal-hearing adults with a plugged ear by using visual virtual reality and body motion tracking. Participants performed two auditory tasks that entail explicit and implicit processing of sound position (head-pointing sound localization and audio-visual attention cueing, respectively), before and after having received a spatial training session in which they identified sound position by reaching to auditory sources nearby. Using a crossover design, the effects of the above-mentioned spatial training were compared to a control condition involving the same physical stimuli, but different task demands (i.e., a non-spatial discrimination of amplitude modulations in the sound). According to our findings, spatial hearing in one-ear-plugged participants improved more after the reaching-to-sounds training than in the control condition. Training by reaching also modified head-movement behaviour during listening. Crucially, the improvements observed during training also generalized to a different sound localization task, possibly as a consequence of acquired and novel head-movement strategies.


Asunto(s)
Señales (Psicología) , Localización de Sonidos , Estimulación Acústica , Adaptación Fisiológica , Adulto , Percepción Auditiva , Estudios Cruzados , Audición , Humanos
7.
Ear Hear ; 43(1): 192-205, 2022.
Article in English | MEDLINE | ID: mdl-34225320

ABSTRACT

OBJECTIVES: The aim of this study was to assess three-dimensional (3D) spatial hearing abilities in reaching space of children and adolescents fitted with bilateral cochlear implants (BCI). The study also investigated the impact of spontaneous head movements on sound localization abilities. DESIGN: BCI children (N = 18, aged between 8 and 17) and age-matched normal-hearing (NH) controls (N = 18) took part in the study. Tests were performed using immersive virtual reality equipment that allowed control over visual information and initial eye position, as well as real-time 3D motion tracking of head and hand position with subcentimeter accuracy. The experiment exploited these technical features to achieve trial-by-trial exact positioning in head-centered coordinates of a single loudspeaker used for real, near-field sound delivery, which was reproducible across trials and participants. Using this novel approach, broadband sounds were delivered at different azimuths within the participants' arm length, in front and back space, at two different distances from their heads. Continuous head monitoring allowed us to compare two listening conditions: "head immobile" (no head movements allowed) and "head moving" (spontaneous head movements allowed). Sound localization performance was assessed by computing the mean 3D error (i.e., the difference in space between the X-Y-Z position of the loudspeaker and the participant's final hand position used to indicate the localization of the sound's source), as well as the percentage of front-back and left-right confusions in azimuth, and the discriminability between two nearby distances. Several clinical factors (i.e., age at test, interimplant interval, and duration of binaural experience) were also correlated with the mean 3D error. Finally, the Speech Spatial and Qualities of Hearing Scale was administered to BCI participants and their parents. RESULTS: Although BCI participants distinguished well between left and right sound sources, near-field spatial hearing remained challenging, particularly under the "head immobile" condition. Without visual priors of the sound position, response accuracy was lower than that of their NH peers, as evidenced by the mean 3D error (BCI: 55 cm, NH: 24 cm, p = 0.008). The BCI group mainly pointed along the interaural axis, corresponding to the position of their CI microphones. This led to frequent front-back confusions (44.6%). Distance discrimination also remained challenging for BCI users, mostly due to the sound compression applied by their processors. Notably, BCI users benefitted from head movements under the "head moving" condition, with a significant decrease of the 3D error when pointing to front targets (p < 0.001). Interimplant interval was correlated with the 3D error (p < 0.001), whereas no correlation with self-assessment of spatial hearing difficulties emerged (p = 0.9). CONCLUSIONS: In reaching space, BCI children and adolescents are able to extract enough auditory cues to discriminate sound side. However, without any visual cues or spontaneous head movements during sound emission, their localization abilities are substantially impaired for front-back and distance discrimination. Exploring the environment with head movements was a valuable strategy for improving sound localization in individuals with different clinical backgrounds. These novel findings could prompt new perspectives to better understand sound localization maturation in BCI children, and more broadly in patients with hearing loss.
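Two of the outcome measures above have straightforward definitions: the mean 3D error is the average Euclidean distance between loudspeaker and hand positions, and front-back confusions count responses landing in the wrong front/back hemifield. The sketch below illustrates both under those definitions; the function names and the exclusion margin near the interaural axis are assumptions, not the authors' code.

```python
import numpy as np

def mean_3d_error(speaker_xyz, hand_xyz):
    """Mean Euclidean distance (same units as the inputs, e.g. cm) between the
    loudspeaker position and the final hand position across trials."""
    diff = np.asarray(speaker_xyz, float) - np.asarray(hand_xyz, float)
    return np.linalg.norm(diff, axis=1).mean()

def front_back_confusion_pct(target_az_deg, response_az_deg, margin_deg=5.0):
    """Percentage of trials where the response falls in the opposite front/back
    hemifield from the target (azimuth 0 deg = straight ahead, |az| > 90 = rear).
    Trials with targets or responses within `margin_deg` of the interaural axis
    are excluded (an assumed convention)."""
    t = np.abs(np.asarray(target_az_deg, float))
    r = np.abs(np.asarray(response_az_deg, float))
    keep = (np.abs(t - 90) > margin_deg) & (np.abs(r - 90) > margin_deg)
    confused = (t[keep] < 90) != (r[keep] < 90)
    return 100.0 * confused.mean()
```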


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Loss , Sound Localization , Speech Perception , Adolescent , Child , Cochlear Implantation/methods , Head Movements , Hearing , Humans
8.
Neuropsychologia ; 149: 107665, 2020 12.
Article in English | MEDLINE | ID: mdl-33130161

ABSTRACT

When localising sounds in space, the brain relies on internal models that specify the correspondence between the auditory input reaching the ears, initial head position, and coordinates in external space. These models can be updated throughout life, setting the basis for re-learning spatial hearing abilities in adulthood. In addition, strategic behavioural adjustments allow people to quickly adapt to atypical listening situations. Until recently, the potential role of dynamic listening, involving head movements or reaching to sounds, has remained largely overlooked. Here, we exploited visual virtual reality (VR) and real-time kinematic tracking to study the role of active multisensory-motor interactions when hearing individuals adapt to altered binaural cues (one ear plugged and muffed). Participants were immersed in a VR scenario showing 17 virtual speakers at ear level. In each trial, they heard a sound delivered from a real speaker aligned with one of the virtual ones and were instructed to either reach to touch the perceived sound source (Reaching group) or read the label associated with the speaker (Naming group). Participants were free to move their heads during the task and received audio-visual feedback on their performance. Most importantly, they performed the task under binaural or monaural listening. Results show that both groups adapted rapidly to monaural listening, improving sound localisation performance across trials and changing their head-movement behaviour. Reaching to the sounds induced faster and larger sound localisation improvements compared to just naming their position. This benefit was linked to progressively wider head movements to explore auditory space, selectively in the Reaching group. In conclusion, reaching to sounds in an immersive visual VR context proved most effective for adapting to altered binaural listening. Head movements played an important role in adaptation, pointing to the importance of dynamic listening when implementing training protocols for improving spatial hearing.


Subject(s)
Sound Localization , Virtual Reality , Adaptation, Physiological , Adult , Cues , Hearing , Humans
9.
Front Hum Neurosci ; 14: 549537, 2020.
Article in English | MEDLINE | ID: mdl-33132873

ABSTRACT

Fast, online control of movement is an essential component of human motor skills, as it allows automatic correction of inaccurate planning. The present study explores the role of two types of concurrent signals in error correction: predicted visual reafferences coming from an internal representation of the hand, and actual visual feedback from the hand. While the role of sensory feedback in these corrections is well-established, much less is known about sensory prediction. The relative contributions of these two types of signals remain a subject of debate, as they are naturally interconnected. We address the issue in a study that compares online correction of an artificially induced, undetected planning error. Two conditions are tested, which only differ with respect to the accuracy of predicted visual reafferences. In the first, "Prism" experiment, a planning error is introduced by prisms that laterally displace the seen hand prior to hand movement onset. The prism-induced conflict between visual and proprioceptive inputs of the hand also generates an erroneous prediction of visual reafferences of the moving hand. In the second, "Jump" experiment, a planning error is introduced by a jump in the target position, during the orienting saccade, prior to hand movement onset. In the latter condition, predicted reafferences of the hand remained intact. In both experiments, after hand movement onset, the hand was either visible or hidden, which enabled us to manipulate the presence (or absence) of visual feedback during movement execution. The Prism experiment highlighted late and reduced correction of the planning error, even when natural visual feedback of the moving hand was available. In the Jump experiment, early and automatic corrections of the planning error were observed, even in the absence of visual feedback from the moving hand. Therefore, when predicted reafferences were accurate (the Jump experiment), visual feedback was processed rapidly and automatically. When they were erroneous (the Prism experiment), the same visual feedback was less efficient, and required voluntary, and late, control. Our study clearly demonstrates that in natural environments, reliable prediction is critical in the preprocessing of visual feedback, for fast and accurate movement.

10.
Conscious Cogn ; 64: 135-145, 2018 09.
Article in English | MEDLINE | ID: mdl-30025675

ABSTRACT

Visuo-motor adaptation has been classically studied using movements aimed at visual targets with visual feedback. In this type of experimental design, the respective roles of the different error signals cannot be fully disentangled. Here, we show that visuo-motor adaptation occurs despite the terminal success of the action and the compensation of the external error by a jump of the visual target. By using three grasping task conditions, we manipulated the retinal error signal between the seen hand and the target (external error) and the conflict between the hand's visual reafference and either the proprioceptive or the efference copy signal (internal error), in order to estimate their respective roles in prism adaptation. In all conditions, subjects were asked to rapidly grasp an object. In the classical 'Prism' condition, the object was stationary, which provided both external and internal errors. In the 'Prism & Jump' condition, at movement onset the object was suddenly displaced (jump) toward its virtual image location (visually displaced by the prism), which also corresponded to the location toward which the movement was planned and executed through the prisms. This jump therefore cancelled the external error (between the seen target and the seen hand), whereas the internal error (between the seen hand and the expected visual reafference of the hand, or between the seen hand and the hand felt by proprioception) was unchanged (because it is independent of the presence of the goal). In the 'Jump' condition, the movement was planned and executed without prismatic goggles and consequently with no internal error (no difference between where the hand's visual reafference is expected to be and where it actually is), but the object was suddenly displaced at movement onset by a displacement equivalent to a prism shift, which provided an external error. The 'Prism' and 'Prism & Jump' conditions exhibited similar aftereffects, whereas no aftereffect was observed in the 'Jump' condition. These results suggest that successful actions can be subjected to adaptation and that internal error is the only signal necessary to elicit true visuomotor adaptation characterized by context-independent generalization.


Asunto(s)
Retroalimentación Sensorial/fisiología , Propiocepción/fisiología , Desempeño Psicomotor/fisiología , Adaptación Fisiológica , Adulto , Femenino , Humanos , Masculino , Movimiento , Percepción Visual , Adulto Joven
11.
Ann Phys Rehabil Med ; 61(6): 401-406, 2018 Nov.
Article in English | MEDLINE | ID: mdl-29782953

ABSTRACT

OBJECTIVES: After a coma, one major challenge is the detection of awareness in patients with disorders of consciousness. In some patients, the only manifestation indicative of awareness is an appropriate emotional response. Preferred music is a powerful medium to elicit emotions and autobiographical memory. Furthermore, music has been shown to improve cognitive functions both in healthy subjects and in patients with neurological impairment. We hypothesized that signs of awareness could be enhanced in some patients with disorders of consciousness under appropriate emotional stimulation, such as preferred music and possibly preferred odors. METHODS: To investigate an objective, easily recordable marker of emotions at the patients' bedside, electrodermal activity (skin conductance level, SCL) was assessed during stimulation in the auditory and olfactory modalities, notably with preferred music, neutral sound, preferred odors, and neutral odors. The study was conducted in 11 patients with disorders of consciousness (DOC) and 7 healthy participants. RESULTS: In healthy subjects, the mean amplitude of the SCL was increased during exposure to preferred music as compared to neutral sounds (0.00037 ± 0.0004 vs. -0.00004 ± 0.00019 µS, respectively). No significant difference between conditions was detected in patients. CONCLUSION: The results of this study suggest that electrodermal activity could be a useful marker of emotions induced by music in healthy controls. However, it failed to show any significant difference between conditions in patients with DOC.


Asunto(s)
Percepción Auditiva/fisiología , Trastornos de la Conciencia/fisiopatología , Emociones/fisiología , Respuesta Galvánica de la Piel/fisiología , Percepción Olfatoria/fisiología , Estimulación Acústica/métodos , Adulto , Aromaterapia/métodos , Estudios de Casos y Controles , Estado de Conciencia/fisiología , Trastornos de la Conciencia/psicología , Trastornos de la Conciencia/rehabilitación , Femenino , Humanos , Masculino , Persona de Mediana Edad , Música/psicología
12.
J Neurophysiol ; 119(5): 1981-1992, 2018 05 01.
Article in English | MEDLINE | ID: mdl-29465322

ABSTRACT

When reaching to an object, information about the target location as well as the initial hand position is required to program the motor plan for the arm. The initial hand position can be determined by proprioceptive information as well as visual information, if available. Bayes-optimal integration posits that we utilize all information available, with greater weighting on the sense that is more reliable, thus generally weighting visual information more than the usually less reliable proprioceptive information. The criterion by which information is weighted has not been explicitly investigated; it has been assumed that the weights are based on task- and effector-dependent sensory reliability requiring an explicit neuronal representation of variability. However, the weights could also be determined implicitly through learned modality-specific integration weights and not on effector-dependent reliability. While the former hypothesis predicts different proprioceptive weights for left and right hands, e.g., due to different reliabilities of dominant vs. nondominant hand proprioception, we would expect the same integration weights if the latter hypothesis was true. We found that the proprioceptive weights for the left and right hands were extremely consistent regardless of differences in sensory variability for the two hands as measured in two separate complementary tasks. Thus we propose that proprioceptive weights during reaching are learned across both hands, with high interindividual range but independent of each hand's specific proprioceptive variability. NEW & NOTEWORTHY How visual and proprioceptive information about the hand are integrated to plan a reaching movement is still debated. The goal of this study was to clarify how the weights assigned to vision and proprioception during multisensory integration are determined. We found evidence that the integration weights are modality specific rather than based on the sensory reliabilities of the effectors.
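Bayes-optimal (minimum-variance) integration, as invoked above, fixes the weights from the variances of the individual estimates: the weight on vision is its reliability (inverse variance) divided by the summed reliabilities. The sketch below illustrates that formula; it is not the authors' model code, and the numbers are illustrative.

```python
def optimal_visual_weight(sigma_vis, sigma_prop):
    """Minimum-variance (Bayes-optimal) weight on vision when combining visual and
    proprioceptive estimates of hand position:
    w_vis = (1/sigma_vis^2) / (1/sigma_vis^2 + 1/sigma_prop^2)."""
    rel_vis, rel_prop = 1.0 / sigma_vis**2, 1.0 / sigma_prop**2
    return rel_vis / (rel_vis + rel_prop)

def integrated_estimate(x_vis, x_prop, sigma_vis, sigma_prop):
    """Weighted average of the two position estimates using the optimal weight."""
    w = optimal_visual_weight(sigma_vis, sigma_prop)
    return w * x_vis + (1.0 - w) * x_prop

# If vision is twice as precise as proprioception, it receives 80% of the weight.
print(optimal_visual_weight(sigma_vis=0.5, sigma_prop=1.0))  # 0.8
```

The study's contrast is whether such weights track each hand's measured variability (effector-dependent reliability, as in the formula above) or remain fixed across hands (learned, modality-specific weights).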


Asunto(s)
Mano/fisiología , Actividad Motora/fisiología , Propiocepción/fisiología , Desempeño Psicomotor/fisiología , Percepción Visual/fisiología , Adulto , Femenino , Humanos , Masculino , Persona de Mediana Edad , Adulto Joven
13.
Front Hum Neurosci ; 10: 91, 2016.
Article in English | MEDLINE | ID: mdl-27014023

ABSTRACT

Perception of our visual environment strongly depends on saccadic eye movements, which in turn are calibrated by saccadic adaptation mechanisms elicited by systematic movement errors. Current models of saccadic adaptation assume that visual error signals are acquired only after saccade completion, because the high speed of saccade execution disturbs visual processing (saccadic "suppression" and "mislocalization"). Complementing a previous study from our group, here we report that visual information presented during saccades can drive adaptation mechanisms and we further determine the critical time window of such error processing. In 15 healthy volunteers, shortening adaptation of reactive saccades toward a ±8° visual target was induced by flashing the target for 2 ms less eccentrically than its initial location either near saccade peak velocity ("PV" condition) or peak deceleration ("PD") or saccade termination ("END"). Results showed that, as compared to the "CONTROL" condition (target flashed at its initial location upon saccade termination), saccade amplitude decreased all throughout the "PD" and "END" conditions, reaching significant levels in the second adaptation and post-adaptation blocks. The results of nine other subjects tested in a saccade lengthening adaptation paradigm with the target flashing near peak deceleration ("PD" and "CONTROL" conditions) revealed no significant change of gain, confirming that saccade shortening adaptation is easier to elicit. Also, together with this last result, the stable gain observed in the "CONTROL" conditions of both experiments suggests that mislocalization of the target flash is not responsible for the saccade shortening adaptation demonstrated in the first group. Altogether, these findings reveal that the visual "suppression" and "mislocalization" phenomena related to saccade execution do not prevent brief visual information delivered "in-flight" from being processed to elicit oculomotor adaptation.
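Adaptation in this kind of paradigm is usually expressed as a change in saccadic gain (primary saccade amplitude divided by target eccentricity, here ±8°). The sketch below shows that bookkeeping under the standard definition; the names and amplitudes are illustrative, not the study's analysis code or data.

```python
import numpy as np

def saccadic_gain(amplitudes_deg, target_eccentricity_deg=8.0):
    """Saccadic gain: primary saccade amplitude divided by target eccentricity."""
    return np.asarray(amplitudes_deg, float) / target_eccentricity_deg

def relative_gain_change(pre_amplitudes, post_amplitudes, ecc_deg=8.0):
    """Relative change in mean gain between a pre-adaptation block and a later
    block; negative values indicate saccade-shortening adaptation."""
    pre = saccadic_gain(pre_amplitudes, ecc_deg).mean()
    post = saccadic_gain(post_amplitudes, ecc_deg).mean()
    return (post - pre) / pre

# Illustrative amplitudes (degrees): roughly 12% shortening.
print(f"{relative_gain_change([7.8, 7.6, 7.9], [6.9, 6.8, 6.7]):.2%}")
```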

14.
Front Hum Neurosci ; 8: 880, 2014.
Article in English | MEDLINE | ID: mdl-25408644

ABSTRACT

The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the "terminal feedback error" condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the "movement prediction error" condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the "terminal feedback error" condition only. As long as subjects remained unaware of the optical deviation and attributed pointing errors to themselves, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated.

15.
Science ; 346(6206): 241-4, 2014 Oct 10.
Article in English | MEDLINE | ID: mdl-25278504

ABSTRACT

In 2010, the international community, under the auspices of the Convention on Biological Diversity, agreed on 20 biodiversity-related "Aichi Targets" to be achieved within a decade. We provide a comprehensive mid-term assessment of progress toward these global targets using 55 indicator data sets. We projected indicator trends to 2020 using an adaptive statistical framework that incorporated the specific properties of individual time series. On current trajectories, results suggest that despite accelerating policy and management responses to the biodiversity crisis, the impacts of these efforts are unlikely to be reflected in improved trends in the state of biodiversity by 2020. We highlight areas of societal endeavor requiring additional efforts to achieve the Aichi Targets, and provide a baseline against which to assess future progress.


Subject(s)
Biodiversity , Conservation of Natural Resources , Extinction, Biological
16.
Neuropsychologia ; 55: 25-40, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24334110

ABSTRACT

Following the seminal investigations of Marc Jeannerod on action and perception, specifically goal-directed movement, this review article addresses visual and non-visual processes involved in guiding the hand in reaching or grasping tasks. The contributions of different sources of correction of ongoing movements are considered; these include visual feedback of the hand, as well as the often-neglected but important spatial updating and sharpening of goal localization following gaze-saccade orientation. The existence of an automatic online process guiding limb trajectory toward its goal is highlighted by a series of seminal experiments on goal-directed pointing movements. We then review psychophysical, electrophysiological, neuroimaging and clinical studies that have explored the properties of these automatic corrective mechanisms and their neural bases, and established their generality. Finally, the functional significance of automatic corrective mechanisms, referred to as motor flexibility, and their potential use in rehabilitation are discussed.


Subject(s)
Arm/physiology , Hand/physiology , Psychomotor Performance/physiology , Animals , Hand Strength/physiology , Humans , Visual Perception/physiology
17.
Neuropsychologia ; 55: 15-24, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24056297

ABSTRACT

This study investigated the motor control mechanisms that enable healthy individuals to adapt their pointing movements during prism exposure to a rightward optical shift. In the prism adaptation literature, two processes are typically distinguished. Strategic motor adjustments are thought to drive the pattern of rapid endpoint error correction typically observed during the early stage of prism exposure. This is distinguished from so-called 'true sensorimotor realignment', normally measured with a different pointing task, at the end of prism exposure, which reveals a compensatory leftward 'prism after-effect'. Here, we tested whether each mode of motor compensation - strategic adjustments versus 'true sensorimotor realignment' - could be distinguished, by analyzing patterns of kinematic change during prism exposure. We hypothesized that fast feedforward versus slower feedback error corrective processes would map onto two distinct phases of the reach trajectory. Specifically, we predicted that feedforward adjustments would drive rapid compensation of the initial (acceleration) phase of the reach, resulting in the rapid reduction of endpoint errors typically observed early during prism exposure. By contrast, we expected visual-proprioceptive realignment to unfold more slowly and to reflect feedback influences during the terminal (deceleration) phase of the reach. The results confirmed these hypotheses. Rapid error reduction during the early stage of prism exposure was achieved by trial-by-trial adjustments of the motor plan, which were proportional to the endpoint error feedback from the previous trial. By contrast, compensation of the terminal reach phase unfolded slowly across the duration of prism exposure. Even after 100 trials of pointing through prisms, adaptation was incomplete, with participants continuing to exhibit a small rightward shift in both the reach endpoints and in the terminal phase of reach trajectories. Individual differences in the degree of adaptation of the terminal reach phase predicted the magnitude of prism after-effects. In summary, this study identifies distinct kinematic signatures of fast strategic versus slow sensorimotor realignment processes, which combine to adjust motor performance to compensate for a prismatic shift.


Subject(s)
Adaptation, Physiological , Adaptation, Psychological , Psychomotor Performance/physiology , Visual Perception , Adult , Analysis of Variance , Biomechanical Phenomena , Feedback , Hand/physiology , Humans , Photic Stimulation , Proprioception , Psychophysics , Task Performance and Analysis , Time Factors
18.
PLoS One ; 8(1): e54641, 2013.
Article in English | MEDLINE | ID: mdl-23382932

ABSTRACT

Movement accuracy depends crucially on the ability to detect errors while actions are being performed. When inaccuracies occur repeatedly, both an immediate motor correction and a progressive adaptation of the motor command can unfold. Of all the movements in the motor repertoire of humans, saccadic eye movements are the fastest. Due to the high speed of saccades, and to the impairment of visual perception during saccades, a phenomenon called "saccadic suppression", it is widely believed that the adaptive mechanisms maintaining saccadic performance depend critically on visual error signals acquired after saccade completion. Here, we demonstrate that, contrary to this widespread view, saccadic adaptation can be based entirely on visual information presented during saccades. Our results show that visual error signals introduced during saccade execution--by shifting a visual target at saccade onset and blanking it at saccade offset--induce the same level of adaptation as error signals, presented for the same duration, but after saccade completion. In addition, they reveal that this processing of intra-saccadic visual information for adaptation depends critically on visual information presented during the deceleration phase, but not the acceleration phase, of the saccade. These findings demonstrate that the human central nervous system can use short intra-saccadic glimpses of visual information for motor adaptation, and they call for a reappraisal of current models of saccadic adaptation.


Subject(s)
Brain/physiology , Eye Movements , Psychomotor Performance , Visual Perception , Adaptation, Physiological , Adolescent , Adult , Female , Humans , Male , Photic Stimulation , Young Adult
19.
Front Neurosci ; 6: 127, 2012.
Article in English | MEDLINE | ID: mdl-22969703

ABSTRACT

Much theoretical attention is currently devoted to social learning. Yet, empirical studies formally comparing its effectiveness relative to individual learning are rare. Here, we focus on free choice, which is at the heart of individual reward-based learning, but absent in social learning. Choosing between two equally valued options is known to create a preference for the selected option in both humans and monkeys. We thus surmised that social learning should be more helpful when choice-induced preferences retard individual learning than when they optimize it. To test this prediction, the same task, which required finding which of two items concealed a reward, was applied to rhesus macaques and humans. The initial trial was individual or social, rewarded or unrewarded. Learning was assessed on the second trial. Choice-induced preference strongly affected individual learning. Monkeys and humans performed much more poorly after an initial negative choice than after an initial positive choice. Comparison with social learning verified our prediction. For negative outcomes, social learning surpassed or at least equaled individual learning in all subjects. For positive outcomes, the predicted superiority of individual learning did occur in a majority of subjects (5/6 monkeys and 6/12 humans). A minority kept learning better socially though, perhaps due to a more dominant/aggressive attitude toward peers. Poor learning from errors due to over-valuation of personal choices is among the decision-making biases shared by humans and animals. The present study suggests that choice-immune social learning may help curb this potentially harmful tendency. Learning from successes is an easier path. The present data suggest that whether one tends to walk it alone or with a peer's help might depend on the social dynamics within the actor/observer dyad.

20.
Hum Mov Sci ; 30(6): 1009-21, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21575995

ABSTRACT

To foveate a visual target, subjects usually execute a primary hypometric saccade (S1) bringing the target into perifoveal vision, followed by a corrective saccade (S2) or by more than one S2. It is still debated to what extent these S2 are pre-programmed or dependent only on post-saccadic retinal error. To answer this question, we used a visually-triggered saccade task in which target position and target visibility were manipulated. In one-third of the trials, the target was slightly displaced at S1 onset (so-called double step paradigm) and was maintained until the end of S1, until the start of the first S2, or until the end of the trial. Experiments took place in two visual environments: in the dark and in a dimly lit room with a visible random square background. The results showed that S2 were less accurate for the shortest target durations. The duration of post-saccadic visual integration thus appears as the main factor responsible for corrective saccade accuracy. We also found that the visual context modulates primary saccade accuracy, especially for the most hypometric subjects. These findings suggest that the saccadic system is sensitive to the visual properties of the environment and uses different strategies to maintain final gaze accuracy.


Subject(s)
Fixation, Ocular/physiology , Retina/physiology , Saccades/physiology , Visual Fields/physiology , Visual Perception/physiology , Adult , Dark Adaptation/physiology , Female , Fovea Centralis/physiology , Humans , Male , Orientation/physiology , Pattern Recognition, Visual/physiology , Reaction Time/physiology , Social Environment