Results 1 - 20 of 91
1.
Ear Hear ; 44(1): 189-198, 2023.
Article in English | MEDLINE | ID: mdl-35982520

ABSTRACT

OBJECTIVES: We assessed if spatial hearing training improves sound localization in bilateral cochlear implant (BCI) users and whether its benefits can generalize to untrained sound localization tasks. DESIGN: In 20 BCI users, we assessed the effects of two training procedures (spatial versus nonspatial control training) on two different tasks performed before and after training (head-pointing to sound and audiovisual attention orienting). In the spatial training, participants identified sound position by reaching toward the sound sources with their hand. In the nonspatial training, comparable reaching movements served to identify sound amplitude modulations. A crossover randomized design allowed comparison of training procedures within the same participants. Spontaneous head movements while listening to the sounds were allowed and tracked to correlate them with localization performance. RESULTS: During spatial training, BCI users reduced their sound localization errors in azimuth and adapted their spontaneous head movements as a function of sound eccentricity. These effects generalized to the head-pointing sound localization task, as revealed by greater reduction of sound localization error in azimuth and more accurate first head-orienting response, as compared to the control nonspatial training. BCI users benefited from auditory spatial cues for orienting visual attention, but the spatial training did not enhance this multisensory attention ability. CONCLUSIONS: Sound localization in BCI users improves with spatial reaching-to-sound training, with benefits to a nontrained sound localization task. These findings pave the way to novel rehabilitation procedures in clinical contexts.


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Humans , Auditory Perception/physiology , Cochlear Implantation/methods , Hearing/physiology , Hearing Tests/methods , Sound Localization/physiology , Cross-Over Studies
2.
Conscious Cogn ; 109: 103490, 2023 03.
Article in English | MEDLINE | ID: mdl-36842317

ABSTRACT

In spoken languages, face masks represent an obstacle to speech understanding and influence metacognitive judgments, reducing confidence and increasing effort while listening. To date, all studies on face masks and communication involved spoken languages and hearing participants, leaving us with no insight on how masked communication impacts on non-spoken languages. Here, we examined the effects of face masks on sign language comprehension and metacognition. In an online experiment, deaf participants (N = 60) watched three parts of a story signed without mask, with a transparent mask or with an opaque mask, and answered questions about story content, as well as their perceived effort, feeling of understanding, and confidence in their answers. Results showed that feeling of understanding and perceived effort worsened as the visual condition changed from no mask to transparent or opaque masks, while comprehension of the story was not significantly different across visual conditions. We propose that metacognitive effects could be due to the reduction of pragmatic, linguistic and para-linguistic cues from the lower face, hidden by the mask. This reduction could impact on lower-face linguistic components perception, attitude attribution, classification of emotions and prosody of a conversation, driving the observed effects on metacognitive judgments but leaving sign language comprehension substantially unchanged, even if with a higher effort. These results represent a novel step towards better understanding what drives metacognitive effects of face masks while communicating face to face and highlight the importance of including the metacognitive dimension in human communication research.


Subject(s)
Metacognition , Humans , Comprehension , Masks , Speech , Auditory Perception
3.
Eur Arch Otorhinolaryngol ; 280(8): 3661-3672, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36905419

ABSTRACT

BACKGROUND AND PURPOSE: Use of a unilateral cochlear implant (UCI) is associated with limited spatial hearing skills. Evidence that training these abilities in UCI users is possible remains limited. In this study, we assessed whether a Spatial training based on hand-reaching to sounds performed in virtual reality improves spatial hearing abilities in UCI users. METHODS: Using a crossover randomized clinical trial, we compared the effects of a Spatial training protocol with those of a Non-Spatial control training. We tested 17 UCI users in a head-pointing to sound task and in an audio-visual attention orienting task, before and after each training.
The study is registered on clinicaltrials.gov (NCT04183348). RESULTS: During the Spatial VR training, sound localization errors in azimuth decreased. Moreover, when comparing head-pointing to sounds before vs. after training, localization errors decreased more after the Spatial training than after the control training. No training effects emerged in the audio-visual attention orienting task. CONCLUSIONS: Our results showed that sound localization in UCI users improves during a Spatial training, with benefits that extend to a non-trained sound localization task (generalization). These findings have potential for novel rehabilitation procedures in clinical contexts.


Subject(s)
Cochlear Implantation , Cochlear Implants , Sound Localization , Speech Perception , Humans , Hearing , Cochlear Implantation/methods , Hearing Tests/methods
4.
Exp Brain Res ; 240(3): 813-824, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35048159

ABSTRACT

In noisy contexts, sound discrimination improves when the auditory sources are separated in space. This phenomenon, named Spatial Release from Masking (SRM), arises from the interaction between the auditory information reaching the ear and spatial attention resources. To examine the relative contribution of these two factors, we exploited an audio-visual illusion in a hearing-in-noise task to create conditions in which the initial stimulation to the ears is held constant, while the perceived separation between speech and masker is changed illusorily (visual capture of sound). In two experiments, we asked participants to identify a string of five digits pronounced by a female voice, embedded in either energetic (Experiment 1) or informational (Experiment 2) noise, before reporting the perceived location of the heard digits. Critically, the distance between target digits and masking noise was manipulated both physically (from 22.5 to 75.0 degrees) and illusorily, by pairing target sounds with visual stimuli either at same (audio-visual congruent) or different positions (15 degrees offset, leftward or rightward: audio-visual incongruent). The proportion of correctly reported digits increased with the physical separation between the target and masker, as expected from SRM. However, despite effective visual capture of sounds, performance was not modulated by illusory changes of target sound position. Our results are compatible with a limited role of central factors in the SRM phenomenon, at least in our experimental setting. Moreover, they add to the controversial literature on the limited effects of audio-visual capture in auditory stream separation.


Subject(s)
Perceptual Masking , Speech Perception , Acoustic Stimulation , Female , Hearing , Humans , Noise , Speech
5.
Ear Hear ; 43(1): 192-205, 2022.
Article in English | MEDLINE | ID: mdl-34225320

ABSTRACT

OBJECTIVES: The aim of this study was to assess three-dimensional (3D) spatial hearing abilities in reaching space of children and adolescents fitted with bilateral cochlear implants (BCI). The study also investigated the impact of spontaneous head movements on sound localization abilities. DESIGN: BCI children (N = 18, aged between 8 and 17) and age-matched normal-hearing (NH) controls (N = 18) took part in the study. Tests were performed using immersive virtual reality equipment that allowed control over visual information and initial eye position, as well as real-time 3D motion tracking of head and hand position with subcentimeter accuracy. The experiment exploited these technical features to achieve trial-by-trial exact positioning in head-centered coordinates of a single loudspeaker used for real, near-field sound delivery, which was reproducible across trials and participants. Using this novel approach, broadband sounds were delivered at different azimuths within the participants' arm length, in front and back space, at two different distances from their heads. Continuous head-monitoring allowed us to compare two listening conditions: "head immobile" (no head movements allowed) and "head moving" (spontaneous head movements allowed). Sound localization performance was assessed by computing the mean 3D error (i.e. the difference in space between the X-Y-Z position of the loudspeaker and the participant's final hand position used to indicate the localization of the sound's source), as well as the percentage of front-back and left-right confusions in azimuth, and the discriminability between two nearby distances. Several clinical factors (i.e. age at test, interimplant interval, and duration of binaural experience) were also correlated with the mean 3D error. Finally, the Speech Spatial and Qualities of Hearing Scale was administered to BCI participants and their parents. 
RESULTS: Although BCI participants distinguished well between left and right sound sources, near-field spatial hearing remained challenging, particularly under the "head immobile" condition. Without visual priors of the sound position, response accuracy was lower than that of their NH peers, as evidenced by the mean 3D error (BCI: 55 cm, NH: 24 cm, p = 0.008). The BCI group mainly pointed along the interaural axis, corresponding to the position of their CI microphones. This led to substantial front-back confusions (44.6%). Distance discrimination also remained challenging for BCI users, mostly due to the sound compression applied by their processors. Notably, BCI users benefitted from head movements under the "head moving" condition, with a significant decrease of the 3D error when pointing to front targets (p < 0.001). Interimplant interval was correlated with the 3D error (p < 0.001), whereas no correlation with self-assessment of spatial hearing difficulties emerged (p = 0.9). CONCLUSIONS: In reaching space, BCI children and adolescents are able to extract enough auditory cues to discriminate sound side. However, without visual cues or spontaneous head movements during sound emission, their localization abilities are substantially impaired for front-back and distance discrimination. Exploring the environment with head movements was a valuable strategy for improving sound localization in individuals with different clinical backgrounds. These novel findings could prompt new perspectives to better understand sound localization maturation in BCI children, and more broadly in patients with hearing loss.
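As defined in the abstract above, the mean 3D error is the distance in space between the loudspeaker's X-Y-Z position and the participant's final hand position, averaged over trials. A minimal sketch of that computation, assuming a Euclidean-distance reading of "difference in space" (the function name and array layout are illustrative, not from the study):

```python
import numpy as np

def mean_3d_error(speaker_xyz, hand_xyz):
    """Mean Euclidean distance between the loudspeaker's X-Y-Z position and
    the hand position used to indicate the perceived sound source.

    Both inputs are (n_trials, 3) arrays of coordinates in the same units
    (e.g. cm, as reported for the BCI vs. NH group comparison)."""
    diffs = np.asarray(speaker_xyz, dtype=float) - np.asarray(hand_xyz, dtype=float)
    return float(np.mean(np.linalg.norm(diffs, axis=1)))
```

For example, a trial in which the hand lands 3 cm off in X and 4 cm off in Y contributes a 5 cm error to the average.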


Subject(s)
Cochlear Implantation , Cochlear Implants , Hearing Loss , Sound Localization , Speech Perception , Adolescent , Child , Cochlear Implantation/methods , Head Movements , Hearing , Humans
6.
Int J Audiol ; 61(7): 561-573, 2022 07.
Article in English | MEDLINE | ID: mdl-34634214

ABSTRACT

OBJECTIVE: The aim of this study was to assess to what extent simultaneously obtained measures of listening effort (task-evoked pupil dilation, verbal response time [RT], and self-rating) could be sensitive to auditory and cognitive manipulations in a speech perception task. The study also aimed to explore the possible relationship between RT and pupil dilation. DESIGN: A within-group design was adopted. All participants were administered the Matrix Sentence Test in 12 conditions (signal-to-noise ratios [SNR] of -3, -6, -9 dB; attentional resources focused vs divided; spatial priors present vs absent). STUDY SAMPLE: Twenty-four normal-hearing adults, 20-41 years old (M = 23.5), were recruited for the study. RESULTS: A significant effect of the SNR was found for all measures. However, pupil dilation discriminated only partially between the SNRs. Neither of the cognitive manipulations was effective in modulating the measures. No relationship emerged between pupil dilation, RT, and self-ratings. CONCLUSIONS: RT, pupil dilation, and self-ratings can be obtained simultaneously when administering speech perception tasks, even though some limitations remain related to the absence of a retention period after the listening phase. The sensitivity of the three measures to changes in the auditory environment differs. RTs and self-ratings proved most sensitive to changes in SNR.


Subject(s)
Pupil , Speech Perception , Adult , Auditory Perception , Humans , Listening Effort , Pupil/physiology , Reaction Time , Speech Perception/physiology , Young Adult
7.
Exp Brain Res ; 238(3): 727-739, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32080750

ABSTRACT

When asked to identify the position of a sound, listeners can report its perceived location as well as their subjective certainty about this spatial judgement. Yet, research to date focused primarily on measures of perceived location (e.g., accuracy and precision of pointing responses), neglecting instead the phenomenological experience of subjective spatial certainty. The present study aimed to investigate: (1) changes in subjective certainty about sound position induced by listening with one ear plugged (simulated monaural listening), compared to typical binaural listening and (2) the relation between subjective certainty about sound position and localisation accuracy. In two experiments (N = 20 each), participants localised single sounds delivered from one of 60 speakers hidden from view in front space. In each trial, they also provided a subjective rating of their spatial certainty about sound position. No feedback on response was provided. Overall, participants were mostly accurate and certain about sound position in binaural listening, whereas their accuracy and subjective certainty decreased in monaural listening. Interestingly, accuracy and certainty dissociated within single trials during monaural listening: in some trials participants were certain but incorrect, in others they were uncertain but correct. Furthermore, unlike accuracy, subjective certainty rapidly increased as a function of time during the monaural listening block. Finally, subjective certainty changed as a function of perceived location of the sound source. These novel findings reveal that listeners quickly update their subjective confidence on sound position, when they experience an altered listening condition, even in the absence of feedback. Furthermore, they document a dissociation between accuracy and subjective certainty when mapping auditory input to space.


Subject(s)
Attention/physiology , Auditory Pathways/physiology , Sound Localization/physiology , Adult , Dichotic Listening Tests/methods , Dominance, Cerebral/physiology , Female , Humans , Male , Young Adult
8.
Psychol Res ; 84(4): 932-949, 2020 Jun.
Article in English | MEDLINE | ID: mdl-30467818

ABSTRACT

The self-serving bias is the tendency to consider oneself in unrealistically positive terms. This phenomenon has been documented for body attractiveness, but it remains unclear to what extent it can also emerge for perception of one's own body size. In the present study, we examined this issue in healthy young adults (45 females and 40 males), using two body size estimation (BSE) measures and taking into account inter-individual differences in eating disorder risk. Participants observed pictures of avatars, built from whole-body photos of themselves or an unknown other matched for gender. Avatars were parametrically distorted along the thinness-heaviness dimension, and individualised by adding the head of the self or the other. In the first BSE task, participants indicated in each trial whether the seen avatar was thinner or fatter than themselves (or the other). In the second BSE task, participants chose the best representative body size for self and other from a set of avatars. Greater underestimation for self than other body size emerged in both tasks, comparably for women and men. Thinner bodies were also judged as more attractive, in line with standards of beauty in modern Western society. Notably, this self-serving bias in BSE was stronger in people with low eating disorder risk. In sum, positive attitudes towards the self can extend to body size estimation in young adults, bringing one's own body size closer to the ideal body. We propose that this bias could play an adaptive role in preserving a positive body image.


Subject(s)
Body Image/psychology , Self Concept , Size Perception , Thinness , Beauty , Bias , Female , Humans , Male , Photic Stimulation , Young Adult
9.
Proc Natl Acad Sci U S A ; 114(31): E6437-E6446, 2017 08 01.
Article in English | MEDLINE | ID: mdl-28652333

ABSTRACT

Brain systems supporting face and voice processing both contribute to the extraction of important information for social interaction (e.g., person identity). How does the brain reorganize when one of these channels is absent? Here, we explore this question by combining behavioral and multimodal neuroimaging measures (magneto-encephalography and functional imaging) in a group of early deaf humans. We show enhanced selective neural response for faces and for individual face coding in a specific region of the auditory cortex that is typically specialized for voice perception in hearing individuals. In this region, selectivity to face signals emerges early in the visual processing hierarchy, shortly after typical face-selective responses in the ventral visual pathway. Functional and effective connectivity analyses suggest reorganization in long-range connections from early visual areas to the face-selective temporal area in individuals with early and profound deafness. Altogether, these observations demonstrate that regions that typically specialize for voice processing in the hearing brain preferentially reorganize for face processing in born-deaf people. Our results support the idea that cross-modal plasticity in the case of early sensory deprivation relates to the original functional specialization of the reorganized brain regions.


Subject(s)
Auditory Cortex/physiology , Deafness/physiopathology , Facial Recognition/physiology , Neuronal Plasticity/physiology , Visual Pathways/physiology , Acoustic Stimulation , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Neuroimaging/methods , Photic Stimulation , Sensory Deprivation/physiology , Visual Perception/physiology
10.
J Cogn Neurosci ; 31(8): 1141-1154, 2019 08.
Article in English | MEDLINE | ID: mdl-30321094

ABSTRACT

Peripersonal space is a multisensory representation relying on the processing of tactile and visual stimuli presented on and close to different body parts. The most studied peripersonal space representation is perihand space (PHS), a highly plastic representation modulated following tool use and by the rapid approach of visual objects. Given these properties, PHS may serve different sensorimotor functions, including guidance of voluntary actions such as object grasping. Strong support for this hypothesis would derive from evidence that PHS plastic changes occur before the upcoming movement rather than after its initiation, yet to date, such evidence is scant. Here, we tested whether action-dependent modulation of PHS, behaviorally assessed via visuotactile perception, may occur before an overt movement as early as the action planning phase. To do so, we probed tactile and visuotactile perception at different time points before and during the grasping action. Results showed that visuotactile perception was more strongly affected during the planning phase (250 msec after vision of the target) than during a similarly static but earlier phase (50 msec after vision of the target). Visuotactile interaction was also enhanced at the onset of hand movement, and it further increased during subsequent phases of hand movement. Such a visuotactile interaction featured interference effects during all phases from action planning onward as well as a facilitation effect at the movement onset. These findings reveal that planning to grab an object strengthens the multisensory interaction of visual information from the target and somatosensory information from the hand. Such early updating of the visuotactile interaction reflects multisensory processes supporting motor planning of actions.


Subject(s)
Personal Space , Psychomotor Performance/physiology , Touch Perception/physiology , Visual Perception/physiology , Adult , Female , Humans , Male , Young Adult
11.
Child Dev ; 90(5): 1525-1534, 2019 09.
Article in English | MEDLINE | ID: mdl-31301066

ABSTRACT

The susceptibility to gaze cueing in deaf children aged 7-14 years (N = 16) was tested using a nonlinguistic task. Participants performed a peripheral shape-discrimination task while uninformative central gaze cues validly or invalidly cued the location of the target. To assess the role of sign language experience and bilingualism in deaf participants, three groups of age-matched hearing children were recruited: bimodal bilinguals (vocal and sign language, N = 19), unimodal bilinguals (two vocal languages, N = 17), and monolinguals (N = 14). Although all groups showed a gaze-cueing effect and were faster to respond to validly than invalidly cued targets, this effect was twice as large in deaf participants. This result shows that atypical sensory experience can tune the saliency of a fundamental social cue.


Subject(s)
Cues , Fixation, Ocular , Multilingualism , Persons With Hearing Impairments , Adolescent , Child , Female , Fixation, Ocular/physiology , Humans , Male , Orientation, Spatial/physiology , Reaction Time , Sign Language
12.
Exp Brain Res ; 235(1): 181-191, 2017 01.
Article in English | MEDLINE | ID: mdl-27683004

ABSTRACT

The aim of the present study was to investigate the impact of dynamic distractors on the time-course of oculomotor selection using saccade trajectory deviations. Participants were instructed to make a speeded eye movement (pro-saccade) to a target presented above or below the fixation point while an irrelevant distractor was presented. Four types of distractors were varied within participants: (1) static, (2) flicker, (3) rotating apparent motion and (4) continuous motion. The eccentricity of the distractor was varied between participants. The results showed that saccadic trajectories curved towards distractors presented near the vertical midline; no reliable deviation was found for distractors presented further away from the vertical midline. Differences between the flickering and rotating distractor were found when distractor eccentricity was small and these specific effects developed over time such that there was a clear differentiation between saccadic deviation based on apparent motion for long-latency saccades, but not short-latency saccades. The present results suggest that the influence on performance of apparent motion stimuli is relatively delayed and acts in a more sustained manner compared to the influence of salient static, flickering and continuous moving stimuli.


Subject(s)
Attention/physiology , Eye Movements/physiology , Flicker Fusion/physiology , Motion Perception/physiology , Orientation/physiology , Reaction Time/physiology , Analysis of Variance , Female , Humans , Male , Photic Stimulation , Students , Time Factors , Universities
13.
Brain Topogr ; 30(1): 122-135, 2017 01.
Article in English | MEDLINE | ID: mdl-27620801

ABSTRACT

Rubber hand illusion (RHI) is an important phenomenon for the investigation of body ownership and self/other distinction. The illusion is promoted by the spatial and temporal contingencies of visual inputs near a fake hand and physical touches to the real hand. The neural basis of this phenomenon is not fully understood. We hypothesized that the RHI is associated with a fronto-parietal circuit, and the goal of this study was to determine the dynamics of neural oscillation associated with this phenomenon. We measured electroencephalography while delivering spatially congruent/incongruent visuo-tactile stimulations to fake and real hands. We applied time-frequency analyses and calculated renormalized partial directed coherence (rPDC) to examine cortical dynamics during the bodily illusion. When visuo-tactile stimulation was spatially congruent, and the fake and real hands were aligned, we observed a reduced causal relationship from the medial frontal to the parietal regions with respect to baseline, around 200 ms post-stimulus. This change in rPDC was negatively correlated with a subjective report of the RHI intensity. Moreover, we observed a link between the proprioceptive drift and an increased causal relationship from the parietal cortex to the right somatosensory cortex during a relatively late period (550-750 ms post-stimulus). These findings suggest a two-stage process in which (1) reduced influence from the medial frontal regions over the parietal areas unlocks the mechanisms that preserve body integrity, allowing RHI to emerge; and (2) information processed at the parietal cortex is back-projected to the somatosensory cortex contralateral to the real hand, inducing proprioceptive drift.


Subject(s)
Illusions/physiology , Parietal Lobe/physiology , Proprioception/physiology , Touch Perception/physiology , Visual Perception/physiology , Adult , Body Image , Electroencephalography , Female , Humans , Male , Scalp/physiology , Young Adult
14.
Brain Cogn ; 111: 25-33, 2017 02.
Article in English | MEDLINE | ID: mdl-27816777

ABSTRACT

Localizing tactile stimuli on our body requires sensory information to be represented in multiple frames of reference along the sensory pathways. These reference frames include the representation of sensory information in skin coordinates, in which the spatial relationship of skin regions is maintained. The organization of the primary somatosensory cortex matches such a somatotopic reference frame. In contrast, higher-order representations are based on external coordinates, in which body posture and gaze direction are taken into account in order to localise touch in other meaningful ways according to task demands. Dominance of one representation or the other, or the use of multiple representations with different weights, is thought to depend on contextual factors of cognitive and/or sensory origins. However, it is unclear under which situations one reference frame takes over from another or when different reference frames are jointly used at the same time. The study of tactile mislocalizations at the fingers has shown a key role of the somatotopic frame of reference, both when touches are delivered unilaterally to a single hand, and when they are delivered bilaterally to both hands. Here, we took advantage of a well-established tactile mislocalization paradigm to investigate whether the reference frame used to integrate bilateral tactile stimuli can change as a function of the spatial relationship between the two hands. Specifically, supra-threshold interference stimuli were applied to the index or little fingers of the left hand 200 ms prior to the application of a test stimulus on a finger of the right hand. Crucially, different hand postures were adopted (uncrossed or crossed). Results show that introducing a change in hand posture triggered the concurrent use of somatotopic and external reference frames when processing bilateral touch at the fingers.
This demonstrates that both somatotopic and external reference frames can be concurrently used to localise tactile stimuli on the fingers.


Subject(s)
Hand/physiology , Posture/physiology , Space Perception/physiology , Touch Perception/physiology , Adult , Female , Fingers/physiology , Humans , Male , Young Adult
15.
J Deaf Stud Deaf Educ ; 22(4): 422-433, 2017 Oct 01.
Article in English | MEDLINE | ID: mdl-28961871

ABSTRACT

Multisensory interactions in deaf cognition are largely unexplored. Unisensory studies suggest that behavioral/neural changes may be more prominent for visual compared to tactile processing in early deaf adults. Here we test whether such an asymmetry results in increased saliency of vision over touch during visuo-tactile interactions. Twenty-three early deaf and 25 hearing adults performed two consecutive visuo-tactile spatial interference tasks. Participants responded either to the elevation of the tactile target while ignoring a concurrent visual distractor at central or peripheral locations (respond to touch/ignore vision), or they performed the opposite task (respond to vision/ignore touch). Multisensory spatial interference emerged in both tasks for both groups. Crucially, deaf participants showed increased interference compared to hearing adults when they attempted to respond to tactile targets and ignore visual distractors, with enhanced difficulties with ipsilateral visual distractors. Analyses of task order revealed that in deaf adults, interference of visual distractors on tactile targets was much stronger when this task followed the task in which vision was behaviorally relevant (respond to vision/ignore touch). These novel results suggest that behavioral/neural changes related to early deafness determine enhanced visual dominance during visuo-tactile multisensory conflict.


Subject(s)
Deafness/psychology , Adult , Case-Control Studies , Female , Humans , Male , Middle Aged , Photic Stimulation , Reaction Time , Spatial Processing , Touch Perception , Visual Perception , Young Adult
16.
Cogn Neuropsychol ; 33(1-2): 48-66, 2016.
Article in English | MEDLINE | ID: mdl-27314449

ABSTRACT

According to current textbook knowledge, the primary somatosensory cortex (SI) supports unilateral tactile representations, whereas structures beyond SI, in particular the secondary somatosensory cortex (SII), support bilateral tactile representations. However, dexterous and well-coordinated bimanual motor tasks require early integration of bilateral tactile information. Sequential processing, first of unilateral and subsequently of bilateral sensory information, might not be sufficient to accomplish these tasks. This view of sequential processing in the somatosensory system might therefore be questioned, at least for demanding bimanual tasks. Evidence from the last 15 years is forcing a revision of this textbook notion. Studies in animals and humans indicate that SI is more than a simple relay for unilateral sensory information and, together with SII, contributes to the integration of somatosensory inputs from both sides of the body. Here, we review a series of recent works from our own and other laboratories in favour of interactions between tactile stimuli on the two sides of the body at early stages of processing. We focus on tactile processing, although a similar logic may also apply to other aspects of somatosensation. We begin by describing the basic anatomy and physiology of interhemispheric transfer, drawing on neurophysiological studies in animals and behavioural studies in humans that showed tactile interactions between body sides, both in healthy and in brain-damaged individuals. Then we describe the neural substrates of bilateral interactions in somatosensation as revealed by neurophysiological work in animals and neuroimaging studies in humans (i.e., functional magnetic resonance imaging, magnetoencephalography, and transcranial magnetic stimulation). 
Finally, we conclude with considerations on the dilemma of how efficient integration of bilateral sensory information at early processing stages can coexist with more lateralized representations of somatosensory input, in the context of motor control.


Subject(s)
Somatosensory Cortex/physiology , Touch/physiology , Adult , Animals , Female , Humans , Male
17.
Eur J Neurosci ; 41(11): 1459-65, 2015 May.
Article in English | MEDLINE | ID: mdl-25879687

ABSTRACT

Moving and interacting with the world requires that the sensory and motor systems share information, but while some information about tactile events is preserved during sensorimotor transfer, the spatial specificity of this information is unknown. Afferent inhibition (AI) studies, in which corticospinal excitability (CSE) is inhibited when a single tactile stimulus is presented before a transcranial magnetic stimulation pulse over the motor cortex, offer contradictory results regarding the sensory-to-motor transfer of spatial information. Here, we combined the techniques of AI and tactile repetition suppression (the decreased neurophysiological response following double stimulation of the same vs. different fingers) to investigate whether topographic information is preserved in the sensory-to-motor transfer in humans. We developed a double AI paradigm to examine both spatial (same vs. different finger) and temporal (short vs. long delay) aspects of sensorimotor interactions. Two consecutive electrocutaneous stimuli (separated by either 30 or 125 ms) were delivered to either the same or different fingers on the left hand (i.e., index finger stimulated twice, or middle finger stimulated before index finger). Information about which fingers were stimulated was reflected in the size of the motor responses in a time-constrained manner: CSE was modulated differently by same- and different-finger stimulation only when the two stimuli were separated by the short delay (P = 0.004). We demonstrate that the well-known response of the somatosensory cortices following repetitive stimulation is mirrored in the motor cortex and that CSE is modulated as a function of the temporal and spatial relationship between afferent stimuli.


Subject(s)
Afferent Pathways/physiology , Motor Cortex/physiology , Pyramidal Tracts/physiology , Somatosensory Cortex/physiology , Adult , Electric Stimulation , Electromyography , Evoked Potentials, Motor , Female , Fingers/innervation , Fingers/physiology , Humans , Male , Transcranial Magnetic Stimulation , Young Adult
18.
Hum Brain Mapp ; 36(4): 1506-23, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25514844

ABSTRACT

Animal studies, as well as behavioural and neuroimaging studies in humans, have documented integration of bilateral tactile information at the level of the primary somatosensory cortex (SI). However, it is still debated whether integration in SI occurs early or late during tactile processing, and whether it is somatotopically organized. To address both the spatial and temporal aspects of bilateral tactile processing, we used magnetoencephalography in a tactile repetition-suppression paradigm. We examined somatosensory evoked responses produced by probe stimuli preceded by an adaptor, as a function of the relative position of adaptor and probe (probe always at the left index finger; adaptor at the index or middle finger of the left or right hand) and as a function of the delay between adaptor and probe (0, 25, or 125 ms). Percentage of response-amplitude suppression was computed by comparing paired (adaptor + probe) with single stimulations of adaptor and probe. Results show that response suppression varies differentially in SI and SII as a function of both the spatial and temporal features of the stimuli. Remarkably, repetition suppression of SI activity emerged early in time, regardless of whether the adaptor stimulus was presented on the same or the opposite body side with respect to the probe. These novel findings support the notion of an early and somatotopically organized inter-hemispheric integration of tactile information in SI.


Subject(s)
Fingers/physiology , Functional Laterality/physiology , Somatosensory Cortex/physiology , Touch Perception/physiology , Adult , Evoked Potentials, Somatosensory , Female , Humans , Magnetoencephalography , Male , Physical Stimulation/methods , Time Factors
19.
Brain Cogn ; 96: 12-27, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25829265

ABSTRACT

Previous work investigating the consequences of bilateral deafness on attentional selection suggests that experience-dependent changes in this population may result in increased automatic processing of stimulus-driven visual information (e.g., saliency). However, adaptive behavior also requires observers to prioritize goal-driven information relevant to the task at hand. In order to investigate whether auditory deprivation alters the balance between these two components of attentional selection, we assessed the time course of overt visual selection in deaf adults. Twenty early-deaf adults and twenty hearing controls performed an oculomotor additional-singleton paradigm. Participants made a speeded eye movement to a unique orientation target, embedded among homogeneous non-targets and one additional unique orientation distractor that was more, equally, or less salient than the target. Saliency was manipulated through color. For deaf participants, proficiency in sign language was assessed. Overall, results showed that fast-initiated saccades were saliency-driven, whereas later-initiated saccades were goal-driven. However, deaf participants were overall slower than hearing controls at initiating saccades and were also less captured by task-irrelevant salient distractors. The delayed oculomotor behavior of deaf adults was not explained by any of the linguistic measures acquired. Importantly, a multinomial model applied to the data revealed a comparable evolution over time of the underlying saliency- and goal-driven processes between the two groups, confirming the crucial role of saccadic latencies in determining the outcome of visual selection performance. The present findings indicate that prioritization of saliency-driven information is not an unavoidable phenomenon in deafness. Possible neural correlates of the documented behavioral effect are also discussed.


Subject(s)
Attention/physiology , Deafness/physiopathology , Eye Movements/physiology , Pattern Recognition, Visual/physiology , Psychomotor Performance/physiology , Adult , Female , Goals , Humans , Male , Models, Psychological , Saccades/physiology , Young Adult
20.
J Deaf Stud Deaf Educ ; 20(2): 163-71, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25583708

ABSTRACT

How are lexical representations retrieved during sign production? As in spoken languages, lexical representations in sign language must be accessed through semantics when naming pictures. However, it remains an open issue whether lexical representations in sign language can be accessed via routes that bypass semantics when retrieval is elicited by written words. Here we address this issue by exploring under which circumstances sign retrieval is sensitive to semantic context. To this end, we replicate in sign language production the cumulative semantic cost: the observation that naming latencies increase monotonically with each additional within-category item that is named in a sequence of pictures. In the experiment reported here, deaf participants signed sequences of pictures or sequences of Italian written words using Italian Sign Language. The results showed a cumulative semantic cost in picture naming but, strikingly, not in word naming. This suggests that only picture naming required access to semantics, whereas deaf signers accessed the sign language lexicon directly (i.e., bypassing semantics) when naming written words. The implications of these findings for the architecture of the sign production system are discussed in the context of current models of lexical access in spoken language production.


Subject(s)
Deafness/psychology , Semantics , Sign Language , Adolescent , Adult , Humans , Pattern Recognition, Visual/physiology , Reaction Time , Reading , Visual Perception/physiology , Vocabulary , Young Adult