Results 1 - 20 of 16,722
1.
PLoS One ; 19(7): e0299784, 2024.
Article in English | MEDLINE | ID: mdl-38950011

ABSTRACT

Observers can discriminate between correct and incorrect perceptual decisions via feelings of confidence. The build-up rate of the centro-parietal positivity (CPP slope) has been suggested as a likely neural signature of accumulated evidence, which may guide both perceptual performance and confidence. However, CPP slope also covaries with reaction time, which in previous studies has itself covaried with confidence, and performance and confidence typically covary; thus, CPP slope may index perceptual performance rather than confidence per se. Moreover, perceptual metacognition, including its neural correlates, has largely been studied in vision, with few exceptions; we therefore lack an understanding of domain-general neural signatures of perceptual metacognition outside vision. Here we designed a novel auditory pitch identification task and collected behavior with simultaneous 32-channel EEG in healthy adults. On each trial, participants saw two tone labels that varied in tonal distance (e.g., C vs D, C vs F), then heard a single auditory tone; they identified which label was correct and rated their confidence. We found that pitch identification confidence varied with tonal distance, but performance, metacognitive sensitivity (trial-by-trial covariation of confidence with accuracy), and reaction time did not. Interestingly, while CPP slope covaried with performance and reaction time, it did not significantly covary with confidence. We interpret these results to mean that, in auditory tasks, CPP slope is likely a signature of first-order perceptual processing rather than of confidence-specific signals or computations. Our novel pitch identification task offers a valuable method for examining the neural correlates of auditory and domain-general perceptual confidence.
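
As a rough sketch of the kind of measure at issue here, and not the authors' pipeline: a CPP build-up rate is typically estimated by fitting a line to a response-locked ERP over a pre-response window. The window, units, and array shapes below are illustrative assumptions.

```python
# Hypothetical sketch: estimate CPP slope as the linear trend of a
# response-locked ERP (averaged over centro-parietal channels) in a
# pre-response window; window limits are assumptions, not the paper's.
import numpy as np

def cpp_slope(erp, times, t_start=-0.250, t_end=-0.050):
    """erp: 1-D response-locked ERP; times: matching time axis in seconds."""
    mask = (times >= t_start) & (times <= t_end)
    slope, _intercept = np.polyfit(times[mask], erp[mask], 1)
    return slope  # build-up rate in (ERP units)/s

# Toy demo: a ramp plus noise yields a positive build-up rate
times = np.linspace(-0.6, 0.1, 701)
erp = np.maximum(0.0, times + 0.4) * 5e-6 + np.random.randn(701) * 1e-7
print(cpp_slope(erp, times))
```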


Subject(s)
Electroencephalography , Pitch Perception , Reaction Time , Humans , Male , Female , Adult , Reaction Time/physiology , Young Adult , Pitch Perception/physiology , Acoustic Stimulation , Metacognition/physiology , Auditory Perception/physiology
2.
Proc Natl Acad Sci U S A ; 121(26): e2318361121, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38889147

ABSTRACT

When listeners hear a voice, they rapidly form a complex first impression of who the person behind that voice might be. We characterize how these multivariate first impressions from voices emerge over time, across different levels of abstraction, using electroencephalography and representational similarity analysis. We find that for eight perceived characteristics, physical (gender, age, and health), trait (attractiveness, dominance, and trustworthiness), and social (educatedness and professionalism), representations emerge early (~80 ms after stimulus onset), with voice acoustics contributing to those representations between ~100 ms and 400 ms. While impressions of person characteristics are highly correlated, we find evidence for highly abstracted, independent representations of individual person characteristics. These abstracted representations emerge gradually over time: representations of physical characteristics (age, gender) arise early (from ~120 ms), while representations of some trait and social characteristics emerge later (~360 ms onward). The findings align with recent theoretical models and shed light on the computations underpinning person perception from voices.
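
As a minimal sketch of the general method named here (representational similarity analysis), assuming nothing about the authors' preprocessing: at each time point, pairwise dissimilarities between condition-wise EEG patterns form a neural RDM, which is then correlated with a model RDM for a perceived characteristic.

```python
# Hypothetical time-resolved RSA sketch; shapes and the correlation-distance
# metric are assumptions, not the published pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def time_resolved_rsa(eeg, model_rdm):
    """eeg: (conditions, channels, times); model_rdm: condensed (n_pairs,) vector."""
    n_times = eeg.shape[2]
    rsa = np.empty(n_times)
    for t in range(n_times):
        neural_rdm = pdist(eeg[:, :, t], metric="correlation")  # pairwise dissimilarity
        rho, _p = spearmanr(neural_rdm, model_rdm)
        rsa[t] = rho
    return rsa  # model-brain correspondence at each time point
```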


Subject(s)
Auditory Perception , Brain , Electroencephalography , Voice , Humans , Male , Female , Voice/physiology , Adult , Brain/physiology , Auditory Perception/physiology , Young Adult , Social Perception
3.
Sci Rep ; 14(1): 14895, 2024 06 28.
Article in English | MEDLINE | ID: mdl-38942761

ABSTRACT

Older adults (OAs) are typically slower and/or less accurate in forming perceptual choices relative to younger adults. Despite perceptual deficits, OAs gain from integrating information across senses, yielding multisensory benefits. However, the cognitive processes underlying these seemingly discrepant ageing effects remain unclear. To address this knowledge gap, 212 participants (18-90 years old) performed an online object categorisation paradigm, whereby age-related differences in Reaction Times (RTs) and choice accuracy between audiovisual (AV), visual (V), and auditory (A) conditions could be assessed. Whereas OAs were slower and less accurate across sensory conditions, they exhibited greater RT decreases between AV and V conditions, showing a larger multisensory benefit towards decisional speed. Hierarchical Drift Diffusion Modelling (HDDM) was fitted to participants' behaviour to probe age-related impacts on the latent multisensory decision formation processes. For OAs, HDDM demonstrated slower evidence accumulation rates across sensory conditions coupled with increased response caution for AV trials of higher difficulty. Notably, for trials of lower difficulty we found multisensory benefits in evidence accumulation that increased with age, but not for trials of higher difficulty, in which increased response caution was instead evident. Together, our findings reconcile age-related impacts on multisensory decision-making, indicating greater multisensory evidence accumulation benefits with age underlying enhanced decisional speed.
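
For readers unfamiliar with the model class, a minimal simulation of the drift-diffusion process that HDDM fits is sketched below, with illustrative (not fitted) parameters: noisy evidence accumulates at drift rate v until it reaches a boundary, and higher v yields faster, more accurate choices.

```python
# Hypothetical DDM simulation; v (drift), a (boundary), t0 (non-decision
# time) are illustrative values, not estimates from this study.
import numpy as np

def simulate_ddm(v=1.0, a=2.0, t0=0.3, dt=0.001, noise=1.0, rng=None):
    rng = rng or np.random.default_rng()
    x, t = a / 2.0, 0.0                   # unbiased start between 0 and a
    while 0.0 < x < a:
        x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t + t0, x >= a                 # (reaction time, upper-boundary choice)

rts = [simulate_ddm(v=1.5)[0] for _ in range(1000)]
print(np.mean(rts))  # faster accumulation (e.g., multisensory benefit) -> shorter RTs
```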


Subject(s)
Aging , Auditory Perception , Decision Making , Reaction Time , Visual Perception , Humans , Aged , Adult , Middle Aged , Female , Male , Aged, 80 and over , Decision Making/physiology , Adolescent , Reaction Time/physiology , Young Adult , Auditory Perception/physiology , Aging/physiology , Aging/psychology , Visual Perception/physiology , Photic Stimulation , Acoustic Stimulation
4.
Behav Brain Funct ; 20(1): 17, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38943215

ABSTRACT

BACKGROUND: Left-handedness is a condition that reverses the typical left cerebral dominance of motor control to an atypical right dominance. The impact of this distinct control, and of its associated neuroanatomical peculiarities, on other cognitive functions such as music processing or playing a musical instrument remains unexplored. Previous studies in right-handed populations have linked musicianship to larger volumes of the (right) auditory cortex and of the (right) arcuate fasciculus (AF). RESULTS: In our study, we reveal that left-handed musicians (n = 55), in comparison to left-handed non-musicians (n = 75), exhibit a larger gray matter volume in both the left and right Heschl's gyrus, critical for auditory processing. They also present a higher number of streamlines across the anterior segment of the right AF. Importantly, atypical hemispheric lateralization of speech (notably prevalent among left-handers) was associated with a rightward asymmetry of the AF, in contrast to the leftward asymmetry exhibited by typically lateralized individuals. CONCLUSIONS: These findings suggest that left-handed musicians share similar neuroanatomical characteristics with their right-handed counterparts. However, atypical lateralization of speech might potentiate the right audiomotor pathway, which has been associated with musicianship and better musical skills. This may help explain why musicians are more prevalent among left-handers and shed light on their cognitive advantages.


Subject(s)
Functional Laterality , Music , Humans , Male , Functional Laterality/physiology , Female , Adult , Young Adult , Auditory Cortex/anatomy & histology , Auditory Cortex/physiology , Magnetic Resonance Imaging , Gray Matter/anatomy & histology , Gray Matter/diagnostic imaging , Auditory Perception/physiology , Brain/anatomy & histology , Brain/physiology
5.
Sci Rep ; 14(1): 14575, 2024 06 25.
Article in English | MEDLINE | ID: mdl-38914752

ABSTRACT

People often interact with groups (i.e., ensembles) during social interactions. Given that group-level information is important in navigating social environments, we expect perceptual sensitivity to aspects of groups that are relevant for personal threat as well as social belonging. Most ensemble perception research has focused on visual ensembles, with little research looking at auditory or vocal ensembles. Across four studies, we present evidence that (i) perceivers accurately extract the sex composition of a group from voices alone, (ii) judgments of threat increase concomitantly with the number of men, and (iii) listeners' sense of belonging depends on the number of same-sex others in the group. This work advances our understanding of social cognition, interpersonal communication, and ensemble coding to include auditory information, and reveals people's ability to extract relevant social information from brief exposures to vocalizing groups.


Subject(s)
Voice , Humans , Male , Female , Adult , Sex Ratio , Social Perception , Young Adult , Auditory Perception/physiology , Interpersonal Relations , Social Interaction
6.
Exp Brain Res ; 242(7): 1787-1795, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38822826

ABSTRACT

The vigilance decrement, a temporal decline in detection performance, has been observed across multiple sensory modalities. Spatial uncertainty about the location of task-relevant stimuli has been demonstrated to increase the demands of vigilance and increase the severity of the vigilance decrement when attending to visual displays. The current study investigated whether spatial uncertainty also increases the severity of the vigilance decrement and task demands when an auditory display is used. Individuals monitored an auditory display to detect critical signals that were shorter in duration than non-target stimuli. These auditory stimuli were presented in either a consistent, predictable pattern that alternated sound presentation from left to right (spatial certainty) or an inconsistent, unpredictable pattern that randomly presented sounds from the left or right (spatial uncertainty). Cerebral blood flow velocity (CBFV) was measured to assess the neurophysiological demands of the task. A decline in performance and CBFV was observed in both the spatially certain and spatially uncertain conditions, suggesting that spatial auditory vigilance tasks are demanding and can result in a vigilance decrement. Spatial uncertainty resulted in a more severe vigilance decrement in correct detections compared to spatial certainty. Reduced right-hemispheric CBFV was also observed during spatial uncertainty compared to spatial certainty. Together, these results suggest that auditory spatial uncertainty hindered performance and required greater attentional demands compared to spatial certainty. These results concur with previous research showing the negative impact of spatial uncertainty in visual vigilance tasks, but the current results contrast recent research showing no effect of spatial uncertainty on tactile vigilance.


Subject(s)
Auditory Perception , Cerebrovascular Circulation , Space Perception , Humans , Male , Female , Young Adult , Uncertainty , Adult , Auditory Perception/physiology , Cerebrovascular Circulation/physiology , Space Perception/physiology , Acoustic Stimulation/methods , Hemodynamics/physiology , Attention/physiology , Arousal/physiology , Psychomotor Performance/physiology
7.
Proc Natl Acad Sci U S A ; 121(25): e2405588121, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38861607

ABSTRACT

Many animals can extract useful information from the vocalizations of other species. Neuroimaging studies have identified areas sensitive to conspecific vocalizations in the cerebral cortex of primates, but how these areas process heterospecific vocalizations remains unclear. Using fMRI-guided electrophysiology, we recorded the spiking activity of individual neurons in the anterior temporal voice patches of two macaques while they listened to complex sounds including vocalizations from several species. In addition to cells selective for conspecific macaque vocalizations, we identified an unsuspected subpopulation of neurons with strong selectivity for human voice, not merely explained by the spectral or temporal structure of the sounds. The auditory representational geometry implemented by these neurons was strongly related to that measured in the human voice areas with neuroimaging and only weakly to low-level acoustical structure. These findings provide new insights into the neural mechanisms involved in auditory expertise and the evolution of communication systems in primates.


Subject(s)
Auditory Perception , Magnetic Resonance Imaging , Neurons , Vocalization, Animal , Voice , Animals , Humans , Neurons/physiology , Voice/physiology , Magnetic Resonance Imaging/methods , Vocalization, Animal/physiology , Auditory Perception/physiology , Male , Macaca mulatta , Brain/physiology , Acoustic Stimulation , Brain Mapping/methods
8.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38879756

ABSTRACT

Midbrain multisensory neurons undergo a significant postnatal transition in how they process cross-modal (e.g. visual-auditory) signals. In early stages, signals derived from common events are processed competitively; however, at later stages they are processed cooperatively such that their salience is enhanced. This transition reflects adaptation to cross-modal configurations that are consistently experienced and become informative about which correspond to common events. Tested here was the assumption that overt behaviors follow a similar maturation. Cats were reared in omnidirectional sound thereby compromising the experience needed for this developmental process. Animals were then repeatedly exposed to different configurations of visual and auditory stimuli (e.g. spatiotemporally congruent or spatially disparate) that varied on each side of space and their behavior was assessed using a detection/localization task. Animals showed enhanced performance to stimuli consistent with the experience provided: congruent stimuli elicited enhanced behaviors where spatially congruent cross-modal experience was provided, and spatially disparate stimuli elicited enhanced behaviors where spatially disparate cross-modal experience was provided. Cross-modal configurations not consistent with experience did not enhance responses. The presumptive benefit of such flexibility in the multisensory developmental process is to sensitize neural circuits (and the behaviors they control) to the features of the environment in which they will function. These experiments reveal that these processes have a high degree of flexibility, such that two (conflicting) multisensory principles can be implemented by cross-modal experience on opposite sides of space even within the same animal.


Subject(s)
Acoustic Stimulation , Auditory Perception , Brain , Photic Stimulation , Visual Perception , Animals , Cats , Auditory Perception/physiology , Visual Perception/physiology , Photic Stimulation/methods , Brain/physiology , Brain/growth & development , Male , Female , Behavior, Animal/physiology
9.
Sensors (Basel) ; 24(11)2024 May 27.
Article in English | MEDLINE | ID: mdl-38894232

ABSTRACT

Sound localization is a crucial aspect of human auditory perception. VR (virtual reality) technologies provide immersive audio platforms that allow human listeners to experience natural sounds based on their ability to localize sound. However, the sound simulations generated by these platforms, which are based on a generic head-related transfer function (HRTF), often lack accuracy in individual sound perception and localization because of significant individual differences in this function. In this study, we investigated the disparities between the locations of sound sources perceived by users and the locations generated by the platform, and asked whether users can be trained to adapt to the platform-generated sound sources. We used the Microsoft HoloLens 2 virtual platform and collected data from 12 subjects across six separate training sessions over 2 weeks. We employed three modes of training to assess their effects on sound localization, in particular the impact of multimodal error guidance (visual and sound guidance combined with kinesthetic/postural guidance) on training effectiveness. We analyzed the collected data for the training effect between pre- and post-sessions, as well as for the retention effect between two separate sessions, using subject-wise paired statistics. Our findings indicate that the training effect between pre- and post-sessions was statistically significant, in particular when kinesthetic/postural guidance was combined with visual and sound guidance; visual error guidance alone was largely ineffective. In contrast, we found no statistically significant retention effect between separate sessions for any of the three error-guidance modes over the 2 weeks of training. These findings can contribute to the improvement of VR technologies by ensuring they are designed to optimize human sound localization abilities.
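
The "subject-wise paired statistics" mentioned above can be illustrated with a paired t-test on per-subject pre- vs. post-session localization errors; the numbers below are invented placeholders, not the study's data.

```python
# Hypothetical paired comparison for 12 subjects; all values are placeholders.
import numpy as np
from scipy import stats

pre_error = np.array([18.2, 22.5, 15.1, 20.3, 17.8, 25.0,
                      19.4, 21.1, 16.6, 23.2, 18.9, 20.7])    # degrees, invented
improvement = np.array([3.1, 4.0, 0.8, 2.5, 1.9, 5.2,
                        2.2, 3.4, 1.1, 4.3, 2.8, 3.0])        # invented
post_error = pre_error - improvement

t, p = stats.ttest_rel(pre_error, post_error)  # paired across subjects
print(f"t = {t:.2f}, p = {p:.4f}")
```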


Subject(s)
Sound Localization , Humans , Sound Localization/physiology , Female , Male , Adult , Virtual Reality , Young Adult , Auditory Perception/physiology , Sound
10.
Codas ; 36(3): e20230098, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-38896744

ABSTRACT

OBJECTIVE: To describe and analyze the auditory and academic complaints of students and employees of a federal public university. METHODS: The study used a non-probabilistic sample. An adapted version of the EAPAC Scale (a self-perception scale of central auditory processing skills) was used to fulfill the research objectives; it has 14 questions about complaints related to listening skills and 12 questions related to the academic environment. Descriptive data analysis was performed through the frequency distribution of categorical variables, and Pearson's chi-square test was used for association analyses. RESULTS: 646 individuals aged 17 to 67 years participated in the research. The most prevalent complaints were academic difficulty related to memory, concentration, and planning; hearing and understanding speech in noise; and memorization of tasks that were only heard. There was a statistically significant bidirectional association between academic and auditory complaints. CONCLUSION: There is an association between auditory and academic complaints in adults, marked by the relationship between cognitive and auditory aspects. These factors should be considered when performing assessments of central auditory processing, when intervening in patients with auditory complaints, and in student life.
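
The association analysis named here (Pearson's chi-square) can be sketched on a hypothetical 2x2 table of auditory complaint by academic complaint; the counts below are invented for illustration only.

```python
# Hypothetical chi-square test of association; counts are invented.
import numpy as np
from scipy.stats import chi2_contingency

#                 academic complaint: yes   no
table = np.array([[210,  90],    # auditory complaint: yes
                  [130, 216]])   # auditory complaint: no
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4g}")
```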


Subject(s)
Auditory Perception , Humans , Adult , Adolescent , Male , Female , Young Adult , Middle Aged , Aged , Auditory Perception/physiology , Self Concept , Students , Brazil , Surveys and Questionnaires , Universities , Cross-Sectional Studies
11.
J Neural Eng ; 21(3)2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38834062

ABSTRACT

Objective. In this study, we use electroencephalography (EEG) recordings to determine whether a subject is actively listening to a presented speech stimulus. More precisely, we aim to discriminate between an active listening condition and a distractor condition in which subjects focus on an unrelated distractor task while being exposed to a speech stimulus. We refer to this task as absolute auditory attention decoding. Approach. We re-use an existing EEG dataset where the subjects watch a silent movie as a distractor condition, and introduce a new dataset with two distractor conditions (silently reading a text and performing arithmetic exercises). We focus on two EEG features, namely neural envelope tracking (NET) and spectral entropy (SE). Additionally, we investigate whether the detection of such an active listening condition can be combined with a selective auditory attention decoding (sAAD) task, where the goal is to decide to which of multiple competing speakers the subject is attending. The latter is a key task in so-called neuro-steered hearing devices that aim to suppress unattended audio while preserving the attended speaker. Main results. Contrary to a previous hypothesis that higher SE relates to active listening rather than passive listening (without any distractors), we find significantly lower SE in the active listening condition compared to the distractor conditions. Nevertheless, NET is consistently significantly higher when actively listening. Similarly, we show that the accuracy of a sAAD task improves when evaluated only on the highest-NET segments, whereas the reverse is observed when it is evaluated only on the lowest-SE segments. Significance. We conclude that NET is more reliable for decoding absolute auditory attention, as it is consistently higher when actively listening, whereas the relation of SE between active and passive listening seems to depend on the nature of the distractor.
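
Of the two EEG features named here, spectral entropy has a compact textbook definition: the Shannon entropy of the normalized power spectral density within a band (5-45 Hz in this abstract). A sketch under assumed window settings follows.

```python
# Spectral entropy sketch; Welch settings are assumptions, not the paper's.
import numpy as np
from scipy.signal import welch

def spectral_entropy(x, fs, fmin=5.0, fmax=45.0):
    f, psd = welch(x, fs=fs, nperseg=int(2 * fs))   # 2-s windows (assumption)
    band = (f >= fmin) & (f <= fmax)
    p = psd[band] / psd[band].sum()                 # normalize to probabilities
    return -np.sum(p * np.log2(p))                  # Shannon entropy in bits

rng = np.random.default_rng(0)
print(spectral_entropy(rng.standard_normal(64 * 30), fs=64.0))  # 30 s at 64 Hz
```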


Subject(s)
Attention , Electroencephalography , Speech Perception , Humans , Attention/physiology , Electroencephalography/methods , Female , Male , Speech Perception/physiology , Adult , Young Adult , Acoustic Stimulation/methods , Auditory Perception/physiology
12.
Brain Behav ; 14(6): e3571, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38841736

ABSTRACT

OBJECTIVE: This study aims to control all hearing thresholds, including extended high frequencies (EHFs), present stimuli of varying difficulty levels, and measure electroencephalography (EEG) and pupillometry responses to determine whether listening difficulty in tinnitus patients is effort- or fatigue-related. METHODS: Twenty-one chronic tinnitus patients and 26 matched healthy controls, all with normal pure-tone averages and symmetrical hearing thresholds, were included. Subjects were evaluated with 0.125-20 kHz pure-tone audiometry, the Montreal Cognitive Assessment (MoCA), the Tinnitus Handicap Inventory (THI), EEG, and pupillometry. RESULTS: Pupil dilation and EEG alpha power during the "encoding" phase of the presented sentence were lower in tinnitus patients in all listening conditions (p < .05). There was also no statistically significant relationship between EEG or pupillometry components and THI or MoCA in any listening condition (p > .05). CONCLUSION: The EEG and pupillometry results under various listening conditions indicate potential listening effort in tinnitus patients even when all frequencies, including EHFs, are controlled. We also suggest that pupillometry be interpreted with caution in conditions related to the autonomic nervous system, such as tinnitus.


Subject(s)
Electroencephalography , Pupil , Tinnitus , Humans , Tinnitus/physiopathology , Tinnitus/diagnosis , Male , Female , Electroencephalography/methods , Adult , Middle Aged , Pupil/physiology , Audiometry, Pure-Tone , Auditory Perception/physiology , Auditory Threshold/physiology
13.
J Acoust Soc Am ; 155(6): 3639-3653, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38836771

ABSTRACT

The estimation of auditory evoked potentials requires deconvolution when the duration of the responses to be recovered exceeds the inter-stimulus interval. Based on least squares deconvolution, in this article we extend the procedure to the case of a multi-response convolutional model, that is, a model in which different categories of stimulus are expected to evoke different responses. The computational cost of the multi-response deconvolution significantly increases with the number of responses to be deconvolved, which restricts its applicability in practical situations. In order to alleviate this restriction, we propose to perform the multi-response deconvolution in a reduced representation space associated with a latency-dependent filtering of auditory responses, which provides a significant dimensionality reduction. We demonstrate the practical viability of the multi-response deconvolution with auditory responses evoked by clicks presented at different levels and categorized according to their stimulation level. The multi-response deconvolution applied in a reduced representation space provides the least squares estimation of the responses with a reasonable computational load. MATLAB/Octave code implementing the proposed procedure is included as supplementary material.
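
A minimal dense-matrix version of multi-response least-squares deconvolution, without the paper's reduced representation space, can be sketched as follows: each stimulus category contributes a block of shifted onset indicators, and all responses are recovered jointly in a single least-squares solve. Shapes and names are illustrative.

```python
# Hypothetical multi-response least-squares deconvolution (dense, unreduced).
import numpy as np

def deconvolve(eeg, onsets_by_cat, resp_len):
    """eeg: (n_samples,); onsets_by_cat: list of onset-sample arrays, one per category."""
    n = len(eeg)
    blocks = []
    for onsets in onsets_by_cat:
        X = np.zeros((n, resp_len))
        for lag in range(resp_len):
            idx = onsets + lag
            X[idx[idx < n], lag] = 1.0     # mark samples covered by this response lag
        blocks.append(X)
    design = np.hstack(blocks)             # overlapping responses modeled jointly
    coefs, *_ = np.linalg.lstsq(design, eeg, rcond=None)
    return np.split(coefs, len(onsets_by_cat))  # one estimated response per category
```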


Subject(s)
Acoustic Stimulation , Evoked Potentials, Auditory , Evoked Potentials, Auditory/physiology , Humans , Acoustic Stimulation/methods , Male , Adult , Electroencephalography/methods , Female , Least-Squares Analysis , Young Adult , Signal Processing, Computer-Assisted , Reaction Time , Auditory Perception/physiology
14.
J Acoust Soc Am ; 155(6): 3742-3759, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38856312

ABSTRACT

Amplitude modulation (AM) of a masker reduces its masking on a simultaneously presented unmodulated pure-tone target, which likely involves dip listening. This study tested the idea that dip-listening efficiency may depend on stimulus context, i.e., the match in AM peakedness (AMP) between the masker and a precursor or postcursor stimulus, consistent with a form of temporal pattern analysis. Masked thresholds were measured in normal-hearing listeners using Schroeder-phase harmonic complexes as maskers and precursors or postcursors. Experiment 1 showed threshold elevation (i.e., interference) when a flat cursor preceded or followed a peaked masker, suggesting proactive and retroactive temporal pattern analysis. Threshold decline (facilitation) was observed when the masker AMP was matched to the precursor, irrespective of stimulus AMP, suggesting only proactive processing. Subsequent experiments showed that both interference and facilitation (1) remained robust when a temporal gap was inserted between masker and cursor, (2) disappeared when an F0-difference was introduced between masker and precursor, and (3) decreased when the presentation level was reduced. These results suggest an important role of envelope regularity in dip listening, especially when masker and cursor are F0-matched and, therefore, form one perceptual stream. The reported effects seem to represent a time-domain variant of comodulation masking release.
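
Schroeder-phase harmonic complexes, the maskers used here, follow a standard construction: N harmonics of a common f0 with phases of the form ±πn(n+1)/N, where the phase sign yields the positive- and negative-Schroeder variants whose effective envelope peakedness differs at the output of cochlear filtering. A sketch with illustrative (not the study's) parameters:

```python
# Schroeder-phase complex sketch; f0, harmonic count, and duration are
# illustrative, not the study's stimulus parameters.
import numpy as np

def schroeder_complex(f0=100.0, n_harm=40, sign=+1, fs=44100, dur=0.5):
    t = np.arange(int(fs * dur)) / fs
    x = np.zeros_like(t)
    for n in range(1, n_harm + 1):
        phase = sign * np.pi * n * (n + 1) / n_harm   # Schroeder (1970) phases
        x += np.cos(2 * np.pi * n * f0 * t + phase)
    return x / np.max(np.abs(x))                       # peak-normalize

m_plus = schroeder_complex(sign=+1)    # positive-Schroeder masker
m_minus = schroeder_complex(sign=-1)   # negative-Schroeder masker
```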


Subject(s)
Acoustic Stimulation , Auditory Threshold , Perceptual Masking , Humans , Young Adult , Adult , Time Factors , Female , Male , Audiometry, Pure-Tone , Auditory Perception/physiology
15.
Nat Commun ; 15(1): 4835, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844457

ABSTRACT

Humans produce two forms of cognitively complex vocalizations: speech and song. It is debated whether these differ based primarily on culturally specific, learned features, or if acoustical features can reliably distinguish them. We study the spectro-temporal modulation patterns of vocalizations produced by 369 people living in 21 urban, rural, and small-scale societies across six continents. Specific ranges of spectral and temporal modulations, overlapping within categories and across societies, significantly differentiate speech from song. Machine-learning classification shows that this effect is cross-culturally robust, vocalizations being reliably classified solely from their spectro-temporal features across all 21 societies. Listeners unfamiliar with the cultures classify these vocalizations using similar spectro-temporal cues as the machine learning algorithm. Finally, spectro-temporal features are better able to discriminate song from speech than a broad range of other acoustical variables, suggesting that spectro-temporal modulation-a key feature of auditory neuronal tuning-accounts for a fundamental difference between these categories.
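
The spectro-temporal modulation features described here are commonly computed as the 2-D Fourier transform of a (log-)spectrogram, giving axes of temporal modulation (Hz) and spectral modulation (cycles per frequency unit). A sketch under assumed STFT settings; the paper's exact modulation filterbank may differ.

```python
# Hypothetical modulation-spectrum computation; STFT settings are assumptions.
import numpy as np
from scipy.signal import stft

def modulation_spectrum(x, fs):
    f, t, S = stft(x, fs=fs, nperseg=512, noverlap=384)
    logmag = np.log(np.abs(S) + 1e-8)    # compressive nonlinearity
    logmag -= logmag.mean()              # remove DC before the 2-D FFT
    return np.abs(np.fft.fftshift(np.fft.fft2(logmag)))
    # rows: spectral modulation; columns: temporal modulation
```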


Subject(s)
Machine Learning , Speech , Humans , Speech/physiology , Male , Female , Adult , Acoustics , Cross-Cultural Comparison , Auditory Perception/physiology , Sound Spectrography , Singing/physiology , Music , Middle Aged , Young Adult
16.
Cereb Cortex ; 34(6)2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38897817

ABSTRACT

Recent work suggests that the adult human brain is very adaptable when it comes to sensory processing. In this context, it has also been suggested that structural "blueprints" may fundamentally constrain neuroplastic change, e.g. in response to sensory deprivation. Here, we trained 12 blind participants and 14 sighted participants in echolocation over a 10-week period, and used MRI in a pre-post design to measure functional and structural brain changes. We found that blind participants and sighted participants together showed a training-induced increase in activation in left and right V1 in response to echoes, a finding difficult to reconcile with the view that sensory cortex is strictly organized by modality. Further, blind participants and sighted participants showed a training-induced increase in activation in right A1 in response to sounds per se (i.e. not echo-specific), and this was accompanied by an increase in gray matter density in right A1 in blind participants and in adjacent acoustic areas in sighted participants. The similarity in functional results between sighted participants and blind participants is consistent with the idea that reorganization may be governed by similar principles in the two groups, yet our structural analyses also showed differences between the groups suggesting that a more nuanced view may be required.


Subject(s)
Auditory Cortex , Blindness , Magnetic Resonance Imaging , Visual Cortex , Humans , Blindness/physiopathology , Blindness/diagnostic imaging , Male , Adult , Female , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Auditory Cortex/physiopathology , Visual Cortex/diagnostic imaging , Visual Cortex/physiology , Young Adult , Neuronal Plasticity/physiology , Acoustic Stimulation , Brain Mapping , Middle Aged , Auditory Perception/physiology , Echolocation/physiology
17.
PLoS One ; 19(6): e0304913, 2024.
Article in English | MEDLINE | ID: mdl-38900836

ABSTRACT

Research has shown that perceiving the order of successive auditory stimuli could be affected by their nameability. The present research re-examined this hypothesis, using tasks requiring participants to report the order of successively presented (with no interstimulus gaps) environmental (i.e., easily named stimuli) and abstract (i.e., hard-to-name stimuli) sounds of short duration (i.e., 200 ms). Using the same sequences, we also examined the accuracy of the sounds perceived by administering enumeration tasks. Data analyses showed that accuracy in the ordering tasks was equally low for both environmental and abstract sounds, whereas accuracy in the enumeration tasks was higher for the former as compared to the latter sounds. Importantly, overall accuracy in the enumeration tasks did not reach ceiling levels, suggesting some limitations in the perception of successively presented stimuli. Overall, naming fluency seemed to affect sound enumeration, but no effects were obtained for order perception. Furthermore, an effect of each sound's location in a sequence on ordering accuracy was noted. Our results question earlier notions suggesting that order perception is mediated by stimuli's nameability and leave open the possibility that memory capacity limits may play a role.


Subject(s)
Acoustic Stimulation , Auditory Perception , Memory, Short-Term , Sound , Humans , Male , Female , Auditory Perception/physiology , Adult , Memory, Short-Term/physiology , Young Adult , Names
18.
Elife ; 122024 Jun 21.
Article in English | MEDLINE | ID: mdl-38904659

ABSTRACT

Dynamic attending theory proposes that the ability to track temporal cues in the auditory environment is governed by entrainment, the synchronization between internal oscillations and regularities in external auditory signals. Here, we focused on two key properties of internal oscillators: their preferred rate, the default rate in the absence of any input; and their flexibility, how they adapt to changes in rhythmic context. We developed methods to estimate oscillator properties (Experiment 1) and compared the estimates across tasks and individuals (Experiment 2). Preferred rates, estimated as the stimulus rates with peak performance, showed a harmonic relationship across measurements and were correlated with individuals' spontaneous motor tempo. Estimates from motor tasks were slower than those from the perceptual task, and the degree of slowing was consistent for each individual. Task performance decreased with trial-to-trial changes in stimulus rate, and responses on individual trials were biased toward the preceding trial's stimulus properties. Flexibility, quantified as an individual's ability to adapt to faster-than-previous rates, decreased with age. These findings show domain-specific rate preferences for the assumed oscillatory system underlying rhythm perception and production, and that this system loses its ability to flexibly adapt to changes in the external rhythmic context during aging.
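
The preferred-rate estimate described here ("the stimulus rates with peak performance") can be illustrated by measuring performance across rates and locating the peak, e.g., via a quadratic fit; both the data and the fit choice below are assumptions.

```python
# Hypothetical preferred-rate estimate; performance values are invented.
import numpy as np

rates = np.array([0.5, 1.0, 1.5, 2.0, 2.5, 3.0])              # stimulus rates (Hz)
performance = np.array([0.61, 0.72, 0.83, 0.79, 0.68, 0.60])  # invented accuracies

a, b, c = np.polyfit(rates, performance, 2)   # inverted-U assumption
preferred_rate = -b / (2 * a)                 # vertex of the fitted parabola
print(f"preferred rate ~ {preferred_rate:.2f} Hz")
```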


Subject(s)
Attention , Auditory Perception , Humans , Adult , Attention/physiology , Female , Male , Young Adult , Aged , Auditory Perception/physiology , Middle Aged , Aging/physiology , Acoustic Stimulation , Adolescent
19.
Neuroimage ; 296: 120686, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38871037

ABSTRACT

Centromedian nucleus (CM) is one of several intralaminar nuclei of the thalamus and is thought to be involved in consciousness, arousal, and attention. CM has been suggested to play a key role in the control of attention by regulating the flow of information to different brain regions such as the ascending reticular system, basal ganglia, and cortex. While the neurophysiology of attention in the visual and auditory systems has been studied in animal models, combined single-unit and LFP recordings in humans have not, to our knowledge, been reported. Here, we recorded neuronal activity in the CM nucleus in 11 patients prior to insertion of deep brain stimulation (DBS) electrodes for the treatment of epilepsy while subjects performed an auditory attention task. Patients were requested to attend to and count the infrequent (p = 0.2) odd or "deviant" tones, ignore the frequent standard tones, and report the total number of deviant tones at trial completion. Spikes were discriminated, and LFPs were band-pass filtered (5-45 Hz). Average peristimulus time histograms and spectra were constructed by aligning on tone onsets and statistically compared. The firing rate of CM neurons showed selective, multi-phasic responses to deviant tones in 81% of the tested neurons. Local field potential analysis showed selective beta and low-gamma (13-45 Hz) modulations in response to deviant tones, also in a multi-phasic pattern. The current study demonstrates that CM neurons are under top-down control and participate in selective processing during auditory attention and working memory. Taken together, these results implicate the CM in selective auditory attention and working memory and support a role of beta and low-gamma oscillatory activity in cognitive processes. They also have potential implications for DBS therapy for epilepsy and for non-motor symptoms of Parkinson's disease (PD), such as apathy and other disorders of attention.
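
The peristimulus time histogram (PSTH) construction described here has a standard form: spike times are re-aligned to each tone onset, binned, and converted to a firing rate. Bin width and window below are illustrative assumptions, not the study's settings.

```python
# PSTH sketch; window and bin width are assumptions.
import numpy as np

def psth(spike_times, onsets, window=(-0.2, 0.8), bin_width=0.01):
    edges = np.arange(window[0], window[1] + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for onset in onsets:
        rel = spike_times - onset                      # align to this tone onset
        rel = rel[(rel >= window[0]) & (rel < window[1])]
        counts += np.histogram(rel, bins=edges)[0]
    rate = counts / (len(onsets) * bin_width)          # spikes/s per bin
    return edges[:-1], rate
```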


Subject(s)
Attention , Auditory Perception , Intralaminar Thalamic Nuclei , Memory, Short-Term , Neurons , Humans , Attention/physiology , Male , Female , Memory, Short-Term/physiology , Adult , Auditory Perception/physiology , Intralaminar Thalamic Nuclei/physiology , Middle Aged , Neurons/physiology , Young Adult , Acoustic Stimulation , Deep Brain Stimulation/methods
20.
Med Sci Monit ; 30: e944090, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38859565

ABSTRACT

BACKGROUND The dichotic digit test (DDT) is one of the tests for the behavioral assessment of central auditory processing. Dichotic listening tests are sensitive ways of assessing cortical structures, the corpus callosum, and binaural integration mechanisms, showing strong correlations with learning difficulties. The DDT is available in a number of languages, each appropriate for the subject's native language; however, there has been no Italian version. The goal of this study was to develop an Italian version of the one-pair dichotic digit test (DDT-IT) and analyze results in 39 normal-hearing Italian children aged 11 to 13 years. We used 2 conditions of presentation, free recall and directed attention (left or right ear), and looked at possible effects of sex and ear side. MATERIAL AND METHODS This study involved 3 steps: creation of the stimuli, checking their quality with Italian speakers, and assessment of the DDT-IT in our subject pool. The study involved 39 children (26 girls and 13 boys) aged 11-13 years. All participants underwent basic audiological assessment, auditory brainstem response testing, and then the DDT-IT. RESULTS Results under the free recall and directed attention conditions were similar for the right and left ears, and there were no sex or age effects. CONCLUSIONS The Italian version of the DDT (DDT-IT) has been developed, and its performance was assessed in 39 normal-hearing Italian children. We found no age or sex effects for either the free recall or the directed attention condition.


Subject(s)
Dichotic Listening Tests , Humans , Female , Male , Child , Adolescent , Dichotic Listening Tests/methods , Italy , Language , Hearing/physiology , Auditory Perception/physiology , Attention/physiology