Results 1 - 20 of 32,651
1.
J Acoust Soc Am ; 155(5): 3101-3117, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38722101

ABSTRACT

Cochlear implant (CI) users often report being unsatisfied with music listening through their hearing device. Vibrotactile stimulation could help alleviate those challenges. Previous research has shown that normal-hearing listeners gave musical stimuli higher preference ratings when concurrent vibrotactile stimulation was congruent with the corresponding auditory signal in intensity and timing than when it was incongruent. However, it is not known whether this is also the case for CI users. Therefore, in this experiment, we presented 18 CI users and 24 normal-hearing listeners with five melodies and five different audio-to-tactile maps. Each map varied the congruence between the audio and tactile signals in intensity, fundamental frequency, and timing. Participants rated the maps from zero to 100 based on preference. Almost all normal-hearing listeners, as well as a subset of the CI users, preferred tactile stimulation that was congruent with the audio in intensity and timing. However, many CI users showed no preference difference between timing-aligned and timing-unaligned stimuli. The results provide evidence that vibrotactile enhancement of music enjoyment could be a solution for some CI users; however, more research is needed to understand which CI users can benefit from it most.
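As a rough illustration of the kind of intensity-congruent audio-to-tactile map the study describes, the amplitude envelope of a musical signal can be extracted and low-pass filtered to drive a tactile actuator. This is a minimal sketch, not the authors' implementation; the function name, cutoff frequency, and test signal are assumptions.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def audio_to_tactile_envelope(audio, fs, cutoff_hz=50.0):
    """Extract a smoothed intensity envelope from an audio signal.

    The envelope could drive a vibrotactile actuator so that tactile
    intensity tracks (is congruent with) the audio intensity.
    """
    # Magnitude of the analytic signal gives the instantaneous amplitude.
    env = np.abs(hilbert(audio))
    # Low-pass to the slow intensity fluctuations an actuator can render.
    b, a = butter(4, cutoff_hz / (fs / 2), btype="low")
    env = filtfilt(b, a, env)
    return np.clip(env, 0.0, None)

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# 220 Hz tone with a 4 Hz amplitude modulation (a crude melody stand-in)
audio = (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 220 * t)
env = audio_to_tactile_envelope(audio, fs)
```

The same envelope could then be shifted in time, or replaced with one derived from a different melody, to construct timing-unaligned or intensity-incongruent conditions.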


Subject(s)
Acoustic Stimulation; Auditory Perception; Cochlear Implants; Music; Humans; Female; Male; Adult; Middle Aged; Aged; Auditory Perception/physiology; Young Adult; Patient Preference; Cochlear Implantation/instrumentation; Touch Perception/physiology; Vibration; Touch
2.
JASA Express Lett ; 4(5)2024 May 01.
Article in English | MEDLINE | ID: mdl-38727569

ABSTRACT

Bimodal stimulation, a cochlear implant (CI) in one ear and a hearing aid (HA) in the other, provides highly asymmetrical inputs. To understand how this asymmetry affects perception and memory, forward and backward digit spans were measured in nine bimodal listeners. Spans were unchanged from monotic to diotic presentation; dichotic presentation produced an average two-digit decrease, with some extreme cases of spans dropping to zero. The interaurally asymmetrical decreases were not predicted by the device or the better-functioning ear. Bimodal listeners can therefore demonstrate a strong ear dominance that diminishes memory recall dichotically even when perception is intact monaurally.


Subject(s)
Cochlear Implants; Humans; Middle Aged; Aged; Male; Female; Dichotic Listening Tests; Adult; Auditory Perception/physiology; Hearing Aids
3.
Headache ; 64(5): 482-493, 2024 May.
Article in English | MEDLINE | ID: mdl-38693749

ABSTRACT

OBJECTIVE: In this cross-sectional observational study, we aimed to investigate sensory profiles and multisensory integration processes in women with migraine using virtual dynamic interaction systems. BACKGROUND: Compared to studies of unimodal sensory processing, fewer studies have shown that multisensory integration differs in patients with migraine, and multisensory integration of visual, auditory, verbal, and haptic modalities has not been evaluated in migraine. METHODS: Participants played a 12-min virtual dynamic interaction game consisting of four parts. During the game, they were exposed either to visual stimuli only or to multisensory stimuli in which auditory, verbal, and haptic stimuli were added to the visual stimuli. A total of 78 women (28 with migraine without aura and 50 healthy controls) were enrolled in this prospective exploratory study. Patients with migraine and healthy participants who met the inclusion criteria were randomized separately into visual and multisensory groups: migraine multisensory (14 adults), migraine visual (14 adults), healthy multisensory (25 adults), and healthy visual (25 adults). The Sensory Profile Questionnaire was used to assess the participants' sensory profiles, and the game scores and survey results were analyzed. RESULTS: With visual stimuli, the gaming performance scores of patients with migraine without aura were similar to those of healthy controls, at a median (interquartile range [IQR]) of 81.8 (79.5-85.8) and 80.9 (77.1-84.2), respectively (p = 0.149). The error rate with visual stimuli in patients with migraine without aura was comparable to that of healthy controls, at a median (IQR) of 0.11 (0.08-0.13) and 0.12 (0.10-0.14), respectively (p = 0.166). With multisensory stimulation, the average gaming score was lower in patients with migraine without aura than in healthy individuals (median [IQR] 82.2 [78.8-86.3] vs. 78.6 [74.0-82.4], p = 0.028).
In women with migraine, exposure to a new sensory modality added to the visual stimuli in the fourth, seventh, and tenth rounds (median [IQR] 78.1 [74.1-82.0], 79.7 [77.2-82.5], 76.5 [70.2-82.1]) yielded lower game scores than visual stimuli only (median [IQR] 82.3 [77.9-87.8], 84.2 [79.7-85.6], 80.8 [79.0-85.7]; p = 0.044, p = 0.049, p = 0.016). According to the Sensory Profile Questionnaire, the sensory sensitivity and sensory avoidance scores of patients with migraine (median [IQR] 45.5 [41.0-54.7] and 47.0 [41.5-51.7]) were significantly higher than those of healthy participants (median [IQR] 39.0 [34.0-44.2] and 40.0 [34.0-48.0]; p < 0.001, p = 0.001). CONCLUSION: This virtual dynamic game approach showed for the first time that the gaming performance of patients with migraine without aura was negatively affected by the addition of auditory, verbal, and haptic stimuli to visual stimuli. Multisensory integration of sensory modalities, including haptic stimuli, is disturbed even in the interictal period in women with migraine. Virtual games can be employed to assess the impact of sensory problems in the course of the disease, and sensory training could be a potential therapy target to improve multisensory processing in migraine.


Subject(s)
Migraine Disorders; Humans; Female; Adult; Cross-Sectional Studies; Migraine Disorders/physiopathology; Prospective Studies; Video Games; Visual Perception/physiology; Young Adult; Virtual Reality; Photic Stimulation/methods; Auditory Perception/physiology
4.
Brain Cogn ; 177: 106161, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38696928

ABSTRACT

Narrative comprehension relies on basic sensory processing abilities, such as visual and auditory processing, with recent evidence that it also recruits executive functions (EF), which are likewise engaged during reading. EF has previously been described as a "supporter" of engaging the auditory and visual modalities in different cognitive tasks, with evidence of lower efficiency in this process among individuals with reading difficulties in the absence of a visual stimulus (i.e., while listening to stories). The current study aims to fill the gap concerning the level of reliance on these neural circuits when visual aids (pictures) accompany story listening, in relation to reading skills. Functional MRI data were collected from 44 Hebrew-speaking children aged 8-12 years while they listened to stories with vs. without visual stimuli (i.e., pictures). Functional connectivity of networks supporting reading was defined in each condition and compared between the conditions against behavioral reading measures. Lower reading skills were related to greater functional connectivity between EF networks (default mode and memory networks), and between the auditory and memory networks, for stories with vs. without visual stimulation. A greater difference in functional connectivity between the conditions was related to lower reading scores. We conclude that lower reading skills in children may be related to a need for greater scaffolding, i.e., visual stimulation such as pictures illustrating the narrative during story listening, which may guide future intervention approaches.
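Functional connectivity between networks, as analyzed above, is commonly quantified as the pairwise correlation of network time series; the condition contrast is then a difference of two such connectivity matrices. The sketch below illustrates the computation with synthetic data; the function name and signal model are assumptions, not the study's pipeline.

```python
import numpy as np

def functional_connectivity(ts):
    """Pairwise Pearson correlation between network time series.

    ts: array of shape (n_networks, n_timepoints).
    Returns an (n_networks, n_networks) connectivity matrix.
    """
    return np.corrcoef(ts)

rng = np.random.default_rng(0)
shared = rng.standard_normal(300)            # common fluctuation
net_a = shared + 0.3 * rng.standard_normal(300)
net_b = shared + 0.3 * rng.standard_normal(300)
net_c = rng.standard_normal(300)             # independent network
fc_listening = functional_connectivity(np.vstack([net_a, net_b, net_c]))
```

The quantity compared across the "with pictures" vs. "without pictures" conditions would then be the element-wise difference of the two condition matrices.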


Subject(s)
Executive Function; Magnetic Resonance Imaging; Reading; Visual Perception; Humans; Child; Male; Female; Executive Function/physiology; Visual Perception/physiology; Auditory Perception/physiology; Comprehension/physiology; Photic Stimulation/methods; Nerve Net/physiology; Nerve Net/diagnostic imaging; Brain/physiology
5.
Trends Hear ; 28: 23312165241239541, 2024.
Article in English | MEDLINE | ID: mdl-38738337

ABSTRACT

Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans is consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis is extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.


Subject(s)
Cochlea; Speech Perception; Tinnitus; Humans; Cochlea/physiopathology; Tinnitus/physiopathology; Tinnitus/diagnosis; Animals; Speech Perception/physiology; Hyperacusis/physiopathology; Noise/adverse effects; Auditory Perception/physiology; Synapses/physiology; Hearing Loss, Noise-Induced/physiopathology; Hearing Loss, Noise-Induced/diagnosis; Loudness Perception
6.
J Acoust Soc Am ; 155(5): 3183-3194, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38738939

ABSTRACT

Medial olivocochlear (MOC) efferents modulate outer hair cell motility through specialized nicotinic acetylcholine receptors to support encoding of signals in noise. Transgenic mice lacking the alpha9 subunits of these receptors (α9KOs) have normal hearing in quiet and noise, but lack classic cochlear suppression effects and show abnormal temporal, spectral, and spatial processing. Mice deficient for both the alpha9 and alpha10 receptor subunits (α9α10KOs) may exhibit more severe MOC-related phenotypes. Like α9KOs, α9α10KOs have normal auditory brainstem response (ABR) thresholds and weak MOC reflexes. Here, we further characterized auditory function in α9α10KO mice. Wild-type (WT) and α9α10KO mice had similar ABR thresholds and acoustic startle response amplitudes in quiet and noise, and similar frequency and intensity difference sensitivity. α9α10KO mice had larger ABR Wave I amplitudes than WTs in quiet and noise. Other ABR metrics of hearing-in-noise function yielded conflicting findings regarding α9α10KO susceptibility to masking effects. α9α10KO mice also had larger startle amplitudes in tone backgrounds than WTs. Overall, α9α10KO mice had grossly normal auditory function in quiet and noise, although their larger ABR amplitudes and hyperreactive startles suggest some auditory processing abnormalities. These findings contribute to the growing literature showing mixed effects of MOC dysfunction on hearing.


Subject(s)
Acoustic Stimulation; Auditory Threshold; Evoked Potentials, Auditory, Brain Stem; Mice, Knockout; Noise; Receptors, Nicotinic; Reflex, Startle; Animals; Noise/adverse effects; Receptors, Nicotinic/genetics; Receptors, Nicotinic/deficiency; Perceptual Masking; Behavior, Animal; Mice; Mice, Inbred C57BL; Cochlea/physiology; Cochlea/physiopathology; Male; Phenotype; Olivary Nucleus/physiology; Auditory Pathways/physiology; Auditory Pathways/physiopathology; Female; Auditory Perception/physiology; Hearing
7.
Multisens Res ; 37(2): 89-124, 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-38714311

ABSTRACT

Prior studies investigating the effects of routine action video game play have demonstrated improvements in a variety of cognitive processes, including performance on attentional tasks. However, there is little evidence that the cognitive benefits of playing action video games generalize from simplified unisensory stimuli to multisensory scenes - a fundamental characteristic of natural, everyday environments. The present study addressed whether video game experience has an impact on crossmodal congruency effects when searching through such multisensory scenes. We compared the performance of action video game players (AVGPs) and non-video game players (NVGPs) on a visual search task for objects embedded in video clips of realistic scenes. We conducted two identical online experiments with gender-balanced samples, for a total of N = 130. Overall, the data replicated previous findings of search benefits when visual targets were accompanied by semantically congruent auditory events, compared to neutral or incongruent ones. However, AVGPs did not consistently outperform NVGPs in the overall search task, nor did they use multisensory cues more efficiently than NVGPs. Exploratory analyses with self-reported gender as a variable revealed a potential difference in response strategy between experienced male and female AVGPs when dealing with crossmodal cues. These findings suggest that generalizing the advantage of AVG experience to realistic, crossmodal situations should be done with caution and with gender-related issues in mind.


Subject(s)
Attention; Video Games; Visual Perception; Humans; Male; Female; Visual Perception/physiology; Young Adult; Adult; Attention/physiology; Auditory Perception/physiology; Photic Stimulation; Adolescent; Reaction Time/physiology; Cues; Acoustic Stimulation
8.
Multisens Res ; 37(2): 143-162, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38714315

ABSTRACT

A vital heuristic used when judging whether audio-visual signals arise from the same event is the temporal coincidence of the respective signals. Previous research has highlighted a process whereby the perception of simultaneity rapidly recalibrates to account for differences in the physical temporal offsets of stimuli. The current paper investigated whether rapid recalibration also occurs in response to differences in central arrival latencies, driven by visual-intensity-dependent processing times. In a behavioural experiment, observers completed a temporal-order judgement (TOJ), a simultaneity judgement (SJ), and a simple reaction-time (RT) task, responding to audio-visual trials that were preceded by other audio-visual trials with either a bright or dim visual stimulus. The point of subjective simultaneity shifted with the visual intensity of the preceding stimulus in the TOJ task, but not in the SJ task, while the RT data revealed no effect of preceding intensity. Our data therefore provide some evidence that the perception of simultaneity rapidly recalibrates based on stimulus intensity.
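The point of subjective simultaneity (PSS) in a TOJ task is typically estimated by fitting a cumulative Gaussian psychometric function to the proportion of "visual first" responses across stimulus onset asynchronies (SOAs); a recalibration effect then appears as a shift of the fitted mean. A minimal sketch with simulated data follows; the function names and the 20 ms shift are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def fit_pss(soa_ms, p_visual_first):
    """Fit a cumulative Gaussian psychometric function to TOJ data.

    Returns (pss, sd): the PSS is the SOA at which "visual first"
    responses reach 50%; sd reflects the slope of the function.
    """
    def psychometric(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu, sigma), _ = curve_fit(psychometric, soa_ms, p_visual_first,
                               p0=[0.0, 50.0])
    return mu, sigma

# Simulated observer whose PSS has shifted to +20 ms after adaptation,
# i.e., the audio must lead by 20 ms for perceived simultaneity.
soas = np.array([-240, -120, -60, -30, 0, 30, 60, 120, 240], float)
p_vf = norm.cdf(soas, loc=20.0, scale=60.0)
pss, sd = fit_pss(soas, p_vf)
```

Comparing the fitted PSS after bright- vs. dim-preceded trials would quantify the rapid recalibration effect reported above.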


Subject(s)
Acoustic Stimulation; Auditory Perception; Photic Stimulation; Reaction Time; Visual Perception; Humans; Visual Perception/physiology; Auditory Perception/physiology; Male; Female; Reaction Time/physiology; Adult; Young Adult; Judgment/physiology
9.
J Neurodev Disord ; 16(1): 24, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38720271

ABSTRACT

BACKGROUND: Autism spectrum disorder (ASD) is currently diagnosed in approximately 1 in 44 children in the United States, based on a wide array of symptoms, including sensory dysfunction and abnormal language development. Boys are diagnosed ~3.8 times more frequently than girls. Auditory temporal processing is crucial for speech recognition and language development. Abnormal development of temporal processing may account for ASD language impairments, and sex differences in the development of temporal processing may underlie the differences in language outcomes between male and female children with ASD. Understanding the mechanisms of potential sex differences in temporal processing requires a preclinical model; however, no studies have addressed sex differences in temporal processing across development in any animal model of ASD. METHODS: To fill this major gap, we compared the development of auditory temporal processing in male and female wildtype (WT) and Fmr1 knock-out (KO) mice, a model of Fragile X Syndrome (FXS), a leading genetic cause of ASD-associated behaviors. Using epidural screw electrodes, we recorded auditory event-related potentials (ERPs) and auditory temporal processing with a gap-in-noise auditory steady-state response (ASSR) paradigm at young (postnatal day (p)21 and p30) and adult (p60) ages from both auditory and frontal cortices of awake, freely moving mice. RESULTS: ERP amplitudes were enhanced in both sexes of Fmr1 KO mice across development compared to WT counterparts, with greater enhancement in adult female than adult male KO mice. Gap-ASSR deficits were seen in the frontal, but not auditory, cortex in early development (p21) in female KO mice. Unlike male KO mice, female KO mice showed WT-like temporal processing at p30. There were no temporal processing deficits in adult mice of either sex.
CONCLUSIONS: These results show a sex difference in the developmental trajectories of temporal processing and hypersensitive responses in Fmr1 KO mice. Male KO mice show slower maturation of temporal processing than females, while female KO mice show stronger hypersensitive responses than males later in development. The differences in maturation rates of temporal processing and hypersensitive responses during various critical periods of development may lead to sex differences in language function, arousal, and anxiety in FXS.


Subject(s)
Disease Models, Animal; Evoked Potentials, Auditory; Fragile X Mental Retardation Protein; Fragile X Syndrome; Mice, Knockout; Sex Characteristics; Animals; Fragile X Syndrome/physiopathology; Female; Male; Mice; Evoked Potentials, Auditory/physiology; Fragile X Mental Retardation Protein/genetics; Auditory Perception/physiology; Autism Spectrum Disorder/physiopathology; Auditory Cortex/physiopathology; Mice, Inbred C57BL
10.
PLoS One ; 19(5): e0303565, 2024.
Article in English | MEDLINE | ID: mdl-38781127

ABSTRACT

In this study, we attempted to improve brain-computer interface (BCI) systems by means of auditory stream segregation, in which alternately presented tones are perceived as separate sequences of tones (streams). A 3-class BCI using three tone sequences, perceived as three different tone streams, was investigated and evaluated. Each presented musical tone was generated by a software synthesizer. Eleven subjects took part in the experiment. Stimuli were presented to each user's right ear, and subjects were requested to attend to one of the three streams and to count the number of target stimuli in the attended stream. In addition, 64-channel electroencephalogram (EEG) and two-channel electrooculogram (EOG) signals were recorded with a sampling frequency of 1000 Hz. The measured EEG data were classified based on Riemannian geometry to detect the object of the subject's selective attention. P300 activity was elicited by the target stimuli in the segregated tone streams. In five of the eleven subjects, P300 activity was elicited only by the target stimuli in the attended stream. In a 10-fold cross-validation test, classification accuracy over 80% was achieved for five subjects and over 75% for nine subjects. For subjects whose accuracy was lower than 75%, either P300 activity was also elicited for non-attended streams or the P300 amplitude was small. We conclude that the number of classes in BCI systems based on auditory stream segregation can be increased to three, and that these classes can be detected through a single ear without the aid of any visual modality.
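Riemannian-geometry classification of EEG epochs, as mentioned above, typically represents each epoch by its spatial covariance matrix and assigns it to the class with the nearest mean under the affine-invariant metric (minimum distance to mean, MDM). The sketch below uses a log-Euclidean mean as a common fast surrogate for the true Riemannian mean, and toy covariances in place of P300 epochs; all names and parameters are assumptions, not the study's pipeline.

```python
import numpy as np
from scipy.linalg import eigvalsh

def riemann_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices:
    sqrt of the sum of squared log generalized eigenvalues of (A, B)."""
    return np.sqrt(np.sum(np.log(eigvalsh(A, B)) ** 2))

def spd_mean_log_euclidean(covs):
    """Log-Euclidean mean: exp of the average matrix logarithm."""
    logs = []
    for c in covs:
        w, v = np.linalg.eigh(c)
        logs.append(v @ np.diag(np.log(w)) @ v.T)
    w, v = np.linalg.eigh(np.mean(logs, axis=0))
    return v @ np.diag(np.exp(w)) @ v.T

def mdm_classify(cov, class_means):
    """Minimum distance to mean: assign the trial to the nearest class."""
    return int(np.argmin([riemann_dist(cov, m) for m in class_means]))

rng = np.random.default_rng(0)

def trial_cov(gain, n_ch=4, n_samp=200):
    """Covariance of a toy EEG epoch; channel 0 variance encodes the class."""
    x = rng.standard_normal((n_ch, n_samp))
    x[0] *= gain
    return x @ x.T / n_samp

means = [spd_mean_log_euclidean([trial_cov(1.0) for _ in range(20)]),
         spd_mean_log_euclidean([trial_cov(3.0) for _ in range(20)])]
```

In a real pipeline the two classes would be target vs. non-target epochs, and the distances themselves could feed the 10-fold cross-validation described above.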


Subject(s)
Acoustic Stimulation; Attention; Brain-Computer Interfaces; Electroencephalography; Humans; Male; Female; Electroencephalography/methods; Adult; Attention/physiology; Acoustic Stimulation/methods; Auditory Perception/physiology; Young Adult; Event-Related Potentials, P300/physiology; Electrooculography/methods
11.
Curr Biol ; 34(9): R346-R348, 2024 05 06.
Article in English | MEDLINE | ID: mdl-38714161

ABSTRACT

Animals including humans often react to sounds by involuntarily moving their face and body. A new study shows that facial movements provide a simple and reliable readout of a mouse's hearing ability that is more sensitive than traditional measurements.


Subject(s)
Face; Animals; Mice; Face/physiology; Auditory Perception/physiology; Hearing/physiology; Sound; Movement/physiology; Humans
12.
Curr Biol ; 34(10): 2162-2174.e5, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38718798

ABSTRACT

Humans make use of small differences in the timing of sounds at the two ears-interaural time differences (ITDs)-to locate their sources. Despite extensive investigation, however, the neural representation of ITDs in the human brain is contentious, particularly the range of ITDs explicitly represented by dedicated neural detectors. Here, using magneto- and electro-encephalography (MEG and EEG), we demonstrate evidence of a sparse neural representation of ITDs in the human cortex. The magnitude of cortical activity to sounds presented via insert earphones oscillated as a function of increasing ITD-within and beyond auditory cortical regions-and listeners rated the perceptual quality of these sounds according to the same oscillating pattern. This pattern was accurately described by a population of model neurons with preferred ITDs constrained to the narrow, sound-frequency-dependent range evident in other mammalian species. When scaled for head size, the distribution of ITD detectors in the human cortex is remarkably like that recorded in vivo from the cortex of rhesus monkeys, another large primate that uses ITDs for source localization. The data solve a long-standing issue concerning the neural representation of ITDs in humans and suggest a representation that scales for head size and sound frequency in an optimal manner.
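ITDs of the sub-millisecond magnitude discussed here can be estimated from binaural signals by finding the lag that maximizes the interaural cross-correlation. A minimal sketch, using broadband noise and an assumed 0.5 ms delay (the function name and signal parameters are illustrative, not from the study):

```python
import numpy as np

def estimate_itd(left, right, fs):
    """Estimate the interaural time difference by cross-correlation.

    Returns seconds; positive when the left-ear signal leads (i.e., the
    right-ear signal is a delayed copy of the left).
    """
    corr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(corr)) - (len(left) - 1)
    return lag / fs

fs = 48_000
rng = np.random.default_rng(0)
sig = rng.standard_normal(2400)     # 50 ms broadband token
delay = 24                          # 0.5 ms, within the human ITD range
left = sig
right = np.concatenate([np.zeros(delay), sig[:-delay]])
itd = estimate_itd(left, right, fs)
```

A broadband carrier avoids the period ambiguity that a pure tone would introduce into the cross-correlation peak, which is related to the sound-frequency-dependent range of preferred ITDs described above.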


Subject(s)
Auditory Cortex; Cues; Sound Localization; Auditory Cortex/physiology; Humans; Male; Sound Localization/physiology; Animals; Female; Adult; Electroencephalography; Macaca mulatta/physiology; Magnetoencephalography; Acoustic Stimulation; Young Adult; Auditory Perception/physiology
13.
Hear Res ; 447: 109028, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38733711

ABSTRACT

Amplitude modulation is an important acoustic cue for sound discrimination, and humans and animals can detect small modulation depths behaviorally. In the inferior colliculus (IC), both firing rate and phase-locking may be used to detect amplitude modulation. How the neural representations that detect modulation change with age is poorly understood, including the extent to which age-related changes may be attributed to the inherited properties of ascending inputs to IC neurons. Here, simultaneous measures of local field potentials (LFPs) and single-unit responses were made from the inferior colliculus of Young and Aged rats, using both noise and tone carriers, in response to sinusoidally amplitude-modulated sounds of varying depths. We found that Young units had higher firing rates than Aged units for noise carriers, whereas Aged units had higher phase-locking (vector strength), especially for tone carriers. Sustained LFPs were larger in Young animals for modulation frequencies of 8-16 Hz and comparable at higher modulation frequencies. Onset LFP amplitudes were much larger in Young animals and were correlated with the evoked firing rates, while LFP onset latencies were shorter in Aged animals. Unit neurometric thresholds based on synchrony or firing-rate measures did not differ significantly across age and were comparable to behavioral thresholds in previous studies, whereas LFP thresholds were lower than behavioral thresholds.
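Phase-locking to the modulator, as reported above, is conventionally quantified by vector strength: the magnitude of the mean unit phase vector of spike times relative to the modulation cycle, with 1 indicating perfect locking and values near 0 indicating none. A minimal sketch (the spike trains and modulation frequency are illustrative assumptions):

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength of spike phase-locking to a modulation frequency.

    Each spike contributes a unit vector at its phase within the
    modulation cycle; VS is the magnitude of the mean vector.
    """
    phases = 2.0 * np.pi * mod_freq * np.asarray(spike_times)
    return float(np.abs(np.mean(np.exp(1j * phases))))

fm = 16.0                                  # modulation frequency, Hz
locked = np.arange(50) / fm + 0.01         # one spike per cycle, fixed phase
rng = np.random.default_rng(1)
random_spikes = rng.uniform(0.0, 50 / fm, size=800)   # unlocked firing
```

Comparing such values across depths yields the synchrony-based neurometric thresholds mentioned above.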


Subject(s)
Acoustic Stimulation; Aging; Inferior Colliculi; Animals; Inferior Colliculi/physiology; Aging/physiology; Rats; Age Factors; Auditory Perception/physiology; Male; Auditory Threshold; Evoked Potentials, Auditory; Neurons/physiology; Action Potentials; Reaction Time; Noise/adverse effects; Time Factors; Auditory Pathways/physiology
14.
Curr Biol ; 34(10): 2200-2211.e6, 2024 05 20.
Article in English | MEDLINE | ID: mdl-38733991

ABSTRACT

The activity of neurons in sensory areas sometimes covaries with upcoming choices in decision-making tasks. However, the prevalence, causal origin, and functional role of choice-related activity remain controversial. Understanding the circuit-logic of decision signals in sensory areas will require understanding their laminar specificity, but simultaneous recordings of neural activity across the cortical layers in forced-choice discrimination tasks have not yet been performed. Here, we describe neural activity from such recordings in the auditory cortex of mice during a frequency discrimination task with delayed report, which, as we show, requires the auditory cortex. Stimulus-related information was widely distributed across layers but disappeared very quickly after stimulus offset. Choice selectivity emerged toward the end of the delay period-suggesting a top-down origin-but only in the deep layers. Early stimulus-selective and late choice-selective deep neural ensembles were correlated, suggesting that the choice-selective signal fed back to the auditory cortex is not just action specific but develops as a consequence of the sensory-motor contingency imposed by the task.


Subject(s)
Auditory Cortex; Choice Behavior; Animals; Auditory Cortex/physiology; Mice; Choice Behavior/physiology; Acoustic Stimulation; Mice, Inbred C57BL; Auditory Perception/physiology; Male; Neurons/physiology
15.
Nat Commun ; 15(1): 4071, 2024 May 22.
Article in English | MEDLINE | ID: mdl-38778078

ABSTRACT

Adaptive behavior requires integrating prior knowledge of action outcomes and sensory evidence for making decisions while maintaining prior knowledge for future actions. As outcome- and sensory-based decisions are often tested separately, it is unclear how these processes are integrated in the brain. In a tone frequency discrimination task with two sound durations and asymmetric reward blocks, we found that neurons in the medial prefrontal cortex of male mice represented the additive combination of prior reward expectations and choices. The sensory inputs and choices were selectively decoded from the auditory cortex irrespective of reward priors and from the secondary motor cortex, respectively, suggesting that localized computations of task variables occur within single trials. In contrast, all the recorded regions represented prior values that needed to be maintained across trials. We propose localized and global computations of task variables on different time scales in the cerebral cortex.


Subject(s)
Auditory Cortex; Choice Behavior; Reward; Animals; Male; Choice Behavior/physiology; Mice; Auditory Cortex/physiology; Neurons/physiology; Prefrontal Cortex/physiology; Acoustic Stimulation; Mice, Inbred C57BL; Cerebral Cortex/physiology; Motor Cortex/physiology; Auditory Perception/physiology
16.
Codas ; 36(2): e20230048, 2024.
Article in Portuguese, English | MEDLINE | ID: mdl-38695432

ABSTRACT

PURPOSE: To correlate the results of behavioral assessment of central auditory processing with those of a self-perception questionnaire administered after acoustically controlled auditory training. METHODS: The study assessed 10 individuals, with a mean age of 44.5 years, who had suffered mild traumatic brain injury. They underwent behavioral assessment of central auditory processing and answered the Formal Auditory Training self-perception questionnaire after the therapeutic intervention. The questionnaire addresses auditory perception, understanding orders, requests to repeat statements, occurrence of misunderstandings, attention span, auditory performance in noisy environments, telephone communication, and self-esteem; patients were asked to indicate how frequently the listed behaviors occurred. RESULTS: Figure-ground, sequential memory for sounds, and temporal processing correlated with improvement in following instructions, fewer requests to repeat statements, increased attention span, and improved communication and understanding on the phone and when watching TV. CONCLUSION: Auditory closure, figure-ground, and temporal processing had improved in the assessment after the acoustically controlled auditory training, and there were fewer auditory behavior complaints.


Subject(s)
Auditory Perception; Self Concept; Humans; Adult; Male; Female; Auditory Perception/physiology; Surveys and Questionnaires; Middle Aged; Brain Concussion/psychology; Brain Concussion/rehabilitation; Acoustic Stimulation/methods; Young Adult
17.
Nat Commun ; 15(1): 3941, 2024 May 10.
Artículo en Inglés | MEDLINE | ID: mdl-38729937

RESUMEN

A relevant question concerning inter-areal communication in the cortex is whether such interactions are synergistic. Synergy refers to the complementary effect of multiple brain signals conveying more information than the sum of each isolated signal. Redundancy, on the other hand, refers to the common information shared between brain signals. Here, we dissociated cortical interactions encoding complementary information (synergy) from those sharing common information (redundancy) during prediction error (PE) processing. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERP) and broadband (BB) dynamics encoded synergistic and redundant information about PE processing. The information conveyed by ERPs and BB signals was synergistic even at lower stages of the hierarchy in the auditory cortex and between auditory and frontal regions. Using a brain-constrained neural network, we simulated the synergy and redundancy observed in the experimental results and demonstrated that the emergence of synergy between auditory and frontal regions requires the presence of strong, long-distance, feedback, and feedforward connections. These results indicate that distributed representations of PE signals across the cortical hierarchy can be highly synergistic.
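The synergy/redundancy dissociation described above can be illustrated with plain interaction information on discrete toy signals. The study's actual analysis presumably uses a richer decomposition on continuous ECoG features; the sketch below only shows the textbook cases, with all variable names and data invented for illustration:

```python
import numpy as np
from collections import Counter

def entropy(samples):
    # Shannon entropy (bits) of a sequence of hashable outcomes
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def interaction_info(x1, x2, y):
    # I(X1,X2;Y) - I(X1;Y) - I(X2;Y):
    # positive -> the sources are net synergistic about Y,
    # negative -> they are net redundant about Y
    joint = list(zip(x1, x2))
    return mutual_info(joint, y) - mutual_info(x1, y) - mutual_info(x2, y)

rng = np.random.default_rng(0)
x1 = rng.integers(0, 2, 10_000)
x2 = rng.integers(0, 2, 10_000)
y_xor = x1 ^ x2  # neither source alone predicts y, but together they do: synergy
```

The XOR target is the textbook synergistic case (each source alone carries no information about `y`, jointly they carry one bit), while passing the same signal twice (`interaction_info(x1, x1, x1)`) gives the textbook redundant case with a strongly negative value.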


Asunto(s)
Estimulación Acústica , Corteza Auditiva , Callithrix , Electrocorticografía , Animales , Corteza Auditiva/fisiología , Callithrix/fisiología , Masculino , Femenino , Potenciales Evocados/fisiología , Lóbulo Frontal/fisiología , Potenciales Evocados Auditivos/fisiología , Percepción Auditiva/fisiología , Mapeo Encefálico/métodos
18.
eNeuro ; 11(5)2024 May.
Artículo en Inglés | MEDLINE | ID: mdl-38702187

RESUMEN

Mismatch negativity (MMN) is commonly recognized as a neural signal of prediction error evoked by deviants from the expected patterns of sensory input. Studies show that MMN diminishes when sequence patterns become more predictable over a longer timescale. This implies that MMN is composed of multiple subcomponents, each responding to different levels of temporal regularities. To probe the hypothesized subcomponents in MMN, we record human electroencephalography during an auditory local-global oddball paradigm where the tone-to-tone transition probability (local regularity) and the overall sequence probability (global regularity) are manipulated to control temporal predictabilities at two hierarchical levels. We find that the size of MMN is correlated with both probabilities and the spatiotemporal structure of MMN can be decomposed into two distinct subcomponents. Both subcomponents appear as negative waveforms, with one peaking early in the central-frontal area and the other late in a more frontal area. With a quantitative predictive coding model, we map the early and late subcomponents to the prediction errors that are tied to local and global regularities, respectively. Our study highlights the hierarchical complexity of MMN and offers an experimental and analytical platform for developing a multitiered neural marker applicable in clinical settings.
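The two regularity levels in the local-global paradigm can be sketched as two surprise computations: one over tone-to-tone transitions (local) and one over whole-sequence frequencies across the block (global). This is only an illustrative toy, not the study's quantitative predictive coding model; the tokens and probabilities below are made up:

```python
import numpy as np

def local_surprise(seq, trans_p):
    # -log2 P(next tone | previous tone) for each transition in the sequence
    return [-np.log2(trans_p[(a, b)]) for a, b in zip(seq, seq[1:])]

def global_surprise(seq, seq_p):
    # -log2 P(whole sequence) under the block-level statistics
    return -np.log2(seq_p[tuple(seq)])

# Local regularity: 'A' tends to repeat, so an A->B step is locally deviant.
trans_p = {('A', 'A'): 0.9, ('A', 'B'): 0.1,
           ('B', 'A'): 0.9, ('B', 'B'): 0.1}
# Global regularity: the AAAAB pattern is the frequent one in this block.
seq_p = {('A', 'A', 'A', 'A', 'A'): 0.2,
         ('A', 'A', 'A', 'A', 'B'): 0.8}

frequent_deviant = ['A', 'A', 'A', 'A', 'B']  # locally deviant, globally expected
rare_standard    = ['A', 'A', 'A', 'A', 'A']  # locally standard, globally rare
```

The dissociation mirrors the paradigm: the final tone of `frequent_deviant` carries high local surprise but low global surprise, while `rare_standard` is the reverse, which is what lets the two hypothesized MMN subcomponents be separated.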


Asunto(s)
Estimulación Acústica , Electroencefalografía , Potenciales Evocados Auditivos , Humanos , Masculino , Femenino , Electroencefalografía/métodos , Adulto Joven , Adulto , Potenciales Evocados Auditivos/fisiología , Estimulación Acústica/métodos , Percepción Auditiva/fisiología , Encéfalo/fisiología , Mapeo Encefálico , Adolescente
19.
J Neural Eng ; 21(3)2024 May 22.
Artículo en Inglés | MEDLINE | ID: mdl-38729132

RESUMEN

Objective.This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population.Approach.Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI, listening to competing talkers amidst background noise.Main results.Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3% and 82.9% and area-under-curve (AUC) of 77.2%, 80.6% and 92.1% for the three tasks, respectively, with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1% and 97.5%, along with AUC of 94.6%, 89.1% and 99.8%. Our DCNN models show good performance on short 1 s EEG samples, making them suitable for real-world applications.Conclusion.Our DCNN models successfully addressed all three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks.Significance.Our findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and for advancing hearing technology, while also promoting further exploration of alternative DL architectures and their potential constraints.
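The inter-trial versus intra-trial distinction the abstract draws is the standard leakage concern in windowed EEG classification: windows from one trial are temporally correlated, so shuffling windows across the train/test boundary inflates scores. A minimal sketch of the two split strategies (trial counts and names are illustrative, not from the paper):

```python
import numpy as np

# Each classification window is tagged (trial_id, window_id); shapes invented.
n_trials, windows_per_trial = 20, 50
windows = [(trial, w) for trial in range(n_trials)
           for w in range(windows_per_trial)]

def inter_trial_split(windows, test_trials):
    # Whole trials are held out: no window from any test trial is ever
    # seen during training, so correlated windows cannot leak.
    train = [w for w in windows if w[0] not in test_trials]
    test = [w for w in windows if w[0] in test_trials]
    return train, test

def intra_trial_split(windows, test_frac=0.2, seed=0):
    # Windows are shuffled regardless of trial: test windows come from
    # trials whose other windows were seen in training -> leakage,
    # and thus optimistically inflated accuracy.
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(windows))
    cut = int(len(windows) * (1 - test_frac))
    train = [windows[i] for i in idx[:cut]]
    test = [windows[i] for i in idx[cut:]]
    return train, test
```

The inter-trial split guarantees disjoint trial IDs between train and test; the intra-trial split almost certainly shares trials across the boundary, which is the mechanism behind the inflated intra-trial scores reported above.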


Asunto(s)
Atención , Percepción Auditiva , Aprendizaje Profundo , Electroencefalografía , Pérdida Auditiva , Humanos , Atención/fisiología , Femenino , Electroencefalografía/métodos , Masculino , Persona de Mediana Edad , Pérdida Auditiva/fisiopatología , Pérdida Auditiva/rehabilitación , Pérdida Auditiva/diagnóstico , Anciano , Percepción Auditiva/fisiología , Ruido , Adulto , Audífonos , Percepción del Habla/fisiología , Redes Neurales de la Computación
20.
Cereb Cortex ; 34(5)2024 May 02.
Artículo en Inglés | MEDLINE | ID: mdl-38700440

RESUMEN

While the auditory and visual systems each provide distinct information to our brain, they also work together to process and prioritize input to address ever-changing conditions. Previous studies highlighted the trade-off between auditory change detection and visual selective attention; however, the relationship between them is still unclear. Here, we recorded electroencephalography signals from 106 healthy adults in three experiments. Our findings revealed a positive correlation at the population level between the amplitudes of event-related potential indices associated with auditory change detection (mismatch negativity) and visual selective attention (posterior contralateral N2) when elicited in separate tasks. This correlation persisted even when participants performed a visual task while disregarding simultaneous auditory stimuli. Interestingly, as visual attention demand increased, participants whose posterior contralateral N2 amplitude increased the most exhibited the largest reduction in mismatch negativity, suggesting a within-subject trade-off between the two processes. Taken together, our results suggest an intimate relationship and potential shared mechanism between auditory change detection and visual selective attention. We liken this to a total capacity limit that varies between individuals, which could drive correlated individual differences in auditory change detection and visual selective attention, and also within-subject competition between the two, with task-based modulation of visual attention causing within-participant decrease in auditory change detection sensitivity.


Asunto(s)
Atención , Percepción Auditiva , Electroencefalografía , Percepción Visual , Humanos , Atención/fisiología , Masculino , Femenino , Adulto Joven , Adulto , Percepción Auditiva/fisiología , Percepción Visual/fisiología , Estimulación Acústica/métodos , Estimulación Luminosa/métodos , Potenciales Evocados/fisiología , Encéfalo/fisiología , Adolescente