Results 1 - 20 of 2,125
1.
Acta Otolaryngol ; : 1-6, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-39351976

ABSTRACT

BACKGROUND: Hearing can be preserved in patients with considerable residual low-frequency hearing who receive cochlear implants. However, the most favorable electrode type for hearing preservation and speech perception has been debated. OBJECTIVE: The aim was to evaluate hearing preservation and speech discrimination one year post-implantation for all types of cochlear implant electrode used in adult patients implanted between 2014 and 2022. METHODS: The HEARING group formula was used to calculate the degree of hearing preservation, which was defined as minimal (0-25%), partial (25-75%), or complete (≥75%). Speech perception was measured with monosyllabic words. RESULTS: Analysis of hearing preservation for the various electrode types revealed that FLEX 24 preserved hearing statistically significantly better (p < 0.05) than FLEX 28, FLEX Soft, and Contour Advance. FLEX 20 also preserved hearing statistically significantly better (p < 0.05) than Contour Advance. No statistically significant difference in monosyllabic word score was found between the electrode types. DISCUSSION: There was a statistically significant difference between the electrode types in terms of hearing preservation but not speech perception. The results of this study contribute important information about hearing preservation and speech perception that can be used for pre-surgery patient counselling.
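The hearing preservation calculation referenced above can be sketched as follows. This is a minimal illustration assuming the commonly cited form of the HEARING group formula, S = [1 − (PTApost − PTApre)/(PTAmax − PTApre)] × 100, with PTAmax the audiometric limit (often 120 dB HL); the function names and example values here are hypothetical.

```python
def hearing_preservation(pta_pre, pta_post, pta_max=120.0):
    """Percentage of preserved hearing (assumed form of the HEARING group formula).

    pta_pre / pta_post: pure-tone averages (dB HL) before/after implantation.
    pta_max: audiometric measurement limit, commonly taken as 120 dB HL.
    """
    s = (1 - (pta_post - pta_pre) / (pta_max - pta_pre)) * 100
    return max(0.0, min(100.0, s))  # clamp to the 0-100 % range

def classify(s):
    # Bands as given in the abstract: minimal 0-25 %, partial 25-75 %, complete >= 75 %.
    if s >= 75:
        return "complete"
    return "partial" if s > 25 else "minimal"

# Hypothetical example: pre-op PTA 40 dB HL worsens to 60 dB HL post-op.
s = hearing_preservation(40, 60)   # (1 - 20/80) * 100 = 75.0 -> "complete"
```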

2.
Front Psychol ; 15: 1484655, 2024.
Article in English | MEDLINE | ID: mdl-39355294

ABSTRACT

[This corrects the article DOI: 10.3389/fpsyg.2023.1270743.].

3.
J Commun Disord ; 112: 106467, 2024 Sep 21.
Article in English | MEDLINE | ID: mdl-39362063

ABSTRACT

INTRODUCTION: Remote microphone (RM) systems are designed to enhance speech recognition in noisy environments by improving the signal-to-noise ratio (SNR) for individuals with typical hearing (TH) and hearing impairment (HI). The aim of this investigation was to evaluate the speech-recognition-in-noise benefit for individuals with TH in a simulated group setting using two different remote microphones. METHODS: A quasi-experimental, repeated-measures design was employed, involving ten participants with TH, aged 20 to 63 years. Each was fitted bilaterally with Roger Focus receivers to listen under three RM conditions: Roger Select, Roger Pen, and no technology. Participants were instructed to transcribe sentences presented randomly at varying signal-to-noise ratios (SNRs: 0, -5, and -10 dB) from five speakers positioned equidistantly around a circular table to simulate a group dining scenario. RESULTS: Significant main effects of technology condition and noise level (p < .05) were found. Participants performed better with the Roger Select than with the Roger Pen. As expected, recognition rates decreased with lower SNRs across all three technology conditions. CONCLUSIONS: To enhance speech recognition in group settings for individuals with TH, the Roger Select microphone in conjunction with bilateral Roger Focus receivers is recommended over the Roger Pen.

4.
Front Psychol ; 15: 1399084, 2024.
Article in English | MEDLINE | ID: mdl-39380752

ABSTRACT

This review examines how visual information enhances speech perception in individuals with hearing loss, focusing on the impact of age, linguistic stimuli, and specific hearing loss factors on the effectiveness of audiovisual (AV) integration. While existing studies offer varied and sometimes conflicting findings regarding the use of visual cues, our analysis shows that these key factors can distinctly shape AV speech perception outcomes. For instance, younger individuals and those who receive early intervention tend to benefit more from visual cues, particularly when linguistic complexity is lower. Additionally, languages with dense phoneme spaces demonstrate a higher dependency on visual information, underscoring the importance of tailoring rehabilitation strategies to specific linguistic contexts. By considering these influences, we highlight areas where understanding is still developing and suggest how personalized rehabilitation strategies and supportive systems could be tailored to better meet individual needs. Furthermore, this review brings attention to important aspects that warrant further investigation, aiming to refine theoretical models and contribute to more effective, customized approaches to hearing rehabilitation.

5.
Indian J Otolaryngol Head Neck Surg ; 76(5): 4356-4364, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39376318

ABSTRACT

In the current age of technology, artificial intelligence is used in the medical field to improve the quality and accuracy of patient care and achieve better client satisfaction. The use of artificial intelligence in the field of hearing rehabilitation and cochlear implantation has immense scope: it enhances the accuracy of electrode array placement, forecasting of the surgical site, and optimization of speech processing. This study aims to compare the audiological outcomes of conventional versus artificial intelligence-enabled cochlear implant speech processors. Additionally, it compares individual performance and satisfaction levels with both types of speech processor. All children who underwent upgradation of their cochlear implant speech processors to artificial intelligence-enabled speech processors at a tertiary care cochlear implant centre were included in the study. The audiological outcomes of conventional versus artificial intelligence-integrated speech processors were compared using Aided Audiometry, the Categories of Auditory Perception score, and the Speech Intelligibility Rating scale. Children using the basic-model cochlear implant speech processor provided at the time of implantation are referred to as conventional cochlear implant speech processor users. Their speech processors were subsequently upgraded with current-generation artificial intelligence-integrated speech processors, referred to here as artificial intelligence-upgraded cochlear implant speech processors. During the study, a total of thirty-four (34) patients underwent upgradation of their cochlear implant speech processors. The mean Categories of Auditory Perception scores were 11.58 and 11.94 using the conventional and the artificial intelligence-upgraded speech processor, respectively. The mean Speech Intelligibility Rating scores were 4.5 and 4.6, respectively.
The audiological outcomes of conventional speech processors are comparable with those of artificial intelligence-enabled speech processors. However, client satisfaction with respect to sound quality, ease of listening in difficult listening environments, and smart connectivity options for both phone and television is better with the artificial intelligence-enabled cochlear implant speech processor. The latter also has the advantages of automatic program switching with changes in ambient noise, a better signal-to-noise ratio, and better 360° hearing.

6.
Article in English | MEDLINE | ID: mdl-39369438

ABSTRACT

OBJECTIVE: Electrode array design may impact hearing outcomes in patients who receive cochlear implants. The goal of this work was to assess differences in postoperative speech perception among patients who received cochlear implants of differing designs and lengths. STUDY DESIGN: Retrospective chart review. SETTING: Tertiary care hospital. METHODS: Patients (n = 129) received 1 of 9 electrode arrays, which were categorized by design: lateral wall electrodes (n = 36) included CI522, CI622 (Cochlear Americas), Flex24, and Flex28 (MED-EL); mid-scala electrodes (n = 16) included HiRes Ultra 3D (Advanced Bionics); perimodiolar electrodes (n = 77) included CI512, CI532, CI612, and CI632 (Cochlear Americas). Speech perception was evaluated using consonant-nucleus-consonant (CNC) word tests at 3, 6, 12, and 24 months postimplantation. RESULTS: Perimodiolar electrodes showed significantly higher CNC scores than lateral wall electrodes at 6 and 24 months. Perimodiolar electrodes also outperformed mid-scala electrodes at 12 months. An inverse relationship between electrode length and CNC scores was noted at 6, 12, and 24 months. CONCLUSION: Perimodiolar electrode arrays, which tend to be shorter, demonstrated better speech perception outcomes than the longer lateral wall and mid-scala arrays at some timepoints. These findings suggest a potential advantage of perimodiolar electrodes for optimizing hearing outcomes.

7.
J Neurosci Methods ; 412: 110277, 2024 Sep 06.
Article in English | MEDLINE | ID: mdl-39245330

ABSTRACT

BACKGROUND: Speech sounds are processed in the human brain through intricate and interconnected cortical and subcortical structures. Two neural signatures, one largely from cortical sources (mismatch response, MMR) and one largely from subcortical sources (frequency-following response, FFR), are critical for assessing speech processing, as both show sensitivity to high-level linguistic information. However, there are distinct prerequisites for recording MMR and FFR, making them difficult to acquire simultaneously. NEW METHOD: Using a new paradigm, our study aims to capture both signals concurrently and test them against the following criteria: (1) replicating the effect that the MMR to a native speech contrast differs significantly from the MMR to a nonnative speech contrast, and (2) demonstrating that FFRs to three speech sounds can be reliably differentiated. RESULTS: Using EEG from 18 adults, we observed a decoding accuracy of 72.2% between the MMRs to native vs. nonnative speech contrasts. A significantly larger native MMR was found in the expected time window. Similarly, a significant decoding accuracy of 79.6% was found for the FFR. A high stimulus-to-response cross-correlation with a 9 ms lag suggested that the FFR closely tracks speech sounds. COMPARISON WITH EXISTING METHOD(S): These findings demonstrate that our paradigm reliably captures both MMR and FFR concurrently, replicating and extending past research with far fewer trials (MMR: 50 trials; FFR: 200 trials) and a shorter experiment time (12 minutes). CONCLUSIONS: This study paves the way to understanding cortical-subcortical interactions in speech and language processing, with the ultimate goal of developing an assessment tool specific to early development.
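The stimulus-to-response lag reported in the abstract above is the kind of quantity a cross-correlation analysis yields. The sketch below is a generic illustration, not the authors' pipeline: the signals, sampling rate, and delay are synthetic, with a circularly shifted noise sequence standing in for the neural response.

```python
import numpy as np

fs = 1000                                  # sampling rate in Hz (hypothetical)
rng = np.random.default_rng(0)
stimulus = rng.standard_normal(fs)         # 1 s of broadband "stimulus"
lag_samples = 9                            # simulate a 9 ms response delay
response = np.roll(stimulus, lag_samples)  # delayed (circularly shifted) copy

# Full cross-correlation; the lag of the peak estimates the response delay.
xcorr = np.correlate(response, stimulus, mode="full")
lags = np.arange(-len(stimulus) + 1, len(stimulus))
best_lag_ms = lags[np.argmax(xcorr)] * 1000 / fs   # 9.0 for this example
```

With real EEG one would band-pass both signals and correlate over epochs, but the lag-of-peak logic is the same.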

8.
Front Psychol ; 15: 1446240, 2024.
Article in English | MEDLINE | ID: mdl-39315043

ABSTRACT

The temporal dynamics of the perception of within-word coarticulatory cues remain a subject of ongoing debate in speech perception research. This behavioral gating study sheds light on the unfolding predictive use of anticipatory coarticulation in onset fricatives. Word onset fricatives (/f/ and /s/) were split into four gates (15, 35, 75 and 135 milliseconds). Listeners made a forced choice about the word they were listening to, based on the stimulus gates. The results showed fast predictive use of coarticulatory lip rounding during /s/ word onsets, as early as 15 ms from word onset. For /f/ onsets, coarticulatory backness and height began to be used predictively after 75 ms. These findings indicate that onset times of the occurrence and use of coarticulatory cues can be extremely fast and have a time course that differs depending on fricative type.

9.
Infant Behav Dev ; 77: 101992, 2024 Sep 18.
Article in English | MEDLINE | ID: mdl-39298930

ABSTRACT

In the current preregistered study, we tested n = 67 6-month-old Norwegian infants' discrimination of a native vowel contrast /y-i/ and a non-native (British) vowel contrast /ʌ-æ/ in an eye-tracking habituation paradigm. Our results showed that, on a group level, infants did not discriminate either contrast. Yet, exploratory analyses revealed a negative association between infants' performance in each experiment, that is, better discrimination of the native contrast was associated with worse discrimination of the non-native contrast. Potentially, infants in this study might have been on the cusp of perceptual reorganisation towards their native language.

10.
Article in English | MEDLINE | ID: mdl-39299967

ABSTRACT

OBJECTIVE: To evaluate objective and subjective hearing outcomes in experienced cochlear implant users with single-sided deafness (SSD CI) who used fitting maps created via anatomy-based fitting (ABF) and clinically based fitting (CBF). PARTICIPANTS: Twelve SSD CI users with postlingual hearing loss. INTERVENTION: OTOPLAN (Version 3; MED-EL) was used to determine intracochlear electrode contact positions from post-operative high-resolution flat-panel volume computed tomography. From these positions, the corresponding center frequencies and bandwidths were derived for each channel and implemented in the clinical fitting software MAESTRO to yield an ABF map individualized to each user. MAIN OUTCOME MEASURES: ABF and CBF maps were compared. Objective speech perception in quiet and in noise, binaural effects, and self-perceived sound quality were evaluated. RESULTS: Significantly higher speech perception in noise scores were observed with the ABF map than with the CBF map (mean SRT50: -6.49 vs. -4.8 dB SNR for the S0NCI configuration and -3.85 vs. -2.75 dB SNR for the S0N0 configuration). Summation and squelch effects were significantly increased with the ABF map (0.86 vs. 0.21 dB SNR for summation and 0.85 vs. -0.09 dB SNR for squelch). No improvement in speech perception in quiet or in spatial release from masking was observed with the ABF map. A similar level of self-perceived sound quality was reported for each map. At the end of the study, all users opted to keep the ABF map. This preference was independent of the angular insertion depth of the electrode array. CONCLUSIONS: Experienced SSD CI users preferred the ABF map, which gave them significant improvements in binaural hearing and some aspects of speech perception.

11.
Front Hum Neurosci ; 18: 1424920, 2024.
Article in English | MEDLINE | ID: mdl-39234407

ABSTRACT

Past studies have explored formant centering, a corrective behavior in which formants converge over the duration of an utterance toward those of a putative target vowel. In this study, we establish the existence of a similar centering phenomenon for pitch in healthy elderly controls and examine how this corrective behavior is altered in Alzheimer's disease (AD). We found that the pitch centering response in the healthy elderly was similar when correcting pitch errors below and above the target (median) pitch. In contrast, patients with AD showed an asymmetry, with a larger correction for pitch errors below the target phonation than above it. These findings indicate that pitch centering is a robust compensatory behavior in human speech. They also shed light on the potential impact on pitch centering of neurodegenerative processes that affect speech in AD.

12.
Article in English | MEDLINE | ID: mdl-39311007

ABSTRACT

Recent studies suggest that benefiting early from both a cochlear implant (CI) and exposure to cued speech (CS, a support system for the perception of oral language) positively impacts deaf children's speech perception, speech intelligibility, and reading. This study aims to show how (1) CS-based speech perception ("cue reading") and speech intelligibility might also constitute precise measures for determining the impact of CI and CS on deaf students' literacy performance, and (2) print exposure might also be a predictive factor in this equation. We conducted regression analyses to examine the impact of these three variables in two experiments, one with Grade 2-3 deaf children and one with Grade 6-9 deaf adolescents. Results indicate that print exposure contributes significantly to literacy skills across experiments, with additional contributions from cue reading and speech intelligibility in older students. The predictive value of the print exposure, cue reading, and speech intelligibility variables is discussed, as are the consequences for educational and pedagogical practices.

13.
J Cogn ; 7(1): 69, 2024.
Article in English | MEDLINE | ID: mdl-39280724

ABSTRACT

Music making across cultures arguably involves a blend of innovation and adherence to established norms. This integration allows listeners to recognise a range of innovative, surprising, and functional elements in music, while also associating them with a certain tradition or style. In this light, musical creativity may be seen to involve the novel recombination of shared elements and rules, which can in itself give rise to new cultural conventions. Put simply, future norms rely on past knowledge and present action; this holds for music as it does for other cultural domains. A key process permeating this temporal transition, with regard to both music making and music listening, is prediction. Recent findings suggest that as we listen to music, our brain constantly generates predictions based on prior knowledge acquired in a given enculturation context. Those predictions, in turn, can shape our appraisal of the music, in a continual perception-action loop. This dynamic process of predicting and calibrating expectations may enable shared musical realities, that is, sets of norms that are transmitted, with some modification, either vertically between generations of a given musical culture or horizontally between peers of the same or different cultures. As music transforms through cultural evolution, so do the predictive models in our minds and the expectancies they give rise to, influenced by cultural exposure and individual experience. Thus, creativity and prediction are both fundamental and complementary to the transmission of cultural systems, including music, across generations and societies. For these reasons, prediction, creativity, and cultural evolution were the central themes of a symposium we organised in 2022, which aimed to study their interplay from an interdisciplinary perspective, guided by contemporary theories and methodologies. This special issue compiles research discussed during or inspired by that symposium, concluding with potential directions for the field of music cognition in that spirit.

14.
Entropy (Basel) ; 26(9)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39330066

ABSTRACT

Misunderstandings in dyadic interactions often persist despite our best efforts, particularly between native and non-native speakers, resembling a broken duet that refuses to harmonise. This paper delves into the computational mechanisms underpinning these misunderstandings through the lens of the broken Lorenz system-a continuous dynamical model. By manipulating a specific parameter regime, we induce bistability within the Lorenz equations, thereby confining trajectories to distinct attractors based on initial conditions. This mirrors the persistence of divergent interpretations that often result in misunderstandings. Our simulations reveal that differing prior beliefs between interlocutors result in misaligned generative models, leading to stable yet divergent states of understanding when exposed to the same percept. Specifically, native speakers equipped with precise (i.e., overconfident) priors expect inputs to align closely with their internal models, thus struggling with unexpected variations. Conversely, non-native speakers with imprecise (i.e., less confident) priors exhibit a greater capacity to adjust and accommodate unforeseen inputs. Our results underscore the important role of generative models in facilitating mutual understanding (i.e., establishing a shared narrative) and highlight the necessity of accounting for multistable dynamics in dyadic interactions.
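The bistability the abstract describes can be illustrated with a standard Lorenz system integrated in a parameter regime where two symmetric fixed-point attractors coexist. This is a generic sketch, not the authors' "broken" variant: the parameter values (rho = 10) and the forward-Euler integrator are choices made here purely for illustration.

```python
def lorenz_step(state, sigma=10.0, rho=10.0, beta=8.0 / 3.0, dt=0.01):
    """One forward-Euler step of the Lorenz equations.

    With rho below the Hopf threshold (~24.74 for these sigma/beta), the two
    symmetric fixed points C± = (±sqrt(beta*(rho-1)), ±sqrt(beta*(rho-1)), rho-1)
    are stable, so each trajectory settles on one of them.
    """
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

def settle(state, steps=5000):
    """Integrate long enough (t = 50 here) for transients to die out."""
    for _ in range(steps):
        state = lorenz_step(state)
    return state

# Mirrored initial conditions play the role of interlocutors' differing priors:
a = settle((1.0, 1.0, 1.0))
b = settle((-1.0, -1.0, 1.0))
# a and b end on different (mirror-image) attractors, i.e. a[0] * b[0] < 0:
# the same dynamics, started differently, yield stably divergent "interpretations".
```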

15.
Neuroimage ; 300: 120875, 2024 Oct 15.
Article in English | MEDLINE | ID: mdl-39341475

ABSTRACT

In speech perception, low-frequency cortical activity tracks hierarchical linguistic units (e.g., syllables, phrases, and sentences) on top of acoustic features (e.g., speech envelope). Since the fluctuation of speech envelope typically corresponds to the syllabic boundaries, one common interpretation is that the acoustic envelope underlies the extraction of discrete syllables from continuous speech for subsequent linguistic processing. However, it remains unclear whether and how cortical activity encodes linguistic information when the speech envelope does not provide acoustic correlates of syllables. To address the issue, we introduced a frequency-tagging speech stream where the syllabic rhythm was obscured by echoic envelopes and investigated neural encoding of hierarchical linguistic information using electroencephalography (EEG). When listeners attended to the echoic speech, cortical activity showed reliable tracking of syllable, phrase, and sentence levels, among which the higher-level linguistic units elicited more robust neural responses. When attention was diverted from the echoic speech, reliable neural tracking of the syllable level was also observed in contrast to deteriorated neural tracking of the phrase and sentence levels. Further analyses revealed that the envelope aligned with the syllabic rhythm could be recovered from the echoic speech through a neural adaptation model, and the reconstructed envelope yielded higher predictive power for the neural tracking responses than either the original echoic envelope or anechoic envelope. Taken together, these results suggest that neural adaptation and attentional modulation jointly contribute to neural encoding of linguistic information in distorted speech where the syllabic rhythm is obscured by echoes.


Subjects
Electroencephalography , Speech Perception , Humans , Speech Perception/physiology , Male , Female , Electroencephalography/methods , Young Adult , Adult , Cerebral Cortex/physiology , Linguistics , Acoustic Stimulation
16.
Lang Speech ; : 238309241269059, 2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39297582

ABSTRACT

Kiezdeutsch is a multiethnolectal variety of German spoken by young people from multicultural communities that exhibits lexical, syntactic, and phonetic differences from standard German. A rather salient and pervasive feature of this variety is the fronting of the standard palatal fricative /ç/ (as in ich "I") to [ɕ] or [ʃ]. Previous perception work shows that this difference is salient and carries social meaning, but that this depends on the listener group. Further investigations also point to the significance of /ɔɪ/-fronting in production; however, whether this is salient in perception has not yet been investigated. In several (multi)ethnolectal varieties, differences in voice quality compared with the standard have been identified. Therefore, in this study, we present an acoustic comparison of voice quality in adolescent speakers of Kiezdeutsch and standard German, with results showing that Kiezdeutsch speakers produce a breathier voice quality. In addition, we report on a perception test designed to examine the social meaning of voice quality in combination with two segmental cues: coronalization of /ç/ and /ɔɪ/-fronting. The results indicate perceptual gradience for the phonetic alternations detected in Kiezdeutsch, with coronalization of /ç/ being a highly salient and reliable marker, whereas fronting of /ɔɪ/ and breathy voice do not appear to be clearly enregistered features of Kiezdeutsch for all listeners. Thus, even though we find differences in production, these may not necessarily be relevant in perception, pointing toward enregisterment, like sound change, being a continuous process of forming learned associations through tokens of experience.

17.
J Clin Med ; 13(17)2024 Sep 05.
Article in English | MEDLINE | ID: mdl-39274482

ABSTRACT

(1) Background: The aim of the present study was to assess the impact of reverberation on speech perception in noise and on spatial release from masking (SRM) in bimodal and bilateral cochlear implant (CI) users and in CI subjects with low-frequency residual hearing using combined electric-acoustic stimulation (EAS). (2) Methods: In total, 10 bimodal CI users, 14 bilateral CI users, 14 EAS users, and 17 normal-hearing (NH) controls took part in the study. Speech reception thresholds (SRTs) in unmodulated noise were assessed in a co-located masker condition (S0N0) and with spatial separation of speech and noise (S0N60), both in free field and in a loudspeaker-based room simulation for two different reverberation times. (3) Results: There was a significant detrimental effect of reverberation on SRTs and SRM in all subject groups. A significant difference between the NH group and all the CI/EAS groups was found. There was no significant difference in SRTs between any of the CI and EAS groups. Only NH subjects achieved spatial release from masking in reverberation, whereas no beneficial effect of spatial separation of speech and noise was found in any CI/EAS group. (4) Conclusions: The group with electric-acoustic stimulation did not yield a superior outcome in terms of speech perception in noise under reverberation when the noise was presented toward the better-hearing ear.
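Spatial release from masking, as measured in the study above, is conventionally the difference between the co-located and spatially separated speech reception thresholds; a positive value means the listener benefits from the separation. A minimal sketch (the function name and example SRT values are hypothetical):

```python
def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM in dB: improvement in the speech reception threshold when the
    masker is moved away from the target (positive = benefit)."""
    return srt_colocated_db - srt_separated_db

# Hypothetical SRTs: -4 dB SNR co-located (S0N0), -10 dB SNR separated (S0N60).
srm = spatial_release_from_masking(-4.0, -10.0)   # 6.0 dB of release
```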

18.
Brain Sci ; 14(9)2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39335391

ABSTRACT

Transcranial magnetic stimulation (TMS) has been widely used to study the mechanisms that underlie motor output. Yet, the extent to which TMS acts upon the cortical neurons implicated in volitional motor commands and the focal limitations of TMS remain subject to debate. Previous research links TMS to improved subject performance in behavioral tasks, including a bias in phoneme discrimination. Our study replicates this result, which implies a causal relationship between electro-magnetic stimulation and psychomotor activity, and tests whether TMS-facilitated psychomotor activity recorded via electroencephalography (EEG) may thus serve as a superior input for neural decoding. First, we illustrate that site-specific TMS elicits a double dissociation in discrimination ability for two phoneme categories. Next, we perform a classification analysis on the EEG signals recorded during TMS and find a dissociation between the stimulation site and decoding accuracy that parallels the behavioral results. We observe weak to moderate evidence for the alternative hypothesis in a Bayesian analysis of group means, with more robust results upon stimulation to a brain region governing multiple phoneme features. Overall, task accuracy was a significant predictor of decoding accuracy for phoneme categories (F(1,135) = 11.51, p < 0.0009) and individual phonemes (F(1,119) = 13.56, p < 0.0003), providing new evidence for a causal link between TMS, neural function, and behavior.

19.
Front Psychol ; 15: 1394309, 2024.
Article in English | MEDLINE | ID: mdl-39323581

ABSTRACT

Previous research on the perception of segmental features of languages has established a correlation between the phoneme inventory of a language and its speakers' perceptual abilities, as indexed by discrimination tasks and Mismatch Negativity (MMN). Building on this background, the current study elucidated the relationship between perceptual ability and tonal inventory by utilizing two tonal languages. Two groups of participants were included in the present experiment: Mandarin speakers and Hakka-Mandarin speakers. Onset latency analysis revealed a significant difference in the Mandarin syllable condition, with Hakka-Mandarin speakers demonstrating earlier MMN latency than Mandarin speakers. This suggests a more efficient auditory processing mechanism in Hakka-Mandarin speakers. Both groups, however, showed similar MMN latency in the Hakka syllable condition. The interaction between language background and syllable type indicates that other factors, such as syllable sonority, also influence MMN responses. These findings highlight the importance of considering multiple phonemic inventories and syllable characteristics in studies of tonal perception.

20.
Cereb Cortex ; 34(9)2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39329356

ABSTRACT

Evidence suggests that the articulatory motor system contributes to speech perception in a context-dependent manner. This study tested 2 hypotheses using magnetoencephalography: (i) the motor cortex is involved in phonological processing, and (ii) it aids in compensating for speech-in-noise challenges. A total of 32 young adults performed a phonological discrimination task under 3 noise conditions while their brain activity was recorded using magnetoencephalography. We observed simultaneous activation in the left ventral primary motor cortex and bilateral posterior-superior temporal gyrus when participants correctly identified pairs of syllables. This activation was significantly more pronounced for phonologically different than identical syllable pairs. Notably, phonological differences were resolved more quickly in the left ventral primary motor cortex than in the left posterior-superior temporal gyrus. Conversely, the noise level did not modulate the activity in frontal motor regions and the involvement of the left ventral primary motor cortex in phonological discrimination was comparable across all noise conditions. Our results show that the ventral primary motor cortex is crucial for phonological processing but not for compensation in challenging listening conditions. Simultaneous activation of left ventral primary motor cortex and bilateral posterior-superior temporal gyrus supports an interactive model of speech perception, where auditory and motor regions shape perception. The ventral primary motor cortex may be involved in a predictive coding mechanism that influences auditory-phonetic processing.


Subjects
Magnetoencephalography , Motor Cortex , Phonetics , Speech Perception , Humans , Male , Female , Motor Cortex/physiology , Young Adult , Speech Perception/physiology , Adult , Functional Laterality/physiology , Discrimination, Psychological/physiology , Acoustic Stimulation , Brain Mapping , Noise