Results 1 - 20 of 24,077
1.
Harefuah ; 159(1): 123-127, 2020 Feb.
Article in Hebrew | MEDLINE | ID: mdl-32048493

ABSTRACT

INTRODUCTION: In normal hearing, the brain receives bilateral auditory input from both ears. In individuals with only one functioning ear, listening in noisy environments and sound localization may become difficult. Historically, the impact of unilateral hearing loss in children was typically minimized by clinicians, as it was assumed that one normal-hearing ear provided sufficient auditory input for speech development and a normal hearing experience. Data supporting the negative effects of unilateral deafness have been accumulating over the last decades. These effects extend beyond spatial hearing to language development, slower rates of educational progress, and problems in social interaction and in cognitively demanding tasks. Until recently, treatments for single-sided deafness were limited to routing signals from the deaf ear to the contralateral hearing ear, either through conventional CROS aids or through bone-anchored technologies. These technologies simply transfer sound to the single functioning ear, which allows sound awareness on the deaf side and only minor improvements in hearing in noisy environments and in localization. The cochlear implant is a surgically implanted electronic device containing an array of electrodes that is placed into the cochlea and stimulates the cochlear nerve, bypassing the injured parts of the inner ear. It is currently the only treatment that can restore binaural hearing. This review discusses the different aspects, benefits, and disadvantages of cochlear implantation in children with single-sided deafness.


Subjects
Cochlear Implantation , Cochlear Implants , Deafness , Child , Hearing Loss, Unilateral , Humans , Speech Perception
2.
HNO ; 68(Suppl 1): 43-49, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31915885

ABSTRACT

OBJECTIVE: To develop a new, age-appropriate German speech audiometry test for children, using 26 nouns that are most likely part of the lexicon of 2-year-olds. The test is a picture-pointing task with a four-option non-forced-choice method. MATERIALS AND METHODS: In total, 179 children aged 2;11 to 6;9 years were included to standardize and validate the speech test. Of these, 51 had a hearing impairment in both ears of up to 90 dB hearing level (HL). The normal-hearing children were divided into three groups according to age. For each group, the speech reception threshold (SRT) and the slope of the psychometric function of intelligibility were determined. For validation, test-retest reliability was measured in 85 ears, and the correlation between the pure-tone average (PTA) at 0.5, 1, 2, and 4 kHz and the SRT was measured in 86 ears. RESULTS: The sound spectrum of the 26 items was in good accordance with the international long-term speech spectrum, and the relative frequency of phonemes matched the distribution of the 50 most frequent German phonemes. The SRTs ranged from 24.6 ± 0.6 dB sound pressure level (SPL) for the oldest group (> 5.5 y) to 29.3 ± 1.3 dB SPL for the youngest group (< 4.25 y). The slopes of the psychometric function ranged from 4.3 ± 0.5%/dB for the oldest group to 2.6 ± 0.4%/dB for the youngest. Test and retest showed good correlation (r = 0.89, p < 0.0001), as did PTA and SRT (r = 0.84, p < 0.0001). CONCLUSION: The newly developed Mainz speech test effectively measures age-related speech perception from the age of three years.
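
The SRT and slope values reported above come from fitting a psychometric function to percent-correct scores. The following is a minimal sketch of such a fit (simple logistic, invented data, no guessing-rate correction); it is not the Mainz test's actual fitting procedure.

```python
# Hedged sketch: estimating a speech reception threshold (SRT) and the slope of the
# psychometric function from percent-correct scores. Data values are invented.
import numpy as np
from scipy.optimize import curve_fit

levels = np.array([15, 20, 25, 30, 35, 40], dtype=float)          # presentation level, dB SPL (hypothetical)
percent_correct = np.array([5, 18, 52, 81, 94, 98], dtype=float)  # word recognition, % (hypothetical)

def logistic(level, srt, slope):
    """Logistic intelligibility function: 50% correct at `srt`, `slope` in %/dB at the SRT."""
    return 100.0 / (1.0 + np.exp(-4.0 * slope / 100.0 * (level - srt)))

(srt, slope), _ = curve_fit(logistic, levels, percent_correct, p0=[25.0, 4.0])
print(f"SRT ≈ {srt:.1f} dB SPL, slope at SRT ≈ {slope:.1f} %/dB")
```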


Subjects
Speech Perception , Speech Reception Threshold Test , Speech , Speech Audiometry , Auditory Threshold , Child , Child, Preschool , Humans , Reproducibility of Results
3.
HNO ; 68(1): 25-31, 2020 Jan.
Article in German | MEDLINE | ID: mdl-31690970

ABSTRACT

BACKGROUND: Logatomes, nonsense combinations of consonants and vowels, are suitable for precise capture and analysis of individual phonemes as the fundamental building blocks of speech in audiometric diagnostics. OBJECTIVE: The aim of this prospective study was to characterize the audiometric properties of a closed-set logatome test. The slope of the discrimination function at the speech reception threshold (SRT) and the reproducibility were analyzed. MATERIAL AND METHODS: A set of 102 intensity-varied and randomized logatomes in consonant-vowel-consonant form was presented to 25 adults with unimpaired hearing. The measurements were performed in a free-field setting and were repeated after a 2-week interval. The subjects were asked to identify the logatome they had heard from a closed set of 10 response alternatives per item on a touchscreen. RESULTS: The slope of the mean discrimination function at the SRT was on average 4%/dB; however, it was steeper for the initial consonant than for the final one. The differences between test and retest results at the SRT showed a standard deviation of 13% for consonants and were normally distributed. There were no significant differences between test and retest. CONCLUSION: The slope of the discrimination function at the SRT was shallow but comparable to established word tests. There was no evidence of a learning effect in the retest, which emphasizes the low redundancy of the speech material and makes the test an attractive complement to routine audiometric diagnostics.
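
As a companion to the test-retest figures above, here is a minimal sketch (invented scores) of how the spread of test-retest differences and a check for a learning effect could be computed; it is not the authors' analysis pipeline.

```python
# Hedged sketch of a test-retest analysis: SD of the differences plus a paired t-test.
# The score arrays below are invented for illustration only.
import numpy as np
from scipy import stats

test = np.array([48, 55, 61, 50, 58, 63, 52, 57], dtype=float)    # % correct at the SRT, session 1
retest = np.array([51, 53, 64, 49, 60, 61, 55, 58], dtype=float)  # % correct at the SRT, session 2

diff = retest - test
print(f"SD of test-retest differences: {diff.std(ddof=1):.1f} percentage points")

# A non-significant paired t-test with a near-zero mean difference argues against a learning effect.
t_stat, p_value = stats.ttest_rel(retest, test)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```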


Subjects
Hearing Tests , Speech Perception , Speech Reception Threshold Test , Adult , Documentation , Humans , Prospective Studies , Reproducibility of Results
4.
Codas ; 32(1): e20180202, 2020.
Article in English | MEDLINE | ID: mdl-31721925

ABSTRACT

PURPOSE: To assess the auditory abilities of temporal ordering, temporal resolution, and sound localization before and after fitting of a hearing aid (HA) in individuals with unilateral hearing loss (UHL). METHODS: Twenty-two subjects aged 18 to 60 years, diagnosed with sensorineural or mixed UHL of mild to severe degree, were evaluated. The study was divided into two stages: pre- and post-fitting of the HA. In both stages, subjects underwent an interview, the Questionnaire for Disabilities Associated with Impaired Auditory Localization, an auditory processing screening protocol (APSP), and the Random Gap Detection Test (RGDT). RESULTS: The study found no statistically significant differences in the sound localization and verbal sequential memory evaluations, in the RGDT, or in the Questionnaire for Disabilities Associated with Impaired Auditory Localization. CONCLUSION: With effective use of the hearing aid, individuals with UHL showed improvement in the auditory abilities of sound localization, temporal ordering, and temporal resolution.


Subjects
Hearing Aids , Hearing Loss, Unilateral/rehabilitation , Sound Localization , Adolescent , Adult , Female , Hearing Loss, Unilateral/diagnosis , Hearing Tests , Humans , Male , Middle Aged , Prospective Studies , Speech Perception , Surveys and Questionnaires , Young Adult
5.
HNO ; 68(1): 14-24, 2020 Jan.
Article in German | MEDLINE | ID: mdl-31598771

ABSTRACT

BACKGROUND: Since 2017, the Freiburg monosyllabic speech test can be used for hearing aid evaluation in background noise in Germany. The results are used to compare the aided with the unaided condition. However, there is currently no reference speech recognition curve for comparison with normal-hearing listeners. OBJECTIVE: The goal was to establish a reference speech recognition curve for normal-hearing listeners and to analyze the perceptual equivalence of the test lists in continuous CCITT noise (according to the Comité Consultatif International Téléphonique et Télégraphique). MATERIALS AND METHODS: The measurements were conducted at two different sites with 90 participants in total. Monosyllables and CCITT noise were presented at different signal-to-noise ratios by one loudspeaker from the front (S0N0). Individual and test-list-specific discrimination functions were fitted to differentiate between the sites and among the test lists. RESULTS: The reference speech recognition curve and its region of tolerance were established. Three perceptually deviating test lists (1, 3, 20) were identified. CONCLUSION: The reference speech recognition curve enables quantification of hearing difficulties with Freiburg monosyllables in noise and estimation of the benefit of rehabilitation with hearing aids. This reference curve is only valid for frontal stimulus presentation (S0N0) and continuous CCITT noise. The perceptually deviating test lists differed from those identified in quiet, but partly correspond to data in the literature.


Subjects
Hearing Aids , Hearing Loss , Speech Perception , Speech , Germany , Humans , Noise , Speech Discrimination Tests
6.
HNO ; 68(1): 40-47, 2020 Jan.
Article in German | MEDLINE | ID: mdl-31728573

ABSTRACT

BACKGROUND: Improving speech perception in quiet is an important goal of hearing aid provision. In practice, results are highly variable. The aim of this study was to investigate the relationship between the type and extent of hearing loss (audiogram type), the maximum word recognition score, and aided speech perception. MATERIALS AND METHODS: Pure-tone and speech audiometric data from 740 ears in 370 patients were reviewed. All subjects visited our hearing center for hearing aid evaluation between 2012 and 2017. The maximum word recognition score (WRSmax) and the aided monosyllabic speech recognition score, WRS65(HA), were analyzed for 10 different standard audiogram types. RESULTS: The WRS65(HA) for the different degrees of hearing loss is, within error margins, comparable to previous investigations and lies 10-20 percentage points below WRSmax. This difference tends to be larger for flat and moderately sloping audiograms than for steeply sloping audiograms. The ratio WRS65(HA)/WRSmax can be interpreted as an efficiency measure of hearing aid provision, since it relates speech recognition with hearing aids to the maximally achievable information-carrying capacity of the impaired ear. CONCLUSION: Expectations regarding hearing aid provision have to be adjusted according to the maximum word recognition score, the derived quality measures, the degree of hearing loss, and the audiogram type.
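
As a worked illustration of the WRS65(HA)/WRSmax ratio proposed above, with invented scores:

```python
# Worked example (invented scores) of the WRS65(HA)/WRSmax "efficiency" ratio
# described in the abstract above.
wrs_max = 80.0    # maximum unaided word recognition score, % (hypothetical)
wrs65_ha = 65.0   # aided monosyllable score at 65 dB SPL, % (hypothetical)

efficiency = wrs65_ha / wrs_max
print(f"Hearing aid efficiency: {efficiency:.2f}")  # 0.81 -> the aid recovers ~81% of the
                                                    # maximally achievable recognition
```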


Subjects
Hearing Aids , Hearing Loss, Sensorineural , Hearing Loss , Speech Perception , Audiometry, Pure-Tone , Speech Audiometry , Humans , Speech
7.
Dev Sci ; 23(1): e12857, 2020 01.
Article in English | MEDLINE | ID: mdl-31090993

ABSTRACT

There is an ongoing debate over whether phonological deficits in dyslexia should be attributed to (a) less specified representations of speech sounds, as suggested by studies in young children with a familial risk of dyslexia, or (b) impaired access to these phonemic representations, as suggested by studies in adults with dyslexia. These conflicting findings are rooted in between-study differences in sample characteristics and/or testing techniques. The current study uses the same multivariate functional MRI (fMRI) approach previously used in adults with dyslexia to investigate phonemic representations in 30 beginning readers with a familial risk of dyslexia and 24 beginning readers without such a risk, of whom 20 were later retrospectively classified as dyslexic. Based on fMRI response patterns evoked by listening to different utterances of /bA/ and /dA/ sounds, multivoxel analyses indicated that the underlying activation patterns of the two phonemes were distinct in children with a low family risk but not in children with a high family risk. However, no group differences were observed between children who were later classified as typical versus dyslexic readers, regardless of their family risk status, indicating that poor phonemic representations constitute a risk for dyslexia but are not sufficient to result in reading problems. We hypothesize that poor phonemic representations are trait (family risk) rather than state (dyslexia) dependent, and that representational deficits lead to reading difficulties only when they occur in conjunction with other neuroanatomical or neurofunctional deficits.
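
The multivariate fMRI approach is described only at a high level above. Below is a generic, hedged sketch of what a multivoxel pattern analysis of /bA/ versus /dA/ response patterns can look like (cross-validated linear classifier on synthetic data); it is not the authors' actual pipeline.

```python
# Generic multivoxel pattern analysis (MVPA) sketch: can a linear classifier separate
# fMRI response patterns evoked by /bA/ vs. /dA/? Synthetic data for illustration only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200
patterns_ba = rng.normal(0.0, 1.0, (n_trials, n_voxels)) + 0.3  # simulated /bA/ trials
patterns_da = rng.normal(0.0, 1.0, (n_trials, n_voxels)) - 0.3  # simulated /dA/ trials

X = np.vstack([patterns_ba, patterns_da])
y = np.array([0] * n_trials + [1] * n_trials)

# Above-chance cross-validated accuracy would indicate distinct phoneme representations.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f}")
```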


Subjects
Brain/physiopathology , Dyslexia/physiopathology , Phonetics , Reading , Phonological Disorder/physiopathology , Adult , Auditory Perception/physiology , Child , Child, Preschool , Female , Humans , Longitudinal Studies , Magnetic Resonance Imaging , Male , Speech Perception/physiology
8.
Codas ; 32(2): e20180242, 2020.
Article in Portuguese, English | MEDLINE | ID: mdl-31855224

ABSTRACT

PURPOSE: To identify the parameters that influence singing teachers' decisions to seek speech-language pathology (SLP) assistance for their students. METHODS: The study sample comprised 48 popular-music singing teachers, male and female, aged 37.96 years on average. The participants responded to a questionnaire with 10 closed questions prepared by the researchers on the SurveyMonkey platform. The questions addressed the reasons why singing teachers seek SLP assistance, as well as these teachers' knowledge of chronic hoarseness as a warning symptom of other laryngeal lesions. RESULTS: Singing teachers seek SLP assistance for their students in the presence of hoarseness complaints and impaired speech sound articulation. The teachers assessed did not consider vocal fatigue complaints a determining factor for referral to SLP evaluation. Most participants were not aware that a hoarseness complaint lasting more than 15 days can be indicative of a laryngeal tumor. The variables age and length of professional experience did not influence referral for SLP assistance. CONCLUSION: Most of the singing teachers in this study sought SLP assistance for their students when the students presented hoarseness complaints and impaired speech sound articulation.


Subjects
Singing , Voice Disorders/diagnosis , Voice Quality , Adult , Female , Health Knowledge, Attitudes, Practice , Humans , Male , Speech Perception , Speech-Language Pathology , Students , Surveys and Questionnaires , Voice Disorders/therapy
9.
Biol Lett ; 15(12): 20190555, 2019 12 24.
Article in English | MEDLINE | ID: mdl-31795850

ABSTRACT

Domesticated animals have been shown to recognize basic phonemic information from human speech sounds and to recognize familiar speakers from their voices. However, whether animals can spontaneously identify words across unfamiliar speakers (speaker normalization) or spontaneously discriminate between unfamiliar speakers across words remains to be investigated. Here, we assessed these abilities in domestic dogs using the habituation-dishabituation paradigm. We found that while dogs habituated to the presentation of a series of different short words from the same unfamiliar speaker, they significantly dishabituated to the presentation of a novel word from a new speaker of the same gender. This suggests that dogs spontaneously categorized the initial speaker across different words. Conversely, dogs who habituated to the same short word produced by different speakers of the same gender significantly dishabituated to a novel word, suggesting that they had spontaneously categorized the word across different speakers. Our results indicate that the ability to spontaneously recognize both the same phonemes across different speakers, and cues to identity across speech utterances from unfamiliar speakers, is present in domestic dogs and thus not a uniquely human trait.


Subjects
Speech Perception , Voice , Animals , Cues , Dogs , Humans , Phonetics , Speech
10.
PLoS One ; 14(12): e0226288, 2019.
Article in English | MEDLINE | ID: mdl-31881550

ABSTRACT

Temporal-envelope cues are essential for successful speech perception. We asked whether training on stimuli containing temporal-envelope cues without speech content can improve the perception of spectrally degraded (vocoded) speech, in which the temporal envelope (but not the temporal fine structure) is largely preserved. Two groups of listeners were trained on different amplitude-modulation (AM) based tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials; frequency range: 4 Hz, 8 Hz, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not differ significantly from that observed for controls. Thus, we did not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
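
For readers unfamiliar with vocoding, the sketch below shows the basic idea of a noise vocoder: split the signal into bands, keep each band's temporal envelope, and re-impose it on band-limited noise, discarding the temporal fine structure. It assumes a mono `speech` array and sampling rate `fs`; the band edges and filter orders are illustrative, not those used in the study.

```python
# Hedged sketch of a noise vocoder (envelope preserved, fine structure discarded).
# `speech` (mono float array) and `fs` (Hz) are assumed to exist; parameters are illustrative.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, band_edges=(100, 400, 1000, 2400, 6000), env_cutoff=16.0):
    rng = np.random.default_rng(0)
    out = np.zeros_like(speech, dtype=float)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)
        envelope = sosfiltfilt(env_sos, np.abs(band))      # low-pass-filtered rectified band
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(speech)))
        out += np.clip(envelope, 0.0, None) * carrier      # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)             # simple peak normalization
```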


Subjects
Auditory Perception/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Adult , Cues , Female , Humans , Male , Time Perception , Young Adult
11.
Int. arch. otorhinolaryngol. (Impr.) ; 23(4): 433-439, Oct.-Dec. 2019. illus
Article in English | LILACS | ID: biblio-1024413

ABSTRACT

Introduction: Studies have reported that although speech perception in noise was unchanged with and without digital noise reduction (DNR), annoyance toward noise, measured by the acceptable noise level (ANL), was significantly improved by DNR, in the range of 2.5 to 4.5 dB. It is unclear whether a similar improvement would be observed in individuals with an ANL ≥ 14 dB (predictive of poor hearing aid use), who often reject their aids because of annoyance toward noise. Objectives: (a) To determine the effect of activating DNR on the improvement in aided ANL in low- and high-ANL groups; and (b) to predict the change in ANL when DNR is activated. Method: Ten participants with bilateral, mild-to-severe sloping sensorineural hearing loss (SNHL) were enrolled in each of the low- and high-ANL groups. The participants were bilaterally fitted with receiver-in-canal (RIC) hearing aids (Oticon, Smorum, Egedal, Denmark) with a DNR processor. Both SNR-50 (the signal-to-noise ratio, in dB, required to achieve 50% speech recognition) and ANL were assessed in the DNR-on and DNR-off listening conditions. Results: Digital noise reduction had no effect on SNR-50 in either group. The annoyance level was significantly lower in the DNR-on than in the DNR-off condition in the low-ANL group. In the high-ANL group, a strong negative correlation was observed between the ANL with DNR off and the change in ANL after DNR was employed in the hearing aid (benefit). The benefit of DNR on annoyance can be effectively predicted from the baseline aided ANL by linear regression. Conclusion: Digital noise reduction reduced the annoyance level in the high-ANL group, and the amount of improvement was related to the baseline aided ANL value.
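
The regression mentioned above can be sketched in a few lines. The data points below are invented purely to illustrate a negative relationship between baseline aided ANL and the change in ANL; they are not the study's data, and the fitted coefficients are not the authors' values.

```python
# Hedged sketch: predicting the change in acceptable noise level from the baseline
# aided ANL (DNR off) with simple linear regression. Invented data.
import numpy as np
from scipy import stats

anl_dnr_off = np.array([15, 16, 18, 20, 21, 23, 24, 26], dtype=float)  # baseline aided ANL, dB
anl_change = np.array([-1, -2, -2, -4, -4, -6, -6, -8], dtype=float)   # ANL(on) - ANL(off), dB
                                                                       # (more negative = larger benefit)
fit = stats.linregress(anl_dnr_off, anl_change)
print(f"change ≈ {fit.slope:.2f} * ANL_off + {fit.intercept:.2f}  (r = {fit.rvalue:.2f})")
```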


Assuntos
Pessoa de Meia-Idade , Idoso , Limiar Auditivo/fisiologia , Percepção da Fala/fisiologia , Efeitos do Ruído , Auxiliares de Audição , Método Simples-Cego , Perda Auditiva Neurossensorial/fisiopatologia
12.
Sheng Li Xue Bao ; 71(6): 935-945, 2019 Dec 25.
Article in Chinese | MEDLINE | ID: mdl-31879748

ABSTRACT

Speech comprehension is a central cognitive function of the human brain. A fundamental question in cognitive neuroscience is how neural activity encodes the acoustic properties of a continuous speech stream while resolving multiple levels of linguistic structure at the same time. This paper reviews recently developed research paradigms that employ electroencephalography (EEG) or magnetoencephalography (MEG) to capture neural tracking of acoustic features or linguistic structures of continuous speech. The review focuses on two questions in speech processing: (1) the encoding of continuously changing acoustic properties of speech; and (2) the representation of hierarchical linguistic units, including syllables, words, phrases, and sentences. Studies have found that low-frequency cortical activity tracks the speech envelope. In addition, cortical activity on different time scales tracks multiple levels of linguistic units, constituting a representation of hierarchically organized linguistic structure. These studies provide new insights into how the human brain processes continuous speech.
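
A minimal sketch of the simplest form of "neural tracking of the speech envelope" analysis is shown below: extract the broadband envelope of the stimulus and cross-correlate it with low-frequency EEG at a range of lags. It assumes aligned, single-channel `speech` and `eeg` signals at a common sampling rate `fs`; real studies use more elaborate methods (e.g., temporal response functions), so treat this only as an illustration of the idea.

```python
# Hedged sketch of envelope tracking: lagged correlation between the low-passed
# speech envelope and low-frequency EEG. `speech`, `eeg`, `fs` are assumed inputs.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def envelope_tracking(speech, eeg, fs, max_lag_s=0.5):
    lowpass = butter(2, 8.0, btype="low", fs=fs, output="sos")  # keep the 0-8 Hz range
    envelope = sosfiltfilt(lowpass, np.abs(hilbert(speech)))
    eeg_low = sosfiltfilt(lowpass, eeg)

    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    # np.roll wraps around at the edges; acceptable for a sketch, a real analysis would truncate.
    corr = [np.corrcoef(np.roll(envelope, lag), eeg_low)[0, 1] for lag in lags]
    best = int(np.argmax(corr))
    return lags[best] / fs, corr[best]  # lag (s) and correlation at the peak
```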


Subjects
Electroencephalography , Magnetoencephalography , Speech , Acoustic Stimulation , Humans , Speech/physiology , Speech Perception
13.
Adv Exp Med Biol ; 1101: 167-206, 2019.
Article in English | MEDLINE | ID: mdl-31729676

ABSTRACT

The theory and implementation of modern cochlear implants are presented in this chapter. Major signal processing strategies of cochlear implants are discussed in detail. Hardware implementation, including the wireless signal-transmission circuit, the integrated-circuit design of the implanted electronics, and the neural-response measurement circuit, is covered in the latter part of the chapter. Finally, new technologies that are likely to improve the performance of current cochlear implants are introduced.
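
As a schematic illustration of one step shared by CIS-style signal processing strategies, the sketch below compresses a channel's acoustic envelope into an electric stimulation level between a patient's threshold (T) and comfort (C) levels. All numbers and the compression law are illustrative assumptions, not taken from the chapter.

```python
# Schematic sketch of envelope-to-current mapping in a CIS-style strategy:
# logarithmic compression of the acoustic envelope onto the T-C electric range.
# All values are illustrative.
import numpy as np

def envelope_to_current(envelope, t_level, c_level, in_floor=1e-4, in_ceil=1.0):
    """Map acoustic envelope values (linear amplitude) to clinical current units."""
    env = np.clip(envelope, in_floor, in_ceil)
    # Logarithmic compression of the usable acoustic range onto [0, 1].
    compressed = (np.log(env) - np.log(in_floor)) / (np.log(in_ceil) - np.log(in_floor))
    return t_level + compressed * (c_level - t_level)

# Example: one channel with hypothetical T = 100 and C = 200 clinical units.
print(envelope_to_current(np.array([0.001, 0.01, 0.1, 1.0]), t_level=100, c_level=200))
# -> [125. 150. 175. 200.]
```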


Subjects
Cochlear Implants , Signal Processing, Computer-Assisted , Speech Perception , Cochlear Implantation , Cochlear Implants/trends , Humans , Speech Perception/physiology
14.
Brain Lang ; 199: 104695, 2019 12.
Article in English | MEDLINE | ID: mdl-31610478

ABSTRACT

Newborns habituate to repeated auditory stimuli, and discriminate syllables, generating opportunities for early language learning. This study investigated trial-by-trial changes in newborn electrophysiological responses to auditory speech syllables as an index of habituation and novelty detection. Auditory event-related potentials (ERPs) were recorded from 16 term newborn infants, aged 1-3 days, in response to monosyllabic speech syllables presented during habituation and novelty detection tasks. Multilevel models demonstrated that newborns habituated to repeated auditory syllables, as ERP amplitude attenuated for a late-latency component over successive trials. Subsequently, during the novelty detection task, early- and late-latency component amplitudes decreased over successive trials for novel syllables only, indicating encoding of the novel speech syllable. We conclude that newborns dynamically encoded novel syllables over relatively short time periods, as indicated by a systematic change in response patterns with increased exposure. These results have important implications for understanding early precursors of learning and memory in newborns.
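
The multilevel models mentioned above can be illustrated with a small mixed-effects sketch: a fixed effect of trial number on ERP amplitude with a random intercept per infant, fitted on simulated data. This is a generic illustration, not the authors' model specification.

```python
# Hedged sketch of a multilevel model of trial-by-trial ERP amplitude:
# random intercept per infant, fixed effect of trial. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_infants, n_trials = 16, 30
data = pd.DataFrame({
    "infant": np.repeat(np.arange(n_infants), n_trials),
    "trial": np.tile(np.arange(1, n_trials + 1), n_infants),
})
# Simulated habituation: amplitude attenuates by ~0.05 units per trial, plus noise
# and an infant-specific offset.
data["amplitude"] = (5.0 - 0.05 * data["trial"]
                     + rng.normal(0, 1.0, len(data))
                     + rng.normal(0, 0.5, n_infants)[data["infant"]])

model = smf.mixedlm("amplitude ~ trial", data, groups=data["infant"]).fit()
print(model.params["trial"])  # a negative slope indicates habituation
```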


Subjects
Evoked Potentials, Auditory , Habituation, Psychophysiologic , Language Development , Speech Perception , Female , Humans , Infant, Newborn , Male , Memory , Phonetics
15.
Brain Lang ; 199: 104694, 2019 12.
Article in English | MEDLINE | ID: mdl-31586790

ABSTRACT

The aim of the present study was to uncover a possible common neural organizing principle in spoken and written communication, through the coupling of perceptual and motor representations. In order to identify possible shared neural substrates for processing the basic units of spoken and written language, a sparse sampling fMRI acquisition protocol was performed on the same subjects in two experimental sessions with similar sets of letters being read and written and of phonemes being heard and orally produced. We found evidence of common premotor regions activated in spoken and written language, both in perception and in production. The location of those brain regions was confined to the left lateral and medial frontal cortices, at locations corresponding to the premotor cortex, inferior frontal cortex and supplementary motor area. Interestingly, the speaking and writing tasks also appeared to be controlled by largely overlapping networks, possibly indicating some domain general cognitive processing. Finally, the spatial distribution of individual activation peaks further showed more dorsal and more left-lateralized premotor activations in written than in spoken language.


Subjects
Motor Cortex/physiology , Reading , Speech Perception , Speech , Writing , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male
16.
Brain Lang ; 199: 104700, 2019 12.
Article in English | MEDLINE | ID: mdl-31586791

ABSTRACT

Recent neurophysiological studies have proposed distinct roles for β and γ oscillations in implementing top-down and bottom-up processes. The present study aims to test this hypothesis in the domain of speech perception. We examined β and γ oscillations elicited by a tone contrast in a passive oddball paradigm, and their relationships with discrimination sensitivity (d') and reaction time (RT) in two groups of healthy adults who showed high and low discrimination sensitivity to the contrast. The low-sensitivity group showed a significant reduction in β, which was further related to d'. Individual differences in RT were related to different frequency bands in the two groups, with an RT-β correlation in the low-sensitivity group and an RT-γ correlation in the high-sensitivity group. Based on these findings, we suggest that β, implicated in top-down processing, reflects individual differences in phonological representations, and that γ, involved in bottom-up processing, reflects individual differences in acoustic encoding.
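
The behavioral measure d' referenced above is conventionally computed as the difference of z-transformed hit and false-alarm rates; a tiny worked example with invented counts:

```python
# Worked example of discrimination sensitivity d' = z(hit rate) - z(false-alarm rate).
# Counts are invented for illustration.
from scipy.stats import norm

hits, misses = 42, 8               # responses to the deviant tone (hypothetical)
false_alarms, correct_rej = 6, 44  # responses to the standard tone (hypothetical)

hit_rate = hits / (hits + misses)                       # 0.84
fa_rate = false_alarms / (false_alarms + correct_rej)   # 0.12
d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
print(f"d' = {d_prime:.2f}")  # ≈ 2.17
```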


Subjects
Beta Rhythm , Gamma Rhythm , Individuality , Speech Perception/physiology , Adult , Female , Humans , Male , Phonetics
17.
Brain Lang ; 199: 104698, 2019 12.
Article in English | MEDLINE | ID: mdl-31586792

ABSTRACT

This study examines cross-modality effects of a semantically biased written sentence context on the perception of an acoustically ambiguous word target, identifying neural areas sensitive to interactions between sentential bias and phonetic ambiguity. Of interest is whether the locus or nature of the interactions resembles those previously demonstrated for auditory-only effects. fMRI results show significant interaction effects in the right mid-middle temporal gyrus (RmMTG) and bilateral anterior superior temporal gyri (aSTG), regions along the ventral language-comprehension stream that map sound onto meaning. These regions are more anterior than those previously identified for auditory-only effects; however, the same cross-over interaction pattern emerged, implying similar underlying computations. The findings suggest that the mechanisms integrating information across modality and across the sentence and phonetic levels of processing recruit amodal areas where reading and spoken lexical and semantic access converge. Taken together, the results support interactive accounts of speech and language processing.


Subjects
Phonetics , Reading , Semantics , Speech Acoustics , Temporal Lobe/physiology , Adult , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Speech Perception
18.
Codas ; 31(5): e20180217, 2019.
Article in Portuguese, English | MEDLINE | ID: mdl-31644717

ABSTRACT

PURPOSE: To compare the ability to recognize sentences in quiet and in noise among normal-hearing monolingual speakers of Brazilian Portuguese, bilingual speakers of Brazilian Portuguese and German, and bilingual speakers of Brazilian Portuguese and Italian, and to analyze the influence of the age of second-language acquisition on the performance of the bilinguals. METHODS: Eighty-seven normal-hearing individuals aged between 18 and 55 years participated in this research. They were divided into a Control Group, composed of 30 monolingual speakers of Brazilian Portuguese; a German Research Group of 31 simultaneous bilinguals, native speakers of Portuguese who speak German as a second language; and an Italian Research Group of 26 successive bilinguals, native speakers of Portuguese who speak Italian as a second language. The Sentence List Test in Brazilian Portuguese was used to measure Sentence Recognition Thresholds in quiet and in noise. RESULTS: In quiet, there were no statistically significant differences between the bilingual and monolingual individuals, or among the bilingual speakers themselves. In noise, however, there was a significant difference between the bilingual groups and the monolingual group, but no significant difference between the bilingual groups. CONCLUSION: Bilingualism positively influenced the development of language and listening skills, leading the bilinguals to outperform the monolinguals in speech recognition in noise. The age of second-language acquisition did not influence bilingual performance.


Subjects
Multilingualism , Noise , Speech Perception/physiology , Adolescent , Adult , Case-Control Studies , Cross-Sectional Studies , Female , Humans , Male , Middle Aged , Speech Reception Threshold Test , Young Adult
19.
Article in Chinese | MEDLINE | ID: mdl-31623034

ABSTRACT

Objective: To explore early vowel-perception development in children who received cochlear implants (CI) between 1 and 3 years of age. Method: A total of 123 children who had received cochlear implants before the age of 3 years were analyzed retrospectively. According to age at implantation, participants were divided into two groups: a 1-year-old group (1 to <2 years) and a 2-year-old group (2 to <3 years). The vowel-perception scores of the Mandarin Early Speech Perception (MESP) test at 12, 24, and 36 months after implantation, as well as the trends in vowel perception between group 1 and normal-hearing children of the same age, were analyzed to examine the development of vowel perception in pediatric cochlear implant recipients and the effects of age at implantation and chronological age. Result: Scores improved notably in both groups with increasing chronological age (P<0.01). Vowel perception in group 1 was significantly better than in group 2 (P<0.01); however, a large gap remained between group 1 and normal-hearing children of the same age. Conclusion: In children implanted before the age of 3, vowel perception improves with increasing chronological age during the first 3 years after implantation; the earlier the implantation, the better the vowel perception.


Subjects
Cochlear Implants , Pediatrics , Speech Perception , Child, Preschool , Cochlear Implantation , Deafness , Humans , Infant , Retrospective Studies
20.
Article in Chinese | MEDLINE | ID: mdl-31623041

ABSTRACT

Objective: To observe the effect of a wireless audio microphone on hearing aid performance in noise at different listening distances. Method: Twenty-three subjects with bilateral sensorineural hearing loss (17 males and 6 females) were fitted with binaural hearing aids. They completed sentence recognition tests in noise at two listening distances (1.5 and 3 meters) under three conditions: (1) with hearing aids alone; (2) with the wireless audio microphone alone; and (3) with the hearing aid microphones and the mini audio microphone simultaneously. Result: With hearing aids alone, the sentence recognition threshold at the 3-meter listening distance was significantly higher than at 1.5 meters (P<0.05). There was no significant difference in the sentence recognition threshold between the two listening distances when the wireless audio microphone was switched on (P>0.05). Conclusion: The mini audio microphone can significantly improve hearing aid performance for long-distance listening in noise.


Subjects
Hearing Aids , Hearing Loss, Sensorineural , Speech Perception , Female , Hearing Loss, Bilateral , Humans , Male , Noise