Results 1 - 20 of 24,563
2.
Codas; 32(6): e20180295, 2020.
Article in Portuguese, English | MEDLINE | ID: mdl-33331539

ABSTRACT

Waardenburg syndrome (WS) is a rare autosomal-dominant syndrome that can present with sensorineural hearing loss. In this report, we describe the outcomes of three children with WS at zero, three, nine, twelve, and sixty months after cochlear implant (CI) fitting. Outcomes were assessed using the IT-MAIS (Infant-Toddler Meaningful Auditory Integration Scale; younger than 5 years), the MAIS (Meaningful Auditory Integration Scale; older than 5 years), the MUSS (Meaningful Use of Speech Scale), and the categories of auditory performance and speech intelligibility. The results showed an improvement in auditory and language performance over time: the two patients who had used the CI for 5 years achieved 100% on the IT-MAIS and MUSS tests, were able to understand sentences in an open set, achieved fluent speech, and reached fluency on the auditory and language performance scales. The third patient, with 50 months of follow-up, was in category 5 of auditory performance and category 3 of speech intelligibility at the 48-month evaluation. We conclude that all three children, who had low levels of hearing and language before cochlear implantation, improved their hearing and language skills after implantation and rehabilitation.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Waardenburg Syndrome, Preschool Child, Hearing, Humans, Infant, Treatment Outcome
3.
PLoS One; 15(11): e0241247, 2020.
Article in English | MEDLINE | ID: mdl-33137128

ABSTRACT

PURPOSE: To evaluate the long-term benefits in hearing-related quality of life, patient satisfaction, and wearing time in patients rehabilitated with an active transcutaneous bone-conduction device. Adverse events and audiological outcomes are reported as secondary outcomes. METHODS: This retrospective, monocentric cohort analysis includes 16 adults with conductive or mixed hearing loss and a mean device experience of 51.25 months. Patient-reported outcome measures were assessed using the short version of the Speech, Spatial and Qualities of Hearing Scale (SSQ12-B) and the German version of the Audio Processor Satisfaction Questionnaire (APSQ). Audiological outcomes and the incidence of adverse events were obtained from patients' charts. RESULTS: Hearing-related quality of life improved significantly on all subscales of the SSQ12-B, with a mean overall score of 2.95 points. Patient satisfaction measured with the APSQ averaged 8.8 points. Wearing times differed considerably, and patients with lower levels of education seemed to use their device longer than patients with an academic education. Eight minor adverse events were documented, all of which resolved during follow-up. The mean gain in word recognition score at the last follow-up, measured at 65 dB, was 75.9%, while the speech reception threshold was lowered by 35.1 dB. CONCLUSION: Even after several years, patients report significant benefits in hearing-related quality of life and device satisfaction. In combination with a low rate of minor adverse events and significantly improved audiological outcomes, the device can be considered a comfortable and effective option in hearing rehabilitation.


Subjects
Hearing Aids, Conductive Hearing Loss/rehabilitation, Mixed Conductive-Sensorineural Hearing Loss/rehabilitation, Speech Perception/physiology, Adolescent, Adult, Aged, Aged 80 and Over, Auditory Threshold, Bone Conduction/physiology, Child, Preschool Child, Female, Conductive Hearing Loss/physiopathology, Mixed Conductive-Sensorineural Hearing Loss/epidemiology, Mixed Conductive-Sensorineural Hearing Loss/physiopathology, Humans, Infant, Male, Middle Aged, Patient Satisfaction, Quality of Life, Surveys and Questionnaires, Young Adult
4.
PLoS One; 15(11): e0240534, 2020.
Article in English | MEDLINE | ID: mdl-33147602

ABSTRACT

We examined the relationship between cognitive-linguistic mechanisms and auditory closure ability in children. Sixty-seven school-age children recognized isolated words and keywords in sentences that were interrupted at rates of 2.5 Hz and 5 Hz. In essence, children were given only 50% of the speech information and asked to repeat the complete word or sentence. Children's working memory capacity (WMC), attention, lexical knowledge, and long-term memory (LTM) retrieval abilities were also measured to model their role in auditory closure ability. Overall, recognition of monosyllabic words and lexically easy multisyllabic words was significantly better at the 2.5 Hz interruption rate than at 5 Hz. Recognition of lexically hard multisyllabic words and keywords in sentences was better at 5 Hz than at 2.5 Hz. Based on the best-fitting generalized (logistic) linear mixed-effects models, there was a significant interaction between WMC and the lexical difficulty of words: WMC was positively related only to recognition of lexically easy words. Lexical knowledge was found to be crucial for recognition of words and sentences, regardless of interruption rate. In addition, LTM retrieval ability was significantly associated with sentence recognition. These results suggest that lexical knowledge and the ability to retrieve information from LTM are crucial for children's speech recognition in adverse listening situations. The study findings make a compelling case for the assessment of, and intervention targeting, lexical knowledge and retrieval abilities in children with listening difficulties.
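For readers unfamiliar with this class of model, the sketch below shows what a logistic mixed-effects analysis of a WMC-by-lexical-difficulty interaction can look like in Python. The column names, the simulated data, and the choice of statsmodels' variational-Bayes estimator are illustrative assumptions; the abstract does not state the authors' software.

```python
# Hedged sketch of a logistic mixed-effects analysis with a WMC x lexical-
# difficulty interaction, in the spirit of the model described above. Column
# names, the simulated data, and the variational-Bayes estimator are
# illustrative assumptions, not the authors' actual pipeline.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n_subj, n_items = 67, 40
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subj), n_items),
    "wmc": np.repeat(rng.normal(size=n_subj), n_items),         # z-scored working memory capacity
    "difficulty": np.tile(rng.integers(0, 2, n_items), n_subj), # 0 = lexically easy, 1 = hard
})
# Simulate the reported pattern: WMC helps only for lexically easy words.
logit = 0.5 + 0.8 * df["wmc"] * (1 - df["difficulty"]) - 0.6 * df["difficulty"]
df["correct"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

model = BinomialBayesMixedGLM.from_formula(
    "correct ~ wmc * difficulty",    # fixed effects plus interaction
    {"subject": "0 + C(subject)"},   # random intercept per child
    df,
)
print(model.fit_vb().summary())
```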


Subjects
Auditory Perception/physiology, Language Development, Short-Term Memory/physiology, Speech Perception/physiology, Child, Cognition/physiology, Female, Hearing/physiology, Humans, Language, Language Disorders/physiopathology, Male, Psychomotor Performance/physiology, Speech/physiology, Speech Disorders/physiopathology, Vocabulary
5.
PLoS One; 15(11): e0242018, 2020.
Article in English | MEDLINE | ID: mdl-33166341

ABSTRACT

Children acquire vowels earlier than consonants, and the former are less vulnerable to speech disorders than the latter. This study explores the hypothesis that a similar contrast exists later in life, i.e., that consonants are more vulnerable to ageing than vowels. Data were obtained in two experiments comparing the speech of younger adults (YAs) and middle-aged adults (MAs). In the first experiment, an automatic speech recognition (ASR) system was trained on a balanced corpus of 29 YAs and 27 MAs. The productions of each speaker were obtained in a Spanish-language word (W) and non-word (NW) repetition task. The performance of the system was evaluated on the same corpus used for training, using a cross-validation approach. The ASR system recognized the Ws of both speaker groups to a similar extent, but it was more successful with the NWs of the YAs than with those of the MAs. Detailed error analysis revealed that the MA speakers scored below the YA speakers for consonants and for the place and manner of articulation features; the results were almost identical in the two groups for vowels and for the voicing feature. In the second experiment, a group of healthy native listeners was asked to recognize isolated syllables presented in background noise. The target speakers were one YA and one MA who had taken part in the first experiment. The results were consistent with those of the ASR experiment: manner and place of articulation were recognized better, and vowels and voicing worse, for the YA speaker than for the MA speaker. We conclude that consonant articulation is more vulnerable to ageing than vowel articulation. Future studies should explore whether these early and selective changes in articulation accuracy might be caused by changes in speech perception skills (e.g., in auditory temporal processing).


Subjects
Aging, Speech, Adult, Auditory Perception, Female, Humans, Male, Middle Aged, Phonetics, Spain, Speech Perception, Voice
6.
PLoS One; 15(11): e0242511, 2020.
Article in English | MEDLINE | ID: mdl-33237919

ABSTRACT

The present study explored whether a tool for automatic detection and recognition of interactions and child-directed speech (CDS) in preschool classrooms could be developed, validated, and applied to non-coded video recordings representing children's classroom experiences. Using first-person video recordings collected by 13 preschool children during a morning in their classrooms, we extracted high-level audiovisual features from the recordings using automatic speech recognition and computer vision services from a cloud computing provider. Using manual coding of interactions and transcriptions of CDS as the reference, we trained and tested supervised classifiers and linear mappings to measure five variables of interest. We show that supervised classifiers trained on speech activity, proximity, and high-level facial features achieve adequate accuracy in detecting interactions. Furthermore, in combination with an automatic speech recognition service, the supervised classifier achieved error rates for CDS measures that are in line with other open-source automatic decoding tools in early childhood settings. Finally, we demonstrate the tool's applicability by using it to automatically code and transcribe children's interactions and CDS exposure vertically within a classroom day (morning to afternoon) and horizontally over time (fall to winter). Developing and scaling tools for automated capture of children's interactions with others in the preschool classroom, as well as their exposure to CDS, may revolutionize scientific efforts to identify the precise mechanisms that foster young children's language development.
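As a rough illustration of the supervised-classification step described above, the following hedged Python sketch trains a generic classifier on window-level feature vectors against manual interaction codes. The feature layout, the random-forest choice, and the synthetic data are all assumptions; the abstract does not name the classifier family.

```python
# Hedged sketch of the supervised interaction-detection step: a generic
# classifier over window-level audiovisual features, evaluated against manual
# codes. Feature layout and classifier choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 12))   # e.g., speech activity, proximity, facial features per window
y = rng.integers(0, 2, 5000)      # manual codes: 1 = interaction, 0 = no interaction

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean())
```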


Subjects
/methods, Preschool Child/education, Speech Recognition Software, Speech, Teaching, Adult, Cloud Computing, Facial Expression, Female, Humans, Interpersonal Relations, Language Development, Machine Learning, Peer Group, Phonetics, Speech Perception, Video Recording
7.
Article in Chinese | MEDLINE | ID: mdl-33254311

ABSTRACT

Objective: To describe the effects of possible related factors in unilateral cochlear implantation (CI) on tinnitus, and to analyze hearing and speech ability across the different patterns of tinnitus prognosis. Method: Seventy post-lingually deafened CI patients (27.73±14.032 years old), enrolled in the clinical trial of the LCI-20PI cochlear implant and LSP-20A sound processor, answered a brief questionnaire on the presence or absence of tinnitus before CI and at 3, 6, 9, and 12 months after the first mapping. Six patterns of tinnitus development were recorded: Type A (- to -), no tinnitus before CI and none through the last follow-up; Type B (+ to -), tinnitus before CI that disappeared after mapping; Type C (- to +), no tinnitus before CI but tinnitus appearing after surgery; Type D (+ to - to +), tinnitus before CI that disappeared during follow-up but reappeared by the last visit; Type E (- to + to -), no tinnitus before CI, tinnitus after surgery, and resolution by the last visit; Type F (+ to +), tinnitus throughout. We then briefly analyzed factors such as age, gender, duration of hearing loss, and duration of hearing aid use, and compared the characteristics of the tinnitus prognosis patterns. Result: In this study, the effective rate of CI treatment on tinnitus was 80%. The mean age was (20.79±11.364) years in Type A, (32.69±10.606) years in Type B, (40.25±2.217) years in Type C, (28.00±0) years in Type D, (52.50±6.364) years in Type E, and (30.33±11.015) years in Type F; the difference between groups was significant (P<0.05), while the intergroup difference in duration of severe deafness was not (P>0.05). Mean speech discrimination rates were all elevated after 12 months, with no statistically significant differences between groups. However, the Type E group showed the smallest improvement in mean pure-tone threshold (22.50±3.535 dB HL), while the Type F group showed the largest (56.04±10.649 dB HL), and this difference between groups was statistically significant. Conclusion: Cochlear implantation eliminated tinnitus in 80% of the patients in this study. The better improvement in hearing and speech ability in patients whose tinnitus persisted before and after CI use may be related to the amount and function of the residual auditory nerve fibers. Age may be an important factor in tinnitus generation, which may deserve more explanation and attention during the rehabilitation period.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Tinnitus, Adolescent, Adult, Child, Humans, Middle Aged, Prognosis, Speech, Treatment Outcome, Young Adult
8.
Article in Chinese | MEDLINE | ID: mdl-33254319

ABSTRACT

Objective: To evaluate the impact of cochlear implantation on speech perception and quality of life in postlingually deaf adults, and to explore the correlation between speech perception and quality of life using the Nijmegen Cochlear Implant Questionnaire and the Mandarin version of the Minimum Speech Test Battery. Method: Thirty-six postlingually deaf patients were recruited, including 20 males and 16 females. Patient age at implantation was 20 to 72 years (52±16), and the duration of hearing loss before cochlear implantation was 2 to 25 years (14±6). The monosyllable recognition rate score was tested using the Mandarin version of the Minimum Speech Test Battery, and quality of life was quantified using the Nijmegen Cochlear Implant Questionnaire. Result: Speech recognition and quality of life improved significantly in patients after cochlear implantation. The scores for basic sound perception, advanced sound perception, speech ability, self-confidence, social activity ability, and social ability improved, but the differences were not statistically significant. The Mandarin monosyllable recognition rate scores were correlated with basic sound perception (r=-0.36; P=0.004), advanced sound perception (r=-0.41; P=0.002), and speech ability (r=-0.67; P=0.001), and these correlations were statistically significant. Conclusion: Auditory ability, speech perception, and quality of life improved significantly in postlingually deaf patients after cochlear implantation.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Adult, Aged, Female, Humans, Male, Middle Aged, Quality of Life, Speech, Young Adult
10.
PLoS One; 15(10): e0240682, 2020.
Article in English | MEDLINE | ID: mdl-33091043

ABSTRACT

In present-day Seoul Korean, the primary phonetic cue to the lenis-aspirated stop distinction is shifting from VOT to F0. Some previous studies have considered this sound change a tonogenesis, whereby low-level F0 perturbation has developed into tonal features (L for the lenis and H for the aspirated stops) in the segmental phonology. They have, however, examined the stop distinction only in phrase- or utterance-initial position. We newly explore the sound change in relation to prosodic structural factors (position and prominence). Apparent-time production data were recorded from four speaker groups: young female, young male, old female, and old male. The way the speakers use VOT versus F0 indeed varies as a function of position and prominence. Crucially, in all groups, VOT is still used for the lenis-aspirated distinction phrase-medially, due to lenis stop voicing. This role of VOT, however, is found only in the non-prominent (unfocused) condition, in which the F0 difference is reduced to a low-level perturbation effect. In the prominent (focused) context, in which tones come into play, the role of VOT diminishes, led by young female speakers. These patterns can be interpreted as a prosodically conditioned, complementary use of the features to maintain sufficient contrast. Importantly, however, the tonal difference under focus is not bidirectionally polarized: F0 is not lowered for the lenis stop. This lack of direct enhancement of the distinctive L tone weakens the possibility that F0 has been transphonologized into the phonemic feature system of the language. As an alternative to the view that tonal features are newly introduced into the segmental phonology, we propose a prosodic account: the sound change is best characterized as a prosodically conditioned change in the use of the segmental voicing feature (implemented by VOT) versus already available post-lexical tones in the intonational phonology of Korean.


Subjects
Phonetics, Speech Acoustics, Speech Perception, Voice, Adult, Asian Continental Ancestry Group, Female, Humans, Male, Middle Aged, Seoul, Young Adult
11.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 2837-2840, 2020 07.
Article in English | MEDLINE | ID: mdl-33018597

ABSTRACT

One of the remarkable abilities of humans is to focus attention on a particular speaker in a multi-speaker environment, known as the cocktail party effect. How the human brain solves this non-trivial task is a challenge to which the scientific community has not yet found answers. In recent years, progress has been made thanks to the development of system identification methods based on least squares (LS) that map the relationship between the cortical signals of a listener and the speech signals present in an auditory scene. Results from numerous two-speaker experiments simulating the cocktail party effect have shown that auditory attention can be inferred from electroencephalography (EEG) using the LS method. It has been suggested that these methods have the potential to be integrated into hearing aids for algorithmic control. However, a major challenge with LS methods remains: a large number of scalp EEG electrodes is required to obtain a reliable estimate of attention. Here we present a new system identification method based on the linear minimum mean squared error (LMMSE) criterion that can estimate attention with only two electrodes: one for estimating the true signal and the other for estimating the noise. The algorithm is tested on EEG signals collected from ten subjects, and its performance is compared against the state-of-the-art LS algorithm.
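To make the LS baseline concrete, here is a minimal numpy sketch of the standard backward (stimulus-reconstruction) model for auditory attention decoding: a time-lagged EEG design matrix, a regularized least-squares fit, and attention inference by envelope correlation. All shapes, the lag count, and the ridge term are illustrative assumptions; the paper's two-electrode LMMSE variant, which uses a separate noise-reference electrode, is not reproduced here.

```python
# Minimal LS backward model for auditory attention decoding (a sketch, not
# the paper's method): reconstruct the speech envelope from lagged EEG, then
# pick the speaker whose envelope correlates best with the reconstruction.
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of EEG (time x channels) into a design matrix."""
    T, C = eeg.shape
    X = np.zeros((T, C * n_lags))
    for k in range(n_lags):
        X[k:, k * C:(k + 1) * C] = eeg[:T - k]
    return X

def fit_ls_decoder(eeg, envelope, n_lags=32, lam=1e2):
    """Regularized least squares: w = (X'X + lam*I)^-1 X'y."""
    X = lagged(eeg, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def decode_attention(eeg, env_a, env_b, w, n_lags=32):
    """Return which of two candidate speech envelopes matches the reconstruction best."""
    rec = lagged(eeg, n_lags) @ w
    corr = lambda a, b: np.corrcoef(a, b)[0, 1]
    return "A" if corr(rec, env_a) > corr(rec, env_b) else "B"
```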


Subjects
Speech Perception, Attention, Electroencephalography, Humans, Noise, Speech
12.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 3074-3077, 2020 07.
Article in English | MEDLINE | ID: mdl-33018654

ABSTRACT

Passive brain-computer interfaces (BCIs) covertly decode the cognitive and emotional states of users from neurophysiological signals. An important issue for passive BCIs is monitoring the attentional state of the brain. Previous studies have mainly focused on the classification of attention levels (i.e., high vs. low), but few have investigated the classification of attention focus during speech perception. In this paper, we used electroencephalography (EEG) to recognize whether a subject's attention was focused on the call sign or the number when listening to a short sentence. Fifteen subjects participated in this study; they were required to focus on either the call sign or the number in each listening task. A new algorithm was proposed to classify the EEG patterns of the different attention focuses, combining common spatial patterns (CSP), the short-time Fourier transform (STFT), and discriminative canonical pattern matching (DCPM). The accuracy reached an average of 78.38%, with a peak of 93.93%, for single-trial classification. The results demonstrate that the proposed algorithm is effective at classifying auditory attention focus during speech perception.
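Of the three stages named above, CSP is the most standard; a minimal numpy/scipy sketch of CSP filter estimation and log-variance feature extraction follows. Trial shapes and the number of filter pairs are illustrative assumptions, and the STFT and DCPM stages are omitted.

```python
# Minimal CSP sketch (one stage of the pipeline above; STFT and DCPM omitted):
# spatial filters from a generalized eigendecomposition of class covariances,
# then log-variance features for a downstream classifier.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: iterable of (n_channels, n_samples) EEG trials per class."""
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]  # normalized covariance
        return np.mean(covs, axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)                    # generalized eigenproblem
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # both discriminative ends
    return vecs[:, picks].T                           # rows are spatial filters

def csp_features(trial, W):
    z = W @ trial
    return np.log(np.var(z, axis=1))                  # log-variance feature vector
```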


Subjects
Brain-Computer Interfaces, Speech Perception, Attention, Auditory Perception, Electroencephalography, Humans
13.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 3102-3105, 2020 07.
Article in English | MEDLINE | ID: mdl-33018661

ABSTRACT

Speech recognition based on surface electromyography (sEMG) signals is an important research direction with potential applications in daily life, work, and clinical settings. The number and placement of sEMG electrodes play a critical role in capturing the underlying sEMG activity and, in turn, in accurately classifying the speaking tasks. The aim of this work was to investigate the effect of the number of channels on speech recognition based on high-density (HD) sEMG. Eight healthy subjects were recruited to perform 11 English speech tasks while sEMG signals were detected from 120 electrodes covering almost the whole neck and face. Classification accuracy was evaluated with a linear discriminant analysis (LDA) classifier over different sets of sEMG electrodes. By comparing classification accuracies, the sequential forward search (SFS) algorithm was adopted to find the combination of electrodes that achieved the highest classification level. The results showed that a small number of channels selected by the SFS method could achieve a classification accuracy of 80%, and only a few additional electrodes were needed to capture enough detail to reach accuracies of 85%, 90%, and 95%; these numbers were far smaller than 120. Considering computation time and reliable accuracy, we conclude that the SFS method may be helpful for standardizing the number and position of electrodes in sEMG-based speech recognition.
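The SFS-plus-LDA procedure maps directly onto scikit-learn primitives; the hedged sketch below shows the idea. Using one feature per channel and synthetic data are simplifying assumptions; in the study each channel would contribute several time-domain sEMG features.

```python
# Hedged sketch of forward channel selection with an LDA classifier, mirroring
# the SFS procedure above on synthetic placeholder data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(2)
X = rng.normal(size=(11 * 40, 120))   # 11 speech tasks x 40 repetitions, 120 channels
y = np.repeat(np.arange(11), 40)

sfs = SequentialFeatureSelector(
    LinearDiscriminantAnalysis(),
    n_features_to_select=20,          # stop early; the study sweeps this number
    direction="forward",
    cv=5,
)
sfs.fit(X, y)
print("selected channels:", np.flatnonzero(sfs.get_support()))
```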


Subjects
Speech Perception, Speech, Electromyography, Humans, Movement, Computer-Assisted Signal Processing
14.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 3314-3317, 2020 07.
Article in English | MEDLINE | ID: mdl-33018713

ABSTRACT

Reverberation reduces speech quality and therefore inconveniences listeners, especially those using assistive hearing devices. A significant step in enhancing the quality of reverberant speech is speech quality assessment, which is mostly based on subjective judgements. Subjective evaluations vary with listeners' perception and emotional and mental states. To obtain an objective assessment of speech quality in reverberation, this work carried out an event-related potential (ERP) study using a passive oddball paradigm. Listeners were presented with anechoic speech as the standard stimulus, mixed with reverberant speech under different levels of reverberation as deviant stimuli. The ERP responses reveal how listeners' subconscious processing interacts with different levels of reverberation in perceived speech. Results showed that the peak amplitude of the P300 component followed the variation of reverberation time in the reverberant speech, providing evidence that the P300 could serve as a neural surrogate of reverberation time in objective speech quality assessment.
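As a concrete reading of the measurement above, this small numpy sketch averages deviant-minus-standard epochs at one electrode and takes the peak in a typical P300 latency window. The sampling rate, epoch layout, and the 250-500 ms search window are assumptions, not values from the paper.

```python
# Sketch: P300 peak amplitude from the deviant-minus-standard difference wave.
import numpy as np

FS = 250  # sampling rate in Hz (assumed)

def p300_amplitude(deviant_epochs, standard_epochs, t0=-0.2):
    """Epochs: (n_trials, n_samples) arrays at one electrode; t0 = epoch start (s)."""
    diff = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)
    t = t0 + np.arange(diff.size) / FS
    window = (t >= 0.25) & (t <= 0.50)   # typical P300 latency range
    return diff[window].max()
```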


Subjects
Sensorineural Hearing Loss, Speech Perception, Hearing, Humans, Noise, Speech Intelligibility
15.
Annu Int Conf IEEE Eng Med Biol Soc; 2020: 4221-4224, 2020 07.
Article in English | MEDLINE | ID: mdl-33018928

ABSTRACT

The Internet of Things (IoT) in healthcare has efficiently accelerated medical monitoring and assessment through real-time analysis of collected data. Hence, to support the hearing-impaired community with better calibration of their clinical processors and hearing aids, a portable smart-space interface, AURIS, has been developed by the Cochlear Implant Processing Lab (CILab) at UT-Dallas. The proposed Auris interface periodically samples the acoustic space and, through a learn-versus-test phase, builds a Gaussian mixture model for each specific environmental location. The Auris interface establishes a connection to the CRSS CCi-Mobile research platform through an Android app to fine-tune the configuration settings for cochlear implant (CI) or hearing aid (HA) users entering the room/location. Baseline objective evaluations have been performed in diverse naturalistic locations using 12 hours of audio data. Performance is assessed in terms of verified wireless communication, together with estimated acoustic environment knowledge and room classification at greater than 90% accuracy.
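The learn-versus-test idea behind Auris can be sketched as one Gaussian mixture per location over short-time acoustic features, with rooms classified by log-likelihood. The MFCC front end (via librosa) and the mixture size are assumptions; the abstract does not specify Auris's feature set.

```python
# Hedged sketch: per-room GMMs over MFCC frames, classification by likelihood.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(path, sr=16000):
    y, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # (frames, 13)

def train_room_models(room_files, n_components=8):
    """room_files: dict mapping room name -> list of audio file paths."""
    return {room: GaussianMixture(n_components).fit(
                np.vstack([mfcc_frames(f) for f in files]))
            for room, files in room_files.items()}

def classify_room(path, models):
    X = mfcc_frames(path)
    return max(models, key=lambda room: models[room].score(X))  # best avg log-likelihood
```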


Subjects
Cochlear Implants, Hearing Aids, Speech Perception, Acoustics, Space Research
16.
Nat Commun; 11(1): 5240, 2020 10 16.
Article in English | MEDLINE | ID: mdl-33067457

ABSTRACT

Spoken language, both perception and production, is thought to be facilitated by an ensemble of predictive mechanisms. We obtain intracranial recordings in 37 patients using depth probes implanted along the anteroposterior extent of the supratemporal plane during rhythm listening, speech perception, and speech production. These reveal two predictive mechanisms in early auditory cortex with distinct anatomical and functional characteristics. The first, localized to bilateral Heschl's gyri and indexed by low-frequency phase, predicts the timing of acoustic events. The second, localized to planum temporale only in language-dominant cortex and indexed by high-gamma power, shows a transient response to acoustic stimuli that is uniquely suppressed during speech production. Chronometric stimulation of Heschl's gyrus selectively disrupts speech perception, while stimulation of planum temporale selectively disrupts speech production. This work illuminates the fundamental acoustic infrastructure, both architecture and function, for spoken language, grounding cognitive models of speech perception and production in human neurobiology.


Subjects
Auditory Cortex/physiopathology, Epilepsy/physiopathology, Acoustic Stimulation, Adult, Auditory Cortex/diagnostic imaging, Brain Mapping, Epilepsy/diagnostic imaging, Epilepsy/psychology, Female, Humans, Language, Magnetic Resonance Imaging, Male, Speech, Speech Perception, Young Adult
17.
PLoS One; 15(10): e0240832, 2020.
Article in English | MEDLINE | ID: mdl-33119665

ABSTRACT

Hypnosis is a powerful tool for affecting the processing and perception of stimuli. Here, we investigated the effects of hypnosis on the processing of auditory stimuli, the time course of event-related potentials (ERPs; N1 and P3b amplitudes), and the activity of the cortical sources of the P3b component. Forty-eight participants completed an auditory oddball paradigm composed of standard, distractor, and target stimuli during hypnosis (HYP), simulation of hypnosis (SIM), distraction (DIS), and control (CON) conditions. During HYP, participants received the suggestion that an earplug would obstruct the perception of the tones; during SIM, they were to pretend to be hypnotized and unable to hear the tones. During DIS, participants' attention was withdrawn from the tones by focusing their attention on a film. In each condition, subjects were asked to press a key whenever a target stimulus was presented. Behavioral data show that target hit rates and response times were significantly reduced during HYP and SIM, and loudness ratings of the tones were reduced only during HYP. Distraction from the stimuli by the film was less effective in reducing target hit rate and tone loudness. Although the N1 amplitude was not affected by the experimental conditions, the P3b amplitude was significantly reduced in HYP and SIM compared with CON and DIS. In addition, source localization results indicate that only a small number of neural sources organize the differences in tone processing between the control condition and the distraction, hypnosis, and simulation of hypnosis conditions. These sources belong to brain areas that control the focus of attention, the discrimination of auditory stimuli, and the organization of behavioral responses to targets. Our data confirm that deafness suggestions significantly change auditory processing and perception, but complete deafness is hard to achieve during HYP. Therefore, the term 'deafness' may be misleading and would better be replaced by 'hypoacusis'.


Subjects
Brain/diagnostic imaging, Cognition/physiology, Deafness/physiopathology, Hypnosis/methods, Acoustic Stimulation, Adolescent, Adult, Attention/physiology, Auditory Perception, Behavior/physiology, Brain/physiology, Brain/physiopathology, Deafness/diagnostic imaging, Deafness/etiology, Electroencephalography/methods, Evoked Potentials/physiology, Auditory Evoked Potentials/physiology, Female, Hearing/physiology, Humans, Male, Middle Aged, Speech Perception/physiology, Young Adult
18.
Trends Hear; 24: 2331216520960861, 2020.
Article in English | MEDLINE | ID: mdl-33073727

ABSTRACT

Effective hearing aid (HA) rehabilitation requires personalization of the HA fitting parameters, but in current clinical practice only the gain prescription is typically individualized. To optimize the fitting process, advanced HA settings such as noise reduction and microphone directionality can also be tailored to individual hearing deficits. In two earlier studies, an auditory test battery and a data-driven approach that allow classifying hearing-impaired listeners into four auditory profiles were developed. Because these profiles were found to be characterized by markedly different hearing abilities, it was hypothesized that more tailored HA fittings would lead to better outcomes for such listeners. Here, we explored potential interactions between the four auditory profiles and HA outcome as assessed with three different measures (speech recognition, overall quality, and noise annoyance) and six HA processing strategies with various noise reduction, directionality, and compression settings. Using virtual acoustics, a realistic speech-in-noise environment was simulated. The stimuli were generated using a HA simulator and presented to 49 habitual HA users who had previously been profiled. The four auditory profiles differed clearly in terms of their mean aided speech reception thresholds, thereby implying different needs in terms of signal-to-noise ratio improvement. However, no clear interactions with the tested HA processing strategies were found. Overall, these findings suggest that the auditory profiles can capture some of the individual differences in HA processing needs and that further research is required to identify suitable HA solutions for them.


Subjects
Hearing Aids, Sensorineural Hearing Loss, Speech Perception, Auditory Threshold, Hearing, Sensorineural Hearing Loss/diagnosis, Humans, Noise/adverse effects, Speech
19.
Codas; 32(5): e20180272, 2020.
Article in Portuguese, English | MEDLINE | ID: mdl-33053080

ABSTRACT

PURPOSE: To validate the content of an instrument to measure listening effort in hearing-impaired individuals. METHOD: This is a validation study developed in two stages: Stage 1, the planning and development of the first version of the instrument, and Stage 2, the investigation of evidence of validity based on content and the development of the final version of the instrument to measure listening effort. Ten professionals with expertise in audiology and more than five years of clinical experience participated in this study. The instrument to be validated was composed of three parts: I, "speech perception of logatomes and listening effort"; II, "listening effort and working memory"; and III, "speech perception of meaningless sentences and working memory". Stimuli were presented monaurally, in quiet and at signal-to-noise ratios of +5 dB, 0 dB, and -5 dB. A descriptive analysis of the suggestions of the committee of judge audiologists was conducted, together with an analysis of the item-level and scale-level content validity index. RESULTS: Parts I and III of the proposed instrument reached a scale content validity index above 0.78, meaning that the presented items did not need modification of their construct. CONCLUSION: The evidence of validity studied allowed relevant modifications and made this instrument adequate to its construct.
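For reference, the content validity index used above has a simple arithmetic definition: the item-level CVI (I-CVI) is the proportion of judges rating an item 3 or 4 on a 4-point relevance scale, and the scale-level CVI (S-CVI/Ave) is the mean of the I-CVIs, with 0.78 a common acceptance threshold for ten judges. The ratings in the sketch below are invented for illustration.

```python
# Worked example of item-level and scale-level content validity indices.
import numpy as np

ratings = np.array([            # rows = 10 judges, columns = 3 items (invented)
    [4, 3, 2], [4, 4, 3], [3, 4, 3], [4, 3, 4], [3, 3, 2],
    [4, 4, 3], [4, 3, 3], [3, 4, 4], [4, 4, 3], [3, 3, 4],
])
i_cvi = (ratings >= 3).mean(axis=0)   # item-level CVI, here [1.0, 1.0, 0.8]
s_cvi = i_cvi.mean()                  # scale-level CVI, here ~0.93
print(i_cvi, s_cvi, s_cvi >= 0.78)
```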


Subjects
Persons with Hearing Impairments, Speech Perception, Auditory Perception, Humans, Short-Term Memory, Signal-to-Noise Ratio
20.
Trends Hear; 24: 2331216520960601, 2020.
Article in English | MEDLINE | ID: mdl-33054620

ABSTRACT

Speech recognition in complex environments involves focusing on the most relevant speech signal while ignoring distractions. Difficulties can arise from the incoming signal's characteristics (e.g., accented pronunciation, background noise, distortion) or the listener's characteristics (e.g., hearing loss, advancing age, cognitive abilities). Listeners who use cochlear implants (CIs) must overcome these difficulties while listening to an impoverished version of the signals available to listeners with normal hearing (NH). In the real world, listeners often attempt tasks concurrent with, but unrelated to, speech recognition. This study sought to reveal the effects of visual distraction and of performing a simultaneous visual task on audiovisual speech recognition. Two groups, listeners with CIs and listeners with NH hearing vocoded speech, were presented with videos of unaccented and accented talkers, with and without visual distractions, and with a secondary task. It was hypothesized that, compared with those with NH, listeners with CIs would be less influenced by visual distraction or a secondary visual task, because their prolonged reliance on visual cues to aid auditory perception improves the ability to suppress irrelevant information. Results showed that visual distractions alone did not significantly decrease speech recognition performance for either group, but adding a secondary task did. Speech recognition was significantly poorer for accented than for unaccented speech, and this difference was greater for CI listeners. These results suggest that speech recognition performance depends more on incoming signal characteristics than on a difference in adaptive strategies for managing distractions between those who listen with and without a CI.


Subjects
Cochlear Implantation, Cochlear Implants, Speech Perception, Cognition, Humans, Speech