Results 1 - 6 of 6
1.
JMIR Mhealth Uhealth ; 9(3): e20890, 2021 03 15.
Article in English | MEDLINE | ID: mdl-33720025

ABSTRACT

BACKGROUND: With the growing adult population using electronic hearing devices such as cochlear implants or hearing aids, there is an increasing worldwide need for auditory training (AT) to promote optimal device use. However, financial resources and scheduling conflicts make clinical AT infeasible. OBJECTIVE: To address this gap between need and accessibility, we primarily aimed to develop a mobile health (mHealth) app called Speech Banana for AT. The app would be substantially more affordable and portable than clinical AT; would deliver a validated training model that is reflective of modern techniques; and would track users' progress in speech comprehension, providing greater continuity between periodic in-person visits. To improve international availability, our secondary aim was to implement the English language training model into Korean as a proof of concept for worldwide usability. METHODS: A problem- and objective-centered Design Science Research Methodology approach was adopted to develop the Speech Banana app. A review of previous literature and computer-based learning programs outlined current AT gaps, whereas interviews with speech pathologists and users clarified the features that were addressed in the app. Past and present users were invited to evaluate the app via community forums and the System Usability Scale. RESULTS: Speech Banana has been implemented in English and Korean languages for iPad and web use. The app comprises 38 lessons, which include analytic exercises pairing visual and auditory stimuli, and synthetic quizzes presenting auditory stimuli only. During quizzes, users type the sentence heard, and the app provides visual feedback on performance. Users may select a male or female speaker and the volume of background noise, allowing for training with a range of frequencies and signal-to-noise ratios. 
There were more than 3200 downloads of the English iPad app and almost 100 downloads of the Korean app; more than 100 users registered for the web apps. The English app received a System Usability Scale rating of "good" from 6 users, and the Korean app received a rating of "OK" from 16 users. CONCLUSIONS: Speech Banana offers AT accessibility with a validated curriculum, allowing users to develop speech comprehension skills with the aid of a mobile device. This mHealth app holds potential as a supplement to clinical AT, particularly in this era of global telemedicine.
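The quiz flow described above (user types the sentence heard, app shows per-word feedback) can be sketched in a few lines. This is a hypothetical illustration, not Speech Banana's actual scoring code; the function names and normalization rules are assumptions.

```python
import string

def _words(sentence: str) -> list[str]:
    # Lowercase each token and strip surrounding punctuation before comparing
    return [w.strip(string.punctuation).lower() for w in sentence.split()]

def score_response(target: str, response: str) -> list[tuple[str, bool]]:
    """Compare a typed response to the target sentence word by word.

    Returns (target_word, correct) pairs, a simple stand-in for the kind
    of per-word visual feedback such an app might display after a quiz item.
    """
    tw, rw = _words(target), _words(response)
    rw += [""] * (len(tw) - len(rw))  # pad so every target word gets a verdict
    return [(t, t == r) for t, r in zip(tw, rw)]

feedback = score_response("The cat sat on the mat.", "the cat sat in the mat")
print(sum(ok for _, ok in feedback), "/", len(feedback))  # 5 / 6
```

A production trainer would likely also handle homophones and minor spelling slips; exact word match is the simplest reasonable baseline.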


Subject(s)
Mobile Applications , Musa , Telemedicine , Adult , Female , Humans , Male , Speech
2.
J Audiol Otol ; 22(1): 28-38, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29325391

ABSTRACT

BACKGROUND AND OBJECTIVES: It is important to understand the frequency regions of cues used, and not used, by cochlear implant (CI) recipients. Speech and environmental sound recognition by individuals with CIs and individuals with normal hearing (NH) was measured. Gradients were also computed to evaluate the pattern of change in identification performance with respect to the low-pass or high-pass filtering cutoff frequency. SUBJECTS AND METHODS: Frequency-limiting effects were implemented by passing the acoustic waveforms through low-pass filters (LPFs) or high-pass filters (HPFs) with seven different cutoff frequencies. Identification of Korean vowels and consonants produced by a male and a female speaker, and of environmental sounds, was measured. Crossover frequencies, at which the LPF and HPF conditions yield identical identification scores, were determined for each identification test. RESULTS: CI and NH subjects showed similar changes in identification performance as a function of cutoff frequency for the LPF and HPF conditions, suggesting that degraded spectral information in the acoustic signal may constrain identification performance similarly for both groups. However, CI subjects were generally less efficient than NH subjects in using the limited spectral information for speech and environmental sound identification, owing to the inefficient coding of acoustic cues through CI sound processors. CONCLUSIONS: These findings provide vital information, in Korean, on how the frequency information received through a CI processor differs from that available with normal hearing.
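The crossover frequency in this design is the cutoff at which the LPF and HPF score curves intersect. A minimal sketch of how it could be estimated from measured scores, using linear interpolation between adjacent cutoffs (the data below are hypothetical; studies often interpolate on a log-frequency axis instead):

```python
def crossover_frequency(cutoffs, lpf_scores, hpf_scores):
    """Estimate the cutoff (Hz) where LPF and HPF identification scores
    are equal, by linearly interpolating the score difference between
    adjacent measured cutoffs."""
    diffs = [l - h for l, h in zip(lpf_scores, hpf_scores)]
    for i in range(len(diffs) - 1):
        d0, d1 = diffs[i], diffs[i + 1]
        if d0 == 0:
            return cutoffs[i]          # curves meet exactly at a measured point
        if d0 * d1 < 0:                # sign change: curves cross in this interval
            frac = d0 / (d0 - d1)
            return cutoffs[i] + frac * (cutoffs[i + 1] - cutoffs[i])
    return None                        # curves never cross in the measured range

# Hypothetical percent-correct scores at each cutoff frequency (Hz)
cutoffs = [250, 500, 1000, 2000, 4000]
lpf_scores = [20, 45, 70, 85, 95]      # low-pass: scores rise as more highs pass
hpf_scores = [95, 80, 60, 35, 15]      # high-pass: scores fall as lows are removed

print(round(crossover_frequency(cutoffs, lpf_scores, hpf_scores)))  # 889
```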

3.
PLoS One ; 10(7): e0131807, 2015.
Article in English | MEDLINE | ID: mdl-26162017

ABSTRACT

This study aimed to evaluate the relative contributions of spectral and temporal information to Korean phoneme recognition and to compare them with those for English phoneme recognition. Eleven normal-hearing Korean-speaking listeners participated. Korean phonemes, including 18 consonants in a /Ca/ format and 17 vowels in a /hVd/ format, were processed through a noise vocoder. Spectral information was controlled by varying the number of channels (1, 2, 3, 4, 6, 8, 12, and 16), whereas temporal information was controlled by varying the lowpass cutoff frequency of the envelope extractor (1 to 512 Hz in octave steps). A total of 80 vocoder conditions (8 channel numbers × 10 lowpass cutoff frequencies) were presented to listeners for phoneme recognition. While vowel recognition depended predominantly on spectral cues, a tradeoff between spectral and temporal information was evident for consonant recognition. Overall consonant recognition was dramatically lower than English consonant recognition under similar vocoder conditions. The complexity of the Korean consonant repertoire, particularly the three-way distinction of stops, hinders recognition of vocoder-processed phonemes.
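The two vocoder parameters combine into the 80-condition grid described above. A small sketch of the experimental design (the analysis bandwidth and logarithmic channel spacing below are illustrative assumptions, not taken from the study):

```python
import math

def band_edges(n_channels, f_lo=300.0, f_hi=5000.0):
    """Vocoder channel band edges spaced logarithmically between f_lo and
    f_hi (Hz). The analysis range and spacing here are assumptions for
    illustration only."""
    step = (math.log(f_hi) - math.log(f_lo)) / n_channels
    return [math.exp(math.log(f_lo) + i * step) for i in range(n_channels + 1)]

# Envelope lowpass cutoffs: 1 to 512 Hz in octave steps (10 values)
env_cutoffs = [2 ** k for k in range(10)]        # 1, 2, 4, ..., 512 Hz
channel_counts = [1, 2, 3, 4, 6, 8, 12, 16]      # spectral resolution conditions

# Full factorial design: 8 channel counts x 10 envelope cutoffs
conditions = [(n, fc) for n in channel_counts for fc in env_cutoffs]
print(len(conditions))  # 80
```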


Subject(s)
Speech Perception , Adult , Cochlear Implants , Cues , Female , Humans , Male , Pattern Recognition, Automated , Phonetics , Republic of Korea , Signal Processing, Computer-Assisted , Young Adult
4.
J Am Acad Audiol ; 21(1): 35-43, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20085198

ABSTRACT

BACKGROUND: Maximum performance and long-term stability of bilateral cochlear implants have become important topics as the number of bilateral cochlear implant recipients has grown. PURPOSE: To determine the performance over time (up to 6 yr) of subjects with simultaneous bilateral cochlear implants (CI+CI) on word recognition and localization. RESEARCH DESIGN: Longitudinal investigation of word recognition in quiet (CNC) and sound localization in quiet (Everyday Sounds Localization Test). STUDY SAMPLE: The subjects were 48 adults who received their cochlear implants simultaneously at the University of Iowa. RESULTS: For word recognition, percent correct scores improved continuously up to 1 yr postimplantation, with most of the benefit occurring within the first month. Over follow-up extending to 72 mo, averaged CNC scores reached a plateau of about 63% correct after 2 yr (N = 31). However, in the 17 subjects with complete data sets between 12 mo and 48+ mo, word recognition scores differed significantly between 12 mo and 48+ mo, implying that binaural advantages need more time to develop. Localization results showed that root mean square (RMS) error scores improved continuously up to 1 yr postimplantation, with most of the benefit occurring within the first 3 mo. After 2 yr, averaged scores reached a plateau of about 20 degrees RMS error (N = 27). In the 10 subjects with complete data sets between 12 mo and 48+ mo, localization scores did not improve from 12 mo to 48+ mo. There were large individual differences in performance over time. CONCLUSIONS: In general, substantial benefits in both word recognition and localization were found over the first 1-12 mo postimplantation for subjects who received simultaneous bilateral cochlear implants. These benefits were maintained up to 6 yr postimplantation.
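The RMS error metric used for the localization outcome above is the square root of the mean squared deviation between loudspeaker (target) and response azimuths. A minimal sketch with hypothetical data:

```python
import math

def rms_error(targets_deg, responses_deg):
    """Root-mean-square localization error in degrees: sqrt of the mean
    squared difference between response and target azimuths."""
    errs = [r - t for t, r in zip(targets_deg, responses_deg)]
    return math.sqrt(sum(e * e for e in errs) / len(errs))

# Hypothetical trial data: loudspeaker azimuths vs. a listener's responses
targets = [-45, 0, 45]
responses = [-30, 10, 40]
print(round(rms_error(targets, responses), 1))  # 10.8
```

An RMS error near 20 degrees, as reported for the plateau here, means responses deviated from the source azimuth by about 20 degrees on average in this squared-error sense.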


Subject(s)
Cochlear Implants/standards , Hearing Loss, Bilateral/surgery , Speech Perception/physiology , Adult , Aged , Aged, 80 and over , Female , Follow-Up Studies , Hearing Loss, Bilateral/physiopathology , Hearing Loss, Bilateral/rehabilitation , Humans , Male , Middle Aged , Prosthesis Design , Retrospective Studies , Speech Reception Threshold Test/methods , Time Factors , Young Adult
5.
Audiol Neurootol ; 13(3): 206-12, 2008.
Article in English | MEDLINE | ID: mdl-18212495

ABSTRACT

The excessive storage of mucopolysaccharides in Hunter syndrome leads to various otologic manifestations. We interviewed 19 patients with Hunter syndrome to assess their otologic problems, and conducted audiologic tests and temporal bone CT. Patients with the intermediate or severe form exhibited severe speech delay of more than 2 years (12/14 patients). However, in patients with the mild form (5/5), speech development was not greatly disturbed (2/5), although otoscopic findings were similar. The hearing threshold determined by the auditory brainstem response differed significantly between the mild and intermediate/severe forms (p < 0.05). Therefore, patients with the mild form may benefit from active otologic intervention such as ventilation tube (VT) insertion, amplification, and speech therapy.


Subject(s)
Hearing Disorders/etiology , Language Development Disorders/etiology , Mucopolysaccharidosis II/physiopathology , Speech Intelligibility/physiology , Adolescent , Child , Child, Preschool , Ear, Middle/pathology , Humans , Intelligence , Language Development Disorders/physiopathology , Learning Disabilities/etiology , Male , Mastoid/pathology , Mucopolysaccharidosis II/genetics , Mucopolysaccharidosis II/pathology , Mucopolysaccharidosis II/psychology , Phenotype , Prospective Studies
6.
Semin Hear ; 29(4): 326-332, 2008 Nov.
Article in English | MEDLINE | ID: mdl-20333263

ABSTRACT

This article reviews possible neural correlates of tinnitus, including an increase in firing rate, a decrease in firing rate, periodic activity, synchronous activity across neurons, and an edge between active and inactive neurons. We make some suggestions regarding how electrical current might alter these patterns of neural activity. For example, if tinnitus were represented by periodic neural activity, then electrical stimulation would need to disrupt this periodicity. We review cases showing that tinnitus can be reduced or eliminated with cochlear electrical stimulation, although the effect varies across individuals. Finally, after summarizing some key observations, we suggest next steps toward clinical application.
