1.
Harv Rev Psychiatry ; 31(1): 1-13, 2023.
Article in English | MEDLINE | ID: mdl-36608078

ABSTRACT

The need for objective measurement in psychiatry has stimulated interest in alternative indicators of the presence and severity of illness. Speech may offer a source of information that bridges the subjective and objective in the assessment of mental disorders. We systematically reviewed the literature for articles exploring speech analysis for psychiatric applications. The utility of speech analysis depends on how accurately speech features represent clinical symptoms within and across disorders. We identified four domains of the application of speech analysis in the literature: diagnostic classification, assessment of illness severity, prediction of onset of illness, and prognosis and treatment outcomes. We discuss the findings in each of these domains, with a focus on how types of speech features characterize different aspects of psychopathology. Models that bring together multiple speech features can distinguish speakers with psychiatric disorders from healthy controls with high accuracy. Differentiating between types of mental disorders, or between symptom dimensions, is a more complex problem that exposes the transdiagnostic nature of speech features. Convergent progress in speech research and computer science opens avenues for implementing speech analysis to enhance the objectivity of assessment in clinical practice. Application of speech analysis will need to address issues of ethics and equity, including the potential to perpetuate discriminatory bias through models that learn from clinical assessment data. Methods that mitigate bias are available and should play a key role in the implementation of speech analysis.
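
As a concrete illustration of the multi-feature models mentioned in this review, the following is a minimal sketch, not any reviewed study's method: a few crude prosodic/spectral features are pooled per recording and fed to a linear classifier. The feature choices, synthetic signals, and group labels are all invented for illustration.

# Minimal sketch: pool a few crude acoustic features per recording, then fit a
# linear classifier separating two hypothetical groups. Features, signals, and
# labels are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def acoustic_features(x, sr, frame=0.025, hop=0.010):
    """Energy variability, pause proportion, and mean spectral centroid."""
    n, h = int(frame * sr), int(hop * sr)
    window = np.hanning(n)
    freqs = np.fft.rfftfreq(n, 1 / sr)
    rms, centroids = [], []
    for i in range(0, len(x) - n, h):
        seg = x[i:i + n]
        rms.append(np.sqrt(np.mean(seg ** 2)))
        mag = np.abs(np.fft.rfft(seg * window))
        centroids.append(np.sum(freqs * mag) / (np.sum(mag) + 1e-12))
    rms = np.array(rms)
    return np.array([rms.std() / (rms.mean() + 1e-12),   # energy variability
                     np.mean(rms < 0.1 * rms.max()),     # "pause" proportion
                     np.mean(centroids)])                # spectral centroid (Hz)

rng = np.random.default_rng(0)
sr = 16000
signals, labels = [], []
for k in range(20):                                      # 20 hypothetical speakers
    x = rng.standard_normal(sr)
    if k % 2:                                            # group 1: spectrally darker signals
        x = np.convolve(x, np.ones(8) / 8, mode="same")
    signals.append(x)
    labels.append(k % 2)

X = np.array([acoustic_features(x, sr) for x in signals])
y = np.array(labels)
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy:", clf.score(X, y))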


Subject(s)
Mental Disorders, Psychiatry, Humans, Speech, Mental Disorders/diagnosis, Mental Disorders/therapy, Mental Disorders/psychology, Psychopathology
2.
J Acoust Soc Am ; 148(4): 1911, 2020 10.
Article in English | MEDLINE | ID: mdl-33138491

ABSTRACT

Although the first two or three formant frequencies are considered essential cues for vowel identification, certain limitations of this approach have been noted. Alternative explanations have suggested listeners rely on other aspects of the gross spectral shape. A study conducted by Ito, Tsuchida, and Yano [(2001). J. Acoust. Soc. Am. 110, 1141-1149] offered strong support for the latter, as attenuation of individual formant peaks left vowel identification largely unaffected. In the present study, these experiments are replicated in two dialects of English. Although the results were similar to those of Ito, Tsuchida, and Yano [(2001). J. Acoust. Soc. Am. 110, 1141-1149], quantitative analyses showed that when a formant is suppressed, participant response entropy increases due to increased listener uncertainty. In a subsequent experiment, using synthesized vowels with changing formant frequencies, suppressing individual formant peaks led to reliable changes in identification of certain vowels but not in others. These findings indicate that listeners can identify vowels with missing formant peaks. However, such formant-peak suppression may lead to decreased certainty in identification of steady-state vowels or even changes in vowel identification in certain dynamically specified vowels.
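
The response-entropy analysis described here is a straightforward Shannon-entropy computation over listeners' identification responses; a minimal sketch with invented response counts:

# Shannon entropy (bits) of a distribution of vowel-identification responses.
# Higher entropy = greater listener uncertainty. Counts below are invented
# solely to illustrate the computation.
import numpy as np

def response_entropy(counts):
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return -np.sum(p * np.log2(p))

intact     = [48, 1, 1, 0, 0]   # responses concentrated on one vowel category
suppressed = [30, 10, 6, 3, 1]  # responses spread out when a formant peak is removed
print(response_entropy(intact), response_entropy(suppressed))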


Subject(s)
Phonetics, Speech Perception, Cues, Humans, Language
3.
J Acoust Soc Am ; 143(4): 2460, 2018 04.
Article in English | MEDLINE | ID: mdl-29716264

ABSTRACT

Natural sounds have substantial acoustic structure (predictability, nonrandomness) in their spectral and temporal compositions. Listeners are expected to exploit this structure to distinguish simultaneous sound sources; however, previous studies confounded acoustic structure and listening experience. Here, sensitivity to acoustic structure in novel sounds was measured in discrimination and identification tasks. Complementary signal-processing strategies independently varied relative acoustic entropy (the inverse of acoustic structure) across frequency or time. In one condition, instantaneous frequency of low-pass-filtered 300-ms random noise was rescaled to 5 kHz bandwidth and resynthesized. In another condition, the instantaneous frequency of a short gated 5-kHz noise was resampled up to 300 ms. In both cases, entropy relative to full bandwidth or full duration was a fraction of that in 300-ms noise sampled at 10 kHz. Discrimination of sounds improved with less relative entropy. Listeners identified a probe sound as a target sound (1%, 3.2%, or 10% relative entropy) that repeated amidst distractor sounds (1%, 10%, or 100% relative entropy) at 0 dB SNR. Performance depended on differences in relative entropy between targets and background. Lower-relative-entropy targets were better identified against higher-relative-entropy distractors than lower-relative-entropy distractors; higher-relative-entropy targets were better identified amidst lower-relative-entropy distractors. Results were consistent across signal-processing strategies.
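
A minimal sketch of one of the two resynthesis strategies described above, rescaling the instantaneous frequency of low-pass-filtered noise to a 5-kHz bandwidth; the filter design, cutoff, and rescaling details are assumptions rather than the authors' implementation:

# Sketch of the frequency-rescaling manipulation: low-pass noise whose
# instantaneous frequency is stretched to a 5-kHz bandwidth and resynthesized.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

sr = 20000                         # sample rate (Hz); assumed
dur = 0.300                        # 300-ms sample, as in the abstract
t = np.arange(int(sr * dur)) / sr

rng = np.random.default_rng(1)
noise = rng.standard_normal(t.size)

cutoff = 500.0                     # low-pass cutoff (assumed), giving low relative entropy
b, a = butter(4, cutoff / (sr / 2))
lp = filtfilt(b, a, noise)

analytic = hilbert(lp)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * sr / (2 * np.pi)      # instantaneous frequency (Hz)

target_bw = 5000.0
scaled_freq = inst_freq * (target_bw / cutoff)     # stretch IF to the wider band
new_phase = np.cumsum(2 * np.pi * scaled_freq / sr)
resynth = np.abs(analytic[1:]) * np.cos(new_phase) # keep the original envelope
resynth /= np.max(np.abs(resynth))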


Subject(s)
Acoustic Stimulation/methods, Auditory Perception/physiology, Discrimination, Psychological/physiology, Psychoacoustics, Sound Localization/physiology, Sound, Case-Control Studies, Humans, Signal Processing, Computer-Assisted
4.
J Acoust Soc Am ; 142(1): 434, 2017 07.
Article in English | MEDLINE | ID: mdl-28764432

ABSTRACT

Given recent interest in the analysis of naturally produced spontaneous speech, a large database of speech samples from the Canadian Maritimes was collected, processed, and analyzed with the primary aim of examining vowel-inherent spectral change in formant trajectories. Although it takes few resources to collect a large sample of audio recordings, the analysis of spontaneous speech introduces a number of difficulties compared to that of laboratory citation speech: Surrounding consonants may have a large influence on vowel formant frequencies and the distribution of consonant contexts is highly unbalanced. To overcome these problems, a statistical procedure inspired by that of Broad and Clermont [(2014). J. Phon. 47, 47-80] was developed to estimate the magnitude of both onset and coda effects on vowel formant frequencies. Estimates of vowel target formant frequencies and the parameters associated with consonant-context effects were allowed to vary freely across the duration of the vocalic portion of a syllable which facilitated the examination of vowel-inherent spectral change. Thirty-five hours of recorded speech samples from 223 speakers were automatically segmented and formant-frequency values were measured for all stressed vowels in the database. Consonant effects were accounted for to produce context-normalized vowel formant frequencies that varied across time.
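
The context-normalization idea can be sketched as an additive model in which a formant trajectory is decomposed into a vowel target plus exponentially decaying onset and coda effects, fit by least squares; the decay constant, contexts, and the synthetic trajectory below are invented, and the published procedure is considerably more elaborate:

# Sketch of an additive formant model: F(t) = target + onset effect * exp(-t/tau)
#                                             + coda effect * exp(-(D - t)/tau).
import numpy as np

D, n = 0.200, 50                       # vowel duration (s), samples across the vowel
t = np.linspace(0, D, n)
tau = 0.040                            # assumed decay constant (s)

# Design matrix: [target, onset basis, coda basis]
X = np.column_stack([np.ones(n), np.exp(-t / tau), np.exp(-(D - t) / tau)])

# Fake "observed" F2 trajectory: 1500 Hz target, +300 Hz onset pull, -150 Hz coda pull.
rng = np.random.default_rng(2)
f2 = X @ np.array([1500.0, 300.0, -150.0]) + rng.normal(0, 10, n)

coef, *_ = np.linalg.lstsq(X, f2, rcond=None)
target, onset_eff, coda_eff = coef
f2_normalized = f2 - onset_eff * X[:, 1] - coda_eff * X[:, 2]   # context-normalized trajectory
print(target, onset_eff, coda_eff)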

5.
Am J Audiol ; 25(4): 344-358, 2016 Dec 01.
Article in English | MEDLINE | ID: mdl-27814664

ABSTRACT

PURPOSE: Speech-in-noise testing relies on a number of factors beyond the auditory system, such as cognitive function, compliance, and motor function. It may be possible to avoid these limitations by using electroencephalography. The present study explored this possibility using the N400. METHOD: Eleven adults with typical hearing heard high-constraint sentences with congruent and incongruent terminal words in the presence of speech-shaped noise. Participants ignored all auditory stimulation and watched a video. The signal-to-noise ratio (SNR) was varied around each participant's behavioral threshold during electroencephalography recording. Speech was also heard in quiet. RESULTS: The amplitude of the N400 effect exhibited a nonlinear relationship with SNR. In the presence of background noise, amplitude decreased from high (+4 dB) to low (+1 dB) SNR but increased dramatically at threshold before decreasing again at subthreshold SNR (-2 dB). CONCLUSIONS: The SNR of speech in noise modulates the amplitude of the N400 effect to semantic anomalies in a nonlinear fashion. These results are the first to demonstrate modulation of the passively evoked N400 by SNR in speech-shaped noise and represent a first step toward the end goal of developing an N400-based physiological metric for speech-in-noise testing.
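
A minimal sketch of how an N400 effect amplitude is commonly quantified, as the mean incongruent-minus-congruent difference in a 300-500 ms window; the window, epochs, and values below are illustrative assumptions, not this study's pipeline:

# Sketch: N400 effect as the mean incongruent-minus-congruent ERP difference
# in a 300-500 ms window. Epochs below are synthetic.
import numpy as np

sr = 250                                    # EEG sampling rate (Hz)
times = np.arange(-0.1, 0.8, 1 / sr)

rng = np.random.default_rng(3)
def fake_epochs(n, n400_amp):
    base = rng.normal(0, 2, (n, times.size))
    # add a negative deflection peaking near 400 ms
    base += n400_amp * -np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    return base

congruent   = fake_epochs(40, 1.0)          # microvolts
incongruent = fake_epochs(40, 4.0)

win = (times >= 0.3) & (times <= 0.5)
n400_effect = incongruent[:, win].mean() - congruent[:, win].mean()
print(f"N400 effect amplitude: {n400_effect:.2f} uV")   # more negative = larger effect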


Subject(s)
Evoked Potentials, Auditory/physiology, Noise, Speech Perception/physiology, Speech Reception Threshold Test, Acoustic Stimulation, Adult, Electroencephalography, Evoked Potentials/physiology, Female, Healthy Volunteers, Humans, Male, Signal-To-Noise Ratio, Young Adult
6.
J Acoust Soc Am ; 128(4): 2112-26, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20968382

ABSTRACT

Some evidence, mostly drawn from experiments using only a single moderate rate of speech, suggests that low-frequency amplitude modulations may be particularly important for intelligibility. Here, two experiments investigated intelligibility of temporally distorted sentences across a wide range of simulated speaking rates, and two metrics were used to predict results. Sentence intelligibility was assessed when successive segments of fixed duration were temporally reversed (exp. 1), and when sentences were processed through four third-octave-band filters, the outputs of which were desynchronized (exp. 2). For both experiments, intelligibility decreased with increasing distortion. However, in exp. 2, intelligibility recovered modestly with longer desynchronization. Across conditions, performances measured as a function of proportion of utterance distorted converged to a common function. Estimates of intelligibility derived from modulation transfer functions predict a substantial proportion of the variance in listeners' responses in exp. 1, but fail to predict performance in exp. 2. By contrast, a metric of potential information, quantified as relative dissimilarity (change) between successive cochlear-scaled spectra, is introduced. This metric reliably predicts listeners' intelligibility across the full range of speaking rates in both experiments. Results support an information-theoretic approach to speech perception and the significance of spectral change rather than physical units of time.
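
A minimal sketch of a cochlear-scaled spectral-change metric in the spirit described above: short-time spectra are pooled into ERB-spaced bands and the Euclidean distance between successive band vectors is accumulated; band count, frame size, and band shapes are assumptions:

# Sketch of a cochlear-scaled spectral change metric. Bands are spaced on the
# Glasberg & Moore ERB-number scale; rectangular bands are a simplification.
import numpy as np

def erb_rate(f):
    return 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)

def inverse_erb_rate(e):
    return (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37

def cochlear_scaled_change(x, sr, n_bands=30, frame=0.020):
    n = int(frame * sr)
    edges_hz = inverse_erb_rate(np.linspace(erb_rate(80.0), erb_rate(sr / 2), n_bands + 1))
    freqs = np.fft.rfftfreq(n, 1 / sr)
    band_idx = np.digitize(freqs, edges_hz) - 1
    spectra = []
    for i in range(0, len(x) - n, n):
        mag = np.abs(np.fft.rfft(x[i:i + n] * np.hanning(n)))
        bands = np.array([mag[band_idx == b].sum() for b in range(n_bands)])
        spectra.append(bands / (np.linalg.norm(bands) + 1e-12))
    spectra = np.array(spectra)
    return np.sum(np.linalg.norm(np.diff(spectra, axis=0), axis=1))   # total spectral change

rng = np.random.default_rng(4)
print(cochlear_scaled_change(rng.standard_normal(16000), 16000))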


Subject(s)
Cochlea/physiology, Phonetics, Speech Acoustics, Speech Intelligibility, Acoustic Stimulation, Audiometry, Entropy, Humans, Male, Models, Theoretical, Sound Spectrography, Time Factors
7.
J Acoust Soc Am ; 127(4): 2611-21, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20370042

ABSTRACT

Although recent evidence reconfirmed the importance of spectral peak frequencies in vowel identification [Kiefte and Kluender (2005). J. Acoust. Soc. Am. 117, 1395-1404], the role of formant amplitude in perception remains somewhat controversial. Although several studies have demonstrated a relationship between vowel perception and formant amplitude, this effect may be a result of basic auditory phenomena such as decreased local spectral contrast and simultaneous masking. This study examines the roles that local spectral contrast and simultaneous masking play in the relationship between the amplitude of spectral peaks and the perception of vowel stimuli. Both full- and incomplete-spectrum stimuli were used in an attempt to separate the effects of local spectral contrast and simultaneous masking. A second experiment was conducted to measure the detectability of the presence/absence of a formant peak to determine to what extent identification data could be predicted from spectral peak audibility alone. Results from both experiments indicate that, while both masking and spectral contrast likely play important roles in vowel perception, additional factors must be considered in order to account for vowel identification data. Systematic differences between the audibility of spectral peaks and predictions of perceived vowel identity were observed.


Subject(s)
Perceptual Masking, Speech Acoustics, Speech Perception, Acoustic Stimulation, Audiometry, Pure-Tone, Auditory Threshold, Humans, Psychoacoustics, Sound Spectrography, Time Factors
8.
Atten Percept Psychophys ; 72(2): 470-80, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20139460

ABSTRACT

Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds (extensively edited samples produced by a French horn and a tenor saxophone) following either resynthesized speech or a short passage of music. Preceding contexts were "colored" by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.
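
A minimal sketch of the general idea behind a spectral envelope difference filter: the smoothed long-term spectra of two sources are subtracted and the difference is imposed on a precursor in the frequency domain; the synthetic stand-in signals and smoothing are assumptions, not the published filter construction:

# Sketch: build a "difference filter" from two sources' coarse spectral
# envelopes and apply it to a precursor via FFT-domain gain.
import numpy as np

def envelope_db(x, sr, n_fft=2048, width=9):
    """Coarse long-term spectral envelope in dB on the rfft grid of n_fft."""
    mag = np.abs(np.fft.rfft(x, n_fft)) + 1e-12
    return np.convolve(20 * np.log10(mag), np.ones(width) / width, mode="same")

rng = np.random.default_rng(5)
sr = 16000
horn_like = np.convolve(rng.standard_normal(sr), np.ones(16) / 16, mode="same")  # darker stand-in
sax_like  = rng.standard_normal(sr)                                              # brighter stand-in
precursor = rng.standard_normal(sr)

diff_db = envelope_db(horn_like, sr) - envelope_db(sax_like, sr)   # horn-minus-sax envelope
f_env   = np.fft.rfftfreq(2048, 1 / sr)

# Impose the difference filter on the precursor in the frequency domain.
spec  = np.fft.rfft(precursor)
f_sig = np.fft.rfftfreq(precursor.size, 1 / sr)
gain  = 10 ** (np.interp(f_sig, f_env, diff_db) / 20)
colored_precursor = np.fft.irfft(spec * gain, n=precursor.size)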


Subject(s)
Association, Auditory Perception, Color Perception, Music, Phonetics, Sound Spectrography, Speech Perception, Adolescent, Female, Humans, Male, Practice, Psychological, Psychoacoustics, Young Adult
9.
Ear Hear ; 31(1): 115-25, 2010 Feb.
Article in English | MEDLINE | ID: mdl-19816181

ABSTRACT

OBJECTIVES: (1) To determine whether high-frequency sensorineural hearing loss (HF SNHL) is accompanied by deterioration in temporal resolution in the low-frequency region where hearing sensitivity is within the normal range. (2) To evaluate whether such temporal processing deficits contribute to speech perception difficulty in noise. DESIGN: A between-group design was employed, using subjects either with or without high-frequency hearing loss, matched by age. Temporal resolution was evaluated in amplitude modulation (AM) detection and gap detection tasks. To restrict evaluation to the low-frequency regions where auditory sensitivity was virtually normal, low-pass noise carriers (for AM detection) and gap markers (for gap detection) were used. The impact of temporal processing deficits on speech perception was evaluated using hearing in noise tests (HINT) with varied time compression rates of the speech materials. RESULTS: Adults with high-frequency hearing loss showed poorer performance than the age-matched normal-hearing subjects on both the AM and gap detection tasks, even though the stimuli were restricted to regions of observed normal sensitivity. With increasing time compression, listeners with HF SNHL required a larger signal-to-noise ratio to maintain accuracy in speech perception in the adaptive HINT and exhibited a larger decrease in score for HINTs at a fixed signal-to-noise ratio. Multiple regression/correlation analyses showed significant correlations between scores on the AM/gap detection tasks and the HINTs. CONCLUSIONS: Temporal resolution in the low-frequency region with near-normal sensitivity appears to be degraded in subjects with HF SNHL. These listeners were also more sensitive to increases in speech rate, suggesting that poorer temporal processing may be related to speech perception deficits in noise.
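
A minimal sketch of a low-frequency AM-detection stimulus of the kind described, sinusoidally amplitude-modulated low-pass noise; cutoff, modulation rate, and depth are placeholder values rather than the study's parameters:

# Sketch: sinusoidally amplitude-modulated low-pass noise for an AM-detection
# task confined to the low-frequency region.
import numpy as np
from scipy.signal import butter, filtfilt

sr, dur = 22050, 0.5
t = np.arange(int(sr * dur)) / sr
rng = np.random.default_rng(6)

b, a = butter(4, 1000.0 / (sr / 2))            # low-pass carrier below ~1 kHz (assumed)
carrier = filtfilt(b, a, rng.standard_normal(t.size))

fm, m = 8.0, 0.5                               # modulation rate (Hz) and depth (assumed)
modulated = carrier * (1 + m * np.sin(2 * np.pi * fm * t))
modulated /= np.max(np.abs(modulated))         # normalize for presentation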


Subject(s)
Audiometry, Speech/methods, Hearing Loss, High-Frequency/diagnosis, Perceptual Masking, Adult, Auditory Threshold, Female, Humans, Male, Middle Aged, Reference Values, Sound Spectrography, Time Perception, Young Adult
10.
J Fluency Disord ; 33(2): 99-119, 2008.
Article in English | MEDLINE | ID: mdl-18617051

ABSTRACT

UNLABELLED: A multiple single-subject design was used to examine the effects of SpeechEasy on stuttering frequency in the laboratory and in longitudinal samples of speech produced in situations of daily living (SDL). Seven adults who stutter participated, all of whom had exhibited at least 30% reduction in stuttering frequency while using SpeechEasy during previous laboratory assessments. For each participant, speech samples recorded in the laboratory and SDL during device use were compared to samples obtained in those settings without the device. In SDL, stuttering frequencies were recorded weekly for 9-16 weeks during face-to-face and phone conversations. Participants also provided data regarding device tolerance and perceived benefits. Laboratory assessments were conducted at the beginning and the end of the longitudinal data collection in SDL. All seven participants exhibited reduced stuttering in self-formulated speech in the Device compared to No-device condition during the first laboratory assessment. In the second laboratory assessment, four participants exhibited less stuttering and three exhibited more stuttering with the device than without. In SDL, five of seven participants exhibited some instances of reduced stuttering when wearing the device and three of these exhibited relatively stable amounts of stuttering reduction during long-term use. Five participants reported positive changes in speaking-related attitudes and perceptions of stuttering. Further investigation into the short- and long-term effectiveness of SpeechEasy in SDL is warranted. EDUCATIONAL OBJECTIVES: The reader will be able to summarize: (1) issues pertinent to evaluating treatment benefits of wearable fluency aids and evaluate (2) the effect of SpeechEasy on stuttering frequency and the perceived benefits of device use in situations of daily living, as assessed weekly over the course of 9-16 weeks of wear, for seven adults who stutter.


Subject(s)
Speech Therapy/instrumentation, Stuttering/therapy, Adult, Equipment Design, Female, Humans, Male, Middle Aged, Treatment Outcome
11.
J Fluency Disord ; 33(2): 120-34, 2008.
Article in English | MEDLINE | ID: mdl-18617052

ABSTRACT

UNLABELLED: The effects of SpeechEasy on stuttering frequency, stuttering severity self-ratings, speech rate, and speech naturalness for 31 adults who stutter were examined. Speech measures were compared for samples obtained with and without the device in place in a dispensing setting. Mean stuttering frequencies were reduced by 79% and 61% for the device compared to the control conditions on reading and monologue tasks, respectively. Mean severity self-ratings decreased by 3.5 points for oral reading and 2.7 for monologue on a 9-point scale. Despite dramatic reductions in stuttering frequency, mean global speech rates in the device condition increased by only 8% in the reading task and 15% for the monologue task, and were well below normal. Further, complete elimination of stuttering was not associated with normalized speech rates. Nevertheless, mean ratings of speech naturalness improved markedly in the device compared to the control condition and, at 3.3 and 3.2 for reading and monologue, respectively, were only slightly outside the normal range. These results show that SpeechEasy produced improved speech outcomes in an assessment setting. However, the findings raise the issue of a possible contribution of slowed speech rate to the stuttering reduction effect, especially given that participants were instructed to speak chorally with the delayed signal as part of the device protocol's active-listening procedure. Study of device effects in situations of daily living over the long term is necessary to fully explore its treatment potential, especially with respect to long-term stability. EDUCATIONAL OBJECTIVES: The reader will be able to discuss and evaluate: (1) issues pertinent to evaluating treatment benefits of fluency aids and (2) the effects of SpeechEasy on stuttering frequency, speech rate, and speech naturalness during testing in a dispensing setting for a relatively large sample of adults who stutter.


Subject(s)
Speech Therapy/instrumentation, Stuttering/therapy, Verbal Behavior, Adolescent, Adult, Equipment Design, Female, Humans, Male, Middle Aged, Speech Production Measurement, Treatment Outcome
12.
J Acoust Soc Am ; 123(1): 366-76, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18177166

ABSTRACT

Several experiments are described in which synthetic monophthongs from series varying between /i/ and /u/ are presented following filtered precursors. In addition to F(2), target stimuli vary in spectral tilt by applying a filter that either raises or lowers the amplitudes of higher formants. Previous studies have shown that both of these spectral properties contribute to identification of these stimuli in isolation. However, in the present experiments we show that when a precursor sentence is processed by the same filter used to adjust spectral tilt in the target stimulus, listeners identify synthetic vowels on the basis of F(2) alone. Conversely, when the precursor sentence is processed by a single-pole filter with center frequency and bandwidth identical to that of the F(2) peak of the following vowel, listeners identify synthetic vowels on the basis of spectral tilt alone. These results show that listeners ignore spectral details that are unchanged in the acoustic context. Instead of identifying vowels on the basis of incorrect acoustic information, however (e.g., all vowels are heard as /i/ when second formant is perceptually ignored), listeners discriminate the vowel stimuli on the basis of the more informative spectral property.
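
A minimal sketch of a spectral-tilt manipulation: a gain that changes linearly in dB per octave above a reference frequency, applied in the frequency domain to a target vowel (and, to color the context, to a precursor); the reference frequency and slope are placeholders, not the stimulus values used here:

# Sketch of a spectral-tilt filter: dB-per-octave gain above a reference
# frequency, applied via the FFT. The same filter can be applied to a
# precursor sentence to "color" the listening context.
import numpy as np

def apply_tilt(x, sr, db_per_octave, f_ref=500.0):
    spec = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1 / sr)
    octaves = np.zeros_like(f)
    above = f > f_ref
    octaves[above] = np.log2(f[above] / f_ref)      # octaves above the reference
    gain = 10 ** (db_per_octave * octaves / 20)
    return np.fft.irfft(spec * gain, n=x.size)

rng = np.random.default_rng(7)
sr = 16000
vowel = rng.standard_normal(sr // 2)                # stand-in for a synthetic vowel
raised  = apply_tilt(vowel, sr, +6.0)               # boost higher formants
lowered = apply_tilt(vowel, sr, -6.0)               # attenuate higher formants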


Subject(s)
Phonetics, Speech Perception/physiology, Adult, Humans, Male, Sound Spectrography, Speech Acoustics, Voice, Voice Quality
13.
J Commun Disord ; 41(1): 33-48, 2008.
Article in English | MEDLINE | ID: mdl-17418860

ABSTRACT

UNLABELLED: The effects of choral speech and altered auditory feedback (AAF) on stuttering frequency were compared to identify those properties of choral speech that make it a more effective condition for stuttering reduction. Seventeen adults who stutter (AWS) participated in an experiment consisting of special choral speech conditions that were manipulated to selectively eliminate specific differences between choral speech and AAF. Consistent with previous findings, results showed that both choral speech and AAF reduced stuttering compared to solo reading. Although reductions under AAF were substantial, they were less dramatic than those for choral speech. Stuttering reduction for choral speech was highly robust even when the accompanist's voice temporally lagged that of the AWS, when there was no opportunity for dynamic interplay between the AWS and accompanist, and when the accompanist was replaced by the AWS's own voice, all of which approximate specific features of AAF. Choral speech was also highly effective in reducing stuttering across changes in speech rate and for both familiar and unfamiliar passages. We concluded that differences in properties between choral speech and AAF other than those that were manipulated in this experiment must account for differences in stuttering reduction. LEARNING OUTCOMES: The reader will be able to (1) describe differences in stuttering reduction associated with altered auditory feedback compared to choral speech conditions and (2) describe differences between delivery of a second voice signal as an altered rendition of the speaker's own voice (altered auditory feedback) and alterations in the voice of an accompanist (choral speech).
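
The simplest altered-auditory-feedback manipulation, a pure delay of the talker's own signal, can be sketched as a delay line; commercial devices such as SpeechEasy combine delay with frequency shifting, and the delay value below is a placeholder:

# Sketch: delayed auditory feedback as a simple delay line applied to the
# talker's own signal. The 60-ms delay is a placeholder value.
import numpy as np

def delayed_feedback(x, sr, delay_ms=60.0):
    d = int(sr * delay_ms / 1000.0)
    out = np.zeros(x.size + d)
    out[d:] = x                      # the listener hears their speech d samples late
    return out

sr = 16000
rng = np.random.default_rng(8)
mic = rng.standard_normal(sr)        # stand-in for one second of the talker's speech
fed_back = delayed_feedback(mic, sr)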


Subject(s)
Acoustic Stimulation, Feedback, Speech Perception, Speech Therapy/methods, Stuttering/therapy, Adult, Female, Humans, Male, Middle Aged, Reading
14.
J Fluency Disord ; 31(2): 137-52, 2006.
Article in English | MEDLINE | ID: mdl-16753207

ABSTRACT

UNLABELLED: The effect of SpeechEasy on stuttering frequency during speech produced in a laboratory setting was examined. Thirteen adults who stutter participated. Stuttering frequencies in two baseline conditions were compared to stuttering frequencies with the device fitted according to the manufacturer's protocol. The fitting protocol includes instructions for deliberate use of vowel prolongation. Relative to the initial baseline condition, stuttering was reduced by 74%, 36%, and 49% for reading, monologue, and conversation, respectively. In comparison, stuttering was reduced by 42%, 30%, and 36%, respectively with the device in place, but before participants were instructed to deliberately prolong vowels. Examination of individual response profiles revealed that although stuttering reduced in the device compared to the baseline conditions during at least one of three speech tasks for most participants, degree and pattern of benefit varied greatly across participants. EDUCATIONAL OBJECTIVES: The reader will be able to: (1) discuss recent research in altered auditory feedback that led to the development of SpeechEasy, (2) analyze and describe issues related to evaluating the treatment benefits of fluency aids, and (3) summarize the range of outcomes that were observed with SpeechEasy in this study.


Subject(s)
Speech Therapy/instrumentation, Speech Therapy/methods, Stuttering/therapy, Verbal Behavior, Adult, Communication, Female, Humans, Male, Middle Aged, Reading, Task Performance and Analysis, Therapy, Computer-Assisted/instrumentation, Treatment Outcome
15.
J Acoust Soc Am ; 118(4): 2599-606, 2005 Oct.
Article in English | MEDLINE | ID: mdl-16266180

ABSTRACT

Among the most influential publications in speech perception is Liberman, Delattre, and Cooper's [Am. J. Psychol. 65, 497-516 (1952)] report on the identification of synthetic, voiceless stops generated by the Pattern Playback. Their map of stop consonant identification shows a highly complex relationship between acoustics and perception. This complex mapping poses a challenge to many classes of relatively simple pattern recognition models, which are unable to capture the original finding of Liberman et al. that identification of /k/ was bimodal for bursts preceding front vowels but otherwise unimodal. A replication of this experiment was conducted in an attempt to reproduce these identification patterns using a simulation of the Pattern Playback device. Examination of spectrographic data from stimuli generated by the Pattern Playback revealed additional spectral peaks that are consistent with harmonic distortion characteristic of tube amplifiers of that era. Only when harmonic distortion was introduced did bimodal /k/ responses in front-vowel context emerge. The acoustic consequence of this distortion is to add, e.g., a high-frequency peak to midfrequency bursts or a midfrequency peak to a low-frequency burst. This likely resulted in additional /k/ responses when the second peak approximated the second formant of front vowels. Although these results do not challenge the main observations made by Liberman et al. that perception of stop bursts is context dependent, they do show that the mapping from acoustics to perception is much less complex without these additional distortion products.
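
A minimal sketch of how low-order harmonic distortion of the kind attributed to tube amplifiers can be simulated with a memoryless nonlinearity, so that a burst component at frequency f acquires added energy at 2f and 3f; the polynomial coefficients are arbitrary choices, not measurements of the Pattern Playback:

# Sketch: harmonic distortion via a memoryless nonlinearity; a component at
# frequency f gains energy at 2f and 3f.
import numpy as np

def distort(x, a2=0.15, a3=0.05):
    return x + a2 * x ** 2 + a3 * x ** 3        # mild 2nd/3rd-order distortion

sr = 10000
t = np.arange(int(0.05 * sr)) / sr
burst = np.sin(2 * np.pi * 1400 * t)             # a mid-frequency burst component
distorted = distort(burst)

spectrum = np.abs(np.fft.rfft(distorted * np.hanning(t.size)))
for f in (1400, 2800, 4200):                     # fundamental and added harmonics
    k = round(f * t.size / sr)
    print(f, "Hz:", spectrum[k])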


Subject(s)
Phonetics, Speech Perception/physiology, Acoustic Stimulation, Audiometry, Speech, Female, Humans, Male, Models, Biological, Software, Sound Spectrography
16.
J Acoust Soc Am ; 117(3 Pt 1): 1395-404, 2005 Mar.
Article in English | MEDLINE | ID: mdl-15807027

ABSTRACT

Ito et al. [J. Acoust. Soc. Am. 110, 1141-1149 (2001)] demonstrated that listeners can reliably identify vowel stimuli on the basis of relative formant amplitude in the absence of, or in spite of, F2 peak frequency. In the present study, formant frequencies and global spectral tilt are manipulated independently in synthetic steady-state vowels. Listeners' identification of these sounds demonstrates strong perceptual effects for both local (formant frequency) and global (spectral tilt) acoustic characteristics. Subsequent experiments reveal that effects of spectral tilt are attenuated in synthetic stimuli for which formant center frequencies change continuously. When formant peaks are kinematic, perceptual salience of the relative amplitudes of low- and high-frequency formants (as determined by spectral tilt) is mitigated. Because naturally produced English vowels are rarely spectrally static, one may conclude that gross spectral properties may play only a limited role in perception of fluently produced vowel sounds.


Subject(s)
Speech Acoustics, Speech Perception/physiology, Humans, Phonetics, Sound Spectrography
17.
Otol Neurotol ; 25(6): 903-9, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15547418

ABSTRACT

HYPOTHESIS: Ossiculoplasty using prosthetic reconstruction with a malleus assembly to the stapes head will result in better transmission of vibrations from the eardrum to the stapes footplate than reconstruction with a tympanic membrane assembly to the stapes head. Both types of reconstruction will be affected by tension of the prosthesis. BACKGROUND: Theories (and some clinical studies) that the shape of the normal tympanic membrane is important suggest that prosthetic reconstruction to the malleus performs better than reconstruction to the tympanic membrane. This has not been previously tested by directly measuring vibration responses in the human ear. Our previous work suggests that tympanic membrane assembly to the stapes head type prostheses performed best under low tension. This had not been previously tested for malleus assembly to the stapes head type prostheses. METHODS: Hydroxyapatite prostheses were used to reconstruct a missing incus defect in a fresh cadaveric human ear model. Two types of prostheses were used, one from the stapes head to the malleus (malleus assembly to the stapes head), the other from the stapes head to the tympanic membrane (tympanic membrane assembly to the stapes head). Stapes footplate center responses were measured using a laser Doppler vibrometer in response to calibrated acoustic frequency sweeps. RESULTS: Tension had a very significant effect on both types of prostheses in the lower frequencies. Loose tension was best overall. The malleus assembly to the stapes head type prostheses consistently performed better than the tympanic membrane assembly to the stapes head type prostheses when stratified for tension. CONCLUSION: Tension has a significant effect on prosthesis function. Malleus assembly to the stapes head type prostheses generally result in better transmission of vibrations to the stapes footplate than tympanic membrane assembly to the stapes head type prostheses.


Subject(s)
Incus, Malleus/surgery, Ossicular Prosthesis, Reconstructive Surgical Procedures/methods, Stapes Surgery/methods, Tympanic Membrane/surgery, Biomechanical Phenomena, Cadaver, Humans, Treatment Outcome
18.
Laryngoscope ; 114(2): 305-8, 2004 Feb.
Article in English | MEDLINE | ID: mdl-14755209

ABSTRACT

OBJECTIVE: Hearing results from ossiculoplasty are unpredictable. There are many potentially modifiable parameters. One parameter that has not been adequately investigated in the past is the effect of tension on the mechanical functioning of the prosthesis. Our goal was to investigate this parameter further, with the hypothesis that the mechanical functioning of partial ossicular replacement prostheses (PORP) from the stapes head to the eardrum will be affected by the tension that they are placed under. METHODS: Fresh temporal bones were used to reconstruct a missing incus defect with a PORP-type prosthesis. Three different lengths of PORP were used, and the stapes vibrations were measured with a laser Doppler vibrometer using a calibrated standard sound in the ear canal. Eight temporal bones were used. RESULTS: Tension had a very significant effect on stapes vibration. In general, loose prostheses resulted in the best overall vibration transmission. The effects were most marked at the lower frequencies. There was a slight advantage to tight prostheses in the higher frequencies, but much less than the decrement in lower frequencies with tight prostheses. CONCLUSION: In ossicular reconstruction, best stapes vibration results in our model are achieved by shorter prostheses, which result in lower tension.


Subject(s)
Ear, Middle/surgery, Ossicular Replacement/methods, Cadaver, Humans, Ossicular Prosthesis, Vibration
19.
Speech Commun ; 41(1): 59-69, 2003 Aug.
Article in English | MEDLINE | ID: mdl-28747807

ABSTRACT

Perceptual systems in all modalities are predominantly sensitive to stimulus change, and many examples of perceptual systems responding to change can be portrayed as instances of enhancing contrast. Multiple findings from perception experiments serve as evidence for spectral contrast explaining fundamental aspects of perception of coarticulated speech, and these findings are consistent with a broad array of known psychoacoustic and neurophysiological phenomena. Beyond coarticulation, important characteristics of speech perception that extend across broader spectral and temporal ranges may best be accounted for by the constant calibration of perceptual systems to maximize sensitivity to change.

20.
J Acoust Soc Am ; 111(5 Pt 1): 2213-8, 2002 May.
Article in English | MEDLINE | ID: mdl-12051441

ABSTRACT

Previous studies of auditory-nerve fiber (ANF) representation of vowels in cats and rodents (chinchillas and guinea pigs) have shown that, at amplitudes typical for conversational speech (60-70 dB), neuronal firing rate as a function of characteristic frequency alone provides a poor representation of spectral prominences (e.g., formants) of speech sounds. However, ANF rate representations may not be as inadequate as they appear. Here, it is investigated whether some of this apparent inadequacy owes to the mismatch between animal and human cochlear characteristics. For all animal models tested in earlier studies, the basilar membrane is shorter and encompasses a broader range of frequencies than that of humans. In this study, a customized speech synthesizer was used to create a rendition of the vowel [E] with formant spacing and bandwidths that fit the cat cochlea in proportion to the human cochlea. In these vowels, the spectral envelope is matched to cochlear distance rather than to frequency. Recordings of responses to this cochlear normalized [E] in auditory-nerve fibers of cats demonstrate that rate-based encoding of vowel sounds is capable of distinguishing spectral prominences even at 70-80-dB SPL. When cochlear dimensions are taken into account, rate encoding in ANF appears more informative than was previously believed.
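
The cochlear matching described above can be illustrated with the Greenwood frequency-position function: a formant is converted to a proportional place along the human cochlea and mapped back to the frequency occupying the same proportional place in the cat; the species constants below are commonly cited Greenwood (1990) values and the formant frequencies are illustrative, so treat this only as a sketch of the idea:

# Sketch: map formant frequencies to proportional cochlear place with the
# Greenwood function for humans, then back to frequency with the cat's
# constants, so the spectral envelope occupies matching cochlear distances.
# Constants are approximate, commonly cited Greenwood (1990) values.
import numpy as np

SPECIES = {"human": dict(A=165.4, a=2.1, k=0.88),
           "cat":   dict(A=456.0, a=2.1, k=0.80)}

def place_from_freq(f, sp):
    p = SPECIES[sp]
    return np.log10(f / p["A"] + p["k"]) / p["a"]      # proportional distance from apex

def freq_from_place(x, sp):
    p = SPECIES[sp]
    return p["A"] * (10 ** (p["a"] * x) - p["k"])

human_formants = np.array([530.0, 1840.0, 2480.0])      # illustrative [E]-like F1-F3 (Hz)
cat_formants = freq_from_place(place_from_freq(human_formants, "human"), "cat")
print(np.round(cat_formants))                           # formants shifted up for the cat cochlea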


Subject(s)
Cochlea/physiology, Cochlear Nerve/physiology, Speech Perception/physiology, Acoustics, Animals, Cats, Phonetics