1.
Ear Hear; 22(3): 225-35, 2001 Jun.
Article in English | MEDLINE | ID: mdl-11409858

ABSTRACT

OBJECTIVE: The objective of this study was to compare the effects of a single-band envelope cue as a supplement to speechreading of segmentals and sentences when presented through either the auditory or tactual modality.

DESIGN: The supplementary signal, which consisted of a 200-Hz carrier amplitude-modulated by the envelope of an octave band of speech centered at 500 Hz, was presented through a high-performance single-channel vibrator for tactual stimulation or through headphones for auditory stimulation. Normal-hearing subjects were trained and tested on the identification of a set of 16 medial vowels in /b/-V-/d/ context and a set of 24 initial consonants in C-/a/-C context under five conditions: speechreading alone (S), auditory supplement alone (A), tactual supplement alone (T), speechreading combined with the auditory supplement (S+A), and speechreading combined with the tactual supplement (S+T). Performance on various speech features was examined to determine the contribution of different features toward improvements under the aided conditions for each modality. Performance on the combined conditions (S+A and S+T) was compared with predictions generated from a quantitative model of multi-modal performance. To explore the relationship between benefits for segmentals and for connected speech within the same subjects, sentence reception was also examined for the three conditions of S, S+A, and S+T.

RESULTS: For segmentals, performance generally followed the pattern of T < A < S < S+T < S+A. Significant improvements to speechreading were observed with both the tactual and auditory supplements for consonants (10 and 23 percentage-point improvements, respectively), but only with the auditory supplement for vowels (a 10 percentage-point improvement). The results of the feature analyses indicated that improvements to speechreading arose primarily from improved performance on the features low and tense for vowels and on the features voicing, nasality, and plosion for consonants. These improvements were greater for auditory relative to tactual presentation. When predicted percent-correct scores for the multi-modal conditions were compared with observed scores, the predicted values always exceeded observed values and the predictions were somewhat more accurate for the S+A than for the S+T conditions. For sentences, significant improvements to speechreading were observed with both the auditory and tactual supplements for high-context materials but again only with the auditory supplement for low-context materials. The tactual supplement provided a relative gain to speechreading of roughly 25% for all materials except low-context sentences (where gain was only 10%), whereas the auditory supplement provided relative gains of roughly 50% (for vowels, consonants, and low-context sentences) to 75% (for high-context sentences).

CONCLUSIONS: The envelope cue provides a significant benefit to the speechreading of consonant segments when presented through either the auditory or tactual modality and of vowel segments through audition only. These benefits were found to be related to the reception of the same types of features under both modalities (voicing, manner, and plosion for consonants and low and tense for vowels); however, benefits were larger for auditory compared with tactual presentation. The benefits observed for segmentals appear to carry over into benefits for sentence reception under both modalities.
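The relative-gain figures quoted in the RESULTS section follow from a simple normalization. A minimal sketch of that arithmetic, assuming the conventional definition of relative gain (benefit over speechreading alone divided by the room left for improvement); the scores used below are hypothetical round numbers, not data from the study:

```python
def relative_gain(speechreading_only: float, aided: float) -> float:
    """Relative gain: improvement over speechreading alone, normalized by the
    room left for improvement (both scores in percent correct). This is one
    conventional definition and may differ in detail from the study's."""
    return (aided - speechreading_only) / (100.0 - speechreading_only)

# Hypothetical illustration: if speechreading alone yields 60% of key words in
# high-context sentences and adding the auditory supplement raises that to 90%,
# the relative gain is (90 - 60) / (100 - 60) = 0.75, i.e. the ~75% figure
# quoted above.
print(relative_gain(60.0, 90.0))  # 0.75
```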


Subject(s)
Cues, Lipreading, Speech Perception, Touch, Adult, Auditory Perception, Female, Humans, Phonetics, Visual Perception
2.
J Speech Lang Hear Res; 42(3): 568-82, 1999 Jun.
Article in English | MEDLINE | ID: mdl-10391623

ABSTRACT

Previous research on the visual reception of fingerspelled English suggests that communication rates are limited primarily by constraints on production. Studies of artificially accelerated fingerspelling indicate that reception of fingerspelled sentences is highly accurate for rates up to 2 to 3 times those that can be produced naturally. The current paper reports on the results of a comparable study of the reception of American Sign Language (ASL). Fourteen native deaf ASL signers participated in an experiment in which videotaped productions of isolated ASL signs or ASL sentences were presented at normal playback speed and at speeds of 2, 3, 4, and 6 times normal speed. For isolated signs, identification scores decreased from 95% correct to 46% correct across the range of rates that were tested; for sentences, the ability to identify key signs decreased from 88% to 19% over the range of rates tested. The results indicate a breakdown in processing at around 2.5-3 times the normal rate as evidenced both by a substantial drop in intelligibility in this region and by a shift in error patterns away from semantic and toward formational. These results parallel those obtained in previous studies of the intelligibility of the auditory reception of time-compressed speech and the visual reception of accelerated fingerspelling. Taken together, these results suggest a modality-independent upper limit to language processing.


Subject(s)
Sign Language, Visual Perception/physiology, Adult, Deafness, Female, Humans, Male, Middle Aged, Phonetics, Semantics, Time Factors, Videotape Recording
3.
J Speech Hear Res; 38(2): 477-89, 1995 Apr.
Article in English | MEDLINE | ID: mdl-7596113

ABSTRACT

One of the natural methods of tactual communication in common use among individuals who are both deaf and blind is the tactual reception of sign language. In this method, the receiver (who is deaf-blind) places a hand (or hands) on the dominant (or both) hand(s) of the signer in order to receive, through the tactual sense, the various formational properties associated with signs. In the study reported here, 10 experienced deaf-blind users of either American Sign Language (ASL) or Pidgin Sign English (PSE) participated in experiments to determine their ability to receive signed materials including isolated signs and sentences. A set of 122 isolated signs was received with an average accuracy of 87% correct. The most frequent type of error made in identifying isolated signs was related to misperception of individual phonological components of signs. For presentation of signed sentences (translations of the English CID sentences into ASL or PSE), the performance of individual subjects ranged from 60-85% correct reception of key signs. Performance on sentences was relatively independent of rate of presentation in signs/sec, which covered a range of roughly 1 to 3 signs/sec. Sentence errors were accounted for primarily by deletions and phonological and semantic/syntactic substitutions. Experimental results are discussed in terms of differences in performance for isolated signs and sentences, differences in error patterns for the ASL and PSE groups, and communication rates relative to visual reception of sign language and other natural methods of tactual communication.


Subject(s)
Blindness/complications, Communication, Deafness/complications, Sign Language, Touch, Adolescent, Adult, Humans, Middle Aged
4.
J Rehabil Res Dev; 31(1): 20-41, 1994.
Article in English | MEDLINE | ID: mdl-8035358

ABSTRACT

Although great strides have been made in the development of automatic speech recognition (ASR) systems, the communication performance achievable with the output of current real-time speech recognition systems would be extremely poor relative to normal speech reception. An alternate application of ASR technology to aid the hearing impaired would derive cues from the acoustical speech signal that could be used to supplement speechreading. We report a study of highly trained receivers of Manual Cued Speech that indicates that nearly perfect reception of everyday connected speech materials can be achieved at near normal speaking rates. To understand the accuracy that might be achieved with automatically generated cues, we measured how well trained spectrogram readers and an automatic speech recognizer could assign cues for various cue systems. We then applied a recently developed model of audiovisual integration to these recognizer measurements and data on human recognition of consonant and vowel segments via speechreading to evaluate the benefit to speechreading provided by such cues. Our analysis suggests that with cues derived from current recognizers, consonant and vowel segments can be received with accuracies in excess of 80%. This level of performance is roughly equivalent to the segment reception accuracy required to account for observed levels of Manual Cued Speech reception. Current recognizers provide maximal benefit by generating only a relatively small number (three to five) of cue groups, and may not provide substantially greater aid to speechreading than simpler aids that do not incorporate discrete phonetic recognition. To provide guidance for the development of improved automatic cueing systems, we describe techniques for determining optimum cue groups for a given recognizer and speechreader, and estimate the cueing performance that might be achieved if the performance of current recognizers were improved.
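The notion of sorting phonemes into a small number of cue groups can be illustrated with a toy procedure. This is only an illustrative sketch under assumed data, not the optimization technique described in the paper: it assumes the goal is to separate phonemes that a speechreader tends to confuse visually, and the phoneme set and confusion pairs below are hypothetical.

```python
# Illustrative sketch: assign phonemes to a small number of cue groups so that
# visually confusable phonemes (those a speechreader cannot tell apart on the
# lips) fall into different groups. Hypothetical data, greedy assignment.
def assign_cue_groups(phonemes, confusable, n_groups=3):
    groups = [set() for _ in range(n_groups)]
    for ph in phonemes:
        # Place the phoneme in the first group with no visually confusable member.
        for g in groups:
            if all((ph, other) not in confusable and (other, ph) not in confusable
                   for other in g):
                g.add(ph)
                break
        else:
            # No conflict-free group: fall back to the smallest group.
            groups[min(range(n_groups), key=lambda i: len(groups[i]))].add(ph)
    return groups

visually_confusable = {("p", "b"), ("p", "m"), ("b", "m"),
                       ("t", "d"), ("t", "n"), ("d", "n"),
                       ("k", "g")}
phonemes = ["p", "b", "m", "t", "d", "n", "k", "g"]
print(assign_cue_groups(phonemes, visually_confusable))
# e.g. [{'p', 't', 'k'}, {'b', 'd', 'g'}, {'m', 'n'}]
```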


Subject(s)
Communication Aids for Disabled, Hearing Loss/rehabilitation, Speech, Adolescent, Adult, Cues, Humans, Models, Theoretical, Phonetics, Speech Perception
5.
J Acoust Soc Am; 92(4 Pt 1): 1869-81, 1992 Oct.
Article in English | MEDLINE | ID: mdl-1401531

ABSTRACT

A comprehensive set of speech reception measures was obtained in a group of about 20 postlingually deafened adult users of the Ineraid multichannel cochlear implant. The measures included audio, visual, and audiovisual recognition of words embedded in two types of sentences (with differing degrees of difficulty) and audio-only recognition of isolated monosyllabic words, consonant identification (12 alternatives, /Ca/), and vowel identification (8 alternatives, /bVt/). For most implantees, the audiovisual gains in the sentence tests were very high. Quantitative relations among audio-only scores were assessed using power-law transformations suggested by Boothroyd and Nittrouer [J. Acoust. Soc. Am. 84, 101-114 (1988)] that can account for the benefit of sentence context (via a factor k) and the relation between word and phoneme recognition (via a factor j). Across the broad range of performance that existed among the subjects, substantial order was observed among measures of speech reception along the continuum from recognition of words in sentences, words in isolation, speech segments, and the retrieval of underlying phonetic features. Correlations exceeded 0.85 among direct and sentence-derived measures of isolated word recognition as well as among direct and word-derived measures of segmental recognition. Results from a variety of other studies involving presentation of limited auditory signals, single-channel and multichannel implants, and tactual systems revealed a similar pattern among word recognition, overall consonant identification performance, and consonantal feature recruitment. Finally, improving the reception of consonantal place cues was identified as key to producing the greatest potential gains in speech reception.
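The k and j transformations referred to here are usually written as simple power laws. A minimal sketch of both relations, following the forms given by Boothroyd and Nittrouer; the proficiency values below are hypothetical and chosen only for illustration:

```python
# Power-law relations of Boothroyd and Nittrouer (1988), as commonly stated:
#   word score from phoneme score:        p_word    = p_phoneme ** j
#   words-in-context from isolated words: p_context = 1 - (1 - p_isolated) ** k
# The j factor reflects how many quasi-independent parts a word has; the k
# factor reflects how much sentence context helps.

def word_from_phoneme(p_phoneme: float, j: float) -> float:
    return p_phoneme ** j

def context_from_isolated(p_isolated: float, k: float) -> float:
    return 1.0 - (1.0 - p_isolated) ** k

p_phoneme = 0.80  # hypothetical segment score
print(round(word_from_phoneme(p_phoneme, j=2.5), 2))      # ~0.57 isolated-word score
print(round(context_from_isolated(0.57, k=3.0), 2))       # ~0.92 words-in-sentences score
```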


Subject(s)
Cochlear Implants, Deafness/rehabilitation, Speech Reception Threshold Test, Adult, Aged, Aged, 80 and over, Female, Humans, Lipreading, Male, Middle Aged, Phonetics, Prosthesis Design, Speech Perception
6.
J Speech Hear Res; 35(2): 450-65, 1992 Apr.
Article in English | MEDLINE | ID: mdl-1533433

ABSTRACT

Although results obtained with the Tadoma method of speechreading have set a new standard for tactual speech communication, they are nevertheless inferior to those obtained in the normal auditory domain. Speech reception through Tadoma is comparable to that of normal-hearing subjects listening to speech under adverse conditions corresponding to a speech-to-noise ratio of roughly 0 dB. The goal of the current study was to demonstrate improvements to speech reception through Tadoma by means of supplementary tactual information, thus leading to a new standard of performance in the tactual domain. Three supplementary tactual displays were investigated: (a) an articulatory-based display of tongue contact with the hard palate; (b) a multichannel display of the short-term speech spectrum; and (c) tactual reception of Cued Speech. The ability of laboratory-trained subjects to discriminate pairs of speech segments that are highly confused through Tadoma was studied for each of these supplementary displays. Generally, discrimination tests were conducted for Tadoma alone, the supplementary display alone, and Tadoma combined with the supplementary tactual display. The results indicated that the tongue-palate contact display was an effective supplement to Tadoma for improving discrimination of consonants, but that neither the tongue-palate contact display nor the short-term spectral display was highly effective in improving vowel discriminability. For both vowel and consonant stimulus pairs, discriminability was nearly perfect for the tactual reception of the manual cues associated with Cued Speech. Further experiments on the identification of speech segments were conducted for Tadoma combined with Cued Speech. The observed data for both discrimination and identification experiments are compared with the predictions of models of integration of information from separate sources.


Subject(s)
Blindness/rehabilitation, Communication Aids for Disabled/standards, Deafness/rehabilitation, Therapy, Computer-Assisted/standards, Touch, Blindness/complications, Blindness/physiopathology, Cues, Deafness/complications, Deafness/physiopathology, Evaluation Studies as Topic, Facial Expression, Female, Humans, Male, Palate/physiology, Speech Discrimination Tests, Tongue/physiology
7.
J Speech Hear Res; 33(4): 786-97, 1990 Dec.
Article in English | MEDLINE | ID: mdl-2273891

ABSTRACT

A method of communication in frequent use among members of the deaf-blind community is the tactual reception of fingerspelling. In this method, the hand of the deaf-blind individual is placed on the hand of the sender to monitor the handshapes and movements associated with the letters of the manual alphabet. The purpose of the current study was to examine the ability of experienced deaf-blind subjects to receive fingerspelled materials, including sentences and connected text, through the tactual sense. A parallel study of the reception of fingerspelling through the visual sense was also conducted using sighted deaf subjects. For both visual and tactual reception of fingerspelled sentences, accuracy of reception was examined as a function of rate of presentation. In the tactual study, where rates were limited to those that could be produced naturally by an experienced interpreter, highly accurate reception of conversational sentence materials was observed throughout the range of naturally produced rates (i.e., 2 to 6 letters/s). In the visual study, rates in excess of those that can be produced naturally were achieved through variable-speed playback of videotapes of fingerspelled sentences. The results of this study indicate that performance varies systematically as a function of rate of presentation, with scores of 50% correct on conversational sentences obtained at rates of 12 to 16 letters/s (i.e., rates roughly double to triple normal speed). These results suggest that normal communication rates for the visual reception of fingerspelling are restricted by limitations on the rate of manual production. Although maximal rates of natural manual production of fingerspelling correspond to the presentation of a new handshape on the order of once every 150-200 ms, the data from the sped-up visual study suggest that experienced receivers of visual fingerspelling are able to receive sentences at substantially higher rates of fingerspelling (which are, in fact, comparable to communication rates for spoken English).
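The rate figures are easier to compare after converting between letters per second and milliseconds per handshape; a small sketch of that conversion, using the rates quoted in the abstract:

```python
def ms_per_handshape(letters_per_second: float) -> float:
    """Time available for each fingerspelled letter at a given rate."""
    return 1000.0 / letters_per_second

# Maximal natural production of roughly 5-6.5 letters/s corresponds to a new
# handshape about every 150-200 ms.
print(round(ms_per_handshape(6.0)))   # ~167 ms
# The 50%-correct point for visual reception fell at 12-16 letters/s, i.e.
# only about 60-85 ms per handshape.
print(round(ms_per_handshape(16.0)))  # ~63 ms
```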


Subject(s)
Blindness/physiopathology, Communication, Deafness/physiopathology, Sign Language, Speech Perception, Touch, Vision, Ocular, Adult, Aged, Aged, 80 and over, Evaluation Studies as Topic, Humans, Middle Aged, Videotape Recording
8.
Percept Psychophys; 46(1): 29-38, 1989 Jul.
Article in English | MEDLINE | ID: mdl-2755759

ABSTRACT

Experiments were conducted on length resolution for objects held between the thumb and forefinger. The just noticeable difference in length measured in discrimination experiments is roughly 1 mm for reference lengths of 10 to 20 mm. It increases monotonically with reference length but violates Weber's law. Also, it decreases when the subject is permitted to maintain a constant finger span between trials; however, it tends to increase when the nondominant hand is used. As would be expected from studies of other stimulus dimensions in other sense modalities, resolution is considerably poorer in identification experiments than in discrimination experiments. For stimulus sets that cover a broad range (90 mm), the total information transfer is roughly 2 bits; for those that cover a relatively small range (18 mm), it is roughly 1 bit. The data are analyzed and interpreted using analysis techniques and models that have been used previously in studies of audition (e.g., Durlach & Braida, 1969).
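The information-transfer figures cited here are estimates of the mutual information between stimulus and response computed from a confusion matrix. A minimal sketch of that computation, using a small hypothetical confusion matrix rather than the study's data:

```python
import numpy as np

def information_transfer_bits(confusion: np.ndarray) -> float:
    """Estimate information transfer (mutual information, in bits) from a
    stimulus-by-response confusion matrix of raw counts."""
    n = confusion.sum()
    p_joint = confusion / n
    p_stim = p_joint.sum(axis=1, keepdims=True)   # row marginals
    p_resp = p_joint.sum(axis=0, keepdims=True)   # column marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p_joint * np.log2(p_joint / (p_stim * p_resp))
    return float(np.nansum(terms))  # empty cells contribute zero

# Hypothetical 4-length identification experiment: mostly correct responses,
# with confusions between neighboring lengths.
confusion = np.array([
    [18,  2,  0,  0],
    [ 3, 15,  2,  0],
    [ 0,  2, 16,  2],
    [ 0,  0,  3, 17],
])
print(round(information_transfer_bits(confusion), 2))  # ~1.2 bits of a possible 2
```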


Subject(s)
Discrimination Learning, Size Perception, Stereognosis, Adult, Attention, Humans, Memory, Short-Term, Touch
9.
J Acoust Soc Am; 82(5): 1548-59, 1987 Nov.
Article in English | MEDLINE | ID: mdl-3693695

ABSTRACT

The goal of this study was to determine the extent to which the difficulty experienced by impaired listeners in understanding noisy speech can be explained on the basis of elevated tone-detection thresholds. Twenty-one impaired ears of 15 subjects, spanning a variety of audiometric configurations with average hearing losses of up to 75 dB, were tested for reception of consonants in a speech-spectrum noise. Speech level, noise level, and frequency-gain characteristic were varied to generate a range of listening conditions. Results for impaired listeners were compared to those of normal-hearing listeners tested under the same conditions with extra noise added to approximate the impaired listeners' detection thresholds. Results for impaired and normal listeners were also compared on the basis of articulation indices. Consonant recognition by this sample of impaired listeners was generally comparable to that of normal-hearing listeners with similar threshold shifts listening under the same conditions. When listening conditions were equated for articulation index, there was no clear dependence of consonant recognition on average hearing loss. Assuming that the primary consequence of the threshold simulation in normal-hearing listeners is loss of audibility (as opposed to suprathreshold discrimination or resolution deficits), it is concluded that the primary source of difficulty in listening in noise for listeners with moderate or milder hearing impairments, aside from the noise itself, is the loss of audibility.
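A short sketch of the band-audibility idea behind the articulation index, assuming one common formulation in which each band contributes its importance weight times an audibility factor that grows linearly over roughly a 30-dB range of effective band signal-to-noise ratio; the weights, band SNRs, and offset below are hypothetical and for illustration only:

```python
# Minimal sketch of an articulation-index (AI) style calculation: each band's
# contribution is its importance weight times an audibility factor clipped to
# [0, 1]. The +12 dB offset and 30 dB range follow one common formulation.
def band_audibility(snr_db: float) -> float:
    return min(max((snr_db + 12.0) / 30.0, 0.0), 1.0)

def articulation_index(band_snrs_db, band_weights):
    return sum(w * band_audibility(snr) for snr, w in zip(band_snrs_db, band_weights))

# Hypothetical five-band example (importance weights sum to 1); the upper
# bands are heavily masked by the noise.
weights = [0.10, 0.20, 0.30, 0.25, 0.15]
snrs_db = [15.0, 10.0, 0.0, -5.0, -15.0]
print(round(articulation_index(snrs_db, weights), 2))  # ~0.42
```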


Subject(s)
Hearing Loss, Sensorineural/physiopathology, Noise, Speech Perception, Acoustic Stimulation, Adult, Audiometry, Humans, Voice
10.
J Acoust Soc Am; 82(4): 1243-52, 1987 Oct.
Article in English | MEDLINE | ID: mdl-3680781

ABSTRACT

Experiments were conducted to determine the ability of subjects to identify vibrotactile stimuli presented to the distal pad of the middle finger. The stimulus sets varied along one or more of the following dimensions: intensity of vibration, frequency of vibration, and contactor area. Identification performance was measured by information transfer. One-dimensional stimulus sets produced values in the range 1-2 bits and, for most subjects, three-dimensional sets produced values in the range 4-5 bits. Of the three dimensions considered, performance on the intensity variable was most affected, and performance on contactor area least affected, by simultaneous variations in the other dimensions.


Subject(s)
Discrimination Learning, Touch, Vibration, Attention, Fingers, Humans
11.
J Acoust Soc Am; 81(4): 1085-92, 1987 Apr.
Article in English | MEDLINE | ID: mdl-3571725

ABSTRACT

This research is concerned with the ability of normal-hearing listeners to discriminate broadband signals on the basis of spectral shape. The signals were six broadband noises whose spectral shapes were modeled after the spectra of unvoiced fricative and plosive consonants. The difficulty of the discriminations was controlled by the addition of noise filtered to match the long-term speech spectrum. Two-interval discrimination measurements were made in which loudness cues were eliminated by randomizing (roving) the overall stimulus level between presentation intervals. Experimental results, examined as a function of intensity rove width, stimulus duration, and stimulus pair, were related to the predictions of a simple filter-bank model whose fitting parameter provides an estimate of internal noise. Most results, with the notable exception of duration effects, were predicted by the model. Estimates of internal noise in each frequency channel averaged roughly 7 dB for long-duration stimuli and 13 dB for short-duration stimuli. Results and predictions are compared to results of other studies concerned with the discrimination of spectral shape.
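A minimal sketch of a filter-bank model of this kind, under simplifying assumptions rather than the paper's exact formulation: each channel registers band level with independent Gaussian internal noise whose standard deviation is the fitting parameter, the overall-level rove is discounted by removing the across-channel mean level, and channel cues combine as the root-sum-of-squares of per-channel d' values. The band levels below are hypothetical.

```python
import numpy as np

def spectral_shape_dprime(levels_a_db, levels_b_db, sigma_db):
    """Predicted d' for discriminating two spectra from per-channel band levels.

    Illustrative assumptions: the overall-level rove is discounted by removing
    each spectrum's mean level across channels, each channel adds independent
    Gaussian internal noise of standard deviation sigma_db, and channels
    combine as the root-sum-of-squares of per-channel d'.
    """
    a = np.asarray(levels_a_db, dtype=float)
    b = np.asarray(levels_b_db, dtype=float)
    shape_a = a - a.mean()          # spectral shape, independent of overall level
    shape_b = b - b.mean()
    per_channel_dprime = (shape_a - shape_b) / sigma_db
    return float(np.sqrt(np.sum(per_channel_dprime ** 2)))

# Hypothetical band levels (dB) for two fricative-like spectra in six channels,
# with a 7-dB internal-noise estimate as quoted for long-duration stimuli.
spectrum_1 = [55, 60, 62, 58, 50, 45]
spectrum_2 = [50, 56, 60, 62, 58, 52]
print(round(spectral_shape_dprime(spectrum_1, spectrum_2, sigma_db=7.0), 2))  # ~1.8
```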


Subject(s)
Noise, Perceptual Masking, Pitch Discrimination, Humans, Loudness Perception, Phonetics, Psychoacoustics, Speech Perception