Results 1 - 20 of 92
1.
Science ; 212(4497): 947-9, 1981 May 22.
Article in English | MEDLINE | ID: mdl-7233191

ABSTRACT

A three-tone sinusoidal replica of a naturally produced utterance was identified by listeners, despite the readily apparent unnatural speech quality of the signal. The time-varying properties of these highly artificial acoustic signals are apparently sufficient to support perception of the linguistic message in the absence of traditional acoustic cues for phonetic segments.


Subject(s)
Phonetics , Speech Perception/physiology , Auditory Perception/physiology , Humans
2.
Science ; 222(4620): 175-7, 1983 Oct 14.
Article in English | MEDLINE | ID: mdl-6623067

ABSTRACT

Two-month-old infants discriminated complex sinusoidal patterns that varied in the duration of their initial frequency transitions. Discrimination of these nonspeech sinusoidal patterns was a function of both the duration of the transitions and the total duration of the stimulus pattern. This contextual effect was observed even though the information specifying stimulus duration occurred after the transitional information. These findings parallel those observed with infants for perception of synthetic speech stimuli. Specialized speech processing capacities are thus not required to account for infants' sensitivity to contextual effects in acoustic signals, whether speech or nonspeech.


Subject(s)
Infant , Speech Perception/physiology , Humans , Time Factors
3.
Laryngoscope ; 115(4): 595-600, 2005 Apr.
Article in English | MEDLINE | ID: mdl-15805866

ABSTRACT

OBJECTIVES/HYPOTHESIS: Individual speech and language outcomes of deaf children with cochlear implants (CIs) are quite varied. Individual differences in underlying cognitive functions may explain some of this variance. The current study investigated whether behavioral inhibition skills of deaf children were related to performance on a range of audiologic outcome measures. DESIGN: Retrospective analysis of longitudinal data collected from prelingually and profoundly deaf children who used CIs. METHODS: Behavioral inhibition skills were measured using a visual response delay task that did not require hearing. Speech and language measures were obtained from behavioral tests administered at 1-year intervals of CI use. RESULTS: Female subjects showed higher response delay scores than males. Performance increased with length of CI use. Younger children showed greater improvement in performance as a function of device use than older children. No other subject variable had a significant effect on response delay score. A series of multiple regression analyses revealed several significant relations between delay task performance and open set word recognition, vocabulary, receptive language, and expressive language scores. CONCLUSIONS: The present results suggest that CI experience affects visual information processing skills of prelingually deaf children. Furthermore, the observed pattern of relations suggests that speech and language processing skills are closely related to the development of response delay skills in prelingually deaf children with CIs. These relations may reflect underlying verbal encoding skills, subvocal rehearsal skills, and verbally mediated self-regulatory skills. Clinically, visual response delay tasks may be useful in assessing behavioral and cognitive development in deaf children after implantation.


Subject(s)
Child Behavior/classification , Cochlear Implants , Deafness/surgery , Inhibition, Psychological , Age Factors , Child , Child Development/physiology , Child Language , Child, Preschool , Deafness/psychology , Female , Follow-Up Studies , Humans , Longitudinal Studies , Male , Retrospective Studies , Sex Factors , Speech/physiology , Speech Intelligibility/physiology , Speech Perception/physiology , Treatment Outcome , Vocabulary
4.
Cognition ; 43(3): 253-91, 1992 Jun.
Article in English | MEDLINE | ID: mdl-1643815

ABSTRACT

The present study explores how stimulus variability in speech production influences the 2-month-old infant's perception and memory for speech sounds. Experiment 1 focuses on the consequences of talker variability for the infant's ability to detect differences between speech sounds. When tested with the high-amplitude sucking (HAS) procedure, infants who listened to versions of a syllable, such as [symbol: see text], produced by 6 male and 6 female talkers, detected a change to another syllable, such as [symbol: see text], uttered by the same group of talkers. In fact, infants exposed to multiple talkers performed as well as other infants who heard utterances produced by only a single talker. Moreover, other results showed that infants discriminate the voices of the individual talkers, although discriminating one mixed group of talkers (3 males and 3 females) from another is too difficult for them. Experiment 2 explored the consequences of talker variability on infants' memory for speech sounds. The HAS procedure was modified by introducing a 2-min delay period between the familiarization and test phases of the experiment. Talker variability impeded infants' encoding of speech sounds. Infants who heard versions of the same syllable produced by 12 different talkers did not detect a change to a new syllable produced by the same talkers after the delay period. However, infants who heard the same syllable produced by a single talker were able to detect the phonetic change after the delay. Finally, although infants who heard productions from a single talker retained information about the phonetic structure of the syllable during the delay, they apparently did not retain information about the identity of the talker. Experiment 3 reduced the range of variability across talkers and investigated whether variability interferes with retention of all speech information.
Although reducing the range of variability did not lead to retention of phonetic details, infants did recognize a change in the gender of the talkers' voices (from male to female or vice versa) after a 2-min delay. Two additional experiments explored the consequences of limiting the variability to a single talker. In Experiment 4, with an immediate testing procedure, infants exposed to 12 different tokens of one syllable produced by the same talker discriminated these from 12 tokens of another syllable.(ABSTRACT TRUNCATED AT 400 WORDS)


Subject(s)
Arousal , Attention , Phonetics , Psychology, Child , Speech Perception , Female , Humans , Infant , Language Development , Male
5.
J Exp Psychol Gen ; 122(3): 316-30, 1993 Sep.
Article in English | MEDLINE | ID: mdl-8371087

ABSTRACT

College students were separated into 2 groups (high and low) on the basis of 3 measures: subjective familiarity ratings of words, self-reported language experiences, and a test of vocabulary knowledge. Three experiments were conducted to determine if the groups also differed in visual word naming, lexical decision, and semantic categorization. High Ss were consistently faster than low Ss in naming visually presented words. They were also faster and more accurate in making difficult lexical decisions and in rejecting homophone foils in semantic categorization. Taken together, the results demonstrate that Ss who differ in lexical familiarity also differ in processing efficiency. The relationship between processing efficiency and working memory accounts of individual differences in language processing is also discussed.


Subject(s)
Individuality , Mental Recall , Reading , Semantics , Vocabulary , Adult , Attention , Female , Humans , Male , Reaction Time
6.
Ann N Y Acad Sci ; 405: 485-9, 1983.
Article in English | MEDLINE | ID: mdl-6575670

ABSTRACT

Recent perceptual experiments with normal adult listeners show that phonetic information can readily be conveyed by sinewave replicas of speech signals. These tonal patterns are made of three sinusoids set equal in frequency and amplitude to the respective peaks of the first three formants of natural-speech utterances. Unlike natural and most synthetic speech, the spectrum of sinusoidal patterns contains neither harmonics nor broadband formants, and is identified as grossly unnatural in voice timbre. Despite this drastic recoding of the short-time speech spectrum, listeners perceive the phonetic content if the temporal properties of spectrum variation are preserved. These observations suggest that phonetic perception may depend on properties of coherent spectrum variation, a second-order property of the acoustic signal, rather than any particular set of acoustic elements present in speech signals.
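The construction described in this abstract (three tones set equal in frequency and amplitude to the first three formant peaks) can be sketched as follows. This is a minimal illustration, not the authors' synthesis code; the 10-ms frame step, sampling rate, and linear interpolation between frames are assumptions for the sketch.

```python
import numpy as np

def sinewave_replica(formant_tracks, amp_tracks, sr=10000):
    """Sum three sinusoids whose frequencies and amplitudes follow
    the first three formant peaks over time.

    formant_tracks: array (n_frames, 3) of formant frequencies in Hz,
                    one row per 10-ms analysis frame
    amp_tracks:     array (n_frames, 3) of linear amplitudes
    """
    n_frames = formant_tracks.shape[0]
    frame_len = sr // 100                        # 10-ms frames (assumed)
    n_samples = n_frames * frame_len
    t_frames = np.arange(n_frames) * frame_len   # frame start samples
    t = np.arange(n_samples)
    signal = np.zeros(n_samples)
    for k in range(3):
        # interpolate each tone's frequency and amplitude per sample
        freq = np.interp(t, t_frames, formant_tracks[:, k])
        amp = np.interp(t, t_frames, amp_tracks[:, k])
        # integrate instantaneous frequency to get phase
        phase = 2.0 * np.pi * np.cumsum(freq) / sr
        signal += amp * np.sin(phase)
    return signal
```

Because the output is just three time-varying sinusoids, its spectrum contains neither harmonics nor broadband formants, matching the abstract's characterization of the stimuli.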


Subject(s)
Speech Acoustics , Speech Perception , Speech , Humans , Methods , Phonetics , Sound Spectrography , Speech Intelligibility
7.
J Exp Psychol Hum Percept Perform ; 23(6): 1665-79, 1997 Dec.
Article in English | MEDLINE | ID: mdl-9425674

ABSTRACT

According to P. K. Kuhl (1991), a perceptual magnet effect occurs when discrimination accuracy is lower among better instances of a phonetic category than among poorer instances. Three experiments examined the perceptual magnet effect for the vowel /i/. In Experiment 1, participants rated some examples of /i/ as better instances of the category than others. In Experiment 2, no perceptual magnet effect was observed with materials based on Kuhl's tokens of /i/ or with items normed for each participant. In Experiment 3, participants labeled the vowels developed from Kuhl's test set. Many of the vowels in the nonprototype /i/ condition were not categorized as /i/s. This finding suggests that the comparisons obtained in Kuhl's original study spanned different phonetic categories.


Subject(s)
Phonetics , Speech Perception , Analysis of Variance , Cues , Generalization, Psychological , Humans , Psychological Theory
8.
J Exp Psychol Hum Percept Perform ; 8(2): 297-314, 1982 Apr.
Article in English | MEDLINE | ID: mdl-6461723

ABSTRACT

For many years there has been a consensus that early linguistic experience exerts a profound and often permanent effect on the perceptual abilities underlying the identification and discrimination of stop consonants. It has also been concluded that selective modification of the perception of stop consonants cannot be accomplished easily and quickly in the laboratory with simple discrimination training techniques. In the present article we report the results of three experiments that examined the perception of a three-way voicing contrast by naive monolingual speakers of English. Laboratory training procedures were implemented with a small computer in a real-time environment to examine the perception of voiced, voiceless unaspirated, and voiceless aspirated stops differing in voice onset time. Three perceptual categories were present for most subjects after only a few minutes of exposure to the novel contrast. Subsequent perceptual tests revealed reliable and consistent labeling and categorical-like discrimination functions for all three voicing categories, even though one of the contrasts is not phonologically distinctive in English. The present results demonstrate that the perceptual mechanisms used by adults in categorizing stop consonants can be modified easily with simple laboratory techniques in a short period of time.


Subject(s)
Phonetics , Speech Perception , Adult , Feedback , Female , Humans , Male , Speech Discrimination Tests
9.
Hear Res ; 132(1-2): 34-42, 1999 Jun.
Article in English | MEDLINE | ID: mdl-10392545

ABSTRACT

Functional neuroimaging with positron emission tomography (PET) was used to compare the brain activation patterns of normal-hearing (NH) with postlingually deaf, cochlear-implant (CI) subjects listening to speech and nonspeech signals. The speech stimuli were derived from test batteries for assessing speech-perception performance of hearing-impaired subjects with different sensory aids. Subjects were scanned while passively listening to monaural (right ear) stimuli in five conditions: Silent Baseline, Word, Sentence, Time-reversed Sentence, and Multitalker Babble. Both groups showed bilateral activation in superior and middle temporal gyri to speech and backward speech. However, group differences were observed in the Sentence compared to Silence condition. CI subjects showed more activated foci in right temporal regions, where lateralized mechanisms for prosodic (pitch) processing have been well established; NH subjects showed a focus in the left inferior frontal gyrus (Brodmann's area 47), where semantic processing has been implicated. Multitalker Babble activated auditory temporal regions in the CI group only. Whereas NH listeners probably habituated to this multitalker babble, the CI listeners may be using a perceptual strategy that emphasizes 'coarse' coding to perceive this stimulus globally as speechlike. The group differences provide the first neuroimaging evidence suggesting that postlingually deaf CI and NH subjects may engage differing perceptual processing strategies under certain speech conditions.


Subject(s)
Brain/diagnostic imaging , Brain/physiology , Cochlear Implants , Hearing/physiology , Phonetics , Speech Perception/physiology , Tomography, Emission-Computed , Adult , Cerebrovascular Circulation/physiology , Female , Humans , Male , Pilot Projects , Reference Values
10.
J Exp Psychol Learn Mem Cogn ; 19(2): 309-28, 1993 Mar.
Article in English | MEDLINE | ID: mdl-8454963

ABSTRACT

Recognition memory for spoken words was investigated with a continuous recognition memory task. Independent variables were number of intervening words (lag) between initial and subsequent presentations of a word, total number of talkers in the stimulus set, and whether words were repeated in the same voice or a different voice. In Experiment 1, recognition judgments were based on word identity alone. Same-voice repetitions were recognized more quickly and accurately than different-voice repetitions at all values of lag and at all levels of talker variability. In Experiment 2, recognition judgments were based on both word identity and voice identity. Subjects recognized repeated voices quite accurately. Gender of the talker affected voice recognition but not item recognition. These results suggest that detailed information about a talker's voice is retained in long-term episodic memory representations of spoken words.


Subject(s)
Attention , Mental Recall , Speech Perception , Voice Quality , Adult , Female , Humans , Individuality , Male , Psychoacoustics , Retention, Psychology
11.
J Exp Psychol Learn Mem Cogn ; 14(3): 421-33, 1988 Jul.
Article in English | MEDLINE | ID: mdl-2969941

ABSTRACT

To examine the effects of stimulus structure and variability on perceptual learning, we compared transcription accuracy before and after training with synthetic speech produced by rule. Subjects were trained with either isolated words or fluent sentences of synthetic speech that were either novel stimuli or a fixed list of stimuli that was repeated. Subjects who were trained on the same stimuli every day improved as much as did the subjects who were given novel stimuli. In a second experiment, the size of the repeated stimulus set was reduced. Under these conditions, subjects trained with repeated stimuli did not generalize to novel stimuli as well as did subjects trained with novel stimuli. Our results suggest that perceptual learning depends on the degree to which the training stimuli characterize the underlying structure of the full stimulus set. Furthermore, we found that training with isolated words only increased the intelligibility of isolated words, whereas training with sentences increased the intelligibility of both isolated words and sentences.


Subject(s)
Communication Aids for Disabled , Phonetics , Self-Help Devices , Semantics , Speech Perception , Adult , Humans , Speech Intelligibility
12.
J Exp Psychol Learn Mem Cogn ; 13(1): 64-75, 1987 Jan.
Article in English | MEDLINE | ID: mdl-2949053

ABSTRACT

Cohort theory, developed by Marslen-Wilson and Welsh (1978), proposes that a "cohort" of all the words beginning with a particular sound sequence will be activated during the initial stage of the word recognition process. We used a priming technique to test specific predictions regarding cohort activation in three experiments. In each experiment, subjects identified target words embedded in noise at different signal-to-noise ratios. The target words were either presented in isolation or preceded by a prime item that shared phonological information with the target. In Experiment 1, primes and targets were English words that shared zero, one, two, three, or all phonemes from the beginning of the word. In Experiment 2, nonword primes preceded word targets and shared initial phonemes. In Experiment 3, word primes and word targets shared phonemes from the end of a word. Evidence of reliable phonological priming was observed in all three experiments. The results of the first two experiments support the assumption of activation of lexical candidates based on word-initial information, as proposed in cohort theory. However, the results of the third experiment, which showed increased probability of correctly identifying targets that shared phonemes from the end of words, did not support the predictions derived from the theory. The findings are discussed in terms of current models of auditory word recognition and recent approaches to spoken-language understanding.


Subject(s)
Cues , Phonetics , Speech Perception , Humans , Models, Psychological , Probability
13.
J Exp Psychol Learn Mem Cogn ; 17(1): 152-62, 1991 Jan.
Article in English | MEDLINE | ID: mdl-1826729

ABSTRACT

In a recent study, Martin, Mullennix, Pisoni, and Summers (1989) reported that subjects' accuracy in recalling lists of spoken words was better for words in early list positions when the words were spoken by a single talker than when they were spoken by multiple talkers. The present study was conducted to examine the nature of these effects in further detail. Accuracy of serial-ordered recall was examined for lists of words spoken by either a single talker or by multiple talkers. Half the lists contained easily recognizable words, and half contained more difficult words, according to a combined metric of word frequency, lexical neighborhood density, and neighborhood frequency. Rate of presentation was manipulated to assess the effects of both variables on rehearsal and perceptual encoding. A strong interaction was obtained between talker variability and rate of presentation. Recall of multiple-talker lists was affected much more than single-talker lists by changes in presentation rate. At slow presentation rates, words in early serial positions produced by multiple talkers were actually recalled more accurately than words produced by a single talker. No interaction was observed for word confusability and rate of presentation. The data provide support for the proposal that talker variability affects the accuracy of recall of spoken words not only by increasing the processing demands for early perceptual encoding of the words, but also by affecting the efficiency of the rehearsal process itself.


Subject(s)
Attention , Mental Recall , Speech Perception , Verbal Learning , Adult , Female , Humans , Male , Phonetics , Practice, Psychological , Semantics
14.
J Exp Psychol Learn Mem Cogn ; 18(6): 1211-38, 1992 Nov.
Article in English | MEDLINE | ID: mdl-1447548

ABSTRACT

Phonological priming of spoken words refers to improved recognition of targets preceded by primes that share at least one of their constituent phonemes (e.g., BULL-BEER). Phonetic priming refers to reduced recognition of targets preceded by primes that share no phonemes with targets but are phonetically similar to targets (e.g., BULL-VEER). Five experiments were conducted to investigate the role of bias in phonological priming. Performance was compared across conditions of phonological and phonetic priming under a variety of procedural manipulations. Ss in phonological priming conditions systematically modified their responses on unrelated priming trials in perceptual identification, and they were slower and more errorful on unrelated trials in lexical decision than were Ss in phonetic priming conditions. Phonetic and phonological priming effects display different time courses and also different interactions with changes in proportion of related priming trials. Phonological priming involves bias; phonetic priming appears to reflect basic properties of activation and competition in spoken word recognition.


Subject(s)
Speech Perception , Speech , Vocabulary , Adult , Female , Humans , Language , Male , Memory , Noise , Perceptual Masking , Phonetics , Research Design , Semantics , Speech Production Measurement
15.
J Exp Psychol Learn Mem Cogn ; 15(4): 676-84, 1989 Jul.
Article in English | MEDLINE | ID: mdl-2526857

ABSTRACT

Three experiments were conducted to investigate recall of lists of words containing items spoken by either a single talker or by different talkers. In each experiment, recall of early list items was better for lists spoken by a single talker than for lists of the same words spoken by different talkers. The use of a memory preload procedure demonstrated that recall of visually presented preload digits was superior when the words in a subsequent list were spoken by a single talker than by different talkers. In addition, a retroactive interference task demonstrated that the effects of talker variability on the recall of early list items were not due to use of talker-specific acoustic cues in working memory at the time of recall. Taken together, the results suggest that word lists produced by different talkers require more processing resources in working memory than do lists produced by a single talker. The findings are discussed in terms of the role that active rehearsal plays in the transfer of spoken items into long-term memory and the factors that may affect the efficiency of rehearsal.


Subject(s)
Memory , Mental Recall , Speech Perception , Verbal Learning , Adult , Attention , Cues , Humans , Phonetics , Psychoacoustics , Serial Learning , Sex Factors
16.
Dev Psychol ; 33(3): 441-52, 1997 May.
Article in English | MEDLINE | ID: mdl-9149923

ABSTRACT

In a series of experiments, the authors investigated the effects of talker variability on children's word recognition. In Experiment 1, when stimuli were presented in the clear, 3- and 5-year-olds were less accurate at identifying words spoken by multiple talkers than those spoken by a single talker when the multiple-talker list was presented first. In Experiment 2, when words were presented in noise, 3-, 4-, and 5-year-olds again performed worse in the multiple-talker condition than in the single-talker condition, this time regardless of order; processing multiple talkers became easier with age. Experiment 3 showed that both children and adults were slower to repeat words from multiple-talker than those from single-talker lists. More important, children (but not adults) matched acoustic properties of the stimuli (specifically, duration). These results provide important new information about the development of talker normalization in speech perception and spoken word recognition.


Subject(s)
Attention , Language Development , Speech Perception , Adult , Child, Preschool , Female , Humans , Male , Pattern Recognition, Visual , Sound Spectrography , Speech Acoustics , Verbal Learning
17.
Brain Lang ; 68(1-2): 306-11, 1999.
Article in English | MEDLINE | ID: mdl-10433774

ABSTRACT

Probabilistic phonotactics refers to the relative frequencies of segments and sequences of segments in spoken words. Neighborhood density refers to the number of words that are phonologically similar to a given word. Despite a positive correlation between phonotactic probability and neighborhood density, nonsense words with high probability segments and sequences are responded to more quickly than nonsense words with low probability segments and sequences, whereas real words occurring in dense similarity neighborhoods are responded to more slowly than real words occurring in sparse similarity neighborhoods. This contradiction may be resolved by hypothesizing that effects of probabilistic phonotactics have a sublexical focus and that effects of similarity neighborhood density have a lexical focus. The implications of this hypothesis for models of spoken word recognition are discussed.
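The neighborhood-density definition used in this abstract (the number of words phonologically similar to a given word) is conventionally operationalized as the count of lexicon entries differing by exactly one segment. A minimal sketch of that count follows; the toy lexicon and the use of orthographic strings in place of phonemic transcriptions are illustrative assumptions.

```python
def is_neighbor(w1, w2):
    """True if w2 differs from w1 by exactly one segment
    substitution, addition, or deletion (the standard
    one-phoneme neighborhood definition)."""
    if w1 == w2:
        return False
    la, lb = len(w1), len(w2)
    if la == lb:  # one substitution
        return sum(a != b for a, b in zip(w1, w2)) == 1
    if abs(la - lb) != 1:
        return False
    longer, shorter = (w1, w2) if la > lb else (w2, w1)
    # one addition/deletion: removing a single segment from the
    # longer form must yield the shorter form
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

def neighborhood_density(word, lexicon):
    """Count lexicon entries one segment away from `word`."""
    return sum(is_neighbor(word, w) for w in lexicon)

# Illustrative toy lexicon (transcriptions would be used in practice):
lexicon = ["cat", "bat", "cut", "cast", "at", "dog"]
print(neighborhood_density("cat", lexicon))  # → 4 (bat, cut, cast, at)
```

Phonotactic probability, by contrast, is computed from the relative frequencies of segments and segment sequences across the lexicon, which is why the two measures correlate positively yet can be dissociated experimentally.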


Subject(s)
Cognition/physiology , Speech , Vocabulary , Humans , Phonetics
18.
J Speech Lang Hear Res ; 40(6): 1395-405, 1997 Dec.
Article in English | MEDLINE | ID: mdl-9430759

ABSTRACT

Traditional word-recognition tests typically use phonetically balanced (PB) word lists produced by one talker at one speaking rate. Intelligibility measures based on these tests may not adequately evaluate the perceptual processes used to perceive speech under more natural listening conditions involving many sources of stimulus variability. The purpose of this study was to examine the influence of stimulus variability and lexical difficulty on the speech-perception abilities of 17 adults with mild-to-moderate hearing loss. The effects of stimulus variability were studied by comparing word-identification performance in single-talker versus multiple-talker conditions and at different speaking rates. Lexical difficulty was assessed by comparing recognition of "easy" words (i.e., words that occur frequently and have few phonemically similar neighbors) with "hard" words (i.e., words that occur infrequently and have many similar neighbors). Subjects also completed a 20-item questionnaire to rate their speech understanding abilities in daily listening situations. Both sources of stimulus variability produced significant effects on speech intelligibility. Identification scores were poorer in the multiple-talker condition than in the single-talker condition, and word-recognition performance decreased as speaking rate increased. Lexical effects on speech intelligibility were also observed. Word-recognition performance was significantly higher for lexically easy words than lexically hard words. Finally, word-recognition performance was correlated with scores on the self-report questionnaire rating speech understanding under natural listening conditions. 
The pattern of results suggests that perceptually robust speech-discrimination tests are able to assess several underlying aspects of speech perception in the laboratory and clinic that appear to generalize to conditions encountered in natural listening situations, where the listener is faced with many different sources of stimulus variability. That is, word-recognition performance measured under conditions where the talker varied from trial to trial was better correlated with self-reports of listening ability than was performance in a single-talker condition where variability was constrained.


Subject(s)
Hearing Disorders/diagnosis , Noise , Speech Perception , Adult , Aged , Female , Hearing Tests/methods , Humans , Male , Middle Aged , Semantics , Severity of Illness Index , Vocabulary
19.
Ann Otol Rhinol Laryngol Suppl ; 185: 68-70, 2000 Dec.
Article in English | MEDLINE | ID: mdl-11141011

ABSTRACT

On the basis of the good predictions for phonemes correct, we conclude that closed-set feature identification may successfully predict phoneme identification in an open-set word recognition task. For word recognition, however, the PCM model underpredicted observed performance, and the addition of a mental lexicon (ie, the SPAMR model) was needed for a good match to data averaged across 7 adults with CIs. The predictions for words correct improved with the addition of a lexicon, providing support for the hypothesis that lexical information is used in open-set spoken word recognition by CI users. The perception of words more complex than CNCs is also likely to require lexical knowledge (Frisch et al, this supplement, pp 60-62). In the future, we will use the performance of individual CI users on psychophysical tasks to generate predicted vowel and consonant confusion matrices to be used to predict open-set spoken word recognition.


Subject(s)
Cochlear Implants , Speech Perception , Adult , Humans , Models, Theoretical
20.
Ann Otol Rhinol Laryngol Suppl ; 166: 300-3, 1995 Sep.
Article in English | MEDLINE | ID: mdl-7668680

ABSTRACT

This study examined the influence of stimulus variability and lexical difficulty on the speech perception performance of adults who used either multichannel cochlear implants or conventional hearing aids. The effects of stimulus variability were examined by comparing word identification in single-talker versus multiple-talker conditions. Lexical effects were assessed by comparing recognition of "easy" words (ie, words that occur frequently and have few phonemically similar words, or neighbors) with "hard" words (ie, words with the opposite lexical characteristics). Word recognition performance was assessed in either closed- or open-set response formats. The results demonstrated that both stimulus variability and lexical difficulty influenced word recognition performance. Identification scores were poorer in the multiple-talker than in the single-talker conditions. Also, scores for lexically "easy" items were better than those for "hard" items. The effects of stimulus variability were not evident when a closed-set response format was employed.


Subject(s)
Cochlear Implants , Hearing Aids , Speech Discrimination Tests , Adult , Humans