1.
J Exp Psychol Hum Percept Perform ; 33(4): 960-77, 2007 Aug.
Article in English | MEDLINE | ID: mdl-17683240

ABSTRACT

Although the effect of acoustic cues on speech segmentation has been extensively investigated, the role of higher order information (e.g., syntax) has received less attention. Here, the authors examined whether syntactic expectations based on subject-verb agreement have an effect on segmentation and whether they do so despite conflicting acoustic cues. Although participants detected target words faster in phrases containing adequate acoustic cues ("spins" in take spins and "pins" in takes pins), this acoustic effect was suppressed when the phrases were appended to a plural context (those women take spins/*takes pins [with the asterisk indicating a syntactically unacceptable parse]). The syntactically congruent target ("spins") was detected faster regardless of the acoustics. However, a singular context (that woman *take spins/takes pins) had no effect on segmentation, and the results resembled those of the neutral phrases. Subsequent experiments showed that the discrepancy was due to the relative time course of syntactic expectations and acoustic cues. Taken together, the data suggest that syntactic knowledge can facilitate segmentation but that its effect is substantially attenuated if conflicting acoustic cues are encountered before full realization of the syntactic constraint.


Subject(s)
Linguistics, Speech, Humans, Reaction Time, Speech Acoustics, Vocabulary
2.
J Acoust Soc Am ; 122(1): 554-67, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17614511

ABSTRACT

This study investigates the effects of sentential context, lexical knowledge, and acoustic cues on the segmentation of connected speech. Listeners heard near-homophonous phrases (e.g., plʌmpaɪ for "plum pie" versus "plump eye") in isolation, in a sentential context, or in a lexically biasing context. The sentential context and the acoustic cues were piloted to provide strong versus mild support for one segmentation alternative (plum pie) or the other (plump eye). The lexically biasing context favored one segmentation or the other (e.g., skʌmpaɪ for "scum pie" versus *"scump eye," and lʌmpaɪ for "lump eye" versus *"lum pie," with the asterisk denoting a lexically unacceptable parse). A forced-choice task, in which listeners indicated which of two words they thought they heard (e.g., "pie" or "eye"), revealed compensatory mechanisms between the sources of information. The effect of both sentential and lexical contexts on segmentation responses was larger when the acoustic cues were mild than when they were strong. Moreover, lexical effects were accompanied with a reduction in sensitivity to the acoustic cues. Sentential context only affected the listeners' response criterion. The results highlight the graded, interactive, and flexible nature of multicue segmentation, as well as functional differences between sentential and lexical contributions to this process.
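The compensation reported here is framed in signal-detection terms: context can change either sensitivity to the acoustic cues (d') or the response criterion (c). A minimal sketch using the standard formulas; the hit and false-alarm rates below are hypothetical illustrations, not data from the study:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Return (d', c) from hit and false-alarm rates.

    d' indexes sensitivity (here, to the acoustic cues); c indexes
    response bias. The abstract reports that lexical context reduced
    sensitivity, whereas sentential context only shifted the criterion.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical response rates for strong vs. mild acoustic cues:
d_strong, c_strong = dprime_and_criterion(0.90, 0.10)
d_mild, c_mild = dprime_and_criterion(0.70, 0.30)
```

With symmetric rates like these, c stays at zero while d' falls from roughly 2.56 to 1.05: the signature of a sensitivity change rather than a criterion shift.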


Subject(s)
Cues, Phonetics, Speech Acoustics, Speech Perception/physiology, Acoustic Stimulation, Humans, Perceptual Masking, Semantics, Speech Intelligibility/physiology, Vocabulary
3.
J Exp Psychol Gen ; 134(4): 477-500, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16316287

ABSTRACT

A central question in psycholinguistic research is how listeners isolate words from connected speech despite the paucity of clear word-boundary cues in the signal. A large body of empirical evidence indicates that word segmentation is promoted by both lexical (knowledge-derived) and sublexical (signal-derived) cues. However, an account of how these cues operate in combination or in conflict is lacking. The present study fills this gap by assessing speech segmentation when cues are systematically pitted against each other. The results demonstrate that listeners do not assign the same power to all segmentation cues; rather, cues are hierarchically integrated, with descending weights allocated to lexical, segmental, and prosodic cues. Lower level cues drive segmentation when the interpretive conditions are altered by a lack of contextual and lexical information or by white noise. Taken together, the results call for an integrated, hierarchical, and signal-contingent approach to speech segmentation.
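The hierarchical weighting described above can be sketched as a toy linear cue-combination model. The weights and the noise behavior are illustrative assumptions, not parameter estimates from the study:

```python
def segmentation_score(lexical, segmental, prosodic, noisy=False):
    """Combine evidence for a word boundary from three cue levels.

    Descending weights encode the reported hierarchy
    (lexical > segmental > prosodic). Under white noise or absent
    lexical context, lower-level cues take over. All weights are
    illustrative, not fitted values.
    """
    if noisy:
        weights = (0.0, 0.6, 0.4)  # lexical evidence unavailable
    else:
        weights = (0.6, 0.3, 0.1)
    w_lex, w_seg, w_pro = weights
    return w_lex * lexical + w_seg * segmental + w_pro * prosodic

# A lexical cue dominates in clear speech, but contributes nothing
# once lexical information is degraded by noise:
clear = segmentation_score(lexical=1.0, segmental=0.0, prosodic=0.0)
noisy = segmentation_score(lexical=1.0, segmental=0.0, prosodic=0.0, noisy=True)
```

Pitting cues against one another, as in the study's design, amounts to giving the arguments conflicting values and seeing which level decides the score.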


Subject(s)
Cues, Speech Perception, Speech, Humans, Phonetics
4.
Lang Speech ; 48(Pt 2): 223-53, 2005.
Article in English | MEDLINE | ID: mdl-16411506

ABSTRACT

The involvement of syllables in the perception of spoken English has traditionally been regarded as minimal because of ambiguous syllable boundaries and overriding rhythmic segmentation cues. The present experiments test the perceptual separability of syllables and vowels in spoken English using the migration paradigm. Experiments 1 and 2 show that syllables migrate considerably more than full and reduced vowels, and this effect is not influenced by the lexicality of the stimuli, their stress pattern, or the syllables' position relative to the edge of the stimuli. Experiment 3 confirms the predominance of syllable migration against a pseudosyllable baseline, and provides some evidence that syllable migration depends on whether syllable boundaries are clear or ambiguous. Consistent with this hypothesis, Experiment 4 demonstrates that CVC syllables migrate more in stimuli with a clear CVC-initial structure than in ambisyllabic stimuli. Together, the data suggest that syllables have a greater contribution to the perception of spoken English than previously assumed.


Subject(s)
Speech Perception, Analysis of Variance, Humans, Language Tests, Phonetics
5.
Q J Exp Psychol (Hove) ; 63(3): 544-54, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19591079

ABSTRACT

Using cross-modal form priming, we compared the use of stress and lexicality in the segmentation of spoken English by native English speakers (L1) and by native Hungarian speakers of second-language English (L2). For both language groups, lexicality was found to be an effective segmentation cue. That is, spoken disyllabic word fragments were stronger primes in a subsequent visual word recognition task when preceded by meaningful words than when preceded by nonwords: For example, the first two syllables of corridor were a more effective prime for visually presented corridor when heard in the phrase anythingcorri than in imoshingcorri. The stress pattern of the prime (strong-weak vs. weak-strong) did not affect the degree of priming. For L1 speakers, this supports previous findings about the preferential use of high-level segmentation strategies in clear speech. For L2 speakers, the lexical strategy was employed regardless of L2 proficiency level and instead of exploiting the consistent stress pattern of their native language. This is clear evidence for the primacy and robustness of segmentation by lexical subtraction even in individuals whose lexical knowledge is limited.


Subject(s)
Multilingualism, Phonetics, Semantics, Speech Perception/physiology, Vocabulary, Analysis of Variance, Cues, England, Female, Humans, Hungary, Language Tests, Male, Reaction Time/physiology, Students, Universities
6.
Psychol Sci ; 16(12): 958-64, 2005 Dec.
Article in English | MEDLINE | ID: mdl-16313660

ABSTRACT

In this study, we introduce pause detection (PD) as a new tool for studying the on-line integration of lexical and semantic information during speech comprehension. When listeners were asked to detect 200-ms pauses inserted into the last words of spoken sentences, their detection latencies were influenced by the lexical-semantic information provided by the sentences. Listeners took longer to detect a pause when it was inserted within a word that had multiple potential endings, rather than a unique ending, in the context of the sentence. An event-related potential (ERP) variant of the PD procedure revealed brain correlates of pauses as early as 101 to 125 ms following pause onset and patterns of lexical-semantic integration that mirrored those obtained with PD within 160 ms of pause onset. Thus, both the behavioral and the electrophysiological responses to pauses suggest that lexical and semantic processes are highly interactive and that their integration occurs rapidly during speech comprehension.


Subject(s)
Semantics, Signal Detection, Psychological, Speech Perception, Vocabulary, Humans, Speech Production Measurement