Results 1 - 20 of 171
1.
Proc Natl Acad Sci U S A ; 121(23): e2320489121, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38805278

ABSTRACT

Neural oscillations reflect fluctuations in excitability, which biases the percept of ambiguous sensory input. Why this bias occurs is still not fully understood. We hypothesized that neural populations representing likely events are more sensitive, and thereby become active on earlier oscillatory phases, when the ensemble itself is less excitable. Perception of ambiguous input presented during less-excitable phases should therefore be biased toward frequent or predictable stimuli that have lower activation thresholds. Here, we show such a frequency bias in spoken word recognition using psychophysics, magnetoencephalography (MEG), and computational modeling. With MEG, we found a double dissociation, where the phase of oscillations in the superior temporal gyrus and middle temporal gyrus biased word-identification behavior based on phoneme and lexical frequencies, respectively. This finding was reproduced in a computational model. These results demonstrate that oscillations provide a temporal ordering of neural activity based on the sensitivity of separable neural populations.


Subject(s)
Language, Magnetoencephalography, Speech Perception, Humans, Speech Perception/physiology, Male, Female, Adult, Temporal Lobe/physiology, Young Adult, Neurological Models
2.
J Exp Child Psychol ; 227: 105581, 2023 03.
Article in English | MEDLINE | ID: mdl-36423439

ABSTRACT

Although there is ample evidence documenting the development of spoken word recognition from infancy to adolescence, it is still unclear how development of word-level processing interacts with higher-level sentence processing, such as the use of lexical-semantic cues, to facilitate word recognition. We investigated how the ability to use an informative verb (e.g., draws) to predict an upcoming word (picture) and suppress competition from similar-sounding words (pickle) develops throughout the school-age years. Eye movements of children from two age groups (5-6 years and 9-10 years) were recorded while the children heard a sentence with an informative or neutral verb (The brother draws/gets the small picture) in which the final word matched one of a set of four pictures, one of which was a cohort competitor (pickle). Both groups demonstrated use of the informative verb to more quickly access the target word and suppress cohort competition. Although the age groups showed similar ability to use semantic context to facilitate processing, the older children demonstrated faster lexical access and more robust cohort suppression in both informative and uninformative contexts. This suggests that development of word-level processing facilitates access of top-down linguistic cues that support more efficient spoken language processing. Whereas developmental differences in the use of semantic context to facilitate lexical access were not explained by vocabulary knowledge, differences in the ability to suppress cohort competition were explained by vocabulary. This suggests a potential role for vocabulary knowledge in the resolution of lexical competition and perhaps the influence of lexical competition dynamics on vocabulary development.


Subject(s)
Speech Perception, Male, Child, Adolescent, Humans, Preschool Child, Language, Semantics, Vocabulary, Linguistics
3.
Neuroimage ; 260: 119457, 2022 10 15.
Article in English | MEDLINE | ID: mdl-35842096

ABSTRACT

The efficiency of spoken word recognition is essential for real-time communication. There is consensus that this efficiency relies on an implicit process of activating multiple word candidates that compete for recognition as the acoustic signal unfolds in real time. However, few methods capture the neural basis of this dynamic competition on a msec-by-msec basis. This is crucial for understanding the neuroscience of language, and for understanding hearing, language, and cognitive disorders in people for whom current behavioral methods are not suitable. We applied machine-learning techniques to standard EEG signals to decode which word was heard on each trial and analyzed the patterns of confusion over time. Results mirrored psycholinguistic findings: early on, the decoder was equally likely to report the target (e.g., baggage) or a similar-sounding competitor (badger), but by around 500 msec, competitors were suppressed. Follow-up analyses showed that this result is robust across EEG systems (gel and saline), with fewer channels, and with fewer trials. Results are robust within individuals and show high reliability. This suggests a powerful and simple paradigm that can assess the neural dynamics of speech decoding, with potential applications for understanding lexical development in a variety of clinical disorders.
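The decoding-and-confusion analysis summarized above can be sketched in miniature. The snippet below is a self-contained illustration with synthetic "neural templates" and a nearest-centroid decoder; the words, feature dimensions, and noise levels are invented for illustration and do not reproduce the study's actual EEG pipeline.

```python
import math
import random

random.seed(1)
WORDS = ["baggage", "badger", "cabin"]

def rand_vec(n=4):
    return [random.gauss(0, 1) for _ in range(n)]

# Hypothetical per-word "neural templates" in two time windows. In the early
# window the cohort pair baggage/badger shares its onset features (both begin
# /bae.../); in the late window every word is fully distinct.
onset = rand_vec()
EARLY = {w: (onset if w.startswith("ba") else rand_vec()) + rand_vec() for w in WORDS}
LATE = {w: rand_vec() + rand_vec() for w in WORDS}

def decode(x, templates):
    """Nearest-centroid decoder: report the word whose template is closest."""
    return min(templates, key=lambda w: math.dist(x, templates[w]))

def noisy(v, sd=1.0):
    return [xi + random.gauss(0, sd) for xi in v]

# Simulate 200 trials in which "baggage" was heard, in each time window.
early_acc = sum(decode(noisy(EARLY["baggage"]), EARLY) == "baggage" for _ in range(200)) / 200
late_acc = sum(decode(noisy(LATE["baggage"]), LATE) == "baggage" for _ in range(200)) / 200
print(early_acc, late_acc)
```

With these settings, confusions with the cohort competitor are expected mainly in the early window, where the two templates are partly identical, mirroring the early target/competitor ambiguity and later suppression reported in the abstract.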


Subject(s)
Speech Perception, Electroencephalography, Humans, Psycholinguistics, Recognition (Psychology), Reproducibility of Results
4.
Int J Audiol ; : 1-10, 2022 Nov 25.
Article in English | MEDLINE | ID: mdl-36427054

ABSTRACT

OBJECTIVE: The aim of the current study was to assess the sensitivity, reliability and convergent validity of objective measures of listening effort collected in a sequential dual-task. DESIGN: On each trial, participants viewed a set of digits and listened to a spoken sentence presented at one of a range of signal-to-noise ratios (SNR) and then typed the sentence-final word and recalled the digits. Listening effort measures included word response time, digit recall accuracy and digit response time. In Experiment 1, SNR on each trial was randomised. In Experiment 2, SNR varied in a blocked design, and in each block self-reported listening effort was also collected. STUDY SAMPLES: Separate groups of 40 young adults participated in each experiment. RESULTS: Effects of SNR were observed for all measures. Linear effects of SNR were generally observed even with word recognition accuracy factored out of the models. Among the objective measures, reliability was excellent, and repeated-measures correlations, though not between-subjects correlations, were nearly all significant. CONCLUSION: The objective measures assessed appear to be sensitive and reliable indices of listening effort that are non-redundant with speech intelligibility and have strong within-participants convergent validity. Results support use of these measures in future studies of listening effort.

5.
J Psycholinguist Res ; 51(5): 933-955, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35556197

ABSTRACT

Addressees use information from specific speakers' previous discourse to make predictions about incoming linguistic material and to restrict the choice of potential interpretations. In this way, speaker specificity has been shown to be an influential factor in language processing across several domains (e.g., spoken word recognition, sentence processing, and pragmatics). However, its influence on semantic disambiguation has received little attention to date. Using an exposure-test design and visual world eye tracking, we examined the effect of speaker-specific literal vs. nonliteral style on the disambiguation of metaphorical polysemes such as 'fork', 'head', and 'mouse'. Eye movement data revealed that when interpreting polysemous words with a literal and a nonliteral meaning, addressees showed a late-stage preference for the literal meaning in response to a nonliteral speaker. We interpret this as reflecting an indeterminacy in the intended meaning in this condition, as well as the influence of meaning-dominance cues at later stages of processing. Response data revealed that addressees then ultimately resolved to the literal target in 90% of trials. These results suggest that addressees consider a range of senses in the earlier stages of processing, and that speaker style is a contextual determinant in semantic processing.


Subject(s)
Cues, Semantics, Humans, Language, Linguistics, Eye Movements
6.
Mem Cognit ; 49(1): 181-192, 2021 01.
Article in English | MEDLINE | ID: mdl-32676885

ABSTRACT

Two experiments were conducted to investigate the extent to which lexical tone can affect spoken-word recognition in Chinese using a printed-word paradigm. Participants were presented with a visual display of four words, namely a target word (e.g., xiang4xian4, "quadrant"), a tone-consistent phonological competitor (e.g., xiang4ce4, "photo album") or a tone-inconsistent phonological competitor (e.g., xiang1cai4, "coriander"), and two unrelated distractors. Simultaneously, they were asked to listen to a spoken target word presented in isolation (Experiment 1) or embedded in neutral/predictive sentence contexts (Experiment 2), and then click on the target word on the screen. Results showed significant phonological competitor effects (i.e., the fixation proportion on the phonological competitor was higher than that on the distractors) under both tone conditions. Specifically, a larger phonological competitor effect was observed in the tone-consistent condition than in the tone-inconsistent condition when the spoken word was presented in isolation or in neutral sentence contexts. This finding suggests a partial role for lexical tone in constraining spoken-word recognition. However, when embedded in a predictive sentence context, the phonological competitor effect was observed only in the tone-consistent condition and was absent in the tone-inconsistent condition. This result indicates that a predictive sentence context can strengthen the role of lexical tone.


Subject(s)
Eye-Tracking Technology, Auditory Perception, China, Humans, Language, Phonetics, Speech Perception
7.
Behav Res Methods ; 52(5): 2202-2231, 2020 10.
Article in English | MEDLINE | ID: mdl-32291734

ABSTRACT

The Auditory English Lexicon Project (AELP) is a multi-talker, multi-region psycholinguistic database of 10,170 spoken words and 10,170 spoken nonwords. Six tokens of each stimulus were recorded as 44.1-kHz, 16-bit, mono WAV files by native speakers of American, British, and Singapore English, with one from each gender. Intelligibility norms, as determined by average identification scores and confidence ratings from between 15 and 20 responses per token, were obtained from 561 participants. Auditory lexical decision accuracies and latencies, with between 25 and 36 responses per token, were obtained from 438 participants. The database also includes a variety of lexico-semantic variables and structural indices for the words and nonwords, as well as participants' individual difference measures such as age, gender, language background, and proficiency. Taken together, there are a total of 122,040 sound files and over 4 million behavioral data points in the AELP. We describe some of the characteristics of this database. This resource is freely available from a website (https://inetapps.nus.edu.sg/aelp/) hosted by the Department of Psychology at the National University of Singapore.


Subject(s)
Language, Psycholinguistics, Semantics, Factual Databases, Decision Making, Humans
8.
Behav Res Methods ; 51(3): 1187-1204, 2019 06.
Article in English | MEDLINE | ID: mdl-29916041

ABSTRACT

The Massive Auditory Lexical Decision (MALD) database is an end-to-end, freely available auditory and production data set for speech and psycholinguistic research, providing time-aligned stimulus recordings for 26,793 words and 9592 pseudowords, and response data for 227,179 auditory lexical decisions from 231 unique monolingual English listeners. In addition to the experimental data, we provide many precompiled listener- and item-level descriptor variables. This data set makes it easy to explore responses, build and test theories, and compare a wide range of models. We present summary statistics and analyses.


Subject(s)
Decision Making, Adolescent, Adult, Data Collection, Factual Databases, Female, Humans, Language, Male, Psycholinguistics, Speech, Young Adult
9.
Mem Cognit ; 46(4): 642-654, 2018 05.
Article in English | MEDLINE | ID: mdl-29372533

ABSTRACT

The aim of this study was to investigate the extent to which phonological information mediates the shift of visual attention to printed Chinese words in spoken word recognition, using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen: a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse while their eye movements are recorded. Experiment 1 manipulated phonological information at full phonological overlap; Experiment 2 manipulated it at partial phonological overlap; and Experiment 3 manipulated the phonological competitors to share either full or partial overlap with the targets directly. Across the three experiments, phonological competitor effects were observed in both the full- and partial-overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggests that phonological information mediates the shift of visual attention during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.


Subject(s)
Attention/physiology, Visual Pattern Recognition/physiology, Phonetics, Psycholinguistics, Speech Perception/physiology, Adult, China, Female, Humans, Male, Young Adult
10.
Behav Res Methods ; 50(3): 871-889, 2018 06.
Article in English | MEDLINE | ID: mdl-29713952

ABSTRACT

This article describes a new Python distribution of TISK, the time-invariant string kernel model of spoken word recognition (Hannagan et al. in Frontiers in Psychology, 4, 563, 2013). TISK is an interactive-activation model similar to the TRACE model (McClelland & Elman in Cognitive Psychology, 18, 1-86, 1986), but TISK replaces most of TRACE's reduplicated, time-specific nodes with theoretically motivated time-invariant, open-diphone nodes. We discuss the utility of computational models as theory development tools, the relative merits of TISK as compared to other models, and the ways in which researchers might use this implementation to guide their own research and theory development. We describe a TISK model that includes features that facilitate in-line graphing of simulation results, integration with standard Python data formats, and graph and data export. The distribution can be downloaded from https://github.com/maglab-uconn/TISK1.0.
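TISK's key representational move, replacing TRACE's time-specific units with time-invariant open-diphone units, can be illustrated with a toy function. This is a sketch of the coding idea only (the function name and toy alphabet are made up here), not TISK's actual API:

```python
from itertools import combinations

def open_diphones(phonemes):
    """Ordered, not-necessarily-adjacent phoneme pairs of a word.

    'kat' -> {'ka', 'kt', 'at'}. Because each unit names a phoneme ordering
    rather than an absolute time slot, the same unit set represents the word
    wherever it begins in the input stream.
    """
    return {a + b for a, b in combinations(phonemes, 2)}

print(open_diphones("kat"))  # {'ka', 'kt', 'at'} (set order may vary)
```

Overlap between two words' diphone sets then yields a time-invariant similarity measure, e.g. `open_diphones("kat") & open_diphones("kit")` leaves only the shared ordering `{'kt'}`.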


Subject(s)
Behavioral Research/methods, Computer Simulation, Recognition (Psychology), Software, Speech Perception, Humans
11.
J Psycholinguist Res ; 47(1): 65-78, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28752195

ABSTRACT

The purpose of this study is to investigate the interaction between processing lexical and speaker-specific information in spoken word recognition. The specific question is whether repetition and semantic/associative priming is reduced when the prime and target are produced by different speakers. In Experiment 1, the prime and target were repeated (e.g., queen-queen) or unrelated (e.g., bell-queen). In Experiment 2, the prime and target were semantically/associatively related (e.g., king-queen) or unrelated (e.g., bell-queen). In both experiments, the prime and target were either produced by the same male speaker or two different male speakers. Two interstimulus intervals between the prime and target were used to examine the time course of processing speaker information. The tasks for the participants included judging the lexical status of the target (lexical decision), followed by judging whether the prime and target were produced by the same speaker or different speakers (speaker discrimination). The results showed that both lexical decision and speaker discrimination were facilitated to a smaller extent when the prime and target were produced by different speakers, indicating reduced repetition priming by speaker variability. In contrast, semantic/associative priming was not affected by speaker variability. The ISI between the prime and target did not affect either type of priming. In conclusion, speaker variability affects accessing a word's form but not its meaning, suggesting that speaker-specific information is processed at a relatively shallow level.


Subject(s)
Association Learning, Recognition (Psychology), Implicit Memory, Semantics, Female, Humans, Male, Speech Perception, Vocabulary, Young Adult
12.
Cogn Psychol ; 98: 73-101, 2017 11.
Article in English | MEDLINE | ID: mdl-28881224

ABSTRACT

Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.


Subject(s)
Recognition (Psychology), Speech Perception/physiology, Speech/physiology, Adult, Comprehension, Female, Humans, Male, United Kingdom, United States
13.
J Exp Child Psychol ; 159: 16-33, 2017 07.
Article in English | MEDLINE | ID: mdl-28266332

ABSTRACT

Although bilingual learners represent the linguistic majority, much less is known about their lexical processing in comparison with monolingual learners. In the current study, bilingual and monolingual toddlers were compared on their ability to recognize familiar words. Children were presented with correct pronunciations and mispronunciations, with the latter involving a vowel, consonant, or tone substitution. A robust ability to recognize words when their labels were correctly pronounced was observed in both groups. Both groups also exhibited a robust ability to reject vowel, tone, and consonant mispronunciations as possible labels for familiar words. However, time course analyses revealed processing differences based on language background; relative to Mandarin monolinguals, Mandarin-English bilingual toddlers demonstrated reduced efficiency in recognizing correctly pronounced words. With respect to mispronunciations, Mandarin-English bilingual learners demonstrated reduced sensitivity to tone mispronunciations relative to Mandarin monolingual toddlers. Moreover, the relative cost of mispronunciations differed for monolingual and bilingual toddlers. Monolingual toddlers demonstrated least sensitivity to consonants followed by vowels and tones, whereas bilingual toddlers demonstrated least sensitivity to tone, followed by consonants and then by vowels. Time course analyses revealed that both groups were sensitive to vowel and consonant variation. Results reveal both similarities and differences in monolingual and bilingual learners' processing of familiar words in Mandarin Chinese.


Subject(s)
Language Development, Linguistics, Multilingualism, Phonetics, Reading, Semantics, Speech Acoustics, Speech Perception, Child, Preschool Child, Female, Humans, Infant, Male, Singapore, Statistics as Topic, Verbal Learning
14.
Behav Res Methods ; 49(1): 230-241, 2017 02.
Article in English | MEDLINE | ID: mdl-26850055

ABSTRACT

Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.


Subject(s)
Physiological Pattern Recognition, Speech Perception, Age Factors, Asian People, Computer Simulation, Humans, Language, Phonetics, Psycholinguistics/methods, Reproducibility of Results
15.
J Psycholinguist Res ; 46(1): 201-210, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27090111

ABSTRACT

Previous experimental psycholinguistic studies have suggested that probabilistic phonotactic information may hint at the locations of word boundaries in continuous speech, offering a possible answer to the empirical question of how we recognize and segment individual spoken words in speech. In the present study, we investigated this issue using Cantonese as a test case. In a word-spotting task, listeners were instructed to spot any Cantonese word embedded in a series of nonsense sound sequences. We found that native Cantonese listeners spotted the target word more easily in nonsense sound sequences with high transitional-probability phoneme combinations than in those with low transitional-probability phoneme combinations. These results indicate that native Cantonese listeners do make use of transitional-probability information to recognize spoken words in speech.
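The transitional-probability manipulation described above can be made concrete with a toy estimator (the mini-corpus and phoneme strings below are invented for illustration): the probability that phoneme b follows phoneme a is estimated as count(ab) divided by the number of transitions that start with a.

```python
from collections import Counter

def transitional_probs(corpus):
    """Estimate P(next phoneme | current phoneme) from phoneme strings."""
    pair_counts, first_counts = Counter(), Counter()
    for word in corpus:
        for a, b in zip(word, word[1:]):
            pair_counts[a + b] += 1
            first_counts[a] += 1
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Hypothetical mini-corpus of phoneme strings.
tp = transitional_probs(["pat", "pan", "pit", "tip"])
print(tp["pa"])  # 2/3: 'a' follows 'p' in two of the three 'p' transitions
```

On the account tested in the study, a target word embedded among high-TP phoneme combinations should be easier to spot than one embedded among low-TP combinations.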


Subject(s)
Cues, Phonetics, Recognition (Psychology)/physiology, Speech Perception/physiology, Adult, Female, Hong Kong, Humans, Male, Probability, Young Adult
16.
J Exp Child Psychol ; 152: 136-148, 2016 12.
Article in English | MEDLINE | ID: mdl-27544643

ABSTRACT

To understand speech, listeners need to be able to decode the speech stream into meaningful units. However, coarticulation causes phonemes to differ based on their context. Because coarticulation is an ever-present component of the speech stream, it follows that listeners may exploit this source of information for cues to the identity of the words being spoken. This research investigates the development of listeners' sensitivity to coarticulation cues below the level of the phoneme in spoken word recognition. Using a looking-while-listening paradigm, adults and 2- and 3-year-old children were tested on coarticulation cues that either matched or mismatched the target. Both adults and children predicted upcoming phonemes based on anticipatory coarticulation to make decisions about word identity. The overall results demonstrate that coarticulation cues are a fundamental component of children's spoken word recognition system. However, children did not show the same resolution as adults of the mismatching coarticulation cues and competitor inhibition, indicating that children's processing systems are still developing.


Subject(s)
Aging/psychology, Language, Recognition (Psychology), Speech Perception, Adult, Auditory Perception, Child, Preschool Child, Cues, Female, Humans, Male, Young Adult
17.
J Exp Child Psychol ; 151: 51-64, 2016 11.
Article in English | MEDLINE | ID: mdl-26687440

ABSTRACT

We examined the contents of language-mediated prediction in toddlers by investigating the extent to which toddlers are sensitive to visual shape representations of upcoming words. Previous studies with adults suggest limits to the degree to which information about the visual form of a referent is predicted during language comprehension in low constraint sentences. Toddlers (30-month-olds) heard either contextually constraining sentences or contextually neutral sentences as they viewed images that were either identical or shape-related to the heard target label. We observed that toddlers activate shape information of upcoming linguistic input in contextually constraining semantic contexts; hearing a sentence context that was predictive of the target word activated perceptual information that subsequently influenced visual attention toward shape-related targets. Our findings suggest that visual shape is central to predictive language processing in toddlers.


Subject(s)
Attention, Comprehension, Language Development, Semantics, Speech Perception, Visual Perception, Vocabulary, Preschool Child, Female, Humans, Linguistics, Male
18.
J Neurolinguistics ; 37: 58-67, 2016 Feb 01.
Article in English | MEDLINE | ID: mdl-26516296

ABSTRACT

In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200 ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

19.
Behav Res Methods ; 48(2): 553-66, 2016 06.
Article in English | MEDLINE | ID: mdl-25987305

ABSTRACT

Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
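The lab-versus-online comparison above rests on item-level correlations between scores collected in the two settings. A standard-library sketch with made-up per-word accuracies (not the study's data, which also showed online responses to be faster but less accurate overall):

```python
def pearson(x, y):
    """Pearson correlation coefficient, standard library only."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-word identification accuracies from the two settings;
# online scores are uniformly a bit lower but track the lab scores.
lab = [0.95, 0.88, 0.70, 0.60, 0.91, 0.55]
online = [0.90, 0.80, 0.66, 0.52, 0.85, 0.50]
r = pearson(lab, online)
print(round(r, 3))  # near 1: online scores track lab scores item by item
```

A strong item-level r of this kind is what licenses the abstract's conclusion that the online platform can substitute for the lab as a source of word recognition data, even when absolute accuracy differs.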


Subject(s)
Internet, Recognition (Psychology), Speech Perception/physiology, Behavioral Research, Decision Making, Female, Humans, Male, Neuropsychological Tests, Psychomotor Performance, Reaction Time, Research Design, Young Adult
20.
J Psycholinguist Res ; 45(2): 307-16, 2016 Apr.
Article in English | MEDLINE | ID: mdl-25641395

ABSTRACT

Two word-spotting experiments were conducted to examine whether native Cantonese listeners are constrained by phonotactic information in the recognition of spoken Chinese words in speech. Because no legal consonant cluster occurs within an individual Chinese word, this categorical phonotactic information is well placed to cue native Cantonese listeners to the locations of possible word boundaries in speech. The results of the two word-spotting experiments confirmed this prediction. Together with other relevant studies, these findings suggest that the phonotactic constraint is a useful source of information in the recognition of spoken Chinese words in speech.
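The phonotactic cue described above reduces to a simple segmentation rule, sketched here with an orthographic stand-in for phonemes (the vowel set and example stream are invented for illustration): if no legal consonant cluster can occur inside a word, every consonant-consonant sequence in the stream cues a word boundary between the two consonants.

```python
VOWELS = set("aeiou")

def boundaries(stream):
    """Indices at which a word boundary is cued: between two consonants."""
    return [i + 1 for i, (a, b) in enumerate(zip(stream, stream[1:]))
            if a not in VOWELS and b not in VOWELS]

def segment(stream):
    cuts = [0] + boundaries(stream) + [len(stream)]
    return [stream[i:j] for i, j in zip(cuts, cuts[1:])]

print(segment("taksam"))  # ['tak', 'sam']: the 'ks' cluster cues a cut
```

A stream with no internal clusters yields no cued boundaries, which is why, in the word-spotting task, cluster-bearing contexts make embedded words easier to locate.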


Subject(s)
Phonetics, Speech Perception/physiology, Speech/physiology, Adult, Female, Hong Kong, Humans, Male, Young Adult