Results 1 - 11 of 11
1.
Lang Speech ; : 238309221137607, 2022 Nov 30.
Article in English | MEDLINE | ID: mdl-36448317

ABSTRACT

Adults are able to use visual prosodic cues in the speaker's face to segment speech. Furthermore, eye-tracking data suggest that learners will shift their gaze to the mouth during visual speech segmentation. Although these findings suggest that the mouth may be viewed more than the eyes or nose during visual speech segmentation, no study has examined the direct functional importance of individual features; thus, it is unclear which visual prosodic cues are important for word segmentation. In this study, we examined the impact of first removing (Experiment 1) and then isolating (Experiment 2) individual facial features on visual speech segmentation. Segmentation performance was above chance in all conditions except when the visual display was restricted to the eye region (eyes-only condition in Experiment 2). This suggests that participants were able to segment speech when they could visually access the mouth but not when the mouth was completely removed from the visual display, providing evidence that visual prosodic cues conveyed by the mouth are sufficient and likely necessary for visual speech segmentation.

2.
J Neurodev Disord ; 11(1): 21, 2019 09 13.
Article in English | MEDLINE | ID: mdl-31519145

ABSTRACT

BACKGROUND: Qualitatively atypical language development characterized by non-sequential skill acquisition within a developmental domain, which has been called developmental deviance or difference, is a common characteristic of autism spectrum disorder (ASD). We developed the Response Dispersion Index (RDI), a measure of this phenomenon based on intra-subtest scatter of item responses on standardized psychometric assessments, to assess the within-task variability among individuals with language impairment (LI) and/or ASD. METHODS: Standard clinical assessments of language were administered to 502 individuals from the New Jersey Language and Autism Genetics Study (NJLAGS) cohort. Participants were divided into four diagnostic groups: unaffected, ASD-only, LI-only, and ASD + LI. For each language measure, RDI was defined as the product of the total number of test items and the sum of the weights (based on item difficulty) of test items missed. Group differences in RDI were assessed, and the relationship between RDI and ASD diagnosis among individuals with LI was investigated for each language assessment. RESULTS: Although standard scores were unable to distinguish the LI-only and ASD/ASD + LI groups, the ASD/ASD + LI groups had higher RDI scores than the LI-only group across all measures of expressive, pragmatic, and metalinguistic language. RDI was positively correlated with quantitative ASD traits across all subgroups and was an effective predictor of ASD diagnosis among individuals with LI. CONCLUSIONS: The RDI is an effective quantitative metric of developmental deviance/difference that correlates with ASD traits, supporting previous associations between ASD and non-sequential skill acquisition. The RDI can be adapted to other clinical measures to investigate the degree of difference that is not captured by standard performance summary scores.
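The RDI definition in this abstract reduces to a one-line computation over item-level responses. A minimal sketch (the item weights and response pattern below are hypothetical; the study derives weights from item difficulty):

```python
def response_dispersion_index(item_correct, item_weights):
    """Response Dispersion Index (RDI) as defined in the abstract:
    the product of the total number of test items and the sum of the
    difficulty-based weights of the items answered incorrectly."""
    n_items = len(item_correct)
    missed_weight = sum(w for correct, w in zip(item_correct, item_weights)
                        if not correct)
    return n_items * missed_weight

# Hypothetical 5-item subtest: items 2 and 4 missed (weights 0.3 and 0.8),
# so RDI = 5 * (0.3 + 0.8) = 5.5
rdi = response_dispersion_index([True, False, True, False, True],
                                [0.1, 0.3, 0.5, 0.8, 0.9])
```

Missing a hard item contributes more than missing an easy one, so scattered errors on items of mixed difficulty (the "dispersion" of interest) inflate the index even when the total number of errors is the same.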


Subjects
Autism Spectrum Disorder/diagnosis , Language Development , Language Disorders/diagnosis , Language Tests , Psychometrics , Task Performance and Analysis , Adolescent , Adult , Autism Spectrum Disorder/complications , Cohort Studies , Female , Humans , Language Disorders/etiology , Male , Middle Aged , Pilot Projects , Retrospective Studies , Young Adult
3.
Front Psychol ; 7: 52, 2016.
Article in English | MEDLINE | ID: mdl-26869959

ABSTRACT

Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation.

4.
J Phon ; 56: 66-74, 2016 May.
Article in English | MEDLINE | ID: mdl-28867850

ABSTRACT

One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g., the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers, and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles they face due to this variability during speech perception. Here we test whether visual contextual cues to speaker identity may facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants to an audiovisual continuum between /aba/ and /ada/. During familiarization, the "B-face" mouthed /aba/ when an ambiguous token was played, while the "D-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "B-face" than with an image of the "D-face." This was not the case in the control condition, when the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.

5.
Brain Imaging Behav ; 9(2): 264-74, 2015 Jun.
Article in English | MEDLINE | ID: mdl-24788335

ABSTRACT

The social-cognitive deficits associated with several neurodevelopmental and neuropsychiatric disorders have been linked to structural and functional brain anomalies. Given the recent appreciation for quantitative approaches to behavior, in this study we examined the brain-behavior links in social cognition in healthy young adults from a quantitative approach. Twenty-two participants were administered quantitative measures of social cognition, including the social responsiveness scale (SRS), the empathizing questionnaire (EQ), and the systemizing questionnaire (SQ). Participants underwent a structural, 3-T magnetic resonance imaging (MRI) procedure that yielded both volumetric (voxel count) and asymmetry indices. Model fitting with backward elimination revealed that a combination of cortical, limbic, and striatal regions accounted for significant variance in social behavior and cognitive styles that are typically associated with neurodevelopmental and neuropsychiatric disorders. Specifically, as caudate and amygdala volumes deviate from the typical R > L asymmetry, and cortical gray matter becomes more R > L asymmetrical, overall SRS and Emotion Recognition scores increase. Social Avoidance was explained by a combination of cortical gray matter, pallidum (rightward asymmetry), and caudate (deviation from rightward asymmetry). Rightward asymmetry of the pallidum was the sole predictor of Interpersonal Relationships and Repetitive Mannerisms. Increased D-scores on the EQ-SQ, an indication of greater systemizing relative to empathizing, were also explained by deviation from the typical R > L asymmetry of the caudate. These findings extend the brain-behavior links observed in neurodevelopmental disorders to the normal distribution of traits in a healthy sample.
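Volumetric asymmetry indices of the kind this abstract describes are commonly computed as a normalized right-left difference. A minimal sketch, assuming the standard (R - L) / (R + L) formulation (the study's exact index and the volume figures below are assumptions for illustration):

```python
def asymmetry_index(right_vol, left_vol):
    """Normalized volumetric asymmetry of a paired brain structure:
    positive values indicate rightward (R > L) asymmetry,
    negative values leftward (L > R)."""
    return (right_vol - left_vol) / (right_vol + left_vol)

# Hypothetical caudate volumes (voxel counts): slight rightward asymmetry
ai = asymmetry_index(4200, 4000)  # (4200 - 4000) / 8200, about 0.024
```

Normalizing by total volume makes the index comparable across participants with different overall brain sizes, which is why it is preferred over the raw R - L difference.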


Subjects
Brain/anatomy & histology , Cognition , Social Skills , Female , Gray Matter/anatomy & histology , Humans , Magnetic Resonance Imaging , Male , Models, Neurological , Organ Size , Psychological Tests , Surveys and Questionnaires , Young Adult
6.
Lang Cogn Process ; 29(7): 771-780, 2014.
Article in English | MEDLINE | ID: mdl-25018577

ABSTRACT

Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition.

7.
Front Psychol ; 5: 407, 2014.
Article in English | MEDLINE | ID: mdl-24904449

ABSTRACT

Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker's face. In the critical condition, the visual (e.g., /gi/) and auditory (e.g., /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g., /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants' ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.

8.
J Exp Psychol Learn Mem Cogn ; 37(5): 1081-91, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21574745

ABSTRACT

It is currently unknown whether statistical learning is supported by modality-general or modality-specific mechanisms. One issue within this debate concerns the independence of learning in one modality from learning in other modalities. In the present study, the authors examined the extent to which statistical learning across modalities is independent by simultaneously presenting learners with auditory and visual streams. After establishing baseline rates of learning for each stream independently, they systematically varied the amount of audiovisual correspondence across 3 experiments. They found that learners were able to segment both streams successfully only when the boundaries of the audio and visual triplets were in alignment. This pattern of results suggests that learners are able to extract multiple statistical regularities across modalities provided that there is some degree of cross-modal coherence. They discuss the implications of their results in light of recent claims that multisensory statistical learning is guided by modality-independent mechanisms.


Subjects
Auditory Perception/physiology , Learning/physiology , Pattern Recognition, Visual/physiology , Sensation/physiology , Acoustic Stimulation , Choice Behavior/physiology , Female , Humans , Male , Neuropsychological Tests , Photic Stimulation , Recognition, Psychology , Statistics as Topic , Students , Surveys and Questionnaires , Universities
9.
J Exp Psychol Hum Percept Perform ; 35(2): 588-94, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19331511

ABSTRACT

M. J. Spivey, M. Grosjean, and G. Knoblich showed that in a phonological competitor task, participants' mouse cursor movements showed more curvature toward the competitor item when the competitor and target were phonologically similar than when the competitor and target were phonologically dissimilar. Spivey et al. interpreted this result as evidence for continuous cascading of information during the processing of spoken words. Here we show that the results of Spivey et al. need not be ascribed to continuous speech processing. Instead, their results can be ascribed to discrete processing of speech, provided one appeals to an already supported model of motor control that asserts that switching movements from 1 target to another relies on superposition of the 2nd movement onto the 1st. The latter process is a continuous cascade, a fact that indirectly strengthens the plausibility of continuous cascade models. However, the fact that we can simulate the results of Spivey et al. with a continuous motor output model and a discrete perceptual model shows that the implications of Spivey et al.'s experiment are less clear than these authors supposed.


Subjects
Models, Neurological , Models, Psychological , Psychomotor Performance , Speech Perception , Tool Use Behavior , Choice Behavior , Humans , Language , Movement , Psycholinguistics , User-Computer Interface
10.
Lang Learn Dev ; 5(1): 30-49, 2009.
Article in English | MEDLINE | ID: mdl-24729760

ABSTRACT

Studies using artificial language streams indicate that infants and adults can use statistics to correctly segment words. However, most studies have utilized only a single input language. Given the prevalence of bilingualism, how is multiple language input segmented? One particular problem may occur if learners combine input across languages: the statistics of particular units that overlap different languages may subsequently change and disrupt correct segmentation. Our study addresses this issue by employing artificial language streams to simulate the earliest stages of segmentation in adult L2-learners. In four experiments, participants tracked multiple sets of statistics for two artificial languages. Our results demonstrate that adult learners can track two sets of statistics simultaneously, suggesting that they form multiple representations when confronted with bilingual input. This work, along with planned infant experiments, informs a central issue in bilingualism research, namely, determining at what point listeners can form multiple representations when exposed to multiple languages.
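The "sets of statistics" tracked in artificial-language studies of this kind are typically forward transitional probabilities between adjacent syllables; segmenting bilingual input would mean maintaining one such table per language. A minimal sketch with a hypothetical three-word mini-language (the syllables and word order below are invented for illustration):

```python
from collections import Counter

def transitional_probabilities(stream):
    """Forward transitional probability TP(y | x) = count(xy) / count(x)
    for every adjacent syllable pair observed in the stream."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(x, y): n / first_counts[x] for (x, y), n in pair_counts.items()}

# Hypothetical language with words bu-pa-da, ti-go-lu, mi-no-ke,
# concatenated with no pauses in a fixed pseudo-random word order.
words = [["bu", "pa", "da"], ["ti", "go", "lu"], ["mi", "no", "ke"]]
order = [0, 1, 0, 2, 1, 0, 2, 1, 2] * 10
stream = [syll for i in order for syll in words[i]]
tps = transitional_probabilities(stream)
# Within a word, TP is 1.0 (e.g. "bu" -> "pa"); across word boundaries
# it drops (e.g. "da" -> "ti"), and that dip marks the boundary.
```

The segmentation problem the abstract raises is visible here: if a second language shared the syllable "da" with different successors, pooling both streams into one table would dilute the within-word probabilities, whereas separate per-language tables preserve them.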

11.
Psychol Sci ; 19(7): 709-16, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18727787

ABSTRACT

Poetic devices like alliteration can heighten readers' aesthetic experiences and enhance poets' recall of their epic pieces. The effects of such devices on memory for and appreciation of poetry are well known; however, the mechanisms underlying these effects are not yet understood. We used current theories of language comprehension as a framework for understanding how alliteration affects comprehension processes. Across three experiments, alliterative cues reactivated readers' memories for previous information when it was phonologically similar to the cue. These effects were obtained when participants read aloud and when they read silently, and with poetry and prose. The results support everyday intuitions about the effects of poetry and aesthetics, and explain the nature of such effects. These findings extend the scope of general memory models by indicating their capacity to explain the influence of nonsemantic discourse features.


Subjects
Affect , Cognition , Poetry as Topic , Thinking , Cues (Psychology) , Humans