Results 1 - 20 of 76
1.
Cogn Neurodyn ; 18(3): 931-946, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38826672

ABSTRACT

The processing of speech information from various sensory modalities is crucial for human communication. Both the left posterior superior temporal gyrus (pSTG) and the motor cortex are importantly involved in multisensory speech perception. However, the dynamic integration from primary sensory regions to the pSTG and the motor cortex remains unclear. Here, we implemented a behavioral experiment using the classical McGurk effect paradigm and acquired task functional magnetic resonance imaging (fMRI) data during synchronized audiovisual syllabic perception from 63 normal adults. We conducted dynamic causal modeling (DCM) analysis to explore the cross-modal interactions among the left pSTG, left precentral gyrus (PrG), left middle superior temporal gyrus (mSTG), and left fusiform gyrus (FuG). Bayesian model selection favored a winning model that included modulations of connections to PrG (mSTG → PrG, FuG → PrG), from PrG (PrG → mSTG, PrG → FuG), and to pSTG (mSTG → pSTG, FuG → pSTG). Moreover, the coupling strength of these connections correlated with behavioral McGurk susceptibility. In addition, the coupling strength of these connections differed significantly between strong and weak McGurk perceivers. Strong perceivers modulated less inhibitory visual influence, allowed less excitatory auditory information to flow into PrG, but integrated more audiovisual information in pSTG. Taken together, our findings show that the PrG and pSTG interact dynamically with primary sensory cortices during audiovisual speech perception, and support a specific functional role of the motor cortex in modulating the gain and salience between the auditory and visual modalities. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-09945-z.
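As a rough illustration of the brain-behavior analysis described in this abstract (correlating DCM coupling strength with McGurk susceptibility and contrasting strong versus weak perceivers), here is a minimal Python sketch. All data, the median split, and Welch's t-test are hypothetical stand-ins; the study's actual SPM-based DCM estimation and Bayesian model selection are not reproduced.

```python
# Hypothetical sketch: relate per-subject DCM coupling estimates to McGurk susceptibility.
# Values are random placeholders; real coupling strengths would come from a DCM pipeline.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 63
coupling_fug_to_prg = rng.normal(0.2, 0.1, n_subjects)     # e.g., FuG -> PrG modulation
mcgurk_susceptibility = rng.uniform(0.0, 1.0, n_subjects)  # proportion of fusion responses

# Brain-behavior correlation
r, p = stats.pearsonr(coupling_fug_to_prg, mcgurk_susceptibility)
print(f"coupling vs. susceptibility: r = {r:.2f}, p = {p:.3f}")

# Strong vs. weak perceivers via a median split, compared with Welch's t-test
median = np.median(mcgurk_susceptibility)
strong = coupling_fug_to_prg[mcgurk_susceptibility >= median]
weak = coupling_fug_to_prg[mcgurk_susceptibility < median]
t, p_group = stats.ttest_ind(strong, weak, equal_var=False)
print(f"strong vs. weak perceivers: t = {t:.2f}, p = {p_group:.3f}")
```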

2.
Eur J Neurosci ; 59(11): 2979-2994, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38570828

ABSTRACT

Differences between autistic and non-autistic individuals in perception of the temporal relationships between sights and sounds are theorized to underlie difficulties in integrating relevant sensory information. These, in turn, are thought to contribute to problems with speech perception and higher level social behaviour. However, the literature establishing this connection often involves limited sample sizes and focuses almost entirely on children. To determine whether these differences persist into adulthood, we compared 496 autistic and 373 non-autistic adults (aged 17 to 75 years). Participants completed an online version of the McGurk/MacDonald paradigm, a multisensory illusion indicative of the ability to integrate audiovisual speech stimuli. Audiovisual asynchrony was manipulated, and participants responded both to the syllable they perceived (revealing their susceptibility to the illusion) and to whether or not the audio and video were synchronized (allowing insight into temporal processing). In contrast with prior research with smaller, younger samples, we detected no evidence of impaired temporal or multisensory processing in autistic adults. Instead, we found that in both groups, multisensory integration correlated strongly with age. This contradicts prior presumptions that differences in multisensory perception persist and even increase in magnitude over the lifespan of autistic individuals. It also suggests that the compensatory role multisensory integration may play as the individual senses decline with age is intact. These findings challenge existing theories and provide an optimistic perspective on autistic development. They also underline the importance of expanding autism research to better reflect the age range of the autistic population.


Subject(s)
Speech Perception , Visual Perception , Humans , Adult , Middle Aged , Male , Female , Adolescent , Aged , Speech Perception/physiology , Young Adult , Visual Perception/physiology , Autistic Disorder/physiopathology , Autistic Disorder/psychology , Acoustic Stimulation/methods , Photic Stimulation/methods , Illusions/physiology , Age Factors , Auditory Perception/physiology
3.
Dev Psychobiol ; 65(7): e22431, 2023 11.
Article in English | MEDLINE | ID: mdl-37860909

ABSTRACT

Humans pay special attention to faces and speech from birth, but the interplay of developmental processes leading to specialization is poorly understood. We investigated the effects of face orientation on audiovisual (AV) speech perception in two age groups of infants (younger: 5- to 6.5-month-olds; older: 9- to 10.5-month-olds) and in adults. We recorded event-related potentials (ERPs) in response to videos of upright and inverted faces producing /ba/ articulation dubbed with auditory syllables that either matched (/ba/) or mismatched (/ga/) the mouth movement. We observed an increase in the amplitude of the audiovisual mismatch response (AVMMR) to the incongruent visual /ba/-auditory /ga/ syllable, in comparison to other stimuli, in younger infants, whereas the older group of infants did not show a similar response. An AV mismatch response to the inverted visual /ba/-auditory /ga/ stimulus relative to congruent stimuli was also detected over right frontal areas in the younger group and over left and right frontal areas in adults. We show that face configuration affects the neural response to AV mismatch differently across the age groups. The novel finding of an AVMMR in response to inverted incongruent AV speech may imply featural face processing in younger infants and adults when they view inverted faces articulating incongruent speech. The lack of differential responses to upright and inverted incongruent stimuli in the older group of infants suggests a functional cortical reorganization in the processing of AV speech.


Subject(s)
Speech Perception , Speech , Adult , Humans , Infant , Speech/physiology , Visual Perception/physiology , Speech Perception/physiology , Evoked Potentials , Movement , Acoustic Stimulation
4.
Front Neurosci ; 17: 1217831, 2023.
Article in English | MEDLINE | ID: mdl-37901426

ABSTRACT

Background: The visual system is not fully mature at birth and continues to develop throughout infancy, reaching adult levels through late childhood and adolescence. Disruption of vision during this postnatal period, prior to visual maturation, results in deficits of visual processing and may in turn affect the development of complementary senses. People who have had one eye surgically removed during early postnatal development provide a useful model for understanding the timelines of sensory development and the role of binocularity in visual system maturation. Adaptive auditory and audiovisual plasticity following the loss of one eye early in life has been observed for both low- and high-level visual stimuli. Notably, people who have had one eye removed early in life perceive the McGurk effect much less than binocular controls. Methods: The current study investigates whether multisensory compensatory mechanisms are also present in people who had one eye removed late in life, after postnatal visual system maturation, by comparing their perception of the McGurk effect with that of binocular controls and of people who had one eye removed early in life. Results: People who had one eye removed late in life perceived the McGurk effect similarly to binocular viewing controls, unlike those who had one eye removed early in life. Conclusion: This suggests differences in multisensory compensatory mechanisms depending on age at surgical eye removal. These results indicate that cross-modal adaptations to the loss of binocularity may depend on the level of plasticity during cortical development.

5.
Brain Sci ; 13(8)2023 Aug 13.
Article in English | MEDLINE | ID: mdl-37626554

ABSTRACT

In the McGurk effect, perception of a spoken consonant is altered when an auditory (A) syllable is presented with an incongruent visual (V) syllable (e.g., A/pa/V/ka/ is often heard as /ka/ or /ta/). The McGurk effect provides a measure of visual influence on speech perception: the lower the proportion of auditory-correct responses, the stronger the effect. Cross-language effects are studied to understand processing differences between one's own and foreign languages. The McGurk effect has sometimes been found to be stronger with foreign speakers, but other studies have shown the opposite, or no difference between languages. Most studies have compared English with other languages. We investigated cross-language effects with native Finnish and Japanese speakers and listeners. Each group of listeners had 49 participants. The stimuli (/ka/, /pa/, /ta/) were uttered by two female and male Finnish and Japanese speakers and presented in A, V, and AV modalities, including the McGurk stimulus A/pa/V/ka/. The McGurk effect was stronger with Japanese stimuli in both listener groups. Differences in speech perception were prominent between individual speakers but less so between native languages. Unisensory perception correlated with McGurk perception. These findings suggest that stimulus-dependent features contribute to the McGurk effect and may have a stronger influence on syllable perception than cross-language factors.
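The susceptibility measure described above (visual influence grows as the proportion of auditory-correct responses falls) can be made concrete with a short sketch. The response list below is invented for illustration and is not the study's data or analysis code.

```python
# Illustrative sketch: quantify McGurk strength for an A/pa/ + V/ka/ stimulus
# as the proportion of responses that deviate from the auditory track.
from collections import Counter

responses = ["ka", "ta", "pa", "ka", "ta", "pa", "ka", "ta", "ta", "ka"]  # hypothetical
counts = Counter(responses)
n_trials = len(responses)

auditory_correct = counts["pa"] / n_trials   # responses matching the audio /pa/
mcgurk_strength = 1 - auditory_correct       # visual influence on perception
print(f"auditory-correct: {auditory_correct:.0%}, McGurk strength: {mcgurk_strength:.0%}")
```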

6.
Front Neurosci ; 17: 1132269, 2023.
Article in English | MEDLINE | ID: mdl-37021133

ABSTRACT

Network architectures and learning principles have been critical in developing complex cognitive capabilities in artificial neural networks (ANNs). Spiking neural networks (SNNs) are a subset of ANNs that incorporate additional biological features such as dynamic spiking neurons, biologically specified architectures, and efficient, useful paradigms. Here we focus on network architectures in SNNs, in particular the meta-operator of 3-node network motifs, which is borrowed from biological networks. We propose a Motif-topology-improved SNN (M-SNN) and verify that it is efficient at explaining key cognitive phenomena such as the cocktail party effect (a typical noise-robust speech-recognition task) and the McGurk effect (a typical multisensory integration task). In the M-SNN, the motif topology is obtained by integrating spatial and temporal motifs. These spatial and temporal motifs are first generated from pre-training on spatial (e.g., MNIST) and temporal (e.g., TIDigits) datasets, respectively, and are then applied to the two cognitive-effect tasks introduced above. The experimental results showed lower computational cost, higher accuracy, and a better explanation of some key phenomena of these two effects, such as new-concept generation and robustness to background noise. This mesoscale network-motif topology leaves much room for future work.
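For readers unfamiliar with the dynamic spiking neurons that distinguish SNNs from conventional ANN units, here is a generic leaky integrate-and-fire toy model. The parameters and constant input are assumed for illustration only; this is a textbook sketch, not the paper's M-SNN or its motif-topology training pipeline.

```python
# Generic leaky integrate-and-fire (LIF) neuron: the membrane potential leaks toward
# rest, integrates input, and emits a spike when it crosses threshold.
import numpy as np

dt, tau = 1.0, 20.0          # time step and membrane time constant (ms)
v_th, v_reset = 1.0, 0.0     # spike threshold and reset potential (arbitrary units)
t_steps = 200
input_current = np.full(t_steps, 0.06)  # constant hypothetical drive

v = 0.0
spike_times = []
for t in range(t_steps):
    v += (dt / tau) * (-v) + input_current[t]  # leak plus input integration
    if v >= v_th:                              # threshold crossing emits a spike
        spike_times.append(t)
        v = v_reset
print(f"{len(spike_times)} spikes in {t_steps} ms")
```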

7.
Dev Sci ; 26(4): e13348, 2023 07.
Article in English | MEDLINE | ID: mdl-36394129

ABSTRACT

Autistic children (AC) show less audiovisual speech integration in the McGurk task, which correlates with their reduced mouth-looking time. The present study examined whether AC's less audiovisual speech integration in the McGurk task could be increased by increasing their mouth-looking time. We recruited 4- to 8-year-old AC and nonautistic children (NAC). In two experiments, we manipulated children's mouth-looking time, measured their audiovisual speech integration by employing the McGurk effect paradigm, and tracked their eye movements. In Experiment 1, we blurred the eyes in McGurk stimuli and compared children's performances in blurred-eyes and clear-eyes conditions. In Experiment 2, we cued children's attention to either the mouth or eyes of McGurk stimuli or asked them to view the McGurk stimuli freely. We found that both blurring the speaker's eyes and cuing to the speaker's mouth increased mouth-looking time and increased audiovisual speech integration in the McGurk task in AC. In addition, we found that blurring the speaker's eyes and cuing to the speaker's mouth also increased mouth-looking time in NAC, but neither blurring the speaker's eyes nor cuing to the speaker's mouth increased their audiovisual speech integration in the McGurk task. Our findings suggest that audiovisual speech integration in the McGurk task in AC could be increased by increasing their attention to the mouth. Our findings contribute to a deeper understanding of relations between face attention and audiovisual speech integration, and provide insights for the development of professional supports to increase audiovisual speech integration in AC. HIGHLIGHTS: The present study examined whether audiovisual speech integration in the McGurk task in AC could be increased by increasing their attention to the speaker's mouth. Blurring the speaker's eyes increased mouth-looking time and audiovisual speech integration in the McGurk task in AC. Cuing to the speaker's mouth also increased mouth-looking time and audiovisual speech integration in the McGurk task in AC. Audiovisual speech integration in the McGurk task in AC could be increased by increasing their attention to the speaker's mouth.


Subject(s)
Autistic Disorder , Speech Perception , Child , Humans , Child, Preschool , Speech , Eye Movements , Mouth , Visual Perception
8.
Brain Topogr ; 35(4): 416-430, 2022 07.
Article in English | MEDLINE | ID: mdl-35821542

ABSTRACT

Visual cues are especially vital for hearing-impaired individuals such as cochlear implant (CI) users to understand speech in noise. Functional near-infrared spectroscopy (fNIRS) is a light-based imaging technology that is ideally suited for measuring the brain activity of CI users because of its compatibility with both the ferromagnetic and the electrical components of these implants. In a preliminary step toward better elucidating the behavioral and neural correlates of audiovisual (AV) speech integration in CI users, we designed a speech-in-noise task and measured the extent to which 24 normal-hearing individuals could integrate the audio of spoken monosyllabic words with the corresponding visual signals of a female speaker. In our behavioral task, audiovisual pairings provided average improvements of 103% and 197% over auditory-alone listening at -6 and -9 dB signal-to-noise ratios in multi-talker background noise. In an fNIRS task using similar stimuli, we measured activity during auditory-only listening, visual-only lipreading, and AV listening conditions. We identified cortical activity in all three conditions over regions of middle and superior temporal cortex typically associated with speech processing and audiovisual integration. In addition, three channels active during the lipreading condition showed uncorrected correlations with behavioral measures of audiovisual gain as well as with the McGurk effect. Further work focusing on the regions of interest identified in this study could test how AV speech integration may differ for CI users, who rely on this mechanism for daily communication.
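A minimal sketch of the audiovisual-gain measure reported above, i.e., the percent improvement of AV word-recognition accuracy over auditory-alone accuracy at a given SNR. The helper-function name and the accuracy values are invented; the values were merely chosen to land near the reported gains.

```python
# Hypothetical illustration of audiovisual (AV) gain over auditory-alone listening.
def av_gain_percent(av_correct: float, a_correct: float) -> float:
    """Percent improvement of AV accuracy over auditory-alone accuracy."""
    return 100.0 * (av_correct - a_correct) / a_correct

# Placeholder accuracies (proportions correct), not the study's data:
print(av_gain_percent(av_correct=0.61, a_correct=0.30))  # ~103% gain (cf. -6 dB SNR)
print(av_gain_percent(av_correct=0.50, a_correct=0.17))  # ~194% gain (cf. -9 dB SNR)
```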


Subject(s)
Cochlear Implants , Speech Perception , Female , Humans , Spectroscopy, Near-Infrared , Speech , Visual Perception
9.
Int J Dev Disabil ; 68(1): 47-55, 2022.
Article in English | MEDLINE | ID: mdl-35173963

ABSTRACT

A weaker McGurk effect is observed in individuals with autism spectrum disorder (ASD); this weaker integration is considered key to understanding how atypical low-order processing leads to maladaptive social behaviors. However, the mechanism behind this weaker McGurk effect has not been fully understood. Here, we investigated (1) whether the weaker McGurk effect in individuals with high autistic traits is caused by poor lip-reading ability and (2) whether the hearing environment modifies the weaker McGurk effect in individuals with high autistic traits. To address these questions, we conducted two analogue studies among university students, based on the dimensional model of ASD. Results showed that individuals with high autistic traits have intact lip-reading ability as well as intact abilities to listen to and recognize audiovisually congruent speech (Experiment 1). Furthermore, the weaker McGurk effect in individuals with high autistic traits, which appeared under the no-noise condition, disappeared under the high-noise condition (Experiments 1 and 2). Our findings suggest that high background noise might shift weight onto the visual cue, thereby increasing the strength of the McGurk effect among individuals with high autistic traits.

10.
Front Hum Neurosci ; 16: 1027335, 2022.
Article in English | MEDLINE | ID: mdl-36684833

ABSTRACT

We receive information about the world around us from multiple senses, which combine in a process known as multisensory integration. Multisensory integration has been shown to depend on attention; however, the neural mechanisms underlying this effect are poorly understood. The current study investigates whether changes in sensory noise explain the effect of attention on multisensory integration and whether attentional modulations of multisensory integration occur via modality-specific mechanisms. A task based on the McGurk illusion was used to measure multisensory integration while attention was manipulated via a concurrent auditory or visual task. Sensory noise was measured within each modality based on variability in unisensory performance and was used to predict attentional changes to McGurk perception. Consistent with previous studies, reports of the McGurk illusion decreased when accompanied by a secondary task; however, this effect was stronger for the secondary visual (as opposed to auditory) task. While auditory noise was not influenced by either secondary task, visual noise increased specifically with the addition of the secondary visual task. Interestingly, visual noise accounted for significant variability in attentional disruptions to the McGurk illusion. Overall, these results strongly suggest that sensory noise may underlie attentional alterations to multisensory integration in a modality-specific manner. Future studies are needed to determine whether this finding generalizes to other types of multisensory integration and attentional manipulations. This line of research may inform future studies of attentional alterations to sensory processing in neurological disorders such as schizophrenia, autism, and ADHD.
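As a sketch of the kind of analysis described above (using a change in unisensory noise to predict the attentional disruption of McGurk reports), here is a simple regression on simulated data. The effect size, noise levels, and variable names are assumptions, not values from the study.

```python
# Hypothetical sketch: does an increase in visual sensory noise (variability in
# unisensory performance) predict the drop in McGurk illusion reports under a
# secondary visual task? Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_participants = 40
visual_noise_increase = rng.normal(0.10, 0.05, n_participants)  # change in unisensory variability
mcgurk_drop = 0.5 * visual_noise_increase + rng.normal(0, 0.02, n_participants)  # drop in illusion rate

slope, intercept, r, p, stderr = stats.linregress(visual_noise_increase, mcgurk_drop)
print(f"slope = {slope:.2f}, r^2 = {r**2:.2f}, p = {p:.3f}")
```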

11.
Q J Exp Psychol (Hove) ; 75(5): 924-935, 2022 May.
Article in English | MEDLINE | ID: mdl-34427494

ABSTRACT

The other-race effect indicates a perceptual advantage when processing own-race faces. This effect has been demonstrated in individuals' recognition of facial identity and emotional expressions. However, it remains unclear whether the other-race effect also exists in multisensory domains. We conducted two experiments to provide evidence for the other-race effect in facial speech recognition, using the McGurk effect. Experiment 1 tested this issue among East Asian adults, examining the magnitude of the McGurk effect with stimuli produced by speakers of two different races (own-race vs. other-race). We found that own-race faces induced a stronger McGurk effect than other-race faces. Experiment 2 indicated that the other-race effect was not simply due to different levels of attention being paid to the mouths of own- and other-race speakers. Our findings demonstrate that own-race faces enhance the weight of visual input during audiovisual speech perception, and they provide evidence of the own-race effect in audiovisual interactions for speech perception in adults.


Subject(s)
Facial Recognition , Speech Perception , Adult , Attention , Humans , Recognition, Psychology , Speech
12.
Int J Behav Dev ; 45(5): 409-417, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34650316

ABSTRACT

Although hearing often declines with age, prior research has shown that older adults may benefit from multisensory input to a greater extent than younger adults, a phenomenon known as inverse effectiveness. While there is behavioral evidence in support of this phenomenon, less is known about its neural basis. The present fMRI study examined how older and younger adults processed multimodal auditory-visual (AV) phonemic stimuli that were either congruent or incongruent across modalities. Incongruent AV pairs were designed to elicit the McGurk effect. Behaviorally, reaction times were significantly faster during congruent trials than during incongruent trials for both age groups, and older adults responded more slowly overall. The interaction was not significant, suggesting that older adults processed the AV stimuli similarly to younger adults. Although behavioral differences were minimal, age-related differences in functional activation were identified: younger adults showed greater activation than older adults in primary sensory regions, including the superior temporal gyrus, the calcarine fissure, and the left postcentral gyrus. In contrast, older adults showed greater activation than younger adults in dorsal frontal regions, including the middle and superior frontal gyri, as well as in dorsal parietal regions. These data suggest that while behavioral sensitivity to multimodal stimuli is stable across age, the neural bases for this effect differ between older and younger adults. Our results demonstrate that older adults under-recruited primary sensory cortices and showed increased recruitment of regions involved in executive function, attention, and monitoring processes, which may reflect an attempt to compensate.

13.
Autism Res ; 14(12): 2592-2602, 2021 12.
Article in English | MEDLINE | ID: mdl-34415113

ABSTRACT

Autistic children show audiovisual speech integration deficits, though the underlying mechanisms remain unclear. The present study examined how audiovisual speech integration deficits in autistic children could be affected by their looking patterns. We measured audiovisual speech integration in 26 autistic children and 26 typically developing (TD) children (4- to 7-year-olds) using the McGurk task (a videotaped speaker uttering phonemes with her eyes open or closed) and tracked their eye movements. We found that, compared with TD children, autistic children showed weaker audiovisual speech integration (i.e., a weaker McGurk effect) in the open-eyes condition and similar audiovisual speech integration in the closed-eyes condition. Autistic children viewed the speaker's mouth less in non-McGurk trials than in McGurk trials in both conditions. Importantly, autistic children's weaker audiovisual speech integration could be predicted by their reduced mouth-looking time. The present study indicates that atypical face-viewing patterns could serve as one of the cognitive mechanisms underlying audiovisual speech integration deficits in autistic children. LAY SUMMARY: The McGurk effect occurs when the visual part of one phoneme (e.g., "ga") and the auditory part of another phoneme (e.g., "ba") uttered by a speaker are integrated into a fused percept (e.g., "da"). The present study examined how the McGurk effect in autistic children could be affected by their looking patterns for the speaker's face. We found that less looking time at the speaker's mouth in autistic children predicted a weaker McGurk effect. As the McGurk effect reflects audiovisual speech integration, our findings imply that audiovisual speech integration in autistic children might be improved by directing them to look at the speaker's mouth in future interventions.


Subject(s)
Autism Spectrum Disorder , Autistic Disorder , Speech Perception , Acoustic Stimulation , Autism Spectrum Disorder/complications , Autistic Disorder/complications , Child , Child, Preschool , Female , Humans , Mouth , Speech , Visual Perception
14.
Front Psychol ; 12: 515237, 2021.
Article in English | MEDLINE | ID: mdl-34354620

ABSTRACT

Learning to move from auditory signals to phonemic categories is a crucial component of first, second, and multilingual language acquisition. In L1 and simultaneous multilingual acquisition, learners build up phonological knowledge to structure their perception within a language. For sequential multilinguals, this knowledge may support or interfere with acquiring language-specific representations for a new phonemic categorization system. Syllable structure is a part of this phonological knowledge, and language-specific syllabification preferences influence language acquisition, including early word segmentation. As a result, we expect to see language-specific syllable structure influencing speech perception as well. Initial evidence of an effect appears in Ali et al. (2011), who argued that cross-linguistic differences in McGurk fusion within a syllable reflected listeners' language-specific syllabification preferences. Building on a framework from Cho and McQueen (2006), we argue that this could reflect the Phonological-Superiority Hypothesis (differences in L1 syllabification preferences make some syllabic positions harder to classify than others) or the Phonetic-Superiority Hypothesis (the acoustic qualities of speech sounds in some positions make it difficult to perceive unfamiliar sounds). However, their design does not distinguish between these two hypotheses. The current research study extends the work of Ali et al. (2011) by testing Japanese, and adding audio-only and congruent audio-visual stimuli to test the effects of syllabification preferences beyond just McGurk fusion. Eighteen native English speakers and 18 native Japanese speakers were asked to transcribe nonsense words in an artificial language. English allows stop consonants in syllable codas while Japanese heavily restricts them, but both groups showed similar patterns of McGurk fusion in stop codas. This is inconsistent with the Phonological-Superiority Hypothesis. However, when visual information was added, the phonetic influences on transcription accuracy largely disappeared. This is inconsistent with the Phonetic-Superiority Hypothesis. We argue from these results that neither acoustic informativity nor interference of a listener's phonological knowledge is superior, and sketch a cognitively inspired rational cue integration framework as a third hypothesis to explain how L1 phonological knowledge affects L2 perception.

15.
Acta Psychol (Amst) ; 218: 103354, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34174491

ABSTRACT

Multisensory integration, the process by which sensory information from different sensory modalities is bound together, is hypothesized to contribute to perceptual symptomatology in schizophrenia, a condition in which multisensory integration differences have been consistently found. Evidence is emerging that these differences extend across the schizophrenia spectrum, including to individuals in the general population with higher levels of schizotypal traits. In the current study, we used the McGurk task as a measure of multisensory integration. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher levels of schizotypal traits, specifically on the Unusual Perceptual Experiences and Odd Speech subscales, would be associated with decreased multisensory integration of speech. Surprisingly, Unusual Perceptual Experiences were not associated with multisensory integration. However, Odd Speech was associated with multisensory integration, and this association extended more broadly across the Disorganized factor of the SPQ, including Odd or Eccentric Behaviour. Individuals with higher Odd or Eccentric Behaviour scores also demonstrated poorer lip-reading abilities, which partially explained performance on the McGurk task. This suggests that aberrant perceptual processes affecting individuals across the schizophrenia spectrum may relate to disorganized symptomatology.


Subject(s)
Schizophrenia , Schizotypal Personality Disorder , Humans , Personality , Schizotypal Personality Disorder/diagnosis , Surveys and Questionnaires
16.
Atten Percept Psychophys ; 83(6): 2583-2598, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33884572

ABSTRACT

Visual speech cues play an important role in speech recognition, and the McGurk effect is a classic demonstration of this. In the original McGurk & Macdonald (Nature 264, 746-748, 1976) experiment, 98% of participants reported an illusory "fusion" percept of /d/ when listening to the spoken syllable /b/ and watching the visual speech movements for /g/. However, more recent work shows that subject and task differences influence the proportion of fusion responses. In the current study, we varied task (forced-choice vs. open-ended), stimulus set (including /d/ exemplars vs. not), and data-collection environment (lab vs. Mechanical Turk) to investigate the robustness of the McGurk effect. Across experiments, using the same stimuli to elicit the McGurk effect, we found fusion responses ranging from 10% to 60%, showing large variability in the likelihood of experiencing the McGurk effect across factors that are unrelated to the perceptual information provided by the stimuli. We therefore argue that, rather than being a robust perceptual illusion, the McGurk effect exists only for some individuals under specific task conditions. Significance: This series of studies re-evaluates the classic McGurk effect, which shows the relevance of visual cues for speech perception. We highlight the importance of taking into account subject variables and task differences, and challenge future researchers to think carefully about the perceptual basis of the McGurk effect, how it is defined, and what it can tell us about audiovisual integration in speech.
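Given fusion rates ranging from 10% to 60% across factors unrelated to the stimuli, per-participant estimates carry substantial binomial uncertainty. The sketch below shows one way to attach a confidence interval to an individual fusion rate; the trial counts are hypothetical, and the Wilson interval is simply one reasonable choice, not necessarily what the authors used.

```python
# Hypothetical sketch: fusion rate with a 95% Wilson confidence interval.
from statsmodels.stats.proportion import proportion_confint

fusion_trials, total_trials = 12, 40   # invented counts for one participant
rate = fusion_trials / total_trials
lo, hi = proportion_confint(fusion_trials, total_trials, alpha=0.05, method="wilson")
print(f"fusion rate = {rate:.0%}, 95% CI [{lo:.0%}, {hi:.0%}]")
```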


Subject(s)
Illusions , Speech Perception , Auditory Perception , Humans , Speech , Visual Perception
17.
Proc Biol Sci ; 288(1943): 20202419, 2021 01 27.
Article in English | MEDLINE | ID: mdl-33499783

ABSTRACT

Beat gestures, spontaneously produced biphasic movements of the hand, are among the most frequently encountered co-speech gestures in human communication. They are closely aligned in time with the prosodic characteristics of the speech signal, typically occurring on lexically stressed syllables. Despite their prevalence across speakers of the world's languages, how beat gestures impact spoken word recognition is unclear. Can these simple 'flicks of the hand' influence speech perception? Across a range of experiments, we demonstrate that beat gestures influence the explicit and implicit perception of lexical stress (e.g., distinguishing OBject from obJECT), and in turn can influence what vowels listeners hear. Thus, we provide converging evidence for a manual McGurk effect: relatively simple and widely occurring hand movements influence which speech sounds we hear.


Subject(s)
Gestures , Speech Perception , Humans , Language , Phonetics , Speech
18.
Epilepsy Behav ; 114(Pt A): 107600, 2021 01.
Article in English | MEDLINE | ID: mdl-33248941

ABSTRACT

BACKGROUND: The McGurk effect is a perceptual phenomenon that demonstrates an interaction between hearing and vision in speech perception. A wide range of neuropsychological deficits that affect multimodal integration in speech perception has been described in people with long-standing epilepsy, making patients with refractory epilepsy well suited for testing the McGurk effect. MATERIALS AND METHODS: We studied the McGurk effect in 50 patients diagnosed with medically refractory left- or right-hemispheric epilepsy on the basis of clinical, radiological, and electrophysiological data. RESULTS: The McGurk effect was perceived more often (p = 0.006) in patients with left hemispheric epilepsy (n = 12, 71%) than in those with right hemispheric epilepsy (n = 5, 29%). Other factors that compromised perception of the McGurk effect were impairments in visual memory (p = 0.041), facial emotion recognition (p = 0.001), and lip-reading (p = 0.006). Perception of the McGurk effect was significantly reduced (p = 0.006) when the duration of epilepsy was 10 years or more. CONCLUSION: The McGurk effect can be used in refractory epilepsy patients to detect subtle abnormalities in speech perception before significant, irreversible speech and language dysfunction becomes evident.


Subject(s)
Drug Resistant Epilepsy , Facial Recognition , Speech Perception , Drug Resistant Epilepsy/diagnosis , Humans , Speech , Visual Perception
19.
Conscious Cogn ; 86: 103030, 2020 11.
Article in English | MEDLINE | ID: mdl-33120291

ABSTRACT

Multisensory integration, the binding of sensory information from different sensory modalities, may contribute to perceptual symptomatology in schizophrenia, including hallucinations and aberrant speech perception. Differences in multisensory integration and temporal processing, an important component of multisensory integration, are consistently found in schizophrenia. Evidence is emerging that these differences extend across the schizophrenia spectrum, including individuals in the general population with higher schizotypal traits. In the current study, we investigated the relationship between schizotypal traits and perceptual functioning, using audiovisual speech-in-noise, McGurk, and ternary synchrony judgment tasks. We measured schizotypal traits using the Schizotypal Personality Questionnaire (SPQ), hypothesizing that higher scores on Unusual Perceptual Experiences and Odd Speech subscales would be associated with decreased multisensory integration, increased susceptibility to distracting auditory speech, and less precise temporal processing. Surprisingly, these measures were not associated with the predicted subscales, suggesting that these perceptual differences may not be present across the schizophrenia spectrum.


Subject(s)
Schizophrenia , Schizotypal Personality Disorder , Speech Perception , Time Perception , Auditory Perception , Humans , Speech
20.
Atten Percept Psychophys ; 82(7): 3544-3557, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32533526

ABSTRACT

Seeing a talker's face can aid audiovisual (AV) integration when speech is presented in noise. However, few studies have simultaneously manipulated auditory and visual degradation. We aimed to establish how degrading the auditory and visual signal affected AV integration. Where people look on the face in this context is also of interest; Buchan, Paré and Munhall (Brain Research, 1242, 162-171, 2008) found fixations on the mouth increased in the presence of auditory noise whilst Wilson, Alsius, Paré and Munhall (Journal of Speech, Language, and Hearing Research, 59(4), 601-615, 2016) found mouth fixations decreased with decreasing visual resolution. In Condition 1, participants listened to clear speech, and in Condition 2, participants listened to vocoded speech designed to simulate the information provided by a cochlear implant. Speech was presented in three levels of auditory noise and three levels of visual blurring. Adding noise to the auditory signal increased McGurk responses, while blurring the visual signal decreased McGurk responses. Participants fixated the mouth more on trials when the McGurk effect was perceived. Adding auditory noise led to people fixating the mouth more, while visual degradation led to people fixating the mouth less. Combined, the results suggest that modality preference and where people look during AV integration of incongruent syllables varies according to the quality of information available.
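Studies like this one present speech at fixed signal-to-noise ratios. A generic sketch of how a target SNR in dB can be set when mixing a speech signal with background noise follows; the arrays are synthetic placeholders, and real stimuli would be read from audio files.

```python
# Generic SNR mixing: scale the noise so that 10*log10(P_speech / P_noise) = snr_db.
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    p_speech = np.mean(speech ** 2)                    # signal power
    p_noise = np.mean(noise ** 2)                      # current noise power
    target_p_noise = p_speech / (10 ** (snr_db / 10))  # noise power needed for target SNR
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(2)
speech = rng.normal(0, 0.1, 16000)  # 1 s of placeholder "speech" at 16 kHz
noise = rng.normal(0, 0.1, 16000)   # placeholder multi-talker babble
mixed = mix_at_snr(speech, noise, snr_db=-6.0)
```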


Subject(s)
Eye Movements , Speech Perception , Auditory Perception , Humans , Speech , Visual Perception