Results 1 - 20 of 35
1.
J Acoust Soc Am ; 148(6): 3709, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33379900

ABSTRACT

In this study, both between-subject and within-subject variability in speech perception and speech production were examined in the same set of speakers. Perceptual acuity was determined using an ABX auditory discrimination task, whereby speakers made judgments between pairs of syllables on a /ɛ/ to /æ/ acoustic continuum. Auditory feedback perturbations of the first two formants were implemented in a production task to obtain measures of compensation, normal speech production variability, and vowel spacing. Speakers repeated the word "head" 120 times under varying feedback conditions, with the final Hold phase involving the strongest perturbations of +240 Hz in F1 and -300 Hz in F2. Multiple regression analyses were conducted to determine whether individual differences in compensatory behavior in the Hold phase could be predicted by perceptual acuity, speech production variability, and vowel spacing. Perceptual acuity significantly predicted formant changes in F1, but not in F2. These results are discussed in consideration of the importance of using larger sample sizes in the field and developing new methods to explore feedback processing at the individual participant level. The potential positive role of variability in speech motor control is also considered.
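As a concrete illustration of the analysis described above, here is a minimal sketch of a multiple regression predicting each speaker's F1 compensation from perceptual acuity, production variability, and vowel spacing. All data and variable names are simulated stand-ins, not values from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_speakers = 40

acuity = rng.normal(size=n_speakers)       # ABX discrimination acuity (z-scored)
variability = rng.normal(size=n_speakers)  # baseline F1 production variability
spacing = rng.normal(size=n_speakers)      # vowel-spacing measure
# Simulated outcome: F1 change (Hz) opposing the +240 Hz perturbation,
# driven here (by construction) only by perceptual acuity.
f1_compensation = -30 * acuity + rng.normal(scale=20, size=n_speakers)

# Multiple regression: does acuity predict compensation over and above
# production variability and vowel spacing?
X = sm.add_constant(np.column_stack([acuity, variability, spacing]))
print(sm.OLS(f1_compensation, X).fit().summary())
```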

2.
J Acoust Soc Am ; 141(4): 2758, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28464659

ABSTRACT

The interaction of language production and perception has been substantiated by empirical studies in which speakers adjust their articulation in response to manipulated renditions of their own voice heard in real time as auditory feedback. A recent study by Max and Maffett [(2015). Neurosci. Lett. 591, 25-29] reported an absence of compensation (i.e., auditory-motor learning) for frequency-shifted formants when auditory feedback was delayed by 100 ms. In the present study, only the first formant was manipulated while auditory feedback was delayed systematically. In experiment 1, a small yet significant compensation was observed even with 100 ms of auditory delay, unlike in the earlier report. This result suggests that tolerance of feedback delay depends on the type of auditory error being processed. Experiment 2 revealed that the amount of formant compensation had an inverse linear relationship with the amount of auditory delay. One speculated mechanism for these results is that, as auditory delay increases, the undelayed (and unperturbed) somatosensory feedback is weighted more heavily for controlling the accuracy of vowel formants.


Subject(s)
Feedback, Sensory; Learning; Motor Activity; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Adolescent; Adult; Auditory Threshold; Female; Humans; Noise/adverse effects; Perceptual Masking; Speech Production Measurement; Time Factors; Young Adult
3.
J Acoust Soc Am ; 142(2): 838, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28863596

ABSTRACT

Previous research has shown that speakers can adapt their speech flexibly as a function of a variety of contextual and task factors. While speech tasks are known to play a role in speech motor behavior, it remains to be explored whether the manner in which a speaking action is initiated can modify low-level, automatic control of vocal motor action. In this study, the nature (linguistic vs non-linguistic) and modality (auditory vs visual) of the go signal (i.e., the prompt) were manipulated in an otherwise identical vocal production task. Participants were instructed to produce the word "head" when prompted, and the auditory feedback they received was altered by systematically changing the first formant of the vowel /ε/ in real time using a custom signal processing system. Linguistic prompts induced greater corrective responses to the acoustic perturbations than non-linguistic prompts. This suggests that the accepted variance for the intended speech sound decreases when external linguistic templates are provided to the speaker. Overall, this result shows that the automatic correction of vocal errors is influenced by flexible, context-dependent mechanisms.


Subject(s)
Feedback, Sensory; Linguistics; Speech Acoustics; Speech Perception; Voice Quality; Acoustic Stimulation; Acoustics; Adolescent; Adult; Auditory Threshold; Female; Humans; Male; Photic Stimulation; Signal Processing, Computer-Assisted; Speech Production Measurement; Visual Perception; Young Adult
4.
J Acoust Soc Am ; 138(1): 413-24, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26233040

ABSTRACT

Past studies have shown that speakers spontaneously adjust their speech acoustics in response to auditory feedback perturbed in real time. In the case of formant perturbation, the majority of studies have examined speakers' compensatory production using the English vowel /ɛ/, as in the word "head." Consistent behavioral observations have been reported, and there is lively discussion as to how the production system integrates auditory versus somatosensory feedback to control vowel production. However, different vowels involve different oral sensations and proprioceptive information due to differences in the degree of lingual contact or jaw openness, which may in turn influence the ways in which speakers compensate for auditory feedback. The aim of the current study was to examine speakers' compensatory behavior with six English monophthongs. Specifically, the study tested whether "closed vowels" would show less compensatory production than "open vowels," because the strong lingual sensation of closed vowels may richly specify production via somatosensory feedback. Results showed that speakers indeed exhibited less compensatory production with the closed vowels. Thus, sensorimotor control is not fixed across all vowels; instead, it exerts different influences across different vowels.


Subject(s)
Feedback, Sensory/physiology; Phonation/physiology; Phonetics; Speech Acoustics; Adolescent; Adult; Canada; Female; Humans; Language; United States/ethnology; Young Adult
5.
J Neurosci ; 33(10): 4339-48, 2013 Mar 06.
Article in English | MEDLINE | ID: mdl-23467350

ABSTRACT

The everyday act of speaking involves the complex processes of speech motor control. An important component of control is monitoring, detection, and processing of errors when auditory feedback does not correspond to the intended motor gesture. Here we show, using fMRI and converging operations within a multivoxel pattern analysis framework, that this sensorimotor process is supported by functionally differentiated brain networks. During scanning, a real-time speech-tracking system was used to deliver two acoustically different types of distorted auditory feedback or unaltered feedback while human participants were vocalizing monosyllabic words, and to present the same auditory stimuli while participants were passively listening. Whole-brain analysis of neural-pattern similarity revealed three functional networks that were differentially sensitive to distorted auditory feedback during vocalization, compared with during passive listening. One network of regions appears to encode an "error signal" regardless of acoustic features of the error: this network, including right angular gyrus, right supplementary motor area, and bilateral cerebellum, yielded consistent neural patterns across acoustically different, distorted feedback types, only during articulation (not during passive listening). In contrast, a frontotemporal network appears sensitive to the speech features of auditory stimuli during passive listening; this preference for speech features was diminished when the same stimuli were presented as auditory concomitants of vocalization. A third network, showing a distinct functional pattern from the other two, appears to capture aspects of both neural response profiles. Together, our findings suggest that auditory feedback processing during speech motor control may rely on multiple, interactive, functionally differentiated neural systems.
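As an illustration of the neural-pattern-similarity logic described above, the sketch below correlates a region's multivoxel response patterns across two acoustically different distorted-feedback types, separately for speaking and passive listening; an "error-signal" region would show consistent patterns only during speaking. The data are simulated placeholders, not fMRI measurements.

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 200

def pattern_similarity(p1, p2):
    """Pearson correlation between two multivoxel activity patterns."""
    return np.corrcoef(p1, p2)[0, 1]

# Hypothetical voxel patterns for two acoustically different distortion
# types, in each task context (speaking vs passive listening).
speak_a, speak_b = rng.normal(size=(2, n_voxels))
listen_a, listen_b = rng.normal(size=(2, n_voxels))

# An "error-signal" region should show similar patterns across distortion
# types during speaking, but not during passive listening.
print("speaking similarity:", pattern_similarity(speak_a, speak_b))
print("listening similarity:", pattern_similarity(listen_a, listen_b))
```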


Subject(s)
Auditory Pathways/physiology; Auditory Perception/physiology; Brain Mapping; Brain/physiology; Feedback, Sensory/physiology; Speech/physiology; Acoustic Stimulation; Adult; Auditory Pathways/blood supply; Brain/blood supply; Female; Humans; Image Processing, Computer-Assisted; Magnetic Resonance Imaging; Male; Oxygen/blood; Reaction Time/physiology; Young Adult
6.
J Acoust Soc Am ; 135(5): 2986-94, 2014 May.
Article in English | MEDLINE | ID: mdl-24815278

ABSTRACT

Previous research employing a real-time auditory perturbation paradigm has shown that talkers monitor their own speech attributes, such as fundamental frequency, vowel intensity, vowel formants, and fricative noise, as part of speech motor control. In the case of vowel formants or fricative noise, what was manipulated was spectral information about the filter function of the vocal tract. However, segments can be contrasted by parameters other than spectral configuration, and the feedback system may monitor phonation timing in the way it monitors spectral information. This study examined whether talkers exhibit compensatory behavior when information about voicing is manipulated. When talkers received feedback of the cognate of the intended voicing category (saying "tipper" while hearing "dipper" or vice versa), they changed their voice onset time and, in some cases, the following vowel.


Subject(s)
Feedback, Psychological/physiology; Feedback, Sensory/physiology; Perceptual Distortion/physiology; Phonation/physiology; Speech Perception/physiology; Adaptation, Physiological/physiology; Adolescent; Computer Systems; Female; Humans; Motor Skills/physiology; Noise; Psychoacoustics; Speech Production Measurement; Time Factors; Young Adult
7.
Psychol Sci ; 24(4): 423-31, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23462756

ABSTRACT

Mounting physiological and behavioral evidence has shown that the detectability of a visual stimulus can be enhanced by a simultaneously presented sound. The mechanisms underlying these cross-sensory effects, however, remain largely unknown. Using continuous flash suppression (CFS), we rendered a complex, dynamic visual stimulus (i.e., a talking face) consciously invisible to participants. We presented the visual stimulus together with a suprathreshold auditory stimulus (i.e., a voice speaking a sentence) that either matched or mismatched the lip movements of the talking face. We compared how long it took for the talking face to overcome interocular suppression and become visible to participants in the matched and mismatched conditions. Our results showed that the detection of the face was facilitated by the presentation of a matching auditory sentence, in comparison with the presentation of a mismatching sentence. This finding indicates that the registration of audiovisual correspondences occurs at an early stage of processing, even when the visual information is blocked from conscious awareness.


Subject(s)
Awareness/physiology; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Consciousness/physiology; Female; Humans; Inhibition, Psychological; Male; Photic Stimulation; Signal Detection, Psychological; Young Adult
8.
J Acoust Soc Am ; 133(5): 2993-3003, 2013 May.
Article in English | MEDLINE | ID: mdl-23654403

ABSTRACT

The representation of speech goals was explored using an auditory feedback paradigm. When talkers produce vowels whose formant structure is perturbed in real time, they compensate to preserve the intended goal: when vowel formants are shifted up or down in frequency, participants change their formant frequencies in the direction opposite to the feedback perturbation. In this experiment, the specificity of vowel representation was explored by examining the magnitude of vowel compensation when the second formant frequency of a vowel was perturbed for speakers of two different languages (English and French). Even though the target vowel was the same for both language groups, the pattern of compensation differed. French speakers compensated for smaller perturbations and made larger compensations overall. Moreover, French speakers modified the third formant in their vowels to strengthen the compensation, even though the third formant was not perturbed; English speakers did not alter their third formant. Changes in the perceptual goodness ratings by the two groups of participants were consistent with the threshold for initiating vowel compensation in production. These results suggest that vowel goals specify not only the quality of the vowel but also its relationship to the vowel space of the spoken language.


Subject(s)
Phonetics; Speech Acoustics; Speech Production Measurement; Voice Quality; Adult; Feedback, Sensory; Female; Humans; Signal Processing, Computer-Assisted; Speech Perception; Time Factors; Young Adult
9.
J Exp Psychol Gen ; 152(6): 1598-1621, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36795429

ABSTRACT

To maintain efficiency during conversation, interlocutors form and retrieve memory representations for the shared understanding, or common ground, that they have with their partner. Here, an online referential communication task (RCT) was used in two experiments to examine whether the strength and type of common ground between dyads influence their ability to form and recall referential labels for images. Results from both experiments show a significant association between the strength of common ground formed between dyads for images during the RCT and their verbatim, but not semantic, recall memory for image descriptions about a week later. Participants who generated the image descriptions during the RCT also showed superior verbatim and semantic recall. In Experiment 2, a group of friends with pre-existing personal common ground were significantly more efficient in their use of words to describe images during the RCT than a group of strangers without personal common ground. However, personal common ground did not lead to enhanced recall. Together, these findings provide evidence that individuals can remember some verbatim words and phrases from conversations, and they partially support the theoretical notion that common ground and memory are intricately linked conversational processes. The null findings for semantic recall suggest that the structured nature of the RCT may have constrained the types of memory representations individuals formed during the interaction. Findings are discussed in relation to the multidimensional nature of common ground and the importance of developing more natural conversational tasks for future work.


Subject(s)
Communication; Memory; Humans; Mental Recall; Friends; Cognition
10.
Front Hum Neurosci ; 16: 905365, 2022.
Article in English | MEDLINE | ID: mdl-36092651

ABSTRACT

Sensory information, including auditory feedback, is used by talkers to maintain fluent speech articulation. Current models of speech motor control posit that speakers continually adjust their motor commands based on discrepancies between the sensory predictions made by a forward model and the sensory consequences of their speech movements. Here, in two within-subject design experiments, we used a real-time formant manipulation system to explore how reliant speech articulation is on the accuracy or predictability of auditory feedback information. This involved introducing random formant perturbations during vowel production that varied systematically in their spatial location in formant space (Experiment 1) and temporal consistency (Experiment 2). Our results indicate that, on average, speakers' responses to auditory feedback manipulations varied based on the relevance and degree of the error that was introduced in the various feedback conditions. In Experiment 1, speakers' average production was not reliably influenced by random perturbations that were introduced every utterance to the first (F1) and second (F2) formants in various locations of formant space that had an overall average of 0 Hz. However, when perturbations were applied that had a mean of +100 Hz in F1 and -125 Hz in F2, speakers demonstrated reliable compensatory responses that reflected the average magnitude of the applied perturbations. In Experiment 2, speakers did not significantly compensate for perturbations of varying magnitudes that were held constant for one and three trials at a time. Speakers' average productions did, however, significantly deviate from a control condition when perturbations were held constant for six trials. Within the context of these conditions, our findings provide evidence that the control of speech movements is, at least in part, dependent upon the reliability and stability of the sensory information that it receives over time.
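A hedged sketch of the two perturbation regimes described above: per-utterance random F1/F2 shifts with either a 0 Hz mean (as in Experiment 1's baseline manipulation) or a mean of +100 Hz in F1 and -125 Hz in F2. The spread and sampling distribution are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
n_utterances = 100

# Zero-mean condition: per-utterance random (F1, F2) shifts whose average
# across the session is 0 Hz.
zero_mean = rng.normal(loc=[0.0, 0.0], scale=[100.0, 125.0],
                       size=(n_utterances, 2))

# Biased condition: same scatter, but centred on +100 Hz (F1), -125 Hz (F2).
biased = rng.normal(loc=[100.0, -125.0], scale=[100.0, 125.0],
                    size=(n_utterances, 2))

print(zero_mean.mean(axis=0))  # approximately [0, 0]
print(biased.mean(axis=0))     # approximately [100, -125]
```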

11.
J Acoust Soc Am ; 129(2): 955-65, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21361452

ABSTRACT

Two auditory feedback perturbation experiments were conducted to examine the nature of control of the first two formants in vowels. In the first experiment, talkers heard their auditory feedback with either F1 or F2 shifted in frequency. Talkers altered production of the perturbed formant by changing its frequency in the direction opposite to the perturbation but did not produce a correlated alteration of the unperturbed formant. Thus, the motor control system is capable of fine-grained, independent control of F1 and F2. In the second experiment, a large meta-analysis was conducted on data from talkers who received feedback in which both F1 and F2 had been perturbed. A moderate correlation was found between individual compensations in F1 and F2, suggesting that the control of F1 and F2 is processed in a common manner at some level. While a wide range of individual compensation magnitudes was observed, no significant correlations were found between individuals' compensations and vowel space differences, or between individuals' compensations and variability in normal vowel production. Further, when receiving normal auditory feedback, most participants exhibited no significant correlation between the natural variation in their production of F1 and F2.


Subject(s)
Feedback, Sensory; Motor Neurons/physiology; Speech Acoustics; Speech Intelligibility; Speech Perception; Adolescent; Auditory Threshold; Female; Humans; Speech Production Measurement; Young Adult
12.
J Acoust Soc Am ; 130(5): 2978-86, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22087926

ABSTRACT

Past studies have shown that when formants are perturbed in real time, speakers spontaneously compensate for the perturbation by changing their formant frequencies in the opposite direction to the perturbation. Further, the pattern of these results suggests that the processing of auditory feedback error operates at a purely acoustic level. This hypothesis was tested by comparing the responses of three language groups to real-time formant perturbations: (1) native English speakers producing an English vowel /ε/, (2) native Japanese speakers producing a Japanese vowel /e̞/, and (3) native Japanese speakers learning English, producing /ε/. All three groups showed similar production patterns when F1 was decreased; however, when F1 was increased, the Japanese groups did not compensate as much as the native English speakers. Due to this asymmetry, the hypothesis that compensatory production for formant perturbation operates at a purely acoustic level was rejected. Rather, some level of phonological processing influences feedback processing behavior.


Subject(s)
Feedback, Psychological; Multilingualism; Phonetics; Speech Acoustics; Speech Perception; Adolescent; Analysis of Variance; Female; Humans; Speech Production Measurement; Time Factors; Young Adult
13.
J Cogn Neurosci ; 22(8): 1770-81, 2010 Aug.
Article in English | MEDLINE | ID: mdl-19642886

ABSTRACT

The fluency and the reliability of speech production suggest a mechanism that links motor commands and sensory feedback. Here, we examined the neural organization supporting such links by using fMRI to identify regions in which activity during speech production is modulated according to whether auditory feedback matches the predicted outcome or not and by examining the overlap with the network recruited during passive listening to speech sounds. We used real-time signal processing to compare brain activity when participants whispered a consonant-vowel-consonant word ("Ted") and either heard this clearly or heard voice-gated masking noise. We compared this to when they listened to yoked stimuli (identical recordings of "Ted" or noise) without speaking. Activity along the STS and superior temporal gyrus bilaterally was significantly greater if the auditory stimulus was (a) processed as the auditory concomitant of speaking and (b) did not match the predicted outcome (noise). The network exhibiting this Feedback Type x Production/Perception interaction includes a superior temporal gyrus/middle temporal gyrus region that is activated more when listening to speech than to noise. This is consistent with speech production and speech perception being linked in a control system that predicts the sensory outcome of speech acts and that processes an error signal in speech-sensitive regions when this and the sensory data do not match.


Subject(s)
Brain Mapping; Brain/physiology; Feedback, Sensory/physiology; Speech Perception/physiology; Speech/physiology; Acoustic Stimulation/methods; Adolescent; Adult; Auditory Perception; Brain/blood supply; Female; Functional Laterality; Humans; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Male; Middle Aged; Phonetics; Young Adult
14.
J Acoust Soc Am ; 127(2): 1059-68, 2010 Feb.
Article in English | MEDLINE | ID: mdl-20136227

ABSTRACT

Previous auditory perturbation studies have demonstrated that talkers spontaneously compensate for real-time formant shifts by altering formant production in a manner opposite to the perturbation. Here, two experiments were conducted to examine the effect of the amplitude of perturbation on compensatory behavior for the vowel /ɛ/. In the first experiment, 20 male talkers received three step-changes in acoustic feedback: F1 was increased by 50, 100, and 200 Hz, while F2 was simultaneously decreased by 75, 125, and 250 Hz. In the second experiment, 21 male talkers received acoustic feedback in which the shifts in F1 and F2 were incremented by +4 and -5 Hz on each utterance, to a maximum of +350 and -450 Hz, respectively. In both experiments, talkers altered production of F1 and F2 in a manner opposite to that of the formant-shift perturbation. Compensation was approximately 25%-30% of the perturbation magnitude for shifts in F1 and F2 up to 200 and 250 Hz, respectively. As larger shifts were applied, compensation reached a plateau and then decreased. The similarity of results across experiments suggests that the compensatory response depends on the perturbation magnitude but not on the rate at which the perturbation is introduced.
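The incremental schedule from the second experiment is simple to make explicit. The sketch below generates the per-utterance F1/F2 shifts (+4 and -5 Hz per trial, capped at +350 and -450 Hz); the trial count is illustrative, not taken from the study.

```python
# Per-utterance shift schedule: +4 Hz (F1) and -5 Hz (F2) per trial,
# capped at +350 and -450 Hz respectively.
n_trials = 120  # hypothetical session length; enough to reach both caps

f1_shift = [min(4 * t, 350) for t in range(n_trials + 1)]
f2_shift = [max(-5 * t, -450) for t in range(n_trials + 1)]

print(f1_shift[:4], "...", f1_shift[-1])  # [0, 4, 8, 12] ... 350
print(f2_shift[:4], "...", f2_shift[-1])  # [0, -5, -10, -15] ... -450
```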


Subject(s)
Feedback, Psychological; Speech Acoustics; Speech Perception; Speech; Acoustic Stimulation; Adolescent; Algorithms; Analysis of Variance; Humans; Male; Phonetics; Voice; Young Adult
15.
Neuroimage ; 47(4): 1522-31, 2009 Oct 01.
Article in English | MEDLINE | ID: mdl-19481162

ABSTRACT

Conventional group analysis of functional MRI (fMRI) data usually involves spatial alignment of anatomy across participants by registering every brain image to an anatomical reference image. Due to the high degree of inter-subject anatomical variability, a low-resolution average anatomical model is typically used as the target template, and/or smoothing kernels are applied to the fMRI data to increase the overlap among subjects' image data. However, such smoothing can make it difficult to resolve small regions such as subregions of auditory cortex when anatomical morphology varies among subjects. Here, we use data from an auditory fMRI study to show that using a high-dimensional registration technique (HAMMER) results in an enhanced functional signal-to-noise ratio (fSNR) for functional data analysis within auditory regions, with more localized activation patterns. The technique is validated against DARTEL, a high-dimensional diffeomorphic registration, as well as against commonly used low-dimensional normalization techniques such as the techniques provided with SPM2 (cosine basis functions) and SPM5 (unified segmentation) software packages. We also systematically examine how spatial resolution of the template image and spatial smoothing of the functional data affect the results. Only the high-dimensional technique (HAMMER) appears to be able to capitalize on the excellent anatomical resolution of a single-subject reference template, and, as expected, smoothing increased fSNR, but at the cost of spatial resolution. In general, results demonstrate significant improvement in fSNR using HAMMER compared to analysis after normalization using DARTEL, or conventional normalization such as cosine basis function and unified segmentation in SPM, with more precisely localized activation foci, at least for activation in the region of auditory cortex.


Subject(s)
Auditory Cortex/anatomy & histology; Auditory Cortex/physiology; Evoked Potentials, Auditory/physiology; Image Enhancement/methods; Magnetic Resonance Imaging/methods; Temporal Lobe/anatomy & histology; Temporal Lobe/physiology; Algorithms; Brain Mapping/methods; Female; Humans; Male; Reproducibility of Results; Sensitivity and Specificity; Young Adult
16.
J Acoust Soc Am ; 124(4): 2283-90, 2008 Oct.
Article in English | MEDLINE | ID: mdl-19062866

ABSTRACT

This paper presents an analysis of facial motion during speech to identify linearly independent kinematic regions. The data consist of three-dimensional displacement records of a set of markers located on a subject's face during speech production. A QR factorization with column pivoting selects a subset of markers with linearly independent motion patterns. This subset is used as a basis to fit the motion of the remaining facial markers, which determines the facial region of influence of each of the linearly independent markers. Those regions constitute kinematic "eigenregions" whose combined motion produces the total motion of the face. Facial animations may be generated by driving the independent markers with collected displacement records.
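The marker-selection step lends itself to a short sketch: QR factorization with column pivoting orders marker trajectories by how much linearly independent motion each contributes, and the remaining markers are then fit by least squares on the selected basis. Shapes and data below are hypothetical placeholders, not the study's recordings.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(3)
n_frames, n_markers = 500, 30  # hypothetical marker-displacement recording
X = rng.standard_normal((n_frames, n_markers))  # stand-in for real data

# Column pivoting orders markers by how much new, linearly independent
# motion each column (marker trajectory) contributes.
_, _, piv = qr(X, mode='economic', pivoting=True)

k = 6                 # number of independent markers to keep
B = X[:, piv[:k]]     # basis trajectories (selected markers)
rest = X[:, piv[k:]]  # remaining markers, to be fit from the basis

# Least-squares fit; the weights define each basis marker's region of
# influence over the rest of the face.
W, *_ = np.linalg.lstsq(B, rest, rcond=None)
residual = np.linalg.norm(B @ W - rest) / np.linalg.norm(rest)
print("relative fit error:", residual)
```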


Subject(s)
Algorithms; Facial Expression; Facial Muscles/physiology; Models, Anatomic; Models, Biological; Speech/physiology; Biomechanical Phenomena; Computer Graphics; Humans; Imaging, Three-Dimensional
17.
J Acoust Soc Am ; 123(1): 397-413, 2008 Jan.
Article in English | MEDLINE | ID: mdl-18177169

ABSTRACT

The present study investigated the extent to which native English listeners' perception of Japanese length contrasts can be modified with perceptual training, and how their performance is affected by factors that influence segment duration, which is a primary correlate of Japanese length contrasts. Listeners were trained in a minimal-pair identification paradigm with feedback, using isolated words contrasting in vowel length, produced at a normal speaking rate. Experiment 1 tested listeners using stimuli varying in speaking rate, presentation context (in isolation versus embedded in carrier sentences), and type of length contrast. Experiment 2 examined whether performance varied by the position of the contrast within the word, and by whether the test talkers were professionally trained or not. Results did not show that trained listeners improved overall performance to a greater extent than untrained control participants. Training improved perception of trained contrast types, generalized to nonprofessional talkers' productions, and improved performance in difficult within-word positions. However, training did not enable listeners to cope with speaking rate variation, and did not generalize to untrained contrast types. These results suggest that perceptual training improves non-native listeners' perception of Japanese length contrasts only to a limited extent.


Subject(s)
Language; Phonetics; Speech Perception; Teaching; Adult; Aged; Female; Humans; Japan; Learning; Linguistics; Male; Middle Aged; Speech Production Measurement; United States
18.
Multisens Res ; 31(1-2): 111-144, 2018 Jan 01.
Article in English | MEDLINE | ID: mdl-31264597

ABSTRACT

Since its discovery 40 years ago, the McGurk illusion has usually been cited as a paradigmatic case of multisensory binding in humans, and it has been used extensively in speech perception studies as a proxy measure for audiovisual integration mechanisms. Despite the well-established practice of using the McGurk illusion as a tool for studying the mechanisms underlying audiovisual speech integration, the magnitude of the illusion varies enormously across studies. Furthermore, the processing of McGurk stimuli differs from congruent audiovisual processing at both the phenomenological and neural levels. This calls into question the suitability of the illusion as a tool for quantifying the necessary and sufficient conditions under which audiovisual integration occurs in natural conditions. In this paper, we review some of the practical and theoretical issues related to the use of the McGurk illusion as an experimental paradigm. We believe that, without a richer understanding of the mechanisms involved in processing the McGurk effect, experimenters should be cautious when generalizing from data generated by McGurk stimuli to matching audiovisual speech events.

19.
Early Interv Psychiatry ; 12(6): 1217-1221, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29235251

ABSTRACT

AIM: Psychotic-like experiences (PLEs) share several risk factors with psychotic disorders and confer greater risk of developing a psychotic disorder. Thus, individuals with PLEs not only comprise a valuable population in which to study the aetiology and premorbid changes associated with psychosis, but also represent a high-risk population that could benefit from clinical monitoring or early intervention efforts. METHOD: We examined the score distribution and factor structure of the current 15-item Community Assessment of Psychic Experiences-Positive Scale (CAPE-P15) in a Canadian sample. The CAPE-P15, which measures current PLEs in the general population, was completed by 1741 university students. RESULTS: The distribution of total scores was positively skewed, and confirmatory factor analysis indicated that a 3-factor structure produced the best fit. CONCLUSION: The CAPE-P15 has a similar score distribution and consistently measures three types of positive PLEs, namely persecutory ideation, bizarre experiences, and perceptual abnormalities, whether administered in Canada or Australia.


Subject(s)
Psychiatric Status Rating Scales/statistics & numerical data; Psychotic Disorders/diagnosis; Adolescent; Adult; Canada; Factor Analysis, Statistical; Female; Humans; Male; Prodromal Symptoms; Risk Factors; Young Adult
20.
J Speech Lang Hear Res ; 59(4): 601-15, 2016 Aug 01.
Article in English | MEDLINE | ID: mdl-27537379

ABSTRACT

PURPOSE: The aim of this article is to examine the effects of visual image degradation on performance and gaze behavior in audiovisual and visual-only speech perception tasks. METHOD: We presented vowel-consonant-vowel utterances visually filtered at a range of spatial frequencies in visual-only, audiovisual congruent, and audiovisual incongruent conditions (Experiment 1; N = 66). In Experiment 2 (N = 20), participants performed a visual-only speech perception task, and in Experiment 3 (N = 20) an audiovisual task, while having their gaze behavior monitored using eye-tracking equipment. RESULTS: In the visual-only condition, increasing image resolution led to monotonic increases in performance, and proficient speechreaders were more affected by the removal of high spatial frequency information than were poor speechreaders. The McGurk effect also increased with increasing visual resolution, although it was less affected by the removal of high-frequency information. Observers tended to fixate on the mouth more in visual-only perception, but gaze toward the mouth did not correlate with the accuracy of silent speechreading or the magnitude of the McGurk effect. CONCLUSIONS: The results suggest that individual differences in silent speechreading and the McGurk effect are not related. This conclusion is supported by the differential influences of high-resolution visual information on the 2 tasks and by differences in the patterns of gaze.
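As a sketch of the kind of spatial filtering described in the METHOD, the snippet below low-pass filters a stand-in video frame with Gaussian blurs of increasing width, removing progressively more high spatial frequency detail. The cutoffs and the image are placeholders; the study's exact filtering procedure is not specified here.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

frame = np.random.default_rng(4).random((240, 320))  # stand-in face frame

# Larger sigma removes more high-spatial-frequency detail (coarser image).
for sigma in (1, 2, 4, 8):
    low_passed = gaussian_filter(frame, sigma=sigma)
    print(sigma, round(float(low_passed.std()), 4))  # contrast drops with sigma
```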


Subject(s)
Eye Movements; Lipreading; Speech Perception; Visual Perception; Analysis of Variance; Eye Movement Measurements; Eye Movements/physiology; Female; Humans; Male; Young Adult