Results 1 - 20 of 26
1.
Sci Rep; 13(1): 16621, 2023 Oct 03.
Article in English | MEDLINE | ID: mdl-37789043

ABSTRACT

Speech understanding, while effortless in quiet conditions, is challenging in noisy environments. Previous studies have revealed that a feasible approach to supplementing speech-in-noise (SiN) perception is to present speech-derived signals as haptic input. In the current study, we investigated whether the presentation of a vibrotactile signal derived from the speech temporal envelope can improve SiN intelligibility in a multi-talker background for untrained, normal-hearing listeners. We also determined whether vibrotactile sensitivity, evaluated using vibrotactile detection thresholds, modulates the extent of audio-tactile SiN improvement. In practice, we measured participants' speech recognition in multi-talker noise without (audio-only) and with (audio-tactile) concurrent vibrotactile stimulation delivered in three schemes: to the left palm, to the right palm, or to both. Averaged across the three stimulation delivery schemes, the vibrotactile stimulation led to a significant improvement of 0.41 dB in SiN recognition compared to the audio-only condition. Notably, there were no significant differences between the improvements obtained with the three delivery schemes. In addition, the audio-tactile SiN benefit was significantly predicted by participants' vibrotactile threshold levels and unimodal (audio-only) SiN performance. The extent of the improvement afforded by speech-envelope-derived vibrotactile stimulation was in line with previously reported vibrotactile enhancements of SiN perception in untrained listeners with no known hearing impairment. Overall, these results highlight the potential of concurrent vibrotactile stimulation to improve SiN recognition, especially in individuals with poor SiN perception abilities, and tentatively more so with increasing tactile sensitivity. Moreover, they lend support to multimodal accounts of speech perception and to research on tactile speech aid devices.
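To make the signal chain behind this approach concrete, the sketch below derives a low-frequency speech temporal envelope and uses it to amplitude-modulate a tactile carrier. It is a minimal illustration rather than the authors' pipeline; the cutoff frequency, carrier frequency, and function name are assumptions introduced here.

```python
# Minimal sketch: turn a speech waveform into an envelope-driven vibrotactile signal.
# Cutoff and carrier values are illustrative assumptions, not the study's parameters.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def envelope_to_tactile(speech, fs, env_cutoff_hz=8.0, carrier_hz=200.0):
    """Return a tactile carrier amplitude-modulated by the speech temporal envelope."""
    env = np.abs(hilbert(speech))                      # broadband temporal envelope
    b, a = butter(4, env_cutoff_hz / (fs / 2), "low")  # keep slow (syllabic-rate) fluctuations
    env = filtfilt(b, a, env)
    env = np.clip(env, 0, None) / (env.max() + 1e-12)  # normalize to [0, 1]
    t = np.arange(len(speech)) / fs
    return env * np.sin(2 * np.pi * carrier_hz * t)    # drive signal for a vibrotactile actuator

# Example with 1 s of noise standing in for a speech recording sampled at 16 kHz.
fs = 16000
drive = envelope_to_tactile(np.random.randn(fs), fs)
```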


Subject(s)
Hearing Loss, Speech Perception, Humans, Speech/physiology, Speech Perception/physiology, Haptic Technology, Hearing/physiology, Speech Intelligibility
3.
Front Psychol; 14: 1027872, 2023.
Article in English | MEDLINE | ID: mdl-36993883

ABSTRACT

Snakes and primates have coexisted for thousands of years. Given that snakes were among the first major predators of primates, natural selection may have favored primates whose snake detection abilities allowed for better defensive behavior. In line with this idea, we recently provided evidence for an inborn mechanism anchored in the human brain that promptly detects snakes, based on their characteristic visual features. Which visual features critically drive human neural responses to snakes remains an unresolved issue. While their prototypical curvilinear coiled shape seems of major importance, it remains possible that the brain responds to a blend of other visual features. Coloration, in particular, might play an important role, as it has been shown to act as a powerful aposematic signal. Here, we specifically examine whether color impacts snake-specific responses in the naive, immature infant brain. For this purpose, we recorded the brain activity of 6- to 11-month-old infants using electroencephalography (EEG) while they watched sequences of color or grayscale animal pictures flickering at a periodic rate. We showed that glancing at colored and grayscale snakes generated specific neural responses in the occipital region of the brain. Color did not exert a major influence on the infant brain response but strongly increased the attention devoted to the visual streams. Remarkably, age predicted the strength of the snake-specific response. These results highlight that the expression of the brain-anchored reaction to coiled snakes bears on the refinement of the visual system.
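In periodic visual stimulation designs of this kind, stimulus-locked responses are commonly quantified in the EEG amplitude spectrum as the signal-to-noise ratio at the tagged frequency relative to neighboring bins. The snippet below is a minimal sketch of that step on simulated data; the sampling rate, bin counts, and tagging frequency are illustrative assumptions, not the study's parameters.

```python
# Minimal sketch: SNR of an EEG amplitude spectrum at a tagged frequency,
# relative to neighboring frequency bins (simulated data, illustrative parameters).
import numpy as np

def snr_at_frequency(signal, fs, f_target, n_neighbors=10, n_skip=1):
    amp = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - f_target)))
    lo = amp[idx - n_skip - n_neighbors: idx - n_skip]          # bins below the target
    hi = amp[idx + n_skip + 1: idx + n_skip + 1 + n_neighbors]  # bins above the target
    return amp[idx] / np.mean(np.concatenate([lo, hi]))

# Example: a 1.2 Hz periodic response embedded in noise, 60 s at 250 Hz.
fs, dur, f_tag = 250, 60, 1.2
t = np.arange(fs * dur) / fs
eeg = np.sin(2 * np.pi * f_tag * t) + np.random.randn(len(t))
print(snr_at_frequency(eeg, fs, f_tag))
```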

4.
Neuroimage; 265: 119770, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36462732

ABSTRACT

Children have more difficulty perceiving speech in noise than adults. Whether this difficulty relates to an immature processing of prosodic or linguistic elements of the attended speech is still unclear. To address the impact of noise on linguistic processing per se, we assessed how babble noise impacts the cortical tracking of intelligible speech devoid of prosody in school-aged children and adults. Twenty adults and twenty children (7-9 years) listened to synthesized French monosyllabic words presented at 2.5 Hz, either randomly or in 4-word hierarchical structures wherein 2 words formed a phrase at 1.25 Hz, and 2 phrases formed a sentence at 0.625 Hz, with or without babble noise. Neuromagnetic responses to words, phrases and sentences were identified and source-localized. Children and adults displayed significant cortical tracking of words in all conditions, and of phrases and sentences only when words formed meaningful sentences. In children compared with adults, the cortical tracking was lower for all linguistic units in conditions without noise. In the presence of noise, the cortical tracking was similarly reduced for sentence units in both groups, but remained stable for phrase units. Critically, when there was noise, adults increased the cortical tracking of monosyllabic words in the inferior frontal gyri and supratemporal auditory cortices but children did not. This study demonstrates that the difficulties of school-aged children in understanding speech in a multi-talker background might be partly due to an immature tracking of lexical but not supra-lexical linguistic units.
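Tracking of such hierarchical linguistic units is typically identified as phase-locked spectral peaks at the word (2.5 Hz), phrase (1.25 Hz), and sentence (0.625 Hz) rates. The sketch below computes inter-trial phase coherence at those three rates on simulated trials; it illustrates the general principle under assumed trial dimensions and is not the authors' source-level analysis.

```python
# Minimal sketch: inter-trial phase coherence (ITC) at the word, phrase, and sentence rates.
# Trial dimensions and data are simulated; this is not the study's source-level pipeline.
import numpy as np

def itc_at_frequencies(trials, fs, freqs):
    """ITC of trials (n_trials x n_samples) at the requested frequencies (Hz)."""
    spectrum = np.fft.rfft(trials, axis=1)
    bin_freqs = np.fft.rfftfreq(trials.shape[1], d=1.0 / fs)
    itc = {}
    for f in freqs:
        idx = int(np.argmin(np.abs(bin_freqs - f)))
        phasors = spectrum[:, idx] / np.abs(spectrum[:, idx])  # unit-length phase vectors
        itc[f] = float(np.abs(phasors.mean()))                 # 1 = perfectly phase-locked
    return itc

# Example: 30 trials of 9.6 s (24 words at 2.5 Hz) with a word-rate component plus noise.
fs, n_trials, dur = 1000, 30, 9.6
t = np.arange(int(fs * dur)) / fs
trials = np.sin(2 * np.pi * 2.5 * t) + np.random.randn(n_trials, len(t))
print(itc_at_frequencies(trials, fs, freqs=[2.5, 1.25, 0.625]))
```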


Subject(s)
Speech Perception, Speech, Adult, Humans, Child, Speech Perception/physiology, Auditory Perception, Noise, Language
5.
Dev Cogn Neurosci; 59: 101181, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36549148

ABSTRACT

Humans' extraordinary ability to understand speech in noise relies on multiple processes that develop with age. Using magnetoencephalography (MEG), we characterize the underlying neuromaturational basis by quantifying how cortical oscillations in 144 participants (aged 5-27 years) track phrasal and syllabic structures in connected speech mixed with different types of noise. While the extraction of prosodic cues from clear speech was stable during development, its maintenance in a multi-talker background matured rapidly up to age 9 and was associated with speech comprehension. Furthermore, while the extraction of subtler information provided by syllables matured at age 9, its maintenance in noisy backgrounds progressively matured until adulthood. Altogether, these results highlight distinct behaviorally relevant maturational trajectories for the neuronal signatures of speech perception. In accordance with grain-size proposals, neuromaturational milestones are reached increasingly late for linguistic units of decreasing size, with further delays incurred by noise.


Subject(s)
Speech Perception, Speech, Humans, Adult, Child, Speech/physiology, Noise, Magnetoencephalography, Linguistics, Speech Perception/physiology
6.
Clin Infect Dis; 76(6): 1022-1029, 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36358021

ABSTRACT

BACKGROUND: This prospective study characterizes the structural and metabolic cerebral correlates of cognitive impairments found in a preclinical setting that considers the lifestyle of young European men exposed to human immunodeficiency virus (HIV), including recreational drug use. METHODS: Simultaneous structural brain magnetic resonance imaging (MRI) and [18F]-fluorodeoxyglucose positron emission tomography (FDG-PET) were acquired on a hybrid PET-MRI system in 23 asymptomatic young men who have sex with men and are living with HIV (HIVMSM; mean age, 33.6 years [range, 23-60 years]; normal CD4+ cell count, undetectable viral load). Neuroimaging data were compared with those of 26 young seronegative men under HIV preexposure prophylaxis (PrEPMSM), closely matched for age and lifestyle, and of 23 matched young seronegative men (controls). A comprehensive neuropsychological assessment was also administered to the HIVMSM and PrEPMSM participants. RESULTS: HIVMSM showed lower performance in executive, attentional, and working memory functions than PrEPMSM. No structural or metabolic differences were found between these 2 groups. Compared to controls, HIVMSM and PrEPMSM exhibited a common hypometabolism in the prefrontal cortex that correlated with the level of recreational drug use. No structural brain abnormality was found. CONCLUSIONS: Abnormalities of brain metabolism in our population of young HIVMSM relate mainly to recreational drug use rather than to HIV per se. A complex interplay between recreational drugs and HIV might nevertheless be involved in the cognitive impairments observed in this population.


Subject(s)
Cognitive Dysfunction, HIV Infections, Illicit Drugs, Male, Humans, Adult, HIV, Illicit Drugs/adverse effects, Illicit Drugs/metabolism, Prospective Studies, Cognition, Brain/diagnostic imaging, Brain/pathology, Cognitive Dysfunction/pathology, Fluorodeoxyglucose F18/metabolism, Magnetic Resonance Imaging, Positron-Emission Tomography, HIV Infections/pathology, Neuropsychological Tests
7.
Neuroimage; 253: 119061, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35259526

ABSTRACT

Dyslexia is a frequent developmental disorder in which reading acquisition is delayed and which is usually associated with difficulties understanding speech in noise. At the neuronal level, children with dyslexia have been reported to display abnormal cortical tracking of speech (CTS) at the phrasal rate. Here, we aimed to determine whether abnormal tracking relates to reduced reading experience, and whether it is modulated by the severity of dyslexia or the presence of acoustic noise. We included 26 school-age children with dyslexia, 26 age-matched controls, and 26 reading-level-matched controls. All were native French speakers. Children's brain activity was recorded with magnetoencephalography while they listened to continuous speech in noiseless and multiple noise conditions. CTS values were compared between groups, conditions, and hemispheres, and also within groups, between children with mild and severe dyslexia. Syllabic CTS was significantly reduced in the right superior temporal gyrus in children with dyslexia compared with controls matched for age but not for reading level. Severe dyslexia was characterized by lower rapid automatized naming (RAN) abilities compared with mild dyslexia, and phrasal CTS lateralized to the right hemisphere in children with mild dyslexia and all control groups but not in children with severe dyslexia. Finally, an alteration in phrasal CTS was uncovered in children with dyslexia compared with age-matched controls in babble noise conditions but not in less challenging listening conditions (non-speech noise or noiseless conditions); no such effect was seen in comparison with reading-level-matched controls. Overall, our results confirmed that the neuronal basis of speech perception is altered in noiseless and babble noise conditions in dyslexia compared with age-matched peers. However, the absence of alteration in comparison with reading-level-matched controls demonstrates that such alterations are associated with reduced reading level, suggesting that they are driven by reduced reading experience rather than being a cause of dyslexia. Finally, our finding of altered hemispheric lateralization of phrasal CTS in relation to altered RAN abilities in severe dyslexia is in line with a temporal sampling deficit for speech at the phrasal rate in dyslexia.


Subject(s)
Dyslexia, Speech Perception, Child, Humans, Magnetoencephalography, Noise, Phonetics, Speech/physiology, Speech Perception/physiology
8.
Infancy; 27(3): 462-478, 2022 May.
Article in English | MEDLINE | ID: mdl-34854536

ABSTRACT

Infants' ability to detect statistical regularities between visual objects has been demonstrated in previous studies (e.g., Kirkham et al., Cognition, 83, 2002, B35). The extent to which infants extract and learn the actual values of the transitional probabilities (TPs) between these objects nevertheless remains an open question. In three experiments providing identical learning conditions but contrasting different types of sequences at test, we examined 8-month-old infants' ability to discriminate between familiar sequences involving high or low values of TPs and new sequences that involved null TPs. Results showed that infants discriminate between these three types of sequences, supporting the existence of a statistical learning mechanism by which infants extract fine-grained statistical information from a stream of visual stimuli. Interestingly, the expression of this statistical knowledge varied between experiments and specifically depended on the nature of the first two test trials. We argue that the predictability of this early test arrangement, namely whether the first two test items were predictable or unexpected based on the habituation phase, determined infants' looking behaviors.
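For reference, a transitional probability here is simply the conditional probability of the next object given the current one. A minimal sketch of how TPs can be computed from a stimulus stream is shown below; the example sequence is invented for illustration.

```python
# Minimal sketch: transitional probabilities P(next | current) in a stimulus stream.
# The example stream is invented for illustration only.
from collections import Counter

def transitional_probabilities(sequence):
    pair_counts = Counter(zip(sequence, sequence[1:]))
    first_counts = Counter(sequence[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# A->B is deterministic (TP = 1.0); B is followed by C or D equally often (TP = 0.5 each).
stream = list("ABCABDABCABD")
for pair, tp in sorted(transitional_probabilities(stream).items()):
    print(pair, round(tp, 2))
```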


Subject(s)
Cognition, Spatial Learning, Humans, Infant, Infant Behavior, Knowledge, Plant Extracts
9.
J Exp Psychol Gen; 150(10): 2137-2157, 2021 Oct.
Article in English | MEDLINE | ID: mdl-34138602

ABSTRACT

Weak integration of speech sounds with the corresponding mouth movements likely contributes to the language acquisition difficulties that frequently characterize young autistic children. However, the existing empirical evidence either relies on complex verbal instructions or merely focuses on preferential gaze toward in-synch videos. The former method is clearly unsuited to young, minimally verbal, or nonverbal autistic children, while the latter has several biases that make the data difficult to interpret. We designed a Reinforced Preferential Gaze paradigm that makes it possible to test multimodal integration in young, nonverbal autistic children and that overcomes several of the methodological challenges faced by previous studies. We show that autistic children have difficulties in temporally binding the speech signal with the corresponding articulatory gestures. A condition with structurally similar nonsocial video stimuli suggests that atypical multimodal integration in autism is not limited to speech stimuli. (PsycInfo Database Record (c) 2021 APA, all rights reserved).


Subject(s)
Autistic Disorder, Child, Gestures, Humans, Language Development, Speech
10.
PLoS Biol; 18(8): e3000840, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32845876

ABSTRACT

Humans' propensity to acquire literacy relates to several factors, including the ability to understand speech in noise (SiN). Still, the nature of the relation between reading and SiN perception abilities remains poorly understood. Here, we dissect the interplay between (1) reading abilities, (2) classical behavioral predictors of reading (phonological awareness, phonological memory, and rapid automatized naming), and (3) electrophysiological markers of SiN perception in 99 elementary school children (26 with dyslexia). We demonstrate that, in typical readers, cortical representation of the phrasal content of SiN relates to the degree of development of the lexical (but not sublexical) reading strategy. In contrast, classical behavioral predictors of reading abilities and the ability to benefit from visual speech to represent the syllabic content of SiN account for global reading performance (i.e., speed and accuracy of lexical and sublexical reading). In individuals with dyslexia, we found preserved integration of visual speech information to optimize processing of syntactic information but not to sustain acoustic/phonemic processing. Finally, within children with dyslexia, measures of cortical representation of the phrasal content of SiN were negatively related to reading speed and positively related to the compromise between reading precision and reading speed, potentially owing to compensatory attentional mechanisms. These results clarify the nature of the relation between SiN perception and reading abilities in typical child readers and children with dyslexia and identify novel electrophysiological markers of emergent literacy.
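Relations of this kind between neural markers and reading measures are usually assessed with linear models. Purely as an illustration of that step, with simulated data and invented variable names, a least-squares fit is sketched below.

```python
# Illustration only: relate a simulated neural tracking measure to a simulated reading score.
import numpy as np

rng = np.random.default_rng(0)
n_children = 99
tracking = rng.normal(size=n_children)                   # e.g., phrasal SiN tracking (simulated)
reading = 0.4 * tracking + rng.normal(size=n_children)   # simulated reading score

X = np.column_stack([np.ones(n_children), tracking])     # intercept + predictor
beta, *_ = np.linalg.lstsq(X, reading, rcond=None)
print("intercept, slope:", beta)
```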


Subject(s)
Cerebral Cortex/physiology, Noise, Reading, Speech/physiology, Behavior, Child, Dyslexia/physiopathology, Humans, Linear Models, Neuroimaging, Phonetics
11.
J Cogn Neurosci; 32(5): 877-888, 2020 May.
Article in English | MEDLINE | ID: mdl-31933439

ABSTRACT

Discrimination of words from nonspeech sounds is essential in communication. Still, how selective attention influences this early step of speech processing remains elusive. To answer this question, brain activity was recorded with magnetoencephalography in 12 healthy adults while they listened to two sequences of auditory stimuli presented at 2.17 Hz, consisting of successions of one randomized word (tagging frequency = 0.54 Hz) and three acoustically matched nonverbal stimuli. Participants were instructed to focus their attention on the occurrence of a predefined word in the verbal attention condition and on a nonverbal stimulus in the nonverbal attention condition. Steady-state neuromagnetic responses were identified with spectral analysis at sensor and source levels. Significant sensor responses peaked at 0.54 and 2.17 Hz in both conditions. Sources at 0.54 Hz were reconstructed in the supratemporal auditory cortex, left superior temporal gyrus (STG), left middle temporal gyrus, and left inferior frontal gyrus. Sources at 2.17 Hz were reconstructed in the supratemporal auditory cortex and STG. Crucially, source strength in the left STG at 0.54 Hz was significantly higher in the verbal attention condition than in the nonverbal attention condition. This study demonstrates speech-sensitive responses in primary auditory and speech-related neocortical areas. Critically, it highlights that, during word discrimination, top-down attention modulates activity within the left STG. This area therefore appears to play a crucial role in selective verbal attentional processes during this early step of speech processing.


Subject(s)
Attention/physiology, Discrimination, Psychological/physiology, Evoked Potentials/physiology, Prefrontal Cortex/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Adult, Auditory Cortex/physiology, Evoked Potentials, Auditory/physiology, Female, Humans, Magnetoencephalography, Male, Psycholinguistics, Random Allocation, Young Adult
12.
Neuroimage; 184: 201-213, 2019 Jan 01.
Article in English | MEDLINE | ID: mdl-30205208

ABSTRACT

During connected speech listening, brain activity tracks speech rhythmicity at delta (∼0.5 Hz) and theta (4-8 Hz) frequencies. Here, we compared the potential of magnetoencephalography (MEG) and high-density electroencephalography (EEG) to uncover such speech brain tracking. Ten healthy right-handed adults listened to two different 5-min audio recordings, either without noise or mixed with cocktail-party noise of equal loudness. Their brain activity was simultaneously recorded with MEG and EEG. We quantified speech brain tracking channel-by-channel using coherence, and with all channels at once by speech temporal envelope reconstruction accuracy. In both conditions, speech brain tracking was significant at delta and theta frequencies and peaked in the temporal regions with both modalities (MEG and EEG). However, in the absence of noise, speech brain tracking estimated from MEG data was significantly higher than that obtained from EEG. Furthermore, to uncover significant speech brain tracking, recordings needed to be ∼3 times longer in EEG than in MEG, depending on the frequency considered (delta or theta) and the estimation method. In the presence of noise, both EEG and MEG recordings replicated the previous finding that speech brain tracking at delta frequencies is stronger with attended speech (i.e., the sound subjects are attending to) than with the global sound (i.e., the attended speech and the noise combined). Other previously reported MEG findings were replicated based on MEG but not EEG recordings: 1) speech brain tracking at theta frequencies is stronger with attended speech than with the global sound, 2) speech brain tracking at delta frequencies is stronger in noiseless than in noisy conditions, and 3) when noise is added, speech brain tracking at delta frequencies dampens less in the left hemisphere than in the right hemisphere. Finally, sources of speech brain tracking reconstructed from EEG data were systematically deeper and more posterior than those derived from MEG. The present study demonstrates that speech brain tracking is better seen with MEG than with EEG. Quantitatively, EEG recordings need to be ∼3 times longer than MEG recordings to uncover significant speech brain tracking. As a consequence, MEG appears better suited than EEG to pinpoint subtle effects related to speech brain tracking in a given recording time.
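The channel-by-channel coherence measure mentioned here quantifies, frequency by frequency, how consistently each sensor's signal covaries with the speech temporal envelope. Below is a minimal sketch using Welch-based magnitude-squared coherence on simulated data; the window length, band limits, and sampling rate are illustrative assumptions rather than the study's settings.

```python
# Minimal sketch: mean magnitude-squared coherence between a speech envelope and each
# sensor channel within a delta-band window (simulated data, illustrative parameters).
import numpy as np
from scipy.signal import coherence

def envelope_channel_coherence(envelope, sensors, fs, band, nperseg=None):
    if nperseg is None:
        nperseg = int(10 * fs)            # ~10 s windows for sub-hertz resolution
    values = []
    for chan in sensors:                  # sensors: channels x samples
        f, cxy = coherence(envelope, chan, fs=fs, nperseg=nperseg)
        mask = (f >= band[0]) & (f <= band[1])
        values.append(cxy[mask].mean())
    return np.array(values)

# Example: 5 min of simulated data, 3 channels partly driven by the envelope, delta band.
fs, dur = 200, 300
env = np.random.randn(fs * dur)
sensors = 0.3 * env + np.random.randn(3, fs * dur)
print(envelope_channel_coherence(env, sensors, fs, band=(0.2, 1.5)))
```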


Subject(s)
Auditory Cortex/physiology, Electroencephalography, Magnetoencephalography, Speech Acoustics, Acoustic Stimulation, Adult, Brain Mapping/methods, Delta Rhythm, Female, Humans, Male, Noise, Theta Rhythm, Young Adult
13.
J Exp Psychol Learn Mem Cogn; 45(8): 1387-1397, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30284869

ABSTRACT

An ongoing debate in the literature on language acquisition is whether preschool children process reference in an egocentric way or whether they spontaneously and by-default take their partner's perspective into account. The reported study implements a computerized referential task with a controlled trial presentation and simple verbal instructions. Contrary to the predictions of the partner-specific view, entrained referential precedents give rise to faster processing for 3- and 5-year-old children, independently of whether the conversational partner is the same as in the lexical entrainment phase or not. Additionally, both age groups display a processing preference for the interaction with the same partner, be it for new or previously used referential descriptions. These results suggest that preschool children may adapt to their conversational partner; however, partner-specificity is encoded as low-level auditory-phonological priming rather than through inferences about a partner's perspective. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subject(s)
Egocentrism, Language Development, Age Factors, Child, Preschool, Female, Humans, Interpersonal Relations, Male, Phonetics, Semantics, Speech Perception
14.
Q J Exp Psychol (Hove); 72(6): 1379-1386, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29991319

ABSTRACT

In a partial reinforcement schedule where a cue repeatedly predicts the occurrence of a target in consecutive trials, reaction times to the target tend to decrease in a monotonic fashion, while participants' expectancies for the target decrease at the same time. This dissociation between reaction times and expectancies-the so-called Perruchet effect-challenges the propositional view of learning, which posits that human conditioned responses result from conscious inferences about the relationships between events. However, whether the reaction time pattern reflects the strength of a putative cue-target link, or only non-associative processes, such as motor priming, remains unclear. To address this issue, we implemented the Perruchet procedure in a two-choice reaction time task and compared reaction time patterns in an Experimental condition, in which a tone systematically preceded a visual target, and in a Control condition, in which the onset of the two stimuli were uncoupled. Participants' expectancies regarding the target were recorded separately in an initial block. Reaction times decreased with the succession of identical trials in both conditions, reflecting the impact of motor priming. Importantly, reaction time slopes were steeper in the Experimental than in the Control condition, indicating an additional influence of the associative strength between the two stimuli. Interestingly, slopes were less steep for participants who showed the gambler's fallacy in the initial block. In sum, our results suggest the mutual influences of motor priming, associative strength, and expectancies on performance. They are in line with a dual-process model of learning involving both a propositional reasoning process and an automatic link-formation mechanism.


Subject(s)
Anticipation, Psychological/physiology, Association Learning/physiology, Choice Behavior/physiology, Psychomotor Performance/physiology, Reaction Time/physiology, Adult, Female, Humans, Male, Young Adult
15.
Dev Sci; 20(4), 2017 Jul.
Article in English | MEDLINE | ID: mdl-26919798

ABSTRACT

Extracting the statistical regularities present in the environment is a central learning mechanism in infancy. For instance, infants are able to learn the associations between simultaneously or successively presented visual objects (Fiser & Aslin; Kirkham, Slemmer & Johnson, 2002). The present study extends these results by investigating whether infants can learn the association between a target location and the context in which it is presented. To this aim, we used a visual associative learning procedure inspired by the contextual cuing paradigm with infants from 8 to 12 months of age. In two experiments, in which we varied the complexity of the stimuli, we first habituated infants to several scenes where the location of a target (a cartoon character) was consistently associated with a context, namely a specific configuration of geometrical shapes. We then examined whether infants learned the covariation between the target location and the context by measuring looking times at scenes that either respected or violated the association. In both experiments, results showed that infants learned the target-context associations, as they looked longer at the familiar scenes than at the novel ones. In particular, infants selected clusters of co-occurring contextual shapes and learned the covariation between the target location and this subset. These results support the existence of a powerful and versatile statistical learning mechanism that may influence the orientation of infants' visual attention toward areas of interest in their environment during early developmental stages. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=9Hm1unyLBn0.


Subject(s)
Association Learning, Learning, Visual Perception/physiology, Attention, Cues, Humans, Infant
16.
Cogn Emot; 30(6): 1137-48, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26197360

ABSTRACT

Although the influence of the emotional content of stimuli on attention has typically been considered to occur within a trial, recent studies have revealed that the presentation of such stimuli also involves a slower component. The aim of the present study was to investigate the fast and slow effects of negative (Exp. 1) and taboo (Exp. 2) spoken words. For this purpose, we used an auditory variant of the emotional Stroop paradigm in which each emotional word was followed by a sequence of neutral words. Replicating results from our previous study, we observed slow but no fast effects of negative and taboo words, which we interpreted as reflecting difficulty disengaging attention from their emotional dimension. Interestingly, while the presentation of a negative word only delayed the processing of the immediately subsequent neutral word, the slow effects of taboo words were long-lasting. Nevertheless, such attentional effects were observed only when the emotional words were presented in the first block of trials, suggesting that once participants develop strategies to perform the task, the attention-grabbing effects of emotional words disappear. Hence, far from being automatic, the occurrence of these effects appears to depend on participants' attentional set.


Subject(s)
Attention/physiology, Emotions/physiology, Reaction Time/physiology, Speech/physiology, Stroop Test/statistics & numerical data, Taboo/psychology, Adolescent, Adult, Female, Humans, Male, Middle Aged, Young Adult
17.
Front Psychol; 6: 1806, 2015.
Article in English | MEDLINE | ID: mdl-26648884

ABSTRACT

The statistical regularities of a sequence of visual shapes can be learned incidentally. Arciuli et al. (2014) recently argued that intentional instructions only improve learning at slow presentation rates because they favor the use of explicit strategies. The aims of the present study were (1) to test this assumption directly by investigating how instructions (incidental vs. intentional) and presentation rate (fast vs. slow) affect the acquisition of knowledge and (2) to examine how these factors influence the conscious vs. unconscious nature of the knowledge acquired. To this end, we exposed participants to four triplets of shapes, presented sequentially in a pseudo-random order, and assessed their degree of learning in a subsequent completion task that integrated confidence judgments. Supporting Arciuli et al.'s (2014) claim, participants' performance benefited from intentional instructions only at slow presentation rates. Moreover, informing participants beforehand about the existence of statistical regularities increased their explicit knowledge of the sequences, an effect that was not modulated by presentation speed. These results indicate that, although visual statistical learning can take place incidentally and, to some extent, outside conscious awareness, factors such as presentation rate and prior knowledge can boost learning of these regularities, presumably by favoring the acquisition of explicit knowledge.

18.
Exp Psychol; 62(5): 346-51, 2015.
Article in English | MEDLINE | ID: mdl-26592534

ABSTRACT

The Rapid Serial Visual Presentation procedure is a method widely used in visual perception research. In this paper, we propose an adaptation of this method that can be used with auditory material and enables the assessment of statistical learning in speech segmentation. Adult participants were exposed to an artificial speech stream composed of statistically defined trisyllabic nonsense words. They were subsequently asked to detect a target syllable within short Rapid Serial Auditory Presentation (RSAP) speech streams. Results showed that reaction times varied as a function of the statistical predictability of the syllable: the second and third syllables of each word were responded to faster than the first syllables. This result suggests that the RSAP procedure provides a reliable and sensitive indirect measure of auditory statistical learning.


Subject(s)
Learning/physiology, Speech Perception/physiology, Speech/physiology, Adult, Female, Humans, Language, Male, Young Adult
19.
Behav Res Methods; 46(4): 1098-107, 2014 Dec.
Article in English | MEDLINE | ID: mdl-24519495

ABSTRACT

Through this study, we aimed to validate a new tool for inducing moods in experimental contexts. Five audio stories with sad, joyful, frightening, erotic, or neutral content were presented to 60 participants (33 women, 27 men) in a within-subjects design, each for about 10 min. Participants were asked (1) to report their moods before and after listening to each story, (2) to assess the emotional content of the excerpts on various emotional scales, and (3) to rate their level of projection into the stories. The results confirmed our a priori emotional classification. The emotional stories were effective in inducing the desired mood, with no difference found between male and female participants. These stories therefore constitute a valuable corpus for inducing moods in French-speaking participants, and they are made freely available for use in scientific research.


Subject(s)
Affect/classification, Behavioral Research/instrumentation, Databases, Factual, Emotions/classification, Narration, Adult, Analysis of Variance, Fear/psychology, Female, Happiness, Humans, Language, Male, Motivation, Reference Values, Sex Factors, Surveys and Questionnaires
20.
Front Psychol; 5: 1541, 2014.
Article in English | MEDLINE | ID: mdl-25620943

ABSTRACT

Visual statistical learning (VSL) is the ability to extract the joint and conditional probabilities of shapes co-occurring during passive viewing of complex visual configurations. Evidence indicates that even infants are sensitive to these regularities (e.g., Kirkham et al., 2002). However, there is continuing debate as to whether VSL is accompanied by conscious awareness of the statistical regularities between sequence elements. Bertels et al. (2012) addressed this question in young adults. Here, we adapted their paradigm to investigate VSL and conscious awareness in children. Using the same version of the paradigm, we also tested young adults so as to directly compare results from both age groups. Fifth graders and undergraduates were exposed to a stream of visual shapes arranged in triplets. Learning of these sequences was then assessed using both direct and indirect measures. In order to assess the extent to which learning occurred explicitly, we also measured confidence through subjective measures in the direct task (i.e., binary confidence judgments). Results revealed that both children and young adults learned the statistical regularities between shapes. In both age groups, participants who performed above chance in the completion task had conscious access to their knowledge. Nevertheless, although adults performed above chance even when they claimed to guess, there was no evidence of implicit knowledge in children. These results suggest that the role of implicit and explicit influences in VSL may follow a developmental trajectory.
