Results 1 - 20 of 23
1.
Eur J Neurosci ; 2024 Aug 27.
Article in English | MEDLINE | ID: mdl-39188179

ABSTRACT

While infants' sensitivity to visual speech cues and the benefit of these cues have been well established by behavioural studies, there is little evidence on the effect of visual speech cues on infants' neural processing of continuous auditory speech. In this study, we investigated whether visual speech cues, such as the movements of the lips, jaw, and larynx, facilitate infants' neural speech tracking. Ten-month-old Dutch-learning infants watched videos of a speaker reciting passages in infant-directed speech while electroencephalography (EEG) was recorded. In the videos, either the full face of the speaker was displayed or the speaker's mouth and jaw were masked with a block, obstructing the visual speech cues. To assess neural tracking, speech-brain coherence (SBC) was calculated, focusing particularly on the stress and syllabic rates (1-1.75 Hz and 2.5-3.5 Hz, respectively, in our stimuli). First, overall SBC was compared to surrogate data, and then differences in SBC between the two conditions were tested at the frequencies of interest. Our results indicated that infants show significant tracking at both stress and syllabic rates. However, no differences were identified between the two conditions, meaning that infants' neural tracking was not modulated further by the presence of visual speech cues. Furthermore, we demonstrated that infants' neural tracking of low-frequency information is related to their subsequent vocabulary development at 18 months. Overall, this study provides evidence that infants' neural tracking of speech is not necessarily impaired when visual speech cues are not fully visible and that neural tracking may be a potential mechanism in successful language acquisition.
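Speech-brain coherence of the kind described above is, in essence, magnitude-squared coherence between the speech amplitude envelope and the EEG, averaged within frequency bands of interest. A minimal sketch with synthetic signals and an illustrative sampling rate (this is not the authors' actual pipeline):

```python
import numpy as np
from scipy.signal import coherence

fs = 250                      # EEG sampling rate in Hz (illustrative)
rng = np.random.default_rng(0)

# Synthetic signals: a speech amplitude envelope and one EEG channel
# that partially tracks it.
envelope = rng.standard_normal(fs * 60)
eeg = 0.5 * envelope + rng.standard_normal(fs * 60)

# Magnitude-squared coherence between envelope and EEG (0 = none, 1 = perfect).
freqs, coh = coherence(envelope, eeg, fs=fs, nperseg=fs * 4)

# Average coherence within the stress (1-1.75 Hz) and syllable (2.5-3.5 Hz) bands.
stress_sbc = coh[(freqs >= 1.0) & (freqs <= 1.75)].mean()
syllable_sbc = coh[(freqs >= 2.5) & (freqs <= 3.5)].mean()
print(stress_sbc, syllable_sbc)
```

In practice, a surrogate comparison (as in the study) would contrast these values against coherence computed from mismatched envelope-EEG pairings.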

2.
Dev Sci ; 27(2): e13436, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37551932

ABSTRACT

The environment in which infants learn language is multimodal and rich with social cues. Yet the effects of such cues, such as eye contact, on early speech perception have not been closely examined. This study assessed the role of ostensive speech, signalled through the speaker's eye gaze direction, on infants' word segmentation abilities. A familiarisation-then-test paradigm was used while electroencephalography (EEG) was recorded. Ten-month-old Dutch-learning infants were familiarised with audio-visual stories in which a speaker recited four sentences with one repeated target word. The speaker addressed them either with direct or with averted gaze while speaking. In the test phase following each story, infants heard familiar and novel words presented via audio only. Infants' familiarity with the words was assessed using event-related potentials (ERPs). As predicted, infants showed a negative-going ERP familiarity effect to the isolated familiarised words relative to the novel words over the left-frontal region of interest during the test phase. While the word familiarity effect did not differ as a function of the speaker's gaze over the left-frontal region of interest, there was also a (not predicted) positive-going early ERP familiarity effect over right fronto-central and central electrodes in the direct gaze condition only. This study provides electrophysiological evidence that infants can segment words from audio-visual speech, regardless of the ostensiveness of the speaker's communication. However, the speaker's gaze direction seems to influence the processing of familiar words. RESEARCH HIGHLIGHTS: We examined 10-month-old infants' ERP word familiarity response using audio-visual stories, in which a speaker addressed infants with direct or averted gaze while speaking. Ten-month-old infants can segment and recognise familiar words from audio-visual speech, indicated by their negative-going ERP response to familiar, relative to novel, words. This negative-going ERP word familiarity effect was present for isolated words over left-frontal electrodes regardless of whether the speaker offered eye contact while speaking. An additional positivity in response to familiar words was observed for direct gaze only, over right fronto-central and central electrodes.


Subject(s)
Speech Perception , Speech , Infant , Humans , Speech/physiology , Ocular Fixation , Language , Evoked Potentials/physiology , Speech Perception/physiology
3.
Neuroimage ; 252: 119049, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35248707

ABSTRACT

Music is often described in the laboratory and in the classroom as a beneficial tool for memory encoding and retention, with a particularly strong effect when words are sung to familiar compared to unfamiliar melodies. However, the neural mechanisms underlying this memory benefit, especially for familiar music, are not well understood. The current study examined whether neural tracking of the slow syllable rhythms of speech and song is modulated by melody familiarity. Participants became familiar with twelve novel melodies over four days prior to MEG testing. Neural tracking of the same utterances spoken and sung revealed greater cerebro-acoustic phase coherence for sung compared to spoken utterances, but did not show an effect of melody familiarity when stimuli were grouped by their assigned (trained) familiarity. However, when participants' subjective ratings of perceived familiarity were used to group stimuli, a large effect of familiarity was observed. This effect was not specific to song, as it was observed in both sung and spoken utterances. Exploratory analyses revealed some in-session learning of unfamiliar and spoken utterances, with increased neural tracking for untrained stimuli by the end of the MEG testing session. Our results indicate that top-down factors like familiarity are strong modulators of neural tracking for music and language. Participants' neural tracking was related to their perception of familiarity, which was likely driven by a combination of effects from repeated listening, stimulus-specific melodic simplicity, and individual differences. Beyond simply the acoustic features of music, top-down factors built into the music listening experience, like repetition and familiarity, play a large role in the way we attend to and encode information presented in a musical context.


Subject(s)
Music , Singing , Auditory Perception , Humans , Recognition (Psychology) , Speech
4.
Infancy ; 25(5): 699-718, 2020 09.
Article in English | MEDLINE | ID: mdl-32794372

ABSTRACT

Infants exploit acoustic boundaries to perceptually organize phrases in speech. This prosodic parsing ability is well attested and is a cornerstone of the development of speech perception and grammar. However, infants also receive linguistic input in child songs. This study provides evidence that infants parse songs into meaningful phrasal units and replicates previous research for speech. Six-month-old Dutch infants (n = 80) were tested in the song or speech modality in the head-turn preference procedure. First, infants were familiarized with two versions of the same word sequence: one version represented a well-formed unit, and the other contained a phrase boundary halfway through. At test, infants were presented with two passages, each containing one version of the familiarized sequence. The results for speech replicated the previously observed preference for the passage containing the well-formed sequence, but only in a more fine-grained analysis. The preference for well-formed phrases was also observed in the song modality, indicating that infants recognize phrase structure in song. There were acoustic differences between the stimuli of the current and previous studies, suggesting that infants are flexible in their processing of boundary cues while also providing a possible explanation for differences in effect sizes.


Subject(s)
Child Development/physiology , Choice Behavior/physiology , Infant Behavior/physiology , Recognition (Psychology)/physiology , Singing , Speech Perception/physiology , Female , Humans , Infant , Male
5.
Proc Natl Acad Sci U S A ; 109(52): 21504-9, 2012 Dec 26.
Article in English | MEDLINE | ID: mdl-23236162

ABSTRACT

The human brain has the extraordinary capability to transform cluttered sensory input into distinct object representations. For example, it is able to rapidly and seemingly without effort detect object categories in complex natural scenes. Surprisingly, category tuning is not sufficient to achieve conscious recognition of objects. What neural process beyond category extraction might elevate neural representations to the level where objects are consciously perceived? Here we show that visible and invisible faces produce similar category-selective responses in the ventral visual cortex. The pattern of neural activity evoked by visible faces could be used to decode the presence of invisible faces and vice versa. However, only visible faces caused extensive response enhancements and changes in neural oscillatory synchronization, as well as increased functional connectivity between higher and lower visual areas. We conclude that conscious face perception is more tightly linked to neural processes of sustained information integration and binding than to processes accommodating face category tuning.


Subject(s)
Consciousness/physiology , Neurons/physiology , Visual Cortex/physiology , Visual Perception/physiology , Face , Female , Humans , Male
6.
Dev Cogn Neurosci ; 64: 101297, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37778275

ABSTRACT

Eye gaze is a powerful ostensive cue in infant-caregiver interactions, with demonstrable effects on language acquisition. While the link between gaze following and later vocabulary is well-established, the effects of eye gaze on other aspects of language, such as speech processing, are less clear. In this EEG study, we examined the effects of the speaker's eye gaze on ten-month-old infants' neural tracking of naturalistic audiovisual speech, a marker for successful speech processing. Infants watched videos of a speaker telling stories, addressing the infant with direct or averted eye gaze. We assessed infants' speech-brain coherence at stress (1-1.75 Hz) and syllable (2.5-3.5 Hz) rates, tested for differences in attention by comparing looking times and EEG theta power in the two conditions, and investigated whether neural tracking predicts later vocabulary. Our results showed that infants' brains tracked the speech rhythm both at the stress and syllable rates, and that infants' neural tracking at the syllable rate predicted later vocabulary. However, speech-brain coherence did not significantly differ between direct and averted gaze conditions and infants did not show greater attention to direct gaze. Overall, our results suggest significant neural tracking at ten months, related to vocabulary development, but not modulated by speaker's gaze.


Subject(s)
Ocular Fixation , Speech Perception , Infant , Humans , Speech , Language Development , Language , Brain
7.
Neurobiol Lang (Camb) ; 3(3): 495-514, 2022.
Article in English | MEDLINE | ID: mdl-37216063

ABSTRACT

During speech processing, neural activity in non-autistic adults and infants tracks the speech envelope. Recent research in adults indicates that this neural tracking relates to linguistic knowledge and may be reduced in autism. Such reduced tracking, if present already in infancy, could impede language development. In the current study, we focused on children with a family history of autism, who often show a delay in first language acquisition. We investigated whether differences in tracking of sung nursery rhymes during infancy relate to language development and autism symptoms in childhood. We assessed speech-brain coherence at either 10 or 14 months of age in a total of 22 infants with a high likelihood of autism due to family history and 19 infants without a family history of autism. We analyzed the relationship between speech-brain coherence in these infants and their vocabulary at 24 months as well as autism symptoms at 36 months. Our results showed significant speech-brain coherence in the 10- and 14-month-old infants. We found no evidence for a relationship between speech-brain coherence and later autism symptoms. Importantly, speech-brain coherence at the stressed-syllable rate (1-3 Hz) predicted later vocabulary. Follow-up analyses showed evidence for a relationship between tracking and vocabulary only in 10-month-olds but not in 14-month-olds and indicated possible differences between the likelihood groups. Thus, early tracking of sung nursery rhymes is related to language development in childhood.

8.
Front Psychol ; 12: 680882, 2021.
Article in English | MEDLINE | ID: mdl-34552527

ABSTRACT

Rhyme perception is an important predictor of future literacy. Assessing rhyme abilities, however, commonly requires children to make explicit rhyme judgements on single words. Here we explored whether infants already implicitly process rhymes in natural rhyming contexts (child songs) and whether this response correlates with later vocabulary size. In a passive-listening ERP study, 10.5-month-old Dutch infants were exposed to rhyming and non-rhyming child songs. Two types of rhyme effects were analysed: (1) ERPs elicited by the first rhyme occurring in each song (rhyme sensitivity) and (2) ERPs elicited by rhymes repeating after the first rhyme in each song (rhyme repetition). Only for the latter was a tentative negativity for rhymes found, from 0 to 200 ms after the onset of the rhyme word. This rhyme repetition effect correlated with productive vocabulary at 18 months, but not with any other vocabulary measure (perception at 10.5 or 18 months). While awaiting future replication, the study indicates precursors of phonological awareness already during infancy and with ecologically valid linguistic stimuli.

9.
Front Hum Neurosci ; 15: 629648, 2021.
Article in English | MEDLINE | ID: mdl-34163338

ABSTRACT

The nature of phonological representations has been extensively studied in phonology and psycholinguistics. While full specification is still the norm in psycholinguistic research, underspecified representations may better account for perceptual asymmetries. In this paper, we report on a mismatch negativity (MMN) study with Dutch listeners who took part in a passive oddball paradigm to investigate when the brain notices the difference between expected and observed vowels. In particular, we tested neural discrimination (indicating perceptual discrimination) of the tense mid vowel pairs /o/-/ø/ (place contrast), /e/-/ø/ (labiality or rounding contrast), and /e/-/o/ (place and labiality contrast). Our results show (a) a perceptual asymmetry for place in the /o/-/ø/ contrast, supporting underspecification of [CORONAL] and replicating earlier results for German, and (b) a perceptual asymmetry for labiality in the /e/-/ø/ contrast, which was not reported in the German study. A labial deviant [ø] (standard /e/) yielded a larger MMN than a deviant [e] (standard /ø/). No asymmetry was found for the two-feature contrast. This study partly replicates a similar MMN study on German vowels and partly presents new findings indicating cross-linguistic differences. Although the vowel inventories of Dutch and German are to a large extent comparable, their (morpho)phonological systems differ, which is reflected in processing.

10.
Neuroimage ; 52(4): 1633-44, 2010 Oct 01.
Article in English | MEDLINE | ID: mdl-20493954

ABSTRACT

In a recent fMRI study we showed that the left posterior middle temporal gyrus (LpMTG) subserves the retrieval of a word's lexical-syntactic properties from the mental lexicon (long-term memory), while the left posterior inferior frontal gyrus (LpIFG) is involved in unifying (on-line integration of) this information into a sentence structure (Snijders et al., 2009). In addition, the right IFG, right MTG, and the right striatum were involved in the unification process. Here we report results from a psychophysiological interactions (PPI) analysis in which we investigated the effective connectivity between LpIFG and LpMTG during unification, and how the right-hemisphere areas and the striatum are functionally connected to the unification network. LpIFG and LpMTG both showed enhanced connectivity during the unification process with a region slightly superior to our previously reported LpMTG. Right IFG better predicted right temporal activity when unification processes were more strongly engaged, just as LpIFG better predicted left temporal activity. Furthermore, the striatum showed enhanced coupling to LpIFG and LpMTG during unification. We conclude that bilateral inferior frontal and posterior temporal regions are functionally connected during sentence-level unification. Cortico-subcortical connectivity patterns suggest cooperation between inferior frontal and striatal regions in performing unification operations on lexical-syntactic representations retrieved from LpMTG.
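A PPI analysis of this kind regresses a target region's timecourse on a seed timecourse, a psychological variable, and their product; the interaction weight indexes task-modulated connectivity. A toy sketch with synthetic signals (the region labels, task coding, and coefficients are illustrative only, not the study's data or software):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200  # number of fMRI volumes (illustrative)

seed = rng.standard_normal(n)                 # seed-region timecourse (e.g., "LpIFG")
task = np.tile([1.0] * 10 + [-1.0] * 10, 10)  # psychological variable: condition blocks
ppi = seed * task                             # the psychophysiological interaction term

# Target region that couples to the seed more strongly during the task condition.
target = 0.8 * ppi + 0.3 * seed + rng.standard_normal(n)

# GLM with intercept, seed, task, and interaction regressors; a reliably
# nonzero interaction weight indicates task-modulated connectivity.
X = np.column_stack([np.ones(n), seed, task, ppi])
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
print(beta[3])
```

Real PPI pipelines additionally deconvolve the seed signal and convolve the interaction term with a haemodynamic response function; that step is omitted here for brevity.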


Subject(s)
Brain/physiology , Comprehension/physiology , Language , Magnetic Resonance Imaging , Memory/physiology , Nerve Net/physiology , Neural Pathways/physiology , Semantics , Adolescent , Adult , Female , Humans , Male , Young Adult
11.
Cereb Cortex ; 19(7): 1493-503, 2009 Jul.
Article in English | MEDLINE | ID: mdl-19001084

ABSTRACT

Sentence comprehension requires the retrieval of single word information from long-term memory, and the integration of this information into multiword representations. The current functional magnetic resonance imaging study explored the hypothesis that the left posterior temporal gyrus supports the retrieval of lexical-syntactic information, whereas left inferior frontal gyrus (LIFG) contributes to syntactic unification. Twenty-eight subjects read sentences and word sequences containing word-category (noun-verb) ambiguous words at critical positions. Regions contributing to the syntactic unification process should show enhanced activation for sentences compared to words, and only within sentences display a larger signal for ambiguous than unambiguous conditions. The posterior LIFG showed exactly this predicted pattern, confirming our hypothesis that LIFG contributes to syntactic unification. The left posterior middle temporal gyrus was activated more for ambiguous than unambiguous conditions (main effect over both sentences and word sequences), as predicted for regions subserving the retrieval of lexical-syntactic information from memory. We conclude that understanding language involves the dynamic interplay between left inferior frontal and left posterior temporal regions.


Subject(s)
Brain Mapping/methods , Cerebral Cortex/physiology , Comprehension/physiology , Evoked Potentials/physiology , Language , Magnetic Resonance Imaging/methods , Semantics , Adolescent , Adult , Female , Humans , Male , Young Adult
12.
Brain Sci ; 10(1)2020 Jan 09.
Article in English | MEDLINE | ID: mdl-31936586

ABSTRACT

Children's songs are omnipresent and highly attractive stimuli in infants' input. Previous work suggests that infants process linguistic-phonetic information from simplified sung melodies. The present study investigated whether infants learn words from ecologically valid children's songs. Testing 40 Dutch-learning 10-month-olds in a familiarization-then-test electroencephalography (EEG) paradigm, this study asked whether infants can segment repeated target words embedded in songs during familiarization and subsequently recognize those words in continuous speech in the test phase. To replicate previous speech work and compare segmentation across modalities, infants participated in both song and speech sessions. Results showed a positive event-related potential (ERP) familiarity effect to the final compared to the first target occurrences during both song and speech familiarization. No evidence was found for word recognition in the test phase following either song or speech. Comparisons across the stimuli of the present and a comparable previous study suggested that acoustic prominence and speech rate may have contributed to the polarity of the ERP familiarity effect and its absence in the test phase. Overall, the present study provides evidence that 10-month-old infants can segment words embedded in songs, and it raises questions about the acoustic and other factors that enable or hinder infant word segmentation from songs and speech.
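The ERP familiarity effects reported in these studies come down to averaging epochs per condition, baseline-correcting, and contrasting mean amplitude in a time window. A schematic sketch with synthetic single-channel data (the sampling rate, window, and effect direction are illustrative, not the study's parameters):

```python
import numpy as np

fs = 500                            # samples per second (illustrative)
t = np.arange(-0.1, 0.8, 1 / fs)   # epoch time axis: -100 ms to 800 ms
rng = np.random.default_rng(1)

def make_epochs(n_trials, effect):
    """Synthetic single-channel epochs: noise plus a deflection at 200-500 ms."""
    window = (t >= 0.2) & (t <= 0.5)
    epochs = rng.standard_normal((n_trials, t.size))
    epochs[:, window] += effect
    return epochs

familiar = make_epochs(40, effect=-1.0)  # negative-going familiarity response
novel = make_epochs(40, effect=0.0)

def erp(epochs):
    """Baseline-correct each trial on the pre-stimulus interval, then average."""
    baseline = epochs[:, t < 0].mean(axis=1, keepdims=True)
    return (epochs - baseline).mean(axis=0)

# Contrast mean amplitude in the analysis window: familiar minus novel.
win = (t >= 0.2) & (t <= 0.5)
effect_size = erp(familiar)[win].mean() - erp(novel)[win].mean()
print(effect_size)  # negative: a familiarity negativity
```

In the studies themselves, such condition differences are of course evaluated statistically across participants and electrodes rather than on a single synthetic channel.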

13.
Front Psychol ; 11: 589096, 2020.
Article in English | MEDLINE | ID: mdl-33584424

ABSTRACT

Eye gaze is a ubiquitous cue in child-caregiver interactions, and infants are highly attentive to eye gaze from very early on. However, the question of why infants show gaze-sensitive behavior, and what role this sensitivity to gaze plays in their language development, is not yet well understood. To gain a better understanding of the role of eye gaze in infants' language learning, we conducted a broad systematic review of the developmental literature for all studies that investigate the role of eye gaze in infants' language development. Across 77 peer-reviewed articles containing data from typically developing human infants (0-24 months) in the domain of language development, we identified two broad themes. The first tracked the effect of eye gaze on four developmental domains: (1) vocabulary development, (2) word-object mapping, (3) object processing, and (4) speech processing. Overall, there is considerable evidence that infants learn more about objects and are more likely to form word-object mappings in the presence of eye gaze cues, both of which are necessary for learning words. In addition, there is good evidence for longitudinal relationships between infants' gaze following abilities and later receptive and expressive vocabulary. However, many domains (e.g., speech processing) are understudied; further work is needed to determine whether gaze effects are specific to particular tasks, such as word-object mapping, or whether they reflect a general learning-enhancement mechanism. The second theme explored the reasons why eye gaze might be facilitative for learning, addressing the question of whether eye gaze is treated by infants as a specialized socio-cognitive cue. We concluded that the balance of evidence supports the idea that eye gaze facilitates infants' learning by enhancing their arousal, memory, and attentional capacities to a greater extent than other low-level attentional cues. However, as yet, there are too few studies that directly compare the effects of eye gaze cues and non-social attentional cues for strong conclusions to be drawn. We also suggest that there might be a developmental effect, with eye gaze developing, over the course of the first 2 years of life, into a truly ostensive cue that enhances language learning across the board.

14.
Infant Behav Dev ; 52: 130-139, 2018 08.
Article in English | MEDLINE | ID: mdl-30086413

ABSTRACT

Children's songs often contain rhyming words at phrase endings. In this study, we investigated whether infants can already recognize this phonological pattern in songs. Earlier studies using lists of spoken words were equivocal on infants' spontaneous processing of rhymes (Hayes et al., 2000; Jusczyk et al., 1999). Songs, however, constitute an ecologically valid rhyming stimulus, which could allow for spontaneous processing of this phonological pattern by infants. Novel children's songs with rhyming and non-rhyming lyrics using pseudo-words were presented to 35 9-month-old Dutch infants using the Headturn Preference Procedure. On average, infants listened longer to the non-rhyming songs, although around half of the infants exhibited a preference for the rhyming songs. These results highlight that infants have the processing abilities to benefit from their natural rhyming input for the development of their phonological abilities.


Subject(s)
Auditory Perception/physiology , Music , Phonetics , Female , Germany , Humans , Infant , Male
15.
Brain Res ; 1178: 106-13, 2007 Oct 31.
Article in English | MEDLINE | ID: mdl-17931604

ABSTRACT

Previous studies have shown that segmentation skills are language-specific, making it difficult to segment continuous speech in an unfamiliar language into its component words. Here we present the first study capturing the delay in segmentation and recognition in the foreign listener using ERPs. We compared the ability of Dutch adults and of English adults without knowledge of Dutch ('foreign listeners') to segment familiarized words from continuous Dutch speech. We used the known effect of repetition on the event-related potential (ERP) as an index of recognition of words in continuous speech. Our results show that word repetitions in isolation are recognized with equivalent facility by native and foreign listeners, but word repetitions in continuous speech are not. First, words familiarized in isolation are recognized faster by native than by foreign listeners when they are repeated in continuous speech. Second, when words that have previously been heard only in a continuous-speech context re-occur in continuous speech, the repetition is detected by native listeners, but is not detected by foreign listeners. A preceding speech context facilitates word recognition for native listeners, but delays or even inhibits word recognition for foreign listeners. We propose that the apparent difference in segmentation rate between native and foreign listeners is grounded in the difference in language-specific skills available to the listeners.


Subject(s)
Language , Speech Perception/physiology , Adolescent , Adult , Statistical Data Interpretation , Electroencephalography , Evoked Potentials/physiology , Female , Humans , Male , Short-Term Memory/physiology
16.
Neuropsychologia ; 95: 21-29, 2017 01 27.
Article in English | MEDLINE | ID: mdl-27939189

ABSTRACT

In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.


Subject(s)
Brain/physiology , Gestures , Speech Perception/physiology , Visual Perception/physiology , Adolescent , Adult , Brain Mapping , Comprehension/physiology , Cues (Psychology) , Female , Humans , Magnetic Resonance Imaging , Male , Neuropsychological Tests , Young Adult
17.
Brain Lang ; 172: 16-21, 2017 09.
Article in English | MEDLINE | ID: mdl-27059522

ABSTRACT

The CNTNAP2 gene encodes a cell-adhesion molecule that influences the properties of neural networks and the morphology and density of neurons and glial cells. Previous studies have shown association of CNTNAP2 variants with language-related phenotypes in health and disease. Here, we report associations of a common CNTNAP2 polymorphism (rs7794745) with variation in grey matter in a region in the dorsal visual stream. We tried to replicate an earlier study on 314 subjects by Tan et al. (2010), but now in a substantially larger group of more than 1700 subjects. Carriers of the T allele showed reduced grey matter volume in left superior occipital gyrus, while we did not replicate associations with grey matter volume in other regions identified by Tan et al. (2010). Our work illustrates the importance of independent replication in neuroimaging genetic studies of language-related candidate genes.


Subject(s)
Gray Matter/pathology , Membrane Proteins/genetics , Nerve Tissue Proteins/genetics , Occipital Lobe/metabolism , Occipital Lobe/pathology , Single Nucleotide Polymorphism/genetics , Alleles , Female , Genetic Association Studies , Humans , Language , Male , Neuroimaging
18.
Brain Lang ; 163: 22-31, 2016 Dec.
Article in English | MEDLINE | ID: mdl-27639117

ABSTRACT

Native speakers of Dutch do not always adhere to prescriptive grammar rules in their daily speech. These grammatical norm violations can elicit emotional reactions in language purists, mostly highly educated people, who claim that for them these constructions are truly ungrammatical. However, linguists generally assume that grammatical norm violations are in fact truly grammatical, especially when they occur frequently in a language. In an fMRI study we investigated the processing of grammatical norm violations in the brains of language purists and compared them with truly grammatical and truly ungrammatical sentences. Grammatical norm violations were found to be unique in that their processing resembled not only the processing of truly grammatical sentences (in left medial Superior Frontal Gyrus and Angular Gyrus) but also that of truly ungrammatical sentences (in Inferior Frontal Gyrus), despite what theories of grammar would usually lead us to believe.


Subject(s)
Brain/physiology , Linguistics , Speech Perception/physiology , Speech , Adult , Brain Mapping , Female , Frontal Lobe/physiology , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Netherlands
19.
Front Psychol ; 6: 667, 2015.
Article in English | MEDLINE | ID: mdl-26074838

ABSTRACT

Both categorization and segmentation processes play a crucial role in face perception. However, the functional relation between these subprocesses is currently unclear. The present study investigates the temporal relation between segmentation-related and category-selective responses in the brain, using electroencephalography (EEG). Surface segmentation and category content were both manipulated using texture-defined objects, including faces. This allowed us to study brain activity related to segmentation and to categorization. In the main experiment, participants viewed texture-defined objects for a duration of 800 ms. EEG results revealed that segmentation-related responses precede category-selective responses. Three additional experiments revealed that the presence and timing of categorization depends on stimulus properties and presentation duration. Photographic objects were presented for a long and short (92 ms) duration and evoked fast category-selective responses in both cases. On the other hand, presentation of texture-defined objects for a short duration only evoked segmentation-related but no category-selective responses. Category-selective responses were much slower when evoked by texture-defined than by photographic objects. We suggest that in case of categorization of objects under suboptimal conditions, such as when low-level stimulus properties are not sufficient for fast object categorization, segmentation facilitates the slower categorization process.

20.
J Autism Dev Disord ; 44(2): 443-51, 2014 Feb.
Article in English | MEDLINE | ID: mdl-23838729

ABSTRACT

Superior visual search in individuals diagnosed with autism spectrum disorder (ASD) is a well-reported finding. We administered two visual search tasks to individuals with ASD and matched controls. One showed no difference between the groups, and one did show the expected superior performance for individuals with ASD. We offer an explanation for these results, formulated in terms of load theory. We suggest that there is a limit to the visual search superiority of individuals with ASD, related to the perceptual load of the stimuli. When perceptual load becomes so high that no additional task-(ir)relevant information can be processed, performance will be based on single-stimulus identification, in which no differences between individuals with ASD and controls have been demonstrated.


Subject(s)
Pervasive Child Development Disorders/physiopathology , Visual Perception , Adolescent , Case-Control Studies , Eye Movements , Female , Humans , Male , Reaction Time , Young Adult