1.
Curr Biol ; 34(8): 1750-1754.e4, 2024 04 22.
Article in English | MEDLINE | ID: mdl-38521063

ABSTRACT

Using words to refer to objects in the environment is a core feature of the human language faculty. Referential understanding assumes the formation of mental representations of these words.1,2 Such understanding of object words has not yet been demonstrated as a general capacity in any non-human species,3 despite multiple behavior-based case reports.4,5,6,7,8,9,10 In human event-related potential (ERP) studies, object word knowledge is typically tested using the semantic violation paradigm, where words are presented either with their referent (match) or another object (mismatch).11,12 Such mismatch elicits an N400 effect, a well-established neural correlate of semantic processing.12,13 Reports of preverbal infant N400 evoked by semantic violations14 assert the use of this paradigm to probe mental representations of object words in nonverbal populations. Here, measuring dogs' (Canis familiaris) ERPs to objects primed with matching or mismatching object words, we found a mismatch effect at a frontal electrode, with a latency (206-606 ms) comparable to the human N400. A greater difference for words that dogs knew better, according to owner reports, further supported a semantic interpretation of this effect. Semantic expectations emerged irrespective of vocabulary size, demonstrating the prevalence of referential understanding in dogs. These results provide the first neural evidence for object word knowledge in a non-human animal. VIDEO ABSTRACT.
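As a rough illustration of how such a mismatch effect can be quantified, the sketch below averages match and mismatch epochs from a single frontal electrode and takes the mean of the difference wave in the reported 206-606 ms window. The arrays, sampling rate, and trial counts are placeholders, not the study's data or analysis code.

```python
import numpy as np

# Placeholder epoch arrays: trials x time samples at one frontal electrode,
# sampled at 1000 Hz with t = 0 at stimulus onset (random data, not the study's).
fs = 1000
rng = np.random.default_rng(0)
match_epochs = rng.standard_normal((60, 1000))
mismatch_epochs = rng.standard_normal((60, 1000))

# Trial-averaged ERPs per condition and the mismatch-minus-match difference wave.
erp_match = match_epochs.mean(axis=0)
erp_mismatch = mismatch_epochs.mean(axis=0)
difference_wave = erp_mismatch - erp_match

# Mean amplitude of the difference wave in the reported 206-606 ms window.
window = slice(int(0.206 * fs), int(0.606 * fs))
print(f"Mean mismatch effect (206-606 ms): {difference_wave[window].mean():.3f} a.u.")
```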


Subject(s)
Evoked Potentials, Semantics, Animals, Dogs/physiology, Male, Female, Evoked Potentials/physiology, Comprehension/physiology, Electroencephalography, Humans
2.
Sci Rep ; 13(1): 3150, 2023 02 23.
Article in English | MEDLINE | ID: mdl-36823218

ABSTRACT

Since the dawn of comparative cognitive research, dogs have been suspected to possess some capacity for responding to human spoken language. Neuroimaging studies have supported the existence of relevant mechanisms, but convincing behavioral performance is rare, with only a few exceptional dogs worldwide demonstrating a lexicon of object labels they respond to. In the present study we aimed to investigate if and how a capacity for processing verbal stimuli is expressed in dogs (N = 20) whose alleged knowledge of verbal labels is backed up only by owner reports taken at face value and concerns only a few words (on average 5). Dogs were tested in a two-choice paradigm with familiar objects. The experiment was divided into a cue-control condition (objects visible to the owner vs. shielded by a panel, thereby controlling the owner's ability to emit cues to the dog) and a response-type condition (fetching vs. looking). Above-chance performance in fetching and looking at the named object emerged at the level of the sample as a whole. Only one individual performed reliably above chance, but the group-level effect did not depend on this data point. The presence of the panel also had no influence, which supports that performance was not driven by non-verbal cues from the owners. The group-level effect suggests that in typical dogs object-label learning is an unstable process, either because the animals primarily engage in contextual learning or because it resembles the early stages of implicit, statistical learning of words in humans, as opposed to the rapid mapping reported in exceptional dogs with larger passive vocabularies.
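The abstract reports above-chance performance in a two-choice task. A minimal sketch of the kind of exact binomial test that could be used to assess a single dog's performance against the 50% chance level follows; the trial counts are hypothetical and the test choice is an assumption, not the study's reported analysis.

```python
from scipy.stats import binomtest  # requires scipy >= 1.7

# Hypothetical counts: correct choices out of total trials in a two-choice task
# where chance performance is 50%.
n_correct, n_trials = 14, 20

# One-sided exact binomial test against the 0.5 chance level.
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"Proportion correct: {n_correct / n_trials:.2f}, p = {result.pvalue:.3f}")
```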


Subject(s)
Cues, Learning, Humans, Dogs, Animals
3.
Brain Res ; 1805: 148246, 2023 04 15.
Article in English | MEDLINE | ID: mdl-36657631

ABSTRACT

To process speech in a multi-talker environment, listeners need to segregate the mixture of incoming speech streams and focus their attention on one of them. Potentially, speech prosody could aid the segregation of different speakers, the selection of the desired speech stream, and the detection of targets within the attended stream. To test these possibilities, we recorded behavioral responses and extracted event-related potentials and functional brain networks from electroencephalographic signals recorded while participants listened to two concurrent speech streams, performing a lexical detection and a recognition memory task in parallel. Prosody manipulation was applied to the attended speech stream in one group of participants and to the ignored speech stream in another group. Naturally recorded speech stimuli were either intact, synthetically F0-flattened, or prosodically suppressed by the speaker. Results show that prosody - especially the parsing cues mediated by speech rate - facilitates stream selection, while playing a smaller role in auditory stream segmentation and target detection.


Subject(s)
Speech Perception, Humans, Speech Perception/physiology, Speech, Acoustic Stimulation/methods, Auditory Perception/physiology, Electroencephalography/methods
4.
PLoS One ; 17(8): e0273226, 2022.
Article in English | MEDLINE | ID: mdl-36001644

ABSTRACT

Powerful figures, such as politicians, who show a behavioural pattern of exuberant self-confidence, recklessness, and contempt for others may exhibit an acquired personality disorder, hubris syndrome, which has been shown to leave its mark on speech patterns. Our study explores characteristic language patterns of Hungarian prime ministers (PMs), with special emphasis on one of the key indicators of hubris, the shift from the first-person "I" to "we" in spontaneous speech. We analyzed the ratio of first-person singular ("I") and plural ("we") pronouns and verbal inflections in the spontaneous parliamentary speeches of four Hungarian PMs between 1998 and 2018. We found that Viktor Orbán during his second premiership (2010-2014) used first-person plural relative to singular inflections more often than the other three PMs during their terms. Orbán and another Hungarian PM, Ferenc Gyurcsány, who were re-elected at some point, showed an increased ratio of first-person plural vs. singular inflections and personal pronouns by their second term, likely reflecting increasing hubristic tendencies. The results show that the ratio of "I" to "we", usually studied in English texts, also shows changes in a structurally different language, Hungarian. This finding suggests that it is extended periods of premiership, and not merely the experience of excessive power, that may increase hubristic behaviour in political leaders. The results are particularly elucidating regarding the role of re-elections in political leaders' hubristic speech and behaviour.
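A minimal sketch of the pronoun-ratio idea, assuming a simple English token count; the study itself analyzed Hungarian pronouns and verbal inflections, so the word lists here are purely illustrative.

```python
import re

# Illustrative English word lists; not an attempt to model Hungarian inflections.
FIRST_SINGULAR = {"i", "me", "my", "mine", "myself"}
FIRST_PLURAL = {"we", "us", "our", "ours", "ourselves"}

def we_to_i_ratio(text: str) -> float:
    """Ratio of first-person plural to singular tokens in a text sample."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n_singular = sum(t in FIRST_SINGULAR for t in tokens)
    n_plural = sum(t in FIRST_PLURAL for t in tokens)
    # Higher values indicate a shift from "I" to "we", the marker of interest here.
    return n_plural / max(n_singular, 1)

print(we_to_i_ratio("We decided that we will act, and I accept that we must."))
```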


Subject(s)
Language, Speech, Humans, Hungary, Self Concept, Syndrome
5.
R Soc Open Sci ; 9(4): 211769, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35401994

ABSTRACT

Recent advances in the field of canine neuro-cognition allow for the non-invasive research of brain mechanisms in family dogs. Considering the striking similarities between dogs' and human (infants') socio-cognition at the behavioural level, both similarities and differences in the neural background can be of particular relevance. The current study investigates brain responses of n = 17 family dogs to human and conspecific emotional vocalizations using a fully non-invasive event-related potential (ERP) paradigm. We found that, similarly to humans, dogs show a differential ERP response depending on the species of the caller, demonstrated by a more positive ERP response to human vocalizations compared to dog vocalizations in a time window between 250 and 650 ms after stimulus onset. A later time window between 800 and 900 ms also revealed a valence-sensitive ERP response in interaction with the species of the caller. Our results are, to our knowledge, the first ERP evidence of the species sensitivity of vocal neural processing in dogs, along with indications of valence-sensitive processes in later post-stimulus time periods.

6.
Curr Biol ; 31(24): 5512-5521.e5, 2021 12 20.
Article in English | MEDLINE | ID: mdl-34717832

ABSTRACT

To learn words, humans extract statistical regularities from speech. Multiple species use statistical learning also to process speech, but the neural underpinnings of speech segmentation in non-humans remain largely unknown. Here, we investigated computational and neural markers of speech segmentation in dogs, a phylogenetically distant mammal that efficiently navigates humans' social and linguistic environment. Using electroencephalography (EEG), we compared event-related responses (ERPs) for artificial words previously presented in a continuous speech stream with different distributional statistics. Results revealed an early effect (220-470 ms) of transitional probability and a late component (590-790 ms) modulated by both word frequency and transitional probability. Using fMRI, we searched for brain regions sensitive to statistical regularities in speech. Structured speech elicited lower activity in the basal ganglia, a region involved in sequence learning, and repetition enhancement in the auditory cortex. Speech segmentation in dogs, similar to that of humans, involves complex computations, engaging both domain-general and modality-specific brain areas. VIDEO ABSTRACT.
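The distributional statistics referred to here can be illustrated with a small sketch that computes forward transitional probabilities between adjacent syllables in an artificial stream; the syllable labels and the stream are invented for illustration and are not taken from the study.

```python
from collections import Counter

# Toy continuous stream of syllables; "words" are triplets such as ba-bi-bu.
stream = ["ba", "bi", "bu", "da", "di", "du", "ba", "bi", "bu", "ga", "gi", "gu",
          "da", "di", "du", "ba", "bi", "bu"]

# Count adjacent pairs and how often each syllable occurs in first position.
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

def transitional_probability(s1: str, s2: str) -> float:
    """Forward transitional probability P(s2 | s1) estimated from the stream."""
    return pair_counts[(s1, s2)] / first_counts[s1]

# Within-word transitions have high TP; across-word-boundary transitions are lower.
print(transitional_probability("ba", "bi"))  # within-word transition
print(transitional_probability("bu", "da"))  # across-boundary transition
```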


Subject(s)
Speech Perception, Speech, Animals, Dogs, Electroencephalography, Evoked Potentials/physiology, Learning, Mammals, Speech/physiology, Speech Perception/physiology
7.
Sci Rep ; 11(1): 2222, 2021 01 26.
Article in English | MEDLINE | ID: mdl-33500506

ABSTRACT

Learning object names after only a few exposures is thought to be a typically human capacity. Previous accounts of similar skills in dogs did not include control testing procedures, leaving open the question of whether this ability is uniquely human. To investigate the presence of the capacity to rapidly learn words in dogs, we tested object-name learning after four exposures in two dogs with knowledge of multiple toy names. The dogs were exposed to new object names either while playing with the objects with the owner, who named them in a social context, or during an exclusion-based task similar to those used in previous studies. The dogs were then tested on the learning outcome of the new object names. Both dogs succeeded after exposure in the social context but not after exposure to the exclusion-based task. Their memory of the object names lasted for at least two minutes and tended to decay after retention intervals of 10 min and 1 h. This reveals that rapid object-name learning is possible for a non-human species (dogs), although memory consolidation may require more exposures. We suggest that rapid learning presupposes learning in a social context. To investigate whether rapid learning of object names in a social context is restricted to dogs that have already shown the ability to learn multiple object names, we used the same procedure with 20 typical family dogs. These dogs did not demonstrate any evidence of learning the object names. This suggests that only a few subjects show this ability. Future studies should investigate whether this outstanding capacity stems from the exceptional talent of some individuals or whether it emerges from previous experience with object-name learning.


Subject(s)
Learning/physiology, Animals, Cues, Dogs, Female, Verbal Learning, Vocabulary
8.
J Eye Mov Res ; 13(3)2020 Mar 30.
Article in English | MEDLINE | ID: mdl-33828798

ABSTRACT

Based on Kuzmicová's [1] phenomenological typology of narrative styles, we studied the specific contributions of mental imagery to the literary reading experience and to reading behavior by combining questionnaires with eye-tracking methodology. Specifically, we focused on the two main categories in Kuzmicová's [1] typology, i.e., texts dominated by an "enactive" style and texts dominated by a "descriptive" style. "Enactive"-style texts render characters interacting with their environment, and "descriptive"-style texts render environments dissociated from human action. The quantitative analyses of word-category distributions in two dominantly enactive and two dominantly descriptive texts indicated significant differences, especially in the number of verbs, with more verbs in enactive than in descriptive texts. In a second study, participants read two texts (one theoretically cueing descriptive imagery, the other cueing enactment imagery) while their eye movements were recorded. After reading, participants completed questionnaires assessing aspects of the reading experience in general, as well as their text-elicited mental imagery specifically. Results show that readers experienced more difficulty conjuring up mental images while reading descriptive-style texts and that longer fixation durations on words were associated with enactive-style texts. We propose that the enactive style involves more imagery processes, which can be reflected in eye-movement behavior.
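As an illustration of the word-category comparison described above, the sketch below runs a chi-square test on hypothetical verb vs. non-verb counts for an enactive and a descriptive text. The counts are placeholders and the chi-square test is an assumed choice for comparing category distributions, not necessarily the study's analysis.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical word-category counts (verbs vs. other words) for one enactive-style
# and one descriptive-style text; placeholder numbers, not the study's data.
counts = np.array([[120, 480],   # enactive text: verbs, non-verbs
                   [70, 530]])   # descriptive text: verbs, non-verbs

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```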

9.
Biomed Eng Online ; 17(1): 37, 2018 Mar 27.
Article in English | MEDLINE | ID: mdl-29580236

ABSTRACT

BACKGROUND: Accurately solving the electroencephalography (EEG) forward problem is crucial for precise EEG source analysis. Previous studies have shown that the use of multicompartment head models in combination with the finite element method (FEM) can yield high accuracies both numerically and with regard to the geometrical approximation of the human head. However, the workload for the generation of multicompartment head models has often been too high and the use of publicly available FEM implementations too complicated for a wider application of FEM in research studies. In this paper, we present a MATLAB-based pipeline that aims to resolve this lack of easy-to-use integrated software solutions. The presented pipeline allows for the easy application of five-compartment head models with the FEM within the FieldTrip toolbox for EEG source analysis. METHODS: The FEM from the SimBio toolbox, more specifically the St. Venant approach, was integrated into the FieldTrip toolbox. We give a short sketch of the implementation and its application, and we perform a source localization of somatosensory evoked potentials (SEPs) using this pipeline. We then evaluate the accuracy that can be achieved using the automatically generated five-compartment hexahedral head model [skin, skull, cerebrospinal fluid (CSF), gray matter, white matter] in comparison to a highly accurate tetrahedral head model that was generated on the basis of a semiautomatic segmentation with very careful and time-consuming manual corrections. RESULTS: The source analysis of the SEP data correctly localizes the P20 component and achieves a high goodness of fit. The subsequent comparison to the highly detailed tetrahedral head model shows that the automatically generated five-compartment head model performs about as well as a highly detailed four-compartment head model (skin, skull, CSF, brain). This is a significant improvement in comparison to the three-compartment head models frequently used in practice, since the importance of modeling the CSF compartment has been shown in a variety of studies. CONCLUSION: The presented pipeline facilitates the use of five-compartment head models with the FEM for EEG source analysis. The accuracy with which the EEG forward problem can thereby be solved is increased compared to the commonly used three-compartment head models, and more reliable EEG source reconstruction results can be obtained.
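A minimal sketch of a goodness-of-fit measure of the kind commonly reported for dipole source analyses, expressed as one minus the relative residual variance between a measured scalp topography and the topography predicted by a forward (head) model. The data here are random placeholders, and this is not the FieldTrip/SimBio pipeline itself.

```python
import numpy as np

# Placeholder 64-channel topographies: a "measured" map and a forward-model prediction.
rng = np.random.default_rng(0)
measured = rng.standard_normal(64)
predicted = measured + 0.1 * rng.standard_normal(64)

# Goodness of fit = 1 - relative residual variance between measured and predicted maps.
residual_variance = np.sum((measured - predicted) ** 2) / np.sum(measured ** 2)
goodness_of_fit = 1.0 - residual_variance
print(f"Goodness of fit: {goodness_of_fit:.3f}")
```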


Subject(s)
Electroencephalography, Computer-Assisted Signal Processing, Brain/physiology, Somatosensory Evoked Potentials, Finite Element Analysis, Head, Humans
10.
Front Psychol ; 8: 211, 2017.
Article in English | MEDLINE | ID: mdl-28270782

ABSTRACT

In everyday conversations, the gap between the turns of conversational partners is most frequently between 0 and 200 ms. We were interested in how speakers achieve such fast transitions. We designed an experiment in which participants listened to pre-recorded questions about images presented on a screen and were asked to answer these questions. We tested whether speakers already prepare their answers while they listen to questions and whether they can prepare for the time of articulation by anticipating when questions end. In the experiment, it was possible to guess the answer at the beginning of the question in half of the experimental trials. We also manipulated whether it was possible to predict the length of the last word of the question. The results suggest that when listeners know the answer early, they start speech production already during the question. Speakers can also time when to speak by predicting the duration of turns. These temporal predictions can be based on the length of anticipated words and on the overall probability of turn durations.

11.
Sci Rep ; 5: 12881, 2015 Aug 05.
Article in English | MEDLINE | ID: mdl-26242909

ABSTRACT

A striking puzzle about language use in everyday conversation is that turn-taking latencies are usually very short, whereas planning language production takes much longer. This implies overlap between language comprehension and production processes, but the nature and extent of such overlap has never been studied directly. Combining an interactive quiz paradigm with EEG measurements in an innovative way, we show that production planning processes start as soon as possible, that is, within half a second after the answer to a question can be retrieved (up to several seconds before the end of the question). Localization of ERP data shows early activation even of brain areas related to late stages of production planning (e.g., syllabification). Finally, oscillation results suggest an attention switch from comprehension to production around the same time frame. This perspective from interactive language use throws new light on the performance characteristics that language competence involves.


Subject(s)
Brain/physiology, Choice Behavior, Verbal Behavior, Adolescent, Attention, Comprehension, Electroencephalography, Evoked Potentials, Female, Humans, Male, Reaction Time, Young Adult
12.
Neuroimage ; 109: 50-62, 2015 Apr 01.
Article in English | MEDLINE | ID: mdl-25583610

ABSTRACT

EEG mu rhythms (8-13 Hz) recorded at fronto-central electrodes are generally considered markers of motor cortical activity in humans, because they are modulated when participants perform an action, when they observe another's action, or even when they imagine performing an action. In this study, we analyzed the time-frequency (TF) modulation of mu rhythms while participants read action language ("You will cut the strawberry cake"), abstract language ("You will doubt the patient's argument"), and perceptive language ("You will notice the bright day"). The results indicated that mu suppression at fronto-central sites is associated with action language rather than with abstract or perceptive language. Also, the largest difference between conditions occurred quite late in the sentence, while reading the first noun (contrast Action vs. Abstract) or the second noun following the action verb (contrast Action vs. Perceptive). This suggests that motor activation is associated with the integration of words across the sentence, beyond the lexical processing of the action verb. Source reconstruction localized the mu suppression associated with action sentences in premotor cortex (BA 6). The present study suggests (1) that the understanding of action language activates motor networks in the human brain, and (2) that this activation occurs online, based on semantic integration across multiple words in the sentence.
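As a rough sketch of how mu-band (8-13 Hz) power over time can be estimated from a single EEG channel, the example below band-pass filters a synthetic signal and takes the squared Hilbert envelope, expressing the change relative to a baseline window. This is a generic band-power estimate under placeholder data, not the study's time-frequency pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Synthetic single-channel EEG (placeholder data), 500 Hz sampling rate.
fs = 500
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10 * t) * np.exp(-t) + 0.5 * rng.standard_normal(t.size)

# Band-pass to the mu range (8-13 Hz) and take the squared Hilbert envelope as power.
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
mu_power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

# Express mu suppression as a dB change relative to a 0.5 s baseline window.
baseline = mu_power[: int(0.5 * fs)].mean()
change_db = 10 * np.log10(mu_power / baseline)
print(f"Mean mu-power change after baseline: {change_db[int(0.5 * fs):].mean():.2f} dB")
```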


Subject(s)
Brain Waves, Brain/physiology, Comprehension/physiology, Language, Movement, Adult, Electroencephalography, Female, Humans, Male, Motor Cortex/physiology, Reading, Spectrum Analysis, Young Adult
13.
J Cogn Neurosci ; 26(11): 2530-9, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24893743

ABSTRACT

RTs in conversation, with average gaps of 200 msec and often less, beat standard RTs, despite the complexity of the response and the lag in speech production (600 msec or more). This can only be achieved by anticipation of the timing and content of turns in conversation, about which little is known. Using EEG and an experimental task with conversational stimuli, we show that estimation of turn durations is based on anticipating the way the turn would be completed. We found a neuronal correlate of turn-end anticipation localized in the ACC and inferior parietal lobule, namely a beta-frequency desynchronization as early as 1250 msec before the end of the turn. We suggest that anticipation of the other's utterance leads to accurately timed transitions in everyday conversations.


Subject(s)
Psychological Anticipation/physiology, Brain/physiology, Communication, Interpersonal Relations, Speech Perception/physiology, Speech/physiology, Adult, Beta Rhythm, Electroencephalography, Female, Humans, Male, Neuropsychological Tests, Young Adult
14.
Front Psychol ; 3: 376, 2012.
Article in English | MEDLINE | ID: mdl-23112776

ABSTRACT

During conversation listeners have to perform several tasks simultaneously. They have to comprehend their interlocutor's turn, while also having to prepare their own next turn. Moreover, a careful analysis of the timing of natural conversation reveals that next speakers also time their turns very precisely. This is possible only if listeners can predict accurately when the speaker's turn is going to end. But how are people able to predict when a turn ends? We propose that people know when a turn ends because they know how it ends. We conducted a gating study to examine whether better turn-end predictions coincide with more accurate anticipation of the last words of a turn. We used turns from an earlier button-press experiment where people had to press a button exactly when a turn ended. We show that the proportion of correct guesses in our experiment is higher when a turn's end was estimated more accurately in time in the button-press experiment. When people were too late in their anticipation in the button-press experiment, they also anticipated more words in our gating study. We conclude that people made predictions in advance about the upcoming content of a turn and used this prediction to estimate the duration of the turn. We suggest an economical model of turn-end anticipation that is based on the anticipation of words and syntactic frames in comprehension.

15.
Hum Brain Mapp ; 33(12): 2898-912, 2012 Dec.
Article in English | MEDLINE | ID: mdl-22488914

ABSTRACT

The relationship between the evoked responses (ERPs/ERFs) and the event-related changes in EEG/MEG power that can be observed during sentence-level language comprehension is as yet unclear. This study addresses a possible relationship between MEG power changes and the N400m component of the event-related field. Whole-head MEG was recorded while subjects listened to spoken sentences with incongruent (IC) or congruent (C) sentence endings. A clear N400m was observed over the left hemisphere, and was larger for the IC sentences than for the C sentences. A time-frequency analysis of power revealed a decrease in alpha and beta power over the left hemisphere in roughly the same time range as the N400m for the IC relative to the C condition. A linear regression analysis revealed a positive linear relationship between N400m and beta power for the IC condition, not for the C condition. No such linear relation was found between N400m and alpha power for either condition. The sources of the beta decrease were estimated in the LIFG, a region known to be involved in semantic unification operations. One source of the N400m was estimated in the left superior temporal region, which has been related to lexical retrieval. We interpret our data within a framework in which beta oscillations are inversely related to the engagement of task-relevant brain networks. The source reconstructions of the beta power suppression and the N400m effect support the notion of a dynamic communication between the LIFG and the left superior temporal region during language comprehension.
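The per-condition linear-regression step described above can be sketched as follows, relating N400m amplitude to beta-power change across subjects; the data are simulated placeholders and the simple ordinary-least-squares fit is an assumption about the form of the analysis.

```python
import numpy as np
from scipy.stats import linregress

# Simulated placeholder data: one N400m amplitude and one beta-power change per subject.
rng = np.random.default_rng(1)
n400m_amplitude = rng.standard_normal(20)
beta_power_change = 0.6 * n400m_amplitude + 0.4 * rng.standard_normal(20)

# Ordinary least-squares regression of beta-power change on N400m amplitude.
fit = linregress(n400m_amplitude, beta_power_change)
print(f"slope = {fit.slope:.2f}, r = {fit.rvalue:.2f}, p = {fit.pvalue:.4f}")
```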


Subject(s)
Beta Rhythm/physiology, Cerebral Cortex/physiology, Comprehension/physiology, Evoked Potentials/physiology, Language, Adolescent, Adult, Brain Mapping, Female, Humans, Magnetoencephalography, Male, Speech Perception/physiology
16.
J Cogn Neurosci ; 22(7): 1333-47, 2010 Jul.
Article in English | MEDLINE | ID: mdl-19580386

ABSTRACT

There is growing evidence suggesting that synchronization changes in the oscillatory neuronal dynamics in the EEG or MEG reflect the transient coupling and uncoupling of functional networks related to different aspects of language comprehension. In this work, we examine how sentence-level syntactic unification operations are reflected in the oscillatory dynamics of the MEG. Participants read sentences that were either correct, contained a word-category violation, or consisted of random word sequences devoid of syntactic structure. A time-frequency analysis of MEG power changes revealed three types of effects. The first type of effect was related to the detection of a (word-category) violation in a syntactically structured sentence, and was found in the alpha and gamma frequency bands. A second type of effect was maximally sensitive to the syntactic manipulations: a linear increase in beta power across the sentence was present for correct sentences, was disrupted upon the occurrence of a word-category violation, and was absent in syntactically unstructured random word sequences. We therefore relate this effect to syntactic unification operations. Thirdly, we observed a linear increase in theta power across the sentence for all syntactically structured sentences, tentatively related to the building of a working-memory trace of the linguistic input. In conclusion, the data seem to suggest that syntactic unification is reflected by neuronal synchronization in the lower-beta frequency band.


Subject(s)
Beta Rhythm/psychology, Brain/physiology, Comprehension/physiology, Language, Neurons/physiology, Humans, Magnetoencephalography, Reading, Semantics, Vocabulary