Results 1 - 6 of 6
1.
Neuroimage; 243: 118534, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34469813

ABSTRACT

Recognizing the actions of others depends on segmentation into meaningful events. After decades of research in this area, it still remains unclear how humans do this and which brain areas support the underlying processes. Here we show that a computer vision-based model of touching and untouching events can predict human behavior in segmenting object manipulation actions with high accuracy. Using this computational model and functional Magnetic Resonance Imaging (fMRI), we pinpoint the neural networks underlying this segmentation behavior during an implicit action observation task. Segmentation was marked by a strong increase in visual activity at touching events, followed by the engagement of frontal, hippocampal, and insular regions, signaling the updating of expectations at subsequent untouching events. Brain activity and behavior show that touching-untouching motifs are critical features for identifying the key elements of actions, including object manipulations.
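
The segmentation cue described above lends itself to a compact illustration. Below is a minimal Python sketch, not the authors' model, of how touching/untouching transitions between two tracked objects could be turned into candidate event boundaries; the distance threshold and the input format are illustrative assumptions.

    # Hypothetical sketch: derive candidate event boundaries from
    # touching/untouching transitions between two tracked objects.
    def touch_events(distances, touch_threshold=5.0):
        """Return (frame, event) pairs, where event is 'touch' or 'untouch'.

        distances: per-frame distance (e.g., in pixels) between two objects.
        """
        events = []
        touching = distances[0] <= touch_threshold
        for frame, d in enumerate(distances[1:], start=1):
            now_touching = d <= touch_threshold
            if now_touching and not touching:
                events.append((frame, "touch"))    # contact begins
            elif touching and not now_touching:
                events.append((frame, "untouch"))  # contact is released
            touching = now_touching
        return events

    # Example: the objects approach, stay in contact, then separate.
    print(touch_events([12.0, 8.0, 3.0, 1.0, 2.0, 9.0, 15.0]))
    # -> [(2, 'touch'), (5, 'untouch')]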


Subjects
Brain Mapping/methods, Brain/physiology, Touch/physiology, Adolescent, Adult, Computer Simulation, Female, Humans, Magnetic Resonance Imaging, Male, Motion Perception/physiology, Movement/physiology, Neural Networks (Computer), Recognition (Psychology), Young Adult
2.
PLoS One; 16(7): e0253130, 2021.
Article in English | MEDLINE | ID: mdl-34293800

ABSTRACT

Auditory and visual percepts are integrated even when they are not perfectly temporally aligned with each other, especially when the visual signal precedes the auditory signal. This window of temporal integration for asynchronous audiovisual stimuli is relatively well examined in the case of speech, while other natural action-induced sounds have been widely neglected. Here, we studied the detection of audiovisual asynchrony in three different whole-body actions with natural action-induced sounds: hurdling, tap dancing, and drumming. In Study 1, we examined whether audiovisual asynchrony detection, assessed with a simultaneity judgment task, differs as a function of sound production intentionality. Based on previous findings, we expected auditory and visual signals to be integrated over a wider temporal window for actions that create sounds intentionally (tap dancing) than for actions that create sounds incidentally (hurdling). While percentages of perceived synchrony differed in the expected way, we identified two further factors, high event density and low rhythmicity, that also induced higher synchrony ratings. Therefore, in Study 2 we systematically varied event density and rhythmicity, this time using drumming stimuli to exert full control over these variables, together with the same simultaneity judgment task. Results suggest that high event density leads to a bias to integrate rather than segregate auditory and visual signals, even at relatively large asynchronies. Rhythmicity had a similar, albeit weaker, effect when event density was low. Our findings demonstrate that shorter asynchronies and visual-first asynchronies lead to higher synchrony ratings of whole-body actions, pointing to clear parallels with audiovisual integration in speech perception. Overconfidence in the naturally expected synchrony of sound and sight was stronger for intentional (vs. incidental) sound production and for movements with high (vs. low) rhythmicity, presumably because both encourage predictive processes. In contrast, high event density appears to increase synchrony judgments simply because it makes the detection of audiovisual asynchrony more difficult. More studies using real-life audiovisual stimuli with varying event densities and rhythmicities are needed to fully uncover the general mechanisms of audiovisual integration.
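
The core dependent measure of both studies, percent perceived synchrony per SOA, is easy to sketch. The snippet below is illustrative only; the trial field names and the sign convention (negative SOA = visual first) are assumptions, not the study's analysis code.

    # Hypothetical sketch: collapse simultaneity-judgment trials into
    # percent perceived synchrony per stimulus-onset asynchrony (SOA).
    from collections import defaultdict

    def synchrony_by_soa(trials):
        """trials: dicts with 'soa_ms' and 'judged_synchronous' (bool)."""
        counts = defaultdict(lambda: [0, 0])        # soa -> [synchronous, total]
        for t in trials:
            counts[t["soa_ms"]][0] += int(t["judged_synchronous"])
            counts[t["soa_ms"]][1] += 1
        return {soa: 100.0 * syn / total
                for soa, (syn, total) in sorted(counts.items())}

    trials = [
        {"soa_ms": -200, "judged_synchronous": True},   # visual leads by 200 ms
        {"soa_ms": -200, "judged_synchronous": True},
        {"soa_ms": 0,    "judged_synchronous": True},
        {"soa_ms": 200,  "judged_synchronous": False},  # auditory leads by 200 ms
        {"soa_ms": 200,  "judged_synchronous": True},
    ]
    print(synchrony_by_soa(trials))   # {-200: 100.0, 0: 100.0, 200: 50.0}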


Subjects
Auditory Perception, Dancing/physiology, Music, Track and Field/physiology, Visual Perception, Acoustic Stimulation, Adult, Dancing/psychology, Female, Humans, Male, Music/psychology, Photic Stimulation, Sound, Track and Field/psychology, Visual Perception/physiology, Young Adult
3.
PLoS One; 15(12): e0243829, 2020.
Article in English | MEDLINE | ID: mdl-33370343

ABSTRACT

Predicting other people's upcoming actions is key to successful social interactions. Previous studies have started to disentangle the various sources of information that action observers exploit, including objects, movements, contextual cues, and features of the acting person's identity. Here we focus on the role of static and dynamic inter-object spatial relations that change during an action. We designed a virtual reality setup and tested recognition speed for ten different manipulation actions. Importantly, all objects were abstracted by emulating them with cubes, so that participants could not infer an action from object information. Instead, participants had to rely only on the limited information provided by changes in the spatial relations between the cubes. In spite of these constraints, participants were able to predict actions within, on average, less than 64% of the action's duration. Furthermore, we employed a computational model, the enriched Semantic Event Chain (eSEC), which incorporates information about different types of spatial relations: (a) objects' touching/untouching, (b) static spatial relations between objects, and (c) dynamic spatial relations between objects during an action. Assuming the eSEC as an underlying model, we show, using information-theoretical analysis, that humans mostly rely on a mixed-cue strategy when predicting actions. Machine-based action prediction is able to produce faster decisions based on individual cues. We argue that the human strategy, though slower, may be particularly beneficial for the prediction of natural and more complex actions with more variable or partial sources of information. Our findings contribute to the understanding of how individuals manage to infer the goals of observed actions even before full goal accomplishment, and may open new avenues for building robots capable of conflict-free human-robot cooperation.
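
To make the three relation types concrete, here is a minimal Python sketch of the kind of symbolic pair encoding an eSEC-style model tracks; the relation labels, thresholds, and coordinate convention are illustrative assumptions rather than the published eSEC definition.

    # Hypothetical sketch: encode one object pair at one time step as
    # (touching, static relation, dynamic relation).
    import math

    def pair_relations(pos_a, pos_b, prev_dist=None, touch_threshold=0.05):
        dist = math.dist(pos_a, pos_b)
        touching = "T" if dist <= touch_threshold else "N"
        static = "above" if pos_a[2] > pos_b[2] else "below"   # z-axis ordering
        if prev_dist is None or dist == prev_dist:
            dynamic = "stable"
        elif dist < prev_dist:
            dynamic = "approaching"
        else:
            dynamic = "withdrawing"
        return touching, static, dynamic, dist

    # A hand cube above a table cube, moving down until contact.
    rel1 = pair_relations((0, 0, 0.30), (0, 0, 0.0))
    rel2 = pair_relations((0, 0, 0.04), (0, 0, 0.0), prev_dist=rel1[3])
    print(rel1[:3], rel2[:3])
    # -> ('N', 'above', 'stable') ('T', 'above', 'approaching')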


Subjects
Computer Simulation, Human Activities, Models (Biological), Semantics, Space Perception, Adult, Female, Humans, Male, Virtual Reality, Young Adult
4.
Front Neurosci; 14: 483, 2020.
Article in English | MEDLINE | ID: mdl-32477059

ABSTRACT

Most human actions produce concomitant sounds. Action sounds can either be part of the action goal (GAS, goal-related action sounds), as for instance in tap dancing, or a mere by-product of the action (BAS, by-product action sounds), as for instance in hurdling. It is currently unclear whether these two types of action sounds (incidental vs. intentional) differ in their neural representation, and whether their impact on the evaluation of action performance diverges. Here we examined whether auditory information is a more important factor for positive action quality ratings during the observation of tap dancing than during the observation of hurdling. Moreover, we tested whether observation of tap dancing vs. hurdling led to stronger attenuation in primary auditory cortex, and to a stronger mismatch signal when sounds did not match expectations. We recorded individual point-light videos of newly trained participants performing tap dancing and hurdling. In the subsequent functional magnetic resonance imaging (fMRI) session, participants were presented with the videos displaying their own actions, including the corresponding action sounds, and were asked to rate the quality of their performance. Videos were either in their original form or scrambled in the visual modality, the auditory modality, or both. As hypothesized, behavioral results showed significantly lower rating scores in the GAS condition than in the BAS condition when the auditory modality was scrambled. Functional MRI contrasts between BAS and GAS actions revealed higher activation of primary auditory cortex in the BAS condition, speaking in favor of stronger attenuation in GAS, as well as stronger activation of the posterior superior temporal gyri and the supplementary motor area in GAS. The results suggest that the processing of self-generated action sounds depends on whether or not we intend to produce a sound with our action, and that action sounds may be more prone to be used as sensory feedback when they are part of the explicit action goal. Our findings contribute to a better understanding of the function of action sounds for learning and controlling sound-producing actions.
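
For illustration, an auditory scrambling manipulation of the kind mentioned above could look roughly like the following sketch: short, fixed-length chunks of the action-sound track are shuffled while the video stays intact. Chunk length, the random seed, and the use of NumPy are assumptions, not details of the study.

    # Hypothetical sketch: scramble an audio track by shuffling short chunks.
    import numpy as np

    def scramble_audio(samples, sample_rate, chunk_ms=100, seed=0):
        """Return a copy of `samples` with chunk_ms-long segments reordered."""
        chunk = int(sample_rate * chunk_ms / 1000)
        n_chunks = len(samples) // chunk
        order = np.random.default_rng(seed).permutation(n_chunks)
        scrambled = np.concatenate([samples[i * chunk:(i + 1) * chunk] for i in order])
        return np.concatenate([scrambled, samples[n_chunks * chunk:]])  # keep remainder

    audio = np.sin(np.linspace(0, 200 * np.pi, 44_100))   # 1 s dummy action sound
    print(scramble_audio(audio, 44_100).shape)            # (44100,)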

5.
Front Psychol; 9: 1942, 2018.
Article in English | MEDLINE | ID: mdl-30459670

ABSTRACT

Two decades of research indicate that visual processing is typically enhanced for items that are in the space near the hands (near-hand space). Enhanced attention and cognitive control, among other mechanisms, have been thought to be responsible for the observed effects. As accumulating experimental evidence and recent theories of dual-tasking suggest an involvement of cognitive control and attentional processes during dual-tasking, dual-task performance may be modulated in near-hand space. We therefore performed a series of three experiments testing whether near-hand space affects the shift between task-component processing in two visual-manual tasks. We applied a Psychological Refractory Period (PRP) paradigm with varying stimulus-onset asynchrony (SOA) and manipulated stimulus-hand proximity by placing the hands either at the sides of a computer screen (near-hand condition) or on the lap (far-hand condition). In Experiment 1, Task 1 was a number categorization task (odd vs. even) and Task 2 was a letter categorization task (vowel vs. consonant). Stimulus presentation was spatially segregated, with Stimulus 1 appearing first on the right side of the screen and Stimulus 2 appearing second on the left side. In Experiment 2, we replaced Task 2 with a color categorization task (orange vs. blue). In Experiment 3, Stimulus 1 and Stimulus 2 were centrally presented as a single bivalent stimulus. The classic PRP effect appeared in all three experiments, with Task 2 performance declining at short SOAs while Task 1 performance remained relatively unaffected by task overlap. In none of the three experiments did stimulus-hand proximity affect the size of the PRP effect. Our results indicate that the switching operation between two tasks in the PRP paradigm is neither optimized nor disturbed by being processed in near-hand space.
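
The PRP effect reported above can be summarized in a few lines of code. The sketch below is illustrative; the field names and the short-vs-long SOA contrast are assumptions about how such data might be summarized, not the study's analysis.

    # Hypothetical sketch: quantify the PRP effect as the slowing of Task 2
    # responses at the shortest relative to the longest SOA.
    from statistics import mean

    def prp_effect(trials):
        """trials: dicts with 'soa_ms' and 'rt2_ms' (Task 2 response time)."""
        by_soa = {}
        for t in trials:
            by_soa.setdefault(t["soa_ms"], []).append(t["rt2_ms"])
        rt2 = {soa: mean(rts) for soa, rts in by_soa.items()}
        return rt2[min(rt2)] - rt2[max(rt2)]        # short-SOA cost in ms

    trials = [
        {"soa_ms": 50,  "rt2_ms": 980}, {"soa_ms": 50,  "rt2_ms": 1020},
        {"soa_ms": 900, "rt2_ms": 620}, {"soa_ms": 900, "rt2_ms": 660},
    ]
    print(prp_effect(trials))   # 360 ms slower at the short SOA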

6.
Brain Lang; 179: 11-21, 2018 Apr.
Article in English | MEDLINE | ID: mdl-29482170

ABSTRACT

The investigation of specific lexical categories has substantially contributed to advancing our knowledge of how meaning is neurally represented. One sensory domain that has received particularly little attention is olfaction. This study investigates the neural representation of lexical olfaction. In an fMRI experiment, participants read olfactory metaphors, their literal paraphrases, and literal olfactory sentences. Regions of interest were defined by a functional localizer run of odor processing. We observed activation in secondary olfactory areas during both metaphorical and literal olfactory processing, thus extending previous findings to the novel source domain of olfaction. Previously reported enhanced activation in emotion-related areas due to metaphoricity could not be replicated. Finally, no primary olfactory cortex was found to be active during lexical olfaction processing. We suggest that this absence is due to olfactory hedonicity being crucial for understanding the meaning of the olfactory expressions used here; consequently, the processing of olfactory hedonicity recruits secondary olfactory areas.


Subjects
Comprehension/physiology, Metaphor, Prefrontal Cortex/diagnostic imaging, Smell/physiology, Adult, Brain Mapping, Emotions, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult