Results 1 - 5 of 5
1.
Front Psychol ; 15: 1373191, 2024.
Article in English | MEDLINE | ID: mdl-38550642

ABSTRACT

Introduction: A substantial body of research from the last two decades suggests that infants' attention to the eyes and mouth regions of talking faces may be a supporting mechanism by which they acquire their native language(s). Importantly, attentional strategies appear sensitive to three types of constraints: the properties of the stimulus, the infants' attentional control skills (which improve with age and brain maturation), and their prior linguistic and non-linguistic knowledge. The goal of the present paper is to present a probabilistic model that simulates infants' visual attention control to talking faces as a function of their language-learning environment (monolingual vs. bilingual), attentional maturation (i.e., age), and their increasing knowledge of the task at hand (detecting, and learning to anticipate, information displayed in the eyes or mouth region of the speaker). Methods: To test the model, we first considered experimental eye-tracking data from monolingual and bilingual infants (aged 12 to 18 months; partly published previously) exploring a face speaking in their native language. In each condition, we compared the proportion of total looking time on each of the two areas of interest (the eyes vs. the mouth of the speaker). Results: In line with previous studies, our experimental results show a strong bias for the mouth (over the eyes) region of the speaker, regardless of age. Furthermore, monolingual and bilingual infants appear to follow different developmental trajectories, which is consistent with, and extends, previous results observed in the first year. Comparison of model simulations with experimental data shows that the model successfully captures the observed patterns of visuo-attentional orientation through the three parameters that modulate the simulated visuo-attentional behavior.
Discussion: We interpret the parameter values and find that they adequately reflect the evolution of the strength and speed of anticipatory learning; we further discuss their descriptive and explanatory power.
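The abstract does not give the model's equations, so the following is only an illustrative sketch, not the authors' published model: a toy two-AOI (eyes vs. mouth) probabilistic looker in which three hypothetical parameters, standing in for attentional control, anticipatory-learning speed, and prior task knowledge, modulate the simulated proportion of mouth-directed fixations.

```python
import math
import random

def simulate_mouth_preference(n_fixations, attention_control,
                              learning_speed, prior_knowledge):
    """Toy simulation of the proportion of fixations on the mouth AOI.

    All parameters are hypothetical stand-ins, not the paper's parameters:
      attention_control -- endogenous attentional control, assumed to grow with age
      learning_speed    -- how fast anticipation of mouth information builds up
      prior_knowledge   -- initial task knowledge (e.g., linguistic experience)
    """
    anticipation = prior_knowledge
    mouth_fixations = 0
    for _ in range(n_fixations):
        # anticipation of audiovisual speech cues strengthens with exposure
        anticipation += learning_speed * (1.0 - anticipation)
        # logistic choice between the two areas of interest (eyes vs. mouth)
        p_mouth = 1.0 / (1.0 + math.exp(-4.0 * (anticipation - attention_control)))
        if random.random() < p_mouth:
            mouth_fixations += 1
    return mouth_fixations / n_fixations

random.seed(0)
# a learner with faster anticipatory learning should fixate the mouth more
fast = simulate_mouth_preference(500, 0.5, 0.05, 0.1)
slow = simulate_mouth_preference(500, 0.5, 0.005, 0.1)
```

In this sketch, the learning-speed parameter plays the role the abstract assigns to the "strength and speed of anticipatory learning": raising it drives the simulated looker toward the mouth region earlier in the session.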

2.
Eur J Psychol ; 19(3): 299-307, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37731753

ABSTRACT

Research on the perception of interpersonal distance has shown an asymmetry effect that depends on the reference point of the estimation: the distance from oneself to others can be perceived as longer or shorter than the distance from others to oneself. The mechanism underlying this asymmetry is related to the cognitive salience of the objects involved. The self often functions as a habitual reference point, so one's own salience may be higher than that of other objects; in this case, an egocentric asymmetry effect appears, with the distance from others to oneself perceived as shorter. However, if others are more salient than oneself, the reverse can happen (an allocentric asymmetry effect). The present work investigates whether the asymmetry in self-other distance perception changes when the other is a social robot. An experiment was conducted with 174 participants, who were asked to estimate the distance between themselves and either a robotic or a human assistant on a schematic map of a hospital emergency room (between-subjects design). Robust ANOVA showed that participants felt closer to the human assistant than to the robot, notably when the person served as the reference point of the estimation. Perceived distances to the social robot were not significantly distorted. While a rather allocentric effect with the human assistant might reflect an affiliation goal on the part of the participants, the absence of an effect with the social robot forces us to reconsider its humanization: it could instead reflect a purely mechanical and utilitarian conception of the robot.

3.
PLoS One ; 18(8): e0289787, 2023.
Article in English | MEDLINE | ID: mdl-37556492

ABSTRACT

Musculoskeletal disorders (MSDs) are the leading occupational diseases; they are pathologies of multifactorial origin, posture being one contributing factor. The introduction of collaborative robots (cobots) creates new human-robot collaboration situations that can modify operators' behavior and performance in their task. These changes raise questions about human-robot team performance and operator health. This study aims to understand the consequences of introducing a cobot on work performance, operator posture, and the quality of interactions, and to evaluate the impact of two levels of difficulty in a dual task on these measures. For this purpose, thirty-four participants performed an assembly task in collaboration with a co-worker, either a human or a cobot with two articulated arms. In addition to this motor task, the participants had to perform an auditory task with two levels of difficulty (dual task). They were equipped with seventeen motion-capture sensors. The collaborative work was filmed with a camera, and the actions of the participants and the co-worker were coded as either idle or active. Interactions were coded as time out, cooperation, or collaboration. The results showed that performance (number of products manufactured) was lower when the participant collaborated with a cobot rather than a human, with less collaboration and activity time as well. However, RULA scores were lower (indicating a reduced risk of musculoskeletal disorders) during collaboration with a cobot than with a human. Despite a decrease in production and a loss of fluidity, likely due to the characteristics of the cobot, collaborating with a cobot makes the task safer in terms of the risk of musculoskeletal disorders.


Subjects
Musculoskeletal Diseases, Occupational Diseases, Work Performance, Humans, Posture
4.
Accid Anal Prev ; 63: 83-8, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24269864

ABSTRACT

Driving through rain reduces visual performance, and car designers have proposed countermeasures to limit the impact of rain on driving performance. In this paper, we propose a methodology for quantitatively estimating the loss of visual performance due to falling rain. We considered rain falling on the windshield as the main factor reducing visual performance while driving. A laboratory experiment was conducted with 40 participants under controlled artificial rain. The reduction of visual performance through rain was assessed with respect to two driving tasks: detecting an object on the road (contrast threshold) and reading a road sign. Two levels of rain intensity were compared, as well as two wiper conditions (new and worn), with a no-rain reference condition. The reference driving situation was night driving. Effects of both rain level and wiper condition were found, which validates the proposed methodology for the quantitative evaluation of rain countermeasures in terms of visual performance.


Subjects
Automobile Driving, Psychomotor Performance, Rain, Visual Perception, Adult, Automobiles, Equipment Design, Female, Humans, Male, Middle Aged, Vision Tests
5.
Front Hum Neurosci ; 7: 316, 2013.
Article in English | MEDLINE | ID: mdl-23818879

ABSTRACT

HIGHLIGHTS
- The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases cooperation between the two hemispheres.
- The increased cooperation between the hemispheres is related to semantic information during lexical processing.
- Inter-hemispheric interaction involves both inhibition and cooperation.

This study explores inter-hemispheric interaction (IHI) during a lexical decision task using a behavioral approach: the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that, compared to unilateral presentation, bilateral redundant (BR) presentation decreases inter-hemispheric asymmetry and facilitates cooperation between the hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemifield, successively) and bilaterally (left and right visual hemifields, simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information that modulates IHI. Three types of information were thus manipulated: perceptual, semantic, and decisional, corresponding to pre-lexical, lexical, and post-lexical processing, respectively. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemispheric interaction, perceptual and decisional information increased inter-hemispheric asymmetry, suggesting inhibition of one hemisphere by the other. In contrast, semantic information decreased inter-hemispheric asymmetry, suggesting cooperation between the hemispheres.
We discuss our results in light of current models of IHI and conclude that the cerebral hemispheres interact and communicate through various excitatory and inhibitory mechanisms, all of which depend on the specific processes engaged and the level of word processing involved.
