Results 1 - 3 of 3
1.
Dev Sci ; 27(2): e13449, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37750490

ABSTRACT

What is the optimal penalty for errors in infant skill learning? Behavioral analyses indicate that errors are frequent but trivial as infants acquire foundational skills. In learning to walk, for example, falling is commonplace but appears to incur only a negligible penalty. Behavioral data, however, cannot reveal whether a low penalty for falling is beneficial for learning to walk. Here, we used a simulated bipedal robot as an embodied model to test the optimal penalty for errors in learning to walk. We trained the robot to walk using 12,500 independent simulations on walking paths produced by infants during free play and systematically varied the penalty for falling: a level of precision, control, and magnitude impossible with real infants. When trained with lower penalties for falling, the robot learned to walk farther and better on familiar, trained paths and better generalized its learning to novel, untrained paths. Indeed, a zero penalty for errors led to the best performance for both learning and generalization. Moreover, the beneficial effects of a low penalty were stronger for generalization than for learning. Robot simulations corroborate prior behavioral data and suggest that a low penalty for errors helps infants learn foundational skills (e.g., walking, talking, and social interactions) that require immense flexibility, creativity, and adaptability.
RESEARCH HIGHLIGHTS: During infant skill acquisition, errors are commonplace but appear to incur a low penalty; when learning to walk, for example, falls are frequent but trivial. To test the optimal penalty for errors, we trained a simulated robot to walk using real infant paths and systematically manipulated the penalty for falling. Lower penalties in training led to better performance on familiar, trained paths and on novel, untrained paths, and zero penalty was most beneficial.
Benefits of a low penalty were stronger for untrained than for trained paths, suggesting that discounting errors facilitates acquiring skills that require immense flexibility and generalization.
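The manipulation this study describes, scaling how much each fall subtracts from the walking reward, amounts to a simple reward-shaping term. The sketch below is purely illustrative (the function and parameter names are not from the authors' code): a zero `fall_penalty` reproduces the condition the abstract reports as best.

```python
def episode_return(distance_walked, n_falls, fall_penalty):
    """Illustrative shaped return for one walking episode:
    reward distance traveled, subtract a fixed penalty per fall.
    fall_penalty = 0 corresponds to the zero-penalty condition."""
    return distance_walked - fall_penalty * n_falls

# The study's manipulation amounts to sweeping fall_penalty across
# otherwise identical training runs:
for penalty in (0.0, 0.5, 1.0, 5.0):
    ret = episode_return(distance_walked=10.0, n_falls=3, fall_penalty=penalty)
    print(f"penalty={penalty}: return={ret}")
```

With this shaping, a higher `fall_penalty` makes cautious gaits look better during training even if they cover less ground, which is one plausible mechanism for the reported generalization gap.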


Subject(s)
Robotics, Infant, Humans, Accidental Falls, Walking, Learning, Generalization, Psychological
2.
Front Robot AI ; 11: 1424845, 2024.
Article in English | MEDLINE | ID: mdl-39445149

ABSTRACT

Simulation-based learning is an integral part of hands-on learning and is often done through role-playing games or patients simulated by professional actors. In this article, we present the use of a humanoid robot as a simulated patient for the presentation of disease symptoms in the setting of medical education. In a study, 12 participants watched both the patient simulation by the robotic patient and a video with an actor patient. We asked participants about their subjective impressions of the robotic patient simulation compared to the video with the human actor patient, using a self-developed questionnaire. In addition, we used the Affinity for Technology Interaction Scale. The evaluation of the questionnaire provided insights into whether the robot was able to represent the patient realistically, which features still needed improvement, and whether the robotic patient simulation was accepted by the participants as a learning method. Sixty-seven percent of the participants indicated that they would use the robot as a training opportunity in addition to the videos with acting patients. The majority of participants indicated that they found it very beneficial to have the robot repeat the case studies at their own pace.

3.
Front Neurorobot ; 7: 22, 2013.
Article in English | MEDLINE | ID: mdl-24273511

ABSTRACT

Humans and other biological agents are able to autonomously learn and cache different skills in the absence of any biological pressure or any assigned task. In this respect, Intrinsic Motivations (i.e., motivations not connected to reward-related stimuli) play a cardinal role in animal learning, and can be considered a fundamental tool for developing more autonomous and more adaptive artificial agents. In this work, we provide an exhaustive analysis of a scarcely investigated problem: which kind of IM reinforcement signal is the most suitable for driving the acquisition of multiple skills in the shortest time? To this end, we implemented an artificial agent with a hierarchical architecture that allows it to learn and cache different skills. We tested the system in a setup with continuous states and actions, in particular with a kinematic robotic arm that has to learn different reaching tasks. We compared the results of different versions of the system driven by several different intrinsic motivation signals. The results show (a) that intrinsic reinforcements based purely on the knowledge of the system are not appropriate to guide the acquisition of multiple skills, and (b) that the stronger the link between the IM signal and the competence of the system, the better the performance.
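The abstract's finding, that IM signals tied to the system's competence outperform purely knowledge-based ones, suggests intrinsic rewards built from the agent's improvement at a skill. A minimal hypothetical sketch of such a signal (function and window names are illustrative, not the authors' implementation) rewards the change in recent success rate:

```python
def competence_based_im(success_history, window=10):
    """Hypothetical competence-based intrinsic reward: the change in
    success rate between the last `window` attempts at a skill and the
    `window` before them. Positive while the agent is improving, near
    zero once the skill is mastered (or never improves), so the agent
    is pushed toward skills where competence is still growing."""
    h = list(success_history)
    if len(h) < 2 * window:
        return 0.0  # not enough attempts to estimate improvement
    recent = sum(h[-window:]) / window
    earlier = sum(h[-2 * window:-window]) / window
    return recent - earlier
```

A knowledge-based signal (e.g., raw prediction error) would instead stay high in regions that are merely unpredictable; tying the reward to competence improvement, as above, is one way to realize the link the results favor.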
