Results 1 - 20 of 30
1.
Sensors (Basel) ; 20(3), 2020 Jan 29.
Article in English | MEDLINE | ID: mdl-32013226

ABSTRACT

Wearable sensors are gaining popularity because they enable outdoor experimental monitoring. This paper presents a cost-effective sensorised insole based on a mesh of tactile capacitive sensors. The spatial resolution is about 4 taxels/cm², allowing an accurate reconstruction of the contact pressure distribution. As a consequence, the insole provides information such as contact forces, moments, and centre of pressure. To retrieve this information, a calibration technique is proposed that fuses measurements from a vacuum chamber and from shoes equipped with force/torque sensors. The validation analysis shows that the best performance achieved a root mean square error (RMSE) of about 7 N for the contact forces and 2 N m for the contact moments when the force/torque shoe data were used as ground truth. Thus, the insole may be an alternative to force/torque sensors for certain applications, with considerably more cost-effective and less invasive hardware.
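
As an illustration of the quantities the insole reports, the minimal sketch below (Python, not from the paper; the uniform taxel area and the function names are assumptions) computes total contact force and centre of pressure from a set of taxel pressures, plus the RMSE used when validating against the force/torque-shoe data.

```python
import numpy as np

def insole_contact_summary(pressure, x, y, taxel_area=0.25e-4):
    """Total normal force and centre of pressure from a taxel pressure map.

    pressure   : (N,) array of taxel pressures [Pa]
    x, y       : (N,) arrays of taxel positions [m], in the insole frame
    taxel_area : nominal area per taxel [m^2]; ~4 taxels/cm^2 -> ~0.25 cm^2 each
    """
    forces = pressure * taxel_area            # per-taxel normal force [N]
    f_total = forces.sum()
    cop_x = (forces * x).sum() / f_total      # centre of pressure [m]
    cop_y = (forces * y).sum() / f_total
    return f_total, (cop_x, cop_y)

def rmse(estimate, ground_truth):
    """Root mean square error, e.g. insole force vs. force/torque-shoe force."""
    estimate, ground_truth = np.asarray(estimate), np.asarray(ground_truth)
    return float(np.sqrt(np.mean((estimate - ground_truth) ** 2)))
```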


Subject(s)
Biosensing Techniques, Foot/physiology, Gait/physiology, Touch/physiology, Biomechanical Phenomena, Foot Orthoses, Humans, Pressure, Wearable Electronic Devices
2.
Dev Sci ; 17(6): 809-25, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24946990

ABSTRACT

Human expertise in face perception grows over development, but even within minutes of birth, infants exhibit an extraordinary sensitivity to face-like stimuli. The dominant theory accounts for this by proposing that the neonate brain contains an innate face detection device, dubbed 'Conspec'. Newborn face preference has been promoted as some of the strongest evidence for innate knowledge, and it forms a canonical arena for the modern form of the nature-nurture debate in psychology. Interpretation of newborn face preference results has concentrated on monocular stimulus properties, with little mention or focused investigation of potential binocular involvement. However, the question of whether and how newborns integrate the binocular visual streams bears directly on the generation of observable visual preferences. In this theoretical paper, we take a synthetic approach, using robotic and computational models to draw together the threads of binocular integration and face preference in newborns, and we demonstrate cases where the former may explain the latter. We suggest that a system-level view considering the binocular embodiment of newborn vision may offer a mutually satisfying resolution to some long-running arguments in the polarizing debate surrounding the existence and causal structure of newborns' 'innate knowledge' of faces.
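
To make the binocular-integration argument concrete, here is a toy Python sketch (purely illustrative, not the authors' model): it fuses misaligned left- and right-eye images by averaging and scores the result with a simple "top-heaviness" statistic, one crude proxy for face-like preference.

```python
import numpy as np

def fuse_binocular(left_img, right_img, disparity_px=0):
    """Naive binocular fusion: shift the right eye's image horizontally by the
    (mis)alignment between the eyes, then average it with the left eye's image."""
    shifted = np.roll(right_img, disparity_px, axis=1)
    return 0.5 * (left_img + shifted)

def top_heaviness(img):
    """Toy preference score: excess intensity in the upper half of the fused
    image over the lower half (face-like stimuli tend to be 'top-heavy')."""
    upper, lower = np.array_split(img, 2, axis=0)
    return float(upper.sum() - lower.sum())
```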


Subject(s)
Attention/physiology, Brain/growth & development, Child Development/physiology, Face, Biological Models, Visual Pattern Recognition/physiology, Humans, Newborn Infant, Ocular Vision, Visual Pathways/growth & development
3.
Sci Robot ; 9(86): eadh3834, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38266102

ABSTRACT

We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia. More precisely, the paper makes two contributions. First, we present the humanoid iCub3 as a robotic avatar that integrates the most significant improvements from about 15 years of development of the iCub series. Second, we present a versatile avatar system that enables humans to embody humanoid robots across locomotion, manipulation, voice, and facial expressions, with comprehensive sensory feedback including visual, auditory, haptic, weight, and touch modalities. We validated the system by implementing several instances of the avatar architecture, each tailored to specific requirements. First, we evaluated the architecture optimized for verbal, nonverbal, and physical interaction with a remote recipient: the operator was in Genoa and the avatar at the Biennale di Venezia in Venice, about 290 kilometers away, allowing the operator to visit the Italian art exhibition remotely. Second, we evaluated the architecture optimized for physical collaboration with a recipient and for public engagement on stage, live, at the We Make Future show, a prominent digital innovation festival: the operator was in Genoa while the avatar operated in Rimini, about 300 kilometers away, interacting with a recipient who entrusted the avatar with a payload to carry on stage before an audience of approximately 2000 spectators. Third, we present the architecture implemented by the iCub Team for the All Nippon Airways (ANA) Avatar XPrize competition.
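
The sketch below outlines, in schematic Python, the retargeting-plus-feedback loop such an avatar system implies; the data structures and the `robot` methods are hypothetical placeholders for illustration, not the iCub3 avatar stack's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class OperatorState:            # measured from the operator's wearable interfaces
    joint_positions: list       # e.g. from a motion-tracking suit
    utterance: str              # from a microphone / speech recognizer
    facial_expression: str      # from a face tracker

@dataclass
class RobotFeedback:            # streamed back to the operator
    camera_frame: bytes         # rendered in a headset
    contact_forces: list        # rendered as haptic and weight cues

def teleoperation_step(operator: OperatorState, robot) -> RobotFeedback:
    """One cycle of a (highly simplified) avatar loop: retarget the operator's
    motion, voice and face to the robot, then return its sensory feedback."""
    robot.set_joint_targets(operator.joint_positions)   # locomotion / manipulation
    robot.speak(operator.utterance)                      # voice
    robot.set_face(operator.facial_expression)           # facial expression
    return robot.read_feedback()                         # vision, audio, touch, ...
```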


Subject(s)
Avatar, Robotics, Humans, Sensory Feedback, Haptic Interfaces, Locomotion
4.
Nat Mater ; 15(9): 921-5, 2016 Aug 24.
Article in English | MEDLINE | ID: mdl-27554988
6.
Front Robot AI ; 5: 10, 2018.
Article in English | MEDLINE | ID: mdl-33500897

ABSTRACT

This paper describes open source software (available at https://github.com/robotology/natural-speech) for building automatic speech recognition (ASR) systems and running them within the YARP platform. The toolkit is designed (i) to allow non-ASR experts to easily create their own ASR system and run it on iCub and (ii) to build deep learning-based models that specifically address the main challenges an ASR system faces in the context of verbal human-iCub interactions. The toolkit consists mostly of Python and C++ code and shell scripts integrated into YARP. As an additional contribution, a second codebase (written in Matlab) is provided for more expert ASR users who want to experiment with bio-inspired and developmental learning-inspired ASR systems. Specifically, we provide code for two distinct kinds of speech recognition: "articulatory" and "unsupervised" speech recognition. The first is largely inspired by influential neurobiological theories of speech perception, which assume that speech perception is mediated by motor cortex activity. Our articulatory systems have been shown to outperform strong deep learning-based baselines. The "unsupervised" systems, by contrast, do not use any supervised information (unlike most ASR systems, including our articulatory systems). To some extent, they mimic an infant who has to discover the basic speech units of a language by herself. In addition, we provide resources consisting of pre-trained deep learning models for ASR and a 2.5-hour speech dataset of spoken commands, the VoCub dataset, which can be used to adapt an ASR system to the typical acoustic environments in which iCub operates.
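
As a rough idea of how recognized speech can be made available to other iCub modules through YARP, the Python sketch below publishes transcripts on a YARP port; the port name and the `recognize` callable are assumptions for illustration, not the toolkit's actual interface.

```python
import yarp

def publish_transcripts(recognize, port_name="/asr/text:o"):
    """Publish recognized utterances on a YARP port so other modules can
    subscribe to them. `recognize` is any callable returning the next transcript."""
    yarp.Network.init()
    port = yarp.BufferedPortBottle()
    port.open(port_name)
    try:
        while True:
            text = recognize()          # blocking call to the ASR back end
            bottle = port.prepare()
            bottle.clear()
            bottle.addString(text)
            port.write()
    finally:
        port.close()
        yarp.Network.fini()
```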

7.
Sci Rep ; 8(1): 17842, 2018 Dec 14.
Article in English | MEDLINE | ID: mdl-30552377

ABSTRACT

Most experimental protocols examining joint attention with the gaze cueing paradigm are "observational" and "offline", and thus do not involve social interaction. We examined whether, within a naturalistic online interaction, real-time eye contact influences the gaze cueing effect (GCE). We embedded gaze cueing in an interactive protocol with the iCub humanoid robot, which offers ecological validity combined with excellent experimental control. Critically, before averting its gaze, iCub either established eye contact or not, a manipulation enabled by an algorithm detecting the position of the human eyes. For the non-predictive gaze cueing procedure (Experiment 1), only the eye contact condition elicited a GCE, while for the counter-predictive procedure (Experiment 2), only the no eye contact condition induced a GCE. These results reveal an interactive effect of strategic (gaze validity) and social (eye contact) top-down components on the reflexive orienting of attention induced by gaze cues. More generally, we propose that naturalistic protocols with the embodied presence of an agent can cast new light on the mechanisms of social cognition.
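
For clarity, the gaze cueing effect is conventionally computed as the mean reaction time on invalidly cued trials minus that on validly cued trials. A minimal sketch, split by eye-contact condition (the trial record format is an assumption, and this is not the authors' analysis code):

```python
import numpy as np

def gaze_cueing_effect(trials):
    """GCE = mean RT (invalidly cued) - mean RT (validly cued), computed
    separately for each eye-contact condition.

    trials: iterable of dicts with keys 'rt' (s), 'valid' (bool), 'eye_contact' (bool)
    """
    gce = {}
    for contact in (True, False):
        valid = [t["rt"] for t in trials if t["eye_contact"] == contact and t["valid"]]
        invalid = [t["rt"] for t in trials if t["eye_contact"] == contact and not t["valid"]]
        gce[contact] = float(np.mean(invalid) - np.mean(valid))
    return gce
```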

8.
Adv Sci (Weinh) ; 5(2): 1700587, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29619306

ABSTRACT

Stretchable capacitive devices are instrumental for new-generation multifunctional haptic technologies, particularly suited for soft robotics and electronic skin applications. Most elongating soft electronics still rely on silicone for building devices or sensors by multiple-step replication. In this study, the fabrication of a reliable elongating parallel-plate capacitive touch sensor, using nitrile rubber gloves as templates, is demonstrated. Spray coating both sides of a rubber piece cut from a glove with a conductive polymer suspension carrying dispersed carbon nanofibers (CnFs) or graphene nanoplatelets (GnPs) is sufficient for making electrodes with low sheet resistance values (≈10 Ω sq⁻¹). The CnF-based electrodes maintain their conductivity up to 100% elongation, whereas the GnP-based ones form cracks before 60% elongation. However, both electrodes are reliable under the elongation levels associated with human joint mobility (≈20%). Strikingly, structural damage due to repeated elongation/recovery cycles could be healed through annealing. The haptic sensing characteristics of the stretchable capacitive device are demonstrated by wrapping it around the fingertip of a robotic hand (iCub). Tactile forces as low as 0.03 N and as high as 5 N can be easily sensed by the device under elongation or over curvilinear surfaces.
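
A simplified way to see how such a parallel-plate device senses touch: a normal force compresses the elastomer dielectric, shrinking the gap and raising the capacitance C = ε₀εᵣA/d. The sketch below inverts this idealized model to estimate the applied force; it assumes a linear-elastic dielectric with unchanged permittivity and plate area under load, and it is not the authors' calibration procedure.

```python
EPS0 = 8.854e-12  # vacuum permittivity [F/m]

def parallel_plate_capacitance(area_m2, gap_m, eps_r):
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * area_m2 / gap_m

def force_from_capacitance(c_loaded, c_rest, gap_rest_m, area_m2, youngs_modulus_pa):
    """Rough force estimate: touch compresses the dielectric, so C rises as 1/d."""
    gap_loaded = gap_rest_m * c_rest / c_loaded      # from C proportional to 1/d
    strain = (gap_rest_m - gap_loaded) / gap_rest_m  # compressive strain
    pressure = youngs_modulus_pa * strain            # Hooke's law (linear elastic)
    return pressure * area_m2                        # normal force [N]
```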

9.
Front Robot AI ; 5: 22, 2018.
Article in English | MEDLINE | ID: mdl-33500909

ABSTRACT

Generating complex, human-like behavior in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library, which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, and touch detection), object manipulation (basic and complex motor actions), and social interaction (speech synthesis and joint attention), exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to the previously integrated components, the library allows for simple extension to new components and for rapid prototyping by adapting to changes in the interfaces between components. We also provide a set of modules that make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed in a complex human-robot interaction scenario involving the acquisition of language capabilities, the execution of goal-oriented behavior, and the expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial that allows a subset of this interaction to be reproduced. The architecture is aimed both at researchers familiarizing themselves with the iCub ecosystem and at expert users, and we expect the library to be widely used in the iCub community.
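
Schematically, a convenience-wrapper library of this kind exposes a single facade over perception, action, and speech so that scenario scripts need not address each module directly. The Python sketch below illustrates the pattern; all class and method names are illustrative stand-ins, not iCub-HRI's actual API.

```python
class SpeechSubsystem:
    def say(self, text): print(f"[tts] {text}")          # stand-in for speech synthesis

class PerceptionSubsystem:
    def visible_objects(self): return ["cube", "ball"]   # stand-in for object recognition

class ActionSubsystem:
    def point_at(self, name): print(f"[motor] pointing at {name}")

class HriClient:
    """Facade bundling perception, action and speech behind one handle, so a
    scenario script does not talk to each component's interface directly."""
    def __init__(self):
        self.speech = SpeechSubsystem()
        self.perception = PerceptionSubsystem()
        self.action = ActionSubsystem()

    def name_and_point(self):
        for obj in self.perception.visible_objects():
            self.speech.say(f"I can see the {obj}")
            self.action.point_at(obj)

HriClient().name_and_point()
```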

10.
Prog Brain Res ; 164: 39-59, 2007.
Article in English | MEDLINE | ID: mdl-17920425

ABSTRACT

In this chapter we discuss the mirror neuron system, a cortical network of areas that enables individuals to understand the meaning of actions performed by others through the activation of internal representations that motorically code for the observed actions. We review evidence indicating that this capability does not depend on the amount of visual stimulation associated with the observed action, nor on the specific sensory modality involved (visual, acoustic). Any sensory cue that can evoke the "idea" of a meaningful action activates the vocabulary of motor representations stored in the ventral premotor cortex and, in humans, especially in Broca's area. This also holds for phonoarticulatory actions, which underlie speech production. We also present a model of the mirror neuron system and its partial implementation in two experiments. The results, in accordance with our model, show that motor information plays a significant role in the interpretation of actions and that a mirror-like representation can be developed autonomously as a result of the interaction between the individual and the environment.
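
As a toy illustration of how a mirror-like representation could emerge autonomously from self-experience (this is not the chapter's model), the sketch below Hebbian-associates motor commands with the visual features they produce during self-generated actions, so that a later visual-only observation evokes the corresponding motor code.

```python
import numpy as np

class MirrorLikeMap:
    """Toy Hebbian associator: during self-generated actions, co-occurring motor
    and visual feature vectors strengthen a weight matrix; later, an observed
    (visual-only) action is 'understood' by recalling the best-matching motor code."""
    def __init__(self, n_visual, n_motor):
        self.W = np.zeros((n_motor, n_visual))

    def practice(self, motor_vec, visual_vec, lr=0.1):
        self.W += lr * np.outer(motor_vec, visual_vec)   # Hebbian update

    def recall_motor(self, visual_vec):
        return self.W @ visual_vec                        # evoked motor activation
```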


Subject(s)
Cerebral Cortex/cytology, Neurological Models, Computer Neural Networks, Neurons/physiology, Animals, Brain Mapping, Cerebral Cortex/physiology, Humans, Nerve Net/physiology, Neural Pathways/physiology