Results 1 - 20 of 30
1.
Sensors (Basel) ; 20(3)2020 Jan 29.
Article in English | MEDLINE | ID: mdl-32013226

ABSTRACT

Wearable sensors are gaining in popularity because they enable outdoor experimental monitoring. This paper presents a cost-effective sensorised insole based on a mesh of tactile capacitive sensors. Each sensor's spatial resolution is about 4 taxels/cm² in order to obtain an accurate reconstruction of the contact pressure distribution. As a consequence, the insole provides information such as contact forces, moments, and centre of pressure. To retrieve this information, a calibration technique that fuses measurements from a vacuum chamber and shoes equipped with force/torque sensors is proposed. The validation analysis shows that the best performance achieved a root mean square error (RMSE) of about 7 N for the contact forces and 2 N m for the contact moments when using the force/torque shoe data as ground truth. Thus, the insole may be an alternative to force/torque sensors for certain applications, with considerably more cost-effective and less invasive hardware.
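The RMSE metric used in the validation can be sketched as follows (a minimal illustration, not the authors' code; the force traces are hypothetical placeholders):

```python
import numpy as np

def rmse(estimate, ground_truth):
    """Root mean square error between two equally sampled signals."""
    estimate = np.asarray(estimate, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return float(np.sqrt(np.mean((estimate - ground_truth) ** 2)))

# Hypothetical vertical contact force traces (N): insole estimate vs.
# force/torque shoe measurement used as ground truth.
insole_fz = [101.0, 220.5, 318.0, 150.2]
shoe_fz = [98.0, 225.0, 325.0, 145.0]

print(rmse(insole_fz, shoe_fz))  # RMSE in newtons
```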


Subjects
Biosensing Techniques, Foot/physiology, Gait/physiology, Touch/physiology, Biomechanical Phenomena, Foot Orthoses, Humans, Pressure, Wearable Electronic Devices
2.
Dev Sci ; 17(6): 809-25, 2014 Nov.
Article in English | MEDLINE | ID: mdl-24946990

ABSTRACT

Human expertise in face perception grows over development, but even within minutes of birth, infants exhibit an extraordinary sensitivity to face-like stimuli. The dominant theory accounts for innate face detection by proposing that the neonate brain contains an innate face detection device, dubbed 'Conspec'. Newborn face preference has been promoted as some of the strongest evidence for innate knowledge, and forms a canonical stage for the modern form of the nature-nurture debate in psychology. Interpretation of newborn face preference results has concentrated on monocular stimulus properties, with little mention or focused investigation of potential binocular involvement. However, the question of whether and how newborns integrate the binocular visual streams bears directly on the generation of observable visual preferences. In this theoretical paper, we employ a synthetic approach utilizing robotic and computational models to draw together the threads of binocular integration and face preference in newborns, and demonstrate cases where the former may explain the latter. We suggest that a system-level view considering the binocular embodiment of newborn vision may offer a mutually satisfying resolution to some long-running arguments in the polarizing debate surrounding the existence and causal structure of newborns' 'innate knowledge' of faces.


Subjects
Attention/physiology, Brain/growth & development, Child Development/physiology, Face, Models, Biological, Pattern Recognition, Visual/physiology, Humans, Infant, Newborn, Vision, Ocular, Visual Pathways/growth & development
3.
Sci Robot ; 9(86): eadh3834, 2024 Jan 24.
Article in English | MEDLINE | ID: mdl-38266102

ABSTRACT

We present an avatar system designed to facilitate the embodiment of humanoid robots by human operators, validated through iCub3, a humanoid developed at the Istituto Italiano di Tecnologia. More precisely, the paper makes two contributions. First, we present the humanoid iCub3 as a robotic avatar that integrates the latest significant improvements after about 15 years of development of the iCub series. Second, we present a versatile avatar system enabling humans to embody humanoid robots, encompassing locomotion, manipulation, voice, and facial expressions with comprehensive sensory feedback across visual, auditory, haptic, weight, and touch modalities. We validated the system by implementing several avatar architecture instances, each tailored to specific requirements. First, we evaluated the architecture optimized for verbal, nonverbal, and physical interactions with a remote recipient. This test involved the operator in Genoa and the avatar at the Biennale di Venezia in Venice, about 290 kilometers away, allowing the operator to visit the Italian art exhibition remotely. Second, we evaluated the architecture optimized for physical collaboration with a recipient and public engagement on stage, live, at the We Make Future show, a prominent world digital innovation festival. In this instance, the operator was situated in Genoa while the avatar operated in Rimini, about 300 kilometers away, interacting with a recipient who entrusted the avatar with a payload to carry on stage before an audience of approximately 2000 spectators. Third, we present the architecture implemented by the iCub Team for the All Nippon Airways (ANA) Avatar XPrize competition.


Subjects
Avatar, Robotics, Humans, Feedback, Sensory, Haptic Interfaces, Locomotion
4.
Nat Mater ; 15(9): 921-5, 2016 08 24.
Article in English | MEDLINE | ID: mdl-27554988
6.
Front Robot AI ; 5: 10, 2018.
Article in English | MEDLINE | ID: mdl-33500897

ABSTRACT

This paper describes open source software (available at https://github.com/robotology/natural-speech) to build automatic speech recognition (ASR) systems and run them within the YARP platform. The toolkit is designed (i) to allow non-ASR experts to easily create their own ASR system and run it on iCub and (ii) to build deep learning-based models specifically addressing the main challenges an ASR system faces in the context of verbal human-iCub interactions. The toolkit mostly consists of Python and C++ code and shell scripts integrated in YARP. As an additional contribution, a second codebase (written in Matlab) is provided for more expert ASR users who want to experiment with bio-inspired and developmental learning-inspired ASR systems. Specifically, we provide code for two distinct kinds of speech recognition: "articulatory" and "unsupervised" speech recognition. The first is largely inspired by influential neurobiological theories of speech perception which assume speech perception to be mediated by brain motor cortex activities. Our articulatory systems have been shown to outperform strong deep learning-based baselines. The second kind, the "unsupervised" systems, do not use any supervised information (contrary to most ASR systems, including our articulatory systems). To some extent, they mimic an infant who has to discover the basic speech units of a language by herself. In addition, we provide resources consisting of pre-trained deep learning models for ASR, and a 2.5-h speech dataset of spoken commands, the VoCub dataset, which can be used to adapt an ASR system to the typical acoustic environments in which iCub operates.

7.
Sci Rep ; 8(1): 17842, 2018 12 14.
Article in English | MEDLINE | ID: mdl-30552377

ABSTRACT

Most experimental protocols examining joint attention with the gaze cueing paradigm are "observational" and "offline", and thereby do not involve social interaction. We examined whether, within a naturalistic online interaction, real-time eye contact influences the gaze cueing effect (GCE). We embedded gaze cueing in an interactive protocol with the iCub humanoid robot, which has the advantage of ecological validity combined with excellent experimental control. Critically, before averting its gaze, iCub either established eye contact or not, a manipulation enabled by an algorithm detecting the position of the human eyes. In a non-predictive gaze cueing procedure (Experiment 1), only the eye contact condition elicited a GCE, while in a counter-predictive procedure (Experiment 2), only the no-eye-contact condition induced a GCE. These results reveal an interactive effect of strategic (gaze validity) and social (eye contact) top-down components on the reflexive orienting of attention induced by gaze cues. More generally, we propose that naturalistic protocols with an embodied presence of an agent can cast new light on mechanisms of social cognition.

8.
Adv Sci (Weinh) ; 5(2): 1700587, 2018 02.
Article in English | MEDLINE | ID: mdl-29619306

ABSTRACT

Stretchable capacitive devices are instrumental for new-generation multifunctional haptic technologies, particularly suited for soft robotics and electronic skin applications. A majority of elongating soft electronics still rely on silicone for building devices or sensors by multiple-step replication. In this study, fabrication of a reliable elongating parallel-plate capacitive touch sensor, using nitrile rubber gloves as templates, is demonstrated. Spray coating both sides of a rubber piece cut out of a glove with a conductive polymer suspension carrying dispersed carbon nanofibers (CnFs) or graphene nanoplatelets (GnPs) is sufficient for making electrodes with low sheet resistance values (≈10 Ω sq⁻¹). The electrodes based on CnFs maintain their conductivity up to 100% elongation, whereas the GnP-based ones form cracks before 60% elongation. However, both electrodes are reliable under the elongation levels associated with human joint motility (≈20%). Strikingly, structural damage due to repeated elongation/recovery cycles could be healed through annealing. Haptic sensing characteristics of a stretchable capacitive device wrapped around the fingertip of a robotic hand (iCub) are demonstrated. Tactile forces as low as 0.03 N and as high as 5 N can be easily sensed by the device under elongation or over curvilinear surfaces.
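The sensing principle behind such a parallel-plate touch sensor can be sketched with the ideal capacitor formula (an illustration of the physics only; the pad dimensions and relative permittivity below are assumed values, not the paper's):

```python
# Parallel-plate model of a capacitive touch sensor: C = eps0 * eps_r * A / d.
# Pressing the soft dielectric reduces its thickness d, so capacitance
# rises; the readout electronics track that change.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, thickness_m, eps_r):
    """Capacitance of an ideal parallel-plate capacitor, in farads."""
    return EPS0 * eps_r * area_m2 / thickness_m

# Hypothetical fingertip-sized pad: 1 cm^2 area, 0.5 mm rubber dielectric.
c_rest = plate_capacitance(1e-4, 0.5e-3, eps_r=5.0)
c_pressed = plate_capacitance(1e-4, 0.4e-3, eps_r=5.0)  # 20% compression
print(c_rest, c_pressed - c_rest)  # resting value and touch-induced change
```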

9.
Front Robot AI ; 5: 22, 2018.
Article in English | MEDLINE | ID: mdl-33500909

ABSTRACT

Generating complex, human-like behavior in a humanoid robot like the iCub requires the integration of a wide range of open source components and a scalable cognitive architecture. Hence, we present the iCub-HRI library, which provides convenience wrappers for components related to perception (object recognition, agent tracking, speech recognition, and touch detection), object manipulation (basic and complex motor actions), and social interaction (speech synthesis and joint attention), exposed as a C++ library with bindings for Java (allowing iCub-HRI to be used within Matlab) and Python. In addition to previously integrated components, the library allows for simple extension to new components and rapid prototyping by adapting to changes in interfaces between components. We also provide a set of modules which make use of the library, such as a high-level knowledge acquisition module and an action recognition module. The proposed architecture has been successfully employed for a complex human-robot interaction scenario involving the acquisition of language capabilities, execution of goal-oriented behavior, and expression of a verbal narrative of the robot's experience in the world. Accompanying this paper is a tutorial which allows a subset of this interaction to be reproduced. The architecture is aimed at researchers familiarizing themselves with the iCub ecosystem, as well as expert users, and we expect the library to be widely used in the iCub community.

10.
Prog Brain Res ; 164: 39-59, 2007.
Article in English | MEDLINE | ID: mdl-17920425

ABSTRACT

In this chapter we discuss the mirror neuron system, a cortical network of areas that enables individuals to understand the meaning of actions performed by others through the activation of internal representations which motorically code for the observed actions. We review evidence indicating that this capability does not depend on the amount of visual stimulation relative to the observed action, or on the sensory modality specifically addressed (visual, acoustic). Any sensory cue that can evoke the "idea" of a meaningful action activates the vocabulary of motor representations stored in the ventral premotor cortex and, in humans, especially in Broca's area. This is true also for phonoarticulatory actions, which underlie speech production. We also present a model of the mirror neuron system and its partial implementation in a set of two experiments. The results, in accordance with our model, show that motor information plays a significant role in the interpretation of actions and that a mirror-like representation can be developed autonomously as a result of the interaction between the individual and the environment.


Subjects
Cerebral Cortex/cytology, Models, Neurologic, Neural Networks, Computer, Neurons/physiology, Animals, Brain Mapping, Cerebral Cortex/physiology, Humans, Nerve Net/physiology, Neural Pathways/physiology
11.
Prog Brain Res ; 164: 403-24, 2007.
Article in English | MEDLINE | ID: mdl-17920444

ABSTRACT

This paper describes a developmental approach to the design of a humanoid robot. The robot, equipped with initial perceptual and motor competencies, explores the "shape" of its own body before devoting its attention to the external environment. The initial form of sensorimotor coordination consists of a set of explorative motor behaviors coupled to visual routines providing a bottom-up, sensory-driven attention system. Subsequently, development leads the robot from the construction of a "body schema" to the exploration of the world of objects. The "body schema" allows the robot to control its arm and hand to reach and touch objects within its workspace. Eventually, the interaction between the environment and the robot's body is exploited to acquire a visual model of the objects the robot encounters, which can then be used to guide a top-down attention system.


Subjects
Hand Strength/physiology, Learning/physiology, Psychomotor Performance/physiology, Robotics/methods, Touch/physiology, Humans, Infant, Infant, Newborn, Musculoskeletal Physiological Phenomena
12.
Front Psychol ; 8: 1663, 2017.
Article in English | MEDLINE | ID: mdl-29046651

ABSTRACT

Robots are increasingly envisaged as our future cohabitants. However, while considerable progress has been made in recent years in terms of their technological realization, the ability of robots to interact with humans in an intuitive and social way is still quite limited. An important challenge for social robotics is to determine how to design robots that can perceive the user's needs, feelings, and intentions, and adapt to users over a broad range of cognitive abilities. It is conceivable that if robots were able to adequately demonstrate these skills, humans would eventually accept them as social companions. We argue that the best way to achieve this is using a systematic experimental approach based on behavioral and physiological neuroscience methods such as motion/eye-tracking, electroencephalography, or functional near-infrared spectroscopy embedded in interactive human-robot paradigms. This approach requires understanding how humans interact with each other, how they perform tasks together and how they develop feelings of social connection over time, and using these insights to formulate design principles that make social robots attuned to the workings of the human brain. In this review, we put forward the argument that the likelihood of artificial agents being perceived as social companions can be increased by designing them in a way that they are perceived as intentional agents that activate areas in the human brain involved in social-cognitive processing. We first review literature related to social-cognitive processes and mechanisms involved in human-human interactions, and highlight the importance of perceiving others as intentional agents to activate these social brain areas. We then discuss how attribution of intentionality can positively affect human-robot interaction by (a) fostering feelings of social connection, empathy and prosociality, and by (b) enhancing performance on joint human-robot tasks. Lastly, we describe circumstances under which attribution of intentionality to robot agents might be disadvantageous, and discuss challenges associated with designing social robots that are inspired by neuroscientific principles.

13.
Sci Robot ; 2(13)2017 12 20.
Article in English | MEDLINE | ID: mdl-33157880

ABSTRACT

The iCub open-source humanoid robot child is a successful initiative supporting research in embodied artificial intelligence.

14.
PLoS One ; 11(10): e0163713, 2016.
Article in English | MEDLINE | ID: mdl-27711136

ABSTRACT

This paper investigates a biologically motivated model of peripersonal space through its implementation on a humanoid robot. Guided by the present understanding of the neurophysiology of the fronto-parietal system, we developed a computational model inspired by the receptive fields of polymodal neurons identified, for example, in brain areas F4 and VIP. The experiments on the iCub humanoid robot show that the peripersonal space representation i) can be learned efficiently and in real-time via simple interaction with the robot, ii) can lead to the generation of behaviors like avoidance and reaching, and iii) can contribute to understanding the biological principle of motor equivalence. More specifically, with respect to i), the present model suggests a learning mechanism for peripersonal space. In relation to point ii), we show how a relatively simple controller can exploit the learned receptive fields to generate either avoidance of or reaching for an incoming stimulus, and for iii), we show how the robot can select arbitrary body parts as the controlled end-point of an avoidance or reaching movement.
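The core idea of a learned peripersonal-space receptive field can be sketched as a per-taxel, distance-binned probability of contact, updated from visuo-tactile events (a toy reading of the abstract; the class, bin width, and sample events are our assumptions, not the paper's implementation):

```python
# Toy peripersonal-space receptive field: a skin taxel keeps, per distance
# bin, the empirical probability that an approaching stimulus observed at
# that distance ends up touching it.
from collections import defaultdict

class ReceptiveField:
    def __init__(self, bin_width=0.05):  # bin width in meters (assumed)
        self.bin_width = bin_width
        self.counts = defaultdict(lambda: [0, 0])  # bin -> [touches, samples]

    def update(self, distance, touched):
        b = int(distance / self.bin_width)
        self.counts[b][0] += int(touched)
        self.counts[b][1] += 1

    def activation(self, distance):
        b = int(distance / self.bin_width)
        touches, samples = self.counts[b]
        return touches / samples if samples else 0.0

rf = ReceptiveField()
for d, hit in [(0.02, True), (0.02, True), (0.02, False), (0.30, False)]:
    rf.update(d, hit)
print(rf.activation(0.02))  # high activation near the body ...
print(rf.activation(0.30))  # ... low activation far away
```

A controller can then threshold `activation` to trigger avoidance, or maximize it to drive reaching toward the stimulus.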


Subjects
Personal Space, Robotics, Safety, Skin, Artificial, Touch Perception, Visual Perception, Humans, Learning/physiology, Probability, Space Perception
15.
IEEE Trans Neural Netw Learn Syst ; 26(5): 1035-47, 2015 May.
Article in English | MEDLINE | ID: mdl-25029488

ABSTRACT

This paper proposes a learning from demonstration system based on a motion feature called the phase transfer sequence. The system aims to synthesize the knowledge of humanoid whole-body motions learned during teacher-supported interactions, and to apply this knowledge during different physical interactions between a robot and its surroundings. The phase transfer sequence represents the temporal order of the changing points in multiple time sequences. It encodes the dynamical aspects of the sequences so as to absorb the gaps in timing and amplitude that derive from interaction changes. The phase transfer sequence was evaluated in reinforcement learning of sitting-up and walking motions conducted by a real humanoid robot and a compatible simulator. In both tasks, the robotic motions were less dependent on physical interactions when learned with the proposed feature than with conventional similarity measurements. The phase transfer sequence also enhanced the convergence speed of motion learning. Our proposed feature is original primarily because it absorbs the gaps caused by changes of the originally acquired physical interactions, thereby enhancing the learning speed in subsequent interactions.
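The notion of "temporal order of changing points" can be reduced to a small sketch: detect where each sequence switches between rising and falling, then keep only the order of those events (our own reading of the abstract, not the authors' implementation; joint names and values are hypothetical):

```python
# Core of the "phase transfer sequence" idea: record the ORDER in which
# several joint-angle sequences change direction, ignoring exact timing
# and amplitude.

def change_points(seq):
    """Indices where the sequence switches between rising and falling."""
    pts = []
    for i in range(1, len(seq) - 1):
        if (seq[i] - seq[i - 1]) * (seq[i + 1] - seq[i]) < 0:
            pts.append(i)
    return pts

def phase_transfer_sequence(sequences):
    """Joint names sorted by when each first changes direction."""
    events = [(min(change_points(s), default=len(s)), name)
              for name, s in sequences.items()]
    return [name for _, name in sorted(events)]

joints = {
    "hip":   [0, 1, 2, 1, 0, 1],   # first turning point at index 2
    "knee":  [0, 1, 2, 3, 2, 1],   # first turning point at index 3
    "ankle": [0, 2, 1, 0, 1, 2],   # first turning point at index 1
}
print(phase_transfer_sequence(joints))  # ['ankle', 'hip', 'knee']
```

Because only the order survives, two executions of the same motion with different timing or amplitude map to the same feature, which is what lets the feature absorb gaps across changed physical interactions.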


Subjects
Learning/physiology, Motion, Motor Activity/physiology, Robotics, Computer Simulation, Functional Laterality, Humans, Knowledge, Symbolism, Time Factors, Transfer, Psychology, Walking
16.
Behav Brain Sci ; 24(6): 1068-1069, 2001 Dec.
Article in English | MEDLINE | ID: mdl-18241380

ABSTRACT

In agreement with the target article, we would like to point out a few aspects related to embodiment which further support the position of biorobotics. We argue that, especially when complex systems are considered, modeling through a physical implementation can provide hints for comprehending the whole picture behind a specific set of experimental data.

17.
Front Neurorobot ; 8: 9, 2014.
Article in English | MEDLINE | ID: mdl-24611045

ABSTRACT

Empirical studies have revealed remarkable perceptual organization in neonates. Newborn behavioral distinctions have often been interpreted as implying functionally specific modular adaptations, and are widely cited as evidence supporting the nativist agenda. In this theoretical paper, we approach newborn perception and attention from an embodied, developmental perspective. At the mechanistic level, we argue that a generative mechanism based on mutual gain control between bilaterally corresponding points may underlie a number of functionally defined "innate predispositions" related to spatial-configural perception. At the computational level, bilateral gain control implements beamforming, which enables spatial-configural tuning at the front-end sampling stage. At the psychophysical level, we predict that selective attention in newborns will favor contrast energy which projects to bilaterally corresponding points on the neonate subject's sensor array. The current work extends and generalizes previous work to formalize the bilateral correlation model of newborn attention at a high level, and demonstrates in minimal agent-based simulations how bilateral gain control can enable a simple, robust and "social" attentional bias.
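Mutual gain control between bilaterally corresponding points can be illustrated in a few lines: each side's response is scaled by the contralateral response, so stimuli that excite both corresponding points dominate the combined map (the gain function and input arrays are our assumptions, not the paper's model):

```python
# Pointwise mutual gain control over two bilaterally corresponding
# sensor arrays: each side is amplified by the contralateral response,
# so only bilaterally matched contrast energy survives.

def bilateral_gain(left, right, k=1.0):
    """Mutual gain control; k is an assumed gain constant."""
    assert len(left) == len(right)
    return [a * (k * b) + b * (k * a) for a, b in zip(left, right)]

centered = bilateral_gain([0.0, 1.0, 0.0], [0.0, 1.0, 0.0])
offcenter = bilateral_gain([0.0, 1.0, 0.0], [1.0, 0.0, 0.0])
print(sum(centered), sum(offcenter))  # matched input wins: 2.0 vs 0.0
```

A symmetric, centered stimulus (as a face-like configuration would be under fixation) projects to corresponding points on both arrays and so passes the gain stage, while a laterally shifted stimulus is suppressed.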

18.
Front Syst Neurosci ; 8: 29, 2014.
Article in English | MEDLINE | ID: mdl-24616670

ABSTRACT

Visual scan paths exhibit complex, stochastic dynamics. Even during visual fixation, the eye is in constant motion. Fixational drift and tremor are thought to reflect fluctuations in the persistent neural activity of neural integrators in the oculomotor brainstem, which integrate sequences of transient saccadic velocity signals into a short term memory of eye position. Despite intensive research and much progress, the precise mechanisms by which oculomotor posture is maintained remain elusive. Drift exhibits a stochastic statistical profile which has been modeled using random walk formalisms. Tremor is widely dismissed as noise. Here we focus on the dynamical profile of fixational tremor, and argue that tremor may be a signal which usefully reflects the workings of oculomotor postural control. We identify signatures reminiscent of a certain flavor of transient neurodynamics; toric traveling waves which rotate around a central phase singularity. Spiral waves play an organizational role in dynamical systems at many scales throughout nature, though their potential functional role in brain activity remains a matter of educated speculation. Spiral waves have a repertoire of functionally interesting dynamical properties, including persistence, which suggest that they could in theory contribute to persistent neural activity in the oculomotor postural control system. Whilst speculative, the singularity hypothesis of oculomotor postural control implies testable predictions, and could provide the beginnings of an integrated dynamical framework for eye movements across scales.

20.
IEEE Trans Neural Netw Learn Syst ; 25(1): 183-202, 2014 Jan.
Article in English | MEDLINE | ID: mdl-24806653

ABSTRACT

This paper describes a developmental framework for action-driven perception in anthropomorphic robots. The key idea of the framework is that action generation develops the agent's perception of its own body and actions. Action-driven development is critical for identifying changing body parts and understanding the effects of actions in unknown or nonstationary environments. We embedded minimal knowledge into the robot's cognitive system in the form of motor synergies and actions to allow motor exploration. The robot voluntarily generates actions and develops the ability to perceive its own body and the effect that it generates on the environment. The robot can, in addition, compose these learned primitives to perform complex actions and characterize them in terms of their sensory effects. After learning, the robot can recognize manipulative human behaviors, with cross-modal anticipation for recovery of an unavailable sensory modality, and reproduce the recognized actions afterward. We evaluated the proposed framework in experiments with a real robot, achieving autonomous body identification; learning of fixation, reaching, and grasping actions; and developmental recognition of human actions as well as their reproduction.


Subjects
Artificial Intelligence, Biomimetics/methods, Cognition/physiology, Movement/physiology, Robotics/methods, Self Concept, Adaptation, Physiological/physiology, Algorithms, Humans, Motion, Pattern Recognition, Automated/methods