Results 1 - 20 of 79
1.
Front Comput Neurosci ; 18: 1349408, 2024.
Article in English | MEDLINE | ID: mdl-38585280

ABSTRACT

The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe and purposive way. These are the fundamental elements characterizing the fourth and fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that allow the development of cyber-physical systems, similar if not superior to humans. The common wisdom is that intelligence might be provided by AI (Artificial Intelligence), a claim that is supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in a quite broad sense, encompassing LLMs and much else, without any unifying principle beyond its success in various areas. The current view of AI robotics mostly follows a purely disembodied approach consistent with the old-fashioned Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction: the practical advantage is to enhance generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary for allowing humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in current AI.
The ACo approach is a work in progress that can take advantage of a number of research threads, some of them antedating the early attempts to define AI concepts and methods. In the rest of the paper we consider some of the building blocks that need to be revisited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.

2.
Front Bioeng Biotechnol ; 12: 1285107, 2024.
Article in English | MEDLINE | ID: mdl-38638317

ABSTRACT

Immersive technology, such as extended reality, holds promise as a tool for educating ophthalmologists about the effects of low vision and for enhancing visual rehabilitation protocols. However, immersive simulators have not been evaluated for their ability to induce changes in the oculomotor system, which is crucial for understanding the visual experiences of visually impaired individuals. This study aimed to assess the REALTER (Wearable Egocentric Altered Reality Simulator) system's capacity to induce specific alterations in healthy individuals' oculomotor systems under simulated low-vision conditions. We examined task performance, eye movements, and head movements in healthy participants across various simulated scenarios. Our findings suggest that REALTER can effectively elicit behaviors in healthy individuals resembling those observed in individuals with low vision. Participants with simulated binocular maculopathy demonstrated unstable fixations and a high frequency of wide saccades. Individuals with simulated homonymous hemianopsia showed a tendency to maintain a fixed head position while executing wide saccades to survey their surroundings. Simulation of tubular vision resulted in a significant reduction in saccade amplitudes. REALTER holds promise as both a training tool for ophthalmologists and a research instrument for studying low vision conditions. The simulator has the potential to enhance ophthalmologists' comprehension of the limitations imposed by visual disabilities, thereby facilitating the development of new rehabilitation protocols.

4.
Multisens Res ; 37(1): 75-88, 2023 Dec 20.
Article in English | MEDLINE | ID: mdl-38118461

ABSTRACT

While navigating through the surroundings, we constantly rely on inertial vestibular signals for self-motion, along with visual and acoustic spatial references from the environment. However, the interaction between inertial cues and environmental spatial references is not yet fully understood. Here we investigated whether vestibular self-motion sensitivity is influenced by sensory spatial references. Healthy participants were administered a Vestibular Self-Motion Detection Task in which they were asked to detect vestibular self-motion sensations induced by low-intensity Galvanic Vestibular Stimulation. Participants performed this detection task with or without an external visual or acoustic spatial reference placed directly in front of them. We computed d-prime (d′) as a measure of participants' vestibular sensitivity and the criterion as an index of their response bias. Results showed that the visual spatial reference increased sensitivity to detect vestibular self-motion. Conversely, the acoustic spatial reference did not influence self-motion sensitivity. Neither visual nor auditory spatial references caused changes in response bias. Environmental visual spatial references provide relevant information that enhances our ability to perceive inertial self-motion cues, suggesting a specific interaction between the visual and vestibular systems in self-motion perception.
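The d′ and criterion measures used in this abstract follow standard equal-variance signal detection theory. As a minimal sketch (standard textbook formulas with a log-linear correction, hypothetical trial counts — not the authors' specific analysis pipeline):

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Equal-variance signal detection: sensitivity d' and criterion c,
    with a log-linear correction so rates of 0 or 1 stay finite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts from a self-motion detection session
d, c = dprime_criterion(hits=40, misses=10, false_alarms=5, correct_rejections=45)
```

A positive criterion indicates a conservative bias (a tendency to respond "no motion"); the abstract's null result on the criterion would correspond to c not differing across reference conditions.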


Subjects
Motion Perception, Space Perception, Vestibule, Labyrinth, Humans, Motion Perception/physiology, Male, Vestibule, Labyrinth/physiology, Female, Adult, Young Adult, Space Perception/physiology, Cues (Psychology), Visual Perception/physiology, Acoustic Stimulation, Auditory Perception/physiology
5.
Sci Rep ; 13(1): 22845, 2023 12 21.
Article in English | MEDLINE | ID: mdl-38129483

ABSTRACT

Frequently in rehabilitation, visually impaired persons are passive agents of exercises with fixed environmental constraints. In fact, a printed tactile map, i.e. a particular picture with a specific spatial arrangement, usually cannot be edited. Interaction with map content, instead, facilitates the learning of spatial skills because it exploits mental imagery, manipulation and strategic planning simultaneously. However, it has rarely been applied to maps, mainly because of technological limitations. This study aims to understand whether visually impaired people can autonomously build objects that are completely virtual. Specifically, we investigated whether a group of twelve blind persons, with a wide age range, could exploit mental imagery to interact with virtual content and actively manipulate it by means of a haptic device. The device is mouse-shaped and designed to jointly perceive, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. Spatial information can be mentally constructed by integrating local tactile cues, given by the device, with global proprioceptive cues, given by hand and arm motion. The experiment consisted of a bi-manual task, in which one hand explored some basic virtual objects and the other hand acted on a keyboard to change the position of one object in real time. The goal was to merge basic objects into more complex objects, like a puzzle. The experiment spanned different resolutions of the tactile information. We measured task accuracy, efficiency, usability and execution time. The average accuracy in solving the puzzle was 90.5%. Importantly, accuracy was linearly predicted by efficiency, measured as the number of moves needed to solve the task. Subjective parameters linked to usability and spatial resolutions did not predict accuracy; gender modulated the execution time, with men being faster than women.
Overall, we show that building purely virtual tactile objects is possible in the absence of vision and that the process is measurable and achievable in partial autonomy. Introducing virtual tactile graphics in rehabilitation protocols could facilitate the stimulation of mental imagery, a basic element of the ability to orient in space. The behavioural variable introduced in the current study can be calculated after each trial and could therefore be used to automatically measure and tailor protocols to specific user needs. In perspective, our experimental setup can inspire remote rehabilitation scenarios for visually impaired people.


Subjects
Visually Impaired Persons, Female, Humans, Male, Gender Identity, Learning, Touch/physiology, Vision, Ocular, Visually Impaired Persons/rehabilitation
6.
J Neuroeng Rehabil ; 20(1): 143, 2023 10 24.
Article in English | MEDLINE | ID: mdl-37875916

ABSTRACT

BACKGROUND: Learning of a visuomotor task not only leads to changes in motor performance but also improves proprioceptive function of the trained joint/limb system. Such sensorimotor learning may show intra-joint transfer that is observable at previously untrained degrees of freedom of the trained joint. OBJECTIVE: Here, we examined if and to what extent such learning transfers to neighboring joints of the same limb and whether such transfer is observable in the motor as well as in the proprioceptive domain. Documenting such intra-limb transfer of sensorimotor learning holds promise for the neurorehabilitation of an impaired joint by training the neighboring joints. METHODS: Using a robotic exoskeleton, 15 healthy young adults (18-35 years) underwent visuomotor training that required them to make continuous, increasingly precise, small-amplitude wrist movements. Wrist and elbow position sense just-noticeable-difference (JND) thresholds and spatial movement accuracy error (MAE) at wrist and elbow in an untrained pointing task were assessed before, immediately after, and 24 h after training. RESULTS: First, all participants showed evidence of proprioceptive and motor learning in both trained and untrained joints. The mean JND threshold decreased significantly by 30% in the trained wrist (M: 1.26° to 0.88°) and by 35% in the untrained elbow (M: 1.96° to 1.28°). Second, mean MAE in the untrained pointing task was reduced by 20% at the trained wrist and the untrained elbow. Third, after 24 h the gains in proprioceptive learning persisted at both joints, while transferred motor learning gains had decayed to such an extent that they were no longer significant at the group level. CONCLUSION: Our findings document that a one-time sensorimotor training induces rapid learning gains in proprioceptive acuity and untrained sensorimotor performance at the practiced joint. Importantly, these gains transfer almost fully to the neighboring, proximal joint/limb system.


Subjects
Robotics, Wrist, Young Adult, Humans, Elbow, Upper Extremity, Proprioception
7.
Curr Biol ; 33(20): R1038-R1040, 2023 10 23.
Article in English | MEDLINE | ID: mdl-37875073

ABSTRACT

Primary visual cortex (V1) retains a form of plasticity in adult humans: a brief period of monocular deprivation induces an enhanced response to the deprived eye, which can stabilize into a consolidated plastic change [1,2] despite unaltered thalamic input [3]. This form of homeostatic plasticity in adults is thought to act through neuronal competition between the representations of the two eyes, which are still separate in primary visual cortex [4,5]. During monocular occlusion, neurons of the deprived eye are thought to increase response gain given the absence of visual input, leading to the post-deprivation enhancement. If the decrease in reliability of the monocular response is crucial to establish homeostatic plasticity, this could be induced in several different ways. There is increasing evidence that V1 processing is affected by voluntary action, allowing it to take into account the visual effects of self-motion [6], important for efficient active vision [7]. Here we asked whether ocular dominance homeostatic plasticity could be elicited without degrading the quality of monocular visual images but simply by altering their role in visuomotor control, by introducing a visual delay in one eye while participants actively performed a visuomotor task; this causes a discrepancy between what subjects see and what they expect to see. Our results show that homeostatic plasticity is gated by the consistency between the monocular visual inputs and a person's actions, suggesting that action not only shapes visual processing but may also be essential for plasticity in adults.


Subjects
Dominance, Ocular, Visual Cortex, Female, Humans, Adult, Reproducibility of Results, Vision, Monocular/physiology, Visual Cortex/physiology, Neuronal Plasticity/physiology, Sensory Deprivation/physiology
8.
Eur J Neurosci ; 58(9): 4034-4042, 2023 11.
Article in English | MEDLINE | ID: mdl-37688501

ABSTRACT

Determining the spatial relation between objects and our location in the surroundings is essential for survival. Vestibular inputs provide key information about the position and movement of our head in the three-dimensional space, contributing to spatial navigation. Yet, their role in encoding spatial localisation of environmental targets remains to be fully understood. We probed the accuracy and precision of healthy participants' representations of environmental space by measuring their ability to encode the spatial location of visual targets (Experiment 1). Participants were asked to detect a visual light and then walk towards it. Vestibular signalling was artificially disrupted using stochastic galvanic vestibular stimulation (sGVS) applied selectively during encoding targets' location. sGVS impaired the accuracy and precision of locating the environmental visual targets. Importantly, this effect was specific to the visual modality. The location of acoustic targets was not influenced by vestibular alterations (Experiment 2). Our findings indicate that the vestibular system plays a role in localising visual targets in the surrounding environment, suggesting a crucial functional interaction between vestibular and visual signals for the encoding of the spatial relationship between our body position and the surrounding objects.


Subjects
Space Perception, Vestibule, Labyrinth, Humans, Space Perception/physiology, Vestibule, Labyrinth/physiology, Sensation, Movement
9.
Atten Percept Psychophys ; 84(8): 2670-2683, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36261764

ABSTRACT

Vestibular cues are crucial to sense the linear and angular acceleration of our head in three-dimensional space. Previous literature showed that vestibular information combines precociously with other sensory modalities, such as proprioception and vision, to facilitate spatial navigation. Recent studies suggest that auditory cues may improve self-motion perception as well. The present study investigated the ability to estimate passive rotational displacements with and without virtual acoustic landmarks, to determine how vestibular and auditory information interact in processing self-motion information. We performed two experiments. In both, healthy participants sat on a Rotational-Translational Chair. They experienced yaw rotations along the earth-vertical axis and performed a self-motion discrimination task. Their goal was to estimate the amplitude of both clockwise and counterclockwise rotations, with no visual information available, reporting whether they felt they had been rotated more or less than 45°. Depending on the condition, vestibular-only or audio-vestibular information was present. Between the two experiments, we manipulated the procedure for presenting the auditory cues (passive vs. active production of sounds). We computed the point of subjective equality (PSE) as a measure of accuracy and the just-noticeable difference (JND) as a measure of the precision of the estimations, for each condition and direction of rotation. Results in both experiments show a strong overestimation bias for the rotations, regardless of condition, direction, and sound-generation procedure. Similar to previously found heading biases, this bias in rotation estimation may facilitate the perception of substantial deviations from the most relevant directions in daily navigation activities.
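PSE and JND are typically read off a psychometric function fitted to the proportion of "rotated more than 45°" responses. The abstract does not give the fitting details, so the sketch below assumes a cumulative-Gaussian fit on illustrative (made-up) data, not the study's:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, mu, sigma):
    """Cumulative-Gaussian psychometric function."""
    return norm.cdf(x, loc=mu, scale=sigma)

# Hypothetical rotation amplitudes (deg) and proportion of
# "rotated more than 45 deg" responses
amplitudes = np.array([30.0, 35.0, 40.0, 45.0, 50.0, 55.0, 60.0])
p_more = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95, 0.99])

(mu, sigma), _ = curve_fit(cum_gauss, amplitudes, p_more, p0=[45.0, 5.0])

pse = mu                      # point of subjective equality (accuracy)
jnd = sigma * norm.ppf(0.75)  # 50% -> 75% threshold (precision)
```

In this toy dataset the PSE falls below 45°, i.e., rotations smaller than 45° already feel like more than 45° — the direction of the overestimation bias reported in the abstract.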


Subjects
Motion Perception, Vestibule, Labyrinth, Humans, Proprioception, Motion (Physics), Bias, Space Perception
10.
Front Neurorobot ; 16: 882483, 2022.
Article in English | MEDLINE | ID: mdl-35978569

ABSTRACT

A key goal in human-robot interaction (HRI) is to design scenarios between humanoid robots and humans such that the interaction is perceived as collaborative and natural, yet safe and comfortable for the human. Human skills like verbal and non-verbal communication are essential elements, as humans tend to attribute social behaviors to robots. However, aspects like the uncanny valley and different levels of technical affinity can impede the success of HRI scenarios, with consequences for the establishment of long-term interaction qualities like trust and rapport. In the present study, we investigate the impact of a humanoid robot on human emotional responses during the performance of a cognitively demanding task. We set up three different conditions for the robot, with increasing levels of social cue expression, in a between-group study design. For the analysis of emotions, we consider eye gaze behavior, arousal-valence for affective states, and the detection of action units. Our analysis reveals that participants display a strong tendency toward positive emotions in the presence of a robot with clear social skills, compared to the other conditions, in which emotions occurred only at task onset. Our study also shows how different expression levels influence the analysis of the robot's role in HRI. Finally, we critically discuss the current trend of automated emotion or affective-state recognition in HRI and demonstrate issues that have direct consequences for the interpretation of, and therefore claims about, human emotions in HRI studies.

11.
IEEE Trans Haptics ; 15(2): 339-350, 2022.
Article in English | MEDLINE | ID: mdl-35344495

ABSTRACT

Haptic exploration strategies have traditionally been studied by focusing on hand movements while neglecting how objects are moved in space. However, in daily life touch and movement cannot be disentangled. Furthermore, the relation between object manipulation, performance in haptic tasks, and spatial skills is still poorly understood. In this study, we used iCube, a sensorized cube that records its orientation in space as well as the location of the points of contact on its faces. Participants had to explore the cube faces, on which little pins were positioned in varying numbers, and count the number of pins on the faces with either an even or an odd number of pins. At the end of this task, they also completed a standard visual mental rotation test (MRT). Results showed that higher MRT scores were associated with better performance in the task with iCube, both in terms of accuracy and exploration speed, and exploration strategies associated with better performance were identified. High performers tended to rotate the cube so that the explored face kept the same spatial orientation (i.e., they preferentially explored the upward face and rotated iCube to bring the next face into the same orientation). They also explored the same face twice less often and were faster and more systematic in moving from one face to the next. These findings indicate that iCube could be used to infer subjects' spatial skills in a more natural and unobtrusive fashion than standard MRTs.


Subjects
Haptic Technology, Touch Perception, Hand, Humans, Space Perception, Touch
12.
J Exp Psychol Hum Percept Perform ; 48(2): 174-189, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35225632

ABSTRACT

When moving through space, we encode multiple sensory cues that guide our orientation through the environment. The integration of visual and self-motion cues is known to improve navigation. However, spatial navigation may also benefit from multisensory external signals. The present study aimed to investigate whether humans combine auditory and visual landmarks to improve their navigation abilities. Two experiments with different cue reliability were conducted. In both, participants' task was to return an object to its original location by using landmarks, which could be visual-only, auditory-only, or audiovisual. We took the error and variability of the object relocation distance as measures of accuracy and precision. To quantify interference between cues and assess their weights, we ran a conflict condition with a spatial discrepancy between visual and auditory landmarks. Results showed comparable accuracy and precision when navigating with visual-only and audiovisual landmarks, but greater error and variability with auditory-only landmarks. Splitting participants into two groups based on the given unimodal weights revealed that only subjects who assigned similar weights to auditory and visual cues showed a precision benefit in the audiovisual condition. These findings suggest that multisensory integration occurs depending on idiosyncratic cue weighting. Future multisensory procedures to aid mobility must consider individual differences in encoding landmarks. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
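The cue weights and precision benefit discussed here are usually framed in terms of reliability-weighted (maximum-likelihood) integration. A sketch of that standard model — our framing with illustrative σ values, not the study's actual analysis or data:

```python
def mle_combination(sigma_v, sigma_a):
    """Reliability-weighted (maximum-likelihood) cue combination:
    each cue is weighted by its inverse variance, and the combined
    estimate is more precise than either cue alone."""
    rv, ra = 1.0 / sigma_v ** 2, 1.0 / sigma_a ** 2
    w_v = rv / (rv + ra)                 # visual weight
    w_a = ra / (rv + ra)                 # auditory weight
    sigma_comb = (1.0 / (rv + ra)) ** 0.5  # combined variability
    return w_v, w_a, sigma_comb

# Hypothetical relocation variabilities: precise visual landmark (2 cm)
# vs. noisier auditory landmark (6 cm)
w_v, w_a, sigma_comb = mle_combination(2.0, 6.0)
```

With such unequal reliabilities the visual cue dominates (w_v = 0.9 here) and the predicted audiovisual gain over vision alone is small — consistent with the finding that only participants who weighted the two cues similarly showed a measurable precision benefit.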


Subjects
Spatial Navigation, Auditory Perception, Cues (Psychology), Humans, Reproducibility of Results, Visual Perception
13.
PLoS One ; 16(12): e0260700, 2021.
Article in English | MEDLINE | ID: mdl-34905544

ABSTRACT

Working memory is a cognitive system devoted to the storage and retrieval of information. Numerous studies on the development of working memory have investigated the processing of visuo-spatial and verbal non-spatialized information; however, little is known regarding the refinement of acoustic spatial and memory abilities across development. Here, we hypothesize that audio-spatial memory skills improve over development, due to strengthening of spatial and cognitive skills such as semantic elaboration. We asked children aged 6 to 11 years (n = 55) to pair spatialized animal calls with the corresponding animal's spoken name. Spatialized sounds were emitted from an audio-haptic device, haptically explored by children with the dominant hand's index finger. Children younger than 8 anchored their exploration strategy on previously discovered sounds instead of holding this information in working memory, and performed worse than older peers when asked to pair the spoken word with the corresponding animal call. In line with our hypothesis, these findings demonstrate that age-related improvements in spatial exploration and verbal coding memorization strategies affect how children learn and memorize items belonging to a complex acoustic spatial layout. Similar to vision, audio-spatial memory abilities strongly depend on cognitive development in the early years of life.


Subjects
Cognition/physiology, Memory, Short-Term/physiology, Pattern Recognition, Physiological/physiology, Spatial Memory/physiology, Age Factors, Animals, Child, Dogs, Female, Haptic Interfaces, Humans, Male, Semantics, Vocalization, Animal/physiology
14.
Front Robot AI ; 8: 812583, 2021.
Article in English | MEDLINE | ID: mdl-34970600

ABSTRACT

[This corrects the article DOI: 10.3389/frobt.2020.00121.].

15.
J Neuroeng Rehabil ; 18(1): 146, 2021 09 25.
Article in English | MEDLINE | ID: mdl-34563218

ABSTRACT

BACKGROUND: In this work, we present a novel sensory substitution system that enables learning of three-dimensional digital information via touch when vision is unavailable. The system is based on a mouse-shaped device, designed to jointly perceive, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. The device hosts a tactile actuator with three degrees of freedom: elevation, roll and pitch. The actuator approximates the tactile interaction with a plane tangential to the contact point between the finger and the field. Spatial information can therefore be mentally constructed by integrating local and global tactile cues: the actuator provides the local cues, whereas proprioception associated with the mouse motion provides the global cues. METHODS: The efficacy of the system was measured by a virtual/real object-matching task. Twenty-four gender- and age-matched participants (one blind and one blindfolded sighted group) matched a tactile dictionary of virtual objects with their 3D-printed solid versions. The exploration of the virtual objects happened in three conditions, i.e., with isolated or combined height and inclination cues. We investigated the performance and the mental cost of approximating virtual objects in these tactile conditions. RESULTS: In both groups, elevation and inclination cues were each sufficient to recognize the tactile dictionary, but their combination worked best. The presence of elevation decreased a subjective estimate of mental effort. Interestingly, only visually impaired participants were aware of their performance and were able to predict it. CONCLUSIONS: The proposed technology could facilitate the learning of science, engineering and mathematics in the absence of vision, and is also a low-cost industrial solution for making graphical user interfaces accessible to people with vision loss.
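The tangent-plane rendering described above can be illustrated with a small sketch: given a scalar field f(x, y), the actuator's three degrees of freedom correspond to the local height and the two inclination angles of the tangent plane, obtainable from the field's gradient. This is illustrative geometry under our own assumptions; the actual device mapping may differ:

```python
import math

def tangent_plane_cues(f, x, y, h=1e-4):
    """Local cues for a tangent-plane tactile display over a scalar
    field f(x, y): elevation at the contact point plus the two
    inclination angles, via central finite differences of f."""
    fx = (f(x + h, y) - f(x - h, y)) / (2.0 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2.0 * h)
    elevation = f(x, y)
    roll = math.atan(fx)    # tilt across the device's lateral axis
    pitch = math.atan(fy)   # tilt along the device's forward axis
    return elevation, roll, pitch

# Hypothetical field: a smooth bump centered at the origin
bump = lambda x, y: math.exp(-(x * x + y * y))
elev, roll, pitch = tangent_plane_cues(bump, 0.5, 0.0)
```

Exploring such a field with the mouse would yield decreasing elevation and a downhill tilt when moving away from the bump's peak, which is exactly the kind of local cue the abstract describes integrating with proprioception.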


Subjects
Touch Perception, Visually Impaired Persons, Animals, Blindness, Humans, Learning, Mice, Touch
16.
J Neuroeng Rehabil ; 18(1): 130, 2021 08 31.
Article in English | MEDLINE | ID: mdl-34465356

ABSTRACT

BACKGROUND: In recent years, many studies have focused on the use of robotic devices for both the assessment and the neuro-motor reeducation of the upper limb in subjects after stroke or spinal cord injury, or affected by neurological disorders. In contrast, it is still hard to find examples of robot-aided assessment and rehabilitation after traumatic injuries in the orthopedic field. However, the benefits associated with robotic devices are expected in orthopedic functional reeducation as well. METHODS: After a wrist injury at their workplace, the wrist functionality of twenty-three subjects was evaluated through a robot-based assessment and clinical measures (Patient Rated Wrist Evaluation, Jebsen-Taylor and Jamar Test), before and after a 3-week rehabilitative treatment. Subjects were randomized into two groups: while the control group (n = 13) underwent a traditional rehabilitative protocol, the experimental group (n = 10) was treated by replacing traditional exercises with robot-aided ones. RESULTS: Functionality, assessed through the function subscale of the PRWE scale, improved in both groups (experimental p = 0.016; control p < 0.001) and was comparable between groups, both pre (U = 45.5, p = 0.355) and post (U = 47, p = 0.597) treatment. Additionally, even though the groups' performance during the robotic assessment was comparable before treatment (U = 36, p = 0.077), after rehabilitation the experimental group presented better results than the control group (U = 26, p = 0.015). CONCLUSIONS: This work can be considered a starting point for introducing robotic devices in the orthopedic field. The robot-aided rehabilitative treatment was effective and comparable to the traditional one. Preserving efficacy and safety, a systematic use of these devices could reduce therapists' effort, increase the repeatability and accuracy of assessments, and promote subjects' engagement and voluntary participation.
Trial Registration: ClinicalTrials.gov ID: NCT04739644. Registered on February 4, 2021 (retrospectively registered), https://www.clinicaltrials.gov/ct2/show/study/NCT04739644


Subjects
Robotics, Stroke Rehabilitation, Stroke, Humans, Upper Extremity, Wrist, Wrist Joint
17.
Acta Psychol (Amst) ; 219: 103384, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34365274

ABSTRACT

Several studies have shown that impairments in one sensory modality can induce perceptual deficits in tasks involving the remaining senses. For example, people with retinal degenerative diseases such as Macular Degeneration (MD) and with central scotoma show auditory localization abilities biased towards the scotoma area of the visual field. This result indicates an auditory spatial reorganization of cross-modal processing when visual information is impaired. Recent work has shown that multisensory training can be beneficial for improving spatial perception. In line with this idea, here we hypothesize that audio-visual and motor training could improve the spatial skills of people with retinal degenerative diseases. In the present study, we tested this hypothesis in two groups of scotoma patients using an auditory and visual localization task performed before and after either training or rest. The training group was tested before and after multisensory training, while the control group performed the two tasks twice, separated by a 10-min break. The training was done with a portable device positioned on the finger, providing spatially and temporally congruent audio and visual feedback during arm movement. Our findings show improved audio and visual localization for the training group but not for the control group. These results suggest that integrating multiple spatial sensory cues can improve the spatial perception of scotoma patients. This finding motivates further research and applications for people with central scotoma, for whom rehabilitation is classically focused on training the visual modality only.


Subjects
Retina, Scotoma, Cues (Psychology), Humans, Movement, Space Perception
18.
Cyberpsychol Behav Soc Netw ; 24(5): 357-361, 2021 May.
Article in English | MEDLINE | ID: mdl-34003014

ABSTRACT

In recent years, the field of collaborative robots has been developing fast, with applications ranging from health care to search and rescue, construction, entertainment, sports, and many others. However, current social robotics is still far from the general abilities we expect in a robot collaborator. This limitation is more evident when robots are faced with real-life contexts and activities occurring over long periods. In this article, we argue that human-robot collaboration is more than just being able to work side by side on complementary tasks: collaboration is a complex relational process that entails mutual understanding and reciprocal adaptation. Drawing on this assumption, we propose to shift the focus from "human-robot interaction" to "human-robot shared experience." We hold that for enabling the emergence of such a shared experiential space between humans and robots, constructs such as coadaptation, intersubjectivity, individual differences, and identity should become the central focus of modeling. Finally, we suggest that this shift in perspective would imply changing current mainstream design approaches, which are mainly focused on functional aspects of the human-robot interaction, to the development of architectural frameworks that integrate the enabling dimensions of social cognition.


Subjects
Robotics/methods, Humans
19.
Sci Rep ; 11(1): 5281, 2021 03 05.
Article in English | MEDLINE | ID: mdl-33674684

ABSTRACT

Proprioceptive training is a neurorehabilitation approach known to improve proprioceptive acuity and motor performance of a joint/limb system. Here, we examined whether such learning transfers to the contralateral joints. Using a robotic exoskeleton, 15 healthy, right-handed adults (18-35 years) trained on a visuomotor task that required making increasingly small wrist movements challenging proprioceptive function. Wrist position sense just-noticeable-difference thresholds (JND) and spatial movement accuracy error (MAE) in a wrist-pointing task that was not trained were assessed before, immediately after, and 24 h after training. The main results are: first, training reduced JND thresholds (−27%) and MAE (−33%) in the trained right wrist. Sensory and motor gains were observable 24 h after training. Second, in the untrained left wrist, mean JND significantly decreased (−32%) at posttest. However, at retention the effect was no longer significant. Third, motor error at the untrained wrist declined slowly. Gains were not significant at posttest, but MAE was significantly reduced (−27%) at retention. This study provides the first evidence that proprioceptive-focused visuomotor training can induce proprioceptive and motor gains not only in the trained joint but also in the contralateral, homologous joint. We discuss the possible neurophysiological mechanism behind such sensorimotor transfer and its implications for neurorehabilitation.


Subjects
Exoskeleton Device, Motor Activity/physiology, Proprioception/physiology, Robotics, Wrist Joint/physiology, Wrist/physiology, Adolescent, Adult, Female, Functional Laterality, Healthy Volunteers, Humans, Male, Young Adult
20.
Cyberpsychol Behav Soc Netw ; 24(5): 315-323, 2021 May.
Article in English | MEDLINE | ID: mdl-33471584

ABSTRACT

The investigation of emerging adults' expectations of development of the next generation of robots is a fundamental challenge to narrow the gap between expectations and real technological advances, which can potentially impact the effectiveness of future interactions between humans and robots. Furthermore, the literature highlights the important role played by negative attitudes toward robots in setting people's expectations. To better explore these expectations, we administered the Scale for Robotic Needs and performed a latent profile analysis to describe different expectation profiles about the development of future robots. The profiles identified through this methodology can be placed along a continuum of robots' humanization: from a group that desires mainly the technical features to a group that imagines a humanized robot in the future. Finally, the analysis of emerging adults' knowledge about robots and their negative attitudes toward robots allowed us to understand how these affect their expectations.


Subjects
Attitude, Robotics, Adolescent, Adult, Female, Humans, Italy, Male, Young Adult