ABSTRACT
Determining the spatial relation between objects and our location in the surroundings is essential for survival. Vestibular inputs provide key information about the position and movement of our head in three-dimensional space, contributing to spatial navigation. Yet, their role in encoding the spatial localisation of environmental targets remains to be fully understood. We probed the accuracy and precision of healthy participants' representations of environmental space by measuring their ability to encode the spatial location of visual targets (Experiment 1). Participants were asked to detect a visual target (a light) and then walk towards it. Vestibular signalling was artificially disrupted using stochastic galvanic vestibular stimulation (sGVS) applied selectively while participants encoded the targets' location. sGVS impaired the accuracy and precision of locating the environmental visual targets. Importantly, this effect was specific to the visual modality. The location of acoustic targets was not influenced by vestibular alterations (Experiment 2). Our findings indicate that the vestibular system plays a role in localising visual targets in the surrounding environment, suggesting a crucial functional interaction between vestibular and visual signals for the encoding of the spatial relationship between our body position and the surrounding objects.
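As an illustration only (not the authors' analysis), accuracy and precision of target localisation are commonly quantified as the mean signed error and the variability of repeated responses; a minimal Python sketch with hypothetical walked end-points:

```python
import numpy as np

# Hypothetical walked end-points (metres) for repeated trials toward a
# target placed 3.0 m ahead; values are illustrative only.
target = 3.0
endpoints = np.array([2.8, 3.1, 2.7, 3.3, 2.9, 3.2])

errors = endpoints - target
accuracy = errors.mean()        # constant (signed) error: bias of the estimates
precision = errors.std(ddof=1)  # variable error: trial-to-trial spread

print(f"accuracy (bias) = {accuracy:+.2f} m, precision (SD) = {precision:.2f} m")
```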
Subject(s)
Space Perception, Vestibule of the Labyrinth, Humans, Space Perception/physiology, Vestibule of the Labyrinth/physiology, Sensation, Movement
ABSTRACT
BACKGROUND: Learning of a visuomotor task not only leads to changes in motor performance but also improves proprioceptive function of the trained joint/limb system. Such sensorimotor learning may show intra-joint transfer that is observable at previously untrained degrees of freedom of the trained joint. OBJECTIVE: Here, we examined whether and to what extent such learning transfers to neighboring joints of the same limb and whether such transfer is observable in the motor as well as in the proprioceptive domain. Documenting such intra-limb transfer of sensorimotor learning holds promise for the neurorehabilitation of an impaired joint by training the neighboring joints. METHODS: Using a robotic exoskeleton, 15 healthy young adults (18-35 years) underwent visuomotor training that required them to make continuous, increasingly precise, small-amplitude wrist movements. Wrist and elbow position sense just-noticeable-difference (JND) thresholds and spatial movement accuracy error (MAE) at the wrist and elbow in an untrained pointing task were assessed before, immediately after, and 24 h after training. RESULTS: First, all participants showed evidence of proprioceptive and motor learning in both trained and untrained joints. The mean JND threshold decreased significantly by 30% at the trained wrist (M: 1.26° to 0.88°) and by 35% at the untrained elbow (M: 1.96° to 1.28°). Second, the mean MAE in the untrained pointing task decreased by 20% at both the trained wrist and the untrained elbow. Third, after 24 h the gains in proprioceptive learning persisted at both joints, while transferred motor learning gains had decayed to such an extent that they were no longer significant at the group level. CONCLUSION: Our findings document that a single sensorimotor training session induces rapid learning gains in proprioceptive acuity and untrained sensorimotor performance at the practiced joint. Importantly, these gains transfer almost fully to the neighboring, proximal joint/limb system.
Subject(s)
Robotics, Wrist, Young Adult, Humans, Elbow, Upper Extremity, Proprioception
ABSTRACT
BACKGROUND: In this work, we present a novel sensory substitution system that enables the learning of three-dimensional digital information via touch when vision is unavailable. The system is based on a mouse-shaped device, designed to jointly perceive, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. The device hosts a tactile actuator with three degrees of freedom: elevation, roll and pitch. The actuator approximates the tactile interaction with a plane tangential to the contact point between the finger and the field. Spatial information can therefore be mentally constructed by integrating local and global tactile cues: the actuator provides the local cues, whereas proprioception associated with the mouse motion provides the global cues. METHODS: The efficacy of the system is measured by a virtual/real object-matching task. Twenty-four gender- and age-matched participants (one blind and one blindfolded sighted group) matched a tactile dictionary of virtual objects with their 3D-printed solid versions. The virtual objects were explored in three conditions, i.e., with isolated or combined height and inclination cues. We investigated the performance and the mental cost of approximating virtual objects in these tactile conditions. RESULTS: In both groups, elevation and inclination cues were sufficient to recognize the tactile dictionary, but their combination worked best. The presence of elevation decreased a subjective estimate of mental effort. Interestingly, only visually impaired participants were aware of their performance and were able to predict it. CONCLUSIONS: The proposed technology could facilitate the learning of science, engineering and mathematics in the absence of vision, while also offering a low-cost industrial solution for making graphical user interfaces accessible to people with vision loss.
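For illustration only (not the authors' implementation), the tangent-plane cue described above can be sketched as sampling a scalar height field and its gradient at the contact point; a minimal Python example with a hypothetical field:

```python
import numpy as np

def tangent_plane_cues(field, x, y, eps=1e-3):
    """Return elevation and inclination (pitch, roll) of the plane tangent
    to a scalar height field at the contact point (x, y).
    `field` is any callable z = field(x, y); values here are illustrative."""
    z = field(x, y)
    # Finite-difference gradient approximates the local surface slope.
    dz_dx = (field(x + eps, y) - field(x - eps, y)) / (2 * eps)
    dz_dy = (field(x, y + eps) - field(x, y - eps)) / (2 * eps)
    pitch = np.arctan(dz_dy)  # rotation about the left-right axis
    roll = np.arctan(dz_dx)   # rotation about the front-back axis
    return z, pitch, roll

# Hypothetical smooth bump standing in for a virtual object.
bump = lambda x, y: np.exp(-(x**2 + y**2))
print(tangent_plane_cues(bump, 0.5, 0.2))
```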
Subject(s)
Touch Perception, Visually Impaired Persons, Animals, Blindness, Humans, Learning, Mice, Touch
ABSTRACT
BACKGROUND: In recent years, many studies have focused on the use of robotic devices for both the assessment and the neuro-motor reeducation of the upper limb in subjects after stroke or spinal cord injury, or affected by neurological disorders. By contrast, it is still hard to find examples of robot-aided assessment and rehabilitation after traumatic injuries in the orthopedic field. However, the benefits associated with robotic devices are also expected in orthopedic functional reeducation. METHODS: After a wrist injury sustained at the workplace, the wrist functionality of twenty-three subjects was evaluated through a robot-based assessment and clinical measures (Patient Rated Wrist Evaluation, Jebsen-Taylor and Jamar Test), before and after a 3-week rehabilitative treatment. Subjects were randomized into two groups: while the control group (n = 13) underwent a traditional rehabilitative protocol, the experimental group (n = 10) was treated by replacing traditional exercises with robot-aided ones. RESULTS: Functionality, assessed through the function subscale of the PRWE scale, improved in both groups (experimental p = 0.016; control p < 0.001) and was comparable between groups, both pre (U = 45.5, p = 0.355) and post (U = 47, p = 0.597) treatment. Additionally, even though the groups' performance during the robotic assessment was comparable before the treatment (U = 36, p = 0.077), after rehabilitation the experimental group presented better results than the control one (U = 26, p = 0.015). CONCLUSIONS: This work can be considered a starting point for introducing the use of robotic devices in the orthopedic field. The robot-aided rehabilitative treatment was effective and comparable to the traditional one. Provided efficacy and safety are preserved, the systematic use of these devices could decrease human therapists' effort, increase the repeatability and accuracy of assessments, and promote subjects' engagement and voluntary participation. Trial registration: ClinicalTrials.gov ID NCT04739644. Registered on February 4, 2021 (retrospectively registered), https://www.clinicaltrials.gov/ct2/show/study/NCT04739644.
Subject(s)
Robotics, Stroke Rehabilitation, Stroke, Humans, Upper Extremity, Wrist, Wrist Joint
ABSTRACT
In this study, we recorded the pressure exerted onto an object by the index finger and the thumb of the preferred hand of 18 human subjects and either hand of two macaque monkeys during a precision grasping task. The to-be-grasped object was a custom-made device composed of two plates which could be variably oriented by a motorized system while keeping the size, and thus the grip dimension, constant. The to-be-grasped plates were covered by an array of capacitive sensors to measure specific features of finger adaptation, namely pressure intensity and centroid location and displacement. Kinematic measurements demonstrated that, for both human subjects and monkeys, different plate configurations did not affect wrist velocity and grip aperture during the reaching phase. Consistently, at the instant of finger-plate contact, pressure centroids were clustered around the same point for all handle configurations. However, small pressure centroid displacements were specifically adopted for each configuration, indicating that both humans and monkeys can display finger adaptation during precision grip. Moreover, humans applied stronger thumb pressure, showed less centroid displacement, and required less adjustment time than monkeys. These pressure patterns remained similar when different load forces were required to pull the handle, as ascertained by additional measurements in humans. The present findings indicate that, although humans and monkeys share common features in the motor control of grasping, they differ in the adjustment of fingertip pressure, probably because of skill and/or morphology divergences. Such a precision grip device may form the groundwork for future studies on prehension mechanisms.
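For illustration only (not the authors' code), a pressure centroid over an array of capacitive sensors is typically the pressure-weighted mean of the sensor positions; a minimal Python sketch with hypothetical readings:

```python
import numpy as np

# Hypothetical 4x4 capacitive array: sensor positions (mm) and pressure readings.
xs, ys = np.meshgrid(np.arange(4) * 2.0, np.arange(4) * 2.0)     # 2 mm pitch
pressure = np.random.default_rng(0).uniform(0, 1, size=(4, 4))    # illustrative values

total = pressure.sum()
centroid_x = (pressure * xs).sum() / total  # pressure-weighted mean x position
centroid_y = (pressure * ys).sum() / total  # pressure-weighted mean y position

print(f"pressure centroid at ({centroid_x:.2f}, {centroid_y:.2f}) mm, "
      f"total intensity {total:.2f} (arbitrary units)")
```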
Subject(s)
Fingers/physiology, Pinch Strength, Adult, Animals, Biomechanical Phenomena, Female, Fingers/innervation, Humans, Macaca fascicularis, Male, Motor Skills, Touch Perception
ABSTRACT
Development of the motor system lags behind that of the visual system and might delay the maturation of visual properties more closely linked to action. We measured the developmental trajectory of the discrimination of object size from observation of the biological motion of a grasping action, viewed from egocentric and allocentric viewpoints (observing one's own action or that of others), in children and adolescents from 5 to 18 years of age. Children of 5-7 years of age performed the task at chance, indicating a delayed ability to understand the goal of the action. We found a progressive improvement in discrimination ability from 9 to 18 years, which parallels the development of fine motor control. Only after 9 years of age did we observe an advantage for the egocentric view, as previously reported for adults. Given that visual and haptic sensitivity for size discrimination, as well as biological motion perception, are mature in early adolescence, we interpret our results as reflecting immaturity of the influence of the motor system on visual perception.
Subject(s)
Aging/physiology, Child Development/physiology, Sensory Feedback/physiology, Hand Strength/physiology, Psychomotor Performance/physiology, Visual Perception/physiology, Adolescent, Child, Preschool Child, Discrimination (Psychology), Female, Humans, Male, Photic Stimulation, Reaction Time/physiology
ABSTRACT
It is well known that the motor and the sensory systems structure sensory data collection and cooperate to achieve an efficient integration and exchange of information. Increasing evidence suggests that both motor and sensory functions are regulated by rhythmic processes reflecting alternating states of neuronal excitability, and these may be involved in mediating sensory-motor interactions. Here we show an oscillatory fluctuation in early visual processing that is time-locked to the execution of a voluntary action and, crucially, occurs even for visual stimuli irrelevant to the motor task. Human participants were asked to perform a reaching movement toward a display and judge the orientation of a Gabor patch, near contrast threshold, briefly presented at random times before and during the reaching movement. When the data are temporally aligned to the onset of movement, visual contrast sensitivity oscillates with a periodicity within the theta band. Importantly, the oscillations emerge during the motor planning stage, ~500 ms before movement onset. We suggest that brain oscillatory dynamics may mediate an automatic coupling between early motor planning and early visual processing, possibly instrumental in linking and closing the visual-motor control loop.
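As an illustration only (not the study's analysis pipeline), behavioural oscillations of this kind are often assessed by aligning single-trial performance to movement onset, binning it over time, and inspecting the spectrum of the detrended time course; a minimal Python sketch with simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated single trials: stimulus time relative to movement onset (s) and
# a binary correct/incorrect outcome modulated at ~5 Hz (theta band).
t = rng.uniform(-0.8, 0.2, size=4000)
p_correct = 0.6 + 0.1 * np.sin(2 * np.pi * 5 * t)
correct = rng.random(t.size) < p_correct

# Bin accuracy in 20 ms windows aligned to movement onset.
bins = np.arange(-0.8, 0.2, 0.02)
idx = np.digitize(t, bins) - 1
series = np.array([correct[idx == i].mean() for i in range(len(bins) - 1)])

# Spectrum of the detrended accuracy time course.
series = series - series.mean()
freqs = np.fft.rfftfreq(series.size, d=0.02)
power = np.abs(np.fft.rfft(series)) ** 2
print("peak frequency: %.1f Hz" % freqs[power[1:].argmax() + 1])
```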
Subject(s)
Contrast Sensitivity/physiology, Hand Strength/physiology, Movement/physiology, Periodicity, Psychomotor Performance/physiology, Acoustic Stimulation/methods, Female, Humans, Male, Photic Stimulation/methods, Reaction Time/physiology
ABSTRACT
Saccades cause compression of visual space around the saccadic target, and also a compression of time, both phenomena thought to be related to the problem of maintaining saccadic stability (Morrone et al., 2005; Burr and Morrone, 2011). Interestingly, similar phenomena occur at the time of hand movements, when tactile stimuli are systematically mislocalized in the direction of the movement (Dassonville, 1995; Watanabe et al., 2009). In this study, we measured whether hand movements also alter the perceived timing of tactile signals. Human participants compared the temporal separation between two pairs of tactile taps while moving their right hand in response to an auditory cue. The first pair of tactile taps was presented at variable times with respect to the movement, with a fixed onset asynchrony of 150 ms. Two seconds after test presentation, when the hand was stationary, the second pair of taps was delivered with a variable temporal separation. Tactile stimuli could be delivered to either the moving right hand or the stationary left hand. When the tactile stimuli were presented to the motor effector just before and during movement, their perceived temporal separation was reduced. The time compression was effector-specific, as perceived time was veridical for the stationary left hand. The results indicate that time intervals are compressed around the time of hand movements. As with vision, the mislocalizations of time and space for touch stimuli may be consequences of a mechanism attempting to achieve perceptual stability during tactile exploration of objects, suggesting common strategies within different sensorimotor systems.
Subject(s)
Hand/physiology, Movement/physiology, Time Perception/physiology, Touch Perception/physiology, Acoustic Stimulation, Cues (Psychology), Humans
ABSTRACT
Prolonged adaptation to delayed sensory feedback to a simple motor act (such as pressing a key) causes recalibration of sensory-motor synchronization, so that instantaneous feedback appears to precede the motor act that caused it (Stetson, Cui, Montague & Eagleman, 2006). We investigated whether similar recalibration occurs in school-age children. Although plasticity may be expected to be even greater in children than in adults, we found no evidence of recalibration in children aged 8-11 years. Subjects adapted to delayed feedback for 100 trials, intermittently pressing a key that caused a tone to sound after a 200 ms delay. During the test phase, subjects responded to a visual cue by pressing a key, which triggered a tone to be played at variable intervals before or after the keypress. Subjects judged whether the tone preceded or followed the keypress, yielding psychometric functions estimating the delay at which they perceived the tone to be synchronous with the action. The psychometric functions also gave an estimate of the precision of the temporal order judgment. In agreement with previous studies, adaptation caused a shift in perceived synchrony in adults, so that the keypress appeared to trail behind the auditory feedback, implying sensory-motor recalibration. However, school-age children of 8 to 11 years showed no measurable adaptation of perceived simultaneity, even after adaptation with 500 ms lags. Importantly, precision in the simultaneity task also improved with age, and this developmental trend correlated strongly with the magnitude of recalibration. This suggests that the lack of recalibration of sensory-motor simultaneity after adaptation in school-age children is related to their poor precision in temporal order judgments. To test this idea, we measured recalibration in adult subjects with auditory noise added to the stimuli (which hampered temporal precision). Under these conditions, recalibration was greatly reduced, with the magnitude of recalibration strongly correlating with temporal precision.
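For illustration only (not the study's fitting procedure), the point of perceived synchrony and the precision of a temporal order judgment are usually read off a cumulative-Gaussian psychometric function fitted to the proportion of "tone after keypress" responses; a minimal Python sketch with simulated judgments:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Simulated temporal-order judgments: tone onset relative to keypress (ms)
# and proportion of "tone came after the keypress" responses.
delays = np.array([-300, -200, -100, 0, 100, 200, 300])
p_after = np.array([0.05, 0.10, 0.30, 0.55, 0.80, 0.95, 0.98])

def psychometric(x, pse, sigma):
    # Cumulative Gaussian: pse = perceived synchrony, sigma = temporal precision.
    return norm.cdf(x, loc=pse, scale=sigma)

(pse, sigma), _ = curve_fit(psychometric, delays, p_after, p0=(0.0, 100.0))
print(f"perceived synchrony at {pse:.0f} ms, precision (sigma) {sigma:.0f} ms")
```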
Subject(s)
Physiological Adaptation, Sensory Feedback/physiology, Psychomotor Performance/physiology, Time Perception/physiology, Acoustic Stimulation, Adult, Age Factors, Analysis of Variance, Child, Child Development, Female, Humans, Male, Psychometrics
ABSTRACT
Several studies have demonstrated enhanced auditory processing in the blind, suggesting that they compensate for their visual impairment in part with greater sensitivity in the other senses. However, several physiological studies show that early visual deprivation can negatively affect auditory spatial localization. Here we report for the first time severely impaired auditory localization in the congenitally blind: thresholds for spatially bisecting three consecutive, spatially distributed sound sources were seriously compromised, on average 4.2-fold higher than typical thresholds, with half of the participants performing at chance. In agreement with previous studies, these subjects showed no deficits on simpler auditory spatial tasks or with auditory temporal bisection, suggesting that the encoding of Euclidean auditory relationships is specifically compromised in the congenitally blind. This points to the importance of visual experience in the construction and calibration of auditory spatial maps, with implications for rehabilitation strategies for the congenitally blind.
Subject(s)
Blindness/congenital, Blindness/psychology, Sound Localization/physiology, Space Perception/physiology, Acoustic Stimulation, Physiological Adaptation/physiology, Adult, Female, Humans, Male
ABSTRACT
Observing an action performed by another person to learn a new movement is a frequent experience in adult daily life, such as in sports. However, it is an especially common circumstance during the development of motor skills in childhood. Studies on healthy humans indicate that action observation induces a facilitation in the observer's motor system. This effect is supported by an action-perception matching mechanism available both in adults and in children. Because of its simplicity, action observation has been proposed as a method for clinical contexts. After a brief, non-exhaustive introduction to the essential features underlying action observation in healthy people, we review recent studies reporting beneficial effects of rehabilitative training based on a combination of action perception and execution. We focus on therapeutic interventions for patients with upper-limb motor disabilities, such as adults after stroke or children with hemiplegia due to cerebral palsy. Further, we consider data from basic science demonstrating that the facilitation induced by visual perception of the action can be modulated by the combination of multimodal stimuli related to the movement (e.g. visual and acoustic action-related inputs). In line with this, we discuss possible new directions to improve basic knowledge and therapeutic applications of action observation.
Subject(s)
Cerebral Palsy/rehabilitation, Imitative Behavior/physiology, Motor Activity/physiology, Visual Perception/physiology, Adult, Child, Humans
ABSTRACT
Cumulatively developing robots offer a unique opportunity to reenact the constant interplay between neural mechanisms related to learning, memory, prospection, and abstraction from the perspective of an integrated system that acts, learns, remembers, reasons, and makes mistakes. Situated within such interplay lie some of the computationally elusive and fundamental aspects of cognitive behavior: the ability to recall and flexibly exploit diverse experiences of one's past in the context of the present to realize goals, simulate the future, and keep learning further. This article is an adventurous exploration in this direction using a simple engaging scenario of how the humanoid iCub learns to construct the tallest possible stack given an arbitrary set of objects to play with. The learning takes place cumulatively, with the robot interacting with different objects (some previously experienced, some novel) in an open-ended fashion. Since the solution itself depends on what objects are available in the "now," multiple episodes of past experiences have to be remembered and creatively integrated in the context of the present to be successful. Starting from zero, where the robot knows nothing, we explore the computational basis of the organization of episodic memory in a cumulatively learning humanoid and address (1) how relevant past experiences can be reconstructed based on the present context, (2) how multiple stored episodic memories compete to survive in the neural space and not be forgotten, (3) how remembered past experiences can be combined with explorative actions to learn something new, and (4) how multiple remembered experiences can be recombined to generate novel behaviors (without exploration). Through the resulting behaviors of the robot as it builds, breaks, learns, and remembers, we emphasize that mechanisms of episodic memory are fundamental design features necessary to enable the survival of autonomous robots in a real world where neither everything can be known nor can everything be experienced.
Subject(s)
Learning/physiology, Man-Machine Systems, Episodic Memory, Neurological Models, Neural Networks (Computer), Neurons/physiology, Association, Computer Simulation, Humans, Nonlinear Dynamics, Robotics
ABSTRACT
Perception is a complex process, where prior knowledge exerts a fundamental influence over what we see. The use of priors underlies the well-known phenomenon of central tendency: judgments of almost all quantities (such as length, duration, and number) tend to gravitate toward their mean magnitude. Although such context dependency is universal in adult perceptual judgments, how it develops with age remains unknown. We asked children from 7 to 14 years of age and adults to reproduce the lengths of stimuli drawn from different distributions and evaluated whether judgments were influenced by stimulus context. All participants reproduced the presented length differently depending on the context: the same stimulus was reproduced as shorter when the stimuli were short on average and as longer when they were long on average. Interestingly, the relative importance given to the current sensory signal and to priors was almost constant during childhood. This strategy, which in adults is optimal in Bayesian terms, is apparently successful in holding the sensory noise at bay even during development. Hence, the influence of previous knowledge on perception is already present in young children, suggesting that context dependency is established early in the developing brain.
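As an illustration only (not the authors' model), the Bayesian account of central tendency weights the current sensory measurement against the prior mean in proportion to their reliabilities; a minimal Python sketch with hypothetical noise levels:

```python
# Hypothetical noise levels (arbitrary units); not values from the study.
sigma_sensory = 2.0   # noise of the current length measurement
sigma_prior = 4.0     # spread of the stimulus distribution (the prior)
prior_mean = 10.0     # mean length of the stimuli in the current context
measurement = 14.0    # length signalled by the senses on this trial

# Reliability-weighted combination: the weight on the prior grows as the
# sensory signal becomes noisier, pulling estimates toward the mean.
w_prior = sigma_sensory**2 / (sigma_sensory**2 + sigma_prior**2)
estimate = w_prior * prior_mean + (1 - w_prior) * measurement

print(f"prior weight = {w_prior:.2f}, reproduced length = {estimate:.1f}")
```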
Subject(s)
Judgment/physiology, Space Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Bayes Theorem, Child, Female, Humans, Male, Photic Stimulation
ABSTRACT
The trend in industrial/service robotics is to develop robots that can cooperate with people, interacting with them in an autonomous, safe and purposive way. These are the fundamental elements characterizing the fourth and the fifth industrial revolutions (4IR, 5IR): the crucial innovation is the adoption of intelligent technologies that can allow the development of cyber-physical systems, similar if not superior to humans. The common wisdom is that intelligence might be provided by AI (Artificial Intelligence), a claim that is supported more by media coverage and commercial interests than by solid scientific evidence. AI is currently conceived in quite a broad sense, encompassing LLMs and much else, without any unifying principle, justified mainly by its success in various areas. The current view of AI robotics mostly follows a purely disembodied approach that is consistent with the old-fashioned, Cartesian mind-body dualism, reflected in the software-hardware distinction inherent to the von Neumann computing architecture. The working hypothesis of this position paper is that the road to the next generation of autonomous robotic agents with cognitive capabilities requires a fully brain-inspired, embodied cognitive approach that avoids the trap of mind-body dualism and aims at the full integration of Bodyware and Cogniware. We name this approach Artificial Cognition (ACo) and ground it in Cognitive Neuroscience. It is specifically focused on proactive knowledge acquisition based on bidirectional human-robot interaction: the practical advantage is to enhance generalization and explainability. Moreover, we believe that a brain-inspired network of interactions is necessary for allowing humans to cooperate with artificial cognitive agents, building a growing level of personal trust and reciprocal accountability: this is clearly missing, although actively sought, in current AI. The ACo approach is a work in progress that can take advantage of a number of research threads, some of them antedating the early attempts to define AI concepts and methods. In the rest of the paper we will consider some of the building blocks that need to be revisited in a unitary framework: the principles of developmental robotics, the methods of action representation with prospection capabilities, and the crucial role of social interaction.
ABSTRACT
Immersive technology, such as extended reality, holds promise as a tool for educating ophthalmologists about the effects of low vision and for enhancing visual rehabilitation protocols. However, immersive simulators have not been evaluated for their ability to induce changes in the oculomotor system, which is crucial for understanding the visual experiences of visually impaired individuals. This study aimed to assess the REALTER (Wearable Egocentric Altered Reality Simulator) system's capacity to induce specific alterations in healthy individuals' oculomotor systems under simulated low-vision conditions. We examined task performance, eye movements, and head movements in healthy participants across various simulated scenarios. Our findings suggest that REALTER can effectively elicit behaviors in healthy individuals resembling those observed in individuals with low vision. Participants with simulated binocular maculopathy demonstrated unstable fixations and a high frequency of wide saccades. Individuals with simulated homonymous hemianopsia showed a tendency to maintain a fixed head position while executing wide saccades to survey their surroundings. Simulation of tubular vision resulted in a significant reduction in saccade amplitudes. REALTER holds promise as both a training tool for ophthalmologists and a research instrument for studying low vision conditions. The simulator has the potential to enhance ophthalmologists' comprehension of the limitations imposed by visual disabilities, thereby facilitating the development of new rehabilitation protocols.
ABSTRACT
This study investigated how Parkinson's disease alters haptic perception and the underlying mechanisms of somatosensory and sensorimotor integration. Changes in haptic sensitivity and acuity (the abilities to detect and to discriminate between haptic stimuli) due to Parkinson's disease were systematically quantified and contrasted with the performance of healthy older and young adults. Using a robotic force environment, virtual contours of various curvatures were presented. Participants explored these contours with their hands and indicated verbally whether they could detect a contour or discriminate between two contours. To understand what aspects of sensory or sensorimotor integration are altered by ageing and disease, we manipulated the sensorimotor aspect of the task: the robot either guided the hand along the contour or the participant actively moved the hand. Active exploration relies on multimodal sensory and sensorimotor integration, while passive guidance only requires sensory integration of proprioceptive and tactile information. The main findings of the study are as follows: first, a decline in haptic precision can already be observed in adults before the age of 70 years. Parkinson's disease may lead to an additional decrease in haptic sensitivity well beyond the levels typically seen in middle-aged and older adults. Second, the haptic deficit in Parkinson's disease is general in nature. It becomes manifest as a decrease in sensitivity and acuity (i.e. a smaller perceivable range and a diminished ability to discriminate between two perceivable haptic stimuli). Third, thresholds during both active and passive exploration are elevated, but not significantly different from each other. That is, active exploration did not magnify the haptic deficit when compared to passive hand motion. This implies that Parkinson's disease affects early stages of somatosensory integration that ultimately have an impact on processes of sensorimotor integration. Our results suggest that the known motor problems in Parkinson's disease that are generally characterized as a failure of sensorimotor integration may, in fact, have a sensory origin.
Subject(s)
Aging/physiology, Form Perception/physiology, Parkinson Disease/physiopathology, Touch Perception/physiology, Adult, Aged, Case-Control Studies, Female, Humans, Male, Middle Aged, Psychomotor Performance, Robotics/methods, Sensory Thresholds/physiology
ABSTRACT
Primary visual cortex (V1) retains a form of plasticity in adult humans: a brief period of monocular deprivation induces an enhanced response to the deprived eye, which can stabilize into a consolidated plastic change [1,2] despite unaltered thalamic input [3]. This form of homeostatic plasticity in adults is thought to act through neuronal competition between the representations of the two eyes, which are still separate in primary visual cortex [4,5]. During monocular occlusion, neurons of the deprived eye are thought to increase response gain given the absence of visual input, leading to the post-deprivation enhancement. If a decrease in the reliability of the monocular response is crucial for establishing homeostatic plasticity, this could be induced in several different ways. There is increasing evidence that V1 processing is affected by voluntary action, allowing it to take into account the visual effects of self-motion [6], important for efficient active vision [7]. Here we asked whether ocular dominance homeostatic plasticity could be elicited without degrading the quality of monocular visual images but simply by altering their role in visuomotor control, by introducing a visual delay in one eye while participants actively performed a visuomotor task; this causes a discrepancy between what the subject sees and what he/she expects to see. Our results show that homeostatic plasticity is gated by the consistency between the monocular visual inputs and a person's actions, suggesting that action not only shapes visual processing but may also be essential for plasticity in adults.
Subject(s)
Ocular Dominance, Visual Cortex, Female, Humans, Adult, Reproducibility of Results, Monocular Vision/physiology, Visual Cortex/physiology, Neuronal Plasticity/physiology, Sensory Deprivation/physiology
ABSTRACT
Frequently in rehabilitation, visually impaired persons are passive agents of exercises with fixed environmental constraints. In fact, a printed tactile map, i.e. a particular picture with a specific spatial arrangement, usually cannot be edited. Interaction with map content, instead, facilitates the learning of spatial skills because it exploits mental imagery, manipulation and strategic planning simultaneously. However, it has rarely been applied to maps, mainly because of technological limitations. This study aims to understand whether visually impaired people can autonomously build entirely virtual objects. Specifically, we investigated whether a group of twelve blind persons, with a wide age range, could exploit mental imagery to interact with virtual content and actively manipulate it by means of a haptic device. The device is mouse-shaped and designed to jointly perceive, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. Spatial information can be mentally constructed by integrating local tactile cues, given by the device, with global proprioceptive cues, given by hand and arm motion. The experiment consisted of a bi-manual task, in which one hand explored some basic virtual objects and the other hand acted on a keyboard to change the position of one object in real time. The goal was to merge basic objects into more complex objects, like a puzzle. The experiment spanned different resolutions of the tactile information. We measured task accuracy, efficiency, usability and execution time. The average accuracy in solving the puzzle was 90.5%. Importantly, accuracy was linearly predicted by efficiency, measured as the number of moves needed to solve the task. Subjective parameters linked to usability and spatial resolutions did not predict accuracy; gender modulated the execution time, with men being faster than women. Overall, we show that building purely virtual tactile objects is possible in the absence of vision and that the process is measurable and achievable in partial autonomy. Introducing virtual tactile graphics in rehabilitation protocols could facilitate the stimulation of mental imagery, a basic element of the ability to orient in space. The behavioural variable introduced in the current study can be calculated after each trial and therefore could be used to automatically measure and tailor protocols to specific user needs. Looking ahead, our experimental setup could inspire remote rehabilitation scenarios for visually impaired people.
Subject(s)
Visually Impaired Persons, Female, Humans, Male, Gender Identity, Learning, Touch/physiology, Ocular Vision, Visually Impaired Persons/rehabilitation
ABSTRACT
While navigating through the surroundings, we constantly rely on inertial vestibular signals for self-motion along with visual and acoustic spatial references from the environment. However, the interaction between inertial cues and environmental spatial references is not yet fully understood. Here we investigated whether vestibular self-motion sensitivity is influenced by sensory spatial references. Healthy participants were administered a Vestibular Self-Motion Detection Task in which they were asked to detect vestibular self-motion sensations induced by low-intensity Galvanic Vestibular Stimulation. Participants performed this detection task with or without an external visual or acoustic spatial reference placed directly in front of them. We computed d-prime (d') as a measure of participants' vestibular sensitivity and the criterion as an index of their response bias. Results showed that the visual spatial reference increased sensitivity for detecting vestibular self-motion. Conversely, the acoustic spatial reference did not influence self-motion sensitivity. Neither the visual nor the auditory spatial reference changed the response bias. Environmental visual spatial references provide relevant information to enhance our ability to perceive inertial self-motion cues, suggesting a specific interaction between the visual and vestibular systems in self-motion perception.
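For illustration only (not the study's analysis code), d' and the criterion in a yes/no detection task are standard signal-detection quantities computed from the hit and false-alarm rates; a minimal Python sketch with hypothetical trial counts:

```python
from scipy.stats import norm

# Hypothetical trial counts (not data from the study).
hits, misses = 42, 18               # "yes" / "no" responses on stimulation trials
false_alarms, correct_rej = 12, 48  # "yes" / "no" responses on catch trials

hit_rate = hits / (hits + misses)
fa_rate = false_alarms / (false_alarms + correct_rej)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)             # sensitivity
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))  # response bias

print(f"d' = {d_prime:.2f}, criterion = {criterion:.2f}")
```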
Subject(s)
Motion Perception, Space Perception, Vestibule of the Labyrinth, Humans, Motion Perception/physiology, Male, Vestibule of the Labyrinth/physiology, Female, Adult, Young Adult, Space Perception/physiology, Cues (Psychology), Visual Perception/physiology, Acoustic Stimulation, Auditory Perception/physiology
ABSTRACT
Growing evidence suggests that time in the subsecond range is tightly linked to sensory processing. Event-time can be distorted by sensory adaptation, and many temporal illusions can accompany action execution. In this study, we show that adaptation to tactile motion causes a strong contraction of the apparent duration of tactile stimuli. However, when subjects make a voluntary motor act before judging the duration, the act annuls the adaptation-induced temporal distortion, re-establishing veridical event-time. The movement needs to be performed actively by the subject: passive movement of similar magnitude and dynamics has no effect on adaptation, showing that it is the motor commands themselves, rather than reafferent signals from body movement, which reset the adaptation for tactile duration. No other concomitant perceptual changes were reported (such as changes in apparent speed or enhanced temporal discrimination), ruling out a generalized effect of body movement on somatosensory processing. We suggest that active movement resets timing mechanisms in preparation for the new scenario that the movement will cause, eliminating inappropriate biases in perceived time. Our brain seems to use intention-to-move signals to retune its perceptual machinery appropriately, preparing to extract new temporal information.