ABSTRACT
Electrotactile stimulation has been commonly used in human-machine interfaces to provide feedback to the user, thereby closing the control loop and improving performance. The encoding approach, which defines the mapping of the feedback information into stimulation profiles, is a critical component of an electrotactile interface. Ideally, the encoding provides a high-fidelity representation of the feedback variable while remaining easy for the subject to perceive and interpret. In the present study, we performed a closed-loop experiment in which discrete and continuous coding schemes were combined to exploit the benefits of both techniques. Subjects performed a muscle activation-matching task relying solely on electrotactile feedback representing the generated myoelectric signal (EMG). In particular, we investigated the performance of two coding schemes (spatial, and spatial combined with frequency) at two feedback resolutions (low: 3 intervals; high: 5 intervals). In both schemes, the stimulation electrodes were placed circumferentially around the upper arm. The magnitude of the normalized EMG was divided into intervals, and each electrode was associated with one interval. When the generated EMG entered one of the intervals, the associated electrode started stimulating. In the combined encoding, frequency modulation of the active electrode additionally indicated the momentary magnitude of the signal within the interval. The results showed that combined coding decreased the undershooting rate, variability, and absolute deviation when the resolution was low, but not when the resolution was high, where it actually worsened performance. This demonstrates that combined coding can improve the effectiveness of EMG feedback, but that this effect is limited by the intrinsic variability of myoelectric control. Our findings therefore provide important insights into, as well as elucidate the limitations of, information encoding methods when electrotactile stimulation is used to convey a feedback signal characterized by high variability (EMG biofeedback).
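The interval-based encoding lends itself to a compact illustration. Below is a minimal sketch in Python, assuming a normalized EMG value in [0, 1], equal-width intervals, and a hypothetical linear frequency range for the within-interval modulation; none of these constants are given in the abstract:

```python
def encode_emg(emg, n_intervals=5, f_min=10.0, f_max=100.0, combined=True):
    """Map a normalized EMG value in [0, 1] to (active_electrode, frequency).

    Spatial coding: the interval index selects which electrode stimulates.
    Combined coding: the position of the signal *within* the interval
    additionally modulates the stimulation frequency of that electrode.
    Interval count, frequency range, and the linear mapping are assumptions.
    """
    emg = min(max(emg, 0.0), 1.0)
    width = 1.0 / n_intervals
    electrode = min(int(emg / width), n_intervals - 1)  # active electrode index
    if not combined:
        return electrode, f_min  # spatial-only: fixed stimulation frequency
    # fraction of the way through the current interval, in [0, 1]
    within = (emg - electrode * width) / width
    frequency = f_min + within * (f_max - f_min)
    return electrode, frequency
```

For example, with five intervals, `encode_emg(0.45)` activates electrode 2 and, in the combined scheme, returns a frequency proportional to how far the signal has advanced through that interval.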
Subject(s)
Artificial Limbs, Sensory Feedback, Arm, Electromyography/methods, Feedback, Sensory Feedback/physiology, Humans, Touch/physiology
ABSTRACT
BACKGROUND: In this work, we present a novel sensory substitution system that enables learning three-dimensional digital information through touch when vision is unavailable. The system is based on a mouse-shaped device, designed to jointly convey, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. The device hosts a tactile actuator with three degrees of freedom: elevation, roll and pitch. The actuator approximates the tactile interaction with a plane tangential to the contact point between the finger and the field. Spatial information can therefore be mentally constructed by integrating local and global tactile cues: the actuator provides the local cues, whereas proprioception associated with the mouse motion provides the global cues. METHODS: The efficacy of the system was measured with a virtual/real object-matching task. Twenty-four gender- and age-matched participants (one blind and one blindfolded sighted group) matched a tactile dictionary of virtual objects with their 3D-printed solid versions. The virtual objects were explored in three conditions, i.e., with isolated or combined height and inclination cues. We investigated the performance and the mental cost of approximating virtual objects in these tactile conditions. RESULTS: In both groups, elevation and inclination cues were each sufficient to recognize the tactile dictionary, but their combination worked best. The presence of elevation cues decreased a subjective estimate of mental effort. Interestingly, only visually impaired participants were aware of their performance and able to predict it. CONCLUSIONS: The proposed technology could facilitate the learning of science, engineering and mathematics in the absence of vision, and could also serve as a low-cost industrial solution for making graphical user interfaces accessible to people with vision loss.
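The tangent-plane rendering described in the BACKGROUND can be sketched numerically: the elevation cue is the field value at the contact point, while roll and pitch follow from the local gradient. A minimal sketch, assuming the scalar field is available as a function and using finite-difference gradients and illustrative axis conventions (the actual device firmware is not described here):

```python
import math

def tangent_plane_cues(field, x, y, h=1e-3):
    """Approximate the actuator pose (elevation, roll, pitch) for a scalar
    field z = field(x, y) at the finger/mouse contact point (x, y).

    The plane tangent to the surface at (x, y) has normal
    (-dz/dx, -dz/dy, 1); roll and pitch are the tilts that align the
    actuator platform with that plane. Finite differences, the atan
    mapping, and the axis conventions are assumptions for illustration.
    """
    dzdx = (field(x + h, y) - field(x - h, y)) / (2 * h)
    dzdy = (field(x, y + h) - field(x, y - h)) / (2 * h)
    elevation = field(x, y)          # local height cue
    pitch = math.atan(dzdx)          # tilt about the y-axis
    roll = math.atan(dzdy)           # tilt about the x-axis
    return elevation, roll, pitch

# Example: a smooth bump centered at the origin
bump = lambda x, y: math.exp(-(x**2 + y**2))
print(tangent_plane_cues(bump, 0.5, 0.0))
```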
Subject(s)
Touch Perception, Visually Impaired Persons, Animals, Blindness, Humans, Learning, Mice, Touch
ABSTRACT
Vision of the body has been reported to improve tactile acuity even when vision is not informative about the actual tactile stimulation. However, it is currently unclear whether this effect is limited to body parts such as the hand, forearm or foot, which can normally be viewed, or whether it also generalizes to body locations, such as the shoulder, that are rarely before our own eyes. In this study, subjects consecutively performed a detection threshold task and a numerosity judgment task with tactile stimuli on the shoulder. Meanwhile, they watched either a real-time video showing their shoulder or, as a control condition, simply a fixation cross. We show that non-informative vision improves tactile numerosity judgment, which might involve tactile acuity, but not tactile sensitivity. Furthermore, the improvement in tactile accuracy modulated by vision seems to be due to an enhanced ability to discriminate the number of adjacent active electrodes. These results are consistent with the view that bimodal visuotactile neurons sharpen tactile receptive fields in an early somatosensory map, probably via top-down modulation of lateral inhibition.
Subject(s)
Shoulder, Touch, Hand, Human Body, Humans, Judgment
ABSTRACT
BACKGROUND: The estimation of relative distance is a perceptual task used extensively in everyday life. This important skill suffers from biases that may be more pronounced when estimation is based on haptics. This is especially true for blind and visually impaired people, for whom haptic estimation of distances is paramount but not systematically trained. We investigated whether a programmable tactile display, used autonomously, can improve distance discrimination ability in blind and severely visually impaired youngsters between 7 and 22 years old. METHODS: Training consisted of four weekly sessions in which participants were asked to haptically find, on the programmable tactile display, the pairs of squares separated by the shortest and the longest distance in tactile images containing multiple squares. A battery of haptic tests with raised-line drawings was administered before and after training, and scores were compared to those of a control group that completed only the haptic battery, without the distance discrimination training on the tactile display. RESULTS: Both blind and severely impaired youngsters became more accurate and faster at the task over the course of training. In the haptic battery, blind and severely impaired youngsters who used the programmable display improved in three and two tests, respectively. In contrast, the blind control group improved in only one test, and the severely visually impaired control group in none. CONCLUSIONS: Distance discrimination skills can be trained equally well in blind and severely impaired participants. More importantly, autonomous training with the programmable tactile display had generalized effects beyond the trained task: participants improved not only in the size discrimination test but also in memory span tests. Our study shows that tactile stimulation training requiring minimal human assistance can effectively improve generic spatial skills.
Subject(s)
Distance Perception, Space Perception, Vision Disorders/rehabilitation, Adolescent, Blindness/rehabilitation, Case-Control Studies, Child, Female, Humans, Learning, Male, Memory, Psychomotor Performance, Reaction Time, Size Perception, Touch, Young Adult
ABSTRACT
Frequently in rehabilitation, visually impaired persons are passive agents of exercises with fixed environmental constraints. In fact, a printed tactile map, i.e. a particular picture with a specific spatial arrangement, usually cannot be edited. Interaction with map content, instead, facilitates the learning of spatial skills because it exploits mental imagery, manipulation and strategic planning simultaneously. However, it has rarely been applied to maps, mainly because of technological limitations. This study aims to understand whether visually impaired people can autonomously build objects that are completely virtual. Specifically, we investigated whether a group of twelve blind persons, spanning a wide age range, could exploit mental imagery to interact with virtual content and actively manipulate it by means of a haptic device. The device is mouse-shaped and designed to jointly convey, with one finger only, local tactile height and inclination cues of arbitrary scalar fields. Spatial information can be mentally constructed by integrating the local tactile cues, given by the device, with global proprioceptive cues, given by hand and arm motion. The experiment consisted of a bi-manual task in which one hand explored basic virtual objects and the other hand acted on a keyboard to change the position of one object in real time. The goal was to merge basic objects into more complex ones, like a puzzle. The experiment spanned different resolutions of the tactile information. We measured task accuracy, efficiency, usability and execution time. The average accuracy in solving the puzzle was 90.5%. Importantly, accuracy was linearly predicted by efficiency, measured as the number of moves needed to solve the task. Subjective parameters linked to usability and the spatial resolutions did not predict accuracy; gender modulated execution time, with men being faster than women. Overall, we show that building purely virtual tactile objects is possible in the absence of vision and that the process is measurable and achievable in partial autonomy. Introducing virtual tactile graphics in rehabilitation protocols could facilitate the stimulation of mental imagery, a basic element of the ability to orient in space. The behavioural variable introduced in the current study can be calculated after each trial and could therefore be used to automatically measure and tailor protocols to specific user needs. In perspective, our experimental setup can inspire remote rehabilitation scenarios for visually impaired people.
Asunto(s)
Personas con Daño Visual , Femenino , Humanos , Masculino , Identidad de Género , Aprendizaje , Tacto/fisiología , Visión Ocular , Personas con Daño Visual/rehabilitaciónRESUMEN
Stroke patients suffer from impairments of both motor and somatosensory functions. The functional recovery of the upper extremities is one of the primary goals of rehabilitation programs. Additional somatosensory deficits limit sensorimotor function and significantly affect its recovery after the neuromotor injury. Sensory substitution systems providing tactile feedback might facilitate manipulation capability and improve patients' dexterity during grasping movements. As a first step toward this aim, we evaluated the ability of healthy subjects to exploit electrotactile feedback on the shoulder to determine the number of perceived stimuli in numerosity judgment tasks. During the experiment, we compared four stimulation patterns (two simultaneous, short and long; one intermittent; and one sequential) differing in total duration, total energy, or temporal synchrony. The experiment confirmed that the subjects' ability to enumerate electrotactile stimuli decreased as the number of active electrodes increased. Furthermore, we found that, in electrotactile stimulation, the temporal coding scheme, and not the total energy or duration, modulated accuracy in numerosity judgment. More precisely, the sequential condition resulted in significantly better numerosity discrimination than the intermittent and simultaneous stimulation. These findings, together with the fact that the shoulder appeared to be a feasible stimulation site for communicating tactile information via electrotactile feedback, can serve as a guide for delivering tactile feedback to proximal areas in stroke survivors who lack sensory integrity in the distal areas of their affected arm but retain motor skills.
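The four patterns differ only in how stimulation bursts are distributed over time across the active electrodes. A minimal sketch of plausible onset schedules, with hypothetical burst and gap durations (the abstract reports no timing parameters):

```python
def schedule(pattern, n_electrodes, burst=0.5, gap=0.25):
    """Return stimulation events as (electrode, onset_s, duration_s) tuples.

    simultaneous_short / simultaneous_long: all electrodes switch on at
    t = 0 and stay on together; the two differ in total duration (and so
    in total energy). intermittent: all electrodes pulse together in
    repeated short bursts separated by silent gaps. sequential: one
    electrode after another, never overlapping. Burst and gap durations
    are illustrative assumptions.
    """
    events = []
    if pattern == "simultaneous_short":
        events = [(e, 0.0, burst) for e in range(n_electrodes)]
    elif pattern == "simultaneous_long":
        events = [(e, 0.0, n_electrodes * burst) for e in range(n_electrodes)]
    elif pattern == "intermittent":
        for rep in range(n_electrodes):  # repeated joint bursts with gaps
            onset = rep * (burst + gap)
            events += [(e, onset, burst) for e in range(n_electrodes)]
    elif pattern == "sequential":
        events = [(e, e * (burst + gap), burst) for e in range(n_electrodes)]
    return events

for name in ("simultaneous_short", "simultaneous_long",
             "intermittent", "sequential"):
    print(name, schedule(name, 3))
```

Note that the sequential schedule is the only one in which bursts never overlap, which is the temporal property the reported results point to.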
ABSTRACT
This study investigated the influence of body motion on an echolocation task. We asked a group of blindfolded, novice, sighted participants to walk along a corridor made of plastic sound-reflecting panels. By self-generating mouth clicks, the participants attempted to identify the spatial properties of the corridor, i.e., whether it contained a left turn, a right turn or a dead end. They were asked to explore the corridor and stop whenever they were confident about its shape. Their body motion was captured by a camera system and coded. Most participants were able to accomplish the task, with a percentage of correct guesses above chance level. We found a mutual interaction among kinematic variables that can lead to optimal echolocation skills: head motion, accounting for spatial exploration; the person's motion stop-point; and the number of correct guesses about the spatial structure. The results confirmed that sighted people are able to use self-generated echoes to navigate in a complex environment. The inter-individual variability and the quality of echolocation performance seem to depend on how, and how much, the space is explored.
Subject(s)
Sound Localization/physiology, Walking/physiology, Adult, Biomechanical Phenomena, Distance Perception, Female, Head Movements, Humans, Male, Motion (Physics), Time Factors, Ocular Vision, Young Adult
ABSTRACT
Autonomous navigation in novel environments still represents a challenge for people with visual impairment (VI). Pin array matrices (PAM) are an effective way to display spatial information to VI people in educative/rehabilitative contexts, as they provide high flexibility and versatility. Here, we tested the effectiveness of a PAM with VI participants in an orientation and mobility task. Participants haptically explored a map on the PAM showing a scaled representation of a real room. The map also included a symbol indicating a virtual target position. Participants then entered the room and attempted to reach the target three times. While a control group only reviewed the same, unchanged map on the PAM between trials, an experimental group instead received an updated map that additionally showed the position they had previously reached in the room. The experimental group improved significantly across trials, with both reduced self-location errors and shorter completion times, unlike the control group. We found that learning spatial layouts through updated tactile feedback on programmable displays outperforms conventional procedures based on static tactile maps. This could represent a powerful tool for navigation, both in rehabilitation and in everyday life, improving spatial abilities and promoting independent living for VI people.
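Rendering a scaled room map on a pin array amounts to mapping room coordinates onto pin indices, e.g., to redraw the participant's previously reached position on the updated map. A minimal sketch with hypothetical room and array dimensions (neither is reported in the abstract):

```python
def room_to_pins(x_m, y_m, room_w=6.0, room_d=4.0, cols=32, rows=24):
    """Convert a position in a room (meters) to a (row, col) pin index on
    a pin array matrix displaying the room at scale. Room size and array
    resolution are illustrative assumptions.
    """
    col = min(int(x_m / room_w * cols), cols - 1)
    row = min(int(y_m / room_d * rows), rows - 1)
    return row, col

# e.g. the position the participant reached, redrawn on the updated map
print(room_to_pins(3.2, 1.1))
```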
ABSTRACT
We present a fully latching and scalable 4 × 4 haptic display with 4 mm pitch, 5 s refresh time, 400 mN holding force, and 650 µm displacement per taxel. The display serves to convey dynamic graphical information to blind and visually impaired users. Combining significant holding force with high taxel density and large-amplitude motion in a very compact overall form factor was made possible by exploiting the reversible, fast, hundred-fold change in stiffness of a thin shape memory polymer (SMP) membrane when heated above its glass transition temperature. Local heating is produced by an addressable array of stretchable microheaters, 3 mm in diameter, patterned on the SMP. Each taxel is selectively and independently actuated by synchronizing the local Joule heating with a single pressure supply. Switching off the heating locks each taxel into its position (up or down), allowing any array configuration to be held with zero power consumption. A 3D-printed pin array is mounted over the SMP membrane, providing the user with a smooth, room-temperature array of movable pins to explore by touch. Perception tests carried out with 24 blind users resulted in 70% correct pattern recognition over a 12-word tactile dictionary.
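The refresh principle (soften a taxel locally, move it with the shared pressure supply, then cool to latch) can be summarized as a simple control loop. A minimal sketch with hypothetical heater and pressure interfaces and assumed phase timings; the actual drive electronics are not specified here:

```python
import time

# Hypothetical hardware stubs standing in for the microheater array and
# the single shared pressure supply; the real interfaces are not described.
def set_heater(row, col, on): ...
def set_pressure(up): ...

HEAT_S, COOL_S = 2.0, 2.0  # assumed soften/lock times per refresh phase

def refresh(target):
    """Write a 4x4 boolean pattern to the display, one pressure state at
    a time: soften only the taxels that must take that state, push or
    release them with the shared supply, then cool so the stiffened SMP
    membrane latches them with zero holding power.
    """
    for state in (True, False):                 # pins to raise, then to lower
        moving = [(r, c) for r in range(4) for c in range(4)
                  if target[r][c] == state]
        for r, c in moving:
            set_heater(r, c, True)              # local Joule heating softens SMP
        time.sleep(HEAT_S)
        set_pressure(up=state)                  # single supply moves soft taxels
        for r, c in moving:
            set_heater(r, c, False)             # cooling locks taxels in place
        time.sleep(COOL_S)
```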
Subject(s)
Blindness/psychology, Man-Machine Systems, Physiological Pattern Recognition, Touch Perception, Touch, User-Computer Interface, Visually Impaired Persons/psychology, Adolescent, Adult, Discrimination (Psychology), Equipment Design, Female, Humans, Male, Middle Aged, Physical Stimulation, Polymers, Young Adult
ABSTRACT
OBJECTIVE: To investigate whether training with tactile matrices presented on a programmable tactile display improves the recall of spatial images in blind, low-vision and sighted youngsters, and to code and understand the behavioral underpinnings of learning two-dimensional tactile layouts in terms of spontaneous exploration strategies. METHODS: Three groups of blind, low-vision and sighted youngsters between 6 and 18 years old performed four training sessions on a weekly schedule, in which they were asked to memorize single or double spatial layouts presented as two-dimensional matrices. RESULTS: All groups of participants significantly improved their recall performance compared to the first-session baseline in the single-matrix task, with no statistical difference in performance between groups. In contrast, the learning effect in visually impaired participants was reduced in the double-matrix task, whereas it remained robust in blindfolded sighted controls. We also coded tactile exploration strategies in both tasks and their correlation with performance. Sighted youngsters, in particular, favored a proprioceptive exploration strategy. Finally, performance in the double-matrix task correlated negatively with one-handed exploration and positively with a proprioceptive strategy. CONCLUSION: The results of our study indicate that blind persons do not easily process two separate spatial layouts. However, rehabilitation programs promoting bi-manual and proprioceptive approaches to tactile exploration might help improve spatial abilities. Finally, programmable tactile displays are an effective way to make spatial and graphical configurations accessible to visually impaired youngsters, and they can be profitably exploited in rehabilitation.
ABSTRACT
Vision loss has severe impacts on physical, social and emotional well-being. The education of blind children poses particular challenges, as many school subjects (e.g., geometry, mathematics) are normally taught by relying heavily on vision. Touch-based assistive technologies are potential tools to provide graphical content to blind users, improving learning possibilities and social inclusion. Raised-line drawings are still the gold standard, but their stimuli cannot be reconfigured or adapted, and the blind person constantly requires assistance. Although much research concerns technological development, little work has addressed the assessment of programmable tactile graphics in educative and rehabilitative contexts. Here we designed, on programmable tactile displays, tests aimed at assessing spatial memory skills and shape recognition abilities. The tests involved a group of blind and a group of low-vision children and adolescents in a four-week longitudinal schedule. After establishing subject-specific difficulty levels, we observed a significant enhancement of performance across sessions in both groups. Learning effects were comparable to those of raised-paper control tests; our setup, however, required minimal external assistance. Overall, our results demonstrate that programmable maps are an effective way to display graphical content in educative/rehabilitative contexts. They can be at least as effective as traditional paper tests while providing superior flexibility and versatility.
Subject(s)
Blindness/physiopathology, Blindness/rehabilitation, Sensory Aids, Spatial Memory, Task Performance and Analysis, Touch, User-Computer Interface, Adolescent, Child, Female, Humans, Male, Reproducibility of Results, Sensitivity and Specificity, Spatial Learning, Young Adult
ABSTRACT
Due to the perceptual characteristics of the head, vibrotactile head-mounted displays (HMDs) are built with low actuator density. Therefore, vibrotactile guidance is mostly assessed by pointing towards objects in the azimuthal plane. When it comes to multisensory interaction in 3D environments, however, it is also important to convey information about objects in the elevation plane. In this paper, we design and assess a haptic guidance technique for 3D environments. First, we explore the modulation of vibration frequency as a means to indicate the position of objects in the elevation plane. Then, we assess a vibrotactile HMD built to render the position of objects in the 3D space around the subject by varying both stimulus locus and vibration frequency. Results show that frequencies modulated with a quadratic growth function allowed more accurate, more precise, and faster target localization in an active head-pointing task. The technique showed high usability and a strong learning effect in a haptic search task across different scenarios in an immersive VR setup.
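The quadratic growth mapping can be written down directly. A minimal sketch, assuming elevation normalized to [0, 1] and a hypothetical vibration frequency range (the tested ranges are not reported in the abstract):

```python
def elevation_to_frequency(elevation, f_min=50.0, f_max=250.0):
    """Map a normalized elevation in [0, 1] to a vibration frequency (Hz)
    with a quadratic growth function: frequency rises slowly for low
    targets and steeply for high ones. The frequency bounds and the
    direction of the mapping are illustrative assumptions.
    """
    e = min(max(elevation, 0.0), 1.0)
    return f_min + (f_max - f_min) * e ** 2
```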
ABSTRACT
We have recently shown that vision is important for improving spatial auditory cognition. In this study, we investigate whether touch is as effective as vision in creating a cognitive map of a soundscape. In particular, we tested whether the creation of a mental representation of a room, obtained through tactile exploration of a 3D model, can influence the perception of a complex auditory task in sighted people. We tested two groups of blindfolded sighted people (one experimental and one control group) in an auditory space bisection task. In the experimental group, the bisection task was performed three times: participants explored the 3D tactile model of the room with their hands and were led along the perimeter of the room between the first and second executions of the space bisection, and were then allowed to remove the blindfold for a few minutes and look at the room between the second and third executions. The control group instead repeated the space bisection task twice in a row, without any environmental exploration in between. Taking the first execution as a baseline, we found an improvement in precision after tactile exploration of the 3D model. Interestingly, no additional gain was obtained when room observation followed the tactile exploration, suggesting that visual cues provided no further benefit once spatial tactile cues had been internalized. No improvement was found between the first and second executions of the space bisection in the control group, indicating that the improvement was not due to task learning. Our results show that tactile information modulates the precision of an ongoing auditory spatial task as effectively as visual information. This suggests that cognitive maps elicited by touch may participate in cross-modal calibration and supra-modal representations of space that increase implicit knowledge about sound propagation.
ABSTRACT
Some blind people have developed a unique technique, called echolocation, to orient themselves in unknown environments. More specifically, by self-generating a clicking noise with the tongue, echolocators gain knowledge about the external environment by perceiving detailed object features. It is not clear to date whether sighted individuals can also develop this extremely useful technique. To investigate this, here we test the ability of novice sighted participants to perform a depth echolocation task. Moreover, in order to evaluate whether the type of room (anechoic or reverberant) and the type of clicking sound (produced with the tongue or with the hands) influence the learning of this technique, we divided the sample into four groups: half of the participants produced the clicking sound with their tongue, the other half with their hands; half performed the task in an anechoic chamber, the other half in a reverberant room. Subjects stood in front of five bars, each of a different size, placed at five different distances from them. The dimensions of the bars ensured a constant subtended angle across the five distances considered. The task was to identify the correct distance of the bar. We found that, even by the second session, participants were able to judge the correct depth of the bar at a rate greater than chance. Improvements in both precision and accuracy were observed across all experimental sessions. More interestingly, we found significantly better performance in the reverberant room than in the anechoic chamber; the type of clicking did not modulate our results. This suggests that the echolocation technique can also be learned by sighted individuals and that room reverberation can influence this learning process. More generally, this study shows that total loss of sight is not a prerequisite for echolocation skills, which suggests potentially important implications for rehabilitation settings for persons with residual vision.
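Keeping the subtended angle constant fixes the bar size as a function of distance: h = 2 d tan(θ/2). A small worked sketch, assuming a hypothetical 10-degree subtended angle (the actual angle is not reported):

```python
import math

def bar_height(distance_m, angle_deg=10.0):
    """Bar height that subtends a fixed angle at a given distance:
    h = 2 * d * tan(angle / 2). The 10-degree angle is an illustrative
    assumption.
    """
    return 2 * distance_m * math.tan(math.radians(angle_deg) / 2)

# Five distances -> five bar sizes with an identical subtended angle
for d in (1.0, 1.5, 2.0, 2.5, 3.0):
    print(f"{d:.1f} m -> {bar_height(d):.2f} m")
```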
Subject(s)
Motion Perception, Sound Localization, Spatial Processing, Ocular Vision, Acoustic Stimulation, Adult, Blindness/physiopathology, Female, Humans, Learning, Male, Spatial Orientation, Touch Perception
ABSTRACT
Visual information is paramount to space perception, and vision influences auditory space estimation. Many studies show that simultaneous visual and auditory cues improve the precision of the final multisensory estimate. However, the amount or temporal extent of visual information sufficient to influence auditory perception is still unknown. It is therefore interesting to know whether vision can improve auditory precision through a short environmental observation preceding the audio task, and whether this influence is task-specific, environment-specific, or both. To test these issues, we investigated possible improvements in acoustic precision with sighted blindfolded participants in two audio tasks [minimum audible angle (MAA) and space bisection] and two acoustically different environments (a normal room and an anechoic room). With respect to a baseline of auditory precision, we found an improvement in the space bisection task, but not in the MAA task, after observation of the normal room. No improvement was found when the same task was performed in the anechoic chamber. In addition, no difference was found between a condition of short environmental observation and a condition of full vision throughout the whole experimental session. Our results suggest that even short-term environmental observation can calibrate auditory spatial performance. They also suggest that echoes can be the cue that underpins visual calibration: echoes may mediate the transfer of information from the visual to the auditory system.
ABSTRACT
Tactile maps are efficient tools to improve the spatial understanding and mobility skills of visually impaired people. Their limited adaptability can be compensated for by haptic devices that display graphical information, but such devices are frequently assessed with performance-based metrics only, which can mask potential spatial abilities relevant to orientation and mobility (O&M) protocols. We assessed a low-tech tactile mouse able to deliver three-dimensional content, examining how performance, mental workload, behavior, and anxiety vary with task difficulty and gender in congenitally blind, late-blind, and sighted subjects. Results show that task difficulty coherently modulates the efficiency and difficulty of building mental maps, regardless of visual experience. Although attitudes were similar and gender-independent, female participants showed lower performance and higher cognitive load, especially when congenitally blind. All groups showed a significant decrease in anxiety after using the device. Tactile graphics delivered with our device therefore seem applicable across different visual experiences, with no negative emotional consequences from mentally demanding spatial tasks. Going beyond performance-based assessment, our methodology can help better target technological solutions in O&M protocols.
Subject(s)
Blindness/rehabilitation, Computer Peripherals, Emotions, Gender Identity, Self-Help Devices, Touch, Visually Impaired Persons/rehabilitation, Adult, Behavior/physiology, Blindness/psychology, Cognition/physiology, Female, Humans, Male, Middle Aged, Ocular Vision/physiology, Visually Impaired Persons/psychology, Young Adult
ABSTRACT
Improving the spatial ability of blind and visually impaired people is the main target of orientation and mobility (O&M) programs. In this study, we use a minimalistic mouse-shaped haptic device to demonstrate a new approach to evaluating devices that provide tactile representations of virtual objects. We consider psychophysical, behavioral, and subjective parameters to clarify under which circumstances mental representations of space (cognitive maps) can be efficiently constructed with touch by blindfolded sighted subjects. We study two complementary processes that determine map construction: low-level perception (in a passive stimulation task) and high-level information integration (in an active exploration task). We show that jointly considering a behavioral measure of information acquisition and a subjective measure of cognitive load can give an accurate prediction, and a practical interpretation, of mapping performance. Our simple TActile MOuse (TAMO) uses haptics to assess spatial ability: this may help O&M practitioners better evaluate individuals who are blind or visually impaired, or help those individuals evaluate their own performance.
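The joint prediction from a behavioral and a subjective measure can be illustrated as a two-predictor least-squares fit. A minimal sketch on synthetic stand-in data, assuming a linear model (the abstract does not specify the model form or any coefficients):

```python
import numpy as np

# Synthetic stand-in data: one row per participant. Values and effect
# sizes are invented purely for illustration.
rng = np.random.default_rng(0)
acquisition = rng.uniform(0, 1, 20)   # behavioral measure of info acquisition
workload = rng.uniform(0, 1, 20)      # subjective measure of cognitive load
performance = 0.8 * acquisition - 0.5 * workload + rng.normal(0, 0.1, 20)

# Ordinary least squares: performance ~ acquisition + workload + intercept
X = np.column_stack([acquisition, workload, np.ones_like(acquisition)])
coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
predicted = X @ coef
r2 = 1 - np.sum((performance - predicted) ** 2) / np.sum(
    (performance - performance.mean()) ** 2)
print(f"coefficients: {coef}, R^2 = {r2:.2f}")
```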