Results 1 - 19 of 19
1.
Sensors (Basel) ; 23(15), 2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37571667

ABSTRACT

Soft robots are interesting examples of hyper-redundancy in robotics. However, the nonlinear continuous dynamics of these robots and the use of hyper-elastic and visco-elastic materials make them difficult to model. This study presents a geometric inverse kinematics (IK) model for trajectory tracking of multi-segment extensible soft robots, in which each segment of the soft actuator is geometrically approximated by a rigid-link model to reduce complexity. In this model, the links are connected by rotary and prismatic joints, which capture both the extension and rotation of the robot. Using optimization methods, the configuration variables of the soft actuator that correspond to desired end-effector positions were obtained. Furthermore, the redundancy of the robot was exploited for secondary tasks, such as tip-angle control. The model's performance was investigated through kinematics and dynamics simulations and numerical benchmarks on multi-segment soft robots. The results showed lower computational costs and higher accuracy compared with most existing models. The method is easy to apply to multi-segment soft robots in both 2D and 3D, and it was experimentally validated on 3D-printed soft robotic manipulators. The results demonstrated high accuracy in path following using this technique.
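
As a concrete illustration of this approach, the following is a minimal sketch in Python of a planar two-segment version: each segment is approximated by a rotary plus a prismatic joint, and an optimizer finds a configuration that reaches the target while using the redundancy for a secondary tip-angle task. All lengths, bounds, and weights are illustrative assumptions, not values from the paper.

```python
# A minimal sketch, assuming a planar two-segment arm; segment bounds,
# weights, and targets are illustrative values, not the paper's.
import numpy as np
from scipy.optimize import minimize

def forward_kinematics(q):
    """q = [theta1, d1, theta2, d2]: one rotary + one prismatic joint per segment."""
    theta1, d1, theta2, d2 = q
    p1 = d1 * np.array([np.cos(theta1), np.sin(theta1)])   # tip of segment 1
    tip_angle = theta1 + theta2                            # cumulative rotation
    p2 = p1 + d2 * np.array([np.cos(tip_angle), np.sin(tip_angle)])
    return p2, tip_angle

def ik_cost(q, target, desired_tip_angle, w=0.1):
    # Primary task: reach the target position. Secondary task (exploiting
    # the redundancy): match a desired tip angle.
    pos, ang = forward_kinematics(q)
    return np.sum((pos - target) ** 2) + w * (ang - desired_tip_angle) ** 2

target = np.array([0.15, 0.10])                 # desired end-effector position [m]
q0 = np.array([0.3, 0.10, 0.3, 0.10])           # initial configuration guess
bounds = [(-np.pi, np.pi), (0.05, 0.20)] * 2    # rotation and extension limits
sol = minimize(ik_cost, q0, args=(target, np.pi / 4), bounds=bounds)
print("configuration:", sol.x)
print("reached tip:", forward_kinematics(sol.x)[0])
```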

2.
Perception ; 49(11): 1252-1259, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33086914

ABSTRACT

People often look at objects that they are holding in their hands. It is therefore reasonable to expect them to be able to direct their gaze precisely with respect to their fingers. However, we know that people make reproducible idiosyncratic errors of up to a few centimetres when they try to align a visible cursor to their own finger hidden below a surface. To find out whether they also make idiosyncratic errors when they try to look at their finger, we asked participants to hold their finger in front of their head in the dark, and look at it. Participants made idiosyncratic errors of a similar magnitude to those previously found when matching a visual cursor to their hidden finger. This shows that proprioceptive position sense of finger and gaze are not aligned, suggesting that people rely on vision to guide their gaze to their own finger.


Subject(s)
Fingers , Psychomotor Performance , Hand , Humans , Proprioception
3.
Psychol Sci ; 30(6): 822-829, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30917092

ABSTRACT

When lifting an object, it takes time to decide how heavy it is. How does this weight judgment develop? To answer this question, we examined when visual size information has to be present to induce a size-weight illusion. We found that a short glimpse (200 ms) of size information is sufficient to induce the illusion. The illusion occurred not only when the glimpse preceded the onset of lifting, but also when the object's weight could already be felt. Only glimpses presented more than 300 ms after the onset of lifting did not influence the judged weight. This suggests that it takes about 300 ms to reach a perceptual decision about an object's weight.


Subject(s)
Illusions/psychology , Size Perception , Weight Perception , Adult , Female , Humans , Male , Mechanical Phenomena , Weight Lifting/psychology , Young Adult
4.
Exp Brain Res ; 237(10): 2585-2594, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31372687

ABSTRACT

Grip force has been studied widely in a variety of interaction and movement tasks; however, little is known about the timing of grip force control in preparation for interaction with objects. For example, it is unknown whether and how the temporal preparation for a collision is related to (the prediction of) the impact load. To study this question, we examined the anticipatory timing of grip force in preparation for impact loads. We designed a collision task with different types of load forces in a controlled virtual environment. Participants interacted with a robotic device (KINARM, BKIN Technologies, Kingston) whose handles were equipped with force sensors and which the participants held in precision grip. Representations of the hand and objects were visually projected on a virtual reality display, and forces were applied to the participant's hand to simulate collisions with the virtual objects. The collisions alternated between the two hands to allow transfer and learning between the hands. The results show that there is immediate transfer of object information between the two hands, since the grip force levels were (almost) fully adjusted after a single collision with the opposite hand. The results also show that grip force levels were well adjusted to the mass and stiffness of the object. Surprisingly, the temporal onset of the grip force build-up did not depend on the impact load, suggesting that participants avoid slippage by adjusting other grip force characteristics (e.g., grip force level and rate of change) within these self-imposed timing constraints. Using catch trials, in which no impact occurred, we further analyzed the temporal profile of the grip force. The catch trial data showed that the timing of the grip force peak is also independent of the impact load and its timing, which suggests time-locked planning of the complete grip force profile.


Subject(s)
Behavior/physiology , Hand Strength/physiology , Mechanical Phenomena , Movement/physiology , Adult , Female , Hand/physiology , Humans , Male , Task Performance and Analysis , Weight-Bearing/physiology
5.
Exp Brain Res ; 237(3): 735-741, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30560507

ABSTRACT

When asked to move their unseen hand to visual targets, people exhibit idiosyncratic but reliable visuo-proprioceptive matching errors. Unsurprisingly, vision and proprioception quickly align when these errors are made apparent by providing visual feedback of the position of the hand. However, retention of this learning is limited, such that the original matching errors soon reappear once visual feedback is removed. Several recent motor learning studies have shown that reward feedback can improve retention relative to error feedback. Here, using a visuo-proprioceptive position-matching task, we examined whether binary reward feedback can be effectively exploited to reduce matching errors and, if so, whether this learning leads to improved retention relative to learning based on error feedback. The results show that participants were able to adjust the visuo-proprioceptive mapping with reward feedback, but the level of retention was similar to that observed when the adjustment was accomplished with error feedback. Therefore, like error feedback, reward feedback allows for temporary recalibration but does not support long-lasting retention of that recalibration.


Subject(s)
Feedback, Sensory/physiology , Proprioception/physiology , Psychomotor Performance/physiology , Retention, Psychology/physiology , Reward , Visual Perception/physiology , Adolescent , Adult , Female , Humans , Male , Young Adult
6.
Exp Brain Res ; 235(2): 533-545, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27807607

ABSTRACT

People make systematic errors when matching the location of an unseen index finger with the index finger of the other hand, or with a visual target. In this study, we present two experiments that test the consistency of such matching errors across different combinations of matching methods. In the first experiment, subjects had to move their unseen index fingers to visually presented targets. We examined the consistency between matching errors for the two hands and for different postures (hand above a board or below it). We found very little consistency: the matching error depended on the posture and differed between the hands. In the second experiment, we designed sets of tasks that involved the same matching configurations. For example, we compared matching errors when moving the unseen index finger to a visual target with errors when moving a visual target to the unseen index finger. We found that matching errors are not invertible. Furthermore, moving both index fingers to the same visual target resulted in a different mismatch between the hands than directly matching the two index fingers. We conclude that the errors we make when matching locations cannot arise solely from systematic mismatches between sensory representations of the positions of the fingers and of visually perceived space. We discuss how these results can be interpreted in terms of sensory transformations that depend on the movement that needs to be made.


Subject(s)
Fingers , Movement/physiology , Proprioception/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Adult , Female , Functional Laterality , Humans , Male , Middle Aged , Photic Stimulation , Statistics, Nonparametric , Touch
8.
IEEE Trans Haptics ; PP, 2023 Aug 23.
Article in English | MEDLINE | ID: mdl-37610891

ABSTRACT

Soft pneumatic displays have been shown to provide compelling soft haptic feedback. However, they have rarely been tested in virtual reality (VR) applications, although we are interested in their potential for haptic feedback in the metaverse. We therefore designed a fully soft Pneumatic Unit Cell (PUC) and implemented it in a VR button task in which users could directly use their hands for interaction. Twelve participants were asked to enter six-digit sequences while being presented with PUC feedback, vibrotactile (VT) feedback, or no haptic feedback. Metrics on task performance, kinematics, and cognitive load were collected. The results show that both vibration and PUC feedback led participants to press through the back of the buttons less often. The kinematic data showed that participants moved more smoothly with PUC feedback than with vibration feedback. These effects were also reflected in the questionnaire data: participants felt more successful when using either PUCs or VT, but they perceived the lowest level of stress when using PUCs. Preference ratings also showed that PUC feedback was the most preferred. In conclusion, our metrics confirm that PUCs are a good alternative for haptic feedback in VR tasks in which electromechanical vibration motors typically excel: creating virtual button clicks.

9.
IEEE Trans Haptics ; 13(2): 259-269, 2020.
Article in English | MEDLINE | ID: mdl-30762567

ABSTRACT

The proprioceptive sense provides somatosensory information about the positions of parts of the body, information that is essential for guiding behavior and monitoring the body. Few studies have investigated the perceptual localization of individual fingers, despite their importance for tactile exploration and fine manipulation. We present two experiments assessing proprioceptive localization of multiple fingers, either alone or in combination with visual cues. In the first experiment, we used a virtual reality paradigm to assess localization of multiple fingers. Surprisingly, the errors averaged 3.7 cm per digit, a significant fraction of the range of motion of any finger. Both random and systematic errors were large. The latter included participant-specific biases and participant-independent distortions that echoed observations from prior studies of perceptual representations of hand shape. In the second experiment, we introduced visual cues about the positions of nearby fingers and observed that this contextual information greatly decreased localization errors. The results suggest that only coarse proprioceptive information is available through somatosensation, and that finer information may not be necessary for fine motor behavior. These findings may help elucidate human hand function and inform new applications in the design of human-computer interfaces and interactions in virtual reality.


Subject(s)
Fingers/physiology , Proprioception/physiology , Space Perception/physiology , Visual Perception/physiology , Adult , Cues , Female , Humans , Male , Virtual Reality , Young Adult
10.
Front Robot AI ; 7: 14, 2020.
Article in English | MEDLINE | ID: mdl-33501183

ABSTRACT

Telerobotics aims to transfer human manipulation skills and dexterity over an arbitrary distance and at an arbitrary scale to a remote workplace. A telerobotic system that is transparent enables natural and intuitive interaction. We postulate that embodiment of the robotic system (with its three sub-components: sense of ownership, agency, and self-location) leads to optimal perceptual transparency and increases task performance. However, this has not yet been investigated directly. We reason along four premises and present findings from the literature that substantiate each of them: (1) the brain can embody non-bodily objects (e.g., robotic hands), (2) embodiment can be elicited with mediated sensorimotor interaction, (3) embodiment is robust against inconsistencies between the robotic system and the operator's body, and (4) embodiment positively correlates with dexterous task performance. We use predictive encoding theory as a framework to interpret and discuss the results reported in the literature. Numerous previous studies have shown that it is possible to induce embodiment over a wide range of virtual and real extracorporeal objects (including artificial limbs, avatars, and android robots) through mediated sensorimotor interaction. Embodiment can also occur for non-human morphologies, including elongated arms and tails. In accordance with predictive encoding theory, none of the sensory modalities is critical in establishing ownership, and discrepancies in multisensory signals do not necessarily lead to loss of embodiment. However, large discrepancies in multisensory synchrony or visual likeness can prevent embodiment from occurring. The literature provides less extensive support for the link between embodiment and (dexterous) task performance, but data gathered with prosthetic hands do indicate a positive correlation. We conclude that all four premises are supported by direct or indirect evidence in the literature, suggesting that embodiment of a remote manipulator may improve dexterous performance in telerobotics. This warrants further implementation and testing of embodiment in telerobotics. We formulate a first set of guidelines for applying embodiment in telerobotics and identify some important research topics.

11.
J Mot Behav ; 51(5): 572-579, 2019.
Article in English | MEDLINE | ID: mdl-30375949

ABSTRACT

People make systematic errors when matching the location of an unseen index finger with that of a visual target. These errors are consistent over time, but idiosyncratic and surprisingly task-specific. The errors made when moving the unseen index finger to a visual target are not consistent with the errors made when moving a visual target to the unseen index finger. To test whether such inconsistencies arise because a large part of the matching errors originates during movement execution, we compared errors in moving the unseen finger to a target with biases in deciding which of two visual targets was closer to the index finger before the movement. We found that the judgment of which target was closer was consistent with the matching errors. This means that inconsistencies in visuo-proprioceptive matching errors are not caused by systematic errors in movement execution, but are likely related to biases in sensory transformations.


Subject(s)
Distance Perception/physiology , Judgment/physiology , Psychomotor Performance/physiology , Adult , Female , Fingers , Humans , Male , Proprioception/physiology , Young Adult
12.
Iperception ; 9(3): 2041669518781877, 2018.
Article in English | MEDLINE | ID: mdl-29977492

ABSTRACT

It has been proposed that haptic spatial perception depends on one's visual abilities. We tested spatial perception in the workspace using a combination of haptic matching and line drawing tasks. There were 132 participants with varying degrees of visual ability ranging from congenitally blind to normally sighted. Each participant was blindfolded and asked to match a haptic target position felt under a table with their nondominant hand using a pen in their dominant hand. Once the pen was in position on the tabletop, they had to draw a line of equal length to a previously felt reference object by moving the pen laterally. We used targets at three different locations to evaluate whether different starting positions relative to the body give rise to different matching errors, drawn line lengths, or drawn line angles. We found no influence of visual ability on matching error, drawn line length, or line angle, but we found that early-blind participants are slightly less consistent in their matching errors across space. We conclude that the elementary haptic abilities tested in these tasks do not depend on visual experience.

13.
Acta Psychol (Amst) ; 166: 31-6, 2016 May.
Article in English | MEDLINE | ID: mdl-27043253

ABSTRACT

People make systematic errors when they move their unseen dominant hand to a visual target (visuo-haptic matching) or to their other unseen hand (haptic-haptic matching). Why they make such errors is still unknown. A key question in determining the reason is to what extent individual participants' errors are stable over time. To examine this, we developed a method to quantify that consistency. With this method, we studied the stability of systematic matching errors across time intervals of at least a month. Within this period, individual participants' matches were as consistent as one could expect on the basis of the variability in their performance within each session. Thus individual participants make quite different systematic errors, but under similar circumstances they make the same errors over long periods of time.
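
One simple way to quantify such consistency (a sketch of the general idea, not necessarily the method developed in the paper) is to compare the between-session change in a participant's mean matching error with the spread expected from the within-session variability alone:

```python
# A sketch of the idea, not necessarily the paper's exact method; the
# simulated data are illustrative.
import numpy as np

def consistency_ratio(errors_s1, errors_s2):
    """Compare the between-session change in mean matching error with the
    spread expected from within-session variability alone (~1 = consistent)."""
    d = np.mean(errors_s1) - np.mean(errors_s2)
    expected_sd = np.sqrt(np.var(errors_s1, ddof=1) / len(errors_s1)
                          + np.var(errors_s2, ddof=1) / len(errors_s2))
    return abs(d) / expected_sd

rng = np.random.default_rng(0)
s1 = rng.normal(2.0, 1.0, 30)   # cm, session 1 matching errors (simulated)
s2 = rng.normal(2.1, 1.0, 30)   # cm, session 2, a month later (simulated)
print(f"consistency ratio: {consistency_ratio(s1, s2):.2f}")
```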


Subject(s)
Functional Laterality , Orientation/physiology , Psychomotor Performance/physiology , Visual Perception , Adult , Female , Hand , Humans , Male , Repetition Priming , Task Performance and Analysis , Time Factors
14.
Front Psychol ; 7: 1620, 2016.
Article in English | MEDLINE | ID: mdl-27818638

ABSTRACT

Cutaneous information has been shown to influence proprioceptive position sense when subjects had to judge or match the posture of their limbs. In the present study, we tested whether cutaneous information also affects proprioceptive localization of the hand when moving it to a target. In an explorative study, we manipulated the skin stretch around the elbow by attaching elastic sports tape to one side of the arm. Subjects were asked to move the unseen manipulated arm to visually presented targets. We found that the tape induced a significant shift of the end-points of these hand movements. Surprisingly, this shift corresponded with an increase in elbow extension, irrespective of the side of the arm that was taped. A control experiment showed that this cannot be explained by how the skin stretches, because the skin near the elbow stretches to a similar extent on the inside and outside of the arm when the elbow angle increases and decreases, respectively. A second control experiment reproduced and extended the results of the main experiment for tape on the inside of the arm, and showed that the asymmetry was not just a consequence of the tape originally being applied slightly differently to the outside of the arm. However, the way in which the tape was applied does appear to matter, because applying the tape in the same way to the outside of the arm as to the inside of the arm influenced different subjects quite differently, suggesting that the relationship between skin stretch and sensed limb posture is quite complex. We conclude that the way the skin is stretched during a goal-directed movement provides information that helps guide the hand toward the target.

15.
PLoS One ; 11(3): e0150912, 2016.
Article in English | MEDLINE | ID: mdl-26982481

ABSTRACT

Humans make both random and systematic errors when reproducing learned movements. Intuitive haptic guidance that assists in making the movements reduces such errors. Our study examined whether any additional haptic information about the location of the target reduces errors in a position reproduction task, or whether the haptic guidance needs to be assistive to do so. Holding a haptic device, subjects reached to visible targets without time constraints. They did so in a no-guidance condition and in guidance conditions in which the direction of the force with respect to the target differed, but the force scaled with the distance to the target in the same way. We examined whether guidance forces directed towards the target would reduce subjects' errors in reproducing a prior position to the same extent as forces rotated by 90 or 180 degrees would, as they might, given that the forces provide the same information in all three cases. Without vision of the arm, both accuracy and precision were significantly better with guidance directed towards the target than in all other conditions. The errors with rotated guidance did not differ from those without guidance. Not surprisingly, movements tended to be faster when the guidance forces directed the reaches to the target. This study shows that haptic guidance significantly improved motor performance when using it was intuitive, whereas non-intuitively presented information did not lead to any improvement and seemed to be ignored, even in our simple paradigm with static targets and no time constraints.
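
The guidance conditions can be made concrete with a short sketch: a force that scales with the distance to the target and whose direction is rotated by 0, 90, or 180 degrees. The gain and positions below are illustrative assumptions.

```python
# A hedged sketch, assuming a spring-like guidance law; the gain k and the
# positions are illustrative values.
import numpy as np

def guidance_force(hand, target, angle_deg=0.0, k=20.0):
    """Force [N] that scales with distance to the target; angle_deg = 0 gives
    assistive guidance, 90 and 180 give the rotated conditions."""
    error = target - hand                      # vector from hand to target
    a = np.deg2rad(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a)],
                    [np.sin(a),  np.cos(a)]])  # 2-D rotation matrix
    return k * rot @ error

hand, target = np.array([0.0, 0.0]), np.array([0.05, 0.02])
for angle in (0, 90, 180):
    print(f"{angle:3d} deg: force = {guidance_force(hand, target, angle)}")
```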


Subject(s)
Intuition , Movement , Adult , Female , Humans , Male , Proprioception , Vision, Ocular , Young Adult
16.
PLoS One ; 11(7): e0158709, 2016.
Article in English | MEDLINE | ID: mdl-27438009

ABSTRACT

Visuo-haptic biases are observed when bringing one's unseen hand to a visual target. The biases differ between, but are consistent within, participants. We investigated the usefulness of adjusting haptic guidance to these user-specific biases when aligning haptic and visual perception. By adjusting the haptic guidance according to the biases, we aimed to reduce the conflict between the modalities. We first measured the biases using an adaptive procedure. Next, we measured performance in a pointing task under three conditions: (1) visual images adjusted to user-specific biases, without haptic guidance; (2) veridical visual images combined with haptic guidance; and (3) shifted visual images combined with haptic guidance. Adding haptic guidance increased precision. Combining haptic guidance with user-specific visual information yielded the highest accuracy and the lowest level of conflict with the guidance at the end point. These results show the potential of correcting for user-specific perceptual biases when designing haptic guidance.
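
The bias-correction idea reduces to shifting the guidance goal by the participant's previously measured bias; a minimal sketch with hypothetical numbers:

```python
# A minimal sketch; the bias vector and target are hypothetical values.
import numpy as np

measured_bias = np.array([0.012, -0.008])   # m, from the adaptive procedure
visual_target = np.array([0.10, 0.05])      # m, veridical target position
# Shift the guidance goal by the participant's bias so the guidance matches
# where the hand feels the target to be.
guidance_target = visual_target + measured_bias
print("guide toward:", guidance_target)
```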


Subject(s)
Ocular Physiological Phenomena , Psychomotor Performance/physiology , Vision, Ocular/physiology , Visual Perception/physiology , Adult , Female , Hand/physiology , Humans , Imaging, Three-Dimensional , Male , Photic Stimulation , Touch Perception/physiology
17.
PLoS One ; 10(9): e0138023, 2015.
Article in English | MEDLINE | ID: mdl-26361353

ABSTRACT

In an admittance-controlled haptic device, input forces are used to calculate the movement of the device. Although developers try to minimize delays, there will always be a delay between the applied force and the corresponding movement in such systems, which might affect what the user of the device perceives. In this experiment, we tested whether these delays in haptic human-robot interaction influence the perception of mass. An admittance-controlled manipulator was used to simulate various masses. In a staircase design, subjects had to decide which of two virtual masses was heavier after gently pushing them leftward with the right hand in mid-air (no friction, no gravity). The manipulator responded as quickly as possible or with an additional delay (25 or 50 ms) to the forces exerted by the subject on the handle of the haptic device. The perceived mass was ~10% larger for a delay of 25 ms and ~20% larger for a delay of 50 ms. Based on these results, we estimate that the delays present in current admittance-controlled haptic devices (up to 20 ms) increase the perceived mass by less than the Weber fraction for mass (~10% for inertial mass). Additional analyses showed that when the perceptual differences were small, the subjects' decisions about mass did not correlate with intuitive variables such as force, velocity, or a combination of these, nor with any other measured variable, suggesting that subjects did not have a consistent strategy during guessing or used other sources of information, such as the efference copy of their pushes.
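
The extrapolation to 20 ms follows from the roughly linear effect of delay on perceived mass; a quick back-of-the-envelope check:

```python
# The reported effects (~10% at 25 ms, ~20% at 50 ms) are roughly linear,
# about 0.4% of perceived mass per millisecond of delay.
slope = 0.10 / 25.0                      # fractional increase per ms
for delay_ms in (20, 25, 50):
    print(f"{delay_ms} ms delay -> ~{slope * delay_ms:.0%} heavier")
# 20 ms -> ~8%, below the ~10% Weber fraction for inertial mass.
```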


Subject(s)
Hand/physiology , Weight Perception/physiology , Adult , Biomechanical Phenomena , Electrical Equipment and Supplies , Female , Humans , Male , User-Computer Interface
18.
Atten Percept Psychophys ; 75(7): 1583-99, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23868564

ABSTRACT

The integration of visual and auditory inputs in the human brain works properly only if the components are perceived in close temporal proximity. In the present study, we quantified cross-modal interactions in the human brain for audiovisual stimuli with temporal asynchronies, using a paradigm from rhythm perception in which participants had to align the temporal position of a target in a rhythmic sequence of four markers. In the first experiment, the target and markers consisted of a visual flash or an auditory noise burst, and all four combinations of target and marker modalities were tested. In the same-modality conditions, no temporal biases were observed, and the precision of the adjusted temporal position of the target was high. In the different-modality conditions, we found a systematic temporal bias of 25-30 ms. In the second part of the first experiment and in a second experiment, we quantified temporal ventriloquism using audiovisual markers with different stimulus onset asynchronies (SOAs) between the two components and a visual target. The adjusted target positions varied by up to about 50 ms and depended systematically on the SOA and its proximity to the point of subjective synchrony. These data allowed us to test different quantitative models. The most satisfying model, based on the work of Maij, Brenner, and Smeets (Journal of Neurophysiology, 102, 490-495, 2009), linked temporal ventriloquism to the percept of synchrony and adequately described the results of the present study as well as those of some earlier experiments.
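
As an illustration of the kind of model involved (not the authors' exact formulation), the perceived time of the visual target can be attracted toward the auditory marker in proportion to the probability that the pair is perceived as synchronous, modeled as a Gaussian of the SOA centered on the point of subjective synchrony:

```python
# All parameter values (PSS, width, attraction weight) are assumptions chosen
# only to reproduce the order of magnitude of the reported ~50 ms shifts.
import numpy as np

def perceived_visual_time(t_visual, t_audio, pss=0.03, sigma=0.08, w=0.5):
    """Shift the visual event toward the sound, weighted by the probability
    that the pair is perceived as synchronous (Gaussian of SOA around PSS)."""
    soa = t_audio - t_visual                                  # s
    p_sync = np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))   # synchrony percept
    return t_visual + w * p_sync * soa

for soa_ms in (-100, -30, 0, 30, 100):
    shift_ms = perceived_visual_time(0.0, soa_ms / 1000.0) * 1000.0
    print(f"SOA {soa_ms:+4d} ms -> visual shift {shift_ms:+5.1f} ms")
```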


Subject(s)
Auditory Perception/physiology , Models, Theoretical , Sound Localization/physiology , Time Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Aged , Association , Brain/physiology , Female , Humans , Judgment/physiology , Male , Middle Aged , Photic Stimulation/methods , Time Factors , Young Adult
19.
PLoS One ; 8(9): e74236, 2013.
Article in English | MEDLINE | ID: mdl-24019959

ABSTRACT

Information from cutaneous, muscle, and joint receptors is combined with efferent information to create a reliable percept of the configuration of our body (proprioception). We exposed the hand to several horizontal force fields to examine whether external forces influence this percept. In an end-point task, subjects reached to visually presented positions with their unseen hand. In a vector reproduction task, subjects had to judge a distance and direction visually and reproduce the corresponding vector by moving the unseen hand. We found systematic individual errors in the reproduction of the end-points and vectors, but these errors did not vary systematically with the force fields. This suggests that human proprioception accounts for external forces applied to the hand when sensing the position of the hand in the horizontal plane.


Subject(s)
Proprioception , Adolescent , Adult , Female , Humans , Male , Young Adult