Results 1 - 20 of 30
1.
IEEE Trans Haptics ; 13(2): 325-333, 2020.
Article in English | MEDLINE | ID: mdl-31603801

ABSTRACT

This paper describes a prototype guidance system, "FingerSight," to help people without vision locate and reach to objects in peripersonal space. It consists of four evenly spaced tactors embedded into a ring worn on the index finger, with a small camera mounted on top. Computer-vision analysis of the camera image controls vibrotactile feedback, guiding users to move their hand toward nearby targets. Two experiments tested the functionality of the prototype system. The first found that participants could discriminate between five different vibrotactile sites (four individual tactors and all simultaneously) with a mean accuracy of 88.8% after initial training. In the second experiment, participants were blindfolded and instructed to move their hand wearing the device to one of four locations within arm's reach, while hand trajectories were tracked. The tactors were controlled using two different strategies: (1) repeatedly signal the axis with the largest error, and (2) signal both axes in alternation. Participants demonstrated essentially straight-line trajectories toward the target under both instructions, but the temporal parameters (rate of approach, duration) showed an advantage for correction on both axes in sequence.
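The two tactor-control strategies can be sketched as follows. This is a minimal illustration; the function names and the idea of returning one (axis, error) pair per control step are assumptions, as the abstract does not give the actual control law or the mapping from error to vibration.

```python
# Hypothetical sketches of the two strategies for choosing which axis
# error to signal on a given control step. dx and dy are the current
# horizontal and vertical errors between hand and target.

def largest_error_axis(dx, dy):
    """Strategy 1: repeatedly signal the axis with the largest error."""
    return ("x", dx) if abs(dx) >= abs(dy) else ("y", dy)

def alternating_axis(dx, dy, step):
    """Strategy 2: signal both axes in alternation, one per step."""
    return ("x", dx) if step % 2 == 0 else ("y", dy)
```

Under Strategy 1 the signaled axis can switch at any time as the errors change, while Strategy 2 guarantees both axes are refreshed on a fixed schedule, which may explain the temporal advantage the abstract reports.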


Subject(s)
Artificial Intelligence , Blindness/rehabilitation , Motor Activity , Personal Space , Self-Help Devices , Space Perception , Touch Perception , User-Computer Interface , Wearable Electronic Devices , Adult , Humans , Motor Activity/physiology , Space Perception/physiology , Touch Perception/physiology
2.
Cogn Res Princ Implic ; 2(1): 34, 2017.
Article in English | MEDLINE | ID: mdl-28890919

ABSTRACT

This paper describes a novel method for displaying data obtained by three-dimensional medical imaging, by which the position and orientation of a freely movable screen are optically tracked and used in real time to select the current slice from the data set for presentation. With this method, which we call a "freely moving in-situ medical image", the screen and imaged data are registered to a common coordinate system in space external to the user, at adjustable scale, and are available for free exploration. The three-dimensional image data occupy empty space, as if an invisible patient is being sliced by the moving screen. A behavioral study using real computed tomography lung vessel data established the superiority of the in situ display over a control condition with the same free exploration, but displaying data on a fixed screen (ex situ), with respect to accuracy in the task of tracing along a vessel and reporting spatial relations between vessel structures. A "freely moving in-situ medical image" display appears from these measures to promote spatial navigation and understanding of medical data.

3.
IEEE Trans Haptics ; 10(4): 545-554, 2017.
Article in English | MEDLINE | ID: mdl-28436890

ABSTRACT

Surgeons routinely perform surgery with noisy, sub-threshold, or obscured visual and haptic feedback, either due to the necessary surgical approach, or because the systems on which they are operating are exceedingly delicate. Technological solutions incorporating haptic feedback augmentation have been proposed to address these difficulties, but the consequences for motor control have not been directly investigated and quantified. In this paper, we present two isometric force generation tasks performed with a hand-held robotic tool that provides in-situ augmentation of force sensation. An initial study indicated that magnification helps the operator maintain a desired supra-threshold target force in the absence of visual feedback. We further found that such force magnification reduces the mean and standard deviation of applied forces, and reduces the magnitude of power in the 4 to 7 Hz band corresponding to tremor. Specific benefits to stability, voluntary control, and tremor were observed in the pull direction, which has been previously identified as more dexterous compared to push.
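The tremor measure above (power in the 4 to 7 Hz band of the applied force) can be computed from a recorded force trace with a simple periodogram. A generic sketch follows; the sampling rate and the synthetic "tremor" trace are illustrative assumptions, and the paper's exact spectral estimator is not specified in the abstract.

```python
import numpy as np

def band_power(signal, fs, f_lo=4.0, f_hi=7.0):
    """Integrate the periodogram of `signal` over [f_lo, f_hi] Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / (fs * len(signal))
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return np.sum(psd[mask]) * (freqs[1] - freqs[0])

# Synthetic force trace: a steady 2 N hold plus a small 6 Hz ripple
# standing in for physiological tremor.
fs = 200.0                                  # Hz, assumed sampling rate
t = np.arange(0, 5, 1 / fs)
force = 2.0 + 0.05 * np.sin(2 * np.pi * 6.0 * t)
```

`band_power(force, fs)` isolates the tremor ripple while ignoring the DC holding force, which is the kind of quantity the study compares across magnification conditions.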


Subject(s)
Feedback , Motor Skills , Robotics , Touch Perception , Electrical Equipment and Supplies , Equipment Design , Female , Hand , Humans , Male , Psychophysics , Robotic Surgical Procedures/instrumentation
4.
Hum Factors ; 57(3): 523-37, 2015 May.
Article in English | MEDLINE | ID: mdl-25875439

ABSTRACT

OBJECTIVE: This study investigated the effectiveness of force augmentation in haptic perception tasks. BACKGROUND: Considerable engineering effort has been devoted to developing force augmented reality (AR) systems to assist users in delicate procedures like microsurgery. In contrast, far less has been done to characterize the behavioral outcomes of these systems, and no research has systematically examined the impact of sensory and perceptual processes on force augmentation effectiveness. METHOD: Using a handheld force magnifier as an exemplar haptic AR, we conducted three experiments to characterize its utility in the perception of force and stiffness. Experiments 1 and 2 measured, respectively, the user's ability to detect and differentiate weak forces (<0.5 N) with or without the assistance of the device and compared it to direct perception. Experiment 3 examined the perception of stiffness through the force augmentation. RESULTS: The user's ability to detect and differentiate small forces was significantly improved by augmentation at both threshold and suprathreshold levels. The augmentation also enhanced stiffness perception. However, although perception of augmented forces matches that of the physical equivalent for weak forces, it falls off with increasing intensity. CONCLUSION: The loss in effectiveness reflects the nature of sensory and perceptual processing. Such perceptual limitations should be taken into consideration in the design and development of haptic AR systems to maximize utility. APPLICATION: The findings provide useful information for building effective haptic AR systems, particularly for use in microsurgery.


Subject(s)
Psychophysics/instrumentation , Psychophysics/methods , Touch Perception/physiology , Touch/physiology , Adult , Equipment Design , Female , Humans , Male , User-Computer Interface , Young Adult
5.
Appl Opt ; 53(24): 5421-4, 2014 Aug 20.
Article in English | MEDLINE | ID: mdl-25321114

ABSTRACT

This paper describes a projection system for augmenting a scanned laser projector to create very small, very bright images for use in a microsurgical augmented reality system. Normal optical design approaches are insufficient because the laser beam profile differs optically from the aggregate image. We propose a novel arrangement of two lens groups working together to simultaneously adjust both the laser beam of the projector (individual pixels) and the spatial envelope containing them (the entire image) to the desired sizes. The present work models such a system using paraxial beam equations and ideal lenses to demonstrate that there is an "in-focus" range, or depth of field, defined by the intersection of the resulting beam-waist radius curve and the ideal pixel radius for a given image size. Images within this depth of field are in focus and can be adjusted to the desired size by manipulating the lenses.
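The "in-focus" range described above follows from the standard paraxial Gaussian-beam waist equation, w(z) = w0·sqrt(1 + (z/zR)²) with Rayleigh range zR = πw0²/λ. The sketch below uses that textbook formula only; it is not the paper's specific two-lens-group prescription, and the numeric values in the usage are illustrative.

```python
import math

def beam_radius(z, w0, wavelength):
    """Paraxial Gaussian-beam radius w(z) at distance z from the waist w0."""
    z_r = math.pi * w0**2 / wavelength   # Rayleigh range
    return w0 * math.sqrt(1 + (z / z_r) ** 2)

def depth_of_field(w0, wavelength, w_max):
    """Distance from the waist at which the beam grows to a pixel-size
    limit w_max; beyond this, pixels are larger than the ideal radius."""
    z_r = math.pi * w0**2 / wavelength
    return z_r * math.sqrt((w_max / w0) ** 2 - 1)
```

For example, a 1 mm waist at 532 nm stays under a 2 mm pixel-radius limit out to `depth_of_field(1e-3, 532e-9, 2e-3)` meters from the waist, which is the kind of intersection between the beam-waist curve and the ideal pixel radius that the abstract describes.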


Subject(s)
Imaging, Three-Dimensional/instrumentation , Lasers , Lighting/instrumentation , Microsurgery/instrumentation , Ophthalmologic Surgical Procedures/instrumentation , Surgery, Computer-Assisted/instrumentation , Tomography, Optical Coherence/instrumentation , Equipment Design , Equipment Failure Analysis
6.
IEEE J Transl Eng Health Med ; 2: 2700109, 2014.
Article in English | MEDLINE | ID: mdl-27170882

ABSTRACT

We present a novel device mounted on the fingertip for acquiring and transmitting visual information through haptic channels. In contrast to previous systems in which the user interrogates an intermediate representation of visual information, such as a tactile display representing a camera-generated image, our device uses a fingertip-mounted camera and haptic stimulator to allow the user to feel visual features directly from the environment. Visual features ranging from simple intensity or oriented edges to more complex information identified automatically about objects in the environment may be translated in this manner into haptic stimulation of the finger. Experiments using an initial prototype to trace a continuous straight edge have quantified the user's ability to discriminate the angle of the edge, a potentially useful feature for higher-level analysis of the visual scene.

7.
Exp Brain Res ; 230(2): 251-60, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23873494

ABSTRACT

The act of puncturing a surface with a hand-held tool is a ubiquitous but complex motor behavior that requires precise force control to avoid potentially severe consequences. We present a detailed model of puncture over a time course of approximately 1,000 ms, which is fit to kinematic data from individual punctures, obtained via a simulation with high-fidelity force feedback. The model describes puncture as proceeding from purely physically determined interactions between the surface and tool, through decline of force due to biomechanical viscosity, to cortically mediated voluntary control. When fit to the data, it yields parameters for the inertial mass of the tool/person coupling, time characteristic of force decline, onset of active braking, stopping time and distance, and late oscillatory behavior, all of which the analysis relates to physical variables manipulated in the simulation. While the present data characterize distinct phases of motor performance in a group of healthy young adults, the approach could potentially be extended to quantify the performance of individuals from other populations, e.g., with sensory-motor impairments. Applications to surgical force control devices are also considered.


Subject(s)
Hand Strength/physiology , Models, Biological , Movement/physiology , Physics , Psychomotor Performance/physiology , Adult , Analysis of Variance , Biomechanical Phenomena , Feedback , Female , Humans , Inhibition, Psychological , Linear Models , Male , Physical Stimulation , Time Factors , Weight-Bearing
8.
J Heart Valve Dis ; 22(2): 195-203, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23798208

ABSTRACT

BACKGROUND AND AIM OF THE STUDY: The pulmonary trunk (PT) structure and function are abnormal in multiple congenital cardiovascular diseases. Existing surgical treatments of congenital malformations of the right ventricular outflow tract and PT do not provide a long-term replacement that can adapt to normal growth. Although there is strong interest in developing tissue-engineered approaches for PT conduit replacement, there remains an absence of any complete investigation of the native geometric growth patterns of the PT to serve as a necessary benchmark. METHODS: Eleven Dorset sheep (aged 4-12 months) underwent a single cardiac magnetic resonance imaging study, from which luminal arterial surface points were obtained using a novel semi-automated segmentation technique. The three-dimensional shapes of the PT and ascending aorta (AA) were measured over the same time period to gain insight into differences in the geometric changes between these two great vessels. RESULTS: The volumetric growth of the PT appeared to be a linear function of age, whereas its surface geometry demonstrated non-uniform growth patterns. While tortuosity was maintained with age, the cross-sectional shape of the main pulmonary artery (MPA) evolved from circular in young animals to elliptical at 12 months. In addition, the distal MPA near the pulmonary artery bifurcation tapered with age. CONCLUSION: It can be concluded that postnatal growth of the PT is not a simple proportionate (i.e. isotropic) size increase, but rather exhibits complex three-dimensional geometric features during somatic growth.


Subject(s)
Aorta/growth & development , Pulmonary Artery/growth & development , Pulmonary Valve/growth & development , Animals , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Models, Animal , Organ Size , Sheep
9.
Cognition ; 123(1): 33-49, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22217386

ABSTRACT

We extended the classic anorthoscopic viewing procedure to test a model of visualization of 3D structures from 2D cross-sections. Four experiments were conducted to examine key processes described in the model: localizing cross-sections within a common frame of reference and spatiotemporally integrating them into a hierarchical object representation. Participants used a hand-held device to reveal a hidden object as a sequence of cross-sectional images. The process of localization was manipulated by contrasting two displays, in situ vs. ex situ, which differed in whether cross-sections were presented at their source locations or displaced to a remote screen. The process of integration was manipulated by varying the structural complexity of target objects and their components. Experiments 1 and 2 demonstrated visualization of 2D and 3D line-segment objects and verified predictions about display and complexity effects. In Experiments 3 and 4, the visualized forms were familiar letters and numbers. Errors and orientation effects showed that displacing cross-sectional images to a remote display (ex situ viewing) impeded the ability to determine spatial relationships among pattern components, a failure of integration at the object level.


Subject(s)
Visual Perception/physiology , Female , Humans , Male , Orientation/physiology , Pattern Recognition, Visual/physiology , Photic Stimulation , Psychomotor Performance/physiology , Reaction Time/physiology , Space Perception/physiology , Transducers , Young Adult
10.
J Endourol ; 25(5): 743-5, 2011 May.
Article in English | MEDLINE | ID: mdl-21480789

ABSTRACT

BACKGROUND AND PURPOSE: Real-time tomographic reflection is a novel technique that uses a geometrically fixed arrangement of a conventional ultrasound transducer, a transducer-incorporated monitor, and a half-silvered mirror. This device, dubbed the Sonic Flashlight, generates a virtual anatomically scaled image, obviating the need for a separate monitor. It may therefore facilitate invasive procedures, such as percutaneous access to the kidney. This proof-of-concept study assesses the feasibility of this technique for renal imaging and concomitant needle puncture guidance. MATERIALS AND METHODS: In a swine model with induced hydronephrosis, the Sonic Flashlight was used to visualize and guide needle access to the renal pelvis. Passage of a 7-inch, 18-gauge spinal needle was performed. Entry into the collecting system was confirmed by the aspiration of urine. RESULTS: The anechoic renal pelvis and hyperechoic needle tip could be seen with the Sonic Flashlight device. Successful access to the collecting system was obtained twice without difficulty. The sonographic image, appearing to emanate from the tip of the transducer, makes visualization and manipulation more intuitive. Furthermore, by placing the operator's eyes and hands in the same field as the sonogram, image-guided procedures are potentially easier to learn. CONCLUSION: The relatively shallow depth of penetration of the current device limits its clinical usefulness. A new Sonic Flashlight with a greater depth of penetration is in development.


Subject(s)
Kidney Tubules, Collecting/surgery , Tomography/methods , Animals , Kidney Pelvis/surgery , Sus scrofa/surgery , Time Factors
11.
Vision Res ; 50(24): 2618-26, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20801143

ABSTRACT

In our research, people use actions to expose hidden targets as planar images displayed either in situ or ex situ (displaced remotely). We show that because ex situ viewing impedes relating actions to their perceptual consequences, it impairs localizing targets, including compensating for surface deformation, and directing movement toward them. Using a 3D analogue of anorthoscopic perception, we demonstrate that spatio-temporal integration of contiguous planar slices is possible when action and perception are co-located, but not when they are separated. Ex situ viewing precludes the formation of a spatial frame of reference that supports complex visualization from action.


Subject(s)
Imaging, Three-Dimensional , Psychomotor Performance , Visual Perception , Echocardiography, Doppler , Humans
12.
Opt Lett ; 35(14): 2352-4, 2010 Jul 15.
Article in English | MEDLINE | ID: mdl-20634827

ABSTRACT

The concept and instantiation of real-time tomographic holography (RTTH) for augmented reality is presented. RTTH enables natural hand-eye coordination to guide invasive medical procedures without requiring tracking or a head-mounted device. It places a real-time virtual image of an object's cross section into its actual location, without noticeable viewpoint dependence (e.g., parallax error). The virtual image is viewed through a flat narrowband holographic optical element (HOE) with optical power that generates an in-situ virtual image (within 1 m of the HOE) from a small spatial light modulator display without obscuring a direct view of the physical world. Rigidly fixed upon a medical ultrasound probe, an RTTH device could show the scan in its actual location inside the patient, even as the probe was moved relative to the patient.


Subject(s)
Diagnostic Imaging/methods , Fetal Development/physiology , Holography/methods , Ultrasonography, Prenatal , Computer Simulation , Female , Head , Humans , Pregnancy , Reproducibility of Results , Tomography , Tomography, X-Ray Computed
13.
Int J Biomed Imaging ; 2010: 980872, 2010.
Article in English | MEDLINE | ID: mdl-20634912

ABSTRACT

We have developed a method for extracting anatomical shape models from n-dimensional images using an image analysis framework we call Shells and Spheres. This framework utilizes a set of spherical operators centered at each image pixel, grown to reach, but not cross, the nearest object boundary by incorporating "shells" of pixel intensity values while analyzing intensity mean, variance, and first-order moment. Pairs of spheres on opposite sides of putative boundaries are then analyzed to determine boundary reflectance which is used to further constrain sphere size, establishing a consensus as to boundary location. The centers of a subset of spheres identified as medial (touching at least two boundaries) are connected to identify the interior of a particular anatomical structure. For the automated 3D algorithm, the only manual interaction consists of tracing a single contour on a 2D slice to optimize parameters, and identifying an initial point within the target structure.
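The core sphere-growing idea can be illustrated in miniature. The toy below grows a disk at one pixel of a 2D image, one shell at a time, and stops when a shell's mean intensity deviates sharply from the interior, so the sphere reaches but does not cross the boundary. The z-score stopping rule, threshold, and test image are assumptions for illustration; the actual framework uses richer statistics (variance, first-order moments, boundary reflectance, and medial-sphere analysis).

```python
import numpy as np

def grow_sphere(img, cy, cx, max_r=20, z_thresh=3.0):
    """Grow a disk at (cy, cx) by shells of pixels; stop at the first
    shell whose mean intensity deviates from the interior mean by more
    than z_thresh interior standard deviations. Returns the radius."""
    yy, xx = np.ogrid[:img.shape[0], :img.shape[1]]
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    inside = img[dist < 1]                     # start from the center pixel
    for r in range(1, max_r + 1):
        shell = img[(dist >= r) & (dist < r + 1)]
        if shell.size == 0:
            break
        sd = inside.std() if inside.std() > 0 else 1.0
        if abs(shell.mean() - inside.mean()) > z_thresh * sd:
            return r                           # boundary reached
        inside = np.concatenate([inside, shell])
    return max_r

# Test image: a bright disk of radius 10 on a dark background.
img = np.zeros((41, 41))
yy, xx = np.ogrid[:41, :41]
img[np.sqrt((yy - 20) ** 2 + (xx - 20) ** 2) <= 10] = 100.0
```

Growing from the disk's center stops near radius 10, where the shell first mixes bright and dark pixels.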

14.
J Exp Psychol Appl ; 16(1): 45-59, 2010 Mar.
Article in English | MEDLINE | ID: mdl-20350043

ABSTRACT

The present research investigates how mental visualization of a 3D object from 2D cross sectional images is influenced by displacing the images from the source object, as is customary in medical imaging. Three experiments were conducted to assess people's ability to integrate spatial information over a series of cross sectional images in order to visualize an object posed in 3D space. Participants used a hand-held tool to reveal a virtual rod as a sequence of cross-sectional images, which were displayed either directly in the space of exploration (in-situ) or displaced to a remote screen (ex-situ). They manipulated a response stylus to match the virtual rod's pitch (vertical slant), yaw (horizontal slant), or both. Consistent with the hypothesis that spatial colocation of image and source object facilitates mental visualization, we found that although single dimensions of slant were judged accurately with both displays, judging pitch and yaw simultaneously produced differences in systematic error between in-situ and ex-situ displays. Ex-situ imaging also exhibited errors such that the magnitude of the response was approximately correct but the direction was reversed. Regression analysis indicated that the in-situ judgments were primarily based on spatiotemporal visualization, while the ex-situ judgments relied on an ad hoc, screen-based heuristic. These findings suggest that in-situ displays may be useful in clinical practice by reducing error and facilitating the ability of radiologists to visualize 3D anatomy from cross sectional images.


Subject(s)
Echocardiography, Doppler , Fixation, Ocular , Imaging, Three-Dimensional , Radiology/methods , Humans , Professional Competence , Visual Perception
15.
J Vasc Interv Radiol ; 20(10): 1380-3, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19699661

ABSTRACT

The Sonic Flashlight is an ultrasound (US) device that projects real-time US images into patients with use of a semireflective/transparent mirror. The present study evaluated the feasibility of use of the Sonic Flashlight for clinical peripherally inserted central catheter placements, originally with the mirror located inside a sterile cover (n = 15), then with the mirror outside (n = 11). Successful access was obtained in all cases. Results show that this new design improved visibility, as judged subjectively firsthand and in photographs. The study demonstrated the feasibility of the Sonic Flashlight and the new design to help assure sterility without degrading visibility, allowing further clinical trials involving physicians and nurses.


Subject(s)
Catheterization, Central Venous/instrumentation , Catheterization, Central Venous/methods , Ultrasonography, Interventional/instrumentation , Equipment Design , Equipment Failure Analysis , Humans , Reproducibility of Results , Sensitivity and Specificity
16.
J Ultrasound Med ; 28(5): 651-6, 2009 May.
Article in English | MEDLINE | ID: mdl-19389904

ABSTRACT

OBJECTIVE: We describe a case series constituting the first clinical trial by intravenous (IV) team nurses using the sonic flashlight (SF) for ultrasound guidance of peripherally inserted central catheter (PICC) placement. METHODS: Two IV team nurses with more than 10 years of experience with placing PICCs and 3 to 6 years of experience with ultrasound attempted to place PICCs under ultrasound guidance in patients requiring long-term IV access. One of two methods of ultrasound guidance was used: conventional ultrasound (CUS; 60 patients) or a new device called the SF (44 patients). The number of needle punctures required to gain IV access was recorded for each patient. RESULTS: In both methods, 87% of the cases resulted in successful venous access on the first attempt. The average number of needle sticks per patient was 1.18 for SF-guided procedures compared with 1.20 for CUS-guided procedures. No significant difference was found in the distribution of the number of attempts between the two methods. Anecdotal comments by the nurses indicated the comparative ease of use of the SF display, although the relatively small scale of the SF image compared with the CUS image was also noted. CONCLUSIONS: We have shown that the SF is a safe and effective device for guidance of PICC placement in the hands of experienced IV team nurses. The advantage of placing the ultrasound image at its actual location must be balanced against the relatively small scale of the SF image.


Subject(s)
Catheterization, Central Venous/instrumentation , Lighting/instrumentation , Nursing/instrumentation , Ultrasonography/instrumentation , Adult , Aged , Catheterization, Central Venous/methods , Female , Humans , Male , Middle Aged , Pilot Projects
17.
IEEE Trans Biomed Eng ; 56(6): 1691-9, 2009 Jun.
Article in English | MEDLINE | ID: mdl-19272982

ABSTRACT

We describe a fully automated ultrasound analysis system that tracks and identifies the common carotid artery (CCA) and the internal jugular vein (IJV). Our goal is to prevent inadvertent damage to the CCA when targeting the IJV for catheterization. The automated system starts by identifying and fitting ellipses to all the regions that look like major arteries or veins throughout each B-mode ultrasound image frame. The spokes ellipse algorithm described in this paper tracks these putative vessels and calculates their characteristics, which are then weighted and summed to identify the vessels. The optimum subset of characteristics and their weights were determined from a training set of 38 subjects, whose necks were scanned with a portable 10 MHz ultrasound system at 10 frames per second. Stepwise linear discriminant analysis (LDA) narrowed the characteristics to the five that best distinguish between the CCA and IJV. A paired version of Fisher's LDA was used to calculate the weights for each of the five parameters. Leave-one-out validation studies showed that the system could track and identify the CCA and IJV with 100% accuracy in this dataset.
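The two-class Fisher LDA with leave-one-out validation described above can be sketched on synthetic data. The abstract does not give the five vessel characteristics or their values, so random, well-separated features stand in; the weight formula w ∝ Sw⁻¹(μ₁ − μ₀) with a midpoint threshold is the standard two-class construction, not necessarily the paper's paired variant.

```python
import numpy as np

def fisher_lda_weights(X0, X1):
    """Two-class Fisher LDA weights: w = Sw^{-1} (mu1 - mu0)."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    return np.linalg.solve(Sw, mu1 - mu0)

def leave_one_out_accuracy(X, y):
    """Hold out each sample in turn, refit the LDA, and classify it."""
    correct = 0
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        Xtr, ytr = X[mask], y[mask]
        w = fisher_lda_weights(Xtr[ytr == 0], Xtr[ytr == 1])
        mid = 0.5 * (Xtr[ytr == 0].mean(axis=0) + Xtr[ytr == 1].mean(axis=0))
        pred = int(X[i] @ w > mid @ w)   # midpoint-projection threshold
        correct += pred == y[i]
    return correct / len(X)

# Synthetic stand-ins for the five per-vessel characteristics.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (20, 5)),    # class 0: "CCA-like"
               rng.normal(3.0, 0.1, (20, 5))])   # class 1: "IJV-like"
y = np.array([0] * 20 + [1] * 20)
```

With well-separated classes the leave-one-out accuracy is 100%, mirroring the validation result reported for the CCA/IJV dataset.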


Subject(s)
Carotid Artery, Common/diagnostic imaging , Image Processing, Computer-Assisted/methods , Jugular Veins/diagnostic imaging , Ultrasonography/methods , Adult , Algorithms , Data Interpretation, Statistical , Discriminant Analysis , Fourier Analysis , Humans , Jugular Veins/anatomy & histology , Middle Aged , Reproducibility of Results
18.
Int J Geriatr Psychiatry ; 24(8): 837-46, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19085964

ABSTRACT

OBJECTIVE: The amygdalae have been a focus of mood disorder research due to their key role in processing emotional information. It has been long known that depressed individuals demonstrate impaired functional performance while engaged in emotional tasks. The structural basis for these functional differences has been investigated via volumetric analysis with mixed findings. In this study, we examined the morphometric basis for these functional changes in late-life depression (LLD) by analyzing both the size and shape of the amygdalae with the hypothesis that shape differences may be apparent even when overall volume differences are inconsistent. METHODS: Magnetic resonance imaging data were acquired from 11 healthy, elderly individuals and 14 depressed, elderly individuals. Amygdalar size was quantified by computing total volume and amygdalar shape was quantified with a shape analysis method that we have developed. RESULTS: No significant volumetric differences were found for either amygdala. Nevertheless, localized regions of significant shape variation were detected for the left and right amygdalae. The most significant difference was contraction (LLD subjects as compared to control subjects) in a region typically associated with the basolateral nucleus, which plays a key role in emotion recognition in neurobiologic models of depression. CONCLUSIONS: In this LLD study, we have shown that, despite insignificant amygdalar volumetric findings, variations of amygdalar shape can be detected and localized. With further investigation, morphometric analysis of various brain structures may help elucidate the neurobiology associated with LLD and other mood disorders.


Subject(s)
Amygdala/pathology , Depressive Disorder/pathology , Aged , Brain Mapping , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged
19.
Spat Cogn Comput ; 8(4): 333-356, 2008 Oct.
Article in English | MEDLINE | ID: mdl-19177177

ABSTRACT

The present study investigated how people learn to correct errors in actions directed toward cognitively encoded spatial locations. Subjects inserted a stylus to reach a hidden target localized by means of ultrasound imaging and portrayed with a scaled graph. As was found previously (Wu et al., 2005), subjects initially underestimated the target location but corrected their responses when given training with feedback. Three experiments were conducted to examine whether the error correction occurred at (1) the mapping from the input to a mental representation of target location; (2) the mapping from the representation of target location to the intended insertion response, or (3) the mapping from intended response to action. Experiment 1 and Experiment 3 disconfirmed Mappings 1 and 3, respectively, by showing that training did not alter independent measures of target localization or the action of aiming. Experiment 2 showed that the output of Mapping 2, the planned response, measured as the initial insertion angle, was corrected over trials, and the correction magnitude predicted the response to a transfer stimulus with a new represented location.

20.
Curr Dir Psychol Sci ; 17(6): 359-364, 2008 Dec.
Article in English | MEDLINE | ID: mdl-20448831

ABSTRACT

Spatial representations can be derived not only by direct perception, but also through cognitive mediation. Conventional or ex-situ ultrasound displays, which displace imaged data to a remote screen, require both types of process. To determine the depth of a target hidden beneath a surface, ultrasound users must both perceive how deeply the ultrasound transducer indents the surface and interpret the on-screen image to visualize how deeply the target lies below the transducer. Combining these perceptual and cognitive depth components requires a spatial representation that has been called amodal. We report experiments measuring errors in perceptual and cognitively mediated depth estimates and show that these estimates can be concatenated (linked) without further error, providing evidence for an amodal representation. We further contrast conventional ultrasound with an in-situ display whereby an ultrasound image appears to float at the precise location being imaged, enabling the depth of a target to be directly perceived. The research has the potential to enhance ultrasound-guided surgical intervention.
