Results 1 - 20 of 27
1.
Sensors (Basel) ; 22(14)2022 Jul 17.
Article in English | MEDLINE | ID: mdl-35891016

ABSTRACT

Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most widely adopted tools for research and prototyping. Similarly, for robotics, the open-source middleware suite Robot Operating System (ROS) is the standard development framework. Several ad hoc attempts have been made in the past to bridge the two tools, but all rely on intermediate middleware and custom interfaces, and none provides access to the full suite of tools offered by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module enables real-time visualization of robots, accommodates different robot configurations, and facilitates data transfer in both directions between ROS and Slicer. We demonstrate the system on multiple robots with different configurations, evaluate its performance, and discuss an image-guided robotic intervention that can be prototyped with this module. The module can serve as a starting point for clinical system development, reducing the need for custom interfaces and time-intensive platform setup.
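The abstract describes bidirectional data transfer between ROS2 and Slicer. As a hedged illustration of the ROS2 side only, the sketch below publishes a tool pose with the standard rclpy API; the topic and frame names are hypothetical, and the actual SlicerROS2 interface may differ from this.

```python
# Minimal rclpy sketch: publish a tool pose that a ROS2-Slicer bridge such as
# SlicerROS2 could consume. Topic and frame names are hypothetical.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import TransformStamped


class ToolPosePublisher(Node):
    def __init__(self):
        super().__init__('tool_pose_publisher')
        self.pub = self.create_publisher(TransformStamped, '/robot/tool_pose', 10)
        self.timer = self.create_timer(0.02, self.tick)  # 50 Hz update rate

    def tick(self):
        msg = TransformStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'base'
        msg.child_frame_id = 'tool_tip'
        msg.transform.translation.z = 0.1  # placeholder pose
        msg.transform.rotation.w = 1.0
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(ToolPosePublisher())


if __name__ == '__main__':
    main()
```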


Subject(s)
Robotics, Diagnostic Imaging, Software
2.
Int J Comput Assist Radiol Surg ; 19(6): 1147-1155, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38598140

ABSTRACT

PURPOSE: This paper evaluates user performance in telesurgical tasks with the da Vinci Research Kit (dVRK), comparing unilateral teleoperation, bilateral teleoperation with force sensors, and bilateral teleoperation with sensorless force estimation. METHODS: A four-channel teleoperation system with disturbance observers and sensorless force estimation with learning-based dynamic compensation was developed. Palpation experiments were conducted with 12 users who tried to locate tumors hidden in tissue phantoms with their fingers or through handheld or teleoperated laparoscopic instruments with visual, force-sensor, or sensorless force-estimation feedback. In a peg transfer experiment with 10 users, the contribution of sensorless haptic feedback with and without learning-based dynamic compensation was assessed using NASA TLX surveys, measured free-motion speeds and forces, environment interaction forces, and experiment completion times. RESULTS: The first study showed a 30% increase in accuracy in detecting tumors with sensorless haptic feedback over visual feedback, with only a 5-10% drop in accuracy compared with sensor feedback or direct instrument contact. The second study showed that sensorless feedback can reduce interaction forces due to incidental contacts by about a factor of three compared with unilateral teleoperation. The cost is an increase in free-motion forces and physical effort; we show that this can be mitigated with dynamic compensation. CONCLUSION: We demonstrate the benefits of sensorless haptic feedback in teleoperated surgery systems, especially with dynamic compensation, and show that it can improve surgical performance without hardware modifications.
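To make the sensorless estimation concrete: a disturbance-observer-style estimate compares the commanded joint torque with the torque predicted by the robot's dynamic model and low-pass filters the residual. The sketch below is a generic single step, not the paper's four-channel controller; the `model` object with M, C, and g terms is an assumed interface, and the learning-based compensation is not shown.

```python
import numpy as np


def estimate_external_torque(tau_cmd, q, qd, qdd, model, tau_prev=None, alpha=0.1):
    """One step of a simple disturbance-observer-style external torque estimate.

    model is a hypothetical dynamics interface: M(q) inertia matrix,
    C(q, qd) Coriolis matrix, g(q) gravity vector. Sign conventions vary
    between robots; this assumes tau_cmd is the applied joint torque.
    """
    tau_model = model.M(q) @ qdd + model.C(q, qd) @ qd + model.g(q)
    residual = np.asarray(tau_cmd) - tau_model          # unmodeled (external) torque
    if tau_prev is None:
        return residual
    return (1.0 - alpha) * tau_prev + alpha * residual  # first-order low-pass filter
```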


Subject(s)
Robotic Surgical Procedures, Humans, Robotic Surgical Procedures/methods, Robotic Surgical Procedures/instrumentation, Phantoms, Imaging, Equipment Design, Telemedicine/instrumentation, Palpation/methods, Palpation/instrumentation, User-Computer Interface, Feedback, Robotics/instrumentation, Robotics/methods, Laparoscopy/methods, Laparoscopy/instrumentation
3.
IEEE Robot Autom Lett ; 8(3): 1287-1294, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37997605

ABSTRACT

This paper introduces the first integrated real-time intraoperative surgical guidance system in which an endoscope camera of the da Vinci surgical robot and a transrectal ultrasound (TRUS) transducer are co-registered using photoacoustic markers detected in both fluorescence (FL) and photoacoustic (PA) imaging. The co-registered system enables the TRUS transducer to track the laser spot illuminated by a pulsed laser diode attached to the surgical instrument, providing both FL and PA images of the surgical region of interest (ROI). As a result, the generated photoacoustic marker is visualized and localized in the da Vinci endoscopic FL images, and the corresponding tracking can be conducted by rotating the TRUS transducer to display the PA image of the marker. A quantitative evaluation revealed average registration and tracking errors of 0.84 mm and 1.16°, respectively. This study shows that co-registered photoacoustic marker tracking can be deployed intraoperatively using TRUS+PA imaging, providing functional guidance of the surgical ROI.
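Co-registration from corresponding marker points is commonly solved as a least-squares rigid fit. The sketch below is the standard SVD (Kabsch/Arun) solution that such markers could feed, not necessarily the authors' exact algorithm; `src` and `dst` are corresponding Nx3 point sets in the two coordinate frames.

```python
import numpy as np


def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping Nx3 src points onto dst.

    Standard SVD (Kabsch) solution; rows of src and dst must correspond.
    """
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflection
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc
```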

4.
Front Robot AI ; 8: 747917, 2021.
Article in English | MEDLINE | ID: mdl-34926590

ABSTRACT

Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges due to significant telemetry time delay. We consider one motivating application of remote teleoperation, which is ground-based control of a robot on-orbit for satellite servicing. This paper presents a model-based architecture that: 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments to evaluate the model-based architecture, on ground-based test platforms, for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators' situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will continue to be necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.
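To give a feel for the latency regime discussed above, the sketch below imposes a fixed multi-second transport delay on a sampled command stream, one delay line per direction. The delay and sample period are assumptions for illustration, since real ground-to-orbit latency varies.

```python
from collections import deque


class DelayLine:
    """Fixed transport delay on a sampled stream (illustrative only)."""

    def __init__(self, delay_s, dt_s):
        self.buf = deque([None] * round(delay_s / dt_s))

    def step(self, sample):
        """Push the newest sample; pop the one arriving now at the far end."""
        self.buf.append(sample)
        return self.buf.popleft()


# Round trip: commands are delayed on the uplink, telemetry on the downlink,
# so the operator sees the result of an action roughly 5 s after issuing it.
uplink = DelayLine(delay_s=2.5, dt_s=0.01)
downlink = DelayLine(delay_s=2.5, dt_s=0.01)
```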

5.
Front Robot AI ; 8: 612964, 2021.
Article in English | MEDLINE | ID: mdl-34250025

ABSTRACT

Since the first reports of a novel coronavirus (SARS-CoV-2) in December 2019, over 33 million people have been infected worldwide and approximately 1 million have died from COVID-19, the disease caused by this virus. In the United States alone, there have been approximately 7 million cases and over 200,000 deaths. This outbreak has placed an enormous strain on healthcare systems and workers. Severe cases require hospital care, and 8.5% of patients require mechanical ventilation in an intensive care unit (ICU). One major challenge is the necessity for clinical care personnel to don and doff cumbersome personal protective equipment (PPE) in order to enter an ICU to make simple adjustments to ventilator settings. Although future ventilators and other ICU equipment may be controllable remotely through computer networks, the enormous installed base of existing ventilators does not have this capability. This paper reports the development of a simple, low-cost telerobotic system that permits adjustment of ventilator settings from outside the ICU. The system consists of a small Cartesian robot capable of operating a ventilator touch screen under camera-based visual control, commanded from a wirelessly connected tablet master device located outside the room. Engineering system tests demonstrated that the open-loop mechanical repeatability of the device was 7.5 mm, and that the average positioning error of the robotic finger under visual servoing control was 5.94 mm. Successful usability tests in a simulated ICU environment were carried out and are reported. In addition to enabling a significant reduction in PPE consumption, the prototype system has been shown in a preliminary evaluation to significantly reduce the total time required for a respiratory therapist to perform typical setting adjustments on a commercial ventilator, including donning and doffing PPE, from 271 to 109 s.
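The visual servoing loop maps the pixel offset between the robotic finger and the target button into a Cartesian correction. A minimal proportional step is sketched below; the gain and the millimeters-per-pixel scale are placeholders, not values from the paper.

```python
import numpy as np


def visual_servo_step(target_px, finger_px, mm_per_px, gain=0.5):
    """Proportional image-based correction: pixel error -> Cartesian move (mm).

    gain and mm_per_px are placeholder values, not from the paper.
    """
    err_px = np.asarray(target_px, float) - np.asarray(finger_px, float)
    return gain * mm_per_px * err_px  # iterate until the error is within tolerance
```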

6.
IEEE Trans Med Robot Bionics ; 2(2): 176-187, 2020 May.
Article in English | MEDLINE | ID: mdl-32699833

ABSTRACT

High-resolution real-time intraocular imaging of the retina at the cellular level is very challenging due to the vulnerable and confined space within the eyeball and the limited availability of appropriate modalities. A probe-based confocal laser endomicroscopy (pCLE) system can be a potential imaging modality for improved diagnosis. The ability to visualize the retina at the cellular level could provide information that may predict surgical outcomes. The adoption of intraocular pCLE scanning is currently limited by the narrow field of view and the micron-scale range of focus. In the absence of motion compensation, physiological tremor of the surgeon's hand and patient movement also degrade image quality. Therefore, an image-based hybrid control strategy is proposed to mitigate these challenges. The proposed strategy enables shared control of the pCLE probe between surgeon and robot to scan the retina precisely, free of hand tremor and aided by an image-based auto-focus algorithm that optimizes pCLE image quality. The hybrid control strategy is deployed in two frameworks, cooperative and teleoperated. Better image quality, smoother motion, and reduced workload are all achieved in a statistically significant manner with the hybrid control frameworks.
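The image-based auto-focus can be illustrated with a common sharpness proxy, the variance of the Laplacian, swept over candidate focus depths. The metric below is a stand-in, since the abstract does not specify the paper's focus measure, and `capture_at` is a hypothetical acquisition callback.

```python
import cv2
import numpy as np


def focus_score(frame_gray):
    """Sharpness proxy: variance of the Laplacian (higher = sharper)."""
    return cv2.Laplacian(frame_gray, cv2.CV_64F).var()


def autofocus(capture_at, depths):
    """Sweep candidate probe depths and return the sharpest one.

    capture_at is a hypothetical callback returning a grayscale frame
    acquired at the given depth.
    """
    scores = [focus_score(capture_at(z)) for z in depths]
    return depths[int(np.argmax(scores))]
```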

7.
Int J Med Robot ; 15(4): e1999, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30970387

ABSTRACT

BACKGROUND: It has been suggested that the lack of haptic feedback, formerly considered a limitation of the da Vinci robotic system, does not affect robotic surgeons because training and visual feedback compensate for it. However, conclusive studies are still missing, and interest in force reflection is rising again. METHODS: We integrated a seven-DoF master into the da Vinci Research Kit. We designed tissue grasping, palpation, and incision tasks with robotic surgeons, to be performed by three groups of users (expert surgeons, medical residents, and nonsurgeons; five users per group), either with or without haptic feedback. Task-specific quantitative metrics and a questionnaire were used for assessment. RESULTS: Force reflection made a statistically significant difference for both palpation (improved inclusion detection rate) and incision (decreased tissue damage). CONCLUSIONS: Haptic feedback can improve key surgical outcomes for tasks that impose a pronounced cognitive burden on the surgeon, possibly at the cost of longer completion times.


Subject(s)
Hand Strength, Robotic Surgical Procedures/education, Robotic Surgical Procedures/instrumentation, Surgeons, Adult, Equipment Design, Feedback, Sensory, Female, Humans, Male, Palpation, Software, Surgical Wound, Surveys and Questionnaires, Touch
8.
Healthc Technol Lett ; 5(5): 194-200, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30800322

ABSTRACT

In robot-assisted laparoscopic surgery, the first assistant (FA) is responsible for tasks such as robot docking, passing necessary materials, manipulating hand-held instruments, and helping with trocar planning and placement. The performance of the FA is critical for the outcome of the surgery. The authors introduce ARssist, an augmented reality application based on an optical see-through head-mounted display, to help the FA perform these tasks. ARssist offers (i) real-time three-dimensional rendering of the robotic instruments, hand-held instruments, and endoscope based on a hybrid tracking scheme and (ii) real-time stereo endoscopy that is configurable to suit the FA's hand-eye coordination when operating under endoscopic feedback. ARssist has the potential to help the FA work more efficiently and hence improve the outcome of robot-assisted laparoscopic surgery.
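Hybrid tracking of this kind ultimately reduces to composing rigid transforms between frames (head-mounted display, world, robot base, instrument tip). A minimal sketch with hypothetical frame names follows; the real system's frame graph is not given in the abstract.

```python
import numpy as np


def compose(*transforms):
    """Chain 4x4 homogeneous transforms left to right."""
    out = np.eye(4)
    for T in transforms:
        out = out @ T
    return out


# Hypothetical frames: render the instrument tip in the FA's HMD view by
# chaining HMD<-world, world<-robot-base, and base<-tip transforms, each
# supplied by a tracker or by the robot's kinematics.
# T_hmd_tip = compose(T_hmd_world, T_world_base, T_base_tip)
```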

9.
Comput Aided Surg ; 12(1): 2-14, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17364654

ABSTRACT

Magnetic Resonance Imaging (MRI) offers great potential for planning, guiding, monitoring and controlling interventions. MR arthrography (MRAr) is the imaging gold standard for assessing small ligament and fibrocartilage injury in joints. In contemporary practice, MRAr consists of two consecutive sessions: (1) an interventional session where a needle is driven to the joint space and MR contrast is injected under fluoroscopy or CT guidance; and (2) a diagnostic MRI session to visualize the distribution of contrast inside the joint space and evaluate the condition of the joint. Our approach to MRAr is to eliminate the separate radiologically guided needle insertion and contrast injection procedure by performing those tasks on conventional high-field closed MRI scanners. We propose a 2D augmented reality image overlay device to guide needle insertion procedures. This approach makes diagnostic high-field magnets available for interventions without a complex and expensive engineering entourage. In preclinical trials, needle insertions have been performed in the joints of porcine and human cadavers using MR image overlay guidance; in all cases, insertions successfully reached the joint space on the first attempt.


Subject(s)
Arthrography/instrumentation, Joint Diseases/diagnosis, Magnetic Resonance Imaging, Needles, Cartilage/injuries, Contrast Media, Humans, Ligaments/injuries
10.
Stud Health Technol Inform ; 125: 130-5, 2007.
Article in English | MEDLINE | ID: mdl-17377250

ABSTRACT

In order to develop accurate and effective augmented reality (AR) systems for MR- and CT-guided needle placement procedures, a comparative validation environment is necessary. Clinical equipment is prohibitively expensive and often inadequate for precise measurement. Therefore, we have developed a laboratory validation system for measuring operator performance using different assistance techniques. Electromagnetically tracked needles are registered with the preoperative plan to measure placement accuracy and the insertion path. The validation system provides an independent measure of accuracy that can be applied to methods of assistance ranging from augmented reality guidance to tracked navigation systems and autonomous robots. In preliminary studies, this validation system has been used to evaluate the performance of the image overlay, bi-plane laser guide, and traditional freehand techniques.
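Placement accuracy in such a validation system typically combines a tip-to-target distance with an angular deviation between the executed and planned needle axes. The sketch below is a generic version of these two metrics, not the authors' published definition.

```python
import numpy as np


def placement_error(tip, target, axis, planned_axis):
    """Tip-to-target distance (mm) and angular path deviation (degrees)."""
    dist = float(np.linalg.norm(np.asarray(tip, float) - np.asarray(target, float)))
    a = np.asarray(axis, float) / np.linalg.norm(axis)
    b = np.asarray(planned_axis, float) / np.linalg.norm(planned_axis)
    angle = float(np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0))))
    return dist, angle
```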


Subject(s)
Computer Simulation, Magnetic Resonance Imaging, Needles, Reproducibility of Results, Tomography, X-Ray Computed, United States
11.
Sci Robot ; 2(11)2017 Oct 25.
Article in English | MEDLINE | ID: mdl-33157887

ABSTRACT

Robot Operating System (ROS) celebrates its 10th anniversary on 7 November 2017.

12.
Stud Health Technol Inform ; 119: 150-5, 2006.
Article in English | MEDLINE | ID: mdl-16404035

ABSTRACT

Magnetic Resonance Imaging (MRI) has unmatched potential for planning, guiding, monitoring and controlling interventions. MR arthrography (MRA) is the imaging gold standard to assess small ligament and fibrocartilage injury in joints. In contemporary practice, MRA consists of two consecutive sessions: (1) an interventional session where a needle is driven to the joint space and gadolinium contrast is injected under fluoroscopy or CT guidance; and (2) a diagnostic MRI session to visualize the distribution of contrast inside the joint space and evaluate the condition of the joint. Our approach to MRA is to eliminate the separate radiologically guided needle insertion and contrast injection procedure by performing those tasks on conventional high-field closed MRI scanners. We propose a 2D augmented reality image overlay device to guide needle insertion procedures. This approach makes diagnostic high-field magnets available for interventions without a complex and expensive engineering entourage.


Subject(s)
Arthrography, Magnetic Resonance Imaging, Needles, Humans, United States, User-Computer Interface
13.
IEEE Trans Biomed Eng ; 52(8): 1415-24, 2005 Aug.
Article in English | MEDLINE | ID: mdl-16119237

ABSTRACT

We present an image overlay system to aid needle insertion procedures in computed tomography (CT) scanners. The device consists of a display and a semitransparent mirror that is mounted on the gantry. Looking at the patient through the mirror, the CT image appears to be floating inside the patient with correct size and position, thereby providing the physician with two-dimensional (2-D) "X-ray vision" to guide needle insertions. The physician inserts the needle following the optimal path identified in the CT image rendered on the display and, thus, reflected in the mirror. The system promises to reduce X-ray dose, patient discomfort, and procedure time by significantly reducing faulty insertion attempts. It may also increase needle placement accuracy. We report the design and implementation of the image overlay system followed by the results of phantom and cadaver experiments in several clinical applications.


Subject(s)
Biopsy/instrumentation, Data Display, Needles, Radiographic Image Interpretation, Computer-Assisted/instrumentation, Surgery, Computer-Assisted/instrumentation, Tomography, X-Ray Computed/instrumentation, User-Computer Interface, Biopsy/methods, Cadaver, Equipment Design, Equipment Failure Analysis, Humans, Phantoms, Imaging, Radiographic Image Interpretation, Computer-Assisted/methods, Surgery, Computer-Assisted/methods, Tomography, X-Ray Computed/methods
14.
Comput Aided Surg ; 10(4): 241-55, 2005 Jul.
Article in English | MEDLINE | ID: mdl-16393793

ABSTRACT

OBJECTIVE: We present a 2D image overlay device to assist needle placement on computed tomography (CT) scanners. MATERIALS AND METHODS: The system consists of a flat display and a semitransparent mirror mounted on the gantry. When the physician looks at the patient through the mirror, the CT image appears to be floating inside the body with correct size and position, as if the physician had 2D 'X-ray vision'. The physician draws the optimal path on the CT image. The composite image is rendered on the display and thus reflected in the mirror, where it guides the physician during the procedure. In this article, we describe the design and various embodiments of the 2D image overlay system, followed by the results of phantom and cadaver experiments in multiple clinical applications. RESULTS: Multiple skeletal targets were successfully accessed with a single insertion attempt. Liver targets were generally accessed successfully when a clear path was available, but the number of attempts and the accuracy varied when access was obstructed. Soft tissue deformation further reduced accuracy and consistency relative to skeletal targets. CONCLUSION: The system demonstrated strong potential for reducing faulty needle insertion attempts, thereby reducing X-ray dose and patient discomfort.


Subject(s)
Biopsy, Needle, Image Processing, Computer-Assisted, Radiography, Interventional/instrumentation, Tomography, X-Ray Computed, User-Computer Interface, Animals, Cadaver, Calibration, Equipment Design, Humans, Phantoms, Imaging, Swine
15.
Med Phys ; 41(9): 091712, 2014 Sep.
Article in English | MEDLINE | ID: mdl-25186387

ABSTRACT

PURPOSE: Brachytherapy is a standard option of care for prostate cancer patients but may be improved by dynamic dose calculation based on localized seed positions. The American Brachytherapy Society states that the major current limitation of intraoperative treatment planning is the inability to localize the seeds in relation to the prostate. An image-guidance system was therefore developed to localize seeds for dynamic dose calculation. METHODS: The proposed system is based on transrectal ultrasound (TRUS) and mobile C-arm fluoroscopy, while using a simple fiducial with seed-like markers to compute pose from the nonencoded C-arm. Three or more fluoroscopic images and an ultrasound volume are acquired and processed by a pipeline of algorithms: (1) seed segmentation, (2) fiducial detection with pose estimation, (3) seed matching with reconstruction, and (4) fluoroscopy-to-TRUS registration. RESULTS: The system was evaluated on ten phantom cases, resulting in an overall mean error of 1.3 mm. The system was also tested on 37 patients and each algorithm was evaluated. Seed segmentation resulted in a 1% false negative rate and 2% false positive rate. Fiducial detection with pose estimation resulted in a 98% detection rate. Seed matching with reconstruction had a mean error of 0.4 mm. Fluoroscopy-to-TRUS registration had a mean error of 1.3 mm. Moreover, a comparison of dose calculations between the authors' intraoperative method and an independent postoperative method shows small differences of 7% and 2% for D90 and V100, respectively. Finally, the system demonstrated the ability to detect cold spots and required a total processing time of approximately 1 min. CONCLUSIONS: The proposed image-guidance system is the first practical approach to dynamic dose calculation, outperforming earlier solutions in terms of robustness, ease of use, and functional completeness.
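The dosimetric endpoints quoted above have simple definitions: D90 is the dose covering 90% of the target volume, and V100 is the percentage of the target receiving at least the prescription dose. A sketch under the assumption of uniform voxel volumes:

```python
import numpy as np


def d90_v100(target_dose, rx_dose):
    """D90 (dose units) and V100 (%) from per-voxel target doses.

    Assumes target_dose is a flat array of doses for voxels inside the
    target contour, all with equal volume.
    """
    d90 = float(np.percentile(target_dose, 10))  # 90% of voxels get >= this dose
    v100 = 100.0 * float(np.mean(np.asarray(target_dose) >= rx_dose))
    return d90, v100
```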


Subject(s)
Brachytherapy/methods, Fluoroscopy/methods, Prostatic Neoplasms/radiotherapy, Radiometry/methods, Radiotherapy, Image-Guided/methods, Ultrasonography/methods, Algorithms, Fiducial Markers, Fluoroscopy/instrumentation, Humans, Image Processing, Computer-Assisted/methods, Male, Phantoms, Imaging, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Radiometry/instrumentation, Radiotherapy Planning, Computer-Assisted/instrumentation, Radiotherapy Planning, Computer-Assisted/methods, Radiotherapy, Image-Guided/instrumentation, Time, Ultrasonography/instrumentation
16.
Proc SPIE Int Soc Opt Eng ; 8671, 2013 Mar 08.
Article in English | MEDLINE | ID: mdl-24392207

ABSTRACT

The lack of dynamic dosimetry tools for permanent prostate brachytherapy causes otherwise avoidable problems in prostate cancer patient care. The goal of this work is to satisfy this need in a readily adoptable manner. Using the ubiquitous ultrasound scanner and mobile non-isocentric C-arm, we show that dynamic dosimetry is now possible with only the addition of an arbitrarily configured marker-based fiducial. Not only is the system easily assembled from accessible hardware, but it is also simple and convenient, requiring little training for technicians. Furthermore, the proposed system is built upon robust algorithms for seed segmentation, fiducial detection, seed reconstruction, and image registration. All individual steps of the pipeline have been thoroughly tested, and the system as a whole has been validated in a study of 25 patients. The system computed dose accurately with minimal manual intervention, showing promise for widespread adoption of dynamic dosimetry.

17.
J Robot Surg ; 7(3): 217-25, 2013 Sep.
Article in English | MEDLINE | ID: mdl-25525474

ABSTRACT

This paper presents the development and evaluation of video augmentation on the stereoscopic da Vinci S system with intraoperative image guidance for base-of-tongue tumor resection in transoral robotic surgery (TORS). The proposed workflow for image-guided TORS begins by identifying and segmenting critical oropharyngeal structures (e.g., the tumor and adjacent arteries and nerves) from preoperative computed tomography (CT) and/or magnetic resonance (MR) imaging. These preoperative planning data can be deformably registered to the intraoperative endoscopic view using mobile C-arm cone-beam computed tomography (CBCT) [1, 2]. Augmenting the TORS endoscopic video with surgical targets and critical structures has the potential to improve navigation, spatial orientation, and confidence in tumor resection. Experiments in animal specimens achieved a statistically significant improvement in target localization error when comparing the proposed image guidance system to simulated current practice.

18.
Med Eng Phys ; 34(1): 64-77, 2012 Jan.
Article in English | MEDLINE | ID: mdl-21802975

ABSTRACT

Prostate brachytherapy guided by transrectal ultrasound is a common treatment option for early stage prostate cancer. Prostate cancer accounts for 28% of cancer cases and 11% of cancer deaths in men, with 217,730 estimated new cases and 32,050 estimated deaths in 2010 in the United States alone. The major current limitation is the inability to reliably localize implanted radiation seeds spatially in relation to the prostate. Multimodality approaches that incorporate X-ray for seed localization have been proposed, but they require both accurate tracking of the imaging device and segmentation of the seeds. Some use image-based radiographic fiducials to track the X-ray device, but manual intervention is needed to select proper regions of interest for segmenting both the tracking fiducial and the seeds, to evaluate the segmentation results, and to correct the segmentations in the case of segmentation failure, thus requiring a significant amount of extra time in the operating room. In this paper, we present an automatic segmentation algorithm that simultaneously segments the tracking fiducial and brachytherapy seeds, thereby minimizing the need for manual intervention. In addition, through the innovative use of image processing techniques such as mathematical morphology, Hough transforms, and RANSAC, our method can detect and separate overlapping seeds that are common in brachytherapy implant images. Our algorithm was validated on 55 phantom and 206 patient images, successfully segmenting both the fiducial and seeds with a mean seed segmentation rate of 96% and sub-millimeter accuracy.
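A morphology-first candidate stage of the kind described can be sketched with OpenCV: black-hat filtering enhances the dark, radio-opaque seeds, Otsu thresholding binarizes the result, and connected components yield candidate centroids. Kernel sizes and the area cutoff below are illustrative, and the published pipeline additionally uses Hough transforms and RANSAC to separate overlapping seeds.

```python
import cv2
import numpy as np


def seed_candidates(xray_gray_u8, min_area=5):
    """Candidate seed centroids from an 8-bit fluoroscopy image.

    Black-hat filtering turns dark seeds into bright blobs; all parameters
    are illustrative, not the paper's tuned values.
    """
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
    enhanced = cv2.morphologyEx(xray_gray_u8, cv2.MORPH_BLACKHAT, kernel)
    _, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, _, stats, centroids = cv2.connectedComponentsWithStats(mask)
    return [tuple(centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] >= min_area]
```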


Subject(s)
Brachytherapy, Fiducial Markers, Image Processing, Computer-Assisted/methods, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Tomography, X-Ray Computed/standards, Automation, Humans, Image Processing, Computer-Assisted/instrumentation, Male, Microspheres, Phantoms, Imaging, Tomography, X-Ray Computed/instrumentation
19.
Med Image Anal ; 16(7): 1347-58, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22784870

ABSTRACT

Prostate brachytherapy is a treatment for prostate cancer using radioactive seeds that are permanently implanted in the prostate. The treatment success depends on adequate coverage of the target gland with a therapeutic dose, while sparing the surrounding tissue. Since seed implantation is performed under transrectal ultrasound (TRUS) imaging, intraoperative localization of the seeds in ultrasound can provide physicians with dynamic dose assessment and plan modification. However, since all the seeds cannot be seen in the ultrasound images, registration between ultrasound and fluoroscopy is a practical solution for intraoperative dosimetry. In this manuscript, we introduce a new image-based nonrigid registration method that obviates the need for manual seed segmentation in TRUS images and compensates for the prostate displacement and deformation due to TRUS probe pressure. First, we filter the ultrasound images for subsequent registration using thresholding and Gaussian blurring. Second, a computationally efficient point-to-volume similarity metric, an affine transformation and an evolutionary optimizer are used in the registration loop. A phantom study showed final registration errors of 0.84 ± 0.45 mm compared to ground truth. In a study on data from 10 patients, the registration algorithm showed overall seed-to-seed errors of 1.7 ± 1.0 mm and 1.5 ± 0.9 mm for rigid and nonrigid registration methods, respectively, performed in approximately 30 s per patient.
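The point-to-volume similarity at the heart of this registration can be sketched as sampling the filtered ultrasound volume at the transformed seed positions and summing the intensities; an optimizer then searches for transform parameters that maximize this score. Nearest-neighbor sampling, a zero volume origin, and the axis conventions below are simplifying assumptions.

```python
import numpy as np


def point_to_volume_similarity(seeds_mm, volume, spacing_mm, T=np.eye(4)):
    """Sum of volume intensities at transformed seed positions.

    seeds_mm: Nx3 seed coordinates (mm); volume: 3D array indexed (i, j, k);
    spacing_mm: voxel size per axis; T: 4x4 affine candidate.
    Out-of-bounds points contribute 0.
    """
    pts = np.c_[np.asarray(seeds_mm, float), np.ones(len(seeds_mm))] @ T.T
    idx = np.round(pts[:, :3] / np.asarray(spacing_mm, float)).astype(int)
    score = 0.0
    for i, j, k in idx:
        if all(0 <= n < s for n, s in zip((i, j, k), volume.shape)):
            score += float(volume[i, j, k])
    return score
```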


Subject(s)
Brachytherapy/methods, Prostatic Neoplasms/diagnosis, Prostatic Neoplasms/radiotherapy, Radiotherapy Planning, Computer-Assisted/methods, Radiotherapy, Image-Guided/methods, Tomography, X-Ray Computed/methods, Ultrasonography/methods, Humans, Male, Radiometry/methods, Radiotherapy Dosage, Reproducibility of Results, Sensitivity and Specificity, Subtraction Technique
20.
Med Image Comput Comput Assist Interv ; 14(Pt 2): 615-22, 2011.
Article in English | MEDLINE | ID: mdl-21995080

ABSTRACT

Ultrasound-fluoroscopy fusion is a key step toward intraoperative dosimetry for prostate brachytherapy. We propose a method for intensity-based registration of fluoroscopy to ultrasound that obviates the need for the seed segmentation required by seed-based registration. We employ image thresholding and morphological and Gaussian filtering to enhance the image intensity distribution of the ultrasound volume. Finally, we find the registration parameters by maximizing a point-to-volume similarity metric. We conducted an experiment on a ground truth phantom and achieved a registration error of 0.7 ± 0.2 mm. Our clinical results on 5 patient data sets show excellent visual agreement between the registered seeds and the ultrasound volume, with a seed-to-seed registration error of 1.8 ± 0.9 mm. With low registration error, high computational speed, and no need for manual seed segmentation, our method is promising for clinical application.


Subject(s)
Brachytherapy/methods, Prostate/pathology, Prostatic Neoplasms/radiotherapy, Radiotherapy, Computer-Assisted/methods, Algorithms, Computers, Humans, Male, Models, Statistical, Normal Distribution, Phantoms, Imaging, Reproducibility of Results, Software