Results 1 - 15 of 15
1.
J Vasc Interv Radiol ; 34(8): 1319-1323, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37142215

ABSTRACT

This study assessed the feasibility and functionality of the use of a high-speed image fusion technology to generate and display positron emission tomography (PET)/computed tomography (CT) fluoroscopic images during PET/CT-guided tumor ablation procedures. Thirteen patients underwent 14 PET/CT-guided ablations for the treatment of 20 tumors. A Food and Drug Administration-cleared multimodal image fusion platform received images pushed from a scanner, followed by near-real-time, nonrigid image registration. The most recent intraprocedural PET dataset was fused to each single-rotation CT fluoroscopy dataset as it arrived, and the fused images were displayed on an in-room monitor. PET/CT fluoroscopic images were generated and displayed in all procedures and enabled more confident targeting in 3 procedures. The mean lag time from CT fluoroscopic image acquisition to the in-room display of the fused PET/CT fluoroscopic image was 21 seconds ± 8. The registration accuracy was visually satisfactory in 13 of 14 procedures. In conclusion, PET/CT fluoroscopy was feasible and may have the potential to facilitate PET/CT-guided procedures.
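
The core loop here is simple to express: each incoming CT fluoroscopy frame becomes the fixed image onto which the most recent PET volume is resampled and blended. A minimal Python sketch using SimpleITK follows; the file names, blending weights, and the identity placeholder transform are illustrative assumptions, not the cleared platform's actual pipeline.

```python
import SimpleITK as sitk

def fuse_pet_to_ct_fluoro(pet_path, ct_fluoro_path, transform):
    """Resample the latest PET volume onto an incoming CT fluoroscopy
    frame and return a blended overlay for in-room display."""
    pet = sitk.ReadImage(pet_path, sitk.sitkFloat32)
    ct = sitk.ReadImage(ct_fluoro_path, sitk.sitkFloat32)

    # Warp PET into the CT fluoroscopy geometry using the registration
    # result; linear interpolation keeps this step fast.
    pet_on_ct = sitk.Resample(pet, ct, transform, sitk.sitkLinear, 0.0)

    # Normalize both to [0, 1] and alpha-blend (PET as a 40% overlay).
    ct_n = sitk.RescaleIntensity(ct, 0.0, 1.0)
    pet_n = sitk.RescaleIntensity(pet_on_ct, 0.0, 1.0)
    return 0.6 * ct_n + 0.4 * pet_n

# Example with an identity transform standing in for the nonrigid result:
# overlay = fuse_pet_to_ct_fluoro("pet.nii.gz", "ct_fluoro.nii.gz",
#                                 sitk.Transform(3, sitk.sitkIdentity))
```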


Subject(s)
Neoplasms, Positron Emission Tomography Computed Tomography, Humans, Tomography, X-Ray Computed/methods, Fluoroscopy, Positron-Emission Tomography/methods
2.
Int J Comput Assist Radiol Surg ; 17(2): 385-391, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34817764

ABSTRACT

PURPOSE: Microsoft HoloLens is a pair of augmented reality (AR) smart glasses that could improve the intraprocedural visualization of ultrasound-guided procedures. With the wearable HoloLens headset, an ultrasound image can be virtually rendered, registered with the ultrasound transducer, and placed directly in the practitioner's field of view. METHODS: A custom application, called HoloUS, was developed using the HoloLens and a portable ultrasound machine connected through a wireless network. A custom 3D-printed case with an AR pattern on the ultrasound transducer enabled ultrasound image tracking and registration. Voice controls on the HoloLens supported scaling and movement of the ultrasound image as desired. The ultrasound images were streamed and displayed in real time. A user study was performed to assess the effectiveness of the HoloLens as an alternative display platform: novices and experts were timed on tasks involving targeting simulated vessels with a needle in a custom phantom. RESULTS: Technical characterization of the HoloUS app was conducted using frame rate, tracking accuracy, and latency as performance metrics. The app ran at 25 frames/s, had an 80-ms latency, and could track the transducer with an average reprojection error of 0.0435 pixels. With AR visualization, the novices' times improved by 17%, whereas the experts' times improved only slightly, by 5%, which may reflect the experts' training and experience bias. CONCLUSION: The HoloUS application was found to enhance user experience and simplify hand-eye coordination. By eliminating the need to alternately observe the patient and the ultrasound images presented on a separate monitor, the proposed AR application has the potential to improve the efficiency and effectiveness of ultrasound-guided procedures.
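
As a rough illustration of the marker-based transducer tracking described above, the sketch below detects an ArUco marker and solves its 6-DOF pose with OpenCV (the classic cv2.aruco interface; OpenCV 4.7+ wraps it in an ArucoDetector class). The intrinsics, marker size, and dictionary are assumptions, not the HoloUS implementation.

```python
import cv2
import numpy as np

K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])  # assumed
dist = np.zeros(5)                                                # assumed
MARKER_LEN = 0.02  # marker side length in meters (assumed)

# 3-D corners of one marker in its own coordinate frame.
obj_pts = np.array([[-1, 1, 0], [1, 1, 0], [1, -1, 0], [-1, -1, 0]],
                   dtype=np.float64) * (MARKER_LEN / 2)
aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def track_transducer(frame):
    """Return the marker pose (rvec, tvec) in camera coordinates, or
    None when the pattern is not detected in this frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return None
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners[0].reshape(4, 2), K, dist)
    return (rvec, tvec) if ok else None
```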


Subject(s)
Augmented Reality, Humans, Needles, Phantoms, Imaging, Ultrasonography, Ultrasonography, Interventional
3.
J Digit Imaging ; 34(6): 1376-1386, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34647199

ABSTRACT

Overlaying preprocedural images on intraprocedural images benefits interventional procedures by revealing structures that intraprocedural imaging alone cannot show. However, image artifacts, respiratory motion, and challenging scenarios can limit the accuracy of the multimodality image registration required before image overlay. Ensuring the accuracy of registration during interventional procedures is therefore critically important. The goal of this study was to develop a novel framework that can accurately assess the quality (i.e., accuracy) of nonrigid multimodality image registration in near real time. We constructed a solution using registration quality metrics that can be computed rapidly and combined into a single binary assessment of image registration quality as either successful or poor. Based on expert-generated quality metrics as ground truth, we used a supervised learning method to train and test this system on existing clinical data. Using the trained quality classifier, the proposed framework identified successful image registration cases with an accuracy of 81.5%. The current implementation produced the classification result in 5.5 s, fast enough for typical interventional radiology procedures. Using supervised learning, we have shown that the described framework could give a clinician confirmation of, or caution about, registration results during clinical procedures.
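
The classification stage can be pictured as a standard supervised pipeline: a short vector of rapidly computable quality metrics per registration case, expert success/poor labels, and a trained binary classifier. The sketch below uses scikit-learn with synthetic stand-in features; the metric names and model choice are illustrative, not the paper's exact design.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# One row per case, e.g. [local NCC, mutual information, inverse-consistency
# error]; synthetic values here, with labels derived from the first feature
# as a stand-in for expert ground truth (1 = successful registration).
X = rng.random((200, 3))
y = (X[:, 0] > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```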


Subject(s)
Diagnostic Imaging, Supervised Machine Learning, Algorithms, Humans, Image Processing, Computer-Assisted, Motion
4.
J Med Imaging (Bellingham) ; 8(1): 015001, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33585664

ABSTRACT

Purpose: The purpose of this work was to develop a new method of tracking a laparoscopic ultrasound (LUS) transducer in laparoscopic video by combining hardware-based [e.g., electromagnetic (EM)] and computer vision-based (e.g., ArUco) tracking methods. Approach: We developed a special tracking mount for the imaging tip of the LUS transducer. The mount incorporated an EM sensor and an ArUco pattern registered to it. The hybrid method used ArUco tracking for ArUco-success frames (i.e., frames in which ArUco detects the pattern) and corrected EM tracking for ArUco-failure frames. The corrected EM tracking result was obtained by applying correction matrices to the original EM tracking result. The correction matrices were calculated in previous ArUco-success frames by comparing the ArUco result with the original EM tracking result. Results: We performed phantom and animal studies to evaluate the performance of our hybrid tracking method. The corrected EM tracking results showed significant improvements over the original EM tracking results. In the animal study, 59.2% of frames were ArUco-success frames. For the ArUco-failure frames, the mean reprojection errors of the original and the corrected EM tracking methods were 30.8 pixels and 10.3 pixels, respectively. Conclusions: The new hybrid method is more reliable than ArUco tracking alone and more accurate and practical than EM tracking alone for tracking the LUS transducer in the laparoscope camera image. The proposed method has the potential to significantly improve tracking performance for LUS-based augmented reality applications.
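
The correction-matrix idea reduces to a few lines of linear algebra: in an ArUco-success frame, store the matrix that maps the EM pose onto the ArUco pose; in an ArUco-failure frame, apply the stored matrix to the current EM pose. A hedged NumPy sketch, assuming 4x4 homogeneous camera-from-tool poses:

```python
import numpy as np

correction = np.eye(4)  # refreshed on every ArUco-success frame

def hybrid_pose(T_aruco, T_em):
    """Return the tracked pose: ArUco when available, corrected EM otherwise.

    T_aruco -- 4x4 pose from ArUco detection, or None on failure.
    T_em    -- 4x4 pose from the EM tracker (assumed always available).
    """
    global correction
    if T_aruco is not None:
        # ArUco-success frame: store the matrix that maps the EM pose
        # onto the ArUco pose, so correction @ T_em == T_aruco.
        correction = T_aruco @ np.linalg.inv(T_em)
        return T_aruco
    # ArUco-failure frame: apply the most recent correction to EM.
    return correction @ T_em
```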

5.
Int J Comput Assist Radiol Surg ; 15(5): 803-810, 2020 May.
Article in English | MEDLINE | ID: mdl-32323211

ABSTRACT

PURPOSE: For laparoscopic ablation to be successful, accurate placement of the needle in the tumor is essential. Laparoscopic ultrasound is an essential tool to guide needle placement, but the ultrasound image is generally presented separately from the laparoscopic image. We aimed to evaluate an augmented reality (AR) system that combines the laparoscopic ultrasound image, laparoscope video, and the needle trajectory in a unified view. METHODS: We created a tissue phantom made of gelatin. Artificial tumors, represented by plastic spheres, were secured in the gelatin at various depths. The top point of each sphere's surface was our target, and its 3D coordinates were known. Participants were invited to perform needle placement with and without AR guidance. Once the participant reported that the needle tip had reached the target, the needle tip location was recorded and compared with the ground-truth location of the target; the difference was the target localization error (TLE). The needle placement time was also recorded. We further tested the technical feasibility of the AR system in vivo on a 40-kg swine. RESULTS: The AR guidance system was evaluated by two experienced surgeons and two surgical fellows. The users performed needle placement on a total of 26 targets, 13 with AR and 13 without (i.e., the conventional approach). The average TLE for the conventional and AR approaches was 14.9 mm and 11.1 mm, respectively. The average needle placement time for the conventional and AR approaches was 59.4 s and 22.9 s, respectively. In the animal study, the ultrasound image and needle trajectory were successfully fused with the laparoscopic video in real time and presented on a single screen for the surgeons. CONCLUSION: By displaying the projected needle trajectory, we believe our AR system can help the surgeon achieve more efficient and precise needle placement.
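
The TLE metric used here is simply the Euclidean distance between the recorded needle-tip position and the known target position in a common coordinate frame, e.g.:

```python
import numpy as np

def target_localization_error(tip_xyz, target_xyz):
    """TLE in the units of the input coordinates (e.g., mm)."""
    return float(np.linalg.norm(np.asarray(tip_xyz) -
                                np.asarray(target_xyz)))

print(target_localization_error([10.0, 2.0, 5.0], [12.0, 3.0, 4.5]))  # ~2.29
```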


Subject(s)
Augmented Reality, Laparoscopy/methods, Liver Neoplasms/surgery, Radiofrequency Ablation/methods, Ultrasonography, Interventional/methods, Animals, Phantoms, Imaging, Swine
6.
J Laparoendosc Adv Surg Tech A ; 29(1): 88-93, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30192172

ABSTRACT

INTRODUCTION: Intraoperative imaging, such as ultrasound, provides subsurface anatomical information not seen by standard laparoscopy. Currently, information from the two modalities may only be integrated in the surgeon's mind, an often distracting and inefficient task. The desire to improve intraoperative efficiency has guided the development of a novel, augmented reality (AR) laparoscopic system that integrates, in real time, laparoscopic ultrasound (LUS) images with the laparoscopic video. This study shows the initial application of this system for laparoscopic hepatic wedge resection in a porcine model. MATERIALS AND METHODS: The AR system consists of a standard laparoscopy setup, LUS scanner, electromagnetic tracking system, and a laptop computer for image fusion. Two liver lesions created in a 40-kg swine by radiofrequency ablation (RFA) were resected using the novel AR system and under standard laparoscopy. RESULTS: Anatomical details from the LUS were successfully fused with the laparoscopic video in real time and presented on a single screen for the surgeons. The RFA lesions created were 2.5 and 1 cm in diameter. The 2.5 cm lesion was resected under AR guidance, taking about 7 minutes until completion, while the 1 cm lesion required 3 minutes using standard laparoscopy and ultrasound. Resection margins of both lesions grossly showed noncoagulated liver parenchyma, indicating a negative-margin resection. CONCLUSIONS: The use of our AR system in laparoscopic hepatic wedge resection in a swine provided real-time integration of ultrasound image with standard laparoscopy. With more experience and testing, this system can be used for other laparoscopic procedures.


Subject(s)
Hepatectomy/methods, Image Processing, Computer-Assisted, Laparoscopy/methods, Ultrasonography, Animals, Female, Margins of Excision, Multimodal Imaging, Operative Time, Swine
7.
Healthc Technol Lett ; 6(6): 231-236, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038863

ABSTRACT

Surgical tool tracking has a variety of applications in different surgical scenarios. Electromagnetic (EM) tracking can be utilised for tool tracking, but the accuracy is often limited by magnetic interference. Vision-based methods have also been suggested; however, tracking robustness is limited by specular reflection, occlusions, and blurriness observed in the endoscopic image. Recently, deep learning-based methods have shown competitive performance on segmentation and tracking of surgical tools. The main bottleneck of these methods lies in acquiring a sufficient amount of pixel-wise, annotated training data, which incurs substantial labour costs. To tackle this issue, the authors propose a weakly supervised method for surgical tool segmentation and tracking based on hybrid sensor systems. They first generate semantic labellings using EM tracking and laparoscopic image processing concurrently. They then train a lightweight deep segmentation network to obtain a binary segmentation mask that enables tool tracking. To the authors' knowledge, the proposed method is the first to integrate EM tracking and laparoscopic image processing for the generation of training labels. They demonstrate that their framework achieves accurate, automatic tool segmentation (i.e. without any manual labelling of the surgical tool to be tracked) and robust tool tracking in laparoscopic image sequences.
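
One plausible way to realize the label-generation step is to project the EM-tracked tool into the image and let a classical segmentation routine grow a mask from that seed. The sketch below uses OpenCV's GrabCut for the refinement; the projection inputs and seed size are assumptions, and GrabCut stands in for the paper's unspecified image-processing pipeline.

```python
import cv2
import numpy as np

def weak_tool_label(frame, tip_3d, rvec, tvec, K, dist, seed_px=40):
    """Return a rough binary tool mask for a BGR frame, seeded by the
    EM-tracked tool tip projected into pixel coordinates."""
    pts, _ = cv2.projectPoints(np.float64([tip_3d]), rvec, tvec, K, dist)
    u, v = pts.reshape(2).astype(int)

    # GrabCut: a rectangle around the projected tip marks probable tool.
    h, w = frame.shape[:2]
    x0, y0 = max(u - seed_px, 0), max(v - seed_px, 0)
    rect = (x0, y0, min(2 * seed_px, w - x0 - 1), min(2 * seed_px, h - y0 - 1))
    mask = np.zeros((h, w), np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    return ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
```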

8.
Phys Med Biol ; 62(3): 927-947, 2017 Feb 7.
Article in English | MEDLINE | ID: mdl-28074785

ABSTRACT

Cone-beam CT (CBCT) is a widely used intraoperative imaging modality in image-guided radiotherapy and surgery. A short scan followed by filtered backprojection is typically used for CBCT reconstruction. While data on the mid-plane (the plane of source-detector rotation) are complete, planes away from the mid-plane suffer varying degrees of data incompleteness, and their reconstructions are only approximate. The resulting reconstruction artifacts vary with slice location and therefore impede accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction and registration steps are repeated in an alternating fashion until the result image converges. We integrated the intensity matching into three deformable registration methods widely used for CT-CBCT registration: B-spline, demons, and optical flow. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross-correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate registration performance. Our method produced an overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26-2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.
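
A condensed sketch of one correct-then-register pass is shown below, following SimpleITK's standard demons example. SimpleITK's global HistogramMatching stands in for the paper's slice-wise local intensity matching, and the demons filter for its three GPU-accelerated methods; the paper alternates these two steps until convergence.

```python
import SimpleITK as sitk

def correct_and_register(ct, cbct):
    """One intensity-correction + demons-registration pass; returns the
    displacement-field transform aligning the CBCT to the CT."""
    ct_f = sitk.Cast(ct, sitk.sitkFloat32)
    cbct_f = sitk.Cast(cbct, sitk.sitkFloat32)

    # 1) Intensity correction: pull CBCT intensities toward CT's histogram
    #    (global matching here; the paper matches local slice histograms).
    corrected = sitk.HistogramMatching(cbct_f, ct_f, 1024, 64)

    # 2) Deformable registration of the corrected CBCT to the CT.
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(100)
    demons.SetStandardDeviations(1.0)
    field = demons.Execute(ct_f, corrected)
    return sitk.DisplacementFieldTransform(field)
```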


Subject(s)
Cone-Beam Computed Tomography/methods, Head and Neck Neoplasms/radiotherapy, Image Processing, Computer-Assisted/methods, Radiotherapy, Image-Guided/methods, Algorithms, Artifacts, Humans
9.
J Med Imaging (Bellingham) ; 3(4): 045001, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27752522

ABSTRACT

The purpose of this work was to develop a clinically viable laparoscopic augmented reality (AR) system employing stereoscopic (3-D) vision, laparoscopic ultrasound (LUS), and electromagnetic (EM) tracking to achieve image registration. We investigated clinically feasible solutions to mount the EM sensors on the 3-D laparoscope and the LUS probe. This led to a solution of integrating an externally attached EM sensor near the imaging tip of the LUS probe, only slightly increasing the overall diameter of the probe. Likewise, a solution for mounting an EM sensor on the handle of the 3-D laparoscope was proposed. The spatial image-to-video registration accuracy of the AR system was measured to be [Formula: see text] and [Formula: see text] for the left- and right-eye channels, respectively. The AR system contributed 58-ms latency to stereoscopic visualization. We further performed an animal experiment to demonstrate the use of the system as a visualization approach for laparoscopic procedures. In conclusion, we have developed an integrated, compact, and EM tracking-based stereoscopic AR visualization system, which has the potential for clinical use. The system has been demonstrated to achieve clinically acceptable accuracy and latency. This work is a critical step toward clinical translation of AR visualization for laparoscopic procedures.

10.
Med Phys ; 43(10): 5339, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27782691

ABSTRACT

PURPOSE: Accurate tracking of anatomical changes and computation of the dose actually delivered to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for clinical adoption, because ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (scuda) that can be seamlessly integrated into the clinical workflow. METHODS: scuda consists of deformable image registration (DIR), segmentation, and dose computation modules, and a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases, from which it automatically queries/retrieves patient images, the radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both DIR and dose computation modules are accelerated by a graphics processing unit. RESULTS: The cumulative dose computation process was validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean ± STD) absolute mean dose differences between the planned and actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s/fraction and 17 min for a 35-fraction treatment, including the additional computation for dose accumulation. CONCLUSIONS: The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of the dose actually delivered to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of the platform for monitoring treatment quality and detecting significant dosimetric variations, which are keys to successful ART.
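
The dose-accumulation step itself is conceptually compact: warp each fraction's dose grid into planning-CT space with the corresponding DIR result and sum. A minimal SimpleITK sketch, assuming per-fraction dose images and displacement fields that map planning-CT points to each day's anatomy:

```python
import SimpleITK as sitk

def accumulate_dose(plan_ct, daily_doses, fields):
    """Sum per-fraction doses after warping each into planning-CT space.

    daily_doses -- per-fraction dose grids (SimpleITK images)
    fields      -- displacement fields mapping planning-CT points to the
                   corresponding day's anatomy
    """
    total = sitk.Image(plan_ct.GetSize(), sitk.sitkFloat32)
    total.CopyInformation(plan_ct)
    for dose, field in zip(daily_doses, fields):
        tx = sitk.DisplacementFieldTransform(field)
        total = total + sitk.Resample(sitk.Cast(dose, sitk.sitkFloat32),
                                      plan_ct, tx, sitk.sitkLinear, 0.0)
    return total
```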


Subject(s)
Radiation Dosage, Radiotherapy, Image-Guided/methods, Software, Cone-Beam Computed Tomography, Head and Neck Neoplasms/diagnostic imaging, Head and Neck Neoplasms/radiotherapy, Humans, Image Processing, Computer-Assisted, Radiotherapy Dosage, Radiotherapy Planning, Computer-Assisted
11.
Pediatr Radiol ; 46(11): 1552-61, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27380195

ABSTRACT

BACKGROUND: With the introduction of hybrid positron emission tomography/magnetic resonance imaging (PET/MRI), a new imaging option to acquire multimodality images with complementary anatomical and functional information has become available. Compared with hybrid PET/computed tomography (CT), hybrid PET/MRI is capable of providing superior anatomical detail while removing the radiation exposure associated with CT. The early adoption of hybrid PET/MRI, however, has been limited. OBJECTIVE: To provide a viable alternative to the hybrid PET/MRI hardware by validating a software-based solution for PET-MR image coregistration. MATERIALS AND METHODS: A fully automated, graphics processing unit-accelerated 3-D deformable image registration technique was used to align PET (acquired as PET/CT) and MR image pairs of 17 patients (age range: 10 months-21 years, mean: 10 years) who underwent PET/CT and body MRI (chest, abdomen or pelvis), which were performed within a 28-day (mean: 10.5 days) interval. MRI data for most of these cases included single-station post-contrast axial T1-weighted images. Following registration, maximum standardized uptake value (SUVmax) values observed in coregistered PET (cPET) and the original PET were compared for 82 volumes of interest. In addition, we calculated the target registration error as a measure of the quality of image coregistration, and evaluated the algorithm's performance in the context of interexpert variability. RESULTS: The coregistration execution time averaged 97±45 s. The overall relative SUVmax difference was 7% between cPET-MRI and PET/CT. The average target registration error was 10.7±6.6 mm, which compared favorably with the typical voxel size (diagonal distance) of 8.0 mm (typical resolution: 0.66 mm × 0.66 mm × 8 mm) for MRI and 6.1 mm (typical resolution: 3.65 mm × 3.65 mm × 3.27 mm) for PET. The variability in landmark identification did not show statistically significant differences between the algorithm and a typical expert. CONCLUSION: We have presented a software-based solution that achieves the many benefits of hybrid PET/MRI scanners without actually needing one. The method proved to be accurate and potentially clinically useful.


Subject(s)
Image Interpretation, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Multimodal Imaging, Positron-Emission Tomography/methods, Software, Adolescent, Algorithms, Child, Child, Preschool, Female, Humans, Infant, Male, Retrospective Studies, Tomography, X-Ray Computed, Young Adult
12.
Int J Comput Assist Radiol Surg ; 11(6): 1163-71, 2016 Jun.
Article in English | MEDLINE | ID: mdl-27250853

ABSTRACT

PURPOSE: Common camera calibration methods employed in current laparoscopic augmented reality systems require the acquisition of multiple images of an entire checkerboard pattern from various poses. This lengthy procedure prevents performing laparoscope calibration in the operating room (OR). The purpose of this work was to develop a fast calibration method for electromagnetically (EM) tracked laparoscopes, such that the calibration can be performed in the OR on demand. METHODS: We designed a mechanical tracking mount to uniquely and snugly position an EM sensor at an appropriate location on a conventional laparoscope. A tool named fCalib was developed to calibrate intrinsic camera parameters, distortion coefficients, and extrinsic parameters (the transformation between the scope lens coordinate system and the EM sensor coordinate system) using a single image that shows an arbitrary portion of a special target pattern. For quick evaluation of calibration results in the OR, we integrated a tube phantom with the fCalib prototype and overlaid a virtual representation of the tube on the live video scene. RESULTS: We compared the spatial target registration error between the common OpenCV method and the fCalib method in a laboratory setting. In addition, we compared the calibration re-projection error between EM tracking-based fCalib and optical tracking-based fCalib in a clinical setting. Our results suggest that the proposed method is comparable to the OpenCV method. However, changing the environment, e.g., inserting or removing surgical tools, might affect re-projection accuracy for the EM tracking-based approach. Computational time of the fCalib method averaged 14.0 s (range: 3.5-22.7 s). CONCLUSIONS: We developed and validated a prototype for fast calibration and evaluation of EM-tracked conventional (forward-viewing) laparoscopes. The calibration method achieved acceptable accuracy and was relatively fast and easy to perform in the OR on demand.
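
For context, the conventional OpenCV procedure that fCalib is compared against looks like the following multi-image checkerboard flow; the board geometry and square size are assumptions. fCalib's single-image approach replaces this loop with one view of its special target.

```python
import cv2
import numpy as np

BOARD = (9, 6)   # inner corners per row/column (assumed)
SQUARE = 0.01    # square size in meters (assumed)

objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE

def calibrate(images):
    """Estimate intrinsics and distortion from several checkerboard views."""
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, BOARD)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size,
                                             None, None)
    return rms, K, dist
```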


Subject(s)
Equipment Design, Laparoscopes, Calibration, Electromagnetic Phenomena, Humans, Laparoscopy, Phantoms, Imaging, User-Computer Interface
13.
IEEE J Transl Eng Health Med ; 4: 4300311, 2016.
Article in English | MEDLINE | ID: mdl-32520000

ABSTRACT

The images generated during radiation oncology treatments provide a valuable resource for analysis aimed at personalized therapy, outcomes prediction, and treatment margin optimization. Deformable image registration (DIR) is an essential tool in analyzing these images. This paper enhances and examines DIR through two contributions: 1) implementing and investigating a cloud- and graphics processing unit (GPU)-accelerated DIR solution and 2) assessing the accuracy and flexibility of that solution on planning computed tomography (CT) with cone-beam CT (CBCT). Registering planning CTs and CBCTs aids in monitoring tumors, tracking body changes, and assuring that the treatment is executed as planned. This provides significant information not only at the level of a single patient but also for an oncology department. However, traditional methods for DIR are usually time consuming, and manual intervention is sometimes required even for a single registration. In this paper, we present a cloud-based solution to increase data analysis throughput, so that treatment tracking results may be delivered at the time of care. We assess our solution in terms of accuracy and flexibility compared with a commercial tool registering CT with CBCT. The latency of a previously reported mutual information-based DIR algorithm, consisting of rigid registration followed by volume subdivision-based nonrigid registration, was reduced with GPUs for a single registration, and the throughput of the system was accelerated on the cloud for hundreds of data analysis pairs. Nine clinical cases of head and neck cancer patients were used to quantitatively evaluate accuracy and throughput. Target registration error (TRE) and the structural similarity index were used as evaluation metrics for registration accuracy, and the total computation time, consisting of preprocessing the data, running the registration, and analyzing the results, was used to evaluate system throughput. Evaluation showed that the average TRE of GPU-accelerated DIR for each of the nine patients ranged from 1.99 to 3.39 mm, lower than the voxel dimension. The total processing time for 282 pairs on an Amazon Web Services cloud of 20 GPU-enabled nodes was less than an hour. Beyond the original registration, the cloud resources also included automatic registration quality checks with minimal impact on timing. Clinical data were used in the quantitative evaluations, and the results show that the presented method holds great potential for many high-impact clinical applications in radiation oncology, including adaptive radiotherapy, patient outcomes prediction, and treatment margin optimization.
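
Because the registration pairs are independent, the throughput gain comes from simple fan-out. The sketch below simulates that fan-out with a local process pool; run_registration is a placeholder for the GPU-accelerated DIR and its automatic quality check, not the paper's actual cloud orchestration.

```python
from concurrent.futures import ProcessPoolExecutor

def run_registration(pair):
    """Placeholder for GPU-accelerated DIR plus its quality check."""
    ct_path, cbct_path = pair
    # ... preprocess, register, compute TRE/SSIM quality metrics ...
    return {"pair": pair, "status": "done"}

def process_batch(pairs, workers=20):
    """Fan independent registration pairs out across worker processes."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_registration, pairs))

# e.g., results = process_batch([(f"ct_{i}.nii", f"cbct_{i}.nii")
#                                for i in range(282)])
```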

14.
Acad Radiol ; 22(6): 722-33, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25784325

ABSTRACT

RATIONALE AND OBJECTIVES: Accuracy and speed are essential for the intraprocedural nonrigid magnetic resonance (MR) to computed tomography (CT) image registration in the assessment of tumor margins during CT-guided liver tumor ablations. Although both accuracy and speed can be improved by limiting the registration to a region of interest (ROI), manual contouring of the ROI prolongs the registration process substantially. To achieve accurate and fast registration without the use of an ROI, we combined a nonrigid registration technique on the basis of volume subdivision with hardware acceleration using a graphics processing unit (GPU). We compared the registration accuracy and processing time of GPU-accelerated volume subdivision-based nonrigid registration technique to the conventional nonrigid B-spline registration technique. MATERIALS AND METHODS: Fourteen image data sets of preprocedural MR and intraprocedural CT images for percutaneous CT-guided liver tumor ablations were obtained. Each set of images was registered using the GPU-accelerated volume subdivision technique and the B-spline technique. Manual contouring of ROI was used only for the B-spline technique. Registration accuracies (Dice similarity coefficient [DSC] and 95% Hausdorff distance [HD]) and total processing time including contouring of ROIs and computation were compared using a paired Student t test. RESULTS: Accuracies of the GPU-accelerated registrations and B-spline registrations, respectively, were 88.3 ± 3.7% versus 89.3 ± 4.9% (P = .41) for DSC and 13.1 ± 5.2 versus 11.4 ± 6.3 mm (P = .15) for HD. Total processing time of the GPU-accelerated registration and B-spline registration techniques was 88 ± 14 versus 557 ± 116 seconds (P < .000000002), respectively; there was no significant difference in computation time despite the difference in the complexity of the algorithms (P = .71). CONCLUSIONS: The GPU-accelerated volume subdivision technique was as accurate as the B-spline technique and required significantly less processing time. The GPU-accelerated volume subdivision technique may enable the implementation of nonrigid registration into routine clinical practice.
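
The two accuracy metrics reported above can be computed directly from binary masks of the registered structures, as in this SimpleITK sketch; note that the built-in filter returns the full Hausdorff distance, whereas the paper uses the 95th-percentile variant.

```python
import SimpleITK as sitk
from scipy import stats

def dsc_and_hd(mask_a, mask_b):
    """DSC and (full) Hausdorff distance between two binary label images."""
    overlap = sitk.LabelOverlapMeasuresImageFilter()
    overlap.Execute(mask_a, mask_b)
    hd = sitk.HausdorffDistanceImageFilter()
    hd.Execute(mask_a, mask_b)
    return overlap.GetDiceCoefficient(), hd.GetHausdorffDistance()

# Paired comparison over per-case values from the two techniques:
# t_stat, p_value = stats.ttest_rel(dsc_gpu_cases, dsc_bspline_cases)
```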


Subject(s)
Catheter Ablation, Image Processing, Computer-Assisted/methods, Liver Neoplasms/surgery, Magnetic Resonance Imaging, Radiography, Interventional, Tomography, X-Ray Computed, Aged, Aged, 80 and over, Female, Humans, Liver/diagnostic imaging, Liver/pathology, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/pathology, Male, Middle Aged, Reproducibility of Results, Retrospective Studies
15.
Med Phys ; 34(7): 3054-66, 2007 Jul.
Article in English | MEDLINE | ID: mdl-17822013

ABSTRACT

Conventional radiotherapy is planned using free-breathing computed tomography (CT), ignoring the motion and deformation of the anatomy from respiration. New breath-hold-synchronized, gated, and four-dimensional (4D) CT acquisition strategies are enabling radiotherapy planning based on a set of CT scans belonging to different phases of the breathing cycle. Such 4D treatment planning relies on the availability of tumor and organ contours in all phases. The current practice of manual segmentation is impractical for 4D CT because it is time consuming and tedious. A viable solution is registration-based segmentation, through which contours provided by an expert for a particular phase are propagated to all other phases while accounting for phase-to-phase motion and anatomical deformation. Deformable image registration is central to this task, and a free-form deformation-based nonrigid image registration algorithm is presented. Compared with the original algorithm, this version uses novel, computationally simpler geometric constraints to preserve the topology of the dense control-point grid used to represent the free-form deformation and to prevent tissue fold-over. Using mean squared difference as the image similarity criterion, the inhale phase was registered to the exhale phase of lung CT scans of five patients and of characteristically low-contrast abdominal CT scans of four patients. In addition, using expert contours for the inhale phase, the corresponding contours were automatically generated for the exhale phase. The accuracy of the segmentation (and hence of the deformable image registration) was judged by comparing automatically segmented contours with expert contours traced directly in the exhale phase scan using three metrics: volume overlap index, root mean square distance, and Hausdorff distance. The accuracy of the segmentation (in terms of radial distance mismatch) was approximately 2 mm in the thorax and 3 mm in the abdomen, which compares favorably with the accuracies reported elsewhere. Unlike most prior work, segmentation of the tumor is also presented. The clinical implementation of 4D treatment planning is critically dependent on automatic segmentation, for which this work offers one of the most accurate algorithms yet presented.
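
Registration-based contour propagation of this kind can be prototyped in a few lines: a B-spline free-form deformation is optimized with a mean-squared-difference metric, and the resulting transform resamples the expert labels into the target phase. A SimpleITK sketch with illustrative grid spacing and iteration counts (not the paper's algorithm, which adds the topology-preserving constraints described above):

```python
import SimpleITK as sitk

def propagate_contours(inhale, exhale, inhale_labels):
    """Register inhale to exhale with a B-spline FFD and carry the
    inhale label map over to the exhale phase."""
    tx0 = sitk.BSplineTransformInitializer(exhale, [8, 8, 8])
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()            # mean squared difference
    reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
    reg.SetInitialTransform(tx0, True)
    reg.SetInterpolator(sitk.sitkLinear)
    tx = reg.Execute(sitk.Cast(exhale, sitk.sitkFloat32),
                     sitk.Cast(inhale, sitk.sitkFloat32))
    # Nearest-neighbour resampling keeps label values intact.
    return sitk.Resample(inhale_labels, exhale, tx,
                         sitk.sitkNearestNeighbor, 0)
```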


Subject(s)
Algorithms, Tomography, X-Ray Computed, Exhalation, Four-Dimensional Computed Tomography, Humans, Respiration