Results 1 - 9 of 9
1.
Crit Rev Biomed Eng; 40(2): 135-54, 2012.
Article in English | MEDLINE | ID: mdl-22668239

ABSTRACT

Minimally invasive tumor therapies are becoming ever more sophisticated, with novel treatment approaches and new devices allowing for improved targeting precision. Applying these effectively requires precise localization of the structures of interest. Vital processes, such as respiration and heartbeat, induce organ motion that cannot be neglected during therapy. This review focuses on 4D organ models that compensate for respiratory motion during therapy. An overview is given of the effects of motion on the therapeutic outcome, the methods required to capture and quantify respiratory motion, the range of reported tumor motion, the types of surrogates used when tumors are not directly observable, and methods for the temporal prediction of surrogate motion. Organ motion models, which predict the location of structures of interest from surrogates measured during therapy, are discussed in detail.
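The correspondence-model idea surveyed in this review can be illustrated with a minimal sketch: a linear model is fitted between a respiratory surrogate signal and tumor position, then used to predict the tumor location from a new surrogate reading during therapy. All data and parameters below are synthetic and purely illustrative, not taken from the review.

```python
import numpy as np

# Synthetic training data (illustrative only): a 1-D surrogate signal, e.g. the
# height of an abdominal marker (mm), paired with the observed tumor
# superior-inferior position (mm).
rng = np.random.default_rng(0)
surrogate = np.linspace(0.0, 10.0, 50)
tumor_si = 1.8 * surrogate + 2.0 + rng.normal(0.0, 0.1, 50)  # assumed linear relation + noise

# Fit the linear correspondence model x = a*s + b by least squares.
A = np.column_stack([surrogate, np.ones_like(surrogate)])
(a, b), *_ = np.linalg.lstsq(A, tumor_si, rcond=None)

# During therapy, predict the tumor position from a new surrogate reading.
s_new = 5.0
x_pred = a * s_new + b
```

Real correspondence models are often richer (nonlinear, with hysteresis or phase terms), but the fit/predict structure is the same.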


Subjects
Models, Anatomic; Motion; Neoplasms/therapy; Respiratory-Gated Imaging Techniques/methods; Algorithms; Four-Dimensional Computed Tomography/methods; Heart/physiology; Humans; Phantoms, Imaging; Principal Component Analysis; Radiotherapy Planning, Computer-Assisted/methods; Respiration
2.
Brachytherapy; 19(5): 589-598, 2020.
Article in English | MEDLINE | ID: mdl-32682777

ABSTRACT

PURPOSE: To assess the feasibility of performing intraoperative dosimetry for permanent prostate brachytherapy by combining transrectal ultrasound (TRUS) and fluoroscopy/cone-beam CT (CBCT) images and accounting for the effect of prostate deformation. METHODS AND MATERIALS: Thirteen patients underwent TRUS and multiview two-dimensional fluoroscopic imaging partway through the implant, repeat fluoroscopic imaging with the TRUS probe inserted and retracted, and finally three-dimensional CBCT imaging at the end of the implant. The locations of all implanted seeds were obtained from the fluoroscopy/CBCT images and registered to prostate contours delineated on the TRUS images, based on a common subset of seeds identified in both image sets. Prostate contours were also deformed, using a finite-element model, to account for the pressure of the TRUS probe. Prostate dosimetry parameters were obtained for the fluoroscopic and CBCT dosimetry approaches and compared with the standard-of-care Day-0 postimplant CT dosimetry. RESULTS: High linear correlation (R² > 0.8) was observed between the two intraoperative dosimetry approaches for prostate D90%, V100%, and V150%. The prostate D90% and V100% obtained from the intraoperative dosimetry methods were in agreement with the postimplant CT dosimetry. Only the prostate V150% was on average 4.1% (p < 0.05) higher in the CBCT dosimetry approach and 6.7% (p < 0.05) higher in postimplant CT dosimetry than in the fluoroscopic dosimetry approach. Deformation of the prostate by the ultrasound probe appeared to have a minimal effect on prostate dosimetry. CONCLUSIONS: Both of the proposed dosimetric evaluation approaches have potential for real-time intraoperative dosimetry.
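The linear-correlation check reported in this abstract can be sketched as follows. The paired D90% values below are invented for illustration and are not data from the study.

```python
import numpy as np

# Invented paired prostate D90% values (Gy) from two dosimetry approaches
# for a small cohort; illustrative numbers only, not study data.
d90_fluoro = np.array([108.2, 112.5, 101.3, 117.8, 109.9, 105.4, 114.1])
d90_cbct = np.array([109.0, 113.9, 100.1, 118.5, 111.2, 104.8, 115.3])

# Coefficient of determination (R^2) of the linear relation between methods.
r = np.corrcoef(d90_fluoro, d90_cbct)[0, 1]
r_squared = r ** 2
```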


Subjects
Brachytherapy/methods; Fluoroscopy/methods; Prostatic Neoplasms/radiotherapy; Radiometry/methods; Ultrasonography/methods; Cone-Beam Computed Tomography; Feasibility Studies; Humans; Intraoperative Care; Male; Prostatic Neoplasms/diagnostic imaging; Radiotherapy Dosage; Radiotherapy Planning, Computer-Assisted/methods
3.
Med Image Anal; 60: 101588, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31739281

ABSTRACT

We propose an image guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP). A virtual 3D reconstruction of the surgical scene is displayed underneath the endoscope's feed on the surgeon's console. This scene consists of an annotated preoperative magnetic resonance image (MRI) registered to intraoperative 3D transrectal ultrasound (TRUS), real-time sagittal 2D TRUS images of the prostate, and 3D models of the prostate, the surgical instrument, and the TRUS transducer. We display these components with accurate real-time coordinates with respect to the robot system. Since the scene is rendered from the viewpoint of the endoscope, given correct camera parameters, an augmented scene can be overlaid on the video output. The surgeon can rotate the ultrasound transducer and determine the position of the projected axial plane in the MRI using one of the registered da Vinci instruments. This system was tested in the laboratory on custom-made agar prostate phantoms, achieving an average total registration accuracy of 3.2 ± 1.3 mm. We also report the successful application of this system in the operating room in 12 patients. For the last 8 patients, the average registration error between the TRUS and the da Vinci system was 1.4 ± 0.3 mm and the average target registration error was 2.1 ± 0.8 mm, resulting in an overall in vivo robot-system-to-MRI mean registration error of 3.5 mm or less, consistent with our laboratory studies.
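Aligning fiducial points between two coordinate frames (such as TRUS and the robot) and evaluating a target registration error (TRE) can be sketched with the standard Kabsch least-squares rigid alignment. This is a generic illustration, not the registration method used in the paper, and all points below are synthetic.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducials in an ultrasound frame and a robot frame (mm).
rng = np.random.default_rng(1)
trus_pts = rng.uniform(-30, 30, size=(6, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 2.0])
robot_pts = trus_pts @ R_true.T + t_true

R, t = kabsch(trus_pts, robot_pts)

# Target registration error: apply the recovered transform to held-out
# target points and compare against their true mapped positions.
targets = rng.uniform(-30, 30, size=(4, 3))
mapped = targets @ R.T + t
truth = targets @ R_true.T + t_true
tre = np.linalg.norm(mapped - truth, axis=1).mean()
```

With noise-free correspondences the TRE is essentially zero; the millimeter-level errors reported in the abstract reflect real measurement noise and tissue deformation.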


Subjects
Augmented Reality; Laparoscopy/methods; Prostatectomy; Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Equipment Design; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging; Male; Phantoms, Imaging
4.
Int J Comput Assist Radiol Surg; 14(6): 923-931, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30863982

ABSTRACT

PURPOSE: Prostate cancer is the most prevalent male-specific cancer. Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical robot has become the gold-standard treatment for organ-confined prostate cancer. To improve intraoperative visualization of anatomical structures, many groups have developed techniques integrating transrectal ultrasound (TRUS) into the surgical workflow. TRUS, however, is intrusive and does not provide real-time volumetric imaging. METHODS: We propose a proof-of-concept system offering an alternative noninvasive transperineal view of the prostate and surrounding structures using 3D ultrasound (US), allowing for full-volume imaging in any desired anatomical plane. The system aims to automatically track da Vinci surgical instruments and display a real-time US image registered to preoperative MRI. We evaluate the approach using a custom prostate phantom, an iU22 (Philips Healthcare, Bothell, WA) US machine with an xMATRIX X6-1 transducer, and a custom probe fixture. A novel registration method between the da Vinci kinematic frame and 3D US is presented. To evaluate the entire registration pipeline, we use a previously developed MRI-to-US deformable registration algorithm. RESULTS: Our US calibration technique yielded a registration error of 0.84 mm, compared to 1.76 mm with existing methods. We evaluated overall system error with a prostate phantom, achieving a target registration error of 2.55 mm. CONCLUSION: Transperineal imaging using 3D US is a promising approach for image guidance during RALRP. Preliminary results suggest this system is comparable to existing guidance systems using TRUS. With further development and testing, we believe our system has the potential to improve patient outcomes by imaging anatomical structures and prostate cancer in real time.


Subjects
Prostate/surgery; Prostatectomy/methods; Prostatic Neoplasms/surgery; Robotic Surgical Procedures/methods; Ultrasonography, Interventional/methods; Calibration; Feasibility Studies; Humans; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging; Male; Phantoms, Imaging; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnostic imaging
5.
Int J Comput Assist Radiol Surg; 13(8): 1211-1219, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29766373

ABSTRACT

PURPOSE: Most existing convolutional neural network (CNN)-based medical image segmentation methods are based on methods originally developed for the segmentation of natural images. They therefore largely ignore the differences between the two domains, such as the smaller degree of variability in the shape and appearance of the target volume and the smaller amounts of training data in medical applications. We propose a CNN-based method for prostate segmentation in MRI that employs statistical shape models to address these issues. METHODS: Our CNN predicts the location of the prostate center and the parameters of the shape model, which determine the position of prostate surface keypoints. To train such a large model for segmentation of 3D images using small data sets, (1) we adopt a stage-wise training strategy by first training the network to predict the prostate center and subsequently adding modules for predicting the parameters of the shape model and prostate rotation, (2) we propose a data augmentation method whereby the training images and their prostate surface keypoints are deformed according to displacements computed from the shape model, and (3) we employ various regularization techniques. RESULTS: Our proposed method achieves a Dice score of 0.88, obtained by using both elastic-net and spectral dropout for regularization. Compared with a standard CNN-based method, our method shows significantly better segmentation performance at the prostate base and apex. Our experiments also show that data augmentation using the shape model significantly improves the segmentation results. CONCLUSIONS: Prior knowledge about the shape of the target organ can improve the performance of CNN-based segmentation methods, especially where image features are not sufficient for a precise segmentation. Statistical shape models can also be employed to synthesize additional training data that can ease the training of large CNNs.
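The shape-model-based data augmentation described in this abstract can be sketched as: build a PCA shape model from aligned keypoint sets, then sample mode coefficients to synthesize new plausible keypoint configurations. The training shapes below are random placeholders; a real model would be built from correspondence-matched prostate surface keypoints.

```python
import numpy as np

rng = np.random.default_rng(2)

# Placeholder training set: 20 shapes, each 30 surface keypoints in 3-D,
# flattened to 90-vectors (assumed already aligned and in correspondence).
shapes = rng.normal(size=(20, 90))

# Statistical shape model: mean shape plus principal modes of variation.
mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
n_modes = 5
modes = Vt[:n_modes]                          # principal variation directions
stddevs = s[:n_modes] / np.sqrt(len(shapes) - 1)

# Augmentation: draw random coefficients within +/-2 SD along each mode and
# synthesize a new, statistically plausible keypoint configuration.
coeffs = rng.uniform(-2.0, 2.0, n_modes) * stddevs
synthetic = mean_shape + coeffs @ modes
keypoints = synthetic.reshape(30, 3)
```

In the paper's scheme, the displacements implied by such sampled shapes would also warp the corresponding training images, so image and keypoints stay consistent.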


Subjects
Machine Learning; Models, Statistical; Neural Networks, Computer; Prostate/diagnostic imaging; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging; Male
6.
IEEE Trans Med Imaging; 37(8): 1877-1886, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29994583

ABSTRACT

We present a novel technique for real-time deformable registration of 3-D to 2.5-D transrectal ultrasound (TRUS) images for image-guided, robot-assisted laparoscopic radical prostatectomy (RALRP). For RALRP, a preoperatively acquired 3-D TRUS image is registered to thin volumes composed of consecutive intraoperative 2-D TRUS images, where the optimal transformation is found using a gradient descent method based on analytical first- and second-order derivatives. Our method relies on an efficient algorithm for real-time extraction of arbitrary slices from a 3-D image deformed according to a discrete mesh representation. We also propose and demonstrate an evaluation method that generates simulated models and images for RALRP by modeling tissue deformation through patient-specific finite-element models (FEM). We evaluated our method on in vivo data from 11 patients collected during RALRP and focal therapy interventions. In the presence of an average landmark deformation of 3.89 and 4.62 mm, we achieved accuracies of 1.15 and 0.72 mm, respectively, on the synthetic and in vivo data sets, with an average registration computation time of 264 ms, using MATLAB on a conventional PC. The results show that real-time tracking of prostate motion and deformation is feasible, enabling a real-time augmented reality-based guidance system for RALRP.


Subjects
Imaging, Three-Dimensional/methods; Prostate/diagnostic imaging; Prostate/surgery; Prostatectomy/methods; Surgery, Computer-Assisted/methods; Ultrasonography/methods; Algorithms; Databases, Factual; Finite Element Analysis; Humans; Male; Movement; Prostatic Neoplasms/drug therapy; Prostatic Neoplasms/surgery
7.
Int J Comput Assist Radiol Surg; 13(6): 749-757, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29589259

ABSTRACT

PURPOSE: In the current standard of care, real-time transrectal ultrasound (TRUS) is commonly used for prostate brachytherapy guidance. As TRUS provides limited soft tissue contrast, segmenting the prostate gland in TRUS images is often challenging and subject to inter-observer and intra-observer variability, especially at the base and apex where the gland boundary is hard to define. Magnetic resonance imaging (MRI) has higher soft tissue contrast allowing the prostate to be contoured easily. In this paper, we aim to show that prostate segmentation in TRUS images informed by MRI priors can improve on prostate segmentation that relies only on TRUS images. METHODS: First, we compare the TRUS-based prostate segmentation used in the treatment of 598 patients with a high-quality MRI prostate atlas and observe inconsistencies at the apex and base. Second, motivated by this finding, we propose an alternative TRUS segmentation technique that is fully automatic and uses MRI priors. The algorithm uses a convolutional neural network to segment the prostate in TRUS images at mid-gland, where the gland boundary can be clearly seen. It then reconstructs the gland boundary at the apex and base with the aid of a statistical shape model built from an MRI atlas of 78 patients. RESULTS: Compared to the clinical TRUS segmentation, our method achieves similar mid-gland segmentation results in the 598-patient database. For the seven patients who had both TRUS and MRI, our method achieved more accurate segmentation of the base and apex with the MRI segmentation used as ground truth. CONCLUSION: Our results suggest that utilizing MRI priors in TRUS prostate segmentation could potentially improve the performance at base and apex.


Subjects
Algorithms; Endosonography/methods; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging/methods; Models, Statistical; Prostate/diagnostic imaging; Prostatic Neoplasms/diagnosis; Humans; Male; ROC Curve; Rectum
8.
Med Image Comput Comput Assist Interv; 17(Pt 1): 146-53, 2014.
Article in English | MEDLINE | ID: mdl-25333112

ABSTRACT

The reconstruction of 4D images from 2D navigator and data slices requires sufficient observations per motion state to avoid blurred images and motion artifacts between slices. Images from rare motion states in particular, like deep inhalations during free-breathing, suffer from too few observations. To address this problem, we propose to actively generate more suitable images instead of only selecting from the available images. The method is based on learning the relationship between navigator and data-slice motion by linear regression after dimensionality reduction. This can then be used to predict new data slices for a given navigator by warping existing data slices by their predicted displacement field. The method was evaluated for 4D-MRIs of the liver under free-breathing, where sliding boundaries pose an additional challenge for image registration. Leave-one-out tests on five short sequences from ten volunteers showed that the proposed prediction method improved the average residual mean (95th percentile) motion between the ground truth and predicted data slice from 0.9 mm (1.9 mm) to 0.8 mm (1.6 mm) in comparison to the best selection method. The approach was particularly suited for unusual motion states, where the mean error was reduced by 40% (2.2 mm vs. 1.3 mm).
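The prediction step described above (dimensionality reduction of the navigator motion followed by linear regression to data-slice motion) can be sketched as follows. All arrays are synthetic stand-ins for flattened displacement fields; sizes and component counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder training data: per time step, a navigator motion vector
# (flattened displacement field, 40 values) and the corresponding
# data-slice motion vector (60 values).
nav = rng.normal(size=(100, 40))
W_true = rng.normal(size=(40, 60)) * 0.1
data = nav @ W_true + rng.normal(0.0, 0.01, size=(100, 60))

# Dimensionality reduction of the navigator motion by PCA (keep 10 components).
nav_mean = nav.mean(0)
U, s, Vt = np.linalg.svd(nav - nav_mean, full_matrices=False)
P = Vt[:10]
nav_low = (nav - nav_mean) @ P.T

# Linear regression from the reduced navigator space to data-slice motion.
A = np.column_stack([nav_low, np.ones(len(nav_low))])
W, *_ = np.linalg.lstsq(A, data, rcond=None)

# Predict the data-slice displacement field for a new navigator observation;
# the predicted field would then warp an existing data slice.
nav_new = rng.normal(size=40)
pred = np.concatenate([(nav_new - nav_mean) @ P.T, [1.0]]) @ W
```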


Subjects
Abdomen/anatomy & histology; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Respiratory-Gated Imaging Techniques/methods; Algorithms; Humans; Motion; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity; Subtraction Technique
9.
Med Image Comput Comput Assist Interv; 17(Pt 1): 706-13, 2014.
Article in English | MEDLINE | ID: mdl-25333181

ABSTRACT

Magnetic resonance-guided high-intensity focused ultrasound (MRgHIFU) is a new type of minimally invasive therapy for treating malignant liver tissue. Since ribs on the beam path can compromise an effective therapy, detecting them and tracking their motion on MR images is of great importance. However, due to the weak magnetic resonance signal of bone, ribs cannot be entirely observed in MR images. In the proposed method, we take advantage of the accuracy of CT in imaging the ribs to build a geometric ribcage model and combine it with an appearance model of the structures neighbouring the ribs in MR to reconstruct realistic centerlines in MR images. We have improved on our previous method by using a more sophisticated appearance model, a more flexible ribcage model, and a more effective optimization strategy. We decreased the mean error to 2.5 mm, making the method suitable for clinical application. Finally, we propose a rib registration method that conserves the shape and length of ribs and imposes realistic constraints on their motion, achieving 2.7 mm mean accuracy.


Subjects
Image Enhancement/methods; Magnetic Resonance Imaging/methods; Models, Biological; Pattern Recognition, Automated/methods; Ribs/anatomy & histology; Ribs/diagnostic imaging; Subtraction Technique; Algorithms; Computer Simulation; Humans; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Sensitivity and Specificity; Tomography, X-Ray Computed/methods