Results 1 - 7 of 7
1.
Brachytherapy; 22(2): 199-209, 2023.
Article in English | MEDLINE | ID: mdl-36641305

ABSTRACT

PURPOSE: The purpose of this study was to evaluate and clinically implement a deformable surface-based magnetic resonance imaging (MRI) to three-dimensional ultrasound (US) image registration algorithm for prostate brachytherapy (BT), with the aim of reducing operator dependence and facilitating dose escalation to an MRI-defined target. METHODS AND MATERIALS: Our surface-based deformable image registration (DIR) algorithm first translates and scales to align the US- and MR-defined prostate surfaces, followed by deformation of the MR-defined surface to match the US-defined surface. Algorithm performance was assessed in a phantom at three deformation levels, followed by validation in three retrospective high-dose-rate BT clinical cases. For comparison, manual rigid registration and cognitive fusion by a physician were also employed. Registration accuracy was assessed using the Dice similarity coefficient (DSC) and the target registration error (TRE) for embedded spherical landmarks. The algorithm was then implemented intraoperatively in a prospective clinical case. RESULTS: In the phantom, our DIR algorithm demonstrated a mean DSC of 0.74 ± 0.08 and a mean TRE of 0.94 ± 0.49 mm, a significant improvement over manual rigid registration (0.64 ± 0.16 and 1.88 ± 1.24 mm, respectively). Clinical results demonstrated reduced variability compared to the current standard of cognitive fusion by physicians. CONCLUSIONS: We successfully validated a DIR algorithm allowing for the translation of MR-defined target and organ-at-risk contours into the intraoperative environment. Prospective clinical implementation demonstrated the intraoperative feasibility of our algorithm, facilitating targeted biopsies and dose escalation to the MR-defined lesion. This method has the potential to standardize the registration procedure between physicians, reducing operator dependence.
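As a hedged illustration of the evaluation described above, the sketch below computes a Dice similarity coefficient between binary masks and per-landmark target registration errors; the function names and toy coordinates are assumptions for illustration, not the study's code.

```python
# A rough sketch of the two reported metrics, assuming binary masks on a
# common grid and corresponding landmark lists in millimetres.
import numpy as np

def dice_similarity(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """DSC = 2*|A intersect B| / (|A| + |B|) for equally shaped boolean masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def target_registration_error(fixed_pts: np.ndarray, moved_pts: np.ndarray) -> np.ndarray:
    """Per-landmark Euclidean distance (mm) between corresponding N x 3 point sets."""
    return np.linalg.norm(fixed_pts - moved_pts, axis=1)

# Toy landmarks (mm): US-defined positions vs. registered MR-defined positions.
us_landmarks = np.array([[10.0, 12.0, 8.0], [22.5, 15.0, 11.0]])
mr_landmarks = np.array([[10.4, 11.7, 8.3], [22.1, 15.6, 10.6]])
print(target_registration_error(us_landmarks, mr_landmarks))
```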


Subjects
Brachytherapy, Prostate, Male, Humans, Prostate/diagnostic imaging, Prostate/pathology, Brachytherapy/methods, Retrospective Studies, Prospective Studies, Algorithms, Magnetic Resonance Imaging/methods, Image Processing, Computer-Assisted/methods
2.
Phys Med Biol; 67(7), 2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35240585

ABSTRACT

Three-dimensional (3D) transrectal ultrasound (TRUS) is utilized in prostate cancer diagnosis and treatment, necessitating time-consuming manual prostate segmentation. We have previously developed an automatic 3D prostate segmentation algorithm involving deep learning prediction on radially sampled 2D images followed by 3D reconstruction, trained on a large, clinically diverse dataset with variable image quality. As large clinical datasets are rare, widespread adoption of automatic segmentation could be facilitated with efficient 2D-based approaches and the development of an image quality grading method. The complete training dataset of 6761 2D images, resliced from 206 3D TRUS volumes acquired using end-fire and side-fire acquisition methods, was split to train two separate networks using either end-fire or side-fire images. Split datasets were reduced to 1000, 500, 250, and 100 2D images. For deep learning prediction, modified U-Net and U-Net++ architectures were implemented and compared using an unseen test dataset of 40 3D TRUS volumes. A 3D TRUS image quality grading scale with three factors (acquisition quality, artifact severity, and boundary visibility) was developed to assess the impact on segmentation performance. For the complete training dataset, U-Net and U-Net++ networks demonstrated equivalent performance, but when trained using split end-fire/side-fire datasets, U-Net++ significantly outperformed the U-Net. Compared to the complete training datasets, U-Net++ trained using reduced-size end-fire and side-fire datasets demonstrated equivalent performance down to 500 training images. For this dataset, image quality had no impact on segmentation performance for end-fire images but did have a significant effect for side-fire images, with boundary visibility having the largest impact. Our algorithm provided fast (<1.5 s) and accurate 3D segmentations across clinically diverse images, demonstrating generalizability and efficiency when employed on smaller datasets, supporting the potential for widespread use, even when data is scarce. The development of an image quality grading scale provides a quantitative tool for assessing segmentation performance.
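The radial 2D sampling underlying this approach might be sketched as below: 2D slices are interpolated on planes containing the volume's central axis, each rotated by a different angle. The volume shape, slice count, and linear interpolation are illustrative assumptions, not the authors' pipeline.

```python
# A minimal sketch of radial reslicing about the central z-axis of a
# (nz, ny, nx) volume; each output slice lies on a plane containing the
# axis, rotated by a different angle. Trilinear interpolation via scipy.
import numpy as np
from scipy.ndimage import map_coordinates

def radial_reslice(volume: np.ndarray, n_slices: int = 30) -> np.ndarray:
    """Return n_slices 2D images of shape (nz, nr) sampled radially."""
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    radius = min(cy, cx)
    r = np.linspace(-radius, radius, int(2 * radius) + 1)  # signed radius (px)
    zz, rr = np.meshgrid(np.arange(nz), r, indexing="ij")  # both (nz, nr)
    slices = []
    for angle in np.linspace(0, np.pi, n_slices, endpoint=False):
        coords = np.stack([zz,                        # z stays fixed
                           cy + rr * np.sin(angle),   # y component
                           cx + rr * np.cos(angle)])  # x component
        slices.append(map_coordinates(volume, coords, order=1))
    return np.stack(slices)

resliced = radial_reslice(np.random.rand(64, 96, 96))
print(resliced.shape)  # (30, 64, 96)
```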


Subjects
Deep Learning, Prostatic Neoplasms, Humans, Male, Pelvis, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Ultrasonography
3.
Med Phys; 47(10): 4956-4970, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32767411

ABSTRACT

PURPOSE: Many interventional procedures require the precise placement of needles or therapy applicators (tools) to correctly achieve planned targets for optimal diagnosis or treatment of cancer, typically leveraging the temporal resolution of ultrasound (US) to provide real-time feedback. Identifying tools in two-dimensional (2D) images can be time-consuming, and their precise positions can be difficult to distinguish. We have developed and implemented a deep learning method to segment tools in 2D US images in near real-time for multiple anatomical sites, despite the widely varying appearances across interventional applications. METHODS: A U-Net architecture with a Dice similarity coefficient (DSC) loss function was used to perform segmentation on input images resized to 256 × 256 pixels. The U-Net was modified by adding 50% dropout and using transpose convolutions in the decoder section of the network. The proposed approach was trained with 917 images and manual segmentations from prostate/gynecologic brachytherapy, liver ablation, and kidney biopsy/ablation procedures, as well as phantom experiments. Real-time data augmentation was applied to improve generalizability and doubled the dataset for each epoch. Postprocessing to identify the tool tip and trajectory was performed using two different approaches: a linear fit to the largest island and random sample consensus (RANSAC) fitting. RESULTS: Comparing predictions from 315 unseen test images to manual segmentations, the overall median [first quartile, third quartile] tip error, angular error, and DSC were 3.5 [1.3, 13.5] mm, 0.8 [0.3, 1.7]°, and 73.3 [56.2, 82.3]%, respectively, following RANSAC postprocessing. The predictions with the lowest median tip and angular errors were observed in the gynecologic images (median tip error: 0.3 mm; median angular error: 0.4°) and the highest errors in the kidney images (median tip error: 10.1 mm; median angular error: 2.9°). The poorer performance on the kidney images was likely due to a reduction in acoustic signal associated with oblique insertions relative to the US probe and the increased number of anatomical interfaces with similar echogenicity. Unprocessed segmentations were generated in a mean time of approximately 50 ms per image. CONCLUSIONS: We have demonstrated that our proposed approach can accurately segment tools in 2D US images from multiple anatomical locations and a variety of clinical interventional procedures in near real-time, providing the potential to improve image guidance during a broad range of diagnostic and therapeutic cancer interventions.
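One way to sketch the RANSAC postprocessing step is below: fit a line to the predicted mask's pixel coordinates and take the extreme inlier along the fitted direction as the tip. The iteration count, inlier tolerance, and synthetic mask are illustrative assumptions, not the paper's settings.

```python
# A minimal sketch of RANSAC line fitting on a 2D boolean tool mask,
# followed by tip extraction as the extreme inlier along the line.
import numpy as np

def ransac_tool_trajectory(mask, n_iter=200, inlier_tol=2.0,
                           rng=np.random.default_rng(0)):
    """Return (tip_yx, unit_direction) from a 2D boolean segmentation mask."""
    pts = np.argwhere(mask)                          # (N, 2) as (row, col)
    best_inliers, best_dir = None, None
    for _ in range(n_iter):
        p0, p1 = pts[rng.choice(len(pts), 2, replace=False)]
        d = (p1 - p0).astype(float)
        if not d.any():
            continue
        d /= np.linalg.norm(d)
        # Perpendicular distance of every pixel to the candidate line.
        offsets = pts - p0
        dist = np.abs(offsets[:, 0] * d[1] - offsets[:, 1] * d[0])
        inliers = pts[dist < inlier_tol]
        if best_inliers is None or len(inliers) > len(best_inliers):
            best_inliers, best_dir = inliers, d
    # Tip: the inlier furthest along the direction (the insertion side would
    # be chosen from probe geometry in practice).
    t = best_inliers @ best_dir
    return best_inliers[np.argmax(t)], best_dir

mask = np.zeros((64, 64), bool)
rr = np.arange(10, 50)
mask[rr, rr + 3] = True                              # synthetic diagonal "tool"
tip, direction = ransac_tool_trajectory(mask)
print(tip, direction)
```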


Subjects
Deep Learning, Female, Liver/diagnostic imaging, Male, Needles, Phantoms, Imaging, Ultrasonography
4.
Med Phys; 47(6): 2413-2426, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32166768

ABSTRACT

PURPOSE: Needle-based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three-dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to manually segment the prostate to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time-consuming and difficult intraoperative process, often performed while the patient is under sedation (biopsy) or anesthetic (brachytherapy). A fast and accurate 3D TRUS prostate segmentation method could minimize procedure time, enabling an efficient workflow with improved patient throughput and faster patient access to care. The purpose of this study was to develop a supervised deep learning-based method to segment the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models, to create a generalizable algorithm for needle-based prostate cancer procedures. METHODS: Our proposed method for 3D segmentation involved prediction on two-dimensional (2D) slices sampled radially around the approximate central axis of the prostate, followed by reconstruction into a 3D surface. A 2D U-Net was modified, trained, and validated using images from 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and brachytherapy procedures. Modifications to the expansion section of the standard U-Net included the addition of 50% dropout to reduce overfitting and the use of transpose convolutions instead of standard upsampling followed by convolution to improve performance. Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 end-fire and 20 side-fire unseen 3D TRUS images. Since predicting with 2D images has the potential to lose spatial and structural information, comparisons to direct 3D segmentation with optimized 3D networks, including 3D V-Net, Dense V-Net, and High-resolution 3D-Net, were performed following an investigation into different loss functions. An extended selection of absolute and signed error metrics was computed, including pixel map comparisons [Dice similarity coefficient (DSC), recall, and precision], volume percent difference (VPD), mean surface distance (MSD), and Hausdorff distance (HD), to assess 3D segmentation accuracy. RESULTS: Overall, our proposed reconstructed modified U-Net performed with a median [first quartile, third quartile] absolute DSC, recall, precision, VPD, MSD, and HD of 94.1 [92.6, 94.9]%, 96.0 [93.1, 98.5]%, 93.2 [88.8, 95.4]%, 5.78 [2.49, 11.50]%, 0.89 [0.73, 1.09] mm, and 2.89 [2.37, 4.35] mm, respectively. When compared to the best-performing optimized 3D network (3D V-Net with a Dice plus cross-entropy loss function), our proposed method showed a significant improvement across nearly all metrics. A computation time of <0.7 s per prostate was observed, sufficiently short for intraoperative implementation. CONCLUSIONS: Our proposed algorithm provided a fast and accurate 3D segmentation across variable 3D TRUS prostate images, enabling a generalizable intraoperative solution for needle-based prostate cancer procedures. This method has the potential to decrease procedure times, supporting the increasing interest in needle-based 3D TRUS approaches.
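The two U-Net modifications named above, transpose-convolution upsampling and 50% dropout in the expansion path, might look like this PyTorch sketch of a single expansion-path block; the channel sizes and standalone structure are illustrative, not the authors' network definition.

```python
# A minimal sketch of one modified U-Net expansion step: transpose-conv
# upsampling, skip concatenation, two 3x3 convolutions, and 50% dropout.
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One decoder step with transpose-conv upsampling and 50% dropout."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(2 * out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Dropout2d(p=0.5),  # regularization against overfitting
        )

    def forward(self, x: torch.Tensor, skip: torch.Tensor) -> torch.Tensor:
        x = self.up(x)                                 # learned upsampling
        return self.conv(torch.cat([x, skip], dim=1))  # fuse encoder features

block = UpBlock(in_ch=128, out_ch=64)
out = block(torch.randn(1, 128, 32, 32), torch.randn(1, 64, 64, 64))
print(out.shape)  # torch.Size([1, 64, 64, 64])
```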


Subjects
Brachytherapy, Deep Learning, Prostatic Neoplasms, Humans, Image Processing, Computer-Assisted, Imaging, Three-Dimensional, Male, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Ultrasonography
5.
Article in English | MEDLINE | ID: mdl-20879294

ABSTRACT

To ensure accurate targeting and repeatability, 3D transrectal ultrasound (TRUS)-guided biopsies require registration to determine the coordinate transformations needed to (1) incorporate pre-procedure biopsy plans and (2) compensate for inter-session prostate motion and deformation between repeat biopsy sessions. We evaluated prostate surface- and image-based 3D-to-3D TRUS registration by measuring the target registration error (TRE) of manually marked, corresponding, intrinsic fiducials in the whole gland and peripheral zone, and also evaluated the error anisotropy. The image-based rigid and non-rigid methods yielded the best results, with mean TREs of 2.26 mm and 1.96 mm, respectively. These results compare favorably with the clinical need for an error of less than 2.5 mm.
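A hedged sketch of fiducial-based TRE evaluation for a rigid 3D-to-3D registration follows, using the standard Kabsch/Procrustes least-squares solution as a stand-in for the surface- and image-based methods compared in the study; all fiducial coordinates are toy data.

```python
# A minimal sketch: fit a rigid transform on some fiducials, then measure
# TRE at held-out fiducials from a second (simulated) biopsy session.
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src -> dst (N x 3)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

# Toy sessions: session-2 fiducials are a rotated/translated copy plus noise.
rng = np.random.default_rng(1)
session1 = rng.uniform(0, 40, size=(6, 3))                # fiducials in mm
theta = np.deg2rad(5.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
session2 = session1 @ R_true.T + [2.0, -1.0, 0.5] + rng.normal(0, 0.3, (6, 3))

R, t = rigid_fit(session1[:4], session2[:4])              # fit on 4 fiducials
tre = np.linalg.norm(session1[4:] @ R.T + t - session2[4:], axis=1)
print(f"held-out TRE (mm): {tre}")
```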


Subjects
Biopsy/methods, Prostate/diagnostic imaging, Prostate/pathology, Subtraction Technique, Surgery, Computer-Assisted/methods, Ultrasonography, Interventional/methods, Ultrasonography/methods, Algorithms, Humans, Image Enhancement/methods, Image Interpretation, Computer-Assisted/methods, Male, Reproducibility of Results, Sensitivity and Specificity
6.
Med Phys; 34(11): 4109-4125, 2007 Nov.
Article in English | MEDLINE | ID: mdl-18072477

ABSTRACT

In this article, a new slice-based 3D prostate segmentation method based on a continuity constraint, implemented as an autoregressive (AR) model, is described. To decrease the propagated segmentation error produced by the slice-based 3D segmentation method, a continuity constraint was imposed in the prostate segmentation algorithm. A 3D ultrasound image was segmented using the slice-based segmentation method. Then, a cross-sectional profile of the resulting contours was obtained by intersecting the 2D segmented contours with a coronal plane passing through the midpoint of the manually identified rotational axis, which is considered to be the approximate center of the prostate. On the coronal cross-sectional plane, these intersections form a set of radial lines directed from the center of the prostate. The lengths of these radial lines were smoothed using an AR model. Slice-based 3D segmentations were performed in the clockwise and anticlockwise directions, defined with respect to the propagation directions on the coronal view, resulting in two different segmentations for each 2D slice. For each pair of unmatched segments, in which the distance between the contour generated clockwise and that generated anticlockwise was greater than 4 mm, a method was used to select the optimal contour. Experiments performed using 3D prostate ultrasound images of nine patients demonstrated that the proposed method produced accurate 3D prostate boundaries without manual editing. The average distance between boundaries from the proposed method and manual segmentation was 1.29 mm. The average intraobserver coefficient of variation (i.e., the standard deviation divided by the average volume) of the boundaries segmented by the proposed method was 1.6%. The average segmentation time for a 352 × 379 × 704 image on a Pentium IV 2.8 GHz PC was 10 s.
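The continuity constraint might be sketched as a first-order autoregressive filter applied to the circular profile of radial boundary lengths, as below; the AR coefficient, number of passes, and toy profile are illustrative assumptions rather than the paper's exact model.

```python
# A minimal sketch: radial distances from the prostate center to the
# slice-by-slice boundary are smoothed with an AR(1) recursion so that
# adjacent slices cannot diverge abruptly (suppressing propagated error).
import numpy as np

def ar_smooth_radii(radii: np.ndarray, a: float = 0.6, n_pass: int = 2) -> np.ndarray:
    """AR(1) smoothing r'[k] = a * r'[k-1] + (1 - a) * r[k] on a circular
    sequence of radial lengths; each forward pass wraps around the circle."""
    r = radii.astype(float)
    for _ in range(n_pass):
        prev = r[-1]
        for k in range(len(r)):
            r[k] = a * prev + (1.0 - a) * r[k]
            prev = r[k]
    return r

angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
radii = 20 + 2 * np.sin(2 * angles)               # smooth "prostate" profile
radii[30] += 6                                    # one propagated-error outlier
print(radii[29:32], "->", ar_smooth_radii(radii)[29:32])
```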


Subjects
Imaging, Three-Dimensional/methods, Prostatic Neoplasms/diagnosis, Prostatic Neoplasms/pathology, Algorithms, Humans, Image Processing, Computer-Assisted/methods, Male, Models, Statistical, Prostate/metabolism, Prostatic Neoplasms/diagnostic imaging, Radiotherapy Planning, Computer-Assisted/methods, Regression Analysis, Reproducibility of Results, Sensitivity and Specificity, Ultrasonography/methods
7.
Article in English | MEDLINE | ID: mdl-17282269

ABSTRACT

In the diagnosis and therapy of prostate cancer, it is critical to measure the volume of the prostate and locate its boundary. Three-dimensional transrectal ultrasound (3D TRUS) imaging has been demonstrated to be a useful technique for this task. Due to image speckle and low contrast in ultrasound images, segmentation of the prostate in 3D US images is challenging. In this paper, we report on the development of an improved slice-based 3D prostate segmentation method. First, we imposed a continuity constraint on the end points of the prostate boundaries in a cross-sectional plane, so that a smooth 2D prostate boundary is obtained. Then, in each 2D slice, we inserted the end points into the vertex list of the initial contour to obtain a new contour, which forces the evolving contour to be driven to the boundary of the prostate. Evaluation demonstrated that our method segments the prostate in 3D TRUS images more quickly and accurately than the original slice-based approach.
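The end-point insertion step might be sketched as follows, pinning each fixed end point into the initial contour's vertex list beside its nearest existing vertex; the coordinates and the nearest-vertex insertion rule are assumptions for illustration, since the paper's exact procedure is not given here.

```python
# A minimal sketch: insert fixed boundary end points into a closed (N, 2)
# contour vertex list so the evolving contour is pinned at those points.
import numpy as np

def insert_end_points(contour: np.ndarray, end_points: np.ndarray) -> np.ndarray:
    """Insert each end point after its nearest contour vertex, preserving order."""
    idx = [int(np.argmin(np.linalg.norm(contour - p, axis=1))) for p in end_points]
    verts = list(map(tuple, contour))
    # Insert from the highest index down so earlier insertions do not shift
    # the positions chosen for later ones.
    for i, p in sorted(zip(idx, map(tuple, end_points)), reverse=True):
        verts.insert(i + 1, p)
    return np.array(verts)

theta = np.linspace(0, 2 * np.pi, 16, endpoint=False)
initial = np.c_[30 + 10 * np.cos(theta), 30 + 8 * np.sin(theta)]  # rough contour
pinned = insert_end_points(initial, np.array([[41.0, 30.5], [19.0, 29.5]]))
print(len(initial), "->", len(pinned))  # 16 -> 18
```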
