Results 1 - 7 of 7
1.
Phys Med Biol; 67(7), 2022 Mar 29.
Article in English | MEDLINE | ID: mdl-35240585

ABSTRACT

Three-dimensional (3D) transrectal ultrasound (TRUS) is utilized in prostate cancer diagnosis and treatment, necessitating time-consuming manual prostate segmentation. We have previously developed an automatic 3D prostate segmentation algorithm involving deep learning prediction on radially sampled 2D images followed by 3D reconstruction, trained on a large, clinically diverse dataset with variable image quality. As large clinical datasets are rare, widespread adoption of automatic segmentation could be facilitated with efficient 2D-based approaches and the development of an image quality grading method. The complete training dataset of 6761 2D images, resliced from 206 3D TRUS volumes acquired using end-fire and side-fire acquisition methods, was split to train two separate networks using either end-fire or side-fire images. Split datasets were reduced to 1000, 500, 250, and 100 2D images. For deep learning prediction, modified U-Net and U-Net++ architectures were implemented and compared using an unseen test dataset of 40 3D TRUS volumes. A 3D TRUS image quality grading scale with three factors (acquisition quality, artifact severity, and boundary visibility) was developed to assess the impact on segmentation performance. For the complete training dataset, U-Net and U-Net++ networks demonstrated equivalent performance, but when trained using split end-fire/side-fire datasets, U-Net++ significantly outperformed the U-Net. Compared to the complete training datasets, U-Net++ trained using reduced-size end-fire and side-fire datasets demonstrated equivalent performance down to 500 training images. For this dataset, image quality had no impact on segmentation performance for end-fire images but did have a significant effect for side-fire images, with boundary visibility having the largest impact. 
Our algorithm provided fast (<1.5 s) and accurate 3D segmentations across clinically diverse images, demonstrating generalizability and efficiency when employed on smaller datasets, supporting the potential for widespread use, even when data is scarce. The development of an image quality grading scale provides a quantitative tool for assessing segmentation performance.
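The radial 2D sampling step described above can be sketched as follows: 2D planes containing the approximate central (z) axis of a 3D volume are extracted at evenly spaced rotation angles. This is a minimal illustration under assumed conventions (nearest-neighbor sampling, axis orientation, and all names are illustrative, not taken from the paper):

```python
import numpy as np

def radial_reslice(volume, n_slices=15):
    """Sample 2D planes containing the central (z) axis of a 3D volume
    at evenly spaced rotation angles over a half rotation."""
    d, h, w = volume.shape                # depth (z), height (y), width (x)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = int(min(cy, cx))                  # in-plane half-extent of each slice
    t = np.arange(-r, r + 1)              # signed distance along the in-plane axis
    slices = []
    for angle in np.linspace(0.0, np.pi, n_slices, endpoint=False):
        # nearest-neighbor coordinates of the rotated in-plane axis
        ys = np.clip(np.round(cy + t * np.sin(angle)).astype(int), 0, h - 1)
        xs = np.clip(np.round(cx + t * np.cos(angle)).astype(int), 0, w - 1)
        slices.append(volume[:, ys, xs])  # one (d, 2r+1) plane per angle
    return np.stack(slices)               # (n_slices, d, 2r+1)
```

Each resliced plane would then be segmented in 2D and the contours reassembled into a 3D surface, per the pipeline described above.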


Subjects
Deep Learning, Prostatic Neoplasms, Humans, Male, Pelvis, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Ultrasonography
2.
Med Phys; 47(10): 4956-4970, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32767411

ABSTRACT

PURPOSE: Many interventional procedures require the precise placement of needles or therapy applicators (tools) to correctly achieve planned targets for optimal diagnosis or treatment of cancer, typically leveraging the temporal resolution of ultrasound (US) to provide real-time feedback. Identifying tools in two-dimensional (2D) images can often be time-consuming, with the precise position difficult to distinguish. We have developed and implemented a deep learning method to segment tools in 2D US images in near real-time for multiple anatomical sites, despite the widely varying appearances across interventional applications. METHODS: A U-Net architecture with a Dice similarity coefficient (DSC) loss function was used to perform segmentation on input images resized to 256 × 256 pixels. The U-Net was modified by adding 50% dropouts and using transpose convolutions in the decoder section of the network. The proposed approach was trained with 917 images and manual segmentations from prostate/gynecologic brachytherapy, liver ablation, and kidney biopsy/ablation procedures, as well as phantom experiments. Real-time data augmentation was applied to improve generalizability and doubled the dataset for each epoch. Postprocessing to identify the tool tip and trajectory was performed using two different approaches, comparing a largest-island approach with a linear fit against random sample consensus (RANSAC) fitting. RESULTS: Comparing predictions from 315 unseen test images to manual segmentations, the overall median [first quartile, third quartile] tip error, angular error, and DSC were 3.5 [1.3, 13.5] mm, 0.8 [0.3, 1.7]°, and 73.3 [56.2, 82.3]%, respectively, following RANSAC postprocessing. The predictions with the lowest median tip and angular errors were observed in the gynecologic images (median tip error: 0.3 mm; median angular error: 0.4°), with the highest errors in the kidney images (median tip error: 10.1 mm; median angular error: 2.9°).
The poorer performance on the kidney images was likely due to a reduction in acoustic signal associated with oblique insertions relative to the US probe and the increased number of anatomical interfaces with similar echogenicity. Unprocessed segmentations were performed with a mean time of approximately 50 ms per image. CONCLUSIONS: We have demonstrated that our proposed approach can accurately segment tools in 2D US images from multiple anatomical locations and a variety of clinical interventional procedures in near real-time, providing the potential to improve image guidance during a broad range of diagnostic and therapeutic cancer interventions.
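The RANSAC postprocessing step described above can be sketched as a robust 2D line fit to the segmented tool pixels: random pairs of points propose candidate lines, the consensus set is scored by perpendicular distance, and the final trajectory is refit to the inliers. This is a generic sketch with illustrative parameter values, not the authors' implementation:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=2.0, seed=0):
    """Robustly fit a 2D line to candidate tool pixels with RANSAC.
    Returns a point on the line, its unit direction, and the inlier mask."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        d = pts[j] - pts[i]
        norm = np.linalg.norm(d)
        if norm == 0:
            continue                       # duplicate points; skip
        d /= norm
        # perpendicular distance of every point to the candidate line
        rel = pts - pts[i]
        dist = np.abs(rel[:, 0] * d[1] - rel[:, 1] * d[0])
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # least-squares refit (via SVD) on the consensus set
    inlier_pts = pts[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    return centroid, vt[0], best_inliers
```

The fitted direction gives the trajectory, and the extreme inlier along it can serve as the tip estimate.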


Subjects
Deep Learning, Female, Liver/diagnostic imaging, Male, Needles, Imaging Phantoms, Ultrasonography
3.
Med Phys; 47(10): 5135-5146, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32686142

ABSTRACT

PURPOSE: Image-guided focal ablation procedures are first-line therapy options in the treatment of liver cancer tumors that provide advantageous reductions in patient recovery times and complication rates relative to open surgery. However, extensive physician training is required and image guidance variabilities during freehand therapy applicator placement limit the sufficiency of ablation volumes and the overall potential of these procedures. We propose the use of three-dimensional ultrasound (3D US) to provide guidance and localization of therapy applicators, augmenting current ablation therapies without the need for specialized procedure suites. We have developed a novel scanning mechanism for geometrically variable 3D US images, a mechanical tracking system, and a needle applicator insertion workflow using a custom needle applicator guide for targeted image-guided procedures. METHODS: A three-motor scanner was designed to use any commercially available US probe to generate accurate, consistent, and geometrically variable 3D US images. The designed scanner was mounted on a counterbalanced stabilizing and mechanical tracking system for determining the US probe orientation, which was assessed using optical tracking. Further exploiting the utility of the motorized scanner, an image-guidance workflow was developed that moved the probe to any identified target within an acquired 3D US image. The complete 3D US guidance system was used to perform mock targeted interventional procedures on a phantom by selecting a target in a 3D US image, navigating to the target, and performing needle insertion using a custom 3D-printed needle applicator guide. Registered postinsertion 3D US images and cone-beam computed tomography (CBCT) images were used to evaluate tip targeting errors when using the motors, tracking system, or mixed navigation approaches. 
Two 3D US image geometries were investigated to assess the accuracy of a small-footprint tilt approach and a large field-of-view hybrid approach for a total of 48 targeted needle insertions. 3D US image quality was evaluated in a healthy volunteer and compared to a commercially available matrix array US probe. RESULTS: A mean positioning error of 1.85 ± 1.33 mm was observed when performing compound joint manipulations with the mechanical tracking system. A combined approach for navigation that incorporated the motorized movement and the in-plane tracking system corrections performed the best, with a mean tip error of 3.77 ± 2.27 mm and 4.27 ± 2.47 mm based on 3D US and CBCT images, respectively. No significant differences were observed between hybrid and tilt image acquisition geometries, with all mean registration errors ≤1.2 mm. 3D US volunteer images resulted in clear reconstruction of clinically relevant anatomy. CONCLUSIONS: A mechanically tracked system with geometrically variable 3D US provides a utility that enables enhanced applicator guidance, placement verification, and improved clinical workflow during focal liver tumor ablation procedures. Evaluations of the tracking accuracy, targeting capabilities, and clinical imaging feasibility of the proposed 3D US system provided evidence for clinical translation. This system could provide a workflow for improving applicator placement and reducing local cancer recurrence during interventional procedures treating liver cancer and has the potential to be expanded to other abdominal interventions and procedures.
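The tip and angular targeting errors reported above reduce to two simple geometric quantities: the Euclidean distance between planned and observed tip positions, and the angle between the planned and observed insertion directions. A minimal sketch of these evaluation metrics (not the authors' exact implementation) is:

```python
import numpy as np

def tip_and_angle_error(planned_tip, planned_dir, actual_tip, actual_dir):
    """Tip distance (same units as the inputs, e.g. mm) and angular
    error (degrees) between a planned and an observed needle pose."""
    tip_err = float(np.linalg.norm(np.asarray(actual_tip, float)
                                   - np.asarray(planned_tip, float)))
    a = np.asarray(planned_dir, float)
    a /= np.linalg.norm(a)
    b = np.asarray(actual_dir, float)
    b /= np.linalg.norm(b)
    # absolute dot product makes the angle insensitive to direction sign
    ang_err = float(np.degrees(np.arccos(np.clip(abs(a @ b), 0.0, 1.0))))
    return tip_err, ang_err
```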


Subjects
Liver Neoplasms, Local Neoplasm Recurrence, Humans, Three-Dimensional Imaging, Liver Neoplasms/diagnostic imaging, Liver Neoplasms/surgery, Imaging Phantoms, Ultrasonography
4.
Med Phys; 47(6): 2413-2426, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32166768

ABSTRACT

PURPOSE: Needle-based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three-dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to manually segment the prostate to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time-consuming and difficult intraoperative process, often occurring while the patient is under sedation (biopsy) or anesthetic (brachytherapy). Minimizing procedure time with a 3D TRUS prostate segmentation method could provide physicians with a quick and accurate prostate segmentation, and allow for an efficient workflow with improved patient throughput to enable faster patient access to care. The purpose of this study was to develop a supervised deep learning-based method to segment the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models to create a generalizable algorithm for needle-based prostate cancer procedures. METHODS: Our proposed method for 3D segmentation involved prediction on two-dimensional (2D) slices sampled radially around the approximate central axis of the prostate, followed by reconstruction into a 3D surface. A 2D U-Net was modified, trained, and validated using images from 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsies and brachytherapy procedures. Modifications to the expansion section of the standard U-Net included the addition of 50% dropouts and the use of transpose convolutions instead of standard upsampling followed by convolution to reduce overfitting and improve performance, respectively. 
Manual contours provided the annotations needed for the training, validation, and testing datasets, with the testing dataset consisting of 20 end-fire and 20 side-fire unseen 3D TRUS images. Since predicting with 2D images has the potential to lose spatial and structural information, comparisons to 3D reconstruction and optimized 3D networks, including 3D V-Net, Dense V-Net, and High-resolution 3D-Net, were performed following an investigation into different loss functions. An extended selection of absolute and signed error metrics was computed, including pixel map comparisons [Dice similarity coefficient (DSC), recall, and precision], volume percent differences (VPD), mean surface distance (MSD), and Hausdorff distance (HD), to assess 3D segmentation accuracy. RESULTS: Overall, our proposed reconstructed modified U-Net performed with a median [first quartile, third quartile] absolute DSC, recall, precision, VPD, MSD, and HD of 94.1 [92.6, 94.9]%, 96.0 [93.1, 98.5]%, 93.2 [88.8, 95.4]%, 5.78 [2.49, 11.50]%, 0.89 [0.73, 1.09] mm, and 2.89 [2.37, 4.35] mm, respectively. When compared to the best-performing optimized 3D network (i.e., 3D V-Net with a Dice plus cross-entropy loss function), our proposed method demonstrated a significant improvement across nearly all metrics. A computation time <0.7 s per prostate was observed, which is a sufficiently short segmentation time for intraoperative implementation. CONCLUSIONS: Our proposed algorithm was able to provide a fast and accurate 3D segmentation across variable 3D TRUS prostate images, enabling a generalizable intraoperative solution for needle-based prostate cancer procedures. This method has the potential to decrease procedure times, supporting the increasing interest in needle-based 3D TRUS approaches.
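The pixel-map comparison metrics reported above (DSC, recall, and precision) can be computed directly from a predicted and a manual binary mask. A minimal sketch, assuming non-empty masks:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice similarity coefficient, recall, and precision between a
    predicted and a ground-truth binary segmentation mask."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    tp = np.logical_and(pred, truth).sum()  # true-positive pixel count
    dsc = 2.0 * tp / (pred.sum() + truth.sum())
    recall = tp / truth.sum()       # fraction of the manual contour recovered
    precision = tp / pred.sum()     # fraction of the prediction that is correct
    return float(dsc), float(recall), float(precision)
```

The surface-based metrics (MSD, HD) and VPD would additionally require the reconstructed 3D surfaces and volumes.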


Subjects
Brachytherapy, Deep Learning, Prostatic Neoplasms, Humans, Computer-Assisted Image Processing, Three-Dimensional Imaging, Male, Prostate/diagnostic imaging, Prostatic Neoplasms/diagnostic imaging, Prostatic Neoplasms/radiotherapy, Ultrasonography
5.
Med Phys; 46(6): 2646-2658, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30994191

ABSTRACT

PURPOSE: Minimally invasive procedures, such as microwave ablation, are becoming first-line treatment options for early-stage liver cancer due to lower complication rates and shorter recovery times than conventional surgical techniques. Although these procedures are promising, one reason preventing widespread adoption is inadequate local tumor ablation leading to observations of higher local cancer recurrence compared to conventional procedures. Poor ablation coverage has been associated with two-dimensional (2D) ultrasound (US) guidance of the therapy needle applicators and has stimulated investigation into the use of three-dimensional (3D) US imaging for these procedures. We have developed a supervised 3D US needle applicator segmentation algorithm using a single user input to augment the addition of 3D US to the current focal liver tumor ablation workflow, with the goals of identifying needle applicators and improving localization efficiency. METHODS: The algorithm is initialized by creating a spherical search space of line segments around a manually chosen seed point that is selected by a user on the needle applicator visualized in a 3D US image. The most probable trajectory is chosen by maximizing the count and intensity of threshold voxels along a line segment and is filtered using the Otsu method to determine the tip location. Homogeneous tissue-mimicking phantom images containing needle applicators were used to optimize the parameters of the algorithm prior to a four-user investigation on retrospective 3D US images of patients who underwent microwave ablation for liver cancer. Trajectory, axis localization, and tip errors were computed based on comparisons to manual segmentations in 3D US images.
RESULTS: Segmentation of needle applicators in ten phantom 3D US images was optimized to median (Q1, Q3) trajectory, axis, and tip errors of 2.1 (1.1, 3.6)°, 1.3 (0.8, 2.1) mm, and 1.3 (0.7, 2.5) mm, respectively, with a mean ± SD segmentation computation time of 0.246 ± 0.007 s. Use of the segmentation method on an in vivo dataset of 16 patient 3D US images resulted in median (Q1, Q3) trajectory, axis, and tip errors of 4.5 (2.4, 5.2)°, 1.9 (1.7, 2.1) mm, and 5.1 (2.2, 5.9) mm based on all users. CONCLUSIONS: Segmentation of needle applicators in 3D US images during minimally invasive liver cancer therapeutic procedures could provide a utility that enables enhanced needle applicator guidance, placement verification, and improved clinical workflow. A semi-automated 3D US needle applicator segmentation algorithm used in vivo demonstrated localization of the visualized trajectory and tip with less than 5° and 5.2 mm errors, respectively, in less than 0.31 s. This offers the ability to assess and adjust needle applicator placements intraoperatively to potentially decrease the observed liver cancer recurrence rates associated with current ablation procedures. Although optimized for deep and oblique-angle needle applicator insertions, this proposed workflow has the potential to be altered for a variety of image-guided minimally invasive procedures to improve localization and verification of therapy needle applicators intraoperatively.
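The trajectory search described above can be sketched as scoring line segments that radiate from the user-selected seed voxel, keeping the direction that maximizes the count and summed intensity of above-threshold voxels. This illustration uses a coarse, deterministic set of 26 integer directions rather than the paper's full spherical search space, and all parameter values are assumptions:

```python
import itertools
import numpy as np

def best_trajectory(volume, seed, length=20, thresh=0.5):
    """Score line segments radiating from a seed voxel and return the
    unit direction with the highest count + sum of bright voxels."""
    # 26 unit directions from the integer offsets in {-1, 0, 1}^3
    offsets = [o for o in itertools.product((-1, 0, 1), repeat=3) if any(o)]
    steps = np.arange(1, length + 1)
    best_dir, best_score = None, -1.0
    for off in offsets:
        d = np.asarray(off, dtype=float)
        d /= np.linalg.norm(d)
        # nearest-neighbor voxels along the candidate segment
        pts = np.round(np.asarray(seed) + steps[:, None] * d).astype(int)
        inside = np.all((pts >= 0) & (pts < volume.shape), axis=1)
        vals = volume[tuple(pts[inside].T)]
        score = float((vals > thresh).sum() + vals.sum())
        if score > best_score:
            best_score, best_dir = score, d
    return best_dir, best_score
```

In the actual algorithm, the intensity profile along the winning line would then be thresholded with the Otsu method to locate the applicator tip.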


Subjects
Ablation Techniques/instrumentation, Liver/diagnostic imaging, Liver/surgery, Needles, Computer-Assisted Surgery/instrumentation, Humans, Imaging Phantoms, Ultrasonography
6.
Med Phys; 44(9): 4708-4723, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28666058

ABSTRACT

PURPOSE: During image-guided prostate biopsy, needles are targeted at tissues suspicious for cancer to obtain specimens for histological examination. Unfortunately, patient motion causes targeting errors when using an MR-transrectal ultrasound (TRUS) fusion approach to augment the conventional biopsy procedure. This study aims to develop an automatic motion correction algorithm approaching the frame rate of an ultrasound system for use in fusion-based prostate biopsy systems. Two modes of operation have been investigated for the clinical implementation of the algorithm: motion compensation using a single user-initiated correction performed prior to biopsy, and real-time continuous motion compensation performed automatically as a background process. METHODS: Retrospective 2D and 3D TRUS patient images acquired prior to biopsy gun firing were registered using an intensity-based algorithm utilizing normalized cross-correlation and Powell's method for optimization. 2D and 3D images were downsampled and cropped to estimate the optimal amount of image information that would perform registrations quickly and accurately. The optimal search order during optimization was also analyzed to avoid local optima in the search space. Error in the algorithm was computed using target registration errors (TREs) from manually identified homologous fiducials in a clinical patient dataset. The algorithm was evaluated for real-time performance using the two different modes of clinical implementation, by way of user-initiated and continuous motion compensation methods, on a tissue-mimicking prostate phantom. RESULTS: After implementation in a TRUS-guided system with an image downsampling factor of 4, the proposed approach resulted in a mean ± std TRE and computation time of 1.6 ± 0.6 mm and 57 ± 20 ms, respectively.
The user-initiated mode performed registrations for in-plane, out-of-plane, and roll motions with computation times of 108 ± 38 ms, 60 ± 23 ms, and 89 ± 27 ms, respectively, and corresponding registration errors of 0.4 ± 0.3 mm, 0.2 ± 0.4 mm, and 0.8 ± 0.5°. The continuous method performed registration significantly faster (P < 0.05) than the user-initiated method, with observed computation times of 35 ± 8 ms, 43 ± 16 ms, and 27 ± 5 ms for in-plane, out-of-plane, and roll motions, respectively, and corresponding registration errors of 0.2 ± 0.3 mm, 0.7 ± 0.4 mm, and 0.8 ± 1.0°. CONCLUSIONS: The presented method supports real-time implementation of motion compensation algorithms in prostate biopsy with clinically acceptable registration errors. Continuous motion compensation demonstrated registration accuracy with submillimeter and subdegree error at computation times under 50 ms. An image registration technique approaching the frame rate of the ultrasound system offers the key advantage of integrating smoothly into the clinical workflow. In addition, this technique could be extended to a variety of image-guided interventional procedures to improve targeting accuracy during diagnosis and treatment.
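The registration described above, normalized cross-correlation (NCC) optimized with Powell's method, can be sketched for the simplest case of a 2D in-plane translation. This is a 2-degree-of-freedom illustration under assumed conventions (the study also handles out-of-plane and roll motion, and applies downsampling and cropping):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.optimize import minimize

def ncc(a, b):
    """Normalized cross-correlation between two equally sized images."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def register_translation(fixed, moving):
    """Recover the (dy, dx) translation aligning `moving` to `fixed`
    by maximizing NCC with Powell's derivative-free method."""
    cost = lambda p: -ncc(fixed, nd_shift(moving, p, order=1, mode="nearest"))
    return minimize(cost, x0=np.zeros(2), method="Powell").x
```

Powell's method suits this cost function because it requires no gradients, which the interpolated image similarity does not provide analytically.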


Subjects
Algorithms, Image-Guided Biopsy, Prostatic Neoplasms/diagnostic imaging, Interventional Ultrasonography, Needle Biopsy, Humans, Three-Dimensional Imaging, Male, Prostate, Retrospective Studies
7.
Appl Spectrosc; 70(3): 485-93, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26819441

ABSTRACT

Four species of bacteria, E. coli, S. epidermidis, M. smegmatis, and P. aeruginosa, were harvested from agar nutrient medium growth plates and suspended in water to create liquid specimens for the testing of a new mounting protocol. Aliquots of 30 µL were deposited on standard nitrocellulose filter paper with a mean 0.45 µm pore size to create highly flat and uniform bacterial pads. The introduction of a laser-based lens-to-sample distance measuring device and a pair of matched off-axis parabolic reflectors for light collection improved both spectral reproducibility and the signal-to-noise ratio of optical emission spectra acquired from the bacterial pads by laser-induced breakdown spectroscopy. A discriminant function analysis and a partial least squares-discriminant analysis both showed improved sensitivity and specificity compared to previous mounting techniques. The behavior of the spectra as a function of suspension concentration and filter coverage was investigated, as was the effect on chemometric cell classification of sterilization via autoclaving.
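The sensitivity and specificity used above to compare the classifiers reduce to counts over the predicted and true class labels. A minimal one-vs-rest sketch of these figures of merit (label names are illustrative):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred, positive):
    """One-vs-rest sensitivity and specificity for a chosen positive
    class, computed from true and predicted label arrays."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    pos = y_true == positive
    tp = np.sum(pos & (y_pred == positive))       # correctly labeled positives
    tn = np.sum(~pos & (y_pred != positive))      # correctly rejected negatives
    return float(tp / pos.sum()), float(tn / (~pos).sum())
```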


Subjects
Escherichia coli/chemistry, Mycobacterium smegmatis/chemistry, Pseudomonas aeruginosa/chemistry, Staphylococcus epidermidis/chemistry, Collodion/chemistry, Discriminant Analysis, Equipment Design, Escherichia coli/classification, Lasers, Least-Squares Analysis, Micropore Filters/microbiology, Mycobacterium smegmatis/classification, Pseudomonas aeruginosa/classification, Reproducibility of Results, Atomic Spectrophotometry/instrumentation, Staphylococcus epidermidis/classification, Suspensions