Results 1 - 5 of 5
1.
Eur Radiol ; 34(11): 7161-7172, 2024 Nov.
Article in English | MEDLINE | ID: mdl-38775950

ABSTRACT

OBJECTIVE: Microwave lung ablation (MWA) is a minimally invasive and inexpensive alternative cancer treatment for patients who are not candidates for surgery or radiotherapy. A major challenge for MWA, however, is its relatively high tumor recurrence rate, caused by incomplete treatment resulting from inaccurate planning. We introduce a patient-specific deep-learning model that accurately predicts post-treatment ablation zones to aid planning and enable effective treatments.
MATERIALS AND METHODS: Our IRB-approved retrospective study consisted of ablations with a single applicator/burn/vendor between 01/2015 and 01/2019. The input data included the pre-procedure computed tomography (CT) scan, ablation power and time, and applicator position. The ground truth ablation zone was segmented from the post-treatment follow-up CT. We applied a novel deformable image registration optimized for ablation scans and an applicator-centric coordinate system for data analysis. Our prediction model was based on the U-Net architecture. Registrations were evaluated using target registration error (TRE); predictions were evaluated using Bland-Altman plots, Dice coefficient, precision, and recall, compared against the applicator vendor's estimates.
RESULTS: The data included 113 unique ablations from 72 patients (median age 57 years, interquartile range (IQR) 49-67; 41 women). We obtained a TRE ≤ 2 mm on 52 ablations. Unlike the vendor's estimate (p < 0.001), our prediction showed no bias relative to the ground truth ablation volumes (p = 0.169) and had smaller limits of agreement (p < 0.001). The Dice score improved by 11%. The model was shown to account for patient-specific in vivo anatomical effects from vessels, the chest wall, the heart, lung boundaries, and fissures.
CONCLUSIONS: We demonstrated a patient-specific deep-learning model that predicts the ablation treatment effect before the procedure, with the potential to improve planning, achieve complete treatments, and reduce tumor recurrence.
CLINICAL RELEVANCE STATEMENT: Our method addresses the current lack of reliable tools for estimating ablation extents, which are required to ensure successful ablation treatments. The potential clinical implications include improved treatment planning, complete treatments, and reduced tumor recurrence.
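As a concrete illustration of the overlap metrics named above (Dice coefficient, precision, recall), the following minimal Python sketch computes them for two binary 3D masks. The array shapes and random masks are placeholders for illustration only, not data or code from the study.

```python
import numpy as np

def mask_overlap_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Dice coefficient, precision, and recall for two binary 3D masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()            # true-positive voxels
    denom = pred.sum() + truth.sum()
    dice = 2.0 * tp / denom if denom else 0.0         # 2|A∩B| / (|A| + |B|)
    precision = tp / pred.sum() if pred.sum() else 0.0
    recall = tp / truth.sum() if truth.sum() else 0.0
    return {"dice": dice, "precision": precision, "recall": recall}

# Illustrative call with random masks standing in for a predicted ablation
# zone and the zone segmented from follow-up CT.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 32)) > 0.5
truth = rng.random((64, 64, 32)) > 0.5
print(mask_overlap_metrics(pred, truth))
```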


Subject(s)
Deep Learning , Lung Neoplasms , Tomography, X-Ray Computed , Humans , Female , Male , Lung Neoplasms/surgery , Lung Neoplasms/diagnostic imaging , Retrospective Studies , Tomography, X-Ray Computed/methods , Aged , Middle Aged , Ablation Techniques/methods , Microwaves/therapeutic use , Lung/diagnostic imaging , Lung/surgery
2.
J Vasc Interv Radiol ; 33(11): 1408-1415.e3, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35940363

ABSTRACT

PURPOSE: To evaluate a transmission optical spectroscopy instrument for rapid ex vivo assessment of core needle cancer biopsies (CNBs) at the point of care.
MATERIALS AND METHODS: CNBs from surgically resected renal tumors and nontumor regions were scanned on their sampling trays with a custom spectroscopy instrument. After extracting principal spectral components, machine learning was used to train logistic regression, support vector machine, and random decision forest (RF) classifiers on 80% of the randomized and stratified data. The algorithms were evaluated on the remaining 20% of the data set, held out during training. Binary classification (tumor/nontumor) was performed based on a decision threshold. Multinomial classification was also performed to differentiate between subtypes of renal cell carcinoma (RCC) and to account for potential confounding effects from fat, blood, and necrotic tissue. Classifiers were compared on sensitivity, specificity, and positive predictive value (PPV) relative to a histopathologic standard.
RESULTS: A total of 545 CNBs from 102 patients were analyzed, yielding 5,583 spectra after outlier exclusion. At the individual-spectrum level, the best-performing algorithm was RF, with sensitivities of 96% and 92% and specificities of 90% and 89% for the binary and multiclass analyses, respectively. At the full-CNB level, the RF algorithm also showed the highest sensitivity and specificity (93% and 91%, respectively). Among RCC subtypes, the highest sensitivity and PPV were attained for the clear cell (93.5%) and chromophobe (98.2%) subtypes, respectively.
CONCLUSIONS: Ex vivo spectroscopy imaging paired with machine learning can accurately characterize renal mass CNBs at the time of tissue acquisition.
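The pipeline described above (principal spectral components feeding a random decision forest, evaluated on a stratified 20% hold-out) can be sketched with scikit-learn roughly as follows. The synthetic spectra, component count, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in data: rows are spectra, columns are wavelength bins.
rng = np.random.default_rng(42)
X = rng.normal(size=(600, 256))        # hypothetical spectra
y = rng.integers(0, 2, size=600)       # 1 = tumor, 0 = nontumor

# Stratified 80%/20% split, mirroring the hold-out described above.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Principal spectral components feeding a random forest classifier.
model = make_pipeline(PCA(n_components=10),
                      RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

print("sensitivity:", recall_score(y_test, y_pred))
print("specificity:", recall_score(y_test, y_pred, pos_label=0))
print("PPV:", precision_score(y_test, y_pred))
```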


Subject(s)
Carcinoma, Renal Cell , Kidney Neoplasms , Humans , Biopsy, Large-Core Needle/methods , Point-of-Care Systems , Carcinoma, Renal Cell/diagnostic imaging , Carcinoma, Renal Cell/surgery , Carcinoma, Renal Cell/pathology , Kidney Neoplasms/diagnostic imaging , Kidney Neoplasms/pathology , Spectrum Analysis
3.
Med Phys ; 48(11): 7154-7171, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34459001

ABSTRACT

PURPOSE: Automatic localization of pneumonia on chest X-rays (CXRs) is highly desirable, both as an interpretive aid to the radiologist and for timely diagnosis of the disease. However, pneumonia's amorphous appearance on CXRs and the complexity of normal chest anatomy present key challenges that hinder accurate localization. Existing studies in this area either are not optimized to preserve the spatial information of the abnormality or depend on expensive expert-annotated bounding boxes. We present a novel generative adversarial network (GAN)-based machine learning approach to this problem that is weakly supervised (it does not require any location annotations), was trained to retain spatial information, and produces pixel-wise abnormality maps highlighting regions of abnormality (as opposed to bounding boxes around the abnormality).
METHODS: Our method is based on the Wasserstein GAN framework and is, to the best of our knowledge, the first application of GANs to this problem. Specifically, from an abnormal CXR as input, we generate the corresponding pseudo normal CXR image as output. The pseudo normal CXR is the "hypothetical" normal: the appearance the same abnormal CXR would have if it had no abnormalities. We surmise that the difference between the pseudo normal and the abnormal CXR highlights the pixels suspected of containing pneumonia and hence serves as our output abnormality map. We trained our algorithm on an "unpaired" data set of abnormal and normal CXRs and did not require any location annotations such as bounding boxes or segmentations of abnormal regions. Furthermore, we incorporated additional prior knowledge and constraints into the model and showed that they help improve localization performance. We validated the model on a data set of 14,184 CXRs from the Radiological Society of North America pneumonia detection challenge.
RESULTS: We evaluated our methods by comparing the generated abnormality maps with radiologist-annotated bounding boxes using receiver operating characteristic (ROC) analysis, image similarity metrics such as normalized cross-correlation and mutual information, and the abnormality detection rate. We also present visual examples of the abnormality maps covering various scenarios of abnormality occurrence. The results demonstrate the ability to highlight regions of abnormality, with the best method achieving an ROC area under the curve (AUC) of 0.77 and a detection rate of 85%. The GAN tended to perform better as prior knowledge and constraints were incorporated into the model.
CONCLUSIONS: We presented a novel GAN-based approach for localizing pneumonia on CXRs that (1) does not require expensive hand-annotated location ground truth and (2) was trained to produce abnormality maps at the pixel level as opposed to bounding boxes. We demonstrated the efficacy of our methods with quantitative and qualitative results.
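The inference step described above, subtracting the generated pseudo normal CXR from the abnormal input to obtain a pixel-wise abnormality map, can be sketched as follows. The toy smoothing "generator" and the 0.5 threshold are placeholders standing in for the trained Wasserstein GAN generator and a tuned operating point; nothing here is the authors' code.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def abnormality_map(abnormal_cxr: np.ndarray, generator) -> np.ndarray:
    """Pixel-wise abnormality map: |abnormal - pseudo normal|, scaled to [0, 1].

    `generator` is any callable mapping an abnormal CXR to a pseudo normal
    CXR; in the paper this role is played by the trained GAN generator.
    """
    pseudo_normal = generator(abnormal_cxr)
    diff = np.abs(abnormal_cxr - pseudo_normal)
    span = diff.max() - diff.min()
    return (diff - diff.min()) / span if span > 0 else np.zeros_like(diff)

def toy_generator(img: np.ndarray) -> np.ndarray:
    # Placeholder for the trained generator: heavy smoothing "removes" the lesion.
    return uniform_filter(img, size=15)

cxr = np.zeros((64, 64))
cxr[20:30, 25:35] = 1.0                 # synthetic opacity on a blank "CXR"
amap = abnormality_map(cxr, toy_generator)
print((amap > 0.5).sum(), "pixels flagged as suspected abnormality")
```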


Subject(s)
Pneumonia , Algorithms , Humans , Pneumonia/diagnostic imaging , ROC Curve , Radiography , X-Rays
4.
Article in English | MEDLINE | ID: mdl-31911737

ABSTRACT

Categorization of radiological images according to characteristics such as modality, scanner parameters, and body part is important for quality control, clinical efficiency, and research. The metadata associated with images stored in the DICOM format reliably captures scanner settings such as tube current in CT or echo time (TE) in MRI. Other parameters, such as image orientation, body part examined, and presence of intravenous contrast, are not inherent to the scanner settings and therefore require user input, which is prone to human error. There is a general need for automated approaches that appropriately categorize images, even by parameters that are not inherent to the scanner settings; such approaches should be able to process both planar 2D images and full 3D scans. In this work, we present a deep learning based approach for automatically detecting one such parameter: the presence or absence of intravenous contrast in 3D MRI scans. Contrast is injected manually by radiology staff during the imaging examination, and its presence cannot be automatically recorded in the DICOM header by the scanner. Our classifier is a convolutional neural network (CNN) based on the ResNet architecture. Our data consisted of 1000 breast MRI scans (500 with and 500 without intravenous contrast), used for training and testing the CNN with an 80%/20% split, respectively. The labels for the scans were obtained from the series descriptions created by certified radiological technologists. Preliminary results of our classifier are very promising, with an area under the ROC curve (AUC) of 0.98 and sensitivity and specificity of 1.0 and 0.9, respectively (at the optimal ROC cut-off point), demonstrating potential usefulness in both clinical and research settings.
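To make concrete the distinction drawn above between scanner-derived DICOM metadata and technologist-entered fields such as the series description, here is a hedged Python sketch that mines a weak contrast/no-contrast label from the SeriesDescription tag using pydicom. The keyword list is hypothetical; real series-naming conventions vary by site, and the study's labels came from technologists' descriptions rather than this particular heuristic.

```python
import pydicom

# Hypothetical keyword list; actual naming conventions differ between sites.
CONTRAST_HINTS = ("post", "+c", "gad", "contrast", "sub")

def label_from_series_description(dicom_path: str) -> int:
    """Return 1 if the series description suggests intravenous contrast, else 0.

    Series descriptions are free text entered by technologists, so any label
    mined this way is weak; scanner-derived tags such as EchoTime (MRI TE) or
    XRayTubeCurrent (CT tube current) are, by contrast, reliably populated.
    """
    ds = pydicom.dcmread(dicom_path, stop_before_pixels=True)
    description = str(ds.get("SeriesDescription", "")).lower()
    return int(any(hint in description for hint in CONTRAST_HINTS))

# Example (path is a placeholder):
# print(label_from_series_description("breast_mri_series_0001.dcm"))
```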

5.
J Endocr Soc ; 3(9): 1693-1706, 2019 Sep 01.
Article in English | MEDLINE | ID: mdl-31528829

ABSTRACT

CONTEXT: Pituitary adenomas (PAs) are often irregularly shaped, particularly after treatment. There are no standardized radiographic criteria for assessing treatment response, which substantially complicates interpretation of prospective outcome data. Existing imaging frameworks for intracranial tumors assume perfectly spherical targets and may be suboptimal.
OBJECTIVE: To compare a three-dimensional (3D) volumetric approach against accepted surrogate measurements for assessing PA post-treatment response (PTR).
DESIGN: Retrospective review of patients with available pre- and post-radiotherapy (RT) imaging. A neuroradiologist determined tumor sizes in one dimension (1D) per Response Evaluation Criteria in Solid Tumors (RECIST), in two dimensions (2D) per Response Assessment in Neuro-Oncology (RANO) criteria, and as 3D estimates assuming a perfect sphere or a perfect ellipsoid. Each tumor was also manually segmented for 3D volumetric measurement. Sphericity was calculated using the Hakon Wadell method.
SETTING: Tertiary cancer center.
PATIENTS OR OTHER PARTICIPANTS: Patients (n = 34; median age 50 years; 50% male) with PA and MRI scans before and after sellar RT.
INTERVENTIONS: Patients received sellar RT for intact or surgically resected lesions.
MAIN OUTCOME MEASURES: Radiographic PTR, defined as the percent change in tumor size.
RESULTS: Using 3D volumetrics, mean sphericity was 0.63 pre-RT and 0.60 post-RT. With all approaches, most patients had stable disease on the post-RT scan. PTR for the 1D, 2D, and 3D spherical measurements correlated moderately well with 3D volumetrics (e.g., for 1D: 0.66, P < 0.0001) and was superior to the 3D ellipsoid estimate. Intraclass correlation coefficients demonstrated moderate to good reliability for the 1D, 2D, and 3D sphere measurements (P < 0.001); the 3D ellipsoid was inferior (P = 0.009). 3D volumetrics identified more potential partially responding and progressive lesions.
CONCLUSIONS: Although PAs are irregularly shaped, 1D and 2D approaches are adequately correlated with volumetric assessment.
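For readers who want the measurement conventions above in executable form, the following Python sketch computes the 1D spherical and 3D ellipsoid volume estimates, the true volume from a binary segmentation, and Wadell sphericity, defined as pi^(1/3) * (6V)^(2/3) / A for volume V and surface area A. It assumes scikit-image is available and uses a synthetic ball rather than any patient data; it is a minimal illustration, not the study's analysis code.

```python
import numpy as np
from skimage import measure

def sphere_volume(longest_diameter_mm: float) -> float:
    """1D (RECIST-style) estimate: volume of a sphere with the longest diameter."""
    r = longest_diameter_mm / 2.0
    return 4.0 / 3.0 * np.pi * r ** 3

def ellipsoid_volume(d1_mm: float, d2_mm: float, d3_mm: float) -> float:
    """3D ellipsoid estimate from three orthogonal diameters: (pi/6) * d1 * d2 * d3."""
    return np.pi / 6.0 * d1_mm * d2_mm * d3_mm

def volumetrics_and_sphericity(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """True volume and Wadell sphericity from a binary 3D segmentation.

    Wadell sphericity = pi^(1/3) * (6V)^(2/3) / A; a perfect sphere scores 1.0.
    """
    voxel_volume = float(np.prod(spacing))
    volume = mask.sum() * voxel_volume
    verts, faces, _, _ = measure.marching_cubes(mask.astype(float), level=0.5,
                                                spacing=spacing)
    area = measure.mesh_surface_area(verts, faces)
    sphericity = np.pi ** (1.0 / 3.0) * (6.0 * volume) ** (2.0 / 3.0) / area
    return volume, sphericity

# Toy example: a voxelized ball of radius 10 should have sphericity near 1.
z, y, x = np.ogrid[-15:16, -15:16, -15:16]
ball = (x ** 2 + y ** 2 + z ** 2) <= 10 ** 2
print(volumetrics_and_sphericity(ball))
```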
