2.
Ann Surg Oncol ; 2024 May 03.
Article in English | MEDLINE | ID: mdl-38700799

ABSTRACT

BACKGROUND: Rectal tumors display varying degrees of response to total neoadjuvant therapy (TNT). We evaluated the performance of a convolutional neural network (CNN) in interpreting endoscopic images of either a non-complete response to TNT or local regrowth during watch-and-wait surveillance. METHODS: Endoscopic images from stage II/III rectal cancers treated with TNT from 2012 to 2020 at a single institution were retrospectively reviewed. Images were labeled as Tumor or No Tumor based on endoscopy timing (before, during, or after treatment) and the tumor's endoluminal response. A CNN was trained using the ResNet-50 architecture. The area under the curve (AUC) was analyzed during training and for two test sets. The main test set included images of tumors treated with TNT; the other contained images of local regrowth. The model's performance was compared with that of sixteen surgeons and surgical trainees who evaluated 119 images for evidence of tumor. Fleiss' kappa was calculated by respondent experience level. RESULTS: A total of 2717 images from 288 patients were included; 1407 (51.8%) contained tumor. The AUC was 0.99, 0.98, and 0.92 for the training, main test, and local regrowth test sets, respectively. The model performed on par with surgeons of all experience levels for the main test set, with good interobserver agreement (κ = 0.71-0.81). All groups outperformed the model in identifying tumor in images of local regrowth, where interobserver agreement was fair to moderate (κ = 0.24-0.52). CONCLUSIONS: A highly accurate CNN matched the performance of colorectal surgeons in identifying a non-complete response to TNT. However, the model demonstrated suboptimal accuracy when analyzing images of local regrowth.
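The interobserver-agreement statistic quoted above can be made concrete with a short sketch. This is a generic pure-Python implementation of Fleiss' kappa for a Tumor/No Tumor rating task; the rater counts below are illustrative, not the study's data.

```python
# Sketch: Fleiss' kappa for multi-rater agreement on categorical labels.
# Each row gives, for one image, how many raters chose each category.

def fleiss_kappa(ratings):
    """ratings: list of per-image category counts, each row summing to the
    number of raters n (e.g., [Tumor votes, No Tumor votes])."""
    N = len(ratings)                      # number of images
    n = sum(ratings[0])                   # raters per image
    k = len(ratings[0])                   # number of categories
    # Overall proportion of assignments to each category
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Per-image observed agreement
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    P_bar = sum(P_i) / N                  # mean observed agreement
    P_e = sum(p * p for p in p_j)         # chance agreement
    return (P_bar - P_e) / (1 - P_e)

# 3 hypothetical raters, 2 categories, 4 hypothetical images
print(round(fleiss_kappa([[3, 0], [0, 3], [2, 1], [3, 0]]), 3))  # prints 0.625
```

Values around 0.75 (as in the main test set) indicate substantial agreement; values around 0.3 (as for local regrowth) indicate only fair agreement.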

3.
BJR Artif Intell ; 1(1): ubae004, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38476956

ABSTRACT

Objectives: Auto-segmentation promises greater speed and lower inter-reader variability than manual segmentation in radiation oncology clinical practice. This study aims to implement and evaluate the accuracy of the auto-segmentation algorithm "Masked Image modeling using vision Transformers (SMIT)" for neck nodal metastases on longitudinal T2-weighted (T2w) MR images in oropharyngeal squamous cell carcinoma (OPSCC) patients. Methods: This prospective clinical trial study included 123 human papillomavirus-positive (HPV+) OPSCC patients who received concurrent chemoradiotherapy. T2w MR images were acquired at 3 T at pre-treatment (pre-Tx, week 0) and at intra-treatment (intra-Tx) weeks 1-3. Manual delineations of metastatic neck nodes from the 123 OPSCC patients were used for the SMIT auto-segmentation, and total tumor volumes were calculated. Standard statistical analyses compared contour volumes from SMIT vs manual segmentation (Wilcoxon signed-rank test [WSRT]), and Spearman's rank correlation coefficients (ρ) were computed. Segmentation accuracy was evaluated on the test data set using the Dice similarity coefficient (DSC). P-values <0.05 were considered significant. Results: There was no significant difference between manual and SMIT-delineated tumor volumes at pre-Tx (8.68 ± 7.15 vs 8.38 ± 7.01 cm3, P = 0.26 [WSRT]), and the Bland-Altman method established the limits of agreement as -1.71 to 2.31 cm3, with a mean difference of 0.30 cm3. SMIT and manually delineated tumor volume estimates were highly correlated (ρ = 0.84-0.96, P < 0.001). The mean DSC values were 0.86, 0.85, 0.77, and 0.79 at pre-Tx and intra-Tx weeks 1-3, respectively. Conclusions: The SMIT algorithm provides sufficient segmentation accuracy for oncological applications in HPV+ OPSCC. Advances in knowledge: First evaluation of auto-segmentation with SMIT using longitudinal T2w MRI in HPV+ OPSCC.
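The Bland-Altman limits of agreement reported above (mean difference 0.30 cm3, limits -1.71 to 2.31 cm3) follow the standard mean ± 1.96·SD rule applied to the paired volume differences. A minimal sketch, using made-up volume pairs rather than the study's measurements:

```python
# Sketch: Bland-Altman limits of agreement between manual and auto-segmented
# tumor volumes. The volume lists below are illustrative toy values.
import statistics

def bland_altman(manual, auto):
    """Return (lower limit, mean difference, upper limit) in the units of
    the inputs, using mean_diff +/- 1.96 * SD of the paired differences."""
    diffs = [m - a for m, a in zip(manual, auto)]
    mean_diff = statistics.mean(diffs)
    sd = statistics.stdev(diffs)          # sample SD of the differences
    return mean_diff - 1.96 * sd, mean_diff, mean_diff + 1.96 * sd

manual = [8.1, 6.5, 12.0, 9.4, 7.9]       # hypothetical volumes, cm^3
auto   = [7.8, 6.4, 11.5, 9.0, 7.7]
low, mean_diff, high = bland_altman(manual, auto)
print(round(low, 2), round(mean_diff, 2), round(high, 2))  # → -0.01 0.3 0.61
```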

4.
Phys Imaging Radiat Oncol ; 29: 100542, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38369989

ABSTRACT

Background and purpose: Objective assessment of delivered radiotherapy (RT) to thoracic organs requires fast and accurate deformable dose mapping. The aim of this study was to implement and evaluate an artificial intelligence (AI) deformable image registration (DIR) and organ segmentation-based AI dose mapping (AIDA) applied to the esophagus and the heart. Materials and methods: AIDA metrics were calculated for 72 locally advanced non-small cell lung cancer patients treated with concurrent chemo-RT to 60 Gy in 2 Gy fractions in an automated pipeline. The pipeline steps were: (i) automated rigid alignment and cropping of the planning CT to the week 1 and week 2 cone-beam CT (CBCT) fields-of-view, (ii) AI segmentation on the CBCTs, and (iii) AI-DIR-based dose mapping to compute dose metrics. AIDA dose metrics were compared to the planned dose and to manual contour dose mapping (manual DA). Results: AIDA required ~2 min/patient. Esophagus and heart segmentations were generated with a mean Dice similarity coefficient (DSC) of 0.80 ± 0.15 and 0.94 ± 0.05, and a Hausdorff distance at the 95th percentile (HD95) of 3.9 ± 3.4 mm and 14.1 ± 8.3 mm, respectively. AIDA heart dose was significantly lower than the planned heart dose (p = 0.04). Larger dose deviations (≥1 Gy) were more frequently observed between AIDA and the planned dose (N = 26) than with manual DA (N = 6). Conclusions: Rapid estimation of RT dose to thoracic tissues from CBCT is feasible with AIDA. AIDA-derived metrics and segmentations were similar to manual DA, motivating the use of AIDA for RT applications.

5.
IEEE Trans Med Imaging ; 43(3): 916-927, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37874704

ABSTRACT

Directionally sensitive radiomic features, including the histogram of oriented gradients (HOG), have been shown to provide objective and quantitative measures for predicting disease outcomes in multiple cancers. However, radiomic features are sensitive to imaging variabilities, including acquisition differences, imaging artifacts, and noise, making them impractical for use in the clinic to inform patient care. We treat the problem of extracting robust local directionality features by mapping, via optimal transport, a given local image patch to an iso-intense patch of its mean. We decompose the transport map into sub-work costs, each transporting in a different direction. To test our approach, we evaluated its ability to quantify tumor heterogeneity from magnetic resonance imaging (MRI) scans of brain glioblastoma multiforme, computed tomography (CT) scans of head and neck squamous cell carcinoma, and longitudinal CT scans of lung cancer patients treated with immunotherapy. By considering the entropy of the extracted local directionality within tumor regions, we found that patients with higher entropy in their images had significantly worse overall survival in all three datasets, indicating that tumors whose images exhibit flows in many directions may be more malignant, possibly reflecting high tumor histologic grade or disorganization. Furthermore, by comparing changes in entropy longitudinally between two imaging time points, we found that a reduction in entropy from baseline CT was associated with longer overall survival (hazard ratio = 1.95, 95% confidence interval 1.4-2.8, p = 1.65e-5). The proposed method provides a robust, training-free approach to quantify the local directionality contained in images.
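The survival analysis above hinges on the Shannon entropy of the local-directionality distribution: flows spread over many directions give high entropy, while a single dominant direction gives low entropy. A toy sketch with an assumed 8-bin direction histogram (not derived from patient images):

```python
# Sketch: Shannon entropy of a direction histogram as a heterogeneity measure.
import math

def directional_entropy(hist):
    """Entropy (bits) of a histogram of local flow directions."""
    total = sum(hist)
    probs = [h / total for h in hist if h > 0]
    return -sum(p * math.log2(p) for p in probs)

uniform = [1] * 8                        # flows in many directions -> max entropy
peaked  = [8, 0, 0, 0, 0, 0, 0, 0]       # one dominant direction -> zero entropy
print(directional_entropy(uniform))      # → 3.0
```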


Subject(s)
Lung Neoplasms, X-Ray Computed Tomography, Humans, X-Ray Computed Tomography/methods, Lung Neoplasms/pathology, Magnetic Resonance Imaging
6.
Med Phys ; 50(8): 4758-4774, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37265185

ABSTRACT

BACKGROUND: Adaptive radiation treatment (ART) for locally advanced pancreatic cancer (LAPC) requires consistently accurate segmentation of the extremely mobile gastrointestinal (GI) organs at risk (OAR), including the stomach, duodenum, and large and small bowel. Also, due to the lack of sufficiently accurate and fast deformable image registration (DIR), accumulated dose to the GI OARs is currently only approximated, further limiting the ability to more precisely adapt treatments. PURPOSE: To develop a 3-D Progressively refined joint Registration-Segmentation (ProRSeg) deep network to deformably align and segment treatment-fraction magnetic resonance images (MRIs), then evaluate segmentation accuracy, registration consistency, and feasibility for OAR dose accumulation. METHODS: ProRSeg was trained using five-fold cross-validation with 110 T2-weighted MRIs acquired at five treatment fractions from 10 different patients, ensuring that scans from the same patient were not placed in both training and testing folds. Segmentation accuracy was measured using the Dice similarity coefficient (DSC) and Hausdorff distance at the 95th percentile (HD95). Registration consistency was measured using the coefficient of variation (CV) in displacement of OARs. Statistical comparisons to other deep learning and iterative registration methods were done using the Kruskal-Wallis test, followed by pair-wise comparisons with Bonferroni correction applied for multiple testing. Ablation tests and accuracy comparisons against multiple methods were done. Finally, the applicability of ProRSeg to segment cone-beam CT (CBCT) scans was evaluated on a publicly available dataset of 80 scans using five-fold cross-validation. RESULTS: ProRSeg processed 3D volumes (128 × 192 × 128) in 3 s on an NVIDIA Tesla V100 GPU. Its segmentations were significantly more accurate (p < 0.001) than the compared methods, achieving a DSC of 0.94 ± 0.02 for liver, 0.88 ± 0.04 for large bowel, 0.78 ± 0.03 for small bowel, and 0.82 ± 0.04 for stomach-duodenum from MRI. ProRSeg achieved a DSC of 0.72 ± 0.01 for small bowel and 0.76 ± 0.03 for stomach-duodenum on the public CBCT dataset. ProRSeg registrations resulted in the lowest CV in displacement (stomach-duodenum CVx: 0.75%, CVy: 0.73%, CVz: 0.81%; small bowel CVx: 0.80%, CVy: 0.80%, CVz: 0.68%; large bowel CVx: 0.71%, CVy: 0.81%, CVz: 0.75%). ProRSeg-based dose accumulation accounting for intra-fraction (pre-treatment to post-treatment MRI scan) and inter-fraction motion showed that organ dose constraints were violated in four patients for stomach-duodenum and in three patients for small bowel. Study limitations include the lack of independent testing and of ground-truth phantom datasets to measure dose accumulation accuracy. CONCLUSIONS: ProRSeg produced more accurate and consistent GI OAR segmentation and DIR of MRIs and CBCTs compared with multiple methods. Preliminary results indicate feasibility for OAR dose accumulation using ProRSeg.
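The Dice similarity coefficient (DSC) used throughout these evaluations is simple to state in code. A minimal sketch on toy binary masks, not actual MRI segmentations:

```python
# Sketch: Dice similarity coefficient between two binary masks.
import numpy as np

def dice(pred, ref):
    """DSC = 2|A ∩ B| / (|A| + |B|); returns 1.0 for two empty masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])   # toy predicted mask
ref  = np.array([[1, 0, 0], [0, 1, 1]])   # toy reference mask
print(round(dice(pred, ref), 3))          # 2*2/(3+3) → 0.667
```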


Subject(s)
Computer-Assisted Image Processing, Organs at Risk, Humans, Organs at Risk/diagnostic imaging, Computer-Assisted Image Processing/methods, Cone-Beam Computed Tomography/methods, Magnetic Resonance Imaging/methods, Computer-Assisted Radiotherapy Planning/methods
7.
Cancers (Basel) ; 15(9)2023 Apr 30.
Article in English | MEDLINE | ID: mdl-37174039

ABSTRACT

Cancer care increasingly relies on imaging for patient management. The two most common cross-sectional imaging modalities in oncology are computed tomography (CT) and magnetic resonance imaging (MRI), which provide high-resolution anatomic and physiological imaging. Here we summarize recent applications of rapidly advancing artificial intelligence (AI) in CT and MRI oncological imaging, addressing the benefits and challenges of the resulting opportunities with examples. Major challenges remain, such as how best to integrate AI developments into clinical radiology practice, and the rigorous assessment of the accuracy and reliability of quantitative CT and MR imaging data for clinical utility and research integrity in oncology. Such challenges necessitate an evaluation of the robustness of imaging biomarkers to be included in AI developments, a culture of data sharing, and the cooperation of knowledgeable academics with vendor scientists and companies operating in the radiology and oncology fields. Herein, we illustrate a few challenges and solutions of these efforts using novel methods for synthesizing different contrast-modality images, auto-segmentation, and image reconstruction, with examples from lung CT as well as abdomen, pelvis, and head and neck MRI. The imaging community must embrace the need for quantitative CT and MRI metrics beyond lesion size measurement. AI methods for the extraction and longitudinal tracking of imaging metrics from registered lesions and for understanding the tumor environment will be invaluable for interpreting disease status and treatment efficacy. This is an exciting time to work together to move the imaging field forward with narrow AI-specific tasks. New AI developments using CT and MRI datasets will be used to improve the personalized management of cancer patients.

8.
Med Phys ; 50(8): 4854-4870, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36856092

ABSTRACT

BACKGROUND: Dose-escalation radiotherapy enables increased control of prostate cancer (PCa) but requires segmentation of dominant index lesions (DIL). This motivates the development of automated methods for fast, accurate, and consistent segmentation of PCa DIL. PURPOSE: To construct and validate a model for deep-learning-based automatic segmentation of PCa DIL, defined by Gleason score (GS) ≥3+4, from MR images applied to MR-guided radiation therapy, and to validate the generalizability of the constructed models across scanner and acquisition differences. METHODS: Five deep-learning networks were evaluated on apparent diffusion coefficient (ADC) MRI from 500 lesions in 365 patients: internal training Dataset 1 (156 lesions in 125 patients, 1.5 Tesla GE MR with endorectal coil), testing using Dataset 1 (35 lesions in 26 patients), external ProstateX Dataset 2 (299 lesions in 204 patients, 3 Tesla Siemens MR), and internal inter-rater Dataset 3 (10 lesions in 10 patients, 3 Tesla Philips MR). The networks included: a multiple resolution residually connected network (MRRN), MRRN regularized in training with deep supervision implemented into the last convolutional block (MRRN-DS), Unet, Unet++, ResUnet, and fast panoptic segmentation (FPSnet) as well as fast panoptic segmentation with smoothed labels (FPSnet-SL). Models were evaluated by volumetric DIL segmentation accuracy using the Dice similarity coefficient (DSC) and the balanced F1 measure of detection accuracy, as a function of lesion aggressiveness and size (Datasets 1 and 2), and by accuracy with respect to two raters (Dataset 3). Upon acceptance for publication, segmentation models will be made available in an open-source GitHub repository. RESULTS: In general, MRRN-DS segmented tumors more accurately than the other methods on the testing datasets. MRRN-DS significantly outperformed ResUnet in Dataset 2 (DSC of 0.54 vs. 0.44, p < 0.001) and Unet++ in Dataset 3 (DSC of 0.45, p = 0.04). FPSnet-SL was similarly accurate to MRRN-DS in Dataset 2 (p = 0.30), but MRRN-DS significantly outperformed FPSnet and FPSnet-SL in both Dataset 1 (0.60 vs. 0.51 [p = 0.01] and 0.54 [p = 0.049], respectively) and Dataset 3 (0.45 vs. 0.06 [p = 0.002] and 0.24 [p = 0.004], respectively). Finally, MRRN-DS produced slightly higher agreement with an experienced radiologist than the agreement between two radiologists in Dataset 3 (DSC of 0.45 vs. 0.41). CONCLUSIONS: MRRN-DS generalized to different MR testing datasets acquired using different scanners. It produced slightly higher agreement with an experienced radiologist than that between two radiologists. Finally, MRRN-DS more accurately segmented aggressive lesions, which are generally candidates for ablative radiation doses.


Asunto(s)
Aprendizaje Profundo , Neoplasias de la Próstata , Oncología por Radiación , Masculino , Humanos , Neoplasias de la Próstata/diagnóstico por imagen , Neoplasias de la Próstata/radioterapia , Imagen por Resonancia Magnética , Radiólogos
9.
Med Phys ; 50(5): 3066-3075, 2023 May.
Article in English | MEDLINE | ID: mdl-36808107

ABSTRACT

BACKGROUND: Gastrointestinal (GI) tract motility is one of the main sources of intra-/inter-fraction variability and uncertainty in radiation therapy for abdominal targets. Models of GI motility can improve the assessment of delivered dose and contribute to the development, testing, and validation of deformable image registration (DIR) and dose-accumulation algorithms. PURPOSE: To implement GI tract motion in the 4D extended cardiac-torso (XCAT) digital phantom of human anatomy. MATERIALS AND METHODS: Motility modes that exhibit large amplitude changes in the diameter of the GI tract and may persist over timescales comparable to online adaptive planning and radiotherapy delivery were identified from a literature review. Search criteria included amplitude changes larger than planning risk volume expansions and durations on the order of tens of minutes. The following modes were identified: peristalsis, rhythmic segmentation, high-amplitude propagating contractions (HAPCs), and tonic contractions. Peristalsis and rhythmic segmentation were modeled by traveling and standing sinusoidal waves; HAPCs and tonic contractions were modeled by traveling and stationary Gaussian waves. Wave dispersion in the temporal and spatial domains was implemented by linear, exponential, and inverse power law functions. Modeling functions were applied to the control points of the nonuniform rational B-spline surfaces defined in the reference XCAT library. GI motility was combined with the cardiac and respiratory motions available in the standard 4D-XCAT phantom. Default model parameters were estimated from the analysis of cine MRI acquisitions in 10 patients treated on a 1.5 T MR-linac. RESULTS: We demonstrate the ability to generate realistic 4D multimodal images that simulate GI motility combined with respiratory and cardiac motion. All modes of motility except tonic contractions were observed in the analysis of our cine MRI acquisitions; peristalsis was the most common. Default parameters estimated from cine MRI were used as initial values for simulation experiments. We show that in patients undergoing stereotactic body radiotherapy for abdominal targets, the effects of GI motility can be comparable to or larger than the effects of respiratory motion. CONCLUSION: The digital phantom provides realistic models to aid medical imaging and radiation therapy research. The addition of GI motility will further contribute to the development, testing, and validation of DIR and dose-accumulation algorithms for MR-guided radiotherapy.
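The traveling- and standing-sinusoid models described above can be sketched as a radius modulation along the axis of a tubular GI segment. All parameter values below (amplitude, wavelength, speed, period) are illustrative placeholders, not the phantom's defaults:

```python
# Sketch: sinusoidal GI motility models applied to a tube radius r0.
import math

def radius(z_cm, t_s, r0=1.5, amp=0.4, wavelength=8.0, speed=0.5):
    """Peristalsis as a traveling wave: r(z,t) = r0 * (1 + amp*sin(k*z - w*t)),
    with the contraction moving along the tract at `speed` cm/s."""
    k = 2 * math.pi / wavelength
    w = k * speed
    return r0 * (1 + amp * math.sin(k * z_cm - w * t_s))

def radius_standing(z_cm, t_s, r0=1.5, amp=0.4, wavelength=8.0, period=12.0):
    """Rhythmic segmentation as the standing-wave analogue: fixed nodes,
    radius oscillating in place with the given period."""
    k = 2 * math.pi / wavelength
    return r0 * (1 + amp * math.sin(k * z_cm) * math.cos(2 * math.pi * t_s / period))
```

In the phantom, an analogous modulation is applied to the B-spline control points of the organ surfaces rather than to an idealized tube.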


Asunto(s)
Algoritmos , Imagen por Resonancia Cinemagnética , Humanos , Fantasmas de Imagen , Simulación por Computador , Tracto Gastrointestinal , Imagen por Resonancia Magnética/métodos
10.
Adv Radiat Oncol ; 8(1): 100916, 2023.
Article in English | MEDLINE | ID: mdl-36711062

ABSTRACT

Purpose: Pseudoprogression mimicking recurrent glioblastoma remains a diagnostic challenge that may adversely confound or delay appropriate treatment or clinical trial enrollment. We sought to build a radiomic classifier to predict pseudoprogression in patients with primary isocitrate dehydrogenase (IDH) wild-type glioblastoma. Methods and Materials: We retrospectively examined a training cohort of 74 patients with IDH wild-type glioblastomas with brain magnetic resonance imaging, including dynamic contrast-enhanced T1 perfusion, before resection of an enhancing lesion indeterminate for recurrent tumor or pseudoprogression. A recursive feature elimination random forest classifier was built using nested cross-validation, without and with O6-methylguanine-DNA methyltransferase (MGMT) status, to predict pseudoprogression. Results: A classifier constructed with cross-validation on the training cohort achieved an area under the receiver operating characteristic curve of 81% for predicting pseudoprogression. This was further improved to 89% with the addition of MGMT status into the classifier. Conclusions: Our results suggest that radiomic analysis of contrast-enhanced T1-weighted images and magnetic resonance perfusion images can assist the prompt diagnosis of pseudoprogression. Validation on external and independent data sets is necessary to verify these advanced analyses, which can be performed on routinely acquired clinical images and may help inform clinical treatment decisions.

11.
Dis Colon Rectum ; 66(3): 383-391, 2023 03 01.
Article in English | MEDLINE | ID: mdl-35358109

ABSTRACT

BACKGROUND: A barrier to the widespread adoption of watch-and-wait management for locally advanced rectal cancer is the inaccuracy and variability of identifying tumor response endoscopically in patients who have completed total neoadjuvant therapy (chemoradiotherapy and systemic chemotherapy). OBJECTIVE: This study aimed to develop a novel method of identifying the presence or absence of a tumor in endoscopic images using deep convolutional neural network-based automatic classification and to assess the accuracy of the method. DESIGN: In this prospective pilot study, endoscopic images obtained before, during, and after total neoadjuvant therapy were grouped on the basis of tumor presence. A convolutional neural network was modified for probabilistic classification of tumor versus no tumor and trained with an endoscopic image set. After training, a testing endoscopic image set was applied to the network. SETTINGS: The study was conducted at a comprehensive cancer center. PATIENTS: Images were analyzed from 109 patients who were diagnosed with locally advanced rectal cancer between December 2012 and July 2017 and who underwent total neoadjuvant therapy. MAIN OUTCOME MEASURES: The main outcome was the accuracy of identifying tumor presence or absence in endoscopic images, measured as the area under the receiver operating characteristic curve for the training and testing image sets. RESULTS: A total of 1392 images were included; 1099 images (468 of no tumor and 631 of tumor) were used for training and 293 images (151 of no tumor and 142 of tumor) for testing. The area under the receiver operating characteristic curve for training and testing was 0.83. LIMITATIONS: The study had a limited number of images in each set and was conducted at a single institution. CONCLUSIONS: The convolutional neural network method is moderately accurate in distinguishing tumor from no tumor. Further research should focus on validating the convolutional neural network on a larger image set.
See Video Abstract at http://links.lww.com/DCR/B959.
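The AUC endpoint reported for this classifier (0.83 for training and testing) can be computed directly from tumor-probability scores via the rank interpretation of the ROC area: the probability that a randomly chosen positive outscores a randomly chosen negative. A generic sketch with toy labels and scores:

```python
# Sketch: ROC AUC from scores via pairwise comparisons (ties count 0.5).

def auc(labels, scores):
    """labels: 1 = tumor, 0 = no tumor; scores: model tumor probabilities."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 1, 0, 0]                 # toy ground truth
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.2]     # toy predicted probabilities
print(round(auc(labels, scores), 3))        # → 0.889
```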


Asunto(s)
Aprendizaje Profundo , Neoplasias Primarias Secundarias , Neoplasias del Recto , Humanos , Terapia Neoadyuvante/métodos , Estudios Retrospectivos , Estudios Prospectivos , Proyectos Piloto , Neoplasias del Recto/diagnóstico por imagen , Neoplasias del Recto/terapia , Neoplasias del Recto/patología
12.
Med Image Comput Comput Assist Interv ; 13434: 556-566, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36468915

ABSTRACT

Vision transformers efficiently model long-range context and have thus demonstrated impressive accuracy gains in several image analysis tasks, including segmentation. However, such methods need large labeled datasets for training, which are hard to obtain for medical image analysis. Self-supervised learning (SSL) has demonstrated success in medical image segmentation using convolutional networks. In this work, we developed a self-distillation learning with masked image modeling method to perform SSL for vision transformers (SMIT), applied to 3D multi-organ segmentation from CT and MRI. Our contribution combines a dense pixel-wise regression pretext task performed within masked patches, called masked image prediction, with masked patch token distillation to pre-train vision transformers. Our approach is more accurate and requires fewer fine-tuning datasets than other pretext tasks. Unlike prior methods, which typically used image sets arising from the disease sites and imaging modalities corresponding to the target tasks, we used 3,643 CT scans (602,708 images) arising from head and neck, lung, and kidney cancers as well as COVID-19 for pre-training, and applied the model to abdominal organ segmentation from MRI in pancreatic cancer patients as well as to segmentation of 13 different abdominal organs from publicly available CT. Our method showed clear accuracy improvement (average DSC of 0.875 on MRI and 0.878 on CT) with a reduced requirement for fine-tuning datasets compared with commonly used pretext tasks. Extensive comparisons against multiple current SSL methods were done. Our code is available at: https://github.com/harveerar/SMIT.git.
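The masked image prediction pretext task described above amounts to dense regression of the original pixel values inside masked patches. A highly simplified numpy sketch, in which the vision transformer is replaced by a placeholder and the patch size and mask ratio are illustrative, not SMIT's settings:

```python
# Sketch: masked-image-prediction pretext loss on a toy 8x8 "image".
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8)).astype(np.float32)

# Choose 40% of the 2x2 patches to mask out
patch = 2
ids = [(i, j) for i in range(0, 8, patch) for j in range(0, 8, patch)]
masked_ids = rng.choice(len(ids), size=int(0.4 * len(ids)), replace=False)

corrupted = image.copy()
mask = np.zeros_like(image, dtype=bool)
for k in masked_ids:
    i, j = ids[k]
    corrupted[i:i + patch, j:j + patch] = 0.0   # stand-in for a learned mask token
    mask[i:i + patch, j:j + patch] = True

# Pretext loss: mean squared error on the masked pixels only; the network
# (a placeholder here) must regress the hidden pixel values from context.
prediction = corrupted
loss = float(((prediction - image)[mask] ** 2).mean())
```

In SMIT this dense regression is combined with a second objective, masked patch token distillation, between student and teacher transformers.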

13.
Abdom Radiol (NY) ; 47(8): 2770-2782, 2022 08.
Article in English | MEDLINE | ID: mdl-35710951

ABSTRACT

PURPOSE: To evaluate an MRI-based radiomic texture classifier alone and combined with radiologist qualitative assessment in predicting pathological complete response (pCR) using restaging MRI, with internal training and external validation. METHODS: Consecutive patients with locally advanced rectal cancer (LARC) who underwent neoadjuvant therapy followed by total mesorectal excision from March 2012 to February 2016 (Memorial Sloan Kettering Cancer Center/internal dataset, n = 114, 41% female, median age = 55) and July 2014 to October 2015 (Instituto do Câncer do Estado de São Paulo/external dataset, n = 50, 52% female, median age = 64.5) were retrospectively included. Two radiologists (R1, senior; R2, junior) independently evaluated restaging MRI, classifying patients as radiological complete response vs radiological partial response. Model A (n = 33 texture features), model B (n = 91 features including texture, shape, and edge features), and two combination models (model A + B + R1, model A + B + R2) were constructed. Pathology served as the reference standard for neoadjuvant treatment response. Comparison of the classifiers' AUCs on the external set was done using DeLong's test. RESULTS: Models A and B had similar discriminative ability (P = 0.3; model B AUC = 83%, 95% CI 70%-97%). Combined models increased inter-reader agreement compared with radiologist-only interpretation (κ = 0.82, 95% CI 0.70-0.89 vs κ = 0.25, 95% CI 0.11-0.61). The combined model slightly increased the junior radiologist's specificity, positive predictive value, and negative predictive value (93% vs 90%, 57% vs 50%, and 91% vs 90%, respectively). CONCLUSION: We developed and externally validated a combined model using radiomics and radiologist qualitative assessment, which improved inter-reader agreement and slightly increased the diagnostic performance of the junior radiologist in predicting pCR after neoadjuvant treatment in patients with LARC.


Asunto(s)
Inteligencia Artificial , Neoplasias del Recto , Brasil , Quimioradioterapia , Femenino , Humanos , Imagen por Resonancia Magnética , Espectroscopía de Resonancia Magnética , Masculino , Persona de Mediana Edad , Radiólogos , Neoplasias del Recto/diagnóstico por imagen , Neoplasias del Recto/patología , Neoplasias del Recto/terapia , Estudios Retrospectivos , Resultado del Tratamiento
14.
Nat Cancer ; 3(6): 723-733, 2022 06.
Article in English | MEDLINE | ID: mdl-35764743

ABSTRACT

Patients with high-grade serous ovarian cancer suffer poor prognosis and variable response to treatment. Known prognostic factors for this disease include homologous recombination deficiency status, age, pathological stage and residual disease status after debulking surgery. Recent work has highlighted important prognostic information captured in computed tomography and histopathological specimens, which can be exploited through machine learning. However, little is known about the capacity of combining features from these disparate sources to improve prediction of treatment response. Here, we assembled a multimodal dataset of 444 patients with primarily late-stage high-grade serous ovarian cancer and discovered quantitative features, such as tumor nuclear size on staining with hematoxylin and eosin and omental texture on contrast-enhanced computed tomography, associated with prognosis. We found that these features contributed complementary prognostic information relative to one another and clinicogenomic features. By fusing histopathological, radiologic and clinicogenomic machine-learning models, we demonstrate a promising path toward improved risk stratification of patients with cancer through multimodal data integration.


Asunto(s)
Cistadenocarcinoma Seroso , Neoplasias Ováricas , Cistadenocarcinoma Seroso/diagnóstico por imagen , Femenino , Humanos , Aprendizaje Automático , Neoplasias Ováricas/diagnóstico por imagen , Medición de Riesgo
15.
Med Phys ; 49(8): 5244-5257, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35598077

ABSTRACT

BACKGROUND: Fast and accurate multiorgan segmentation from computed tomography (CT) scans is essential for radiation treatment planning. Self-attention (SA)-based deep learning methods provide higher accuracy than standard methods but require memory- and computationally intensive calculations, which restricts their use to relatively shallow networks. PURPOSE: Our goal was to develop and test a new computationally fast and memory-efficient bidirectional SA method called nested block self-attention (NBSA), applicable to both shallow and deep multiorgan segmentation networks. METHODS: A new multiorgan segmentation method combining a deep multiple-resolution residual network with computationally efficient nested block SA (MRRN-NBSA) was developed and evaluated on 18 different organs from the head and neck (HN) and abdomen. MRRN-NBSA combines features from multiple image resolutions and feature levels with SA to extract organ-specific contextual features. Computational efficiency is achieved by restricting SA calculation to memory blocks of fixed spatial extent, combined with bidirectional attention flow. Separate models were trained for HN (n = 238) and abdomen (n = 30) and tested on set-aside open-source grand challenge data sets: a public-domain database of computational anatomy for HN (n = 10), and blinded testing on 20 cases from the Beyond the Cranial Vault data set for abdominal organs, with overall accuracy provided by the grand challenge website. Robustness to two-rater segmentations was also evaluated for HN cases using the open-source data set.
Statistical comparison of MRRN-NBSA against Unet, convolutional network-based SA using criss-cross attention (CCA), dual SA, and transformer-based (UNETR) methods was performed by measuring differences in average Dice similarity coefficient (DSC) for all HN organs using the Kruskal-Wallis test, followed by pairwise method comparisons using paired, two-sided Wilcoxon signed-rank tests at the 95% confidence level, with Bonferroni correction for multiple comparisons. RESULTS: MRRN-NBSA produced a high average DSC of 0.88 for HN and 0.86 for the abdomen, exceeding current methods. MRRN-NBSA was more accurate than the computationally most efficient method, CCA (average DSC of 0.845 for HN, 0.727 for abdomen). The Kruskal-Wallis test showed a significant difference between the evaluated methods (p = 0.00025). Pairwise comparisons showed significant differences between MRRN-NBSA and Unet (p = 0.0003), CCA (p = 0.030), dual SA (p = 0.038), and UNETR (p = 0.012) after Bonferroni correction. MRRN-NBSA produced less variable segmentations for the submandibular glands (0.82 ± 0.06) than two raters (0.75 ± 0.31). CONCLUSIONS: MRRN-NBSA produced more accurate multiorgan segmentations than current methods on two different public data sets. Testing on larger institutional cohorts is required to establish feasibility for clinical use.
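The statistical workflow above (paired per-organ comparisons, then a multiple-comparisons correction) can be sketched without any statistics library. As a stand-in for the Wilcoxon signed-rank test, the sketch below uses a paired sign-flip permutation test on the mean difference, paired with a Bonferroni adjustment; all names and data are illustrative.

```python
import numpy as np

def paired_signflip_pvalue(a, b, n_perm=10000, seed=0):
    """Two-sided paired permutation (sign-flip) test on the mean difference.

    Under H0 the per-case differences are symmetric about zero, so each
    difference's sign can be flipped at random to build a null
    distribution of the absolute mean difference.
    """
    rng = np.random.default_rng(seed)
    d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    observed = abs(d.mean())
    signs = rng.choice([-1.0, 1.0], size=(n_perm, d.size))
    null = np.abs((signs * d).mean(axis=1))
    # add-one correction keeps the Monte Carlo p-value strictly positive
    return (1 + np.sum(null >= observed)) / (n_perm + 1)

def bonferroni(p_values):
    """Bonferroni correction: scale each p-value by the number of tests."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]
```

Per-organ DSC vectors from two methods would be passed as `a` and `b`; the corrected p-values are then compared against the chosen significance level.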


Subject(s)
Image Processing, Computer-Assisted , Tomography, X-Ray Computed , Abdomen , Attention , Head , Image Processing, Computer-Assisted/methods , Tomography, X-Ray Computed/methods
16.
Phys Imaging Radiat Oncol ; 21: 54-61, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35243032

ABSTRACT

BACKGROUND AND PURPOSE: Stereotactic body radiation therapy (SBRT) of locally advanced pancreatic cancer (LAPC) is challenging due to significant motion of gastrointestinal (GI) organs. The goal of our study was to quantify interfraction and intrafraction deformations and to accumulate dose to upper GI organs in LAPC patients. MATERIALS AND METHODS: Five LAPC patients undergoing five-fraction magnetic resonance-guided radiation therapy (MRgRT) to 50 Gy with abdominal compression and daily online plan adaptation were analyzed. Pre-treatment, verification, and post-treatment MR images (MRI) for each of the five fractions (75 in total) were used to calculate intrafraction and interfraction motion. The MRIs were registered using the Large Deformation Diffeomorphic Metric Mapping (LDDMM) deformable image registration (DIR) method, and the total dose delivered to the stomach_duodenum, small bowel (SB), and large bowel (LB) was accumulated. Deformations were quantified using the gradient magnitude and Jacobian integral of the deformation vector fields (DVF). Registration DVFs were geometrically assessed using the Dice coefficient and 95th-percentile Hausdorff distance (HD95) between the deformed and physician's contours. Accumulated doses were then calculated from the DVFs. RESULTS: Median Dice and HD95 were: stomach_duodenum (0.9, 1.0 mm), SB (0.9, 3.6 mm), and LB (0.9, 2.0 mm). Median (maximum) interfraction deformation for the stomach_duodenum, SB, and LB was 6.4 (25.8) mm, 7.9 (40.5) mm, and 7.6 (35.9) mm; median (maximum) intrafraction deformation was 5.5 (22.6) mm, 8.2 (37.8) mm, and 7.2 (26.5) mm. Accumulated doses for two patients exceeded institutional constraints for the stomach_duodenum, one of whom experienced Grade 1 acute and late abdominal toxicity. CONCLUSION: The LDDMM method demonstrates the feasibility of measuring large GI motion and accumulating dose. Further validation on a larger cohort will allow quantitative dose accumulation to more reliably optimize online MRgRT.
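Quantifying deformation via the Jacobian of a displacement field, as described above, can be sketched in a few lines. The function below (illustrative, 2-D for brevity; the study works in 3-D) computes the pointwise Jacobian determinant of a deformation vector field, where values above 1 indicate local expansion, below 1 compression, and non-positive values folding.

```python
import numpy as np

def jacobian_determinant_2d(dvf):
    """Pointwise Jacobian determinant of a 2-D displacement field.

    dvf has shape (H, W, 2): per-pixel displacement along y and x.
    The mapping is x -> x + u(x), so J = I + grad(u), and
    det(J) = (1 + du_y/dy)(1 + du_x/dx) - (du_y/dx)(du_x/dy).
    """
    duy_dy, duy_dx = np.gradient(dvf[..., 0])
    dux_dy, dux_dx = np.gradient(dvf[..., 1])
    return (1.0 + duy_dy) * (1.0 + dux_dx) - duy_dx * dux_dy
```

For a zero displacement field the determinant is 1 everywhere (no volume change); a uniform 10% stretch in both axes gives 1.21 everywhere.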

17.
IEEE Trans Med Imaging ; PP2022 Feb 25.
Article in English | MEDLINE | ID: mdl-35213307

ABSTRACT

Image-guided adaptive lung radiotherapy requires accurate tumor and organ segmentation from during-treatment cone-beam CT (CBCT) images. Thoracic CBCTs are hard to segment because of low soft-tissue contrast, imaging artifacts, respiratory motion, and large treatment-induced intra-thoracic anatomic changes. Hence, we developed a novel Patient-specific Anatomic Context and Shape prior (PACS)-aware 3D recurrent registration-segmentation network for longitudinal thoracic CBCT segmentation. Segmentation and registration networks were concurrently trained in an end-to-end framework and implemented with convolutional long short-term memory models. The registration network was trained in an unsupervised manner using pairs of planning CT (pCT) and CBCT images and produced a progressively deformed sequence of images. The segmentation network was optimized in a one-shot setting by combining progressively deformed pCTs (anatomic context) and pCT delineations (shape context) with CBCT images. Our method, one-shot PACS, was significantly more accurate (p < 0.001) than multiple comparison methods for segmenting the tumor (DSC of 0.83 ± 0.08, surface DSC [sDSC] of 0.97 ± 0.06, and Hausdorff distance at the 95th percentile [HD95] of 3.97 ± 3.02 mm) and the esophagus (DSC of 0.78 ± 0.13, sDSC of 0.90 ± 0.14, HD95 of 3.22 ± 2.02 mm). Ablation tests and comparative experiments were also performed.
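The HD95 metric reported above can be sketched for small point sets. Note the hedge: several HD95 variants exist; this sketch takes the 95th percentile of the pooled bidirectional nearest-neighbour distances, which is less outlier-sensitive than the classical (maximum) Hausdorff distance. The brute-force pairwise distance matrix is fine for illustration but not for full-resolution surfaces.

```python
import numpy as np

def hd95(points_a, points_b):
    """Hausdorff distance at the 95th percentile between two point sets.

    points_a: (N, D) array of surface points; points_b: (M, D) array.
    Nearest-neighbour distances are computed in both directions, pooled,
    and summarized by their 95th percentile.
    """
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # pairwise Euclidean distances, shape (N, M)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    nearest_ab = d.min(axis=1)  # each point of a to its closest in b
    nearest_ba = d.min(axis=0)  # each point of b to its closest in a
    return float(np.percentile(np.concatenate([nearest_ab, nearest_ba]), 95))
```

Identical point sets yield 0; two single points at Euclidean distance 5 yield 5.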

18.
Phys Med Biol ; 67(2)2022 01 17.
Article in English | MEDLINE | ID: mdl-34874302

ABSTRACT

Objective. Delineating swallowing and chewing structures aids radiotherapy (RT) treatment planning to limit dysphagia, trismus, and speech dysfunction. We aim to develop an accurate and efficient method to automate this process. Approach. CT scans of 242 head and neck (H&N) cancer patients acquired from 2004 to 2009 at our institution were used to develop auto-segmentation models for the masseters, medial pterygoids, larynx, and pharyngeal constrictor muscle using DeepLabV3+. A cascaded framework was used, wherein models were trained sequentially to spatially constrain each structure group based on prior segmentations. Additionally, an ensemble of models combining contextual information from axial, coronal, and sagittal views was used to improve segmentation accuracy. Prospective evaluation was conducted by measuring the amount of manual editing required in 91 H&N CT scans acquired February-May 2021. Main results. Medians and interquartile ranges of Dice similarity coefficients (DSC) computed on the retrospective testing set (N = 24) were 0.87 (0.85-0.89) for the masseters, 0.80 (0.79-0.81) for the medial pterygoids, 0.81 (0.79-0.84) for the larynx, and 0.69 (0.67-0.71) for the constrictor. Auto-segmentations, when compared to two sets of manual segmentations in 10 randomly selected scans, showed better agreement (DSC) with each observer than the inter-observer DSC. Prospective analysis showed that most manual modifications needed for clinical use were minor, suggesting auto-contouring could increase clinical efficiency. Trained segmentation models are available for research use upon request via https://github.com/cerr/CERR/wiki/Auto-Segmentation-models. Significance. We developed deep learning-based auto-segmentation models for swallowing and chewing structures in CT and demonstrated their potential for use in treatment planning to limit complications post-RT.
To the best of our knowledge, this is the only prospectively validated deep learning-based model for segmenting chewing and swallowing structures in CT. The segmentation models have been made open source to facilitate reproducibility and multi-institutional research.
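The Dice similarity coefficient used throughout these abstracts has a standard definition, sketched below for binary masks (an illustration, not code from any of the cited works).

```python
import numpy as np

def dice(mask_a, mask_b, eps=1e-8):
    """Dice similarity coefficient between two binary masks.

    DSC = 2|A intersect B| / (|A| + |B|); 1.0 means perfect overlap,
    0.0 no overlap. eps guards against division by zero when both
    masks are empty.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum() + eps)
```

Two masks that agree on half of their foreground voxels, as in the test below, score 0.5.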


Subject(s)
Deep Learning , Head and Neck Neoplasms , Deglutition , Humans , Mastication , Organs at Risk , Radiotherapy Planning, Computer-Assisted/methods , Reproducibility of Results , Retrospective Studies , Tomography, X-Ray Computed/methods
19.
IEEE Trans Med Imaging ; 41(5): 1057-1068, 2022 05.
Article in English | MEDLINE | ID: mdl-34855590

ABSTRACT

Accurate and robust segmentation of lung cancers from CT, even of tumors located close to the mediastinum, is needed to more accurately plan and deliver radiotherapy and to measure treatment response. Therefore, we developed a new cross-modality educed distillation (CMEDL) approach using unpaired CT and MRI scans, whereby an informative teacher MRI network guides a student CT network to extract features that distinguish foreground from background. Our contribution eliminates two requirements of distillation methods: (i) paired image sets, by using image-to-image (I2I) translation, and (ii) pre-training of the teacher network with a large training set, by concurrently training all networks. Our framework uses end-to-end trained unpaired I2I translation, teacher, and student segmentation networks. The architectural flexibility of our framework is demonstrated using three segmentation and two I2I networks. Networks were trained with 377 CT and 82 T2w MRI scans from different sets of patients, with independent validation (N = 209 tumors) and testing (N = 609 tumors) data sets. Network design, methods for combining MRI with CT information, distillation learning under informative (MRI to CT), weak (CT to MRI), and equal (MRI to MRI) teachers, and ablation tests were evaluated. Accuracy was measured using the Dice similarity coefficient (DSC), surface Dice (sDSC), and Hausdorff distance at the 95th percentile (HD95). The CMEDL approach was significantly (p < 0.001) more accurate than non-CMEDL methods: with an informative teacher for CT lung tumor segmentation (DSC of 0.77 vs. 0.73), with a weak teacher for MRI lung tumor segmentation (DSC of 0.84 vs. 0.81), and with an equal teacher for MRI multi-organ segmentation (DSC of 0.90 vs. 0.88). CMEDL also reduced inter-rater lung tumor segmentation variability.
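CMEDL distills knowledge at the feature level; as a minimal illustration of the general teacher-guides-student idea only, the sketch below implements the classic temperature-scaled soft-target distillation loss (Hinton-style), not the paper's actual loss. All names are illustrative.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(logits, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    A higher temperature T exposes the teacher's "dark knowledge" in
    its non-argmax probabilities; the T*T factor keeps gradient
    magnitudes comparable across temperatures.
    """
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)
```

When student and teacher agree exactly the loss is zero; any disagreement yields a positive loss that pulls the student toward the teacher's distribution.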


Subject(s)
Image Processing, Computer-Assisted , Lung Neoplasms , Humans , Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Magnetic Resonance Imaging , Tomography, X-Ray Computed
20.
Phys Imaging Radiat Oncol ; 19: 96-101, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34746452

ABSTRACT

BACKGROUND AND PURPOSE: Reducing trismus in radiotherapy for head and neck cancer (HNC) is important. Automated deep learning (DL) segmentation and automated planning were used to introduce new and rarely segmented masticatory structures and to study whether trismus risk could be decreased. MATERIALS AND METHODS: Auto-segmentation was based on purpose-built DL models, and automated planning used our in-house system, ECHO. Treatment plans for ten HNC patients, treated with 2 Gy × 35 fractions, were optimized (ECHO0). Six manually segmented OARs were then replaced with DL auto-segmentations and the plans re-optimized (ECHO1). In a third set of plans, mean doses for the auto-segmented ipsilateral masseter and medial pterygoid (MIMean, MPIMean), derived from a trismus risk model, were implemented as dose-volume objectives (ECHO2). Clinical dose-volume criteria were compared between scenarios (ECHO0 vs. ECHO1; ECHO1 vs. ECHO2; Wilcoxon signed-rank test; significance: p < 0.01). RESULTS: Small systematic differences were observed between the doses to the six auto-segmented OARs and their manual counterparts (median: ECHO1 = 6.2 (range: 0.4, 21) Gy vs. ECHO0 = 6.6 (range: 0.3, 22) Gy; p = 0.007), and the ECHO1 plans provided improved normal-tissue sparing across a larger dose-volume range. Only in the ECHO2 plans did all patients fulfill both the MIMean and MPIMean criteria. The population median MIMean and MPIMean were considerably lower than the limits suggested by the trismus model (ECHO0: MIMean = 13 Gy vs. ≤42 Gy; MPIMean = 29 Gy vs. ≤68 Gy). CONCLUSIONS: Automated treatment planning can efficiently incorporate new structures from DL auto-segmentation, resulting in reduced trismus risk without deteriorating treatment plan quality. Auto-planning and DL auto-segmentation together provide a powerful platform for further improving treatment planning.
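Checking a mean-dose criterion such as MIMean ≤ 42 Gy reduces to averaging the dose grid over a structure mask. The sketch below illustrates only this evaluation step (not the ECHO optimizer itself); function names and the toy dose grid are hypothetical.

```python
import numpy as np

def mean_dose(dose_grid, structure_mask):
    """Mean dose (Gy) to a structure, given a dose grid and binary mask."""
    dose = np.asarray(dose_grid, dtype=float)
    mask = np.asarray(structure_mask, dtype=bool)
    return float(dose[mask].mean())

def meets_mean_dose_objective(dose_grid, structure_mask, limit_gy):
    """True if the structure's mean dose is within the planning limit."""
    return mean_dose(dose_grid, structure_mask) <= limit_gy
```

In an optimizer, a violated objective would instead contribute a penalty term proportional to the excess over `limit_gy`.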
