1.
J Imaging Inform Med ; 2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38483694

ABSTRACT

The application of deep learning (DL) in medicine introduces transformative tools with the potential to enhance prognosis, diagnosis, and treatment planning. However, transparent documentation is essential for researchers to enhance reproducibility and refine techniques. Our study addresses the unique challenges presented by DL in medical imaging by developing a comprehensive checklist using the Delphi method to enhance reproducibility and reliability in this dynamic field. We compiled a preliminary checklist based on a comprehensive review of existing checklists and relevant literature. A panel of 11 experts in medical imaging and DL assessed these items using Likert scales, with two survey rounds to refine responses and gauge consensus. We also employed the content validity ratio, with a cutoff of 0.59, to determine item face and content validity. Round 1 used a 27-item questionnaire; 12 items showed high consensus for face and content validity and were therefore excluded from round 2. Round 2 refined the checklist, yielding an additional 17 items. In the final round, 3 items were deemed non-essential or infeasible, while 2 newly suggested items received unanimous agreement for inclusion, producing a final 26-item DL model reporting checklist derived from the Delphi process. The 26-item checklist facilitates reproducible reporting of DL tools and enables other scientists to replicate a study's results.
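The screening step above relies on Lawshe's content validity ratio, CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item essential and N is the panel size; with 11 experts, items at or above the 0.59 cutoff are retained. A minimal Python sketch of that calculation follows; the item names and vote counts are hypothetical.

```python
# Minimal sketch of Lawshe's content validity ratio (CVR) screening described
# in the abstract: 11 panelists, items kept when CVR >= 0.59.
# The item names and vote counts below are hypothetical illustrations.

def content_validity_ratio(n_essential: int, n_experts: int) -> float:
    """CVR = (n_e - N/2) / (N/2), ranging from -1 to +1."""
    half = n_experts / 2
    return (n_essential - half) / half

panel_size = 11
cutoff = 0.59  # cutoff reported in the abstract for an 11-member panel

votes = {"dataset description": 11, "train/val/test split": 9, "random seed": 6}
for item, n_essential in votes.items():
    cvr = content_validity_ratio(n_essential, panel_size)
    print(f"{item}: CVR={cvr:+.2f} -> {'keep' if cvr >= cutoff else 'drop'}")
```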

2.
ArXiv ; 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-37292481

ABSTRACT

Pediatric tumors of the central nervous system are the most common cause of cancer-related death in children. The five-year survival rate for high-grade gliomas in children is less than 20%. Due to their rarity, the diagnosis of these entities is often delayed, their treatment is mainly based on historic treatment concepts, and clinical trials require multi-institutional collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a landmark community benchmark event with a successful 12-year history of resource creation for the segmentation and analysis of adult glioma. Here we present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which represents the first BraTS challenge focused on pediatric brain tumors, with data acquired across multiple international consortia dedicated to pediatric neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on benchmarking the development of volumetric segmentation algorithms for pediatric brain glioma through standardized quantitative performance evaluation metrics used across the BraTS 2023 cluster of challenges. Models gaining knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training data will be evaluated on separate validation and unseen test mpMRI data of high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge brings together clinicians and AI/imaging scientists to accelerate the development of automated segmentation techniques that could benefit clinical trials and, ultimately, the care of children with brain tumors.

3.
AJNR Am J Neuroradiol ; 44(10): 1126-1134, 2023 10.
Article in English | MEDLINE | ID: mdl-37770204

ABSTRACT

BACKGROUND: The molecular profile of gliomas is a prognostic indicator for survival, driving clinical decision-making for treatment. Pathology-based molecular diagnosis is challenging because of the invasiveness of the procedure, exclusion from neoadjuvant therapy options, and the heterogeneous nature of the tumor. PURPOSE: We performed a systematic review of algorithms that predict molecular subtypes of gliomas from MR imaging. DATA SOURCES: Data sources were Ovid Embase, Ovid MEDLINE, the Cochrane Central Register of Controlled Trials, and Web of Science. STUDY SELECTION: Per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 12,318 abstracts were screened and 1323 underwent full-text review, with 85 articles meeting the inclusion criteria. DATA ANALYSIS: We compared prediction results from different machine learning approaches for predicting molecular subtypes of gliomas. Bias analysis was conducted for each study, following the Prediction model Risk Of Bias Assessment Tool (PROBAST) guidelines. DATA SYNTHESIS: Isocitrate dehydrogenase mutation status was reported with an area under the curve and accuracy of 0.88 and 85% in internal validation and 0.86 and 87% in limited external validation data sets, respectively. For the prediction of O6-methylguanine-DNA methyltransferase promoter methylation, the area under the curve and accuracy in internal validation data sets were 0.79 and 77%, and in limited external validation, 0.89 and 83%, respectively. PROBAST scoring demonstrated high bias in all articles. LIMITATIONS: The low number of external validation studies and the inclusion of studies with incomplete data resulted in unequal data analysis. Comparing the best prediction pipelines of each study may introduce bias. CONCLUSIONS: Although high area under the curve and accuracy are reported for the prediction of molecular subtypes of gliomas in internal and external validation data sets, the limited use of external validation and the increased risk of bias in all articles may present obstacles for clinical translation of these techniques.


Subject(s)
Glioma, Humans, Glioma/diagnostic imaging, Glioma/genetics, Glioma/therapy, Machine Learning, Prognosis, Magnetic Resonance Imaging/methods, Mutation
4.
ArXiv ; 2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37608932

ABSTRACT

Automated brain tumor segmentation methods are well established and have reached performance levels offering clear clinical utility. These methods typically rely on four input magnetic resonance imaging (MRI) modalities: T1-weighted images with and without contrast enhancement, T2-weighted images, and FLAIR images. However, some sequences are often missing in clinical practice due to time constraints or image artifacts, such as those caused by patient motion. Consequently, the ability to substitute missing modalities and recover segmentation performance is highly desirable and necessary for the broader adoption of these algorithms in the clinical routine. In this work, we present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn), organized in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023 conference. The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided. The ultimate aim is to facilitate automated brain tumor segmentation pipelines. The image dataset used in the benchmark is diverse and multi-modal, created through collaboration with various hospitals and research institutions.

5.
ArXiv ; 2023 May 12.
Article in English | MEDLINE | ID: mdl-37608937

ABSTRACT

Meningiomas are the most common primary intracranial tumor in adults and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on multiparametric MRI (mpMRI) for diagnosis, treatment planning, and longitudinal treatment monitoring; yet automated, objective, and quantitative tools for non-invasive assessment of meningiomas on mpMRI are lacking. The BraTS Meningioma 2023 challenge will provide a community standard and benchmark for state-of-the-art automated intracranial meningioma segmentation models based on the largest expert-annotated multilabel meningioma mpMRI dataset to date. Challenge competitors will develop automated segmentation models to predict three distinct meningioma sub-regions on MRI: enhancing tumor, non-enhancing tumor core, and surrounding non-enhancing T2/FLAIR hyperintensity. Models will be evaluated on separate validation and held-out test datasets using standardized metrics employed across the BraTS 2023 series of challenges, including the Dice similarity coefficient and Hausdorff distance. The models developed during the course of this challenge will aid in the incorporation of automated meningioma MRI segmentation into clinical practice, ultimately improving the care of patients with meningioma.
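As a rough illustration of the two evaluation metrics named above, the sketch below computes a Dice similarity coefficient and a Hausdorff distance between binary 3D masks using NumPy and SciPy. The toy masks are illustrative, and the BraTS challenges typically report a 95th-percentile variant of the Hausdorff distance rather than the maximum computed here.

```python
# Hedged sketch: Dice similarity coefficient and Hausdorff distance between a
# predicted and an expert-annotated binary 3D mask. Toy arrays only.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def hausdorff(pred: np.ndarray, truth: np.ndarray) -> float:
    p = np.argwhere(pred)          # voxel coordinates of each mask
    t = np.argwhere(truth)
    return max(directed_hausdorff(p, t)[0], directed_hausdorff(t, p)[0])

pred = np.zeros((64, 64, 64), dtype=np.uint8); pred[20:40, 20:40, 20:40] = 1
truth = np.zeros_like(pred);                   truth[22:42, 20:40, 20:40] = 1
print(f"Dice={dice(pred, truth):.3f}, Hausdorff={hausdorff(pred, truth):.1f} voxels")
```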

6.
ArXiv ; 2023 May 30.
Article in English | MEDLINE | ID: mdl-37396608

ABSTRACT

Gliomas are the most common type of primary brain tumor. Although gliomas are relatively rare, they are among the deadliest types of cancer, with survival of less than 2 years after diagnosis. Gliomas are challenging to diagnose, hard to treat, and inherently resistant to conventional therapy. Years of extensive research to improve diagnosis and treatment of gliomas have decreased mortality rates across the Global North, while chances of survival among individuals in low- and middle-income countries (LMICs) remain unchanged and are significantly worse in Sub-Saharan Africa (SSA) populations. Long-term survival with glioma is associated with the identification of appropriate pathological features on brain MRI and confirmation by histopathology. Since 2012, the Brain Tumor Segmentation (BraTS) Challenge has evaluated state-of-the-art machine learning methods to detect, characterize, and classify gliomas. However, it is unclear whether these state-of-the-art methods can be widely implemented in SSA, given the extensive use of lower-quality MRI technology, which produces poor image contrast and resolution; the propensity for late presentation of disease at advanced stages; and the unique characteristics of gliomas in SSA (i.e., suspected higher rates of gliomatosis cerebri). The BraTS-Africa Challenge therefore provides a unique opportunity to include brain MRI glioma cases from SSA in global efforts through the BraTS Challenge to develop and evaluate computer-aided-diagnostic (CAD) methods for the detection and characterization of glioma in resource-limited settings, where the potential for CAD tools to transform healthcare is greatest.

7.
J Digit Imaging ; 36(5): 2306-2312, 2023 10.
Article in English | MEDLINE | ID: mdl-37407841

ABSTRACT

Since 2000, there have been more than 8000 publications on radiology artificial intelligence (AI). AI breakthroughs allow complex tasks to be automated and even performed beyond human capabilities. However, the lack of detail on methods and algorithm code undercuts their scientific value. Many science subfields have recently faced a reproducibility crisis, eroding trust in processes and results and contributing to the rise in retractions of scientific papers. For the same reasons, conducting research in deep learning (DL) also requires reproducibility. Although several valuable manuscript checklists for AI in medical imaging exist, they are not focused specifically on reproducibility. In this study, we conducted a systematic review of recently published papers in the field of DL to evaluate whether the description of their methodology would allow their findings to be reproduced. We focused on the Journal of Digital Imaging (JDI), a specialized journal that publishes papers on AI and medical imaging. We used the keyword "Deep Learning" and collected the articles published between January 2020 and January 2022. We screened all the articles and included those that reported the development of a DL tool in medical imaging. We extracted the reported details about the dataset, data handling steps, data splitting, model details, and performance metrics of each included article. We found 148 articles; 80 were included after screening for articles that reported developing a DL model for medical image analysis. Five studies made their code publicly available, and 35 used publicly available datasets. We provide figures showing the proportion and absolute count of reported items across the included studies. According to our cross-sectional study, in JDI publications on DL in medical imaging, authors infrequently report the key elements of their study needed to make it reproducible.


Subject(s)
Artificial Intelligence, Diagnostic Imaging, Humans, Cross-Sectional Studies, Reproducibility of Results, Algorithms
8.
J Digit Imaging ; 36(3): 837-846, 2023 06.
Article in English | MEDLINE | ID: mdl-36604366

ABSTRACT

Glioblastoma (GBM) is the most common primary malignant brain tumor in adults. The standard treatment for GBM consists of surgical resection followed by concurrent chemoradiotherapy and adjuvant temozolomide. O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through the analysis of tumor tissue. Given the complications of tissue-based methods, an imaging-based approach is preferred. This study aimed to compare three different deep learning-based approaches for predicting MGMT promoter methylation status. We obtained 576 T2WI with their corresponding tumor masks and MGMT promoter methylation status from the Brain Tumor Segmentation (BraTS) 2021 dataset. We developed three different models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, voxels within methylated and unmethylated MGMT tumor masks were labeled 1 and 2, respectively, with 0 as background. We converted each T2WI into 32 × 32 × 32 patches and trained a 3D V-Net model for tumor segmentation. After inference, we reconstructed the whole-brain volume from the patch coordinates. The final prediction of MGMT methylation status was made by majority voting among the predicted voxel values of the largest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction and again used majority voting for the final prediction. For the whole-brain approach, we trained a 3D DenseNet121 for prediction. Accuracies for the whole-brain, slice-wise, and voxel-wise approaches were 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively.
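A hedged sketch of the voxel-wise decision rule described above: after the segmentation model assigns each voxel 0 (background), 1 (methylated), or 2 (unmethylated), the patient-level call is a majority vote over the voxels of the largest connected component. The toy volume and variable names are illustrative only.

```python
# Hedged sketch of the voxel-wise patient-level decision rule: majority vote
# of predicted labels (1 = methylated, 2 = unmethylated) inside the largest
# connected tumor component. The toy volume below is illustrative.
import numpy as np
from scipy import ndimage

def mgmt_majority_vote(voxel_labels: np.ndarray) -> str:
    tumor = voxel_labels > 0
    components, n = ndimage.label(tumor)             # connected components
    if n == 0:
        return "no tumor detected"
    sizes = ndimage.sum(tumor, components, range(1, n + 1))
    largest = components == (np.argmax(sizes) + 1)   # largest component mask
    votes = voxel_labels[largest]
    n_meth = int((votes == 1).sum())
    n_unmeth = int((votes == 2).sum())
    return "methylated" if n_meth >= n_unmeth else "unmethylated"

toy = np.zeros((32, 32, 32), dtype=np.uint8)
toy[5:15, 5:15, 5:15] = 1      # mostly "methylated" voxels
toy[5:8, 5:8, 5:8] = 2         # a few "unmethylated" voxels in the same blob
print(mgmt_majority_vote(toy))
```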


Subject(s)
Brain Neoplasms, Deep Learning, Glioblastoma, Adult, Humans, Glioblastoma/diagnostic imaging, Glioblastoma/genetics, Glioblastoma/pathology, Temozolomide/therapeutic use, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/genetics, Brain Neoplasms/pathology, DNA Methylation, Brain/diagnostic imaging, Magnetic Resonance Imaging/methods, O(6)-Methylguanine-DNA Methyltransferase/genetics, DNA Modification Methylases/genetics, Tumor Suppressor Proteins/genetics, DNA Repair Enzymes/genetics
9.
Eur Radiol Exp ; 6(1): 58, 2022 11 18.
Article in English | MEDLINE | ID: mdl-36396865

ABSTRACT

BACKGROUND: Primary sclerosing cholangitis (PSC) is a chronic cholestatic liver disease that can lead to cirrhosis and hepatic decompensation. However, predicting future outcomes in patients with PSC is challenging. Our aim was to extract magnetic resonance imaging (MRI) features that predict the development of hepatic decompensation by applying algebraic topology-based machine learning (ML). METHODS: We conducted a retrospective multicenter study among adults with large-duct PSC who underwent MRI. A nonlinear framework inspired by topological data analysis, motivated by algebraic topology-based ML, was used to predict the risk of hepatic decompensation. The topological representations (persistence images) were employed as input for classification to predict who developed early hepatic decompensation within one year after their baseline MRI. RESULTS: We reviewed 590 patients; 298 were excluded due to poor image quality or inadequate liver coverage, leaving 292 potentially eligible subjects, of whom 169 were included in the study. We trained our model using contrast-enhanced delayed-phase T1-weighted images on a single-center derivation cohort of 54 patients (hepatic decompensation, n = 21; no hepatic decompensation, n = 33) and tested it on a multicenter independent validation cohort of 115 individuals (hepatic decompensation, n = 31; no hepatic decompensation, n = 84). When applied to the independent validation cohort, the model remained predictive of early hepatic decompensation (area under the receiver operating characteristic curve = 0.84). CONCLUSIONS: Algebraic topology-based ML is a methodological approach that can predict outcomes in patients with PSC and has the potential for application in other chronic liver diseases.
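For readers unfamiliar with persistence images, the sketch below shows one common way to rasterize a persistence diagram into a fixed-length feature vector for classification, as a plain NumPy illustration. The diagram values are made up; in the study's setting they would be derived from the delayed-phase T1-weighted images with a TDA library, which is assumed here rather than shown.

```python
# Hedged sketch: turning a persistence diagram into a "persistence image",
# the topological representation used as classifier input. The diagram below
# is a made-up example; in practice it would come from a TDA library (e.g.,
# ripser/giotto-tda) applied to MRI-derived filtrations or point clouds.
import numpy as np

def persistence_image(diagram, resolution=20, sigma=0.1, span=(0.0, 1.0)):
    """Rasterize (birth, death) pairs into a Gaussian-smoothed grid,
    weighting each point by its persistence (death - birth)."""
    births = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]
    xs = np.linspace(*span, resolution)
    ys = np.linspace(*span, resolution)
    img = np.zeros((resolution, resolution))
    for b, p in zip(births, pers):
        gx = np.exp(-((xs - b) ** 2) / (2 * sigma ** 2))
        gy = np.exp(-((ys - p) ** 2) / (2 * sigma ** 2))
        img += p * np.outer(gy, gx)   # persistence-weighted Gaussian bump
    return img.ravel()                # flattened vector -> classifier input

diagram = np.array([[0.05, 0.60], [0.10, 0.25], [0.30, 0.35]])
features = persistence_image(diagram)
print(features.shape)  # (400,) feature vector for one patient
```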


Subject(s)
Sclerosing Cholangitis, Liver Diseases, Adult, Humans, Sclerosing Cholangitis/diagnostic imaging, Sclerosing Cholangitis/pathology, Machine Learning, Magnetic Resonance Imaging/methods, Multicenter Studies as Topic
10.
Radiol Artif Intell ; 4(5): e210290, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204544

ABSTRACT

Minimizing bias is critical to the adoption and implementation of machine learning (ML) in clinical practice. Systematic mathematical biases produce consistent and reproducible differences between the observed and expected performance of ML systems, resulting in suboptimal performance. Such biases can be traced back to various phases of ML development: data handling, model development, and performance evaluation. This report presents 12 suboptimal practices in the data handling of an ML study, explains how those practices can lead to biases, and describes what may be done to mitigate them. The authors employ an arbitrary, simplified framework that splits ML data handling into four steps: data collection, data investigation, data splitting, and feature engineering. Examples from the available research literature are provided. A Google Colaboratory Jupyter notebook includes code examples to demonstrate the suboptimal practices and steps to prevent them. Keywords: Data Handling, Bias, Machine Learning, Deep Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD) © RSNA, 2022.
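As one concrete example of the splitting biases this report catalogs, the sketch below contrasts a naive record-level split, which lets slices from the same patient leak into both training and test sets, with a group-aware split that keeps each patient on one side. The patient IDs and feature matrix are synthetic.

```python
# Hedged illustration of a well-known data-splitting bias: slices from the same
# patient leaking into both training and test sets. A group-aware split keeps
# each patient's data on one side only. Patient IDs and arrays are synthetic.
import numpy as np
from sklearn.model_selection import train_test_split, GroupShuffleSplit

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))                  # 100 image slices, 16 features
patient_id = np.repeat(np.arange(20), 5)        # 20 patients x 5 slices each

# Naive split: the same patient can appear in both train and test (leakage).
Xtr, Xte, id_tr, id_te = train_test_split(X, patient_id, test_size=0.2, random_state=0)
print("patients in both sets (naive):", len(set(id_tr) & set(id_te)))

# Group-aware split: every patient's slices stay together.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
tr_idx, te_idx = next(gss.split(X, groups=patient_id))
print("patients in both sets (grouped):", len(set(patient_id[tr_idx]) & set(patient_id[te_idx])))
```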

11.
J Neurooncol ; 159(2): 447-455, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35852738

ABSTRACT

INTRODUCTION: Glioblastomas (GBMs) are highly aggressive tumors. A common clinical challenge after standard-of-care treatment is differentiating tumor progression from treatment-related changes, also known as pseudoprogression (PsP). Usually, PsP resolves or stabilizes without further treatment or with a course of steroids, whereas true progression (TP) requires more aggressive management. Differentiating PsP from TP therefore affects patient outcomes. This study investigated the use of deep learning to distinguish the MRI features of PsP from those of progressive disease. METHODS: We included GBM patients with a new or increasingly enhancing lesion within the original radiation field. We labeled those who subsequently were stable or improved on imaging and clinically as PsP, and those with clinical and imaging deterioration as TP. A subset of subjects underwent a second resection; these subjects were labeled as PsP or TP based on the histological diagnosis. We coregistered contrast-enhanced T1 MRIs with T2-weighted images for each patient and used them as input to a 3D DenseNet121 model, with five-fold cross-validation to predict TP vs PsP. RESULTS: We included 124 patients who met the criteria, of whom 63 were PsP and 61 were TP. The trained deep learning model achieved a mean accuracy of 76.4% (range 70-84%, SD 5.122) over the 5 folds, a mean AUROC of 0.7560 (range 0.6553-0.8535, SD 0.069), a mean sensitivity of 88.72% (SD 6.86), and a mean specificity of 62.05% (SD 9.11). CONCLUSION: We report the development of a deep learning model that distinguishes PsP from TP in GBM patients treated per the Stupp protocol. Further refinement and external validation are required prior to widespread adoption in clinical practice.
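A minimal sketch of the classification setup described in the methods, assuming MONAI's 3D DenseNet121 and stratified five-fold cross-validation: the two coregistered sequences form a 2-channel volume per patient. Data loading, augmentation, and full training are omitted, and all shapes and labels are toy values rather than the study's data.

```python
# Hedged sketch: 2-channel 3D input (coregistered post-contrast T1 + T2) fed to
# a 3D DenseNet121 with stratified 5-fold cross-validation. Toy data only; one
# illustrative mini-batch update per fold stands in for the full training loop.
import numpy as np
import torch
from monai.networks.nets import DenseNet121
from sklearn.model_selection import StratifiedKFold

volumes = torch.randn(20, 2, 64, 64, 32)   # (patients, channels, D, H, W) -- toy
labels = np.array([0, 1] * 10)             # 0 = pseudoprogression, 1 = true progression

skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros(len(labels)), labels)):
    model = DenseNet121(spatial_dims=3, in_channels=2, out_channels=2)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()

    model.train()
    batch = train_idx[:4]                              # one illustrative mini-batch
    loss = loss_fn(model(volumes[batch]), torch.as_tensor(labels[batch]))
    loss.backward(); optimizer.step(); optimizer.zero_grad()

    model.eval()
    with torch.no_grad():
        val_pred = model(volumes[val_idx]).argmax(dim=1).numpy()
    print(f"fold {fold}: val accuracy {(val_pred == labels[val_idx]).mean():.2f}")
```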


Subject(s)
Brain Neoplasms, Deep Learning, Glioblastoma, Disease Progression, Humans, Magnetic Resonance Imaging, Retrospective Studies
12.
Tomography ; 8(2): 905-919, 2022 03 24.
Article in English | MEDLINE | ID: mdl-35448707

ABSTRACT

There is a growing demand for high-resolution (HR) medical images in both clinical and research applications. Image quality is inevitably traded off against acquisition time, which in turn impacts patient comfort, examination costs, dose, and motion-induced artifacts. For many image-based tasks, increasing the apparent spatial resolution in the through-plane direction to produce multi-planar reformats or 3D images is common. Single-image super-resolution (SR) based on deep learning is a promising technique for increasing the resolution of a 2D image, but there are few reports on 3D SR. Furthermore, perceptual loss has been proposed in the literature to capture textural details and edges better than pixel-wise loss functions, by comparing semantic distances in the high-dimensional feature space of a pre-trained 2D network (e.g., VGG). However, it is not clear how it should be generalized to 3D medical images, and the attendant implications are unclear. In this paper, we propose a framework called SOUP-GAN: Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (GAN), to produce thinner slices (e.g., higher resolution in the 'Z' plane) with anti-aliasing and deblurring. The proposed method outperforms other conventional resolution-enhancement methods and previous SR work on medical images in both qualitative and quantitative comparisons. Moreover, we examine the model's generalization to arbitrary user-selected SR ratios and imaging modalities. Our model shows promise as a novel 3D SR interpolation technique, with potential applications in both clinical and research settings.
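To make the perceptual-loss idea concrete, here is a hedged 2D sketch in PyTorch: the loss compares feature maps of a pre-trained VGG16 rather than raw pixels. The chosen layer and the slice-wise use on MR data are illustrative assumptions, not the exact SOUP-GAN configuration.

```python
# Hedged sketch of a perceptual loss: distances are measured between VGG16
# feature maps instead of pixels. Layer cutoff and slice-wise application to
# MR data are assumptions for illustration only.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16, VGG16_Weights

vgg_features = vgg16(weights=VGG16_Weights.IMAGENET1K_V1).features[:16].eval()
for p in vgg_features.parameters():
    p.requires_grad_(False)

def perceptual_loss(sr_slice: torch.Tensor, hr_slice: torch.Tensor) -> torch.Tensor:
    """sr_slice/hr_slice: (B, 1, H, W) grayscale MR slices scaled to [0, 1]."""
    sr3 = sr_slice.repeat(1, 3, 1, 1)            # VGG expects 3 channels
    hr3 = hr_slice.repeat(1, 3, 1, 1)
    return F.l1_loss(vgg_features(sr3), vgg_features(hr3))

sr = torch.rand(2, 1, 128, 128)                  # super-resolved slices (toy)
hr = torch.rand(2, 1, 128, 128)                  # ground-truth HR slices (toy)
print(perceptual_loss(sr, hr).item())
```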


Subject(s)
Artifacts, Magnetic Resonance Imaging, Humans, Three-Dimensional Imaging, Motion (Physics)
13.
Curr Res Immunol ; 2: 155-162, 2021.
Article in English | MEDLINE | ID: mdl-34545350

ABSTRACT

Early prediction of COVID-19 in-hospital mortality usually relies on patients' preexisting comorbidities and is rarely reproducible in independent cohorts. We aimed to compare the role of routinely measured biomarkers of immunity, inflammation, and cellular damage with that of preexisting comorbidities in eight different machine-learning models to predict mortality, and to evaluate their performance in an independent population. We recruited and followed up consecutive adult patients with SARS-CoV-2 infection in two different Italian hospitals. We predicted 60-day mortality in one cohort (development cohort, n = 299 patients, of which 80% was allocated to a training set and 20% to a holdout test set) and retested the models in the second cohort (external validation dataset, n = 402). Demographic, clinical, and laboratory features at admission, treatments, and disease outcomes were significantly different between the two cohorts. Notably, significant differences were observed for %lymphocytes (p < 0.05), international normalized ratio (p < 0.01), and platelets, alanine aminotransferase, and creatinine (all p < 0.001). The primary outcome (60-day mortality) occurred in 29.10% (n = 87) of the development dataset and 39.55% (n = 159) of the external validation dataset. The performance of the 8 tested models on the external validation dataset was similar to that on the holdout test dataset, indicating that the models capture the key predictors of mortality. The SHAP analysis in both datasets showed that age, immune features (%lymphocytes, platelets), and LDH substantially impacted all models' predictions, while the contributions of creatinine and CRP varied among the different models. The best-performing model was model 8 (60-day mortality AUROC 0.83 ± 0.06 in the holdout test set, 0.79 ± 0.02 in the external validation dataset). The features with the greatest impact on this model's predictions were age, LDH, platelets, and %lymphocytes, more than comorbidities or inflammation markers, and these findings were highly consistent in both datasets, likely reflecting the effect of the virus at the very beginning of the disease.
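A hedged sketch of the SHAP step mentioned above, using a tree-based classifier and shap.TreeExplainer on synthetic data; the feature list only mirrors variables named in the abstract (age, LDH, platelets, %lymphocytes, creatinine, CRP) and does not reproduce the study's eight models.

```python
# Hedged sketch: SHAP feature importance for a tree-based mortality classifier.
# The synthetic dataframe and feature names only echo variables mentioned in the
# abstract; this is not the study's model or data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
features = ["age", "LDH", "platelets", "lymphocytes_pct", "creatinine", "CRP"]
X = pd.DataFrame(rng.normal(size=(299, len(features))), columns=features)
y = rng.integers(0, 2, size=299)                  # 60-day mortality (synthetic)

model = GradientBoostingClassifier(random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)            # (n_samples, n_features) for a binary margin model
mean_abs = np.abs(shap_values).mean(axis=0)       # mean |SHAP| per feature
for name, value in sorted(zip(features, mean_abs), key=lambda t: -t[1]):
    print(f"{name}: mean |SHAP| = {value:.3f}")
```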

16.
Nucl Med Commun ; 42(7): 763-771, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33741855

ABSTRACT

BACKGROUND: To investigate the correlation between 18F-labeled fluoroazomycinarabinoside (18F-FAZA) PET data and immunohistochemical markers of hypoxia in patients with high-grade glioma (HGG). PATIENTS AND METHODS: Prospective study including 20 patients with brain MRI suggestive of HGG who underwent 18F-FAZA PET/CT before treatment for hypoxia assessment. For each 18F-FAZA PET scan, SUVmax, SUVmean, and 18F-FAZA tumour volume (FTV) at 40%, 50%, and 60% thresholds of SUVmax were calculated; hypoxic volume was estimated by applying different thresholds (1.2, 1.3, and 1.4) to the tumour/blood ratio. Seventeen patients were analysed. The immunohistochemical analysis assessed the following parameters: hypoxia-inducible factor 1α, carbonic anhydrase IX (CA-IX), glucose transporter-1, tumour vascularity, and Ki-67. RESULTS: 18F-FAZA PET showed a single lesion in 15/17 patients and multiple lesions in 2/17 patients. Twelve of 17 patients had grade IV glioma and 5 of 17 had grade III glioma. Bioptic and surgical samples were analysed separately. In the surgical subgroup (n = 7), a positive correlation was observed between CA-IX and SUVmax (P = 0.0002), SUVmean40 (P = 0.0058), SUVmean50 (P = 0.009), SUVmean60 (P = 0.0153), FTV-40-50-60 (P = 0.0424), and hypoxic volume1.2-1.3-1.4 (P = 0.0058). In the bioptic group (n = 10), tumour vascularisation was inversely correlated with SUVmax (P = 0.0094), SUVmean40 (P = 0.0107), SUVmean50 (P = 0.0094), and SUVmean60 (P = 0.0154). CONCLUSIONS: The correlation of 18F-FAZA PET parameters with CD31 and CA-IX represents a reliable method for assessing tumour hypoxia in HGG. The inverse correlation between tumour vascularisation, SUVmax, and SUVmean suggests that highly vascularised tumours might have a greater oxygen supply and therefore less hypoxia.
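The PET metrics defined above reduce to simple thresholding of an SUV volume within the tumour mask; a NumPy sketch follows. The toy SUV map, voxel size, and blood-pool SUV are illustrative assumptions.

```python
# Hedged sketch of the PET metrics defined in the abstract: SUVmean and FTV at
# 40/50/60% of SUVmax, and hypoxic volume from tumour-to-blood ratio thresholds.
# The toy SUV volume, voxel spacing, and blood-pool SUV are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
suv = rng.uniform(0.5, 3.0, size=(64, 64, 40))    # 18F-FAZA SUV map (toy)
tumour_mask = np.zeros(suv.shape, dtype=bool); tumour_mask[20:35, 20:35, 10:25] = True
voxel_ml = 0.2 * 0.2 * 0.3                        # voxel volume in mL (toy spacing)
blood_suv = 1.1                                   # blood-pool SUV (toy)

suv_max = suv[tumour_mask].max()
for frac in (0.40, 0.50, 0.60):
    thresholded = tumour_mask & (suv >= frac * suv_max)
    print(f"SUVmean{int(frac*100)} = {suv[thresholded].mean():.2f}, "
          f"FTV{int(frac*100)} = {thresholded.sum() * voxel_ml:.1f} mL")

for ratio in (1.2, 1.3, 1.4):
    hv = (tumour_mask & (suv / blood_suv >= ratio)).sum() * voxel_ml
    print(f"hypoxic volume (T/B >= {ratio}) = {hv:.1f} mL")
```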


Subject(s)
Nitroimidazoles, Positron Emission Tomography Computed Tomography, Adult, Humans, Male, Middle Aged, Positron-Emission Tomography
17.
Radiology ; 299(2): 313-323, 2021 05.
Article in English | MEDLINE | ID: mdl-33687284

ABSTRACT

Background Missing MRI sequences represent an obstacle in the development and use of deep learning (DL) models that require multiple inputs. Purpose To determine if synthesizing brain MRI scans using generative adversarial networks (GANs) allows for the use of a DL model for brain lesion segmentation that requires T1-weighted images, postcontrast T1-weighted images, fluid-attenuated inversion recovery (FLAIR) images, and T2-weighted images. Materials and Methods In this retrospective study, brain MRI scans obtained between 2011 and 2019 were collected, and scenarios were simulated in which the T1-weighted images and FLAIR images were missing. Two GANs were trained, validated, and tested using 210 glioblastomas (GBMs) (Multimodal Brain Tumor Image Segmentation Benchmark [BRATS] 2017) to generate T1-weighted images from postcontrast T1-weighted images and FLAIR images from T2-weighted images. The quality of the generated images was evaluated with the mean squared error (MSE) and the structural similarity index (SSI). The segmentations obtained with the generated scans were compared with those obtained with the original MRI scans using the Dice similarity coefficient (DSC). The GANs were validated on sets of GBMs and central nervous system lymphomas from the authors' institution to assess their generalizability. Statistical analysis was performed using the Mann-Whitney, Friedman, and Dunn tests. Results Two hundred ten GBMs from the BRATS data set and 46 GBMs (mean patient age, 58 years ± 11 [standard deviation]; 27 men [59%] and 19 women [41%]) and 21 central nervous system lymphomas (mean patient age, 67 years ± 13; 12 men [57%] and nine women [43%]) from the authors' institution were evaluated. The median MSE for the generated T1-weighted images ranged from 0.005 to 0.013, and the median MSE for the generated FLAIR images ranged from 0.004 to 0.103. The median SSI ranged from 0.82 to 0.92 for the generated T1-weighted images and from 0.76 to 0.92 for the generated FLAIR images. The median DSCs for the segmentation of the whole lesion, the FLAIR hyperintensities, and the contrast-enhanced areas using the generated scans were 0.82, 0.71, and 0.92, respectively, when replacing both T1-weighted and FLAIR images; 0.84, 0.74, and 0.97 when replacing only the FLAIR images; and 0.97, 0.95, and 0.92 when replacing only the T1-weighted images. Conclusion Brain MRI scans generated using generative adversarial networks can be used as deep learning model inputs when MRI sequences are missing. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Zhong in this issue. An earlier incorrect version of this article appeared online. This article was corrected on April 12, 2021.
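A brief sketch of the image-quality metrics used above to rate synthesized sequences against acquired ones, computed here with scikit-image on toy 2D slices scaled to [0, 1]; the segmentation comparison via the Dice similarity coefficient would follow the same pattern on the resulting label maps.

```python
# Hedged sketch: mean squared error and structural similarity between an
# acquired slice and a GAN-synthesized slice. Toy arrays only.
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

rng = np.random.default_rng(0)
original = rng.random((240, 240)).astype(np.float32)          # acquired T1 slice (toy)
synthetic = np.clip(original + rng.normal(0, 0.05, original.shape), 0, 1).astype(np.float32)

mse = mean_squared_error(original, synthetic)
ssi = structural_similarity(original, synthetic, data_range=1.0)
print(f"MSE = {mse:.4f}, SSI = {ssi:.3f}")
```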


Subject(s)
Brain Neoplasms/diagnostic imaging, Deep Learning, Glioblastoma/diagnostic imaging, Computer-Assisted Image Processing/methods, Lymphoma/diagnostic imaging, Magnetic Resonance Imaging/methods, Aged, Contrast Media, Female, Humans, Male, Middle Aged, Retrospective Studies
18.
Blood Adv ; 4(15): 3648-3658, 2020 08 11.
Article in English | MEDLINE | ID: mdl-32766857

ABSTRACT

Rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone (R-CHOP) is the standard treatment of diffuse large B-cell lymphoma (DLBCL). Primary DLBCL of the central nervous system (CNS) (primary central nervous system lymphoma [PCNSL]) is an exception because of the low CNS bioavailability of related drugs. NGR-human tumor necrosis factor (NGR-hTNF) targets CD13+ vessels, enhances vascular permeability and CNS access of anticancer drugs, and provides the rationale for the treatment of PCNSL with R-CHOP. Herein, we report activity and safety of R-CHOP preceded by NGR-hTNF in patients with PCNSL relapsed/refractory to high-dose methotrexate-based chemotherapy enrolled in a phase 2 trial. Overall response rate (ORR) was the primary endpoint. A sample size of 28 patients was considered necessary to demonstrate improvement from 30% to 50% ORR. NGR-hTNF/R-CHOP would be declared active if ≥12 responses were recorded. Treatment was well tolerated; there were no cases of unexpected toxicities, dose reductions or interruptions. NGR-hTNF/R-CHOP was active, with confirmed tumor response in 21 patients (75%; 95% confidence interval, 59%-91%), which was complete in 11. Seventeen of the 21 patients with response to treatment received consolidation (ASCT, WBRT, and/or lenalidomide maintenance). At a median follow-up of 21 (range, 14-31) months, 5 patients remained relapse-free and 6 were alive. The activity of NGR-hTNF/R-CHOP is in line with the expression of CD13 in both pericytes and endothelial cells of tumor vessels. High plasma levels of chromogranin A, an NGR-hTNF inhibitor, were associated with proton pump inhibitor use and a lower remission rate, suggesting that these drugs should be avoided during TNF-based therapy. Further research on this innovative approach to CNS lymphomas is warranted. The trial was registered as EudraCT: 2014-001532-11.


Subject(s)
Antineoplastic Combined Chemotherapy Protocols, Endothelial Cells, Antineoplastic Combined Chemotherapy Protocols/therapeutic use, Cyclophosphamide/therapeutic use, Doxorubicin/therapeutic use, Humans, Local Neoplasm Recurrence, Prednisone/therapeutic use, Recombinant Fusion Proteins, Rituximab, Tumor Necrosis Factor-alpha, Vincristine/therapeutic use
19.
Med Phys ; 47(11): 5609-5618, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32740931

ABSTRACT

PURPOSE: Organ segmentation of computed tomography (CT) imaging is essential for radiotherapy treatment planning. Treatment planning requires segmentation not only of the affected tissue but also of nearby healthy organs-at-risk, which is laborious and time-consuming. We present a fully automated segmentation method based on a three-dimensional (3D) U-Net convolutional neural network (CNN) capable of segmenting the whole abdomen and pelvis into 33 unique organ and tissue structures, including tissues that may be overlooked by other automated segmentation approaches, such as adipose tissue, skeletal muscle, and connective tissue and vessels. Whole-abdomen segmentation makes it possible to quantify exposure beyond a handful of organs-at-risk to all tissues within the abdomen. METHODS: Sixty-six (66) CT examinations of 64 individuals were included in the training and validation sets, and 18 CT examinations from 16 individuals were included in the test set. All pixels in each examination were segmented by image analysts (with physician correction) and assigned one of 33 labels. Segmentation was performed with a 3D U-Net variant architecture that included residual blocks, and model performance was quantified on the 18 test cases. Human interobserver variability (using semiautomated segmentation) was also reported on two scans, and manual interobserver variability of three individuals was reported on one scan. Model performance was also compared to several of the best models reported in the literature for multiple-organ segmentation. RESULTS: The Dice coefficient of the 3D U-Net model ranged from 0.95 in the liver and 0.93 in the kidneys to 0.79 in the pancreas, 0.69 in the adrenals, and 0.51 in the renal arteries. Model accuracy is within 5% of human segmentation in 8 of 19 organs and within 10% in 13 of 19 organs. CONCLUSIONS: The CNN approaches the accuracy of human tracers and, for certain complex organs, displays more consistent prediction than human tracers. Fully automated deep learning-based segmentation of the CT abdomen has the potential to improve both the speed and the accuracy of radiotherapy dose prediction for organs-at-risk.


Subject(s)
Abdomen, Computer Neural Networks, Abdomen/diagnostic imaging, Humans, Computer-Assisted Image Processing, Organs at Risk, Pelvis/diagnostic imaging, X-Ray Computed Tomography
20.
BMC Med Inform Decis Mak ; 20(1): 149, 2020 07 06.
Article in English | MEDLINE | ID: mdl-32631306

ABSTRACT

BACKGROUND: Combining MRI techniques with machine learning methodology is rapidly gaining attention as a promising method for the staging of brain gliomas. This study assesses the diagnostic value of such a framework applied to dynamic susceptibility contrast (DSC)-MRI in classifying treatment-naïve gliomas from a multi-center patient cohort into WHO grades II-IV and across their isocitrate dehydrogenase (IDH) mutation status. METHODS: Three hundred thirty-three patients from 6 tertiary centres, diagnosed histologically and molecularly with primary gliomas (IDH-mutant = 151 or IDH-wildtype = 182), were retrospectively identified. Raw DSC-MRI data were post-processed into normalised, leakage-corrected relative cerebral blood volume (rCBV) maps. Shape, intensity distribution (histogram), and rotation-invariant Haralick texture features over the tumour mask were extracted. Differences in extracted features across glioma grades and mutation status were tested using the Wilcoxon two-sample test. A random-forest algorithm was employed (2-fold cross-validation, 250 repeats) to predict grades or mutation status from the extracted features. RESULTS: Shape, distribution, and texture features showed significant differences across mutation status. WHO grade II-III differentiation was mostly driven by shape features, while texture and intensity features were more relevant for the III-IV separation. A larger number of features became significant when differentiating grades further apart from one another. Gliomas were correctly stratified by mutation status in 71% and by grade in 53% of cases (with 87% of glioma grades predicted within a distance of 1). CONCLUSIONS: Despite the large heterogeneity of the multi-center dataset, machine learning-assisted DSC-MRI radiomics holds potential to address the inherent variability and presents a promising approach for non-invasive glioma molecular subtyping and grading.
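A hedged sketch of the classification protocol described in the methods: a random forest evaluated with 2-fold cross-validation repeated 250 times via scikit-learn. The feature matrix is synthetic; real inputs would be the shape, histogram, and texture features extracted from the leakage-corrected rCBV maps.

```python
# Hedged sketch: random forest with 2-fold cross-validation repeated 250 times,
# mirroring the protocol above. The radiomic feature matrix below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(333, 40))                   # 333 patients x 40 radiomic features (toy)
y = np.r_[np.zeros(151), np.ones(182)]           # 0 = IDH-mutant, 1 = IDH-wildtype

cv = RepeatedStratifiedKFold(n_splits=2, n_repeats=250, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv, scoring="accuracy", n_jobs=-1)
print(f"mean accuracy over {len(scores)} folds: {scores.mean():.2f} +/- {scores.std():.2f}")
```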


Subject(s)
Brain Neoplasms, Glioma, Humans, Machine Learning, Magnetic Resonance Imaging, Mutation, Neoplasm Grading, Retrospective Studies