1.
Radiology ; 312(2): e232635, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39105640

ABSTRACT

Background Multiparametric MRI can help identify clinically significant prostate cancer (csPCa) (Gleason score ≥7) but is limited by reader experience and interobserver variability. In contrast, deep learning (DL) produces deterministic outputs. Purpose To develop a DL model to predict the presence of csPCa by using patient-level labels without information about tumor location and to compare its performance with that of radiologists. Materials and Methods Data from patients without known csPCa who underwent MRI from January 2017 to December 2019 at one of multiple sites of a single academic institution were retrospectively reviewed. A convolutional neural network was trained to predict csPCa from T2-weighted images, diffusion-weighted images, apparent diffusion coefficient maps, and T1-weighted contrast-enhanced images. The reference standard was pathologic diagnosis. Radiologist performance was evaluated as follows: Radiology reports were used for the internal test set, and four radiologists' PI-RADS ratings were used for the external (ProstateX) test set. The performance was compared using areas under the receiver operating characteristic curves (AUCs) and the DeLong test. Gradient-weighted class activation maps (Grad-CAMs) were used to show tumor localization. Results Among 5735 examinations in 5215 patients (mean age, 66 years ± 8 [SD]; all male), 1514 examinations (1454 patients) showed csPCa. In the internal test set (400 examinations), the AUC was 0.89 and 0.89 for the DL classifier and radiologists, respectively (P = .88). In the external test set (204 examinations), the AUC was 0.86 and 0.84 for the DL classifier and radiologists, respectively (P = .68). DL classifier plus radiologists had an AUC of 0.89 (P < .001). Grad-CAMs demonstrated activation over the csPCa lesion in 35 of 38 and 56 of 58 true-positive examinations in internal and external test sets, respectively. Conclusion The performance of a DL model was not different from that of radiologists in the detection of csPCa at MRI, and Grad-CAMs localized the tumor. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Johnson and Chandarana in this issue.
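
As a concrete illustration of the Grad-CAM localization described above, here is a minimal sketch in PyTorch: a class score is backpropagated to the activations of a late convolutional layer, the gradients are globally average-pooled into channel weights, and the weighted activation map is upsampled onto the input. The ResNet-18 backbone, layer choice, and 2-D input are illustrative stand-ins, not the study's model or data.

```python
# Hedged Grad-CAM sketch (PyTorch). Backbone, layer, and input are placeholders.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)          # stand-in backbone, not the study's CNN
model.eval()
target_layer = model.layer4             # last convolutional block

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)         # dummy input standing in for mpMRI channels
score = model(x)[0, 1]                  # logit of the "csPCa present" class (assumed index)
model.zero_grad()
score.backward()

weights = gradients["value"].mean(dim=(2, 3), keepdim=True)           # GAP of gradients
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)              # normalize to [0, 1]
```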


Subject(s)
Deep Learning , Magnetic Resonance Imaging , Prostatic Neoplasms , Male , Humans , Prostatic Neoplasms/diagnostic imaging , Retrospective Studies , Aged , Middle Aged , Magnetic Resonance Imaging/methods , Image Interpretation, Computer-Assisted/methods , Multiparametric Magnetic Resonance Imaging/methods , Prostate/diagnostic imaging , Prostate/pathology
2.
J Imaging Inform Med ; 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844717

ABSTRACT

Artificial intelligence-enhanced identification of organs, lesions, and other structures in medical imaging is typically done using convolutional neural networks (CNNs) designed to make voxel-accurate segmentations of the region of interest. However, the labels required to train these CNNs are time-consuming to generate and require attention from subject matter experts to ensure quality. For tasks where voxel-level precision is not required, object detection models offer a viable alternative that can reduce annotation effort. Despite this potential application, there are few general-purpose object detection frameworks available for 3-D medical imaging. We report on MedYOLO, a 3-D object detection framework using the one-shot detection method of the YOLO family of models and designed for use with medical imaging. We tested this model on four different datasets: BRaTS, LIDC, an abdominal organ computed tomography (CT) dataset, and an ECG-gated heart CT dataset. We found our models achieve high performance on a diverse range of structures even without hyperparameter tuning, reaching mean average precision (mAP) at intersection over union (IoU) 0.5 of 0.861 on BRaTS, 0.715 on the abdominal CT dataset, and 0.995 on the heart CT dataset. However, the models struggle with some structures, failing to converge on LIDC, resulting in an mAP@0.5 of 0.0.
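
The mAP@0.5 figures above hinge on whether a predicted box overlaps its ground-truth box with an intersection over union of at least 0.5. A minimal sketch of that criterion for axis-aligned 3-D boxes follows; the (z1, y1, x1, z2, y2, x2) corner format and the toy boxes are assumptions for illustration, not MedYOLO's internal representation.

```python
# Hedged sketch: axis-aligned 3-D IoU and a toy match at the IoU >= 0.5 criterion.
import numpy as np

def iou_3d(a, b):
    """IoU of two axis-aligned 3-D boxes given as (z1, y1, x1, z2, y2, x2)."""
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.clip(hi - lo, 0, None))          # overlap volume
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter + 1e-9)

gt = np.array([10, 20, 20, 40, 60, 60], dtype=float)    # made-up ground-truth box
pred = np.array([12, 22, 18, 42, 58, 62], dtype=float)  # made-up prediction
print("IoU:", iou_3d(gt, pred), "counts as TP at 0.5:", iou_3d(gt, pred) >= 0.5)
```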

3.
J Am Heart Assoc ; 13(11): e032965, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38818948

ABSTRACT

BACKGROUND: The goal was to compare patterns of physical activity (PA) behaviors (sedentary behavior [SB], light PA, moderate-to-vigorous PA [MVPA], and sleep) measured via accelerometers for 7 days between patients with incident cerebrovascular disease (CeVD) (n=2141) and controls (n=73 938). METHODS AND RESULTS: In multivariate models, cases spent 3.7% less time in MVPA (incidence rate ratio [IRR], 0.963 [95% CI, 0.929-0.998]) and 1.0% more time in SB (IRR, 1.010 [95% CI, 1.001-1.018]). Between 12 and 24 months before diagnosis, cases spent more time in SB (IRR, 1.028 [95% CI, 1.001-1.057]). Within the year before diagnosis, cases spent less time in MVPA (IRR, 0.861 [95% CI, 0.771-0.964]). Although SB time was not associated with CeVD risk, MVPA time, both total min/d (hazard ratio [HR], 0.998 [95% CI, 0.997-0.999]) and guideline threshold adherence (≥150 min/wk) (HR, 0.909 [95% CI, 0.827-0.998]), was associated with decreased CeVD risk. Comorbid burden had a significant partial mediation effect on the relationship between MVPA and CeVD. Cases slept more during 12:00 to 17:59 hours (IRR, 1.091 [95% CI, 1.002-1.191]) but less during 0:00 to 5:59 hours (IRR, 0.984 [95% CI, 0.977-0.992]). No between-group differences were significant at subgroup analysis. CONCLUSIONS: Daily behavior patterns were significantly different in patients before CeVD. Although SB was not associated with CeVD risk, the association between MVPA and CeVD risk is partially mediated by comorbid burden. This study has implications for understanding observable behavior patterns in cerebrovascular dysfunction and may help in developing remote monitoring strategies to prevent or reduce cerebrovascular decline.


Subject(s)
Cerebrovascular Disorders , Exercise , Sedentary Behavior , Humans , Cerebrovascular Disorders/epidemiology , Cerebrovascular Disorders/prevention & control , Cerebrovascular Disorders/diagnosis , Male , Female , Middle Aged , Aged , United Kingdom/epidemiology , Incidence , Sleep , Time Factors , Risk Factors , Accelerometry , Case-Control Studies , Biological Specimen Banks , Risk Assessment , UK Biobank
4.
J Imaging Inform Med ; 37(4): 1664-1673, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38483694

ABSTRACT

The application of deep learning (DL) in medicine introduces transformative tools with the potential to enhance prognosis, diagnosis, and treatment planning. However, transparent documentation is essential for researchers to enhance reproducibility and refine techniques. Our study addresses the unique challenges presented by DL in medical imaging by developing a comprehensive checklist using the Delphi method to enhance reproducibility and reliability in this dynamic field. We compiled a preliminary checklist based on a comprehensive review of existing checklists and relevant literature. A panel of 11 experts in medical imaging and DL assessed these items using Likert scales, with two survey rounds to refine responses and gauge consensus. We also employed the content validity ratio, with a cutoff of 0.59, to determine item face and content validity. Round 1 included a 27-item questionnaire; 12 items demonstrated high consensus for face and content validity and were therefore not carried into round 2. Round 2 involved refining the checklist, resulting in an additional 17 items. In the last round, 3 items were deemed non-essential or infeasible, while 2 newly suggested items received unanimous agreement for inclusion, resulting in a final 26-item DL model reporting checklist derived from the Delphi process. The 26-item checklist facilitates the reproducible reporting of DL tools and enables scientists to replicate the reported results.
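
For reference, the content validity ratio used above is Lawshe's statistic, computed per item from the number of panelists rating it essential. A minimal sketch, with made-up panel counts, is shown below; the 0.59 cutoff is the one stated in the abstract.

```python
# Hedged sketch: Lawshe's content validity ratio (CVR). Panel counts are made up.
def content_validity_ratio(n_essential: int, n_panelists: int) -> float:
    """CVR = (n_e - N/2) / (N/2), where n_e is the count rating an item 'essential'."""
    half = n_panelists / 2
    return (n_essential - half) / half

# With an 11-member panel, 9 'essential' ratings give CVR ~ 0.636 (> 0.59: retain the item).
print(content_validity_ratio(9, 11))
```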


Subject(s)
Checklist , Deep Learning , Delphi Technique , Diagnostic Imaging , Humans , Reproducibility of Results , Diagnostic Imaging/methods , Diagnostic Imaging/standards , Surveys and Questionnaires
5.
ArXiv ; 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-37292481

ABSTRACT

Pediatric tumors of the central nervous system are the most common cause of cancer-related death in children. The five-year survival rate for high-grade gliomas in children is less than 20%. Due to their rarity, the diagnosis of these entities is often delayed, their treatment is mainly based on historic treatment concepts, and clinical trials require multi-institutional collaborations. The MICCAI Brain Tumor Segmentation (BraTS) Challenge is a landmark community benchmark event with a successful 12-year history of resource creation for the segmentation and analysis of adult glioma. Here we present the CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge, which represents the first BraTS challenge focused on pediatric brain tumors, with data acquired across multiple international consortia dedicated to pediatric neuro-oncology and clinical trials. The BraTS-PEDs 2023 challenge focuses on benchmarking the development of volumetric segmentation algorithms for pediatric brain glioma through standardized quantitative performance evaluation metrics utilized across the BraTS 2023 cluster of challenges. Models gaining knowledge from the BraTS-PEDs multi-parametric structural MRI (mpMRI) training data will be evaluated on separate validation and unseen test mpMRI data of high-grade pediatric glioma. The CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs 2023 challenge brings together clinicians and AI/imaging scientists to accelerate the development of automated segmentation techniques that could benefit clinical trials and, ultimately, the care of children with brain tumors.

6.
AJNR Am J Neuroradiol ; 44(10): 1126-1134, 2023 10.
Article in English | MEDLINE | ID: mdl-37770204

ABSTRACT

BACKGROUND: The molecular profile of gliomas is a prognostic indicator for survival, driving clinical decision-making for treatment. Pathology-based molecular diagnosis is challenging because of the invasiveness of the procedure, exclusion from neoadjuvant therapy options, and the heterogeneous nature of the tumor. PURPOSE: We performed a systematic review of algorithms that predict molecular subtypes of gliomas from MR imaging. DATA SOURCES: Data sources were Ovid Embase, Ovid MEDLINE, Cochrane Central Register of Controlled Trials, and Web of Science. STUDY SELECTION: Per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, 12,318 abstracts were screened and 1323 underwent full-text review, with 85 articles meeting the inclusion criteria. DATA ANALYSIS: We compared prediction results from different machine learning approaches for predicting molecular subtypes of gliomas. Bias analysis was conducted for each study, following the Prediction model Risk Of Bias Assessment Tool (PROBAST) guidelines. DATA SYNTHESIS: Isocitrate dehydrogenase mutation status was reported with an area under the curve and accuracy of 0.88 and 85% in internal validation and 0.86 and 87% in limited external validation data sets, respectively. For the prediction of O6-methylguanine-DNA methyltransferase promoter methylation, the area under the curve and accuracy in internal validation data sets were 0.79 and 77%, and in limited external validation, 0.89 and 83%, respectively. PROBAST scoring demonstrated high bias in all articles. LIMITATIONS: The small number of studies with external validation and the presence of studies with incomplete data resulted in unequal data analysis. Comparing only the best prediction pipeline of each study may introduce bias. CONCLUSIONS: While high area under the curve and accuracy for the prediction of molecular subtypes of gliomas are reported in internal and external validation data sets, the limited use of external validation and the increased risk of bias in all articles may present obstacles for clinical translation of these techniques.


Subject(s)
Glioma , Humans , Glioma/diagnostic imaging , Glioma/genetics , Glioma/therapy , Machine Learning , Prognosis , Magnetic Resonance Imaging/methods , Mutation
7.
ArXiv ; 2023 Jun 28.
Article in English | MEDLINE | ID: mdl-37608932

ABSTRACT

Automated brain tumor segmentation methods have become well-established and reached performance levels offering clear clinical utility. These methods typically rely on four input magnetic resonance imaging (MRI) modalities: T1-weighted images with and without contrast enhancement, T2-weighted images, and FLAIR images. However, some sequences are often missing in clinical practice due to time constraints or image artifacts, such as patient motion. Consequently, the ability to substitute missing modalities and gain segmentation performance is highly desirable and necessary for the broader adoption of these algorithms in the clinical routine. In this work, we present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023. The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided. The ultimate aim is to facilitate automated brain tumor segmentation pipelines. The image dataset used in the benchmark is diverse and multi-modal, created through collaboration with various hospitals and research institutions.

8.
ArXiv ; 2023 May 12.
Article in English | MEDLINE | ID: mdl-37608937

ABSTRACT

Meningiomas are the most common primary intracranial tumor in adults and can be associated with significant morbidity and mortality. Radiologists, neurosurgeons, neuro-oncologists, and radiation oncologists rely on multiparametric MRI (mpMRI) for diagnosis, treatment planning, and longitudinal treatment monitoring; yet automated, objective, and quantitative tools for non-invasive assessment of meningiomas on mpMRI are lacking. The BraTS meningioma 2023 challenge will provide a community standard and benchmark for state-of-the-art automated intracranial meningioma segmentation models based on the largest expert annotated multilabel meningioma mpMRI dataset to date. Challenge competitors will develop automated segmentation models to predict three distinct meningioma sub-regions on MRI including enhancing tumor, non-enhancing tumor core, and surrounding nonenhancing T2/FLAIR hyperintensity. Models will be evaluated on separate validation and held-out test datasets using standardized metrics utilized across the BraTS 2023 series of challenges including the Dice similarity coefficient and Hausdorff distance. The models developed during the course of this challenge will aid in incorporation of automated meningioma MRI segmentation into clinical practice, which will ultimately improve care of patients with meningioma.
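
The two evaluation metrics named above, the Dice similarity coefficient and the Hausdorff distance, can be computed directly from binary segmentation masks. A minimal sketch on toy 3-D arrays follows; challenge implementations typically add details such as the 95th-percentile Hausdorff variant, which this sketch omits.

```python
# Hedged sketch: Dice coefficient and symmetric Hausdorff distance on toy masks.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-9)

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    pa, pb = np.argwhere(a), np.argwhere(b)              # voxel coordinates of each mask
    return max(directed_hausdorff(pa, pb)[0], directed_hausdorff(pb, pa)[0])

rng = np.random.default_rng(0)
pred = rng.random((32, 32, 32)) > 0.7                    # toy predicted mask
ref = rng.random((32, 32, 32)) > 0.7                     # toy reference mask
print("Dice:", dice(pred, ref), "Hausdorff:", hausdorff(pred, ref))
```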

9.
ArXiv ; 2023 May 30.
Article in English | MEDLINE | ID: mdl-37396608

ABSTRACT

Gliomas are the most common type of primary brain tumors. Although gliomas are relatively rare, they are among the deadliest types of cancer, with survival of less than 2 years after diagnosis. Gliomas are challenging to diagnose, hard to treat, and inherently resistant to conventional therapy. Years of extensive research to improve diagnosis and treatment of gliomas have decreased mortality rates across the Global North, while chances of survival among individuals in low- and middle-income countries (LMICs) remain unchanged and are significantly worse in Sub-Saharan Africa (SSA) populations. Long-term survival with glioma is associated with the identification of appropriate pathological features on brain MRI and confirmation by histopathology. Since 2012, the Brain Tumor Segmentation (BraTS) Challenge has evaluated state-of-the-art machine learning methods to detect, characterize, and classify gliomas. However, it is unclear if the state-of-the-art methods can be widely implemented in SSA given the extensive use of lower-quality MRI technology, which produces poor image contrast and resolution, and, more importantly, the propensity for late presentation of disease at advanced stages as well as the unique characteristics of gliomas in SSA (i.e., suspected higher rates of gliomatosis cerebri). Thus, the BraTS-Africa Challenge provides a unique opportunity to include brain MRI glioma cases from SSA in global efforts through the BraTS Challenge to develop and evaluate computer-aided-diagnostic (CAD) methods for the detection and characterization of glioma in resource-limited settings, where the potential for CAD tools to transform healthcare is greatest.

10.
J Digit Imaging ; 36(5): 2306-2312, 2023 10.
Article in English | MEDLINE | ID: mdl-37407841

ABSTRACT

Since 2000, there have been more than 8000 publications on radiology artificial intelligence (AI). AI breakthroughs allow complex tasks to be automated and even performed beyond human capabilities. However, the lack of details on the methods and algorithm code undercuts its scientific value. Many science subfields have recently faced a reproducibility crisis, eroding trust in processes and results, and influencing the rise in retractions of scientific papers. For the same reasons, conducting research in deep learning (DL) also requires reproducibility. Although several valuable manuscript checklists for AI in medical imaging exist, they are not focused specifically on reproducibility. In this study, we conducted a systematic review of recently published papers in the field of DL to evaluate if the description of their methodology could allow the reproducibility of their findings. We focused on the Journal of Digital Imaging (JDI), a specialized journal that publishes papers on AI and medical imaging. We used the keyword "Deep Learning" and collected the articles published between January 2020 and January 2022. We screened all the articles and included the ones which reported the development of a DL tool in medical imaging. We extracted the reported details about the dataset, data handling steps, data splitting, model details, and performance metrics of each included article. We found 148 articles. Eighty were included after screening for articles that reported developing a DL model for medical image analysis. Five studies have made their code publicly available, and 35 studies have utilized publicly available datasets. We provided figures to show the ratio and absolute count of reported items from included studies. According to our cross-sectional study, in JDI publications on DL in medical imaging, authors infrequently report the key elements of their study to make it reproducible.


Subject(s)
Artificial Intelligence , Diagnostic Imaging , Humans , Cross-Sectional Studies , Reproducibility of Results , Algorithms
11.
J Digit Imaging ; 36(3): 837-846, 2023 06.
Article in English | MEDLINE | ID: mdl-36604366

ABSTRACT

Glioblastoma (GBM) is the most common primary malignant brain tumor in adults. The standard treatment for GBM consists of surgical resection followed by concurrent chemoradiotherapy and adjuvant temozolomide. O-6-methylguanine-DNA methyltransferase (MGMT) promoter methylation status is an important prognostic biomarker that predicts the response to temozolomide and guides treatment decisions. At present, the only reliable way to determine MGMT promoter methylation status is through the analysis of tumor tissue. Considering the complications of tissue-based methods, an imaging-based approach is preferred. This study aimed to compare three different deep learning-based approaches for predicting MGMT promoter methylation status. We obtained 576 T2WI with their corresponding tumor masks and MGMT promoter methylation status from the Brain Tumor Segmentation (BraTS) 2021 dataset. We developed three different models: voxel-wise, slice-wise, and whole-brain. For voxel-wise classification, voxels in methylated and unmethylated MGMT tumor masks were labeled 1 and 2, respectively, with 0 as background. We converted each T2WI into 32 × 32 × 32 patches. We trained a 3D-Vnet model for tumor segmentation. After inference, we reconstructed the whole brain volume based on the patch coordinates. The final prediction of MGMT methylation status was made by majority voting among the predicted voxel values of the biggest connected component. For slice-wise classification, we trained an object detection model for tumor detection and MGMT methylation status prediction, then used majority voting for the final prediction. For the whole-brain approach, we trained a 3D Densenet121 for prediction. Whole-brain, slice-wise, and voxel-wise accuracy was 65.42% (SD 3.97%), 61.37% (SD 1.48%), and 56.84% (SD 4.38%), respectively.
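
A minimal sketch of the voxel-wise decision rule described above, keeping the largest connected tumor component and taking a majority vote over its predicted labels (1 = methylated, 2 = unmethylated, 0 = background), is given below. The toy volume and tie-breaking rule are illustrative assumptions, not the study's code.

```python
# Hedged sketch: majority vote over the largest connected component of a label map.
import numpy as np
from scipy import ndimage

def mgmt_vote(pred: np.ndarray) -> str:
    components, n = ndimage.label(pred > 0)              # connected components of tumor voxels
    if n == 0:
        return "no tumor detected"
    sizes = ndimage.sum(pred > 0, components, index=range(1, n + 1))
    largest = components == (np.argmax(sizes) + 1)       # mask of the biggest component
    votes = pred[largest]
    # Ties break toward "methylated" here; the study's tie handling is not stated.
    return "methylated" if (votes == 1).sum() >= (votes == 2).sum() else "unmethylated"

toy = np.zeros((16, 16, 16), dtype=int)
toy[4:8, 4:8, 4:8] = 1                                   # mostly "methylated" voxels
toy[6:8, 6:8, 6:8] = 2                                   # a few "unmethylated" voxels
print(mgmt_vote(toy))
```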


Subject(s)
Brain Neoplasms , Deep Learning , Glioblastoma , Adult , Humans , Glioblastoma/diagnostic imaging , Glioblastoma/genetics , Glioblastoma/pathology , Temozolomide/therapeutic use , Brain Neoplasms/diagnostic imaging , Brain Neoplasms/genetics , Brain Neoplasms/pathology , DNA Methylation , Brain/diagnostic imaging , Magnetic Resonance Imaging/methods , O(6)-Methylguanine-DNA Methyltransferase/genetics , DNA Modification Methylases/genetics , Tumor Suppressor Proteins/genetics , DNA Repair Enzymes/genetics
12.
Eur Radiol Exp ; 6(1): 58, 2022 11 18.
Article in English | MEDLINE | ID: mdl-36396865

ABSTRACT

BACKGROUND: Primary sclerosing cholangitis (PSC) is a chronic cholestatic liver disease that can lead to cirrhosis and hepatic decompensation. However, predicting future outcomes in patients with PSC is challenging. Our aim was to extract magnetic resonance imaging (MRI) features that predict the development of hepatic decompensation by applying algebraic topology-based machine learning (ML). METHODS: We conducted a retrospective multicenter study among adults with large duct PSC who underwent MRI. A nonlinear framework inspired by topological data analysis and motivated by algebraic topology theory-based ML was used to predict the risk of hepatic decompensation. The topological representations (persistence images) were employed as input for classification to predict who developed early hepatic decompensation within one year after their baseline MRI. RESULTS: We reviewed 590 patients; 298 were excluded due to poor image quality or inadequate liver coverage, leaving 292 potentially eligible subjects, of whom 169 were included in the study. We trained our model using contrast-enhanced delayed phase T1-weighted images on a single-center derivation cohort consisting of 54 patients (hepatic decompensation, n = 21; no hepatic decompensation, n = 33) and tested it on a multicenter independent validation cohort of 115 individuals (hepatic decompensation, n = 31; no hepatic decompensation, n = 84). When our model was applied in the independent validation cohort, it remained predictive of early hepatic decompensation (area under the receiver operating characteristic curve = 0.84). CONCLUSIONS: Algebraic topology-based ML is a methodological approach that can predict outcomes in patients with PSC and has the potential for application in other chronic liver diseases.
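
Persistence images, the topological representations mentioned above, map a persistence diagram's (birth, death) pairs onto a fixed-size grid by placing weighted Gaussians in birth-persistence coordinates. The sketch below builds one in plain NumPy on a synthetic diagram; the weighting, bandwidth, and grid extent are illustrative choices, not the study's parameters.

```python
# Hedged sketch: a persistence image from a synthetic persistence diagram.
import numpy as np

def persistence_image(diagram, resolution=20, sigma=0.05, extent=(0, 1)):
    """diagram: array of (birth, death) pairs; returns a resolution x resolution image."""
    birth = diagram[:, 0]
    pers = diagram[:, 1] - diagram[:, 0]                  # birth-persistence coordinates
    grid = np.linspace(extent[0], extent[1], resolution)
    gx, gy = np.meshgrid(grid, grid, indexing="ij")
    img = np.zeros((resolution, resolution))
    for b, p in zip(birth, pers):
        weight = p                                        # linear persistence weighting (assumed)
        img += weight * np.exp(-((gx - b) ** 2 + (gy - p) ** 2) / (2 * sigma ** 2))
    return img

diagram = np.array([[0.1, 0.4], [0.2, 0.9], [0.5, 0.6]])  # synthetic (birth, death) pairs
print(persistence_image(diagram).shape)                   # fixed-size input for a classifier
```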


Subject(s)
Cholangitis, Sclerosing , Liver Diseases , Adult , Humans , Cholangitis, Sclerosing/diagnostic imaging , Cholangitis, Sclerosing/pathology , Machine Learning , Magnetic Resonance Imaging/methods , Multicenter Studies as Topic
13.
Radiol Artif Intell ; 4(5): e210290, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36204544

ABSTRACT

Minimizing bias is critical to adoption and implementation of machine learning (ML) in clinical practice. Systematic mathematical biases produce consistent and reproducible differences between the observed and expected performance of ML systems, resulting in suboptimal performance. Such biases can be traced back to various phases of ML development: data handling, model development, and performance evaluation. This report presents 12 suboptimal practices during data handling of an ML study, explains how those practices can lead to biases, and describes what may be done to mitigate them. Authors employ an arbitrary and simplified framework that splits ML data handling into four steps: data collection, data investigation, data splitting, and feature engineering. Examples from the available research literature are provided. A Google Colaboratory Jupyter notebook includes code examples to demonstrate the suboptimal practices and steps to prevent them. Keywords: Data Handling, Bias, Machine Learning, Deep Learning, Convolutional Neural Network (CNN), Computer-aided Diagnosis (CAD) © RSNA, 2022.
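
One of the data-splitting pitfalls this kind of report covers is splitting at the examination level when a patient contributes several examinations, which lets patient-specific signal leak between training and test sets. A minimal sketch of a leakage-free, patient-grouped split is shown below; the synthetic data and two-exams-per-patient setup are assumptions for illustration, not taken from the report's notebook.

```python
# Hedged sketch: patient-grouped splitting to avoid cross-set leakage.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(0)
patient_ids = np.repeat(np.arange(100), 2)        # two exams per patient (assumed)
X = rng.normal(size=(200, 16))                    # stand-in features, one row per exam
y = rng.integers(0, 2, size=200)

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

# No patient appears in both sets, so per-patient correlations cannot leak.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
print(len(train_idx), "train exams,", len(test_idx), "test exams")
```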

14.
J Neurooncol ; 159(2): 447-455, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35852738

ABSTRACT

INTRODUCTION: Glioblastomas (GBMs) are highly aggressive tumors. A common clinical challenge after standard of care treatment is differentiating tumor progression from treatment-related changes, also known as pseudoprogression (PsP). Usually, PsP resolves or stabilizes without further treatment or with a course of steroids, whereas true progression (TP) requires more aggressive management. Differentiating PsP from TP affects the patient's outcome. This study investigated the use of deep learning to distinguish PsP MRI features from progressive disease. METHODS: We included GBM patients with a new or increasingly enhancing lesion within the original radiation field. We labeled those who subsequently were stable or improved on imaging and clinically as PsP, and those with clinical and imaging deterioration as TP. A subset of subjects underwent a second resection; we labeled these subjects as PsP or TP based on the histological diagnosis. We coregistered contrast-enhanced T1 MRIs with T2-weighted images for each patient and used them as input to a 3-D DenseNet121 model, using five-fold cross-validation to predict TP vs PsP. RESULTS: We included 124 patients who met the criteria, of whom 63 were PsP and 61 were TP. We trained a deep learning model that achieved 76.4% (range 70-84%, SD 5.122) mean accuracy over the 5 folds, 0.7560 (range 0.6553-0.8535, SD 0.069) mean AUROC, 88.72% (SD 6.86) mean sensitivity, and 62.05% (SD 9.11) mean specificity. CONCLUSION: We report the development of a deep learning model that distinguishes PsP from TP in GBM patients treated per the Stupp protocol. Further refinement and external validation are required prior to widespread adoption in clinical practice.
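
The fold-wise mean and SD figures above come from aggregating metrics across cross-validation splits. The sketch below shows that aggregation with stratified five-fold cross-validation; a logistic regression on random features stands in for the 3-D DenseNet121, and the numbers it produces are meaningless beyond illustrating the mechanics.

```python
# Hedged sketch: stratified 5-fold CV with fold-wise mean +/- SD of accuracy and AUROC.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(124, 32))                 # stand-in features, one row per patient
y = rng.integers(0, 2, size=124)               # 1 = true progression, 0 = pseudoprogression (toy labels)

accs, aucs = [], []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    prob = clf.predict_proba(X[test_idx])[:, 1]
    accs.append(accuracy_score(y[test_idx], prob > 0.5))
    aucs.append(roc_auc_score(y[test_idx], prob))

print(f"accuracy {np.mean(accs):.3f} +/- {np.std(accs):.3f}, AUROC {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```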


Subject(s)
Brain Neoplasms , Deep Learning , Glioblastoma , Disease Progression , Humans , Magnetic Resonance Imaging , Retrospective Studies
15.
Tomography ; 8(2): 905-919, 2022 03 24.
Article in English | MEDLINE | ID: mdl-35448707

ABSTRACT

There is a growing demand for high-resolution (HR) medical images for both clinical and research applications. Image quality is inevitably traded off against acquisition time, which in turn impacts patient comfort, examination costs, dose, and motion-induced artifacts. For many image-based tasks, increasing the apparent spatial resolution in the through-plane direction to produce multi-planar reformats or 3D images is commonly desired. Single-image super-resolution (SR) is a promising deep learning-based technique to increase the resolution of a 2D image, but there are few reports on 3D SR. Further, perceptual loss has been proposed in the literature to better capture textural details and edges than pixel-wise loss functions, by comparing semantic distances in the high-dimensional feature space of a pre-trained 2D network (e.g., VGG). However, it is not clear how to generalize it to 3D medical images, and the attendant implications are unclear. In this paper, we propose a framework called SOUP-GAN: Super-resolution Optimized Using Perceptual-tuned Generative Adversarial Network (GAN), to produce thinner slices (e.g., higher resolution in the 'Z' plane) with anti-aliasing and deblurring. The proposed method outperforms other conventional resolution-enhancement methods and previous SR work on medical images in both qualitative and quantitative comparisons. Moreover, we examine the model's generalization to arbitrary user-selected SR ratios and imaging modalities. Our model shows promise as a novel 3D SR interpolation technique, with potential for both clinical and research applications.
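
A perceptual loss of the kind discussed above compares feature activations of a pretrained 2-D network rather than raw pixels. The sketch below implements a slice-wise VGG16 variant in PyTorch; the layer cut-off, channel replication for grayscale slices, and use of untrained weights (to keep the sketch offline) are illustrative assumptions, not SOUP-GAN's actual loss.

```python
# Hedged sketch: slice-wise VGG feature (perceptual) loss. In practice pretrained
# ImageNet weights would be loaded; weights=None keeps the example self-contained.
import torch
import torch.nn as nn
from torchvision.models import vgg16

class PerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights=None).features[:16].eval()   # up to relu3_3 (assumed cut-off)
        for p in self.features.parameters():
            p.requires_grad_(False)
        self.mse = nn.MSELoss()

    def forward(self, sr, hr):
        # Grayscale slices are repeated to 3 channels to match VGG's expected input.
        sr3, hr3 = sr.repeat(1, 3, 1, 1), hr.repeat(1, 3, 1, 1)
        return self.mse(self.features(sr3), self.features(hr3))

loss_fn = PerceptualLoss()
sr = torch.rand(2, 1, 128, 128)    # super-resolved slices (toy)
hr = torch.rand(2, 1, 128, 128)    # ground-truth high-resolution slices (toy)
print(loss_fn(sr, hr).item())
```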


Subject(s)
Artifacts , Magnetic Resonance Imaging , Humans , Imaging, Three-Dimensional , Motion
16.
Curr Res Immunol ; 2: 155-162, 2021.
Article in English | MEDLINE | ID: mdl-34545350

ABSTRACT

Early prediction of COVID-19 in-hospital mortality usually relies on patients' preexisting comorbidities and is rarely reproducible in independent cohorts. We wanted to compare the role of routinely measured biomarkers of immunity, inflammation, and cellular damage with preexisting comorbidities in eight different machine-learning models to predict mortality, and to evaluate their performance in an independent population. We recruited and followed up consecutive adult patients with SARS-CoV-2 infection in two different Italian hospitals. We predicted 60-day mortality in one cohort (development dataset, n = 299 patients, of which 80% was allocated to the training set and 20% to the holdout test set) and retested the models in the second cohort (external validation dataset, n = 402). Demographic, clinical, and laboratory features at admission, treatments, and disease outcomes were significantly different between the two cohorts. Notably, significant differences were observed for %lymphocytes (p < 0.05), international normalized ratio (p < 0.01), platelets, alanine aminotransferase, and creatinine (all p < 0.001). The primary outcome (60-day mortality) was 29.10% (n = 87) in the development dataset and 39.55% (n = 159) in the external validation dataset. The performance of the 8 tested models on the external validation dataset was similar to that on the holdout test dataset, indicating that the models capture the key predictors of mortality. The SHAP analysis in both datasets showed that age, immune features (%lymphocytes, platelets), and LDH substantially impacted all models' predictions, while the contributions of creatinine and CRP varied among the different models. The best-performing model was model 8 (60-day mortality AUROC 0.83 ± 0.06 in the holdout test set, 0.79 ± 0.02 in the external validation dataset). The features with the greatest impact on this model's predictions were age, LDH, platelets, and %lymphocytes, more than comorbidities or inflammation markers, and these findings were highly consistent in both datasets, likely reflecting the effect of the virus at the very beginning of the disease.
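
The workflow above pairs holdout and external AUROC evaluation with SHAP attributions to rank predictors. A minimal sketch of that pattern on synthetic data follows; the gradient-boosted model, feature count, and simulated labels are stand-ins, not the study's models or clinical variables.

```python
# Hedged sketch: holdout AUROC plus SHAP-style feature attribution on synthetic data.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(299, 6))                       # stand-ins for e.g. age, LDH, platelets, %lymphocytes, CRP, creatinine
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=299) > 0).astype(int)   # simulated outcome

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("holdout AUROC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)           # per-sample, per-feature attributions
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```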

19.
Radiology ; 299(2): 313-323, 2021 05.
Article in English | MEDLINE | ID: mdl-33687284

ABSTRACT

Background Missing MRI sequences represent an obstacle in the development and use of deep learning (DL) models that require multiple inputs. Purpose To determine if synthesizing brain MRI scans using generative adversarial networks (GANs) allows for the use of a DL model for brain lesion segmentation that requires T1-weighted images, postcontrast T1-weighted images, fluid-attenuated inversion recovery (FLAIR) images, and T2-weighted images. Materials and Methods In this retrospective study, brain MRI scans obtained between 2011 and 2019 were collected, and scenarios were simulated in which the T1-weighted images and FLAIR images were missing. Two GANs were trained, validated, and tested using 210 glioblastomas (GBMs) (Multimodal Brain Tumor Image Segmentation Benchmark [BRATS] 2017) to generate T1-weighted images from postcontrast T1-weighted images and FLAIR images from T2-weighted images. The quality of the generated images was evaluated with mean squared error (MSE) and the structural similarity index (SSI). The segmentations obtained with the generated scans were compared with those obtained with the original MRI scans using the dice similarity coefficient (DSC). The GANs were validated on sets of GBMs and central nervous system lymphomas from the authors' institution to assess their generalizability. Statistical analysis was performed using the Mann-Whitney, Friedman, and Dunn tests. Results Two hundred ten GBMs from the BRATS data set and 46 GBMs (mean patient age, 58 years ± 11 [standard deviation]; 27 men [59%] and 19 women [41%]) and 21 central nervous system lymphomas (mean patient age, 67 years ± 13; 12 men [57%] and nine women [43%]) from the authors' institution were evaluated. The median MSE for the generated T1-weighted images ranged from 0.005 to 0.013, and the median MSE for the generated FLAIR images ranged from 0.004 to 0.103. The median SSI ranged from 0.82 to 0.92 for the generated T1-weighted images and from 0.76 to 0.92 for the generated FLAIR images. The median DSCs for the segmentation of the whole lesion, the FLAIR hyperintensities, and the contrast-enhanced areas using the generated scans were 0.82, 0.71, and 0.92, respectively, when replacing both T1-weighted and FLAIR images; 0.84, 0.74, and 0.97 when replacing only the FLAIR images; and 0.97, 0.95, and 0.92 when replacing only the T1-weighted images. Conclusion Brain MRI scans generated using generative adversarial networks can be used as deep learning model inputs in case MRI sequences are missing. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Zhong in this issue. An earlier incorrect version of this article appeared online. This article was corrected on April 12, 2021.
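
The image-quality metrics reported above, mean squared error and the structural similarity index, are available in scikit-image. A minimal sketch on a toy slice pair follows; the intensity range and noise level are arbitrary, and a real evaluation would run over normalized volumes rather than a single random slice.

```python
# Hedged sketch: MSE and SSIM between a real and a synthesized slice (toy data).
import numpy as np
from skimage.metrics import mean_squared_error, structural_similarity

rng = np.random.default_rng(0)
real = rng.random((240, 240)).astype(np.float32)                         # acquired slice (toy)
synthetic = np.clip(real + rng.normal(0, 0.05, real.shape), 0, 1).astype(np.float32)  # GAN output stand-in

print("MSE:", mean_squared_error(real, synthetic))
print("SSIM:", structural_similarity(real, synthetic, data_range=1.0))   # data_range matches the [0, 1] scale
```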


Subject(s)
Brain Neoplasms/diagnostic imaging , Deep Learning , Glioblastoma/diagnostic imaging , Image Processing, Computer-Assisted/methods , Lymphoma/diagnostic imaging , Magnetic Resonance Imaging/methods , Aged , Contrast Media , Female , Humans , Male , Middle Aged , Retrospective Studies
20.
Nucl Med Commun ; 42(7): 763-771, 2021 Jul 01.
Article in English | MEDLINE | ID: mdl-33741855

ABSTRACT

BACKGROUND: To investigate the correlation between 18F-labeled fluoroazomycinarabinoside (18F-FAZA) PET data and immunohistochemical markers of hypoxia in patients with high-grade glioma (HGG). PATIENTS AND METHODS: Prospective study including 20 patients with brain MRI suggestive of HGG who underwent 18F-FAZA PET/CT before treatment for hypoxia assessment. For each 18F-FAZA PET scan, SUVmax, SUVmean and 18F-FAZA tumour volume (FTV) at 40, 50 and 60% thresholds of SUVmax were calculated; hypoxic volume was estimated by applying different thresholds (1.2, 1.3 and 1.4) to the tumour/blood ratio. Seventeen patients were analysed. The immunohistochemical analysis assessed the following parameters: hypoxia-inducible factor 1α, carbonic anhydrase IX (CA-IX), glucose transporter-1, tumour vascularity and Ki-67. RESULTS: 18F-FAZA PET showed a single lesion in 15/17 patients and multiple lesions in 2/17 patients. Twelve of 17 patients had grade IV glioma and 5 of 17 had grade III glioma. Bioptic and surgical samples were analysed separately. In the surgical subgroup (n = 7), a positive correlation was observed between CA-IX and SUVmax (P = 0.0002), SUVmean40 (P = 0.0058), SUVmean50 (P = 0.009), SUVmean60 (P = 0.0153), FTV-40-50-60 (P = 0.0424) and hypoxic volume1.2-1.3-1.4 (P = 0.0058). In the bioptic group (n = 10), tumour vascularisation was inversely correlated with SUVmax (P = 0.0094), SUVmean40 (P = 0.0107), SUVmean50 (P = 0.0094) and SUVmean60 (P = 0.0154). CONCLUSIONS: The correlation of 18F-FAZA PET parameters with CD31 and CA-IX represents a reliable method for assessing tumour hypoxia in HGG. The inverse correlation between tumour vascularisation, SUVmax and SUVmean suggests that highly vascularised tumours might have greater oxygen supply and therefore less hypoxia.
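
The PET quantities above, SUVmax, threshold-based SUVmean and FTV, and a ratio-based hypoxic volume, reduce to simple voxel arithmetic once an SUV map is available. The sketch below computes them on a synthetic volume; the voxel size, blood-pool SUV, and whole-volume thresholding (rather than thresholding within a delineated tumour VOI) are simplifying assumptions for illustration.

```python
# Hedged sketch: SUVmax, SUVmean at 40% of SUVmax, FTV, and a hypoxic volume at a
# tumour-to-blood ratio of 1.2, computed on a synthetic SUV map.
import numpy as np

rng = np.random.default_rng(0)
suv = rng.gamma(2.0, 0.5, size=(64, 64, 32))       # toy SUV map
voxel_ml = 0.016                                   # voxel volume in mL (assumed)
blood_suv = 1.1                                    # mean SUV in a blood-pool ROI (assumed)

suv_max = suv.max()
mask_40 = suv >= 0.4 * suv_max                     # 40%-of-SUVmax threshold
suv_mean_40 = suv[mask_40].mean()
ftv_40 = mask_40.sum() * voxel_ml                  # 18F-FAZA tumour volume (mL)
hypoxic_volume = (suv / blood_suv >= 1.2).sum() * voxel_ml
# In practice these thresholds are applied within a tumour VOI, not the whole field of view.
print(suv_max, suv_mean_40, ftv_40, hypoxic_volume)
```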


Subject(s)
Nitroimidazoles , Positron Emission Tomography Computed Tomography , Adult , Humans , Male , Middle Aged , Positron-Emission Tomography