Results 1 - 20 of 66
1.
Article in English | MEDLINE | ID: mdl-38993353

ABSTRACT

Among patients with early-stage non-small cell lung cancer (NSCLC) undergoing surgical resection, identifying who is at high risk of recurrence can inform clinical guidelines with respect to more aggressive follow-up and/or adjuvant therapy. While predicting recurrence from pre-surgical data is ideal, clinically important pathological features are only evaluated postoperatively. Therefore, we developed two supervised classification models to assess the importance of pre- and post-surgical features for predicting 5-year recurrence. An integrated dataset was generated by combining clinical covariates and radiomic features calculated from pre-surgical computed tomography images. After removing correlated radiomic features, the SHapley Additive exPlanations (SHAP) method was used to measure feature importance and select relevant features. Binary classification was performed using a Support Vector Machine, followed by a feature ablation study assessing the impact of radiomic and clinical features. We demonstrate that the post-surgical model significantly outperforms the pre-surgical model in predicting lung cancer recurrence, with tumor pathological features and peritumoral radiomic features contributing substantially to the model's performance.
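As a rough illustration of the pipeline described above, the sketch below applies correlation pruning followed by an SVM on synthetic data; the SHAP-based selection step is omitted, and all features and labels are invented stand-ins:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def drop_correlated(X, threshold=0.95):
    """Greedily keep each feature only if its absolute correlation with
    every already-kept feature stays at or below `threshold`."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in kept):
            kept.append(j)
    return X[:, kept], kept

# Synthetic stand-in for the combined clinical + radiomic feature table.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
X = np.hstack([X, X[:, :3] + 0.01 * rng.normal(size=(120, 3))])  # near-duplicate columns
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # surrogate 5-year recurrence label

X_sel, kept = drop_correlated(X)  # the three near-duplicates are pruned
clf = make_pipeline(StandardScaler(), SVC(probability=True)).fit(X_sel, y)
```

The greedy pass keeps the first occurrence of each correlated group, which is enough for illustration; the paper's actual thresholds and feature sets are not given in the abstract.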

2.
Bioengineering (Basel) ; 11(5)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38790302

ABSTRACT

The progress of incorporating deep learning into medical image interpretation has been greatly hindered by the tremendous cost and time of generating ground truth for supervised machine learning, alongside concerns about the inconsistent quality of acquired images. Active learning offers a potential solution: it expands dataset ground truth by algorithmically choosing the most informative samples for labeling. Still, this effort incurs human labeling costs, which need to be minimized. Furthermore, automatic labeling approaches employing active learning often exhibit overfitting tendencies, selecting samples closely aligned with the training set distribution while excluding out-of-distribution samples that could potentially improve the model's effectiveness. We propose that the majority of out-of-distribution instances can be attributed to inconsistencies across images. Since the FDA approved the first whole-slide image system for medical diagnosis in 2017, whole-slide images have provided enriched critical information to advance the field of automated histopathology. Here, we exemplify the benefits of a novel deep learning strategy that utilizes high-resolution whole-slide microscopic images. We quantitatively assess and visually highlight the inconsistencies within the whole-slide image dataset employed in this study. Accordingly, we introduce a deep learning-based preprocessing algorithm designed to normalize unknown samples to the training set distribution, effectively mitigating the overfitting issue. Consequently, our approach significantly increases the amount of automatic region-of-interest ground truth labeling on high-resolution whole-slide images using active deep learning. We accept 92% of the automatic labels generated for our unlabeled data cohort, expanding the labeled dataset by 845%. Additionally, we demonstrate expert time savings of 96% relative to manual expert ground-truth labeling.
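Uncertainty sampling is one common way to "algorithmically choose the most informative samples"; the abstract does not specify the criterion used, so the least-confidence rule below is only an illustrative stand-in:

```python
import numpy as np

def least_confident(probs, k):
    """Pick the k samples whose top-class probability is lowest,
    i.e. the candidates most worth sending for expert labeling."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Toy model outputs for four unlabeled samples (two classes).
probs = np.array([[0.98, 0.02],
                  [0.55, 0.45],   # least confident
                  [0.80, 0.20],
                  [0.60, 0.40]])
query = least_confident(probs, k=2)  # indices of the 2 most uncertain samples
```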

3.
Neurotoxicol Teratol ; 102: 107336, 2024.
Article in English | MEDLINE | ID: mdl-38402997

ABSTRACT

Microglial cells mediate diverse homeostatic, inflammatory, and immune processes during normal development and in response to cytotoxic challenges. During these functional activities, microglial cells undergo distinct numerical and morphological changes in different tissue volumes in both rodent and human brains. However, it remains unclear how these cytostructural changes in microglia correlate with region-specific neurochemical functions. To better understand these relationships, neuroscientists need accurate, reproducible, and efficient methods for quantifying microglial cell number and morphologies in histological sections. To address this deficit, we developed a novel deep learning (DL)-based classification and stereology approach that links the appearance of Iba1-immunostained microglial cells at low magnification (20×) with the total number of cells in the same brain region, using unbiased stereology counts as ground truth. Once DL models are trained, total microglial cell numbers in specific regions of interest can be estimated and treatment groups predicted in a high-throughput manner (<1 min) using only low-power images from test cases, without the need for time- and labor-intensive stereology counts or morphology ratings. Results for this DL-based automatic stereology approach on two datasets (39 mouse brains in total) showed >90% accuracy, 100% repeatability (test-retest), and 60× greater efficiency than manual stereology (<1 min vs. ~60 min) using the same tissue sections. Ongoing and future work includes use of this DL-based approach to establish clear neurodegeneration profiles in age-related human neurological diseases and related animal models.
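A minimal sketch of the training setup described above, with invented per-section summary features standing in for the low-power Iba1 images and synthetic stereology counts as ground truth:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical setup: two summary features per section (e.g. stained-pixel
# fraction, blob count) paired with unbiased stereology counts as ground
# truth. A deep model on raw 20x images would replace this in practice.
rng = np.random.default_rng(1)
features = rng.uniform(size=(60, 2))
counts = 1000 * features[:, 0] + 200 * features[:, 1] + rng.normal(0, 10, size=60)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(features[:40], counts[:40])      # train on 40 sections
pred = model.predict(features[40:])        # fast counts for held-out sections
```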


Subject(s)
Deep Learning, Microglia, Animals, Mice, Humans, Brain/pathology, Cell Count/methods
4.
Phys Rev Lett ; 131(22): 221802, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-38101356

ABSTRACT

We propose theories of a complete mirror world with parity (P) solving the strong CP problem. P exchanges the entire standard model with its mirror copy. We derive bounds on the two new mass scales that arise: v′, where parity and mirror electroweak symmetry are spontaneously broken, and v_3, where the color groups break to the diagonal strong interactions. The strong CP problem is solved even if v_3 ≪ v′, when heavy colored states at the scale v_3 may be accessible at the LHC and future colliders. Furthermore, we argue that the breaking of P introduces negligible contributions to θ̄_QCD, starting at three-loop order. The symmetry breaking at v_3 can be made dynamical, without introducing an additional hierarchy problem.

5.
Front Big Data ; 6: 1135191, 2023.
Article in English | MEDLINE | ID: mdl-37265587

ABSTRACT

Accurately modeling information diffusion within and across social media platforms has many practical applications, such as estimating the size of the audience exposed to a particular narrative or testing intervention techniques for addressing misinformation. However, real data reveal phenomena that pose significant challenges to modeling: events in the physical world affect conversations on different social media platforms in varying ways; coordinated influence campaigns may swing discussions in unexpected directions; and a platform's algorithms determine who sees which message, which affects how information spreads in opaque ways. This article describes our research efforts in the SocialSim program of the Defense Advanced Research Projects Agency. As formulated by DARPA, the intent of the SocialSim research program was "to develop innovative technologies for high-fidelity computational simulation of online social behavior ... [focused] specifically on information spread and evolution." In this article we document lessons we learned over the 4+ years of the recently concluded project. Our hope is that an accounting of our experience may prove useful to other researchers should they attempt a related project.

6.
Cancers (Basel) ; 15(8)2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37190264

ABSTRACT

Histopathological classification in prostate cancer remains a challenge, with high dependence on the expert practitioner. We develop a deep learning (DL) model to identify the most prominent Gleason pattern in a highly curated data cohort and validate it on an independent dataset. The histology images are partitioned into 14,509 tiles and curated by an expert to identify individual glandular structures with assigned primary Gleason pattern grades. We use transfer learning and fine-tuning approaches to compare several deep neural network architectures that are trained on a corpus of camera images (ImageNet) and tuned with histology examples to be context-appropriate for histopathological discrimination with small samples. In our study, the best DL network is able to discriminate cancer grade (GS3/4) from benign with an accuracy of 91%, an F1-score of 0.91, and an AUC of 0.96 in a baseline test (52 patients), while discrimination of GS3 from GS4 had an accuracy of 68% and an AUC of 0.71 (40 patients).

7.
EPJ Data Sci ; 12(1): 8, 2023.
Article in English | MEDLINE | ID: mdl-37006640

ABSTRACT

Forecasting social media activity can be of practical use in many scenarios, from understanding trends, such as which topics are likely to engage more users in the coming week, to identifying unusual behavior, such as coordinated information operations or currency manipulation efforts. To evaluate a new approach to forecasting, it is important to have baselines against which to assess performance gains. We experimentally evaluate the performance of four baselines for forecasting activity in several social media datasets that record discussions related to three different geo-political contexts synchronously taking place on two different platforms, Twitter and YouTube. Experiments are done over hourly time periods. Our evaluation identifies the baselines which are most accurate for particular metrics and thus provides guidance for future work in social media modeling.
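Two standard forecasting baselines for hourly activity series can be sketched as follows; the paper's exact baselines are not specified in the abstract, so these are generic examples:

```python
import numpy as np

def persistence(series, horizon):
    """Repeat the last observed value across the forecast horizon."""
    return np.full(horizon, series[-1], dtype=float)

def seasonal_naive(series, horizon, period=24):
    """Repeat the most recent full cycle (e.g. the last 24 hourly values)."""
    cycle = np.asarray(series[-period:], dtype=float)
    return np.resize(cycle, horizon)

hourly = np.arange(48, dtype=float)   # toy hourly activity counts
p = persistence(hourly, 3)            # [47., 47., 47.]
s = seasonal_naive(hourly, 3)         # [24., 25., 26.]
```

Against baselines like these, any proposed model must show a measurable error reduction to claim a performance gain.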

8.
Article in English | MEDLINE | ID: mdl-36327184

ABSTRACT

The detection and segmentation of stained cells and nuclei are essential prerequisites for subsequent quantitative research for many diseases. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation of microscopic images from stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.

9.
J Chem Neuroanat ; 124: 102134, 2022 10.
Article in English | MEDLINE | ID: mdl-35839940

ABSTRACT

Stereology-based methods provide the current state-of-the-art approaches for accurate quantification of numbers and other morphometric parameters of biological objects in stained tissue sections. The advent of artificial intelligence (AI)-based deep learning (DL) offers the possibility of improving throughput by automating the collection of stereology data. We have recently shown that DL can achieve accuracy comparable to manual stereology, but with higher repeatability, improved throughput, and less variation due to human factors, by quantifying the total number of immunostained cells at their maximal profile of focus in extended depth of field (EDF) images. As the first of two novel contributions in this work, we propose a semi-automatic approach using a handcrafted Adaptive Segmentation Algorithm (ASA) to automatically generate ground truth on EDF images for training our DL models to count cells automatically using unbiased stereology methods. This update increases the amount of training data, thereby improving the accuracy and efficiency of automatic cell counting, without requiring extra expert time. The second contribution is a Multi-channel Input and Multi-channel Output (MIMO) method using a U-Net deep learning architecture for automatic cell counting in a stack of z-axis images (also known as disector stacks). This DL-based digital automation of the ordinary optical fractionator ensures accurate counts through spatial separation of stained cells in the z-plane, avoiding the under-counting errors (masking) that arise with EDF images when cells overlap in the z-plane, without the shortcomings of 3D and recurrent DL models. We demonstrate the practical applications of these advances with automatic disector-based estimates of the total number of NeuN-immunostained neurons in a mouse neocortex. In summary, this work provides the first demonstration of automatic estimation of a total cell number in tissue sections using a combination of deep learning and the disector-based optical fractionator method.
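The optical fractionator estimate that the disector-based method automates is a standard stereology formula; a minimal sketch with invented sampling fractions:

```python
def fractionator_estimate(q_minus, ssf, asf, tsf):
    """Optical fractionator: total number N equals the objects counted in
    the disector samples (Q-) scaled by the reciprocals of the section
    (ssf), area (asf), and thickness (tsf) sampling fractions."""
    return q_minus * (1 / ssf) * (1 / asf) * (1 / tsf)

# Toy example: 150 NeuN-positive cells counted with 1/10 of sections,
# 1/25 of the area, and 1/2 of the section thickness sampled.
n_total = fractionator_estimate(150, ssf=0.1, asf=0.04, tsf=0.5)  # ~75,000
```

The formula itself is textbook stereology; the specific counts and fractions here are illustrative, not taken from the paper.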


Subject(s)
Artificial Intelligence, Neocortex, Algorithms, Animals, Cell Count/methods, Humans, Mice, Neurons
10.
Diagnostics (Basel) ; 12(2)2022 Jan 29.
Article in English | MEDLINE | ID: mdl-35204436

ABSTRACT

Glioma is the most common type of primary malignant brain tumor. Accurate survival time prediction for glioma patients may positively impact treatment planning. In this paper, we develop an automatic survival time prediction tool for glioblastoma patients along with an effective solution to the limited availability of annotated medical imaging datasets. Ensembles of snapshots of three-dimensional (3D) deep convolutional neural networks (CNNs) are applied to Magnetic Resonance Imaging (MRI) data to predict the survival time of high-grade glioma patients. Additionally, multi-sequence MRI images were used to enhance survival prediction performance. A novel way to leverage the potential of ensembles to overcome the limited availability of labeled medical images is shown. This new classification method separates glioblastoma patients into long- and short-term survivors. The BraTS (Brain Tumor Segmentation) 2019 training dataset was used in this work. Each patient case consisted of three MRI sequences (T1CE, T2, and FLAIR). Our training set contained 163 cases, while the test set included 46 cases. The best known prediction accuracy for this type of problem, 74%, was achieved on the unseen test set.

11.
Cureus ; 13(9): e17889, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34548989

ABSTRACT

Pemphigus is a skin condition that causes intraepidermal separation of keratinocytes. Multiple types of pemphigus exist, including pemphigus vulgaris and pemphigus foliaceus. These can be differentiated by histopathology, clinical presentation, appearance of lesions, and antibodies, among other factors. It is important to distinguish between the two because of differences in management and prognosis. Here we present a case of pemphigus foliaceus, as well as a discussion of the key differences between pemphigus foliaceus and vulgaris.

12.
IEEE Access ; 9: 72970-72979, 2021.
Article in English | MEDLINE | ID: mdl-34178559

ABSTRACT

A number of recent papers have shown experimental evidence suggesting it is possible to build highly accurate deep neural network models to detect COVID-19 from chest X-ray images. In this paper, we show that good generalization to unseen sources has not been achieved. Experiments with richer data sets than previously used show that models have high accuracy on seen sources but poor accuracy on unseen sources. The reason for the disparity is that a convolutional neural network, which learns its own features, can focus on differences between X-ray machines or in positioning within the machines, for example. Any feature that a person would clearly rule out is called a confounding feature. Some of the models were trained on COVID-19 image data taken from publications, which may differ from raw images. Some data sets were of pediatric pneumonia cases, whereas COVID-19 chest X-rays are almost exclusively from adults, so lung size becomes a spurious feature that can be exploited. In this work, we eliminated many confounding features by working with data as close to raw as possible. Still, deep-learned models may leverage source-specific confounders to differentiate COVID-19 from pneumonia, preventing generalization to new data sources (i.e., external sites). Our models achieved an AUC of 1.00 on seen data sources but, in the worst case, an AUC of only 0.38 on unseen ones. This indicates that such models need further assessment and development before they can be broadly clinically deployed. An example of fine-tuning to improve performance at a new site is given.
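The seen-versus-unseen gap can be reproduced on synthetic data in which a single confounding feature encodes the source rather than the disease; everything below (sites, labels, feature) is invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Leave-one-source-out check: train on two sites, evaluate on a third.
# The lone feature tracks the label on the training sites but inverts on
# the held-out site, so held-out performance collapses.
rng = np.random.default_rng(0)
n = 300
site = rng.integers(0, 3, size=n)      # three imaginary data sources
y = rng.integers(0, 2, size=n)         # disease label
confound = np.where(site == 2, 1 - y, y) + 0.1 * rng.normal(size=n)
X = confound.reshape(-1, 1)

train = site != 2                      # sites 0 and 1 are "seen"
clf = LogisticRegression().fit(X[train], y[train])
auc_seen = roc_auc_score(y[train], clf.predict_proba(X[train])[:, 1])
auc_unseen = roc_auc_score(y[~train], clf.predict_proba(X[~train])[:, 1])
```

Here `auc_seen` is near 1.0 while `auc_unseen` falls near 0, mirroring the kind of 1.00-versus-0.38 disparity the abstract reports.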

13.
Tomography ; 7(2): 154-168, 2021 04 29.
Article in English | MEDLINE | ID: mdl-33946756

ABSTRACT

Lung cancer causes more deaths globally than any other type of cancer. To determine the best treatment, detecting EGFR and KRAS mutations is of interest. However, non-invasive ways to obtain this information are not available. Furthermore, sufficiently large relevant public datasets are often lacking, so the performance of single classifiers is not outstanding. In this paper, an ensemble approach is applied to increase the performance of EGFR and KRAS mutation prediction using a small dataset. A new voting scheme, Selective Class Average Voting (SCAV), is proposed, and its performance is assessed both for machine learning models and CNNs. For the EGFR mutation, the machine learning approach increased sensitivity from 0.66 to 0.75 and AUC from 0.68 to 0.70. With the deep learning approach, an AUC of 0.846 was obtained, and with SCAV, the accuracy of the model increased from 0.80 to 0.857. For the KRAS mutation, a significant increase in performance was found both for the machine learning models (0.65 to 0.71 AUC) and the deep learning models (0.739 to 0.778 AUC). The results obtained in this work show how to learn effectively from small image datasets to predict EGFR and KRAS mutations, and that using ensembles with SCAV increases the performance of machine learning classifiers and CNNs. The results provide confidence that, as large datasets become available, tools to augment clinical capabilities can be fielded.
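The abstract does not detail how SCAV selects its votes, so the sketch below shows only plain class-average (soft) voting as a baseline stand-in:

```python
import numpy as np

def class_average_vote(prob_list):
    """Average per-class probabilities across classifiers and take the
    argmax. SCAV additionally selects which classifiers contribute per
    class, a detail not specified in the abstract, so this is the plain
    soft-voting baseline it refines."""
    avg = np.mean(prob_list, axis=0)
    return avg.argmax(axis=1), avg

# Two toy classifiers, two samples, two classes (e.g. mutant vs. wild type).
probs_a = np.array([[0.6, 0.4], [0.3, 0.7]])
probs_b = np.array([[0.8, 0.2], [0.4, 0.6]])
labels, avg = class_average_vote([probs_a, probs_b])
```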


Subject(s)
Non-Small-Cell Lung Carcinoma, Lung Neoplasms, Non-Small-Cell Lung Carcinoma/genetics, ErbB Receptors/genetics, Humans, Lung Neoplasms/genetics, Mutation, Proto-Oncogene Proteins p21(ras)/genetics
14.
J Neurosci Methods ; 354: 109102, 2021 04 15.
Article in English | MEDLINE | ID: mdl-33607171

ABSTRACT

BACKGROUND: Quantifying cells in a defined region of biological tissue is critical for many clinical and preclinical studies, especially in the fields of pathology, toxicology, cancer, and behavior. As part of a program to develop accurate, precise, and more efficient automatic approaches for quantifying morphometric changes in biological tissue, we have shown that both deep learning-based and hand-crafted algorithms can estimate the total number of histologically stained cells at their maximal profile of focus in Extended Depth of Field (EDF) images. Deep learning-based approaches show accuracy comparable to manual counts on EDF images, but with significant enhancements in reproducibility, throughput efficiency, and reduced error from human factors. However, most automated counting methods are designed for single-immunostained tissue sections. NEW METHOD: To expand automatic counting to more complex dual-staining protocols, we developed an adaptive method to separate stain color channels on images from tissue sections stained with a primary immunostain and a secondary counterstain. COMPARISON WITH EXISTING METHODS: The proposed method overcomes the limitations of state-of-the-art stain-separation methods, such as requiring a pure stain color basis as a prerequisite or learning the stain color basis on each image. RESULTS: Experimental results are presented for automatic counts using deep learning-based and hand-crafted algorithms on sections immunostained for neurons (NeuN) or microglial cells (Iba-1) with cresyl violet counterstain. CONCLUSION: Our findings show more accurate counts by deep learning methods compared to the hand-crafted method. Thus, stain-separated images can serve as input for automatic deep learning-based quantification methods designed for single-stained tissue sections.
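Stain separation in general can be sketched as optical-density factorization; the example below uses non-negative matrix factorization on synthetic pixels as a generic stand-in, not the paper's adaptive method:

```python
import numpy as np
from sklearn.decomposition import NMF

def separate_stains(rgb, n_stains=2, i0=255.0):
    """Convert RGB pixels to optical density (Beer-Lambert law), then
    factorize OD = concentrations x stain basis with non-negative matrix
    factorization. Generic unsupervised stain separation, for illustration."""
    od = -np.log10(np.clip(rgb, 1, None) / i0)        # pixels x 3
    nmf = NMF(n_components=n_stains, init="nndsvda", random_state=0, max_iter=500)
    concentrations = nmf.fit_transform(od)            # per-pixel stain amounts
    basis = nmf.components_                           # stain color vectors
    return concentrations, basis

# Synthetic two-stain mixture (the basis vectors are invented, not measured).
rng = np.random.default_rng(0)
true_basis = np.array([[0.65, 0.70, 0.29],
                       [0.07, 0.99, 0.11]])
c = rng.uniform(0.1, 1.0, size=(200, 2))
rgb = 255.0 * 10 ** (-(c @ true_basis))
conc, basis = separate_stains(rgb)
```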


Subject(s)
Deep Learning, Algorithms, Coloring Agents, Humans, Computer-Assisted Image Processing, Reproducibility of Results, Staining and Labeling
15.
Phys Rev Lett ; 124(25): 251802, 2020 Jun 26.
Article in English | MEDLINE | ID: mdl-32639773

ABSTRACT

In the conventional misalignment mechanism, the axion field has a constant initial field value in the early Universe and later begins to oscillate. We present an alternative scenario where the axion field has a nonzero initial velocity, allowing an axion decay constant much below the conventional prediction from axion dark matter. This axion velocity can be generated from explicit breaking of the axion shift symmetry in the early Universe, which may occur as this symmetry is approximate.
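The two scenarios can be contrasted through the standard axion equation of motion (a textbook relation, not taken from this abstract):

```latex
\ddot{\theta} + 3H\dot{\theta} + m_a^2(T)\sin\theta = 0
% Conventional misalignment: \dot{\theta}(t_i) = 0, constant initial angle.
% Kinetic misalignment:       \dot{\theta}(t_i) \neq 0, so the field keeps
% rolling and oscillations begin later, which is what permits a decay
% constant below the conventional dark-matter prediction.
```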

16.
Comput Biol Med ; 122: 103882, 2020 07.
Article in English | MEDLINE | ID: mdl-32658721

ABSTRACT

Convolutional Neural Networks (CNNs) have been utilized to distinguish between benign lung nodules and those that will become malignant. The objective of this study was to use an ensemble of CNNs to predict which baseline nodules would be diagnosed as lung cancer in a second follow-up screening after more than one year. Low-dose helical computed tomography images and data were utilized from the National Lung Screening Trial (NLST). The malignant nodules and nodule-positive controls were divided into training and test cohorts. T0 nodules were used to predict lung cancer incidence at T1 or T2. To increase the sample size, image augmentation was performed using rotations, flipping, and elastic deformation. Three CNN architectures were designed for malignancy prediction, and each architecture was trained using seven different seeds to create the initial weights. This enabled variability in the CNN models, which were combined to generate a robust, more accurate ensemble model. Augmenting images using only rotation and flipping and training with images from T0 yielded the best accuracy to predict lung cancer incidence at T2 from a separate test cohort (accuracy = 90.29%; AUC = 0.96) based on an ensemble of 21 models. Images augmented by rotation and flipping enabled effective learning by increasing the relatively small sample size. Ensemble learning with deep neural networks is a compelling approach that accurately predicted lung cancer incidence at the second screening after the baseline screen, mostly 2 years later.
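The rotation-and-flipping augmentation that worked best can be sketched as generating the eight dihedral variants of each patch:

```python
import numpy as np

def dihedral_augment(img):
    """Generate the 8 rotation/flip variants of a 2-D patch: the four
    90-degree rotations, each with and without a horizontal flip."""
    variants = []
    for k in range(4):
        rot = np.rot90(img, k)
        variants.append(rot)
        variants.append(np.fliplr(rot))
    return variants

patch = np.arange(9).reshape(3, 3)   # toy stand-in for a CT nodule patch
augmented = dihedral_augment(patch)  # 8 distinct training samples from one
```

This multiplies a small cohort eightfold without interpolation artifacts, which is one reason rotation/flip augmentation tends to be safer than elastic deformation for medical images.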


Subject(s)
Lung Neoplasms, X-Ray Computed Tomography, Cohort Studies, Humans, Lung, Lung Neoplasms/diagnostic imaging, Neural Networks (Computer)
17.
Tomography ; 6(2): 209-215, 2020 06.
Article in English | MEDLINE | ID: mdl-32548298

ABSTRACT

Noninvasive diagnosis of lung cancer in early stages is one task where radiomics helps. Clinical practice shows that the size of a nodule has high predictive power for malignancy. In the literature, convolutional neural networks (CNNs) have become widely used in medical image analysis. We study the ability of a CNN to capture nodule size in computed tomography images after images are resized for CNN input. For our experiments, we used the National Lung Screening Trial data set. Nodules were labeled into two categories (small/large) based on the original size of the nodule. After all extracted patches were resampled into 100-by-100-pixel images, a CNN was able to successfully classify test nodules into small- and large-size groups with high accuracy. To show the generality of our discovery, we repeated the size classification experiments using the Common Objects in Context (COCO) data set, from which we selected three categories of images: bears, cats, and dogs. For all three categories, a 5 × 2-fold cross-validation was performed to classify them into small and large classes. The average area under the receiver operating characteristic curve is 0.954, 0.952, and 0.979 for the bear, cat, and dog categories, respectively. Thus, camera image rescaling also enables a CNN to discover the size of an object. The source code for the experiments with the COCO data set is publicly available on GitHub (https://github.com/VisionAI-USF/COCO_Size_Decoding/).


Subject(s)
Lung Neoplasms, Multiple Pulmonary Nodules, Animals, Cats, Dogs, Humans, Lung Neoplasms/diagnostic imaging, Multiple Pulmonary Nodules/diagnostic imaging, Neural Networks (Computer), Randomized Controlled Trials as Topic, X-Ray Computed Tomography, Ursidae
18.
Tomography ; 6(2): 250-260, 2020 06.
Article in English | MEDLINE | ID: mdl-32548303

ABSTRACT

Image acquisition parameters for computed tomography scans such as slice thickness and field of view may vary depending on tumor size and site. Recent studies have shown that some radiomics features were dependent on voxel size (= pixel size × slice thickness), and with proper normalization, this voxel size dependency could be reduced. Deep features from a convolutional neural network (CNN) have shown great promise in characterizing cancers. However, how do these deep features vary with changes in imaging acquisition parameters? To analyze the variability of deep features, a physical radiomics phantom with 10 different material cartridges was scanned on 8 different scanners. We assessed scans from 3 different cartridges (rubber, dense cork, and normal cork). Deep features from the penultimate layer of the CNN before (pre-rectified linear unit) and after (post-rectified linear unit) applying the rectified linear unit activation function were extracted from a pre-trained CNN using transfer learning. We studied both the interscanner and intrascanner dependency of deep features and also the deep features' dependency over the 3 cartridges. We found some deep features were dependent on pixel size and that, with appropriate normalization, this dependency could be reduced. False discovery rate was applied for multiple comparisons, to mitigate potentially optimistic results. We also used stable deep features for prognostic analysis on 1 non-small cell lung cancer data set.


Subject(s)
Non-Small-Cell Lung Carcinoma, Lung Neoplasms, X-Ray Computed Tomography, Non-Small-Cell Lung Carcinoma/diagnostic imaging, Humans, Neural Networks (Computer), Imaging Phantoms
19.
J Med Imaging (Bellingham) ; 7(2): 024502, 2020 Mar.
Article in English | MEDLINE | ID: mdl-32280729

ABSTRACT

Purpose: Due to the high incidence and mortality rates of lung cancer worldwide, early detection of a precancerous lesion is essential. Low-dose computed tomography is a commonly used technique for screening, diagnosis, and prognosis of non-small-cell lung cancer. Recently, convolutional neural networks (CNNs) have shown great potential in lung nodule classification. Clinical information (family history, gender, and smoking history) together with nodule size provides information about lung cancer risk: large nodules have greater risk than small nodules. Approach: A subset of cases from the National Lung Screening Trial was chosen as the dataset in our study. We divided the nodules into large and small nodules based on different clinical guideline thresholds and then analyzed the groups individually. Similarly, we also analyzed clinical features by dividing them into groups. CNNs were designed and trained over each of these groups individually. To our knowledge, this is the first study to incorporate nodule size and clinical features for classification using CNNs. We further made a hybrid model using an ensemble of the CNN models for clinical and size information to enhance malignancy prediction. Results: From our study, we obtained an AUC of 0.9 and an accuracy of 83.12%, which was a significant improvement over our previous best results. Conclusions: In conclusion, we found that dividing the nodules by size and clinical information for building predictive models resulted in improved malignancy predictions. Our analysis also showed that appropriately integrating clinical information and size groups could further improve risk prediction.

20.
Radiol Artif Intell ; 2(6): e190218, 2020 Nov.
Article in English | MEDLINE | ID: mdl-33937845

ABSTRACT

PURPOSE: To determine if quantitative features extracted from pretherapy fluorine 18 fluorodeoxyglucose (18F-FDG) PET/CT estimate prognosis in patients with locally advanced cervical cancer treated with chemoradiotherapy. MATERIALS AND METHODS: In this retrospective study, PET/CT images and outcomes were curated from 154 patients with locally advanced cervical cancer who underwent chemoradiotherapy at two institutions between March 2008 and June 2016, separated into independent training (n = 78; mean age, 51 years ± 13 [standard deviation]) and testing (n = 76; mean age, 50 years ± 10) cohorts. Radiomic features were extracted from PET, CT, and habitat (subregions with different metabolic characteristics) images that were derived by fusing PET and CT images. Parsimonious sets of these features were identified by least absolute shrinkage and selection operator (LASSO) analysis and used to generate predictive radiomics signatures for progression-free survival (PFS) and overall survival (OS) estimation. Prognostic validation of the radiomic signatures as independent prognostic markers was performed using multivariable Cox regression, which was expressed as nomograms together with other clinical risk factors. RESULTS: The radiomics nomograms constructed with T stage, lymph node status, and radiomics signatures resulted in significantly better performance for the estimation of PFS (Harrell concordance index [C-index], 0.85 for training and 0.82 for test) and OS (C-index, 0.86 for training and 0.82 for test) compared with the International Federation of Gynecology and Obstetrics staging system (C-index for PFS, 0.70 for training [P = .001] and 0.70 for test [P = .002]; C-index for OS, 0.73 for training [P < .001] and 0.70 for test [P < .001]), respectively. CONCLUSION: Prognostic models were generated and validated from quantitative analysis of 18F-FDG PET/CT habitat images and clinical data, and may have the potential to identify patients who need more aggressive treatment in clinical practice, pending further validation with larger prospective cohorts. Supplemental material is available for this article. © RSNA, 2020.
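The LASSO selection step can be sketched as follows; the Cox survival model is replaced here by a plain linear risk target (scikit-learn has no Cox regression), and all features are synthetic:

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Invented feature table: 150 patients, 30 candidate radiomic features,
# of which only features 0 and 3 truly drive the (surrogate) risk score.
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 30))
risk = 2 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, size=150)

# LASSO with cross-validated penalty shrinks irrelevant coefficients
# toward zero, leaving a parsimonious "signature" of features.
Xs = StandardScaler().fit_transform(X)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, risk)
signature = np.flatnonzero(np.abs(lasso.coef_) > 0.05)
```

In the paper, the selected signature then enters a multivariable Cox model alongside clinical factors such as T stage and lymph node status.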
