Results 1 - 6 of 6
1.
Radiology; 307(5): e221843, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37338353

ABSTRACT

Background: Handcrafted radiomics and deep learning (DL) models individually achieve good performance in lesion classification (benign vs malignant) on contrast-enhanced mammography (CEM) images.
Purpose: To develop a comprehensive machine learning tool able to fully automatically identify, segment, and classify breast lesions on the basis of CEM images in recall patients.
Materials and Methods: CEM images and clinical data were retrospectively collected between 2013 and 2018 for 1601 recall patients at Maastricht UMC+ and 283 patients at Gustave Roussy Institute for external validation. Lesions with a known status (malignant or benign) were delineated by a research assistant overseen by an expert breast radiologist. Preprocessed low-energy and recombined images were used to train a DL model for automatic lesion identification, segmentation, and classification. A handcrafted radiomics model was also trained to classify both human- and DL-segmented lesions. Sensitivity for identification and the area under the receiver operating characteristic curve (AUC) for classification were compared between individual and combined models at the image and patient levels.
Results: After the exclusion of patients without suspicious lesions, the numbers of patients included in the training, test, and validation data sets were 850 (mean age, 63 years ± 8 [SD]), 212 (62 years ± 8), and 279 (55 years ± 12), respectively. In the external data set, lesion identification sensitivity was 90% and 99% at the image and patient level, respectively, and the mean Dice coefficient was 0.71 and 0.80 at the image and patient level, respectively. Using manual segmentations, the combined DL and handcrafted radiomics classification model achieved the highest AUC (0.88 [95% CI: 0.86, 0.91]) (P < .05 for all comparisons except against the model combining DL, handcrafted radiomics, and clinical features, where P = .90). Using DL-generated segmentations, the combined DL and handcrafted radiomics model showed the highest AUC (0.95 [95% CI: 0.94, 0.96]) (P < .05).
Conclusion: The DL model accurately identified and delineated suspicious lesions on CEM images, and the combined output of the DL and handcrafted radiomics models achieved good diagnostic performance. © RSNA, 2023. Supplemental material is available for this article. See also the editorial by Bahl and Do in this issue.
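As an illustration of the late-fusion idea described in this abstract (not code from the article), the Python sketch below combines a DL lesion score and a handcrafted radiomics score with a simple logistic-regression combiner and compares AUCs; the synthetic scores and the choice of combiner are assumptions.

```python
# Illustrative late fusion of a deep learning (DL) classifier score and a
# handcrafted radiomics classifier score for lesion classification.
# The stacking step and the synthetic data are assumptions, not the paper's code.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
y = rng.integers(0, 2, n)                                   # benign (0) vs malignant (1)
p_dl = np.clip(y * 0.6 + rng.normal(0.3, 0.2, n), 0, 1)     # mock DL scores
p_rad = np.clip(y * 0.5 + rng.normal(0.35, 0.2, n), 0, 1)   # mock radiomics scores

X = np.column_stack([p_dl, p_rad])                          # per-lesion model outputs
fusion = LogisticRegression().fit(X, y)                     # simple late-fusion combiner
p_combined = fusion.predict_proba(X)[:, 1]

print("DL AUC:       ", roc_auc_score(y, p_dl))
print("Radiomics AUC:", roc_auc_score(y, p_rad))
print("Combined AUC: ", roc_auc_score(y, p_combined))
```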


Subject(s)
Deep Learning, Humans, Middle Aged, Retrospective Studies, Mammography/methods, Breast/diagnostic imaging, ROC Curve
2.
J Pathol Inform; 14: 100192, 2023.
Article in English | MEDLINE | ID: mdl-36818020

ABSTRACT

Treatment of patients with oesophageal and gastric cancer (OeGC) is guided by disease stage, patient performance status and preferences. Lymph node (LN) status is one of the strongest prognostic factors for OeGC patients. However, survival varies between patients with the same disease stage and LN status. We recently showed that LN size from patients with OeGC might also have prognostic value, thus making delineations of LNs essential for size estimation and the extraction of other imaging biomarkers. We hypothesized that a machine learning workflow is able to: (1) find digital H&E-stained slides containing LNs, (2) create a scoring system providing degrees of certainty for the results, and (3) delineate LNs in those images. To train and validate the pipeline, we used 1695 H&E slides from the OE02 trial. The dataset was divided into training (80%) and validation (20%) sets. The model was tested on an external dataset of 826 H&E slides from the OE05 trial. A U-Net architecture was used to generate prediction maps from which predefined features were extracted. These features were subsequently used to train an XGBoost model to determine whether a region truly contained a LN. With our innovative method, the balanced accuracy of LN detection was 0.93 on the validation dataset and 0.83 on the test dataset, compared with 0.81 on both datasets when using the standard method of thresholding U-Net predictions to arrive at a binary mask. Our method allowed for the creation of an "uncertain" category and partly limited false-positive predictions on the external dataset. The mean Dice score was 0.73 (0.60) per image and 0.66 (0.48) per LN for the validation (test) datasets. Our pipeline detects images with LNs more accurately than conventional methods, and high-throughput delineation of LNs can facilitate future LN content analyses of large datasets.
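The Python sketch below illustrates the general pattern described above: connected regions are extracted from a U-Net probability map, a few per-region features feed an XGBoost classifier, and its scores are binned into lymph node / uncertain / background. The feature set, the thresholds, and the synthetic data are assumptions, not the published pipeline.

```python
# Illustrative sketch: turn a U-Net probability map into per-region features and
# classify each candidate region as lymph node / uncertain / background.
import numpy as np
from scipy import ndimage
from xgboost import XGBClassifier

def region_features(prob_map, threshold=0.5):
    """Connected components of the thresholded map -> simple per-region features."""
    labels, n_regions = ndimage.label(prob_map > threshold)
    feats = []
    for region_id in range(1, n_regions + 1):
        region = labels == region_id
        feats.append([int(region.sum()),                # area in pixels
                      float(prob_map[region].mean()),   # mean U-Net probability
                      float(prob_map[region].max())])   # peak probability
    return np.array(feats), labels

# Mock training data: rows of [area, mean_prob, max_prob] with LN / not-LN labels.
rng = np.random.default_rng(1)
X_train = rng.random((300, 3)) * np.array([5000.0, 1.0, 1.0])
y_train = (X_train[:, 1] > 0.5).astype(int)
clf = XGBClassifier(n_estimators=100, max_depth=3).fit(X_train, y_train)

# Apply to one mock (smoothed-noise) prediction map and bin region scores into
# three certainty classes, mirroring the "uncertain" category described above.
prob_map = ndimage.gaussian_filter(rng.random((256, 256)), sigma=8)
X_regions, _ = region_features(prob_map)
if len(X_regions):
    p_ln = clf.predict_proba(X_regions)[:, 1]
    decisions = ["lymph node" if p >= 0.8 else "background" if p <= 0.2 else "uncertain"
                 for p in p_ln]
    print(decisions)
```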

3.
Front Oncol; 12: 920393, 2022.
Article in English | MEDLINE | ID: mdl-35912214

ABSTRACT

Introduction: There is a cumulative risk of 20-40% of developing brain metastases (BM) in solid cancers. Stereotactic radiotherapy (SRT) enables the application of high focal doses of radiation to a volume and is often used for BM treatment. However, SRT can cause adverse radiation effects (ARE), such as radiation necrosis, which sometimes cause irreversible damage to the brain. It is therefore of clinical interest to identify patients at a high risk of developing ARE. We hypothesized that models trained with radiomics features, deep learning (DL) features, and patient characteristics, or their combination, can predict ARE risk in patients with BM before SRT.
Methods: Gadolinium-enhanced T1-weighted MRIs and patient characteristics were collected from patients treated with SRT for BM for a training and testing cohort (N = 1,404) and a validation cohort (N = 237) from a separate institute. From each lesion in the training set, radiomics features were extracted and used to train an extreme gradient boosting (XGBoost) model. A DL model was trained on the same cohort to make a separate prediction and to extract the last layer of features. Separate XGBoost models were then built using only radiomics features, only DL features, only patient characteristics, or combinations of these. Evaluation was performed using the area under the receiver operating characteristic curve (AUC) on the external dataset. Predictions were investigated both for individual lesions and per patient developing ARE.
Results: The best-performing XGBoost model on a lesion level was trained on a combination of radiomics features and DL features (AUC of 0.71 and recall of 0.80). On a patient level, a combination of radiomics features, DL features, and patient characteristics obtained the best performance (AUC of 0.72 and recall of 0.84). The DL model achieved an AUC of 0.64 and recall of 0.85 per lesion and an AUC of 0.70 and recall of 0.60 per patient.
Conclusion: Machine learning models built on radiomics features and DL features extracted from BM, combined with patient characteristics, show potential to predict ARE at the patient and lesion levels. These models could be used in clinical decision making, informing patients of their risk of ARE and allowing physicians to opt for different therapies.
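A minimal Python sketch of the combined-model idea follows, on synthetic data: radiomics, DL, and clinical feature blocks are concatenated per lesion, an XGBoost classifier is trained, and lesion scores are aggregated to the patient level by taking each patient's maximum score. The feature dimensions and the max-aggregation rule are illustrative assumptions, not the authors' exact setup.

```python
# Sketch: concatenate radiomics, DL, and clinical features per lesion, train a
# gradient-boosted classifier, and aggregate lesion risk to the patient level.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier
from sklearn.metrics import roc_auc_score, recall_score

rng = np.random.default_rng(2)
n_lesions = 500
patient_id = rng.integers(0, 150, n_lesions)
X = np.hstack([rng.normal(size=(n_lesions, 50)),   # handcrafted radiomics features (mock)
               rng.normal(size=(n_lesions, 64)),   # DL features from the last layer (mock)
               rng.normal(size=(n_lesions, 5))])   # patient characteristics (mock)
y = rng.integers(0, 2, n_lesions)                  # ARE observed for the lesion (mock)

clf = XGBClassifier(n_estimators=200, max_depth=4).fit(X, y)
p_lesion = clf.predict_proba(X)[:, 1]

# Patient-level score: maximum lesion-level risk per patient (one possible rule).
df = pd.DataFrame({"patient": patient_id, "p": p_lesion, "y": y})
per_patient = df.groupby("patient").agg(p=("p", "max"), y=("y", "max"))

print("Lesion-level AUC:    ", roc_auc_score(y, p_lesion))
print("Patient-level AUC:   ", roc_auc_score(per_patient["y"], per_patient["p"]))
print("Patient-level recall:", recall_score(per_patient["y"], (per_patient["p"] >= 0.5).astype(int)))
```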

4.
Nat Commun; 13(1): 3423, 2022 Jun 14.
Article in English | MEDLINE | ID: mdl-35701415

ABSTRACT

Detection and segmentation of abnormalities on medical images is highly important for patient management, including diagnosis, radiotherapy, and response evaluation, as well as for quantitative image research. We present a fully automated pipeline for the detection and volumetric segmentation of non-small cell lung cancer (NSCLC), developed and validated on 1328 thoracic CT scans from 8 institutions. Along with quantitative performance detailed by image slice thickness, tumor size, image interpretation difficulty, and tumor location, we report an in-silico prospective clinical trial, in which we show that the proposed method is faster and more reproducible than the experts. Moreover, we demonstrate that, on average, radiologists and radiation oncologists preferred the automatic segmentations in 56% of the cases. Additionally, we evaluate the prognostic power of the automatic contours by applying RECIST criteria and measuring the tumor volumes. Segmentations produced by our method stratified patients into low and high survival groups with higher significance than those based on manual contours.
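To illustrate the volume-based prognostic analysis mentioned above (not the authors' code), the sketch below computes tumor volume from a binary segmentation mask and voxel spacing, splits synthetic patients at the median volume, and compares survival with a log-rank test from the lifelines package; the median split and the survival data are assumptions.

```python
# Sketch: tumor volume from a binary CT segmentation mask plus voxel spacing,
# followed by a median-volume split and a log-rank survival comparison.
import numpy as np
from lifelines.statistics import logrank_test

def tumor_volume_ml(mask, spacing_mm):
    """Volume of a binary mask in millilitres given voxel spacing (z, y, x) in mm."""
    voxel_volume_mm3 = float(np.prod(spacing_mm))
    return mask.sum() * voxel_volume_mm3 / 1000.0

# Example volume for a synthetic cubic "tumor" on a 3 x 1 x 1 mm voxel grid.
example_mask = np.zeros((64, 64, 64), dtype=bool)
example_mask[20:40, 20:40, 20:40] = True
print("Example volume (ml):", tumor_volume_ml(example_mask, (3.0, 1.0, 1.0)))

# Mock cohort: larger tumors get shorter synthetic survival on average.
rng = np.random.default_rng(3)
n_patients = 120
volumes = rng.lognormal(mean=2.5, sigma=0.8, size=n_patients)       # ml
survival_months = rng.exponential(scale=30 / (1 + volumes / 20))
event = rng.integers(0, 2, n_patients)                              # 1 = death observed

high = volumes > np.median(volumes)
result = logrank_test(survival_months[high], survival_months[~high],
                      event_observed_A=event[high], event_observed_B=event[~high])
print("Log-rank p-value (high- vs low-volume group):", result.p_value)
```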


Subject(s)
Non-Small-Cell Lung Carcinoma, Lung Neoplasms, Algorithms, Non-Small-Cell Lung Carcinoma/diagnostic imaging, Humans, Lung Neoplasms/diagnostic imaging, Prospective Studies, X-Ray Computed Tomography/methods
5.
Comput Biol Med; 138: 104918, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34638018

ABSTRACT

BACKGROUND: Barrett's esophagus (BE) is a precursor lesion of esophageal adenocarcinoma and may progress from non-dysplastic through low-grade dysplasia (LGD) to high-grade dysplasia (HGD) and cancer. Grading BE is of crucial prognostic value and is currently based on the subjective evaluation of biopsies. This study aims to investigate the potential of machine learning (ML) using spatially resolved molecular data from mass spectrometry imaging (MSI) and histological data from microscopic hematoxylin and eosin (H&E)-stained imaging for computer-aided diagnosis and prognosis of BE.
METHODS: Biopsies from 57 patients were considered, divided into non-dysplastic (n = 15), LGD non-progressive (n = 14), LGD progressive (n = 14), and HGD (n = 14). MSI experiments were conducted at 50 × 50 µm spatial resolution per pixel, corresponding to a tile size of 96 × 96 pixels in the co-registered H&E images, for a total of 144,823 tiles in the whole dataset.
RESULTS: ML models were trained to distinguish epithelial tissue from stroma (area-under-the-curve (AUC) values of 0.89 for MSI and 0.95 for H&E) and dysplastic grade (AUC of 0.97 for MSI and 0.85 for H&E) on a tile level, and low-grade progressors from non-progressors on a patient level (accuracies of 0.72 for MSI and 0.48 for H&E).
CONCLUSIONS: In summary, while the H&E-based classifier was best at distinguishing tissue types, the MSI-based model was more accurate at distinguishing dysplastic grades and patients at progression risk, which demonstrates the complementarity of both approaches. Data are available via ProteomeXchange with identifier PXD028949.
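The tile-to-patient aggregation underlying the patient-level results can be sketched as follows, assuming synthetic tile features, a logistic-regression tile classifier, and mean-probability pooling per patient; none of these choices are taken from the article.

```python
# Sketch of tile-level classification with patient-level aggregation.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n_tiles, n_features = 2000, 30
patient = rng.integers(0, 28, n_tiles)            # mock assignment of tiles to 28 LGD patients
patient_label = rng.integers(0, 2, 28)            # progressor (1) vs non-progressor (0), mock
y_tile = patient_label[patient]                   # each tile inherits its patient's label
X_tile = rng.normal(size=(n_tiles, n_features)) + y_tile[:, None] * 0.3

clf = LogisticRegression(max_iter=1000).fit(X_tile, y_tile)
p_tile = clf.predict_proba(X_tile)[:, 1]

# Aggregate tile probabilities to one score per patient (here: the mean).
df = pd.DataFrame({"patient": patient, "p": p_tile})
p_patient = df.groupby("patient")["p"].mean()
pred_patient = (p_patient >= 0.5).astype(int)
print("Patient-level accuracy:", accuracy_score(patient_label[p_patient.index], pred_patient))
```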


Subject(s)
Barrett Esophagus, Esophageal Neoplasms, Precancerous Lesions, Barrett Esophagus/diagnostic imaging, Disease Progression, Esophageal Neoplasms/diagnostic imaging, Humans, Machine Learning, Mass Spectrometry
6.
Br J Radiol; 93(1108): 20190948, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32101448

ABSTRACT

Historically, medical imaging has been a qualitative or semi-quantitative modality: it is difficult to quantify what can be seen in an image and to turn it into valuable predictive outcomes. As a result of advances in both computational hardware and machine learning algorithms, computers are making great strides in obtaining quantitative information from imaging and correlating it with outcomes. Radiomics, in its two forms, handcrafted and deep, is an emerging field that translates medical images into quantitative data to yield biological information and enable radiologic phenotypic profiling for diagnosis, theragnosis, decision support, and monitoring. Handcrafted radiomics is a multistage process in which features based on shape, pixel intensities, and texture are extracted from radiographs. Within this review, we describe the steps of this workflow: how quantitative imaging data can be extracted, how they can be correlated with clinical and biological outcomes, and how the resulting models can be used to make predictions, such as survival, or for detection and classification in diagnostics. The application of deep learning, the second arm of radiomics, and its place in the radiomics workflow are discussed, along with its advantages and disadvantages. To better illustrate the technologies being used, we provide real-world clinical applications of radiomics in oncology, showcasing research on the applications of radiomics as well as covering its limitations and future direction.
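As a concrete illustration of the handcrafted-radiomics stage described here, the sketch below computes a few deliberately simplified first-order intensity and shape features from an image and a lesion mask; real workflows rely on standardized libraries (e.g., PyRadiomics) with far richer feature sets, so the definitions below are illustrative only.

```python
# Minimal sketch of handcrafted radiomics feature extraction: first-order
# intensity features and simple shape descriptors from an image and its mask.
import numpy as np

def first_order_features(image, mask):
    """A few first-order intensity features inside the lesion mask."""
    voxels = image[mask > 0].astype(float)
    hist, _ = np.histogram(voxels, bins=32)
    p = hist / max(hist.sum(), 1)
    return {
        "mean": float(voxels.mean()),
        "std": float(voxels.std()),
        "skewness": float(((voxels - voxels.mean()) ** 3).mean() / (voxels.std() ** 3 + 1e-9)),
        "entropy": float(-np.sum(p * np.log2(p + 1e-12))),
    }

def shape_features(mask):
    """Simple 2D shape descriptors, assuming unit pixel spacing."""
    area = float(mask.sum())
    mask_int = mask.astype(int)
    # Rough perimeter: count transitions between mask and background along each axis.
    perimeter = float(np.abs(np.diff(mask_int, axis=0)).sum() +
                      np.abs(np.diff(mask_int, axis=1)).sum())
    compactness = 4 * np.pi * area / (perimeter ** 2 + 1e-9)
    return {"area": area, "perimeter_approx": perimeter, "compactness": compactness}

# Example on a synthetic image with a circular "lesion".
yy, xx = np.mgrid[0:128, 0:128]
mask = ((yy - 64) ** 2 + (xx - 64) ** 2) < 20 ** 2
image = np.random.default_rng(5).normal(100, 10, (128, 128)) + 40 * mask
print(first_order_features(image, mask))
print(shape_features(mask))
```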


Subject(s)
Deep Learning/trends, Diagnostic Imaging/trends, Computer-Assisted Image Processing/trends, Radiologic Technology/trends, Brain Neoplasms/diagnostic imaging, Breast Neoplasms/diagnostic imaging, Diagnostic Imaging/methods, Female, Forecasting, Humans, Computer-Assisted Image Processing/methods, Lung Neoplasms/diagnostic imaging, Male, Radiography/methods, Radiologic Technology/methods, Workflow