Results 1 - 20 of 26
1.
Radiology; 306(2): e220505, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36165796

ABSTRACT

Background Although deep learning (DL) models have demonstrated expert-level ability for pediatric bone age prediction, they have shown poor generalizability and bias in other use cases. Purpose To quantify generalizability and bias in a bone age DL model, measured by performance on external versus internal test sets and by performance differences between demographic groups, respectively. Materials and Methods The winning DL model of the 2017 RSNA Pediatric Bone Age Challenge, trained on 12 611 pediatric hand radiographs from two U.S. hospitals, was retrospectively evaluated. The DL model was tested from September 2021 to December 2021 on an internal validation set and an external test set of pediatric hand radiographs with diverse demographic representation. Images with a reported ground-truth bone age were included in the study. Mean absolute difference (MAD) between ground-truth bone age and model-predicted bone age was calculated for each set. Generalizability was evaluated by comparing MAD between the internal and external evaluation sets with use of t tests. Bias was evaluated by comparing MAD and the clinically significant error rate (rate of errors changing the clinical diagnosis) between demographic groups with use of t tests or analysis of variance and χ2 tests, respectively (statistically significant difference defined as P < .05). Results The internal validation set had images from 1425 individuals (773 boys), and the external test set had images from 1202 individuals (mean age, 133 months ± 60 [SD]; 614 boys). The bone age model generalized well to the external test set, with no difference in MAD (6.8 months in the validation set vs 6.9 months in the external set; P = .64). Model predictions would have led to clinically significant errors in 194 of 1202 images (16%) in the external test set. The MAD was greater for girls than boys in the internal validation set (P = .01) and in the subcategories of age and Tanner stage in the external test set (P < .001 for both). Conclusion A DL bone age model generalized well to an external test set, although clinically significant sex-, age-, and sexual maturity-based biases in DL bone age prediction were identified. © RSNA, 2022. Online supplemental material is available for this article. See also the editorial by Larson in this issue.
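
As a rough illustration of the MAD and bias analysis described above (a minimal sketch, not the study's code), the snippet below assumes arrays of ground-truth and model-predicted bone ages with a per-image sex label; the data are simulated, and the 12-month cutoff for a clinically significant error is a stand-in for the study's diagnosis-changing definition.

```python
import numpy as np
from scipy import stats

# Hypothetical ground-truth and predicted bone ages (months) with a per-image sex label;
# values are simulated purely so the snippet runs end to end.
rng = np.random.default_rng(0)
truth = rng.uniform(12, 216, size=200)
pred = truth + rng.normal(0, 8, size=200)            # simulated model error
sex = rng.choice(["girl", "boy"], size=200)

abs_err = np.abs(pred - truth)                        # per-image absolute difference
print(f"MAD: {abs_err.mean():.1f} months")

# Bias check: compare MAD between girls and boys with a two-sample t test.
t, p = stats.ttest_ind(abs_err[sex == "girl"], abs_err[sex == "boy"], equal_var=False)
print(f"girls vs boys MAD: t = {t:.2f}, P = {p:.3f}")

# Clinically significant error rate compared between groups with a chi-square test;
# the 12-month cutoff is a stand-in, not the study's diagnosis-changing definition.
cse = abs_err > 12
table = [[(cse & (sex == g)).sum(), (~cse & (sex == g)).sum()] for g in ("girl", "boy")]
chi2, p_chi, _, _ = stats.chi2_contingency(table)
print(f"CSE rate comparison: chi2 = {chi2:.2f}, P = {p_chi:.3f}")
```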


Subject(s)
Deep Learning; Male; Female; Humans; Child; Infant; Retrospective Studies; Radiography
3.
Muscle Nerve; 64(2): 172-179, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33961310

ABSTRACT

INTRODUCTION/AIMS: In this study we report the results of a phase Ib/IIa, open-label, multiple ascending-dose trial of domagrozumab, a myostatin inhibitor, in patients with fukutin-related protein (FKRP)-associated limb-girdle muscular dystrophy. METHODS: Nineteen patients were enrolled and assigned to one of three dosing arms (5, 20, or 40 mg/kg every 4 weeks). After 32 weeks of treatment, participants receiving the lowest dose were switched to the highest dose (40 mg/kg) for an additional 32 weeks. An extension study was also conducted. The primary endpoints were safety and tolerability. Secondary endpoints included muscle strength, timed function testing, pulmonary function, lean body mass, pharmacokinetics, and pharmacodynamics. As an exploratory outcome, muscle fat fractions were derived from whole-body magnetic resonance images. RESULTS: Serum concentrations of domagrozumab increased in a dose-dependent manner and modest levels of myostatin inhibition were observed in both serum and muscle tissue. The most frequently occurring adverse events were injuries secondary to falls. There were no significant between-group differences in the strength, functional, or imaging outcomes studied. DISCUSSION: We conclude that, although domagrozumab was safe in patients with limb-girdle muscular dystrophy type 2I/R9, there was no clear evidence supporting its efficacy in improving muscle strength or function.


Subject(s)
Antibodies, Monoclonal, Humanized/therapeutic use; Muscle Strength/drug effects; Muscular Dystrophies, Limb-Girdle/drug therapy; Adult; Body Composition/drug effects; Female; Humans; Male; Middle Aged; Muscle, Skeletal/drug effects; Muscle, Skeletal/physiopathology; Muscular Dystrophies, Limb-Girdle/physiopathology; Pentosyltransferases/metabolism; Young Adult
4.
Breast Cancer Res Treat; 180(2): 407-421, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32020435

ABSTRACT

BACKGROUND AND PURPOSE: Multiparametric radiological imaging is vital for detection, characterization, and diagnosis of many different diseases. Radiomics provides quantitative metrics from radiological imaging that may reflect the biological characteristics of the underlying tissue. However, current methods are limited to regions of interest extracted from a single imaging parameter or modality, which limits the amount of information available within the data. This limitation can directly affect the integration and applicable scope of radiomics in different clinical settings, since single-image radiomics cannot capture the true underlying tissue characteristics in the multiparametric radiological imaging space. To that end, we developed a multiparametric imaging radiomic (mpRad) framework for extraction of first- and second-order radiomic features from multiparametric radiological datasets. METHODS: We developed five different radiomic techniques that extract different aspects of the inter-voxel and inter-parametric relationships within high-dimensional multiparametric breast MRI datasets. Our patient cohort consisted of 138 patients with breast lesions, of whom 97 had malignant lesions and 41 had benign lesions. Sensitivity, specificity, and receiver operating characteristic (ROC) area under the curve (AUC) analyses were performed to assess the diagnostic performance of the mpRad parameters. Statistical significance was set at p < 0.05. RESULTS: The mpRad features classified malignant from benign breast lesions with sensitivity and specificity of 82.5% and 80.5%, respectively, and an AUC of 0.87 (0.81-0.93). mpRad provided a 9%-28% increase in AUC over single radiomic parameters. CONCLUSIONS: We introduced the mpRad framework, which extends radiomic analysis from single images to multiparametric datasets for better characterization of the underlying tissue biology.
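
A minimal sketch of the kind of ROC/AUC and operating-point analysis described above, using simulated lesion labels and scores rather than actual mpRad features; the score model and the Youden-style threshold choice are assumptions for illustration only.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical lesion labels (1 = malignant, 0 = benign) and classifier scores standing in
# for an mpRad-based model; only the cohort sizes mirror the abstract, not the real data.
rng = np.random.default_rng(1)
y_true = np.concatenate([np.ones(97), np.zeros(41)])
scores = np.clip(0.3 * y_true + rng.normal(0.5, 0.2, size=y_true.size), 0, 1)

auc = roc_auc_score(y_true, scores)
fpr, tpr, thresholds = roc_curve(y_true, scores)

# Choose the operating point that maximizes (sensitivity - false-positive rate).
best = int(np.argmax(tpr - fpr))
sensitivity, specificity = tpr[best], 1 - fpr[best]
print(f"AUC = {auc:.2f}, sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```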


Subject(s)
Breast Neoplasms/pathology; Breast/pathology; Image Interpretation, Computer-Assisted/methods; Machine Learning; Multiparametric Magnetic Resonance Imaging/methods; Radiology/standards; Adult; Aged; Aged, 80 and over; Breast/diagnostic imaging; Breast Neoplasms/diagnostic imaging; Female; Humans; Middle Aged; ROC Curve; Retrospective Studies; Young Adult
5.
BMC Neurol; 20(1): 196, 2020 May 19.
Article in English | MEDLINE | ID: mdl-32429923

ABSTRACT

BACKGROUND: Pathogenic variants in the FKRP gene cause impaired glycosylation of α-dystroglycan in muscle, producing a limb-girdle muscular dystrophy with cardiomyopathy. Despite advances in understanding the pathophysiology of FKRP-associated myopathies, clinical research in the limb-girdle muscular dystrophies has been limited by the lack of normative biomarker data to gauge disease progression. METHODS: Participants in a phase 2 clinical trial were assessed over a 4-month, untreated lead-in period to evaluate repeatability and to obtain normative data for timed function tests, strength tests, pulmonary function, and body composition using DEXA and whole-body MRI. Novel deep learning algorithms were used to analyze MRI scans and quantify muscle, fat, and intramuscular fat infiltration in the thighs. t tests and signed rank tests were used to assess changes in these outcome measures. RESULTS: Nineteen participants were observed during the lead-in period for this trial. No significant changes were noted in the strength, pulmonary function, or body composition outcome measures over the 4-month observation period. One timed function measure, the 4-stair climb, showed a statistically significant difference over the observation period. Quantitative estimates of muscle, fat, and intramuscular fat infiltration from whole-body MRI corresponded significantly with DEXA estimates of body composition, strength, and timed function measures. CONCLUSIONS: We describe normative data and repeatability performance for multiple physical function measures in an adult FKRP muscular dystrophy population. Our analysis indicates that deep learning algorithms can be used to quantify healthy and dystrophic muscle seen on whole-body imaging. TRIAL REGISTRATION: This study was retrospectively registered in ClinicalTrials.gov (NCT02841267) on July 22, 2016, and data supporting this study have been submitted to this registry.


Subject(s)
Muscular Dystrophies, Limb-Girdle/physiopathology; Pentosyltransferases/genetics; Adult; Aged; Dystroglycans/metabolism; Female; Glycosylation; Humans; Magnetic Resonance Imaging; Male; Middle Aged; Muscle, Skeletal/pathology; Muscular Dystrophies, Limb-Girdle/genetics; Outcome Assessment, Health Care; Young Adult
6.
AJR Am J Roentgenol; 222(4): e2330573, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38230901

ABSTRACT

GPT-4 outperformed a radiology domain-specific natural language processing model in classifying imaging findings from chest radiograph reports, both with and without predefined labels. Prompt engineering for context further improved performance. The findings indicate a role for large language models in accelerating artificial intelligence model development in radiology by automating data annotation.
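
A hedged sketch of what report-classification prompting with and without predefined labels could look like; the label list, report text, and prompt wording are hypothetical and are not the prompts used in the study.

```python
# Hypothetical report text and label set; the prompt structure is illustrative only.
LABELS = ["pneumonia", "pleural effusion", "pneumothorax", "cardiomegaly", "no acute finding"]

def build_classification_prompt(report_text, labels=None):
    """Compose an annotation prompt for a chest radiograph report, with or without predefined labels."""
    context = ("You are annotating chest radiograph reports to build a labeled dataset "
               "for training an imaging AI model.")          # extra context = simple prompt engineering
    if labels:
        task = f"Assign every applicable label from this list: {', '.join(labels)}."
    else:
        task = "List every imaging finding mentioned in the report."
    return f"{context}\n\n{task}\n\nReport:\n{report_text}\n\nAnswer as a comma-separated list."

prompt = build_classification_prompt(
    "Patchy right lower lobe opacity compatible with pneumonia. No effusion or pneumothorax.",
    LABELS,
)
print(prompt)
# The prompt would then be sent to a large language model (such as GPT-4) through its API,
# and the returned labels stored as annotations for downstream model development.
```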


Subject(s)
Natural Language Processing; Radiography, Thoracic; Humans; Radiography, Thoracic/methods; Radiology Information Systems
9.
J Imaging Inform Med; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937343

ABSTRACT

As the adoption of artificial intelligence (AI) systems in radiology grows, the increase in demand for greater bandwidth and computational resources can lead to greater infrastructural costs for healthcare providers and AI vendors. To that end, we developed ISLE, an intelligent streaming framework to address inefficiencies in current imaging infrastructures. Our framework draws inspiration from video-on-demand platforms to intelligently stream medical images to AI vendors at an optimal resolution for inference from a single high-resolution copy using progressive encoding. We hypothesize that ISLE can dramatically reduce the bandwidth and computational requirements for AI inference, while increasing throughput (i.e., the number of scans processed by the AI system per second). We evaluate our framework by streaming chest X-rays for classification and abdominal CT scans for liver and spleen segmentation and comparing them with the original versions of each dataset. For classification, our results show that ISLE reduced data transmission and decoding time by at least 92% and 88%, respectively, while increasing throughput by more than 3.72×. For both segmentation tasks, ISLE reduced data transmission and decoding time by at least 82% and 88%, respectively, while increasing throughput by more than 2.9×. In all three tasks, the ISLE-streamed data had no impact on the AI system's diagnostic performance (all P > 0.05). Therefore, our results indicate that our framework can address inefficiencies in current imaging infrastructures by improving the data and computational efficiency of AI deployments in the clinical environment without affecting AI-based clinical decision-making.
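
The snippet below is a conceptual stand-in for the progressive-encoding idea described above: it builds a simple multi-resolution pyramid and serves the smallest level that still meets a model's input size, then reports the bandwidth saving. ISLE's actual codec and streaming logic are not reproduced here; the array, pyramid depth, and input size are assumptions.

```python
import numpy as np

def build_pyramid(image, levels=4):
    """Multi-resolution pyramid via repeated 2x downsampling (a stand-in for progressive encoding)."""
    pyramid = [image]
    for _ in range(levels - 1):
        pyramid.append(pyramid[-1][::2, ::2].copy())
    return pyramid

def level_for_inference(pyramid, model_input):
    """Return the smallest pyramid level whose shorter side still meets the model's input size."""
    for level in reversed(pyramid):            # smallest level first
        if min(level.shape) >= model_input:
            return level
    return pyramid[0]

full = np.zeros((2048, 2048), dtype=np.uint8)  # hypothetical full-resolution chest radiograph
served = level_for_inference(build_pyramid(full), model_input=224)
saving = 1 - served.nbytes / full.nbytes
print(f"served {served.shape} instead of {full.shape}: {saving:.0%} fewer bytes transmitted")
```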

10.
Radiol Artif Intell; 6(3): e230240, 2024 May.
Article in English | MEDLINE | ID: mdl-38477660

ABSTRACT

Purpose To evaluate the robustness of an award-winning bone age deep learning (DL) model to extensive variations in image appearance. Materials and Methods In December 2021, the DL bone age model that won the 2017 RSNA Pediatric Bone Age Challenge was retrospectively evaluated using the RSNA validation set (1425 pediatric hand radiographs; internal test set in this study) and the Digital Hand Atlas (DHA) (1202 pediatric hand radiographs; external test set). Each test image underwent seven types of transformations (rotations, flips, brightness, contrast, inversion, laterality marker, and resolution) to represent a range of image appearances, many of which simulate real-world variations. Computational "stress tests" were performed by comparing the model's predictions on baseline and transformed images. Mean absolute differences (MADs) of predicted bone ages compared with radiologist-determined ground truth on baseline versus transformed images were compared using Wilcoxon signed rank tests. The proportion of clinically significant errors (CSEs) was compared using McNemar tests. Results There was no evidence of a difference in MAD of the model on the two baseline test sets (RSNA = 6.8 months, DHA = 6.9 months; P = .05), indicating good model generalization to external data. Except for the RSNA dataset images with an appended radiologic laterality marker (P = .86), there were significant differences in MAD for both the DHA and RSNA datasets among other transformation groups (rotations, flips, brightness, contrast, inversion, and resolution). There were significant differences in proportion of CSEs for 57% of the image transformations (19 of 33) performed on the DHA dataset. Conclusion Although an award-winning pediatric bone age DL model generalized well to curated external images, it had inconsistent predictions on images that had undergone simple transformations reflective of several real-world variations in image appearance. Keywords: Pediatrics, Hand, Convolutional Neural Network, Radiography. Supplemental material is available for this article. © RSNA, 2024. See also commentary by Faghani and Erickson in this issue.
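
A minimal sketch of the stress-test procedure described above, assuming a placeholder prediction function and simulated radiographs; in practice the transforms would be applied to real hand radiographs and a trained bone age model.

```python
import numpy as np
from PIL import Image, ImageEnhance, ImageOps
from scipy.stats import wilcoxon

def predict_bone_age(img):
    """Placeholder for a bone age model: any function mapping an image to an age in months."""
    arr = np.asarray(img, dtype=float)
    weights = np.linspace(0.5, 1.5, arr.shape[1])     # position-dependent so flips/rotations matter
    return 60.0 + (arr * weights).mean() / 2.0

# Hypothetical "radiographs" and ground-truth ages, simulated so the snippet runs end to end.
rng = np.random.default_rng(4)
images = [Image.fromarray(rng.integers(0, 255, (256, 256), dtype=np.uint8)) for _ in range(30)]
truth = rng.uniform(24, 200, size=30)

baseline_err = [abs(predict_bone_age(im) - t) for im, t in zip(images, truth)]
transforms = [
    ("rotate 10 deg", lambda im: im.rotate(10)),
    ("horizontal flip", ImageOps.mirror),
    ("brightness +20%", lambda im: ImageEnhance.Brightness(im).enhance(1.2)),
]
for name, tf in transforms:
    err = [abs(predict_bone_age(tf(im)) - t) for im, t in zip(images, truth)]
    stat, p = wilcoxon(baseline_err, err)             # paired comparison of absolute errors
    print(f"{name}: baseline MAD={np.mean(baseline_err):.1f}, transformed MAD={np.mean(err):.1f}, P={p:.3f}")
```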


Subject(s)
Age Determination by Skeleton; Deep Learning; Child; Humans; Algorithms; Neural Networks, Computer; Radiography; Retrospective Studies; Age Determination by Skeleton/methods
11.
J Am Coll Radiol; 21(2): 248-256, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38072221

ABSTRACT

Radiology is on the verge of a technological revolution driven by artificial intelligence (including large language models), which requires robust computing and storage capabilities, often beyond the capacity of current non-cloud-based informatics systems. The cloud presents a potential solution for radiology, and we should weigh its economic and environmental implications. Recently, cloud technologies have become a cost-effective strategy by providing necessary infrastructure while reducing expenditures associated with hardware ownership, maintenance, and upgrades. Simultaneously, given the optimized energy consumption in modern cloud data centers, this transition is expected to reduce the environmental footprint of radiologic operations. The path to cloud integration comes with its own challenges, and radiology informatics leaders must consider elements such as cloud architectural choices, pricing, data security, uptime service agreements, user training and support, and broader interoperability. With the increasing importance of data-driven tools in radiology, understanding and navigating the cloud landscape will be essential for the future of radiology and its various stakeholders.


Subject(s)
Artificial Intelligence; Radiology; Cloud Computing; Costs and Cost Analysis; Diagnostic Imaging
12.
J Am Coll Radiol; 21(2): 239-247, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38043630

ABSTRACT

Radiology is a major contributor to health care's impact on climate change, in part due to its reliance on energy-intensive equipment and its growing dependence on computing technology. Delivering modern patient care requires a robust informatics team to move images from the imaging equipment to the workstations and the health care system. Radiology informatics is the field that manages medical imaging IT: the acquisition, storage, retrieval, and use of imaging information in health care to improve access and quality, which includes PACS, cloud services, and artificial intelligence. However, the electricity consumption of computing and the life cycle of various computer components expand the carbon footprint of health care. The authors provide a general framework to understand the environmental impact of clinical radiology informatics, which includes using the international Greenhouse Gas Protocol to draft a definition of scopes of emissions pertinent to radiology informatics, as well as exploring existing tools to measure and account for these emissions. A novel standard ecolabel for radiology informatics tools, akin to the Energy Star label for consumer devices or Leadership in Energy and Environmental Design certification for buildings, should be developed to promote awareness and guide radiologists and radiology informatics leaders in making environmentally conscious decisions for their clinical practice. At this critical climate juncture, the radiology community has a unique and pressing obligation to consider our shared environmental responsibility in innovating clinical technology for patient care.


Subject(s)
Medical Informatics; Radiology; Humans; Artificial Intelligence; Radiography; Diagnostic Imaging
13.
Radiol Imaging Cancer; 6(1): e230033, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38180338

ABSTRACT

Purpose To describe the design, conduct, and results of the Breast Multiparametric MRI for prediction of neoadjuvant chemotherapy Response (BMMR2) challenge. Materials and Methods The BMMR2 computational challenge opened on May 28, 2021, and closed on December 21, 2021. The goal of the challenge was to identify image-based markers derived from multiparametric breast MRI, including diffusion-weighted imaging (DWI) and dynamic contrast-enhanced (DCE) MRI, along with clinical data for predicting pathologic complete response (pCR) following neoadjuvant treatment. Data included 573 breast MRI studies from 191 women (mean age, 48.9 years ± 10.56 [SD]) in the I-SPY 2/American College of Radiology Imaging Network (ACRIN) 6698 trial (ClinicalTrials.gov: NCT01042379). The challenge cohort was split into training (60%) and test (40%) sets, with teams blinded to test set pCR outcomes. Prediction performance was evaluated by area under the receiver operating characteristic curve (AUC) and compared with the benchmark established from the ACRIN 6698 primary analysis. Results Eight teams submitted final predictions. Entries from three teams had point estimates of AUC that were higher than the benchmark performance (AUC, 0.782 [95% CI: 0.670, 0.893], with AUCs of 0.803 [95% CI: 0.702, 0.904], 0.838 [95% CI: 0.748, 0.928], and 0.840 [95% CI: 0.748, 0.932]). A variety of approaches were used, ranging from extraction of individual features to deep learning and artificial intelligence methods, incorporating DCE and DWI alone or in combination. Conclusion The BMMR2 challenge identified several models with high predictive performance, which may further expand the value of multiparametric breast MRI as an early marker of treatment response. Clinical trial registration no. NCT01042379. Keywords: MRI, Breast, Tumor Response. Supplemental material is available for this article. © RSNA, 2024.


Subject(s)
Breast Neoplasms; Multiparametric Magnetic Resonance Imaging; Female; Humans; Middle Aged; Artificial Intelligence; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/drug therapy; Magnetic Resonance Imaging; Neoadjuvant Therapy; Pathologic Complete Response; Adult
14.
Radiol Artif Intell; 5(2): e220062, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37035428

ABSTRACT

Purpose: To evaluate the performance and usability of code-free deep learning (CFDL) platforms in creating DL models for disease classification, object detection, and segmentation on chest radiographs. Materials and Methods: Six CFDL platforms were evaluated in this retrospective study (September 2021). Single- and multilabel classifiers were trained for thoracic pathologic conditions using Guangzhou pediatric and NIH-CXR14 (ie, National Institutes of Health ChestX-ray14) datasets, and external testing was performed using subsets of NIH-CXR14 and Stanford CheXpert datasets, respectively. Pneumonia detection and pneumothorax segmentation models were trained using the Radiological Society of North America (RSNA) Pneumonia and Society for Imaging Informatics in Medicine (SIIM) Pneumothorax datasets, respectively. Model performance was evaluated using F1 scores. Usability was evaluated based on feasibility of image uploading and model training, ease of use, and cost. Results: NIH-CXR14 and CheXpert datasets contained 112 120 (mean age, 47 years ± 17 [SD]; 63 340 male patients) and 151 522 images (mean age, 61 years ± 18; 88 931 male patients), respectively. The other datasets did not report demographics (Guangzhou, 5826 images; RSNA, 26 683 images; SIIM, 15 301 images). Six platforms offered single-label classifiers, four offered multilabel classifiers, five offered object detection models, and one offered a segmentation model. Guangzhou pneumonia classifiers demonstrated good internal (F1, 0.93-0.99) and poor external (F1, 0.39-0.44) performance. Multilabel NIH-CXR14 classifiers showed poor internal and external performance (F1, 0.00-0.36 and 0.00-0.76, respectively). NIH-CXR14 single-label classifiers performed poorly (F1, 0.00, all). The single successfully trained pneumonia detection model had an F1 score of 0.48. No segmentation model was successfully trained. Platform usability was limited, with all requiring some type of coded solution. Conclusion: CFDL platforms demonstrated limited performance and usability for chest radiograph analysis. Keywords: Artificial Intelligence, Automated Machine Learning, Chest Radiographs, Deep Learning, Code-Free Deep Learning, Pneumonia, Pneumothorax, Radiology. Supplemental material is available for this article. © RSNA, 2023.

15.
J Am Coll Radiol; 20(6): 561-569, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37127217

ABSTRACT

OBJECTIVE: Although educating radiology trainees about artificial intelligence (AI) has become increasingly emphasized, the types of AI educational curricula are not well understood. We performed a systematic review of original studies describing curricula used to teach AI concepts and practical applications to radiology residents and fellows. MATERIALS AND METHODS: We performed a PubMed search for original studies published as of July 22, 2022, describing AI curricula geared toward radiology residents or fellows. Studies meeting inclusion criteria were evaluated for curriculum design, implementation details, and outcomes. Descriptive statistics were used to summarize these curricula. RESULTS: Five studies describing an AI curriculum were included, all geared toward radiology residents. All five curricula were led by radiologists, most within individual academic radiology departments (4; 80%), with one led by the ACR Resident and Fellow Section. Curricula included didactic sessions (5; 100%), assigned readings (4; 80%), hands-on learning (3; 60%), and journal clubs (3; 60%); only one included individualized learning plans. All four studies that evaluated the impact of the curricula on participants' knowledge or attitudes showed positive effects. DISCUSSION: Amid increasing recognition of the importance of AI education for radiologists-in-training, several AI curricula for radiology residents have been implemented. Although curriculum designs varied and it is unclear whether one type is superior, they have had a positive impact on residents' knowledge and attitudes toward AI. As AI becomes increasingly adopted in radiology, these curricula serve as models for other departments and programs to develop AI educational initiatives to prepare the next generation of radiologists for the AI era.


Subject(s)
Internship and Residency; Radiology; Humans; Artificial Intelligence; Radiology/education; Radiologists; Curriculum
16.
Cancers (Basel); 15(16), 2023 Aug 15.
Article in English | MEDLINE | ID: mdl-37627141

ABSTRACT

We introduce tumor connectomics, a novel MRI-based complex graph theory framework that describes the intricate network of relationships within the tumor and surrounding tissue, and combine this with multiparametric radiomics (mpRad) in a machine-learning approach to distinguish radiation necrosis (RN) from true progression (TP). Pathologically confirmed cases of RN vs. TP in brain metastases treated with stereotactic radiosurgery (SRS) were included from a single institution. The region of interest was manually segmented as the single largest diameter of the T1 post-contrast (T1C) lesion plus the corresponding area of T2 FLAIR hyperintensity. Forty mpRad features and six connectomics features were extracted, along with five clinical and treatment factors. We developed an Integrated Radiomics Informatics System (IRIS) based on an Isomap support vector machine (IsoSVM) model to distinguish TP from RN using leave-one-out cross-validation. Class imbalance was resolved with differential misclassification weighting during model training using the IRIS. In total, 135 lesions in 110 patients were analyzed, including 43 cases (31.9%) of pathologically proven RN and 92 cases (68.1%) of TP. The top-performing connectomics features were three centrality measures: degree, betweenness, and eigenvector centrality. Combining these with the 10 top-performing mpRad features, an optimized IsoSVM model produced a sensitivity of 0.87, specificity of 0.84, AUC-ROC of 0.89 (95% CI: 0.82-0.94), and AUC-PR of 0.94 (95% CI: 0.87-0.97).
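
A small sketch of graph-theory features of the kind named above (degree and betweenness centrality, clustering, and measures on the largest connected component), computed on a toy "connectome" built from simulated sub-region correlations; the correlation threshold and graph construction are assumptions for illustration, not the IRIS/tumor connectomics implementation.

```python
import numpy as np
import networkx as nx

# Hypothetical tumor "connectome": nodes are image sub-regions, edges link regions whose
# signal profiles are strongly correlated; the 0.2 threshold is arbitrary, for illustration.
rng = np.random.default_rng(2)
profiles = rng.normal(size=(20, 50))                  # 20 sub-regions x 50 voxel samples
corr = np.corrcoef(profiles)
adjacency = (np.abs(corr) > 0.2).astype(int) - np.eye(20, dtype=int)
G = nx.from_numpy_array(adjacency)

# Graph-theory features of the kind described in the abstract.
degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
clustering = nx.average_clustering(G)
largest = G.subgraph(max(nx.connected_components(G), key=len))
eigenvector = nx.eigenvector_centrality_numpy(largest)    # on the largest connected component
apl = nx.average_shortest_path_length(largest)

print(f"mean degree centrality: {np.mean(list(degree.values())):.3f}")
print(f"mean betweenness centrality: {np.mean(list(betweenness.values())):.3f}")
print(f"average clustering: {clustering:.3f}, average path length (largest component): {apl:.2f}")
```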

17.
Cancers (Basel); 14(6), 2022 Mar 14.
Article in English | MEDLINE | ID: mdl-35326634

ABSTRACT

The high-level relationships that form complex networks within tumors and with the surrounding tissue are challenging to characterize and not fully understood. To better understand these tumoral networks, we developed a tumor connectomics framework (TCF) based on graph theory with machine learning to model the complex interactions within and around the tumor microenvironment that are detectable on imaging. The TCF characterization model was tested with independent datasets of breast, brain, and prostate lesions, with corresponding validation datasets in breast and brain cancer. The TCF network connections were modeled using graph metrics of centrality, average path length (APL), and clustering from multiparametric MRI with IsoSVM. The Matthews correlation coefficient (MCC) and the areas under the ROC and precision-recall curves (AUC-ROC and AUC-PR) were used for statistical analysis. The TCF classified the breast and brain tumor cohorts with an IsoSVM AUC-PR and MCC of 0.86 and 0.63, and 0.85 and 0.65, respectively. Benign breast lesion TCFs had a significantly higher clustering coefficient and degree centrality than malignant TCFs. Grade 2 brain tumors demonstrated higher connectivity than grade 4 tumors, with increased degree centrality and clustering coefficients. Gleason 7 prostate lesions had increased betweenness centrality and APL compared with Gleason 6 lesions, with AUC-PR and MCC ranging from 0.90 to 0.99 and 0.73 to 0.87, respectively. These TCF findings were similar in the validation breast and brain datasets. In conclusion, we present a new method for tumor characterization and visualization that results in a better understanding of the global and regional connections within the lesion and surrounding tissue.

18.
Invest Radiol; 56(6): 357-368, 2021 Jun 1.
Article in English | MEDLINE | ID: mdl-33350717

ABSTRACT

MATERIALS AND METHODS: This single-center study was approved by the institutional review board. Artificial intelligence-based fat-suppressed MRI (AFSMRI) scans were created from non-fat-suppressed (non-FS) images using a deep learning system with a modified convolutional neural network-based U-Net that used a training set of 25,920 images and a validation set of 16,416 images. Three musculoskeletal radiologists reviewed 88 knee MR studies in 2 sessions, the original (proton density [PD] + fat-suppressed PD [FSPD]) and the synthetic (PD + AFSMRI). Readers recorded AFSMRI quality (diagnostic/nondiagnostic) and the presence or absence of meniscal, ligament, and tendon tears; cartilage defects; and bone marrow abnormalities. Contrast-to-noise ratio measurements were made among subcutaneous fat, fluid, bone marrow, cartilage, and muscle. The original MRI sequences were used as the reference standard to determine the diagnostic performance of AFSMRI (combined with the original PD sequence). This was a fully balanced study design, in which all readers read all images the same number of times, allowing determination of the interchangeability of the original and synthetic protocols. Descriptive statistics, intermethod agreement, interobserver concordance, and interchangeability tests were applied. A P value less than 0.01 was considered statistically significant for the likelihood ratio testing, and a P value less than 0.05 for all other statistical analyses. RESULTS: Artificial intelligence-based FS MRI quality was rated as diagnostic (98.9% [87/88] to 100% [88/88], all readers). Diagnostic performance (sensitivity/specificity) of the synthetic protocol was high, for tears of the menisci (91% [71/78], 86% [84/98]), cruciate ligaments (92% [12/13], 98% [160/163]), collateral ligaments (80% [16/20], 100% [156/156]), and tendons (90% [9/10], 100% [166/166]). For cartilage defects and bone marrow abnormalities, the synthetic protocol offered an overall sensitivity/specificity of 77% (170/221)/93% (287/307) and 76% (95/125)/90% (443/491), respectively. Intermethod agreement ranged from moderate to substantial for almost all evaluated structures (menisci, cruciate ligaments, collateral ligaments, and bone marrow abnormalities). No significant difference was observed between methods for all structural abnormalities by all readers (P > 0.05), except for cartilage assessment. Interobserver agreement ranged from moderate to substantial for almost all evaluated structures. Original and synthetic protocols were interchangeable for the diagnosis of all evaluated structures. There was no significant difference in the common exact match proportions for all combinations (P > 0.01). The conspicuity of all tissues assessed through the contrast-to-noise ratio was higher on AFSMRI than on original FSPD images (P < 0.05). CONCLUSIONS: Artificial intelligence-based FS MRI (3D AFSMRI) is feasible and offers a method for fast imaging, with detection rates for structural abnormalities of the knee similar to those of original 3D MR sequences.
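
The contrast-to-noise ratio reported above is commonly computed as the difference in mean signal between two tissue regions of interest divided by the standard deviation of a noise region; a minimal sketch with simulated ROI intensities (all values made up):

```python
import numpy as np

def contrast_to_noise_ratio(roi_a, roi_b, noise_roi):
    """CNR between two tissue ROIs, using the standard deviation of a background/noise ROI."""
    return abs(roi_a.mean() - roi_b.mean()) / noise_roi.std()

# Hypothetical pixel intensities sampled from three regions of a knee MR image.
rng = np.random.default_rng(3)
fluid = rng.normal(900, 40, size=500)        # bright fluid signal
cartilage = rng.normal(500, 40, size=500)    # intermediate cartilage signal
background = rng.normal(0, 25, size=500)     # air/background noise

print(f"CNR (fluid vs cartilage): {contrast_to_noise_ratio(fluid, cartilage, background):.1f}")
```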


Subject(s)
Deep Learning; Knee Injuries; Artificial Intelligence; Humans; Imaging, Three-Dimensional; Knee Joint/diagnostic imaging; Magnetic Resonance Imaging; Magnetic Resonance Spectroscopy; Reproducibility of Results; Sensitivity and Specificity
19.
Neurooncol Adv; 3(1): vdab150, 2021.
Article in English | MEDLINE | ID: mdl-34901857

ABSTRACT

BACKGROUND: Stereotactic radiosurgery (SRS) may cause radiation necrosis (RN) that is difficult to distinguish from tumor progression (TP) by conventional MRI. We hypothesize that MRI-based multiparametric radiomics (mpRad) and machine learning (ML) can differentiate TP from RN in a multi-institutional cohort. METHODS: Patients with growing brain metastases after SRS at 2 institutions underwent surgery, and RN or TP was confirmed by histopathology. A radiomic tissue signature (RTS) was selected from mpRad, as well as single T1 post-contrast (T1c) and T2 fluid-attenuated inversion recovery (T2-FLAIR) radiomic features. Feature selection and supervised ML were performed in a randomly selected training cohort (N = 95) and validated in the remaining cases (N = 40) using surgical pathology as the gold standard. RESULTS: One hundred and thirty-five discrete lesions (37 RN, 98 TP) from 109 patients were included. Radiographic diagnoses by an experienced neuroradiologist were concordant with histopathology in 67% of cases (sensitivity 69%, specificity 59% for TP). Radiomic analysis indicated institutional origin as a significant confounding factor for diagnosis. A random forest model incorporating 1 mpRad, 4 T1c, and 4 T2-FLAIR features had an AUC of 0.77 (95% confidence interval [CI]: 0.66-0.88), sensitivity of 67% and specificity of 86% in the training cohort, and AUC of 0.71 (95% CI: 0.51-0.91), sensitivity of 52% and specificity of 90% in the validation cohort. CONCLUSIONS: MRI-based mpRad and ML can distinguish TP from RN with high specificity, which may facilitate the triage of patients with growing brain metastases after SRS for repeat radiation versus surgical intervention.

20.
Med Phys; 47(1): 75-88, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31598978

ABSTRACT

PURPOSE: Deep learning is emerging in radiology due to the increased computational capabilities available to reading rooms. These computational developments may allow for more accurate characterization of normal and pathological tissue and thus assist radiologists in defining different diseases. We introduce a novel tissue signature model based on tissue characteristics in breast tissue from multiparametric magnetic resonance imaging (mpMRI). The breast tissue signatures are used as inputs to a stacked sparse autoencoder (SSAE) multiparametric deep learning (MPDL) network for segmentation of breast mpMRI. METHODS: We constructed the MPDL network from SSAEs with 5 layers of 10 nodes each. A total cohort of 195 breast cancer subjects was used for training and testing of the MPDL network. The cohort consisted of a training dataset of 145 subjects and an independent validation set of 50 subjects. After segmentation, we used a combined SAE-support vector machine (SAE-SVM) learning method for classification. Dice similarity (DS) metrics were calculated between the MPDL-segmented lesions and dynamic contrast-enhanced (DCE) MRI-defined lesions. Sensitivity, specificity, and area under the curve (AUC) metrics were used to classify benign from malignant lesions. RESULTS: The MPDL segmentation resulted in a high DS of 0.87 ± 0.05 for malignant lesions and 0.84 ± 0.07 for benign lesions. The MPDL had sensitivity and specificity of 86% and 86%, with positive and negative predictive values of 92% and 73%, respectively, and an AUC of 0.90. CONCLUSIONS: Using a new tissue signature model as input to the MPDL algorithm, we successfully validated MPDL in a large cohort of subjects and achieved results similar to those of radiologists.
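
A minimal sketch of the Dice similarity metric used above to compare MPDL segmentations with DCE MRI-defined lesions; the two masks here are synthetic stand-ins for real segmentations.

```python
import numpy as np

def dice_similarity(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())

# Hypothetical lesion masks: a "DCE-defined" reference and a slightly shifted "MPDL" segmentation.
reference = np.zeros((64, 64), dtype=bool)
reference[20:40, 20:40] = True
predicted = np.zeros((64, 64), dtype=bool)
predicted[22:42, 21:41] = True

print(f"Dice similarity: {dice_similarity(predicted, reference):.2f}")
```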


Subject(s)
Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Deep Learning; Image Processing, Computer-Assisted/methods; Magnetic Resonance Imaging; Adult; Aged; Aged, 80 and over; Female; Humans; Middle Aged; Radiology; Young Adult