Results 1 - 20 of 119
1.
J Imaging Inform Med; 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38587770

ABSTRACT

Uptake segmentation and classification on PSMA PET/CT are important for automating whole-body tumor burden determinations. We developed and evaluated an automated deep learning (DL)-based framework that segments and classifies uptake on PSMA PET/CT. We identified 193 [18F]DCFPyL PET/CT scans of patients with biochemically recurrent prostate cancer from two institutions: 137 scans for training and internal testing, and 56 scans from another institution for external testing. Two radiologists segmented and labelled foci as suspicious or non-suspicious for malignancy. A DL-based segmentation framework was developed with two independent CNNs, and anatomical prior guidance was applied to make the framework focus on PSMA-avid lesions. Segmentation performance was evaluated by Dice, IoU, precision, and recall. The classification model was built on a multi-modal decision fusion framework and evaluated by accuracy, AUC, F1 score, precision, and recall. Automatic segmentation of suspicious lesions improved under prior guidance, with mean Dice, IoU, precision, and recall of 0.700, 0.566, 0.809, and 0.660 on the internal test set and 0.680, 0.548, 0.749, and 0.740 on the external test set. Our multi-modal decision fusion framework outperformed single-modal and multi-modal CNNs, with accuracy, AUC, F1 score, precision, and recall of 0.764, 0.863, 0.844, 0.841, and 0.847 in distinguishing suspicious from non-suspicious foci on the internal test set and 0.796, 0.851, 0.865, 0.814, and 0.923 on the external test set. DL-based lesion segmentation on PSMA PET is facilitated by our anatomical prior guidance strategy, and our classification framework differentiates suspicious foci from those not suspicious for cancer with good accuracy.
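For readers unfamiliar with the overlap metrics quoted above, the sketch below shows one generic way to compute Dice, IoU, precision, and recall from binary lesion masks. It is an illustration only, not the authors' evaluation code; the array names and mask shapes are assumptions.

```python
import numpy as np

def overlap_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> dict:
    """Compute Dice, IoU, precision, and recall for binary masks of any shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "iou": tp / (tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "recall": tp / (tp + fn + eps),
    }

# Toy example with random 3D masks standing in for predicted and reference lesion maps.
rng = np.random.default_rng(0)
pred_mask = rng.random((64, 64, 64)) > 0.7
gt_mask = rng.random((64, 64, 64)) > 0.7
print(overlap_metrics(pred_mask, gt_mask))
```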

2.
J Thorac Imaging; 39(3): 194-199, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38640144

ABSTRACT

PURPOSE: To develop and evaluate a deep convolutional neural network (DCNN) model for the classification of acute and chronic lung nodules from nontuberculous mycobacterial lung disease (NTM-LD) on computed tomography (CT). MATERIALS AND METHODS: We collected a data set of 650 nodules (316 acute and 334 chronic) from the CT scans of 110 patients with NTM-LD. The data set was divided into training, validation, and test sets in a ratio of 4:1:1. Bounding boxes were used to crop the 2D CT images down to the area of interest. A DCNN model was built using 11 convolutional layers and trained on these images. The performance of the model was evaluated on the hold-out test set and compared with that of 3 radiologists who independently reviewed the images. RESULTS: The DCNN model achieved an area under the receiver operating characteristic curve (AUC) of 0.806 for differentiating acute and chronic NTM-LD nodules, corresponding to a sensitivity, specificity, and accuracy of 76%, 68%, and 72%, respectively. The performance of the model was comparable to that of the 3 radiologists, whose AUC, sensitivity, specificity, and accuracy ranged from 0.693 to 0.771, 61% to 82%, 59% to 73%, and 60% to 73%, respectively. CONCLUSIONS: This study demonstrated the feasibility of using a DCNN model to classify the activity of NTM-LD nodules on chest CT. The model's performance was comparable to that of radiologists, and this approach could improve the efficiency of NTM-LD diagnosis and management.
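As a companion to the classification metrics reported above, here is a minimal, generic way to derive AUC, sensitivity, specificity, and accuracy from predicted probabilities with scikit-learn. The labels, probabilities, and 0.5 operating point are illustrative assumptions, not the study's code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

# Hypothetical model outputs: probability that a nodule is "acute".
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.7, 0.4, 0.3, 0.6, 0.8, 0.1, 0.55, 0.35])

auc = roc_auc_score(y_true, y_prob)

# Binarize at a chosen operating point (0.5 here; studies often pick it from the ROC curve).
y_pred = (y_prob >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(f"AUC={auc:.3f} sens={sensitivity:.2%} spec={specificity:.2%} acc={accuracy:.2%}")
```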


Subject(s)
Deep Learning; Lung Neoplasms; Pneumonia; Humans; Neural Networks, Computer; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Retrospective Studies; Lung Neoplasms/diagnostic imaging
3.
J Imaging Inform Med; 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38514595

ABSTRACT

Deep learning models have demonstrated great potential in medical imaging but are limited by the expensive, large volume of annotations required. To address this, we compared different active learning strategies by training models on subsets of the most informative images from real-world clinical brain tumor segmentation datasets, and we propose a framework that minimizes the data needed while maintaining performance. A total of 638 multi-institutional brain tumor magnetic resonance imaging scans were used to train three-dimensional U-Net models and compare active learning strategies. Uncertainty estimation techniques, including Bayesian estimation with dropout, bootstrapping, and margin sampling, were compared with random query. Strategies to avoid annotating similar images were also considered. We determined the minimum data necessary to achieve performance equivalent to the model trained on the full dataset (α = 0.05). Bayesian approximation with dropout at training and testing achieved performance equivalent to that of the full-data model (target) using around 30% of the training data that random query required to reach target performance (p = 0.018). Annotation redundancy restriction techniques reduced the training data needed by random query to achieve target performance by 20%. In summary, we investigated various active learning strategies to minimize the annotation burden for three-dimensional brain tumor segmentation; dropout uncertainty estimation achieved target performance with the least annotated data.
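The dropout-based uncertainty query described above can be sketched as follows: keep dropout active at inference, run several stochastic forward passes, and request annotation for the cases with the highest predictive variance. This is a simplified 2D PyTorch illustration under assumed shapes and names, not the study's 3D U-Net pipeline.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in segmentation network with dropout so Monte Carlo sampling is possible."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(0.5),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return torch.sigmoid(self.net(x))

@torch.no_grad()
def mc_dropout_uncertainty(model, images, passes=10):
    model.train()  # keep dropout layers stochastic at inference time
    preds = torch.stack([model(images) for _ in range(passes)])  # (T, N, 1, H, W)
    return preds.var(dim=0).mean(dim=(1, 2, 3))                  # one score per image

model = TinySegNet()
unlabeled = torch.randn(32, 1, 64, 64)            # hypothetical unlabeled pool
scores = mc_dropout_uncertainty(model, unlabeled)
query = torch.topk(scores, k=8).indices           # images to send for annotation
print(query.tolist())
```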

4.
Clin Cancer Res; 30(1): 150-158, 2024 01 05.
Article in English | MEDLINE | ID: mdl-37916978

ABSTRACT

PURPOSE: We aimed to develop and validate a deep learning (DL) model to automatically segment posterior fossa ependymoma (PF-EPN) and predict its molecular subtypes [Group A (PFA) and Group B (PFB)] from preoperative MR images. EXPERIMENTAL DESIGN: We retrospectively identified 227 PF-EPNs (development and internal test sets) with available preoperative T2-weighted (T2w) MR images and molecular status to develop and test a 3D nnU-Net (referred to as T2-nnU-Net) for tumor segmentation and molecular subtype prediction. The network was externally tested using an independent external set [n = 40; subset-1 (n = 31) and subset-2 (n = 9)] and prospectively enrolled cases [prospective validation set (n = 27)]. The Dice similarity coefficient was used to evaluate segmentation performance, and receiver operating characteristic analysis was performed for molecular subtype prediction. RESULTS: For tumor segmentation, the T2-nnU-Net achieved a Dice score of 0.94 ± 0.02 in the internal test set. For molecular subtype prediction, the T2-nnU-Net achieved an AUC of 0.93 and an accuracy of 0.89 in the internal test set, and an AUC of 0.99 and an accuracy of 0.93 in the external test set. In the prospective validation set, the model achieved an AUC of 0.93 and an accuracy of 0.89. The predictive performance of the T2-nnU-Net was superior or comparable to that of demographic and multiple radiologic features (AUCs ranging from 0.87 to 0.95). CONCLUSIONS: A fully automated DL model was developed and validated to accurately segment PF-EPNs and predict molecular subtypes using only T2w MR images, which could help in clinical decision-making.


Subject(s)
Deep Learning; Ependymoma; Humans; Retrospective Studies; Area Under Curve; Clinical Decision-Making; Phenylphosphonothioic Acid, 2-Ethyl 2-(4-Nitrophenyl) Ester; Ependymoma/diagnostic imaging; Ependymoma/genetics; Magnetic Resonance Imaging
5.
Radiology; 309(2): e222891, 2023 11.
Article in English | MEDLINE | ID: mdl-37934098

ABSTRACT

Interventional oncology is a rapidly growing field with advances in minimally invasive image-guided local-regional treatments for hepatocellular carcinoma (HCC), including transarterial chemoembolization, transarterial radioembolization, and thermal ablation. However, current standardized clinical staging systems for HCC are limited in their ability to optimize patient selection for treatment as they rely primarily on serum markers and radiologist-defined imaging features. Given the variation in treatment responses, an updated scoring system that includes multidimensional aspects of the disease, including quantitative imaging features, serum markers, and functional biomarkers, is needed to optimally triage patients. With the vast amounts of numerical medical record data and imaging features, researchers have turned to image-based methods, such as radiomics and artificial intelligence (AI), to automatically extract and process multidimensional data from images. The synthesis of these data can provide clinically relevant results to guide personalized treatment plans and optimize resource utilization. Machine learning (ML) is a branch of AI in which a model learns from training data and makes effective predictions by teaching itself. This review article outlines the basics of ML and provides a comprehensive overview of its potential value in the prediction of treatment response in patients with HCC after minimally invasive image-guided therapy.
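To make the "learning from training data" idea above concrete, the following generic scikit-learn sketch trains a classifier on a mix of hypothetical quantitative imaging features and serum markers to predict treatment response. The feature set, labels, and model choice are invented for illustration and are not drawn from any specific study discussed in the review.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
# Hypothetical feature matrix: radiomic texture features plus serum markers (e.g., AFP).
X = rng.normal(size=(200, 12))
# Hypothetical binary label: objective response after locoregional therapy.
y = rng.integers(0, 2, size=200)

model = RandomForestClassifier(n_estimators=200, random_state=0)
auc_scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print("Cross-validated AUC:", auc_scores.mean().round(3))
```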


Subject(s)
Carcinoma, Hepatocellular; Chemoembolization, Therapeutic; Liver Neoplasms; Humans; Artificial Intelligence; Machine Learning; Biomarkers
6.
Eur Radiol; 2023 Nov 06.
Article in English | MEDLINE | ID: mdl-37930412

ABSTRACT

Conventional transarterial chemoembolization (cTACE) utilizing ethiodized oil as a chemotherapy carrier has become a standard treatment for intermediate-stage hepatocellular carcinoma (HCC) and has been adopted as a bridging and downstaging therapy for liver transplantation. A water-in-oil emulsion of ethiodized oil and chemotherapy solution is retained in the tumor vasculature, resulting in high tissue drug concentrations and low systemic chemotherapy doses. The density and distribution pattern of ethiodized oil within the tumor on post-treatment imaging are predictive of the extent of tumor necrosis and the duration of response to treatment. This review describes the multiple roles of ethiodized oil, particularly as a biomarker of tumor response to cTACE. CLINICAL RELEVANCE: With the increasing complexity of locoregional therapy options, including combination therapies, treatment response assessment has become challenging; ethiodized oil deposition patterns can serve as an imaging biomarker for the prediction of treatment response and, perhaps, post-treatment prognosis. KEY POINTS: • Treatment response assessment after locoregional therapy for hepatocellular carcinoma is fraught with challenges given the varied post-treatment imaging appearance. • Ethiodized oil is unique in that its radiopacity can serve as an imaging biomarker to help predict treatment response. • The deposition pattern of ethiodized oil can reveal undertreated portions of tumor and can serve as an adjunct to enhancement to improve the management of patients treated with intraarterial embolization with ethiodized oil.

7.
Eur J Radiol; 168: 111136, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37832194

ABSTRACT

PURPOSE: This study aimed to develop and evaluate a deep learning-based radiomics model to predict the histological risk categorization of thymic epithelial tumors (TETs), which can be highly informative for patient treatment planning and prognostic assessment. METHOD: A total of 681 patients with TETs from three independent hospitals were included and separated into a derivation cohort and an external test cohort. Handcrafted and deep learning features were extracted from preoperative contrast-enhanced CT images and selected to build three radiomics signatures (radiomics signature [Rad_Sig], deep learning signature [DL_Sig], and deep learning radiomics signature [DLR_Sig]) to predict the risk categorization of TETs. A deep learning-based radiomics nomogram (DLRN) was then constructed to visualize the classification evaluation. The performance of the predictive models was compared using receiver operating characteristic and decision curve analysis (DCA). RESULTS: Among the three radiomics signatures, DLR_Sig demonstrated the best performance, with an AUC of 0.883 in the derivation cohort and 0.749 in the external test cohort. Combining DLR_Sig with age and gender, the DLRN exhibited the best performance among all radiomics models, with an AUC of 0.965, accuracy of 0.911, sensitivity of 0.921, and specificity of 0.902 in the derivation cohort, and an AUC of 0.786, accuracy of 0.774, sensitivity of 0.778, and specificity of 0.771 in the external test cohort. The DCA showed that the DLRN had greater clinical benefit than the other radiomics signatures. CONCLUSIONS: Our study developed and validated a DLRN to accurately predict the risk categorization of TETs, which has the potential to facilitate individualized treatment and improve patient prognosis evaluation.
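Decision curve analysis, used above to compare the radiomics signatures, reduces to a net-benefit calculation across threshold probabilities. The sketch below is a generic implementation of the standard net-benefit formula, not the authors' code; the labels, predicted risks, and threshold grid are hypothetical.

```python
import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, thresholds: np.ndarray) -> np.ndarray:
    """Net benefit of a model at each threshold probability pt:
    NB(pt) = TP/N - (FP/N) * pt / (1 - pt)."""
    n = len(y_true)
    nb = []
    for pt in thresholds:
        pred_pos = y_prob >= pt
        tp = np.sum(pred_pos & (y_true == 1))
        fp = np.sum(pred_pos & (y_true == 0))
        nb.append(tp / n - fp / n * pt / (1 - pt))
    return np.array(nb)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 150)                          # hypothetical high-risk labels
p = np.clip(y * 0.3 + rng.random(150) * 0.7, 0, 1)   # hypothetical predicted risks
ths = np.linspace(0.05, 0.95, 19)
print(net_benefit(y, p, ths).round(3))
```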


Subject(s)
Deep Learning; Neoplasms, Glandular and Epithelial; Thymus Neoplasms; Humans; Nomograms; Neoplasms, Glandular and Epithelial/diagnostic imaging; Thymus Neoplasms/diagnostic imaging; Retrospective Studies
8.
Biomaterials; 299: 122164, 2023 08.
Article in English | MEDLINE | ID: mdl-37229807

ABSTRACT

Developing a contrast agent that not only provides excellent image contrast but also protects impaired kidneys from oxidative stress during angiography is a challenging task. Clinically approved iodinated CT contrast media are associated with potential renal toxicity, making it necessary to develop a renoprotective contrast agent. Here, we develop a CeO2 nanoparticle (NP)-mediated three-in-one renoprotective imaging strategy for in vivo CT angiography (CTA): i) renal-clearable CeO2 NPs serve as an antioxidative contrast agent that plays two roles at once, ii) a low contrast media dose is used, and iii) spectral CT is employed. Benefiting from the advanced sensitivity of spectral CT and the K-edge energy of cerium (Ce, 40.4 keV), improved in vivo CTA image quality is achieved with a 10-fold reduction in contrast agent dosage. In parallel, the size and broad catalytic activities of the CeO2 NPs allow them to be filtered through the glomerulus, directly alleviating oxidative stress and the accompanying inflammatory injury of the kidney tubules. In addition, the low dosage of CeO2 NPs reduces the hypoperfusion stress on renal tubules induced by the concentrated contrast agents used in angiography. This three-in-one renoprotective imaging strategy helps prevent kidney injury from worsening during CTA examination.


Subject(s)
Cerium; Nanoparticles; Computed Tomography Angiography; Contrast Media; Antioxidants; Kidney/diagnostic imaging
9.
IEEE J Biomed Health Inform; 27(8): 4052-4061, 2023 08.
Article in English | MEDLINE | ID: mdl-37204947

ABSTRACT

Segmentation of the liver from CT scans is essential in computer-aided liver disease diagnosis and treatment. However, 2D CNNs ignore the 3D context, while 3D CNNs suffer from numerous learnable parameters and high computational cost. To overcome these limitations, we propose an Attentive Context-Enhanced Network (AC-E Network) consisting of 1) an attentive context encoding module (ACEM) that can be integrated into a 2D backbone to extract 3D context without a sharp increase in the number of learnable parameters, and 2) a dual segmentation branch with a complementary loss that makes the network attend to both the liver region and its boundary, yielding a segmented liver surface with high accuracy. Extensive experiments on the LiTS and 3D-IRCADb datasets demonstrate that our method outperforms existing approaches and is competitive with the state-of-the-art 2D-3D hybrid method in balancing segmentation precision against the number of model parameters.
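A minimal way to express the "region plus boundary" supervision described above is to combine a Dice-style region loss with a loss computed on mask boundaries extracted by a morphological gradient. The PyTorch sketch below is a generic illustration under assumed tensor shapes and weighting, not the AC-E Network's actual loss.

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def boundary_map(mask, kernel=3):
    # Morphological gradient: dilation minus erosion of a soft/binary mask.
    pad = kernel // 2
    dilated = F.max_pool2d(mask, kernel, stride=1, padding=pad)
    eroded = -F.max_pool2d(-mask, kernel, stride=1, padding=pad)
    return (dilated - eroded).clamp(0, 1)

def region_and_boundary_loss(pred, target, w_boundary=0.5):
    region = dice_loss(pred, target)
    boundary = dice_loss(boundary_map(pred), boundary_map(target))
    return region + w_boundary * boundary

pred = torch.rand(2, 1, 128, 128)                      # hypothetical sigmoid outputs
target = (torch.rand(2, 1, 128, 128) > 0.5).float()    # hypothetical liver masks
print(region_and_boundary_loss(pred, target).item())
```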


Subject(s)
Abdomen; Liver Neoplasms; Humans; Tomography, X-Ray Computed/methods; Diagnosis, Computer-Assisted; Image Processing, Computer-Assisted/methods
12.
J Stroke Cerebrovasc Dis; 31(11): 106753, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36115105

ABSTRACT

OBJECTIVES: In this study, we developed a deep learning pipeline that detects large vessel occlusion (LVO) and predicts functional outcome from computed tomography angiography (CTA) images to improve the management of LVO patients. METHODS: A series identifier selected 8650 LVO-protocoled studies with an identified thin axial series, performed from 2015 to 2019 at Rhode Island Hospital, to serve as the data pool. Data were annotated into 2 classes: 1021 LVOs and 7629 normal. The Inception-V1 I3D architecture was applied for LVO detection. For outcome prediction, 323 patients undergoing thrombectomy were selected. A 3D convolutional neural network (CNN) was used for outcome prediction (30-day mRS), with CTA volumes and embedded pre-treatment variables as inputs. RESULTS: For the LVO-detection model, CTAs from 8650 patients (median age 68 years, interquartile range (IQR): 58-81; 3934 females) were analyzed. The cross-validated AUC for LVO vs. non-LVO was 0.74 (95% CI: 0.72-0.75). For the mRS classification model, CTAs from 323 patients (median age 75 years, IQR: 63-84; 164 females) were analyzed. The algorithm achieved a test AUC of 0.82 (95% CI: 0.79-0.84), sensitivity of 89%, and specificity of 66%. The two models were then integrated with the hospital infrastructure, where CTAs were collected in real time and processed by the model. If an LVO was detected, interventionists were notified and provided with the predicted clinical outcome. CONCLUSION: 3D CNNs based on CTA were effective in detecting LVO and predicting short-term prognosis after mechanical thrombectomy for LVO. An end-to-end AI platform allows users to receive immediate prognosis predictions and facilitates the clinical workflow.
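The outcome model above fuses a CTA volume with pre-treatment clinical variables. A common way to do this is to concatenate a CNN image embedding with a clinical-feature vector before the final classifier, as in the simplified PyTorch sketch below; the architecture, dimensions, and variable names are assumptions for illustration, not the published I3D-based model.

```python
import torch
import torch.nn as nn

class FusionOutcomeNet(nn.Module):
    """Toy 3D CNN + clinical-variable fusion for a binary outcome label."""
    def __init__(self, n_clinical: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),        # -> (N, 16) image embedding
        )
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 16), nn.ReLU())
        self.head = nn.Linear(16 + 16, 1)

    def forward(self, volume, clinical_vars):
        fused = torch.cat([self.backbone(volume), self.clinical(clinical_vars)], dim=1)
        return self.head(fused)   # raw logit; apply sigmoid for a probability

model = FusionOutcomeNet()
vol = torch.randn(2, 1, 32, 64, 64)   # hypothetical CTA sub-volumes
clin = torch.randn(2, 8)              # hypothetical pre-treatment variables
print(torch.sigmoid(model(vol, clin)).shape)
```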


Subject(s)
Brain Ischemia; Stroke; Female; Humans; Aged; Artificial Intelligence; Thrombectomy/adverse effects; Computed Tomography Angiography/methods; Middle Cerebral Artery; Retrospective Studies
13.
Sci Rep; 12(1): 7924, 2022 05 13.
Article in English | MEDLINE | ID: mdl-35562532

ABSTRACT

With the modern management of primary liver cancer shifting towards non-invasive diagnostics, accurate tumor classification on medical imaging is increasingly critical for disease surveillance and appropriate targeting of therapy. Recent advancements in machine learning raise the possibility of automated tools that can accelerate workflow, enhance performance, and increase the accessibility of artificial intelligence to clinical researchers. We explore the use of an automated Tree-Based Optimization Tool that leverages a genetic programming algorithm to differentiate the two common primary liver cancers on multiphasic MRI. Manual and automated analyses were performed to select an optimal machine learning model, with an accuracy of 73-75% (95% CI 0.59-0.85), sensitivity of 70-75% (95% CI 0.48-0.89), and specificity of 71-79% (95% CI 0.52-0.90) for manual optimization, and an accuracy of 73-75% (95% CI 0.59-0.85), sensitivity of 65-75% (95% CI 0.43-0.89), and specificity of 75-79% (95% CI 0.56-0.90) for automated machine learning. We found that automated machine learning performance was similar to that of manual optimization and that it could classify hepatocellular carcinoma and intrahepatic cholangiocarcinoma with a sensitivity and specificity comparable to those of radiologists. However, automated machine learning performance was poor on a subset of scans that met LI-RADS criteria for LR-M. Exploration of additional feature selection and classifier methods with automated machine learning to improve performance on LR-M cases, as well as prospective validation in the clinical setting, is needed prior to implementation.
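If the automated Tree-Based Optimization Tool referred to above is the open-source TPOT package (Tree-based Pipeline Optimization Tool), its genetic programming search over scikit-learn pipelines looks roughly like the sketch below. The data, parameter values, and output file name are illustrative assumptions, not the study's configuration.

```python
from tpot import TPOTClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Hypothetical tabular features standing in for values extracted from multiphasic MRI.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tpot = TPOTClassifier(
    generations=5,          # genetic programming iterations
    population_size=20,     # candidate pipelines per generation
    cv=5,
    scoring="roc_auc",
    random_state=0,
    verbosity=2,
)
tpot.fit(X_train, y_train)
print("Held-out score:", tpot.score(X_test, y_test))
tpot.export("best_pipeline.py")   # writes the winning scikit-learn pipeline to a file
```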


Subject(s)
Bile Duct Neoplasms; Carcinoma, Hepatocellular; Cholangiocarcinoma; Liver Neoplasms; Artificial Intelligence; Bile Duct Neoplasms/diagnostic imaging; Bile Ducts, Intrahepatic; Carcinoma, Hepatocellular/diagnostic imaging; Cholangiocarcinoma/diagnostic imaging; Contrast Media; Humans; Liver Neoplasms/diagnostic imaging; Machine Learning; Magnetic Resonance Imaging; Retrospective Studies; Sensitivity and Specificity
14.
Med Image Anal; 79: 102458, 2022 07.
Article in English | MEDLINE | ID: mdl-35500497

ABSTRACT

Pixel-wise error correction of initial segmentation results provides an effective way to improve quality. An additional error segmentation network learns to identify correct and incorrect predictions, and its performance directly affects accuracy on the test set and the subsequent self-training with error-corrected pseudo labels. In this paper, we propose a novel label rectification method based on error correction, namely ECLR, which can be directly added after a fully supervised segmentation framework. Moreover, it can be used to guide the semi-supervised learning (SSL) process, constituting an error correction guided SSL framework called ECGSSL. Specifically, we analyze the types and causes of segmentation errors and divide them into intra-class errors and inter-class errors, caused by intra-class inconsistency and inter-class similarity in segmentation, respectively. Further, we propose a collaborative multi-task discriminative error prediction network (DEP-Net) to highlight the two error types. For better training of DEP-Net, we propose specific mask degradation methods that represent typical segmentation errors. Under the fully supervised regime, the pre-trained DEP-Net is used to directly rectify the initial segmentation results of the test set. Under the semi-supervised regime, a dual error correction method is proposed for unlabeled data to obtain more reliable network re-training. Our method is easy to apply to different segmentation models. Extensive experiments on gland segmentation verify that ECLR yields substantial improvements over initial segmentation predictions and that ECGSSL shows consistent improvements over a supervised baseline learned only from labeled data, achieving competitive performance compared with other popular semi-supervised methods.
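The "mask degradation" idea above, synthesizing typical segmentation errors from ground-truth masks so that an error-prediction network has training targets, can be illustrated generically with random morphological perturbations. This is a hypothetical sketch under invented parameters, not the paper's specific degradation scheme.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def degrade_mask(gt: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Simulate a flawed prediction by randomly dilating or eroding the ground truth."""
    iters = int(rng.integers(1, 4))
    if rng.random() < 0.5:
        return binary_dilation(gt, iterations=iters)   # over-segmentation
    return binary_erosion(gt, iterations=iters)        # under-segmentation

rng = np.random.default_rng(0)
gt = np.zeros((64, 64), dtype=bool)
gt[20:44, 20:44] = True                  # hypothetical gland mask
degraded = degrade_mask(gt, rng)
error_map = degraded ^ gt                # XOR: the pixels an error network should flag
print("error pixels:", int(error_map.sum()))
```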


Subject(s)
Colon; Supervised Machine Learning; Humans
15.
Epilepsy Res; 182: 106861, 2022 05.
Article in English | MEDLINE | ID: mdl-35364483

ABSTRACT

Given improvements in computing power, artificial intelligence (AI) with deep learning has emerged as the state-of-the-art method for the analysis of medical imaging data and will increasingly be used in the clinical setting. Recent work in epilepsy research has aimed to use AI methods to improve diagnosis, prognosis, and treatment, with the ultimate goal of developing highly accurate and reliable tools to aid clinical decision making. Here, we review how researchers are currently using AI methods in the analysis of neuroimaging data in epilepsy, focusing on challenges unique to each imaging modality with an emphasis on clinical significance. We further provide critical analyses of existing techniques and recommend areas for future work. We call for: (1) a multimodal approach that leverages the strengths of different modalities while compensating for their individual weaknesses, and (2) widespread implementation of generalizability testing of proposed models, a needed step before their introduction into clinical workflows. To achieve both goals, more collaborations among research groups and institutions in this field will be required.


Subject(s)
Artificial Intelligence; Epilepsy; Clinical Decision-Making; Epilepsy/diagnostic imaging; Humans
17.
Eur Radiol; 32(7): 4446-4456, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35184218

ABSTRACT

OBJECTIVES: We aimed to develop deep learning models using longitudinal chest X-rays (CXRs) and clinical data to predict in-hospital mortality of COVID-19 patients in the intensive care unit (ICU). METHODS: Six hundred fifty-four patients (212 deceased, 442 alive, 5645 total CXRs) were identified across two institutions. Imaging and clinical data from one institution were used to train five longitudinal transformer-based networks applying five-fold cross-validation. The models were tested on data from the other institution, and pairwise comparisons were used to determine the best-performing models. RESULTS: A higher proportion of deceased patients had elevated white blood cell counts, decreased absolute lymphocyte counts, elevated creatinine concentrations, and cardiovascular or chronic kidney disease. A model based on pre-ICU CXRs achieved an AUC of 0.632 and an accuracy of 0.593, and a model based on ICU CXRs achieved an AUC of 0.697 and an accuracy of 0.657. A model based on all longitudinal CXRs (both pre-ICU and ICU) achieved an AUC of 0.702 and an accuracy of 0.694. A model based on clinical data alone achieved an AUC of 0.653 and an accuracy of 0.657. The addition of longitudinal imaging to clinical data in a combined model significantly improved performance, reaching an AUC of 0.727 (p = 0.039) and an accuracy of 0.732. CONCLUSIONS: The addition of longitudinal CXRs to clinical data significantly improves mortality prediction with deep learning for COVID-19 patients in the ICU. KEY POINTS: • Deep learning was used to predict mortality in COVID-19 ICU patients. • Serial radiographs and clinical data were used. • The models could inform clinical decision-making and resource allocation.
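One simple way to realize the "longitudinal CXRs plus clinical data" idea above is to encode each radiograph into a feature vector, run a transformer encoder over the time-ordered sequence, and concatenate the pooled output with clinical features. The PyTorch sketch below is a schematic under assumed dimensions and pooling, not the published architecture.

```python
import torch
import torch.nn as nn

class LongitudinalFusionModel(nn.Module):
    """Toy model: transformer over per-CXR embeddings, fused with clinical features."""
    def __init__(self, img_dim=128, n_clinical=10, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=img_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 32), nn.ReLU())
        self.head = nn.Linear(img_dim + 32, 1)   # in-hospital mortality logit

    def forward(self, cxr_seq, clinical_vars):
        encoded = self.encoder(cxr_seq)          # (N, T, img_dim)
        pooled = encoded.mean(dim=1)             # average over the time dimension
        fused = torch.cat([pooled, self.clinical(clinical_vars)], dim=1)
        return self.head(fused)

model = LongitudinalFusionModel()
cxr_seq = torch.randn(4, 6, 128)   # 4 patients, 6 serial CXR embeddings each (hypothetical)
clin = torch.randn(4, 10)          # hypothetical clinical features
print(torch.sigmoid(model(cxr_seq, clin)).shape)
```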


Subject(s)
COVID-19; Deep Learning; Humans; Intensive Care Units; Radiography; X-Rays
18.
JAMA Netw Open; 5(1): e2144742, 2022 01 04.
Article in English | MEDLINE | ID: mdl-35072720

ABSTRACT

Importance: Despite the rapid growth of interest and diversity in applications of artificial intelligence (AI) to biomedical research, there are limited objective ways to characterize the potential for use of AI in clinical practice. Objective: To examine what types of medical AI have the greatest estimated translational impact (ie, ability to lead to development that has measurable value for human health) potential. Design, Setting, and Participants: In this cohort study, research grants related to AI awarded between January 1, 1985, and December 31, 2020, were identified from a National Institutes of Health (NIH) award database. The text content for each award was entered into a Natural Language Processing (NLP) clustering algorithm. An NIH database was also used to extract citation data, including the number of citations and approximate potential to translate (APT) score for published articles associated with the granted awards to create proxies for translatability. Exposures: Unsupervised assignment of AI-related research awards to application topics using NLP. Main Outcomes and Measures: Annualized citations per $1 million funding (ACOF) and average APT score for award-associated articles, grouped by application topic. The APT score is a machine-learning based metric created by the NIH Office of Portfolio Analysis that quantifies the likelihood of future citation by a clinical article. Results: A total of 16,629 NIH awards related to AI were included in the analysis, and 75 applications of AI were identified. Total annual funding for AI grew from $17.4 million in 1985 to $1.43 billion in 2020. By average APT, interpersonal communication technologies (0.488; 95% CI, 0.472-0.504) and population genetics (0.463; 95% CI, 0.453-0.472) had the highest translatability; environmental health (ACOF, 1038) and applications focused on the electronic health record (ACOF, 489) also had high translatability. The category of applications related to biochemical analysis was found to have low translatability by both metrics (average APT, 0.393; 95% CI, 0.388-0.398; ACOF, 246). Conclusions and Relevance: Based on this study's findings, data on grants from the NIH can apparently be used to identify and characterize medical applications of AI to understand changes in academic productivity, funding support, and potential for translational impact. This method may be extended to characterize other research domains.
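The unsupervised topic assignment and funding-normalized citation metric described above can be sketched generically with TF-IDF text clustering and a straightforward ACOF calculation. The award texts, citation counts, and funding amounts below are invented for illustration and are not drawn from the NIH database.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical award abstracts standing in for NIH grant text.
awards = [
    "deep learning for radiology image segmentation",
    "natural language processing of electronic health records",
    "machine learning for population genetics variant analysis",
    "AI triage of chest radiographs in the emergency department",
]
X = TfidfVectorizer(stop_words="english").fit_transform(awards)
topics = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# ACOF: annualized citations per $1 million of funding (all values hypothetical).
citations_per_year = np.array([120.0, 40.0, 75.0, 60.0])
funding_millions = np.array([0.8, 0.5, 1.2, 0.4])
acof = citations_per_year / funding_millions
for topic, value in zip(topics, acof):
    print(f"topic {topic}: ACOF = {value:.0f}")
```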


Subject(s)
Artificial Intelligence/economics; Awards and Prizes; Biomedical Research/economics; National Institutes of Health (U.S.)/economics; Cohort Studies; Financing, Government; Financing, Organized; Humans; Research Support as Topic/economics; United States
19.
NPJ Digit Med; 5(1): 5, 2022 Jan 14.
Article in English | MEDLINE | ID: mdl-35031687

ABSTRACT

While COVID-19 diagnosis and prognosis artificial intelligence models exist, very few can be implemented for practical use given their high risk of bias. We aimed to develop a diagnosis model that addresses notable shortcomings of prior studies, integrating it into a fully automated triage pipeline that examines chest radiographs for the presence, severity, and progression of COVID-19 pneumonia. Scans were collected using the DICOM Image Analysis and Archive, a system that communicates with a hospital's image repository. The authors collected over 6,500 non-public chest X-rays comprising diverse COVID-19 severities, along with radiology reports and RT-PCR data. The authors provisioned one internally held-out and two external test sets to assess model generalizability and compare performance to traditional radiologist interpretation. The pipeline was evaluated on a prospective cohort of 80 radiographs, reporting a 95% diagnostic accuracy. The study mitigates bias in AI model development and demonstrates the value of an end-to-end COVID-19 triage platform.

20.
Eur Radiol; 32(1): 205-212, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34223954

ABSTRACT

OBJECTIVES: Early recognition of coronavirus disease 2019 (COVID-19) severity can guide patient management. However, it is challenging to predict when COVID-19 patients will progress to critical illness. This study aimed to develop an artificial intelligence system to predict future deterioration to critical illness in COVID-19 patients. METHODS: An artificial intelligence (AI) system in a time-to-event analysis framework was developed to integrate chest CT and clinical data for risk prediction of future deterioration to critical illness in patients with COVID-19. RESULTS: A multi-institutional international cohort of 1,051 patients with RT-PCR confirmed COVID-19 and chest CT was included in this study. Of them, 282 patients developed critical illness, defined as requiring ICU admission and/or mechanical ventilation and/or death during the hospital stay. The AI system achieved a C-index of 0.80 for predicting individual COVID-19 patients' time to critical illness. The AI system successfully stratified the patients into high-risk and low-risk groups with distinct progression risks (p < 0.0001). CONCLUSIONS: Using CT imaging and clinical data, the AI system successfully predicted time to critical illness for individual patients and identified patients at high risk. AI has the potential to accurately triage patients and facilitate personalized treatment. KEY POINT: • The AI system can predict time to critical illness for patients with COVID-19 using CT imaging and clinical data.
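The C-index quoted above measures how well a predicted risk score ranks patients by their observed time to critical illness. A minimal computation with the lifelines package is sketched below; the risk scores, follow-up times, and event indicators are hypothetical.

```python
import numpy as np
from lifelines.utils import concordance_index

rng = np.random.default_rng(7)
days_to_event = rng.integers(1, 30, size=50)   # time to critical illness or censoring
event_observed = rng.integers(0, 2, size=50)   # 1 = became critically ill, 0 = censored
risk_score = rng.random(50)                    # hypothetical AI risk output

# concordance_index expects scores that are larger for *longer* event-free time,
# so a risk score (larger = worse) is negated before being passed in.
c_index = concordance_index(days_to_event, -risk_score, event_observed)
print(f"C-index: {c_index:.2f}")
```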


Subject(s)
COVID-19; Artificial Intelligence; Humans; Retrospective Studies; SARS-CoV-2; Tomography, X-Ray Computed