Results 1 - 7 of 7
1.
Sci Rep ; 14(1): 7551, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38555414

ABSTRACT

Transfer learning plays a pivotal role in addressing the paucity of data, expediting training processes, and enhancing model performance. Nonetheless, the prevailing practice of transfer learning predominantly relies on pre-trained models designed for the natural image domain, which may not be well suited for the grayscale medical image domain. Recognizing the significance of leveraging transfer learning in medical research, we constructed class-balanced pediatric radiograph datasets, collectively referred to as PedXnets, grounded in radiographic views, using pediatric radiographs collected over 24 years at Asan Medical Center. For PedXnet pre-training, approximately 70,000 X-ray images were utilized. Three different pre-training weights of PedXnet were constructed using Inception V3 for radiographic view classification at three granularities: Model-PedXnet-7C, Model-PedXnet-30C, and Model-PedXnet-68C. We validated the transferability and the positive transfer-learning effects of the PedXnets through pediatric downstream tasks, including fracture classification and bone age assessment (BAA). Evaluation of the transfer-learning effects through classification and regression metrics showed the superior performance of the Model-PedXnets in quantitative assessments. Additionally, visual analyses confirmed that the Model-PedXnets were more focused on meaningful regions of interest.


Subject(s)
Deep Learning , Fractures, Bone , Humans , Child , Machine Learning , Radiography
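The recipe the abstract above describes — re-using pre-trained weights as a frozen feature extractor and training only a small downstream head — can be sketched in miniature. This is a toy NumPy illustration, not the authors' code: the random projection stands in for the pre-trained Inception V3 backbone, and the data, shapes, and names are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(x, w_enc):
    """Stand-in for a pre-trained backbone; its weights are never updated."""
    return np.tanh(x @ w_enc)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic downstream task: 200 "images" flattened to 64 features,
# with a binary label playing the role of fracture vs. normal.
X = rng.normal(size=(200, 64))
true_w = rng.normal(size=64)
y = (X @ true_w > 0).astype(float)

w_enc = rng.normal(size=(64, 32)) / 8.0   # "pre-trained" weights, kept frozen
feats = frozen_encoder(X, w_enc)

# Transfer learning: only the linear classification head is trained.
w_head = np.zeros(32)
for _ in range(300):
    p = sigmoid(feats @ w_head)
    w_head -= 0.5 * feats.T @ (p - y) / len(y)   # encoder untouched

acc = ((sigmoid(feats @ w_head) > 0.5) == y).mean()
```

In practice the backbone may also be fine-tuned at a lower learning rate rather than frozen; the abstract's comparison of pre-training weights corresponds to swapping which `w_enc` is loaded.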
2.
Korean J Radiol ; 25(3): 224-242, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38413108

ABSTRACT

The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes different generative AI models and their potential applications in medicine and explores the evolving landscape of generative adversarial networks and diffusion models. These models have made valuable contributions to the field of radiology. Furthermore, this review explores the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, and emphasizes the role of inversion in the investigation of generative models, outlining an approach to replicate this process. We provide an overview of large language models, such as the GPT family and bidirectional encoder representations from transformers (BERT), focusing on prominent representatives, and discuss recent initiatives involving language-vision models in radiology, including the large language and vision assistant for biomedicine (LLaVA-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.


Subject(s)
Artificial Intelligence , Radiology , Humans , Diagnostic Imaging , Software , Language
3.
Korean J Radiol ; 24(11): 1061-1080, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37724586

ABSTRACT

Artificial intelligence (AI) in radiology is a rapidly developing field, with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks of AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.


Subject(s)
Artificial Intelligence , Radiology , Humans , Prospective Studies , Radiology/methods , Supervised Machine Learning
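Among the solutions listed above, self-supervised contrastive learning has a compact core: pull the embeddings of two augmented views of the same image together and push all other images in the batch away. Below is a minimal NumPy sketch of the widely used NT-Xent (normalized temperature-scaled cross-entropy) loss; the embeddings are synthetic and the implementation is illustrative, not taken from any of the cited works.

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over a batch of paired embeddings.

    z1[i] and z2[i] are embeddings of two augmented views of image i;
    every other embedding in the combined batch acts as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)      # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
aligned = nt_xent(z1, z1 + 0.05 * rng.normal(size=(8, 16)))  # true positive pairs
shuffled = nt_xent(z1, rng.normal(size=(8, 16)))             # unrelated "views"
```

As expected, the loss is much lower when the paired embeddings really are two views of the same sample, which is the training signal that lets an encoder learn from unlabeled scans.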
4.
J Digit Imaging ; 36(5): 2003-2014, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37268839

ABSTRACT

In medicine, confounding variables in a generalized linear model are often adjusted for; however, such variables have not yet been exploited in non-linear deep learning models. Sex plays an important role in bone age estimation, and non-linear deep learning models have reported performance comparable to that of human experts. Therefore, we investigate the properties of using confounding variables in a non-linear deep learning model for bone age estimation in pediatric hand X-rays. The RSNA Pediatric Bone Age Challenge (2017) dataset is used to train the deep learning models. The RSNA test dataset is used for internal validation, and 227 pediatric hand X-ray images with bone age, chronological age, and sex information from Asan Medical Center (AMC) are used for external validation. U-Net-based autoencoder, U-Net multi-task learning (MTL), and auxiliary-accelerated MTL (AA-MTL) models are chosen. Bone age estimations adjusted by input, adjusted by output prediction, and without adjustment of the confounding variables are compared. Additionally, ablation studies for model size, auxiliary task hierarchy, and multiple tasks are conducted. Correlation and Bland-Altman plots between ground-truth and model-predicted bone ages are evaluated. Averaged saliency maps based on image registration are superimposed on representative images according to puberty stage. In the RSNA test dataset, adjusting by input shows the best performance regardless of model size, with mean absolute errors (MAEs) of 5.740, 5.478, and 5.434 months for the U-Net backbone, U-Net MTL, and AA-MTL models, respectively. However, in the AMC dataset, the AA-MTL model that adjusts the confounding variable by prediction shows the best performance, with an MAE of 8.190 months, whereas the other models perform best when adjusting the confounding variables by input. Ablation studies of task hierarchy reveal no significant differences in the results on the RSNA dataset. However, predicting the confounding variable in the second encoder layer and estimating bone age in the bottleneck layer shows the best performance on the AMC dataset. Ablation studies of multiple tasks reveal that leveraging confounding variables plays an important role regardless of the number of tasks. To estimate bone age in pediatric X-rays, the clinical setting and the balance between model size, task hierarchy, and confounding-adjustment method play important roles in performance and generalizability; therefore, proper methods of adjusting confounding variables are required to train improved deep learning-based models.


Subject(s)
Deep Learning , Radiology , Humans , Child , X-Rays , Confounding Factors, Epidemiologic , Radiography
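The study's central comparison — adjusting a confounding variable (sex) by feeding it in as an extra input versus not adjusting at all — can be reproduced in a linear toy model. The NumPy sketch below is illustrative only: the data are synthetic, and a closed-form least-squares fit stands in for the deep models.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
feats = rng.normal(size=(n, 5))            # stand-in for image-derived features
sex = rng.integers(0, 2, size=n)           # the confounding variable
# Synthetic "bone age": depends on the features plus a sex-driven offset.
bone_age = (feats @ np.array([3.0, -2.0, 1.0, 0.5, 2.0])
            + 6.0 * sex
            + rng.normal(scale=0.5, size=n))

def fit_predict(X, y):
    """Ordinary least squares with an intercept; predictions on the fit set."""
    Xb = np.column_stack([X, np.ones(len(X))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return Xb @ w

# No adjustment vs. adjustment by input (confounder concatenated to features).
mae_plain = np.abs(fit_predict(feats, bone_age) - bone_age).mean()
mae_input = np.abs(fit_predict(np.column_stack([feats, sex]), bone_age)
                   - bone_age).mean()
```

With the confounder supplied as an input, the residual error collapses to the noise floor; without it, the sex-driven offset is left unexplained. The abstract's "adjust by prediction" variant instead adds an auxiliary head that predicts the confounder, which requires a multi-output model and is not shown here.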
5.
Radiology ; 306(1): 140-149, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35997607

ABSTRACT

Background Deep learning (DL) may facilitate the diagnosis of various pancreatic lesions at imaging. Purpose To develop and validate a DL-based approach for automatic identification of patients with various solid and cystic pancreatic neoplasms at abdominal CT and compare its diagnostic performance with that of radiologists. Materials and Methods In this retrospective study, a three-dimensional nnU-Net-based DL model was trained using the CT data of patients who underwent resection for pancreatic lesions between January 2014 and March 2015 and a subset of patients without pancreatic abnormality who underwent CT in 2014. Performance of the DL-based approach to identify patients with pancreatic lesions was evaluated in a temporally independent cohort (test set 1) and a temporally and spatially independent cohort (test set 2) and was compared with that of two board-certified radiologists. Performance was assessed using receiver operating characteristic analysis. Results The study included 852 patients in the training set (median age, 60 years [range, 19-85 years]; 462 men), 603 patients in test set 1 (median age, 58 years [range, 18-82 years]; 376 men), and 589 patients in test set 2 (median age, 63 years [range, 18-99 years]; 343 men). In test set 1, the DL-based approach had an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI: 0.89, 0.94) and showed slightly worse performance in test set 2 (AUC, 0.87 [95% CI: 0.84, 0.89]). The DL-based approach showed high sensitivity in identifying patients with solid lesions of any size (98%-100%) or cystic lesions measuring 1.0 cm or larger (92%-93%), which was comparable with that of the radiologists (95%-100% for solid lesions [P = .51 to P > .99]; 93%-98% for cystic lesions ≥1.0 cm [P = .38 to P > .99]). Conclusion The deep learning-based approach demonstrated high performance in identifying patients with various solid and cystic pancreatic lesions at CT.
© RSNA, 2022. Online supplemental material is available for this article.


Subject(s)
Deep Learning , Pancreatic Cyst , Pancreatic Neoplasms , Male , Humans , Middle Aged , Retrospective Studies , Pancreatic Neoplasms/surgery , Tomography, X-Ray Computed/methods
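The AUC figures quoted above come from receiver operating characteristic analysis, whose summary statistic can be computed without plotting the curve: the AUC equals the normalized Mann-Whitney U statistic over predicted scores. A small self-contained NumPy implementation (not the study's code) is shown below.

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank (Mann-Whitney U) formulation.

    Equals the probability that a randomly chosen positive case receives
    a higher score than a randomly chosen negative case.
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for v in np.unique(scores):                 # average ranks over ties
        tie = scores == v
        ranks[tie] = ranks[tie].mean()
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Perfectly separated scores give an AUC of 1.0, perfectly inverted scores give 0.0, and uninformative scores give 0.5, which frames the reported 0.87-0.91 range.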
6.
Med Image Anal ; 81: 102489, 2022 Oct.
Article in English | MEDLINE | ID: mdl-35939912

ABSTRACT

With the recent development of deep learning, classification and segmentation tasks in computer-aided diagnosis (CAD) using non-contrast head computed tomography (NCCT) for intracranial hemorrhage (ICH) have become popular in emergency medical care. However, a few challenges remain, such as the difficulty of training due to the heterogeneity of ICH, the requirement for high performance in both sensitivity and specificity, the excessive cost of patient-level predictions, and vulnerability to real-world external data. In this study, we propose a supervised multi-task aiding representation transfer learning network (SMART-Net) for ICH to overcome these challenges. The proposed framework consists of upstream and downstream components. In the upstream component, a weight-shared encoder is trained as a robust feature extractor that captures global features by performing slice-level multi-pretext tasks (classification, segmentation, and reconstruction). Adding a consistency loss to regularize discrepancies between the classification and segmentation heads significantly improved representation and transferability. In the downstream component, transfer learning is conducted with the pre-trained encoder and a 3D operator (classifier or segmenter) for volume-level tasks. Extensive ablation studies were conducted, and SMART-Net was developed with the optimal multi-pretext task combination and 3D operator. Experimental results on four test sets (one internal and two external test sets that reflect the natural incidence of ICH, and one public test set with a relatively small number of ICH cases) indicate that SMART-Net has better robustness and performance than previous methods in volume-level ICH classification and segmentation. All code is available at https://github.com/babbu3682/SMART-Net.


Subject(s)
Intracranial Hemorrhages , Tomography, X-Ray Computed , Diagnosis, Computer-Assisted , Humans , Image Processing, Computer-Assisted/methods , Intracranial Hemorrhages/diagnostic imaging , Sensitivity and Specificity
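The upstream multi-pretext objective described above — classification, segmentation, and reconstruction losses plus a consistency term that penalizes disagreement between the classification and segmentation heads — can be written down compactly. The NumPy sketch below is one plausible rendering of that idea, with an invented weighting `lam` and toy tensors; SMART-Net's actual loss lives in the linked repository.

```python
import numpy as np

def bce(p, y, eps=1e-7):
    """Binary cross-entropy, clipped for numerical safety."""
    p = np.clip(p, eps, 1.0 - eps)
    return -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p)).mean()

def multitask_loss(cls_prob, cls_y, seg_prob, seg_y, recon, img, lam=0.1):
    l_cls = bce(cls_prob, cls_y)             # slice-level classification
    l_seg = bce(seg_prob, seg_y)             # pixel-level segmentation
    l_rec = np.mean((recon - img) ** 2)      # image reconstruction
    # Consistency: a slice scored hemorrhage-positive should also carry a
    # confident mask somewhere, and vice versa.
    mask_peak = seg_prob.reshape(len(cls_prob), -1).max(axis=1)
    l_con = np.mean((cls_prob - mask_peak) ** 2)
    return l_cls + l_seg + l_rec + lam * l_con

rng = np.random.default_rng(0)
img = rng.random((2, 4, 4))                  # two toy NCCT slices
recon = img + 0.1                            # imperfect reconstruction
seg_y = np.zeros((2, 4, 4))
seg_y[0, 1:3, 1:3] = 1.0                     # hemorrhage only in slice 0
seg_prob = np.clip(seg_y * 0.8 + 0.05, 0.0, 1.0)
cls_y = np.array([1.0, 0.0])
cls_prob = np.array([0.9, 0.1])
loss = multitask_loss(cls_prob, cls_y, seg_prob, seg_y, recon, img)
```

Because the encoder is shared across all three heads, minimizing this combined loss is what forces it to learn the transferable slice-level representation the downstream 3D operator reuses.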
7.
Comput Methods Programs Biomed ; 215: 106627, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35032722

ABSTRACT

BACKGROUND AND OBJECTIVE: Bone suppression images (BSIs) of chest radiographs (CXRs) have been proven to improve the diagnosis of pulmonary diseases. To acquire BSIs, dual-energy subtraction (DES) or a deep-learning-based model trained with DES-based BSIs has been used. However, neither technique can be applied to pediatric patients owing to the harmful effects of DES. In this study, we developed a novel method for bone suppression in pediatric CXRs. METHODS: First, a model was developed by training a 2-channel contrastive-unpaired-image-translation network to generate pseudo-CXRs from adult digitally reconstructed radiographs (DRRs), which were rendered from computed tomography images. Second, this model was applied to 129 pediatric DRRs to generate paired training data of pseudo-pediatric CXRs. Finally, by training a U-Net with these paired data, a bone suppression model for pediatric CXRs was developed. RESULTS: The evaluation metrics were peak signal-to-noise ratio, root mean absolute error, and structural similarity index measure at the soft-tissue and bone regions of the lung. In addition, an expert radiologist scored the effectiveness of the BSIs on a scale of 1-5. The obtained score of 3.31 ± 0.48 indicates that the BSIs show homogeneous bone removal despite subtle residual bone shadows. CONCLUSION: Our method shows that pixel intensity at soft-tissue regions is preserved and bones are well subtracted; this can be useful for detecting early pulmonary disease in pediatric CXRs.


Subject(s)
Deep Learning , Lung Diseases , Adult , Bone and Bones/diagnostic imaging , Child , Humans , Radiography, Thoracic , Tomography, X-Ray Computed
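Of the quantitative metrics named in the abstract above, peak signal-to-noise ratio has the simplest definition: a log-scaled mean squared error against the reference image. A minimal NumPy version is shown below (the paper's own evaluation code is not reproduced here).

```python
import numpy as np

def psnr(ref, test, max_val=1.0):
    """Peak signal-to-noise ratio, in dB, between reference and test images."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0.0:
        return float("inf")   # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# A bone-suppressed image that deviates by a constant 0.1 from the
# soft-tissue reference scores 20 dB on a [0, 1] intensity scale.
score = psnr(np.zeros((4, 4)), np.full((4, 4), 0.1))
```

Higher is better; computing the metric separately over soft-tissue and bone masks, as the study does, only changes which pixels enter the mean.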