Results 1 - 11 of 11
1.
Korean J Radiol; 25(3): 224-242, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38413108

ABSTRACT

The emergence of Chat Generative Pre-trained Transformer (ChatGPT), a chatbot developed by OpenAI, has garnered interest in the application of generative artificial intelligence (AI) models in the medical field. This review summarizes different generative AI models and their potential applications in medicine and explores the evolving landscape of generative adversarial networks and diffusion models, which have made valuable contributions to radiology. It also examines the significance of synthetic data in addressing privacy concerns and augmenting data diversity and quality within the medical domain, emphasizes the role of inversion in the investigation of generative models, and outlines an approach to replicate this process. We provide an overview of large language models, such as GPT and Bidirectional Encoder Representations from Transformers (BERT), focusing on prominent representatives, and discuss recent initiatives involving language-vision models in radiology, including the Large Language and Vision Assistant for Biomedicine (LLaVA-Med), to illustrate their practical application. This comprehensive review offers insights into the wide-ranging applications of generative AI models in clinical research and emphasizes their transformative potential.
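To make the inversion step concrete, the following is a minimal, hedged sketch of latent-space inversion for a pretrained generative model: a latent code is optimized so that the generated image reconstructs a target image. The `generator` interface, latent dimensionality, and pixel-wise loss are assumptions for illustration, not the review's protocol.

```python
import torch

def invert_image(generator, target, latent_dim=512, steps=500, lr=0.05):
    """Hypothetical GAN-inversion sketch: optimize z so that generator(z) ~= target."""
    z = torch.randn(1, latent_dim, requires_grad=True)    # assumed latent shape
    optimizer = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        reconstruction = generator(z)                      # synthesized image
        loss = torch.mean((reconstruction - target) ** 2)  # pixel-wise reconstruction loss
        loss.backward()
        optimizer.step()
    return z.detach()
```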


Subject(s)
Artificial Intelligence, Radiology, Humans, Diagnostic Imaging, Software, Language
3.
Sci Rep; 13(1): 13420, 2023 Aug 17.
Article in English | MEDLINE | ID: mdl-37591967

ABSTRACT

The Coronavirus Disease 2019 (COVID-19) is transitioning into the endemic phase. Nonetheless, it is crucial to remain mindful that pandemics related to infectious respiratory diseases (IRDs) can emerge unpredictably. Therefore, we aimed to develop and validate a severity assessment model for IRDs, including COVID-19, influenza, and novel influenza, using CT images from a multi-centre data set. Of the 805 COVID-19 patients collected from a single centre, 649 were used for training and 156 for internal validation (D1). Additionally, two external validation sets were obtained from 7 cohorts: 1138 patients with COVID-19 (D2) and 233 patients with influenza and novel influenza (D3). A hybrid model, referred to as Hybrid-DDM, was constructed by combining two deep learning models and a machine learning model. Across datasets D1, D2, and D3, Hybrid-DDM exhibited significantly improved performance compared with the baseline model. The areas under the receiver operating characteristic curves (AUCs) were 0.830 versus 0.767 (p = 0.036) in D1, 0.801 versus 0.753 (p < 0.001) in D2, and 0.774 versus 0.668 (p < 0.001) in D3. This study indicates that the Hybrid-DDM model, trained using COVID-19 patient data, is effective and can also be applied to patients with other types of viral pneumonia.
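As an illustration of how two deep models and a classical machine learning model can be combined, here is a generic stacking sketch; the actual Hybrid-DDM design is not specified in the abstract, so the feature construction and meta-classifier below are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def fit_hybrid(scores_dl1, scores_dl2, labels):
    """Stack per-patient outputs of two deep models with a logistic-regression meta-model."""
    features = np.column_stack([scores_dl1, scores_dl2])
    return LogisticRegression().fit(features, labels)

def hybrid_auc(meta_model, scores_dl1, scores_dl2, labels):
    """AUC of the stacked model, the metric reported for D1-D3 in the abstract."""
    probs = meta_model.predict_proba(np.column_stack([scores_dl1, scores_dl2]))[:, 1]
    return roc_auc_score(labels, probs)
```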


Subject(s)
COVID-19, Deep Learning, Human Influenza, Viral Pneumonia, Humans, Viral Pneumonia/diagnosis, Machine Learning
4.
Korean J Radiol; 24(8): 807-820, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37500581

ABSTRACT

OBJECTIVE: To assess whether computed tomography (CT) conversion across different scan parameters and manufacturers using a routable generative adversarial network (RouteGAN) can improve the accuracy and reduce the variability of deep learning-based automated quantification of interstitial lung disease (ILD). MATERIALS AND METHODS: This study included patients with ILD who underwent thin-section CT. Unmatched CT images obtained using scanners from four manufacturers (vendors A-D), standard or low radiation doses, and sharp or medium kernels were classified into groups 1-7 according to acquisition conditions. CT images in groups 2-7 were converted into the target CT style (group 1: vendor A, standard dose, and sharp kernel) using a RouteGAN. ILD was quantified on original and converted CT images using deep learning-based software (Aview, Coreline Soft). Quantification accuracy was analyzed using the Dice similarity coefficient (DSC) and pixel-wise overlap accuracy metrics against manual quantification by a radiologist. Five radiologists evaluated quantification accuracy using a 10-point visual scoring system. RESULTS: Three hundred and fifty CT slices from 150 patients (mean age: 67.6 ± 10.7 years; 56 females) were included. The overlap accuracies for quantifying total abnormalities in groups 2-7 improved after CT conversion (original vs. converted: 0.63 vs. 0.68 for DSC, 0.66 vs. 0.70 for pixel-wise recall, and 0.68 vs. 0.73 for pixel-wise precision; P < 0.002 for all). The DSCs of fibrosis score, honeycombing, and reticulation increased significantly after CT conversion (0.32 vs. 0.64, 0.19 vs. 0.47, and 0.23 vs. 0.54; P < 0.002 for all), whereas those of ground-glass opacity, consolidation, and emphysema did not change significantly or decreased slightly. The radiologists' scores were significantly higher (P < 0.001) and less variable on converted CT. CONCLUSION: CT conversion using a RouteGAN can improve the accuracy and reduce the variability of deep learning-based ILD quantification on CT images obtained using different scan parameters and manufacturers.
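The overlap metrics used here are standard; a minimal sketch of the Dice similarity coefficient and pixel-wise precision/recall between an automated and a manual segmentation mask might look as follows (mask shapes and binarization are assumptions).

```python
import numpy as np

def overlap_metrics(pred_mask, ref_mask, eps=1e-8):
    """DSC, pixel-wise precision, and pixel-wise recall for binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    ref = np.asarray(ref_mask, dtype=bool)
    tp = np.logical_and(pred, ref).sum()
    dsc = 2.0 * tp / (pred.sum() + ref.sum() + eps)
    precision = tp / (pred.sum() + eps)   # fraction of predicted pixels that are correct
    recall = tp / (ref.sum() + eps)       # fraction of reference pixels that are detected
    return dsc, precision, recall
```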


Subject(s)
Emphysema, Interstitial Lung Diseases, Pulmonary Emphysema, Female, Humans, Middle Aged, Aged, Interstitial Lung Diseases/diagnostic imaging, X-Ray Computed Tomography/methods, Lung/diagnostic imaging
5.
Nat Commun; 13(1): 4251, 2022 Jul 22.
Article in English | MEDLINE | ID: mdl-35869112

ABSTRACT

Triage is essential for the early diagnosis and reporting of neurologic emergencies. Herein, we report the development of an anomaly detection algorithm (ADA) with a deep generative model trained on brain computed tomography (CT) images of healthy individuals that reprioritizes radiology worklists and provides lesion attention maps for brain CT images with critical findings. In the internal and external validation datasets, the ADA achieved area under the curve values (95% confidence interval) of 0.85 (0.81-0.89) and 0.87 (0.85-0.89), respectively, for detecting emergency cases. In a clinical simulation test of an emergency cohort, the median wait time was significantly shorter post-ADA triage than pre-ADA triage by 294 s (422.5 s [interquartile range, IQR 299] to 70.5 s [IQR 168]), and the median radiology report turnaround time was significantly faster post-ADA triage than pre-ADA triage by 297.5 s (445.0 s [IQR 298] to 88.5 s [IQR 179]) (all p < 0.001).
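A minimal sketch of reconstruction-based anomaly scoring with a generative model trained on healthy brain CT is shown below; the scoring rule and the `model` interface are assumptions, since the published ADA's exact formulation is not given in the abstract.

```python
import torch

def anomaly_score(model, ct_slice):
    """Score a CT slice by how poorly a healthy-only generative model reconstructs it."""
    with torch.no_grad():
        reconstruction = model(ct_slice)                  # assumed reconstruction interface
        error_map = torch.abs(ct_slice - reconstruction)  # candidate lesion-attention map
    return error_map.mean().item(), error_map

def reprioritize(worklist, model):
    """Sort studies so the most anomalous (likely critical) cases are read first."""
    scored = [(anomaly_score(model, study)[0], study) for study in worklist]
    return [study for _, study in sorted(scored, key=lambda pair: pair[0], reverse=True)]
```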


Subject(s)
Hospital Emergency Service, Triage, Algorithms, Humans, Radiography, X-Ray Computed Tomography/methods, Triage/methods
6.
Sci Rep; 11(1): 19997, 2021 Oct 7.
Article in English | MEDLINE | ID: mdl-34620976

ABSTRACT

Despite being the gold standard for the diagnosis of osteoporosis, dual-energy X-ray absorptiometry (DXA) cannot be widely used as a screening tool for osteoporosis. This study aimed to predict osteoporosis from simple hip radiographs using a deep learning algorithm. A total of 1001 datasets of proximal femur DXA with matched same-side cropped simple hip bone radiographic images of female patients aged ≥ 55 years were collected. Of these, 504 patients had osteoporosis (T-score ≤ -2.5), and 497 did not. The 1001 images were randomly divided into three sets: 800 images for training, 100 for validation, and 101 for testing. Based on VGG16 equipped with a nonlocal neural network, we developed a deep neural network (DNN) model. We calculated the confusion matrix and evaluated the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), and plotted the receiver operating characteristic (ROC) curve. A gradient-based class activation map (Grad-CAM) overlaid on the original image was also used to visualize model performance. Additionally, we performed external validation using 117 datasets. Our final DNN model showed an overall accuracy of 81.2%, sensitivity of 91.1%, and specificity of 68.9%. The PPV was 78.5%, and the NPV was 86.1%. The area under the ROC curve was 0.867, indicating reasonable performance for screening osteoporosis with simple hip radiography. The external validation set confirmed the model's performance, with an overall accuracy of 71.8% and an AUC of 0.700. All Grad-CAM results from both internal and external validation sets appropriately matched the proximal femur cortex and trabecular patterns of the radiographs. The DNN model could be a useful screening tool for easy prediction of osteoporosis in real-world clinical settings.
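For reference, the reported screening metrics can all be derived from a binary confusion matrix; a short sketch (using scikit-learn, with predicted labels and probabilities as hypothetical inputs) is given below.

```python
from sklearn.metrics import confusion_matrix, roc_auc_score

def screening_metrics(y_true, y_pred, y_prob):
    """Accuracy, sensitivity, specificity, PPV, NPV, and AUC for a binary screening model."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # recall on the osteoporosis-positive class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "auc": roc_auc_score(y_true, y_prob),
    }
```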


Subject(s)
Hip/diagnostic imaging, Osteoporosis/diagnostic imaging, Radiography/methods, Photon Absorptiometry/methods, Aged, Aged 80 and over, Deep Learning, Female, Humans, Middle Aged, Neural Networks (Computer), Retrospective Studies, Sensitivity and Specificity
7.
JMIR Med Inform; 8(8): e18089, 2020 Aug 4.
Article in English | MEDLINE | ID: mdl-32749222

ABSTRACT

BACKGROUND: Computer-aided diagnosis of chest x-ray images using deep learning is a widely studied modality in medicine. Many studies are based on public datasets, such as the National Institutes of Health (NIH) dataset and the Stanford CheXpert dataset. However, these datasets were preprocessed with classical natural language processing, which may introduce a certain degree of label error. OBJECTIVE: This study aimed to investigate the robustness of deep convolutional neural networks (CNNs) for binary classification of posteroanterior chest x-rays under random incorrect labeling. METHODS: We trained and validated the CNN architecture with different levels of label noise in 3 datasets, namely, Asan Medical Center-Seoul National University Bundang Hospital (AMC-SNUBH), NIH, and CheXpert, and tested the models on each test set. The disease status of each chest x-ray in our dataset was confirmed by a thoracic radiologist using computed tomography (CT). Receiver operating characteristic (ROC) curves and areas under the curve (AUCs) were evaluated for each test. Randomly chosen chest x-rays from the public datasets were evaluated by 3 physicians and 1 thoracic radiologist. RESULTS: In the public NIH and CheXpert datasets, AUCs did not drop significantly until 16% label noise, whereas the AUC of the AMC-SNUBH dataset decreased significantly from 2% label noise. Evaluation of the public datasets by 3 physicians and 1 thoracic radiologist showed an accuracy of 65%-80%. CONCLUSIONS: The deep learning-based computer-aided diagnosis model is sensitive to label noise, and computer-aided diagnosis with inaccurate labels is not credible. Furthermore, open datasets such as NIH and CheXpert need to be distilled before being used for deep learning-based computer-aided diagnosis.
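To illustrate the experimental manipulation, a simple sketch of injecting random label noise into binary labels (e.g., at 2% or 16% rates) follows; the uniform flipping scheme is an assumption consistent with "random incorrect labeling", not necessarily the authors' exact procedure.

```python
import numpy as np

def inject_label_noise(labels, noise_rate, seed=0):
    """Flip a random fraction of binary labels to simulate noisy annotations."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(labels).copy()
    flip = rng.random(noisy.shape[0]) < noise_rate  # e.g., noise_rate = 0.02 or 0.16
    noisy[flip] = 1 - noisy[flip]
    return noisy
```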

8.
Sci Rep; 10(1): 13950, 2020 Aug 18.
Article in English | MEDLINE | ID: mdl-32811848

ABSTRACT

While high-resolution proton density-weighted magnetic resonance imaging (MRI) of intracranial vessel walls is important for the precise diagnosis of intracranial artery disease, its long acquisition time is a clinical burden. Compressed sensing MRI is a promising technology whose acceleration factors could potentially reduce scan time. However, high acceleration factors result in degraded image quality. Although recent advances in deep-learning-based image restoration algorithms can alleviate this problem, clinical image pairs used for deep learning training typically do not align pixel-wise. Therefore, in this study, two deep-learning-based denoising algorithms, one based on self-supervised learning and one on unsupervised learning, are proposed; both are applicable to clinical datasets that are not aligned pixel-wise. The two approaches were compared quantitatively and qualitatively. Both methods produced promising results in terms of image denoising and visual grading. While the image noise and signal-to-noise ratio of self-supervised learning were superior to those of unsupervised learning, unsupervised learning was preferable to self-supervised learning in terms of radiomic feature reproducibility.
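As one concrete example of the quantitative comparison, a region-of-interest signal-to-noise ratio can be computed as below; the ROI placement and the exact noise estimate used in the study are not specified, so this is only an assumed formulation.

```python
import numpy as np

def roi_snr(image, signal_roi, noise_roi, eps=1e-8):
    """SNR from two boolean ROI masks: mean signal intensity over noise standard deviation."""
    signal = float(np.mean(image[signal_roi]))
    noise_sd = float(np.std(image[noise_roi]))
    return signal / (noise_sd + eps)
```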


Subject(s)
Cerebral Arteries/diagnostic imaging, Image Enhancement/methods, Computer-Assisted Image Processing/methods, Adult, Aged, Algorithms, Deep Learning, Female, Healthy Volunteers, Humans, Three-Dimensional Imaging/methods, Machine Learning, Magnetic Resonance Imaging/methods, Male, Middle Aged, Prospective Studies, Reproducibility of Results, Signal-to-Noise Ratio
9.
Neurospine; 17(2): 471-472, 2020 Jun.
Article in English | MEDLINE | ID: mdl-32615703
10.
Eur Radiol; 30(9): 4943-4951, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32350657

ABSTRACT

OBJECTIVES: To investigate the optimal input matrix size for deep learning-based computer-aided detection (CAD) of nodules and masses on chest radiographs. METHODS: We retrospectively collected 2088 abnormal (nodule/mass) and 352 normal chest radiographs from two institutions. Three thoracic radiologists annotated 2758 abnormality regions. A total of 1736 abnormal chest radiographs were used for training and tuning convolutional neural networks (CNNs). The remaining 352 abnormal and 352 normal chest radiographs were used as a test set. Two CNNs (Mask R-CNN and RetinaNet) were selected to evaluate the effect of different square input matrix sizes for chest radiographs (256, 448, 896, 1344, and 1792). For comparison, the figure of merit (FOM) of the jackknife free-response receiver operating characteristic analysis and sensitivity were obtained. RESULTS: In Mask R-CNN, matrix sizes 896 and 1344 achieved significantly higher FOMs (0.869 and 0.856, respectively) for detecting abnormalities than 256, 448, and 1792 (0.667-0.820) (p < 0.05). In RetinaNet, matrix size 896 achieved a significantly higher FOM (0.906) than the others (0.329-0.832) (p < 0.05). Sensitivity tended to increase with lesion size: for small nodules (< 10 mm), sensitivities were 0.418 and 0.409, whereas for masses they were 0.937 and 0.956. Matrix sizes 896 and 1344 in Mask R-CNN and matrix size 896 in RetinaNet showed significantly higher sensitivity than the others (p < 0.05). CONCLUSIONS: Matrix size 896 yielded the highest performance across various sizes of abnormalities in both CNN detection models. Choosing the optimal matrix size of chest radiographs could improve CAD performance without additional training data. KEY POINTS: • Input matrix size significantly affected the performance of a deep learning-based CAD system for detecting nodules or masses on chest radiographs. • Matrix size 896 showed the best performance in two different CNN detection models. • The optimal matrix size of chest radiographs could enhance CAD performance without additional training data.
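A short sketch of preparing the different square input matrix sizes compared in the study is shown below; the interpolation mode and tensor layout are assumptions, not the authors' exact preprocessing.

```python
import torch
import torch.nn.functional as F

MATRIX_SIZES = (256, 448, 896, 1344, 1792)  # input sizes compared in the study

def resize_radiograph(image, size):
    """Resample a chest radiograph tensor of shape (N, 1, H, W) to a square matrix."""
    return F.interpolate(image, size=(size, size), mode="bilinear", align_corners=False)

def build_variants(image):
    """One resized copy per candidate matrix size (illustrative only)."""
    return {size: resize_radiograph(image, size) for size in MATRIX_SIZES}
```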


Subject(s)
Deep Learning, Lung Neoplasms/diagnosis, Lung/diagnostic imaging, Precancerous Lesions/diagnosis, Computer-Assisted Radiographic Image Interpretation/methods, Thoracic Radiography/methods, Solitary Pulmonary Nodule/diagnosis, Aged, Computer-Assisted Diagnosis, Female, Humans, Male, Middle Aged, Neural Networks (Computer), Radiography, Retrospective Studies
11.
Neurospine; 16(4): 657-668, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31905454

ABSTRACT

The artificial neural network (ANN), a machine learning (ML) algorithm inspired by the human brain, is built by connecting layers of artificial neurons. However, owing to limited computing power and insufficient training data, ANNs long suffered from overfitting and vanishing-gradient problems when training deep networks. With advances in computing power through graphics processing units and the availability of large-scale data, deep neural networks now outperform humans and other ML methods in computer vision and speech recognition tasks. These capabilities have recently been applied to healthcare problems, including computer-aided detection/diagnosis, disease prediction, image segmentation, and image generation. In this review article, we explain the history, development, and applications of deep learning in medical imaging.
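Purely as an illustration of "connecting layers of artificial neurons", a minimal multilayer perceptron in PyTorch might look like this; the layer sizes and activation are arbitrary choices, not something prescribed by the review.

```python
import torch.nn as nn

# A minimal, illustrative ANN: fully connected layers of artificial neurons with a ReLU activation.
tiny_ann = nn.Sequential(
    nn.Linear(784, 128),  # input layer -> hidden layer (sizes are arbitrary examples)
    nn.ReLU(),
    nn.Linear(128, 10),   # hidden layer -> output layer
)
```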
