Results 1 - 15 of 15
1.
J Digit Imaging ; 36(5): 2003-2014, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37268839

ABSTRACT

In medicine, confounding variables in a generalized linear model are often adjusted for; however, these variables have not yet been exploited in non-linear deep learning models. Sex plays an important role in bone age estimation, and non-linear deep learning models have reported performance comparable to that of human experts. Therefore, we investigate the properties of using confounding variables in a non-linear deep learning model for bone age estimation in pediatric hand X-rays. The RSNA Pediatric Bone Age Challenge (2017) dataset is used to train the deep learning models. The RSNA test dataset is used for internal validation, and 227 pediatric hand X-ray images with bone age, chronological age, and sex information from Asan Medical Center (AMC) are used for external validation. U-Net based autoencoder, U-Net multi-task learning (MTL), and auxiliary-accelerated MTL (AA-MTL) models are chosen. Bone age estimation adjusted by input, adjusted by output prediction, and without adjustment of the confounding variables are compared. Additionally, ablation studies for model size, auxiliary task hierarchy, and multiple tasks are conducted. Correlation and Bland-Altman plots between ground-truth and model-predicted bone ages are evaluated. Averaged saliency maps based on image registration are superimposed on representative images according to puberty stage. In the RSNA test dataset, adjusting by input shows the best performance regardless of model size, with mean absolute errors (MAEs) of 5.740, 5.478, and 5.434 months for the U-Net backbone, U-Net MTL, and AA-MTL models, respectively. However, in the AMC dataset, the AA-MTL model that adjusts the confounding variable by prediction shows the best performance, with an MAE of 8.190 months, whereas the other models perform best when the confounding variables are adjusted by input. Ablation studies of task hierarchy reveal no significant differences in the results on the RSNA dataset. However, predicting the confounding variable in the second encoder layer and estimating bone age in the bottleneck layer shows the best performance on the AMC dataset. Ablation studies of multiple tasks reveal that leveraging confounding variables plays an important role regardless of the number of tasks. To estimate bone age in pediatric X-rays, the clinical setting and the balance between model size, task hierarchy, and confounding-adjustment method play important roles in performance and generalizability; therefore, proper methods of adjusting confounding variables are required to train improved deep learning-based models.
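As a concrete illustration of the two adjustment strategies compared above, the following sketch (PyTorch) contrasts adjusting the confounder by input with adjusting it by an auxiliary prediction head; the backbone, feature size, and layer names are illustrative assumptions, not the paper's architecture.

```python
# Illustrative sketch (PyTorch, hypothetical layer sizes): two ways to handle a
# confounding variable such as sex in a bone age regression network.
import torch
import torch.nn as nn

class BoneAgeByInput(nn.Module):
    """Adjust by input: the confounder is concatenated to the image features."""
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone                      # any CNN returning feat_dim features
        self.head = nn.Linear(feat_dim + 1, 1)        # +1 for the sex indicator

    def forward(self, image, sex):
        feats = self.backbone(image)
        return self.head(torch.cat([feats, sex.unsqueeze(1)], dim=1))

class BoneAgeByPrediction(nn.Module):
    """Adjust by prediction: the confounder is an auxiliary output (multi-task)."""
    def __init__(self, backbone, feat_dim=512):
        super().__init__()
        self.backbone = backbone
        self.age_head = nn.Linear(feat_dim, 1)        # bone age regression
        self.sex_head = nn.Linear(feat_dim, 1)        # auxiliary sex classification

    def forward(self, image):
        feats = self.backbone(image)
        return self.age_head(feats), torch.sigmoid(self.sex_head(feats))
```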


Subjects
Deep Learning, Radiology, Humans, Child, X-Rays, Confounding Factors (Epidemiology), Radiography
2.
BMC Public Health ; 20(1): 1402, 2020 Sep 14.
Article in English | MEDLINE | ID: mdl-32928163

ABSTRACT

BACKGROUND: The association between long-term exposure to air pollutants, including nitrogen dioxide (NO2), carbon monoxide (CO), sulfur dioxide (SO2), ozone (O3), and particulate matter 10 µm or less in diameter (PM10), and mortality from ischemic heart disease (IHD), cerebrovascular disease (CVD), pneumonia (PN), and chronic lower respiratory disease (CLRD) is unclear. We investigated whether living in an administrative district with heavy air pollution is associated with an increased risk of mortality from these diseases through an ecological study using South Korean administrative data spanning 19 years. METHODS: A total of 249 Si-Gun-Gus, the units of administrative districts in South Korea, were studied. In each district, the daily concentrations of CO, SO2, NO2, O3, and PM10 were averaged over 19 years (2001-2018). Age-adjusted mortality rates from IHD, CVD, PN, and CLRD for each district were averaged over the same study period. Multivariate beta-regression analysis was performed to estimate the associations between air pollutant concentrations and mortality rates, after adjusting for confounding factors including altitude, population density, higher-education rate, smoking rate, obesity rate, and gross regional domestic product per capita. Associations were also estimated for two subgrouping schemes: Capital and non-Capital areas (77:172 districts) and urban and rural areas (168:81 districts). RESULTS: For IHD, higher SO2 concentrations were significantly associated with a higher mortality rate, whereas the other air pollutants showed null associations. For CVD, SO2 and PM10 concentrations were significantly associated with a higher mortality rate. For PN, O3 concentrations had significant positive associations with mortality, while SO2, NO2, and PM10 concentrations had significant negative associations. For CLRD, O3 concentrations were associated with an increased mortality rate, while CO, NO2, and PM10 concentrations had negative associations. In the subgroup analysis, positive associations between SO2 concentrations and IHD mortality were observed consistently in all subgroups, while other pollutant-disease pairs showed null or mixed associations. CONCLUSION: Long-term exposure to high SO2 concentrations was significantly and consistently associated with a high mortality rate nationwide, in Capital and non-Capital areas, and in urban and rural areas. Associations between other air pollutants and disease-related mortality need to be investigated in further studies.
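The district-level regression described in METHODS can be sketched as follows, assuming statsmodels ≥ 0.13 for its beta-regression model; the file and column names are hypothetical stand-ins for the study's variables.

```python
# Sketch of the district-level (ecological) analysis: a beta regression of an
# age-adjusted mortality rate on long-term mean pollutant concentrations plus
# confounders. Column names are hypothetical; requires statsmodels >= 0.13.
import pandas as pd
from statsmodels.othermod.betareg import BetaModel

districts = pd.read_csv("districts.csv")          # one row per Si-Gun-Gu (hypothetical file)

# Beta regression needs the outcome strictly in (0, 1), so rescale the rate per 100,000.
districts["ihd_rate_frac"] = districts["ihd_mortality_per_100k"] / 100_000

formula = ("ihd_rate_frac ~ so2 + no2 + co + o3 + pm10 + altitude + "
           "pop_density + higher_edu_rate + smoking_rate + obesity_rate + grdp_per_capita")
result = BetaModel.from_formula(formula, data=districts).fit()
print(result.summary())
```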


Subjects
Air Pollutants, Air Pollution, Ozone, Air Pollutants/adverse effects, Air Pollutants/analysis, Air Pollution/adverse effects, Air Pollution/analysis, Environmental Exposure/adverse effects, Environmental Exposure/analysis, Humans, Nitrogen Dioxide/adverse effects, Nitrogen Dioxide/analysis, Ozone/analysis, Particulate Matter/adverse effects, Particulate Matter/analysis, Republic of Korea/epidemiology, Sulfur Dioxide/analysis
3.
BMC Endocr Disord ; 18(1): 61, 2018 Sep 05.
Article in English | MEDLINE | ID: mdl-30185190

ABSTRACT

BACKGROUND: The aim of the present study is to evaluate the association between BMD and type 2 DM status in middle-aged and elderly men. To investigate a possible correlation, the present study used the BMD dataset of the Korea National Health and Nutrition Examination Survey (KNHANES) from 2008 to 2011. METHODS: In total, 37,753 individuals participated in the health examination surveys between 2008 and 2011, of whom 3383 males aged ≥50 years were eligible. They underwent BMD measurement by dual-energy X-ray absorptiometry (DXA). The fasting plasma glucose and insulin levels of participants were also measured. RESULTS: Men with prediabetes and diabetes had significantly higher mean BMD at all measured sites than control men, irrespective of DM status. This was confirmed by multivariable linear regression analyses. DM duration was an important factor affecting BMD: patients with DM for >5 years had lower mean BMD in the total hip and femoral neck than those with DM for ≤5 years. In multivariable linear regression analyses, patients with DM for >5 years had significantly lower mean BMD at the femoral neck than those with DM for ≤5 years. CONCLUSIONS: DM duration was significantly associated with reduced femoral neck BMD.
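A minimal sketch of the multivariable linear regression reported above, assuming a hypothetical KNHANES extract; the covariates shown are illustrative, not the study's exact adjustment set.

```python
# Minimal sketch of the multivariable linear regression: femoral neck BMD
# regressed on DM duration with illustrative covariates (column names hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

men = pd.read_csv("knhanes_men_50plus.csv")       # hypothetical extract of KNHANES 2008-2011

model = smf.ols(
    "femoral_neck_bmd ~ C(dm_duration_gt5y) + age + bmi + smoking + physical_activity",
    data=men,
).fit()
print(model.summary())
```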


Subjects
Bone Density/physiology, Diabetes Mellitus, Type 2/diagnostic imaging, Diabetes Mellitus, Type 2/epidemiology, Disease Progression, Nutrition Surveys/trends, Population Surveillance, Aged, Cross-Sectional Studies, Diabetes Mellitus, Type 2/blood, East Asia/epidemiology, Femur Neck/diagnostic imaging, Humans, Male, Middle Aged, Population Surveillance/methods, Republic of Korea/epidemiology
4.
Sci Rep ; 14(1): 7551, 2024 Mar 30.
Article in English | MEDLINE | ID: mdl-38555414

ABSTRACT

Transfer learning plays a pivotal role in addressing the paucity of data, expediting training, and enhancing model performance. Nonetheless, the prevailing practice of transfer learning predominantly relies on pre-trained models designed for the natural image domain, which may not be well suited to the grayscale medical image domain. Recognizing the significance of leveraging transfer learning in medical research, we constructed class-balanced pediatric radiograph datasets, collectively referred to as PedXnets, grounded in radiographic views, using pediatric radiographs collected over 24 years at Asan Medical Center. Approximately 70,000 X-ray images were used for PedXnet pre-training. Three pre-training weights of PedXnet were constructed using Inception V3 for radiographic view classification at different granularities: Model-PedXnet-7C, Model-PedXnet-30C, and Model-PedXnet-68C. We validated the transferability and the positive effects of transfer learning from PedXnets through pediatric downstream tasks, including fracture classification and bone age assessment (BAA). Evaluation of the transfer learning effects with classification and regression metrics showed the superior performance of the Model-PedXnets in quantitative assessments. Additionally, visual analyses confirmed that the Model-PedXnets focused more on meaningful regions of interest.
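The pre-train-then-transfer workflow can be sketched as below with TensorFlow/Keras and Inception V3; the class count, input size, and dataset names are illustrative assumptions, and this is not the released PedXnet code.

```python
# Sketch of the pre-train-then-transfer idea with Inception V3 (TensorFlow/Keras).
# The view-classification pre-training stands in for PedXnet; file names, class
# counts, and input size are illustrative assumptions, not the released models.
import tensorflow as tf

# 1) Pre-train a radiographic-view classifier (e.g., 7 view classes).
base = tf.keras.applications.InceptionV3(weights=None, include_top=False,
                                          input_shape=(299, 299, 3), pooling="avg")
view_head = tf.keras.layers.Dense(7, activation="softmax")(base.output)
pretrain_model = tf.keras.Model(base.input, view_head)
pretrain_model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# pretrain_model.fit(view_dataset, epochs=...)   # ~70,000 pediatric radiographs

# 2) Reuse the pre-trained convolutional base for a downstream task (e.g., bone age).
bone_age_head = tf.keras.layers.Dense(1)(base.output)   # regression head
downstream_model = tf.keras.Model(base.input, bone_age_head)
downstream_model.compile(optimizer="adam", loss="mae")
# downstream_model.fit(baa_dataset, epochs=...)
```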


Subjects
Deep Learning, Fractures, Bone, Humans, Child, Machine Learning, Radiography
5.
J Rheum Dis ; 31(2): 97-107, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38559800

ABSTRACT

Objective: Ankylosing spondylitis (AS) is a chronic inflammatory arthritis causing structural damage to the spine and radiographic progression due to repeated and continuous inflammation over a long period. This study applies machine learning models to predict radiographic progression in AS patients using time-series data from electronic medical records (EMRs). Methods: EMR data, including baseline characteristics, laboratory findings, drug administration, and modified Stoke AS Spine Score (mSASSS), were collected from 1,123 AS patients between January 2001 and December 2018 at a single center at the time of the first (T1), second (T2), and third (T3) visits. The radiographic progression at the (n+1)th visit, defined as P_(n+1) = (mSASSS_(n+1) - mSASSS_n) / (T_(n+1) - T_n) ≥ 1 unit per year, was predicted using follow-up visit datasets from T1 to Tn. We used three machine learning methods (logistic regression with the least absolute shrinkage and selection operator, random forest, and extreme gradient boosting) with three-fold cross-validation. Results: The random forest model using the T1 EMR dataset best predicted the radiographic progression P2 among the machine learning models tested, with a mean accuracy of 73.73% and an area under the curve of 0.79. Among the T1 variables, the most important variables for predicting radiographic progression were, in order, total mSASSS, age, and alkaline phosphatase. Conclusion: Prognosis prediction models using time-series data showed reasonable performance with clinical features from the first-visit dataset when predicting radiographic progression.
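A sketch of the progression label and one of the three models (a random forest with three-fold cross-validation) using scikit-learn; the feature columns are illustrative, not the study's exact EMR variables.

```python
# Sketch of the progression label and a 3-fold cross-validated random forest,
# mirroring P_(n+1) = (mSASSS_(n+1) - mSASSS_n) / (T_(n+1) - T_n) >= 1 unit/year.
# Column names are illustrative.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per patient with T1 features and T1/T2 mSASSS and dates.
visits = pd.read_csv("as_visits.csv", parse_dates=["t1_date", "t2_date"])

years_between = (visits["t2_date"] - visits["t1_date"]).dt.days / 365.25
visits["progression"] = ((visits["msasss_t2"] - visits["msasss_t1"]) / years_between) >= 1.0

X = visits[["msasss_t1", "age", "alp", "crp", "esr", "hla_b27"]]
y = visits["progression"].astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
print(cross_val_score(clf, X, y, cv=3, scoring="roc_auc"))
```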

6.
Comput Struct Biotechnol J ; 21: 3452-3458, 2023.
Article in English | MEDLINE | ID: mdl-37457807

ABSTRACT

Recent studies of automatic diagnosis of vertebral compression fractures (VCFs) using deep learning mainly focus on segmentation and vertebral level detection in lumbar spine lateral radiographs (LSLRs). Herein, we developed a model for simultaneous VCF diagnosis and vertebral level detection without using adjacent vertebral bodies. In total, 1102 patients with VCF and 1171 controls were enrolled. The LSLRs were divided into training, validation, and test datasets of 1865, 208, and 198 images, respectively. A ground-truth label with a 4-point trapezoidal shape was created based on radiological reports indicating normal findings or VCF at a given vertebral level. We applied a modified U-Net architecture in which two decoders, sharing the same encoder, were trained to detect VCFs and vertebral levels. The multi-task model was significantly better than the single-task model in sensitivity and area under the receiver operating characteristic curve. In the internal dataset, the accuracy, sensitivity, and specificity of fracture detection were 0.929, 0.944, and 0.917 per patient and 0.947, 0.628, and 0.977 per vertebral body, respectively. In external validation, they were 0.713, 0.979, and 0.447 per patient and 0.828, 0.936, and 0.820 per vertebral body, respectively. The success rates for vertebral level detection were 96% and 94% in internal and external validation, respectively. The multi-task shared encoder was significantly better than the single-task encoder, and both fracture and vertebral level detection were good in internal and external validation. Our deep learning model may help radiologists in real-life medical examinations.
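A condensed sketch of the shared-encoder, two-decoder idea in PyTorch; a real modified U-Net adds skip connections and depth, and the layer sizes here are illustrative only.

```python
# Schematic of a shared-encoder, two-decoder multi-task model (PyTorch). This
# condensed sketch only shows how one encoder can feed separate VCF-detection
# and vertebral-level-detection decoders; it is not the paper's U-Net.
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class SharedEncoderMTL(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(conv_block(1, 32), nn.MaxPool2d(2),
                                 conv_block(32, 64), nn.MaxPool2d(2))
        self.dec_fracture = nn.Sequential(nn.Upsample(scale_factor=4), conv_block(64, 32),
                                          nn.Conv2d(32, 1, 1))  # VCF probability map
        self.dec_level = nn.Sequential(nn.Upsample(scale_factor=4), conv_block(64, 32),
                                       nn.Conv2d(32, 1, 1))     # vertebral level map

    def forward(self, x):
        z = self.enc(x)                       # shared representation
        return self.dec_fracture(z), self.dec_level(z)

model = SharedEncoderMTL()
frac_map, level_map = model(torch.randn(1, 1, 256, 256))
```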

7.
Sci Rep ; 13(1): 2356, 2023 Feb 09.
Article in English | MEDLINE | ID: mdl-36759636

ABSTRACT

The generative adversarial network (GAN) is a promising deep learning method for generating images. We evaluated the generation of highly realistic, high-resolution chest radiographs (CXRs) using a progressive growing GAN (PGGAN). We trained two PGGAN models using normal and abnormal CXRs, relying solely on the normal CXRs to demonstrate the quality of the synthetic CXRs, which were 1000 × 1000 pixels in size. Image Turing tests were performed by six radiologists, who judged the authenticity of each CXR in a binary fashion on two independent validation sets, with mean accuracies of 67.42% and 69.92% for the first and second trials, respectively. Inter-reader agreement was poor for both the first (κ = 0.10) and second (κ = 0.14) Turing tests. Additionally, a convolutional neural network (CNN) was used to classify CXRs as normal or abnormal using datasets of real images only and of real images mixed with synthetic images. The accuracy of the CNN model trained on the mixed dataset of synthetic and real data was 93.3%, compared with 91.0% for the model trained on real data alone. PGGAN was able to generate CXRs that were nearly indistinguishable from real CXRs, showing promise for overcoming class imbalance in CNN training.
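A small sketch of how an image Turing test can be scored, with per-reader accuracy against the real/synthetic ground truth and Cohen's kappa for inter-reader agreement; the response arrays are made-up toy data.

```python
# Sketch of scoring an image Turing test: per-reader accuracy and pairwise
# Cohen's kappa for inter-reader agreement. Toy data, not the study's responses.
import numpy as np
from sklearn.metrics import accuracy_score, cohen_kappa_score

truth    = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = real CXR, 0 = synthetic CXR
reader_a = np.array([1, 0, 0, 1, 1, 0, 1, 1])
reader_b = np.array([1, 1, 0, 0, 0, 1, 1, 0])

print("Reader A accuracy:", accuracy_score(truth, reader_a))
print("Reader B accuracy:", accuracy_score(truth, reader_b))
print("Inter-reader kappa:", cohen_kappa_score(reader_a, reader_b))
```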


Subjects
Neural Networks, Computer, Radiologists, Humans, Radiography
8.
Korean J Radiol ; 24(11): 1061-1080, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37724586

ABSTRACT

Artificial intelligence (AI) in radiology is a rapidly developing field with several prospective clinical studies demonstrating its benefits in clinical practice. In 2022, the Korean Society of Radiology held a forum to discuss the challenges and drawbacks in AI development and implementation. Various barriers hinder the successful application and widespread adoption of AI in radiology, such as limited annotated data, data privacy and security, data heterogeneity, imbalanced data, model interpretability, overfitting, and integration with clinical workflows. In this review, some of the various possible solutions to these challenges are presented and discussed; these include training with longitudinal and multimodal datasets, dense training with multitask learning and multimodal learning, self-supervised contrastive learning, various image modifications and syntheses using generative models, explainable AI, causal learning, federated learning with large data models, and digital twins.


Subjects
Artificial Intelligence, Radiology, Humans, Prospective Studies, Radiology/methods, Supervised Machine Learning
9.
J Bone Miner Res ; 37(2): 369-377, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34812546

ABSTRACT

Osteoporosis is a common but silent disease until it is complicated by fractures, which are associated with morbidity and mortality. Over the past few years, although deep learning-based disease diagnosis on chest radiographs has yielded promising results, osteoporosis screening has remained unexplored. Paired data comprising 13,026 chest radiographs and dual-energy X-ray absorptiometry (DXA) results from the Health Screening and Promotion Center of Asan Medical Center, collected between 2012 and 2019, were used as the primary dataset in this study. For the external test, we additionally used the Asan osteoporosis cohort dataset (1089 chest radiographs, 2010 and 2017). Using a well-performing deep learning model, we trained the OsPor-screen model in a supervised manner with labels defined by DXA-based diagnosis of osteoporosis (lumbar spine, femoral neck, or total hip T-score ≤ -2.5). The OsPor-screen model was assessed in the internal and external test sets. We performed substudies to evaluate the effect of different anatomical subregions and input image sizes. OsPor-screen model performance, including sensitivity, specificity, and area under the curve (AUC), was measured in the internal and external test sets. In addition, visual explanations of the model's predictions for each class were expressed as gradient-weighted class activation maps (Grad-CAMs). The OsPor-screen model showed promising performance: osteoporosis screening achieved an AUC of 0.91 (95% confidence interval [CI], 0.90-0.92) in the internal test set and an AUC of 0.88 (95% CI, 0.85-0.90) in the external test set. Even though the medical relevance of the averaged Grad-CAMs is unclear, these results suggest that a deep learning-based model using chest radiographs could have the potential to be used for opportunistic automated screening of patients with osteoporosis in clinical settings. © 2021 American Society for Bone and Mineral Research (ASBMR).
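The DXA-based supervision label described above reduces to a one-line rule; the sketch below assumes a hypothetical paired DXA table with illustrative column names.

```python
# Sketch of the DXA-based label used for supervision: osteoporosis if the lowest
# T-score among lumbar spine, femoral neck, and total hip is <= -2.5.
import pandas as pd

dxa = pd.read_csv("dxa_results.csv")   # hypothetical paired DXA table

dxa["osteoporosis"] = (
    dxa[["t_lumbar_spine", "t_femoral_neck", "t_total_hip"]].min(axis=1) <= -2.5
).astype(int)
```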


Subjects
Deep Learning, Osteoporosis, Absorptiometry, Photon/methods, Bone Density, Humans, Lumbar Vertebrae, Mass Screening, Osteoporosis/diagnostic imaging
10.
Sci Rep ; 12(1): 17307, 2022 Oct 15.
Article in English | MEDLINE | ID: mdl-36243746

ABSTRACT

Realistic image synthesis based on deep learning is an invaluable technique for developing high-performance computer-aided diagnosis systems while protecting patient privacy. However, training a generative adversarial network (GAN) for image synthesis remains challenging because of the large amounts of data required to learn the various kinds of image features. This study aims to synthesize retinal images indistinguishable from real images and to evaluate the efficacy of synthesized images of a specific disease for augmenting class-imbalanced datasets. The synthesized images were validated via image Turing tests, qualitative analysis by retinal specialists, and quantitative analyses of vessel amounts and signal-to-noise ratios (SNRs). The efficacy of the synthesized images was verified by deep learning-based classification performance. The Turing test showed accuracy, sensitivity, and specificity of 54.0 ± 12.3%, 71.1 ± 18.8%, and 36.9 ± 25.5%, respectively; here, sensitivity represents the rate at which real images were correctly identified as real. Comparisons of vessel amounts and average SNR showed differences of 0.43% and 1.5% between real and synthesized images. Classification performance after augmentation with synthesized images outperformed that of every ratio of imbalanced real datasets. Our study shows that realistic retinal images can be generated with insignificant differences between real and synthesized images, and demonstrates great potential for practical applications.


Subjects
Image Processing, Computer-Assisted, Retina, Humans, Image Processing, Computer-Assisted/methods, Retina/diagnostic imaging, Signal-To-Noise Ratio
11.
Korean J Radiol ; 22(12): 2073-2081, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34719891

ABSTRACT

Deep learning-based applications have great potential to enhance the quality of medical services. The power of deep learning depends on open databases and innovation. Radiologists can act as important mediators between deep learning and medicine by simultaneously playing pioneering and gatekeeping roles. The application of deep learning technology in medicine is sometimes restricted by ethical or legal issues, including patient privacy and confidentiality, data ownership, and limitations in patient agreement. In this paper, we present an open platform, MI2RLNet, for sharing source code and various pre-trained weights for models to use in downstream tasks, including education, application, and transfer learning, to encourage deep learning research in radiology. In addition, we describe how to use this open platform in the GitHub environment. Our source code and models may contribute to further deep learning research in radiology, which may facilitate applications in medicine and healthcare, especially in medical imaging, in the near future. All code is available at https://github.com/mi2rl/MI2RLNet.


Subjects
Deep Learning, Radiology, Databases, Factual, Humans, Radiologists, Software
12.
JMIR Med Inform ; 8(8): e18089, 2020 Aug 04.
Article in English | MEDLINE | ID: mdl-32749222

ABSTRACT

BACKGROUND: Computer-aided diagnosis of chest x-ray images using deep learning is widely studied in medicine. Many studies are based on public datasets, such as the National Institutes of Health (NIH) dataset and the Stanford CheXpert dataset. However, these datasets were labeled by classical natural language processing of reports, which may introduce a certain degree of label error. OBJECTIVE: This study aimed to investigate the robustness of deep convolutional neural networks (CNNs) for binary classification of posteroanterior chest x-rays under random incorrect labeling. METHODS: We trained and validated the CNN architecture with different levels of label noise in 3 datasets, namely, Asan Medical Center-Seoul National University Bundang Hospital (AMC-SNUBH), NIH, and CheXpert, and tested the models on each test set. Diseases in each chest x-ray in our dataset were confirmed by a thoracic radiologist using computed tomography (CT). Receiver operating characteristic (ROC) curves and areas under the curve (AUCs) were evaluated in each test. Randomly chosen chest x-rays from the public datasets were evaluated by 3 physicians and 1 thoracic radiologist. RESULTS: In contrast to the public NIH and CheXpert datasets, in which the AUC did not drop significantly even at 16% label noise, the AUC of the AMC-SNUBH dataset decreased significantly from 2% label noise. Evaluation of the public datasets by 3 physicians and 1 thoracic radiologist showed an accuracy of 65%-80%. CONCLUSIONS: The deep learning-based computer-aided diagnosis model is sensitive to label noise, and computer-aided diagnosis with inaccurate labels is not credible. Furthermore, open datasets such as NIH and CheXpert need to be distilled before being used for deep learning-based computer-aided diagnosis.
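The label-noise experiment can be sketched as a simple corruption step applied to the training labels only; the function and data below are illustrative.

```python
# Sketch of the label-noise experiment: randomly flip a fraction of binary
# labels (e.g., 2%, 4%, ..., 16%) before training, keeping the test labels clean.
import numpy as np

def corrupt_labels(labels, noise_rate, seed=0):
    """Return a copy of `labels` with `noise_rate` of entries flipped at random."""
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    n_flip = int(round(noise_rate * len(labels)))
    idx = rng.choice(len(labels), size=n_flip, replace=False)
    labels[idx] = 1 - labels[idx]           # binary normal/abnormal labels
    return labels

y_train = np.array([0, 1, 1, 0, 1, 0, 0, 1])
y_noisy = corrupt_labels(y_train, noise_rate=0.25)
```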

13.
PLoS One ; 13(7): e0200557, 2018.
Article in English | MEDLINE | ID: mdl-29995931

ABSTRACT

For patients with chronic lower respiratory disease, hypobaric hypoxia at high altitude is considered a risk factor for mortality. However, the effects of residing at moderately high altitudes remain unclear. We investigated the association between moderate altitude and chronic lower respiratory disease mortality. In particular, we examined counties of the lower 48 United States for age-adjusted chronic lower respiratory disease mortality rates, altitude, and socioeconomic factors, including tobacco use, per capita income, population density, sex ratio, unemployment, poverty, and education, between 1979 and 1998. The socioeconomic factors were incorporated into the correlation analysis as potential covariates. Considerable positive (R = 0.235; P < 0.001) and partial (R = 0.260; P < 0.001) correlations were observed between altitude and the chronic lower respiratory disease mortality rate. In the subgroup with high COPD prevalence, even stronger positive (R = 0.346; P < 0.001) and partial (R = 0.423; P < 0.001) correlations were observed. Multivariate regression analysis of all available socioeconomic factors revealed that adding altitude improved the adjusted R² values from 0.128 to 0.186 for all counties and from 0.301 to 0.421 for counties with high COPD prevalence. We conclude that in counties of the lower 48 United States, even a moderate altitude may pose considerable risk to patients with chronic lower respiratory disease.
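A sketch of the residual-based partial correlation between altitude and mortality, controlling for the socioeconomic covariates; the county-level column names are illustrative.

```python
# Sketch of a residual-based partial correlation: correlate altitude and CLRD
# mortality after regressing both on the socioeconomic covariates. Column names
# are illustrative stand-ins for the county-level variables described above.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

counties = pd.read_csv("county_clrd.csv")   # hypothetical county-level table
covariates = ("tobacco_use + income_per_capita + pop_density + sex_ratio + "
              "unemployment + poverty + education")

resid_mort = smf.ols(f"clrd_mortality ~ {covariates}", data=counties).fit().resid
resid_alt = smf.ols(f"altitude ~ {covariates}", data=counties).fit().resid

r, p = pearsonr(resid_alt, resid_mort)
print(f"partial R = {r:.3f}, P = {p:.3g}")
```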


Subjects
Altitude, Databases, Factual, Pulmonary Disease, Chronic Obstructive/mortality, Aged, Female, Humans, Male, Middle Aged, Socioeconomic Factors, United States/epidemiology
15.
Korean J Fam Med ; 38(3): 122-129, 2017 May.
Article in English | MEDLINE | ID: mdl-28572887

ABSTRACT

BACKGROUND: Cigarette smoking is a risk factor for cardiovascular disease (CVD), and smoking cessation has both beneficial and harmful effects on CVD. We hypothesized that weight gain following smoking cessation does not attenuate the CVD mortality benefit of smoking cessation in the general Korean population. METHODS: Study subjects comprised a 2.2% random sample of patients from the Korean National Health Insurance Corporation database, between 2002 and 2013. We identified 61,055 subjects who were classified as current smokers in 2003-2004. After excluding 21,956 subjects for missing data, we studied 30,004 subjects. We divided the 9,095 ex-smokers into two groups: those who gained over 2 kg (n = 2,714) and those who did not gain over 2 kg (n = 6,381, including weight loss) after smoking cessation. Cox proportional hazards regression models were used to estimate the association between weight gain following smoking cessation and CVD mortality. RESULTS: In the primary analysis, the hazard ratios of all-cause death and CVD death were assessed in the three groups. The hazard ratios for CVD death, adjusted for CVD risk factors and the Charlson comorbidity index (aHRs), were 0.80 (95% confidence interval [CI], 0.37 to 1.75) for ex-smokers with weight gain and 0.80 (95% CI, 0.50 to 1.27) for ex-smokers with no weight gain, compared with sustained smokers as the reference. The associations were stronger for events other than mortality: the aHRs for CVD events were 0.69 (95% CI, 0.54 to 0.88) and 0.81 (95% CI, 0.70 to 0.94) for ex-smokers with and without weight gain, respectively. CONCLUSION: Although smoking cessation leads to weight gain, it does not increase the risk of CVD death.
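A sketch of the survival analysis using the lifelines package, with sustained smokers as the reference group; variable names and covariates are illustrative, not the study's exact model.

```python
# Sketch of the survival analysis: a Cox proportional hazards model comparing
# CVD death across sustained smokers (reference) and the two ex-smoker groups.
import pandas as pd
from lifelines import CoxPHFitter

cohort = pd.read_csv("nhic_sample.csv")   # hypothetical 2.2% NHIC sample

# group: 0 = sustained smoker (reference), 1 = quit & gained >2 kg, 2 = quit & no gain
df = pd.get_dummies(cohort, columns=["group"], drop_first=True)

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "cvd_death", "group_1", "group_2",
        "age", "sbp", "total_cholesterol", "charlson_index"]],
    duration_col="followup_years",
    event_col="cvd_death",
)
cph.print_summary()
```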
