Results 1 - 20 of 340
1.
Article in English | MEDLINE | ID: mdl-38766682

ABSTRACT

BACKGROUND AND AIM: Reliable bowel preparation assessment is important in colonoscopy. However, current scoring systems are limited by laborious and time-consuming tasks and interobserver variability. We aimed to develop an artificial intelligence (AI) model to assess bowel cleanliness and evaluate its clinical applicability. METHODS: A still image-driven AI model to assess the Boston Bowel Preparation Scale (BBPS) was developed and validated using 2361 colonoscopy images. For evaluating real-world applicability, the model was validated using 113 10-s colonoscopy video clips and 30 full colonoscopy videos to identify "adequate (BBPS 2-3)" or "inadequate (BBPS 0-1)" preparation. The model was tested with an external dataset of 29 colonoscopy videos. The clinical applicability of the model was evaluated using 225 consecutive colonoscopies. Inter-rater variability was analyzed between the AI model and endoscopists. RESULTS: The AI model achieved an accuracy of 94.0% and an area under the receiver operating characteristic curve of 0.939 with the still images. Model testing with an external dataset showed an accuracy of 95.3%, an area under the receiver operating characteristic curve of 0.976, and a sensitivity of 100% for the detection of inadequate preparations. The clinical applicability study showed an overall agreement rate of 85.3% between endoscopists and the AI model, with Fleiss' kappa of 0.686. The agreement rate was lower for the right colon compared with the transverse and left colon, with Fleiss' kappa of 0.563, 0.575, and 0.789, respectively. CONCLUSIONS: The AI model demonstrated accurate bowel preparation assessment and substantial agreement with endoscopists. Further refinement of the AI model is warranted for effective quality monitoring of colonoscopy in large-scale screening programs.
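The inter-rater analysis above uses Fleiss' kappa, which can be reproduced with standard tooling. A minimal sketch, assuming each colonoscopy segment is rated adequate (1) or inadequate (0) by several raters (the endoscopists plus the AI model); the ratings below are synthetic placeholders, not the study data:

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical ratings: rows = colonoscopies, columns = raters
# (e.g., three endoscopists and the AI model); 1 = adequate, 0 = inadequate.
rng = np.random.default_rng(0)
ratings = rng.integers(0, 2, size=(225, 4))

# aggregate_raters converts per-rater labels into per-category counts per subject.
counts, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(counts):.3f}")
```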

2.
Orthod Craniofac Res ; 27(1): 64-77, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37326233

ABSTRACT

BACKGROUND: This study aimed to assess the error range of cephalometric measurements based on the landmarks detected using cascaded CNNs and to determine how horizontal and vertical positional errors of individual landmarks affect lateral cephalometric measurements. METHODS: In total, 120 lateral cephalograms were obtained consecutively from patients (mean age, 32.5 ± 11.6 years) who visited the Asan Medical Center, Seoul, Korea, for orthodontic treatment between 2019 and 2021. An automated lateral cephalometric analysis model previously developed from a nationwide multi-centre database was used to digitize the lateral cephalograms. The horizontal and vertical landmark position errors attributable to the AI model were defined as the distances between the landmark identified by the human and that identified by the AI model on the x- and y-axes. The differences between the cephalometric measurements based on the landmarks identified by the AI model vs those identified by the human examiner were assessed. The association between the lateral cephalometric measurements and the positioning errors in the landmarks comprising each measurement was assessed. RESULTS: The mean differences in the angular and linear measurements based on AI vs human landmark localization were 0.99 ± 1.05° and 0.80 ± 0.82 mm, respectively. Significant differences between the measurements derived from AI-based and human localization were observed for all cephalometric variables except SNA, pog-Nperp, facial angle, SN-GoGn, FMA, Bjork sum, U1-SN, U1-FH, IMPA, L1-NB (angular) and interincisal angle. CONCLUSIONS: Errors in landmark positions, especially those that define reference planes, may significantly affect cephalometric measurements. The possibility of errors generated by automated lateral cephalometric analysis systems should be considered when using such systems for orthodontic diagnosis.
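How landmark positioning errors propagate into angular measurements, the study's central question, can be illustrated with a small Monte Carlo simulation. A sketch with hypothetical landmark coordinates and an assumed 1 mm isotropic noise model, not the paper's actual data:

```python
import numpy as np

def angle_deg(a, b, c):
    """Angle at vertex b (degrees) formed by points a-b-c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical landmark coordinates (mm) roughly mimicking the S, N, A points.
S, N, A = np.array([0.0, 0.0]), np.array([70.0, 5.0]), np.array([65.0, -45.0])
true_sna = angle_deg(S, N, A)

# Perturb each landmark with 1 mm isotropic Gaussian noise (an assumed error model).
rng = np.random.default_rng(42)
errors = []
for _ in range(10_000):
    Sp, Np, Ap = (p + rng.normal(0, 1.0, size=2) for p in (S, N, A))
    errors.append(abs(angle_deg(Sp, Np, Ap) - true_sna))
print(f"true SNA: {true_sna:.2f} deg, mean |error|: {np.mean(errors):.2f} deg")
```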


Subject(s)
Face; Neural Networks, Computer; Humans; Young Adult; Adult; Cephalometry; Radiography; Reproducibility of Results
3.
J Clin Monit Comput ; 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896344

ABSTRACT

Hand hygiene among anesthesia personnel is important to prevent hospital-acquired infections in operating rooms; however, an efficient monitoring system remains elusive. In this study, we leverage a deep learning approach based on operating room videos to detect alcohol-based hand hygiene actions of anesthesia providers. Videos were collected over a period of four months, from November 2018 to February 2019, in a single operating room, and supplemented with simulated data. The proposed algorithm used two-dimensional (2D) and three-dimensional (3D) convolutional neural networks (CNNs) sequentially. First, the anesthesia personnel appearing in the target operating room video were detected in each image frame using pre-trained 2D CNNs. The per-frame detections of each person were then linked and passed to a 3D CNN to classify hand hygiene actions, with optical flow calculated and used as an additional input modality. Accuracy, sensitivity, and specificity were evaluated for hand hygiene detection. Evaluation of the binary classification of hand hygiene actions revealed an accuracy of 0.88, a sensitivity of 0.78, a specificity of 0.93, and an area under the receiver operating characteristic curve (AUC) of 0.91. A 3D CNN-based algorithm was thus developed for the detection of hand hygiene actions. This deep learning approach has the potential to be applied in practical clinical scenarios, providing continuous surveillance in a cost-effective way.
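The reported accuracy, sensitivity, specificity, and AUC follow from a standard confusion-matrix evaluation of the binary classifier. A sketch on synthetic clip-level labels and scores:

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical per-clip labels (1 = hand hygiene action) and model scores.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)
y_score = np.clip(y_true * 0.6 + rng.normal(0.3, 0.2, size=200), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"accuracy:    {(tp + tn) / len(y_true):.2f}")
print(f"sensitivity: {tp / (tp + fn):.2f}")
print(f"specificity: {tn / (tn + fp):.2f}")
print(f"AUC:         {roc_auc_score(y_true, y_score):.2f}")
```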

4.
Radiology ; 309(1): e230606, 2023 10.
Article in English | MEDLINE | ID: mdl-37874243

ABSTRACT

Background Most artificial intelligence algorithms that interpret chest radiographs are restricted to an image from a single time point. However, in clinical practice, multiple radiographs are used for longitudinal follow-up, especially in intensive care units (ICUs). Purpose To develop and validate a deep learning algorithm using thoracic cage registration and subtraction to triage pairs of chest radiographs showing no change by using longitudinal follow-up data. Materials and Methods A deep learning algorithm was retrospectively developed using baseline and follow-up chest radiographs in adults from January 2011 to December 2018 at a tertiary referral hospital. Two thoracic radiologists reviewed randomly selected pairs of "change" and "no change" images to establish the ground truth, including normal or abnormal status. Algorithm performance was evaluated using area under the receiver operating characteristic curve (AUC) analysis in a validation set and temporally separated internal test sets (January 2019 to August 2021) from the emergency department (ED) and ICU. Threshold calibration for the test sets was conducted, and performance with 40% and 60% triage thresholds was assessed. Results This study included 3 304 996 chest radiographs in 329 036 patients (mean age, 59 years ± 14 [SD]; 170 433 male patients). The training set included 550 779 pairs of radiographs. The validation set included 1620 pairs (810 no change, 810 change). The test sets included 533 pairs (ED; 265 no change, 268 change) and 600 pairs (ICU; 310 no change, 290 change). The algorithm had AUCs of 0.77 (validation), 0.80 (ED), and 0.80 (ICU). With a 40% triage threshold, specificity was 88.4% (237 of 268 pairs) and 90.0% (261 of 290 pairs) in the ED and ICU, respectively. With a 60% triage threshold, specificity was 79.9% (214 of 268 pairs) and 79.3% (230 of 290 pairs) in the ED and ICU, respectively. For urgent findings (consolidation, pleural effusion, pneumothorax), specificity was 78.6%-100% (ED) and 85.5%-93.9% (ICU) with a 40% triage threshold. Conclusion The deep learning algorithm could triage pairs of chest radiographs showing no change while detecting urgent interval changes during longitudinal follow-up. © RSNA, 2023 Supplemental material is available for this article. See also the editorial by Czum in this issue.
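The abstract does not spell out how the 40% and 60% triage thresholds were calibrated; one plausible reading is that the score cutoff is chosen so a target fraction of pairs is routed to the "no change" queue, with specificity then measured on the true "change" pairs. A sketch under that assumption, on synthetic scores:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical "no change" probability per radiograph pair, plus ground truth
# (1 = no change, 0 = change).
y_true = rng.integers(0, 2, size=1000)
score = np.clip(y_true * 0.4 + rng.normal(0.3, 0.2, size=1000), 0, 1)

def triage_specificity(score, y_true, triage_fraction):
    # Pick the cutoff so the requested fraction of pairs is triaged as "no change".
    cutoff = np.quantile(score, 1.0 - triage_fraction)
    triaged = score >= cutoff
    change = y_true == 0
    # Specificity here: true "change" pairs that were NOT triaged away.
    return np.mean(~triaged[change])

for frac in (0.40, 0.60):
    print(f"{int(frac * 100)}% triage -> specificity {triage_specificity(score, y_true, frac):.3f}")
```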


Subject(s)
Artificial Intelligence; Deep Learning; Adult; Humans; Male; Middle Aged; Follow-Up Studies; Retrospective Studies; Triage
5.
Radiology ; 307(2): e221488, 2023 04.
Article in English | MEDLINE | ID: mdl-36786699

ABSTRACT

Background Low-dose chest CT screening is recommended for smokers with the potential for lung function abnormality, but its role in predicting lung function remains unclear. Purpose To develop a deep learning algorithm to predict pulmonary function with low-dose CT images in participants using health screening services. Materials and Methods In this retrospective study, participants underwent health screening with same-day low-dose CT and pulmonary function testing with spirometry at a university affiliated tertiary referral general hospital between January 2015 and December 2018. The data set was split into a development set (model training, validation, and internal test sets) and temporally independent test set according to first visit year. A convolutional neural network was trained to predict the forced expiratory volume in the first second of expiration (FEV1) and forced vital capacity (FVC) from low-dose CT. The mean absolute error and concordance correlation coefficient (CCC) were used to evaluate agreement between spirometry as the reference standard and deep-learning prediction as the index test. FVC and FEV1 percent predicted (hereafter, FVC% and FEV1%) values less than 80% and percent of FVC exhaled in first second (hereafter, FEV1/FVC) less than 70% were used to classify participants at high risk. Results A total of 16 148 participants were included (mean age, 55 years ± 10 [SD]; 10 981 men) and divided into a development set (n = 13 428) and temporally independent test set (n = 2720). In the temporally independent test set, the mean absolute error and CCC were 0.22 L and 0.94, respectively, for FVC and 0.22 L and 0.91 for FEV1. For the prediction of the respiratory high-risk group, FVC%, FEV1%, and FEV1/FVC had respective accuracies of 89.6% (2436 of 2720 participants; 95% CI: 88.4, 90.7), 85.9% (2337 of 2720 participants; 95% CI: 84.6, 87.2), and 90.2% (2453 of 2720 participants; 95% CI: 89.1, 91.3) in the same testing data set. The sensitivities were 61.6% (242 of 393 participants; 95% CI: 59.7, 63.4), 46.9% (226 of 482 participants; 95% CI: 45.0, 48.8), and 36.1% (91 of 252 participants; 95% CI: 34.3, 37.9), respectively. Conclusion A deep learning model applied to volumetric chest CT predicted pulmonary function with relatively good performance. © RSNA, 2023 Supplemental material is available for this article.
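Agreement between spirometry and the deep learning prediction is summarized here by the mean absolute error and Lin's concordance correlation coefficient (CCC). A sketch of both metrics on synthetic FVC values:

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two measurement series."""
    mx, my = np.mean(x), np.mean(y)
    vx, vy = np.var(x), np.var(y)          # population variances
    cov = np.mean((x - mx) * (y - my))
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical spirometry FVC (L) vs deep learning predictions.
rng = np.random.default_rng(0)
fvc_true = rng.normal(4.0, 0.8, size=500)
fvc_pred = fvc_true + rng.normal(0, 0.25, size=500)
print(f"MAE: {np.mean(np.abs(fvc_pred - fvc_true)):.2f} L")
print(f"CCC: {concordance_cc(fvc_true, fvc_pred):.3f}")
```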


Subject(s)
Deep Learning; Male; Humans; Middle Aged; Retrospective Studies; Lung/diagnostic imaging; Vital Capacity; Forced Expiratory Volume; Spirometry/methods; Tomography, X-Ray Computed
6.
Radiology ; 306(1): 140-149, 2023 01.
Article in English | MEDLINE | ID: mdl-35997607

ABSTRACT

Background Deep learning (DL) may facilitate the diagnosis of various pancreatic lesions at imaging. Purpose To develop and validate a DL-based approach for automatic identification of patients with various solid and cystic pancreatic neoplasms at abdominal CT and compare its diagnostic performance with that of radiologists. Materials and Methods In this retrospective study, a three-dimensional nnU-Net-based DL model was trained using the CT data of patients who underwent resection for pancreatic lesions between January 2014 and March 2015 and a subset of patients without pancreatic abnormality who underwent CT in 2014. Performance of the DL-based approach to identify patients with pancreatic lesions was evaluated in a temporally independent cohort (test set 1) and a temporally and spatially independent cohort (test set 2) and was compared with that of two board-certified radiologists. Performance was assessed using receiver operating characteristic analysis. Results The study included 852 patients in the training set (median age, 60 years [range, 19-85 years]; 462 men), 603 patients in test set 1 (median age, 58 years [range, 18-82 years]; 376 men), and 589 patients in test set 2 (median age, 63 years [range, 18-99 years]; 343 men). In test set 1, the DL-based approach had an area under the receiver operating characteristic curve (AUC) of 0.91 (95% CI: 0.89, 0.94) and showed slightly worse performance in test set 2 (AUC, 0.87 [95% CI: 0.84, 0.89]). The DL-based approach showed high sensitivity in identifying patients with solid lesions of any size (98%-100%) or cystic lesions measuring 1.0 cm or larger (92%-93%), which was comparable with the radiologists (95%-100% for solid lesions [P = .51 to P > .99]; 93%-98% for cystic lesions ≥1.0 cm [P = .38 to P > .99]). Conclusion The deep learning-based approach demonstrated high performance in identifying patients with various solid and cystic pancreatic lesions at CT. © RSNA, 2022 Online supplemental material is available for this article.
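The abstract does not state how the segmentation output is converted into a patient-level decision; a common approach is to score each scan by its total predicted lesion volume and apply ROC analysis to that score. A sketch under that assumption, with toy masks:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def patient_score(pred_mask, voxel_volume_mm3):
    """Patient-level lesion score: total predicted lesion volume."""
    return pred_mask.sum() * voxel_volume_mm3

rng = np.random.default_rng(0)
labels, scores = [], []
for _ in range(100):
    has_lesion = int(rng.integers(0, 2))
    # Toy 32x32x32 binary "prediction": lesion patients get a small blob plus noise.
    mask = rng.random((32, 32, 32)) < 0.001
    if has_lesion:
        mask[10:14, 10:14, 10:14] = True
    labels.append(has_lesion)
    scores.append(patient_score(mask, voxel_volume_mm3=1.0))
print(f"patient-level AUC: {roc_auc_score(labels, scores):.2f}")
```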


Subject(s)
Deep Learning; Pancreatic Cyst; Pancreatic Neoplasms; Male; Humans; Middle Aged; Retrospective Studies; Pancreatic Neoplasms/surgery; Tomography, X-Ray Computed/methods
7.
Eur Radiol ; 33(7): 4822-4832, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36856842

ABSTRACT

OBJECTIVES: Diagnosis of flatfoot using a radiograph is subject to intra- and inter-observer variabilities. Here, we developed a cascade convolutional neural network (CNN)-based deep learning model (DLM) for automated angle measurement for flatfoot diagnosis using landmark detection. METHODS: We used 1200 weight-bearing lateral foot radiographs from young adult Korean males for the model development. An experienced orthopedic surgeon identified 22 radiographic landmarks and measured three angles for flatfoot diagnosis that served as the ground truth (GT). Another orthopedic surgeon (OS) and a general physician (GP) independently identified the landmarks of the test dataset and measured the angles using the same method. External validation was performed using 100 and 17 radiographs acquired from a tertiary referral center and a public database, respectively. RESULTS: The DLM showed smaller absolute average errors relative to the GT for the three angle measurements for flatfoot diagnosis compared with both human observers. Under the guidance of the DLM, the average errors of observers OS and GP decreased from 2.35° ± 3.01° to 1.55° ± 2.09° and from 1.99° ± 2.76° to 1.56° ± 2.19°, respectively (both p < 0.001). The total measurement time decreased from 195 to 135 min for observer OS and from 205 to 155 min for observer GP. The absolute average errors of the DLM in the external validation sets were similar or superior to those of the human observers in the original test dataset. CONCLUSIONS: Our CNN model had significantly better accuracy and reliability than human observers in diagnosing flatfoot, and notably improved the accuracy and reliability of the human observers. KEY POINTS: • Development of a deep learning model (DLM) that performs automated angle measurement via landmark detection, trained on 1200 weight-bearing lateral radiographs, for diagnosing flatfoot. • Our DLM showed smaller absolute average errors for flatfoot diagnosis compared with two human observers. • Under the guidance of the model, the average errors of the two human observers decreased, and the total measurement time decreased from 195 to 135 min and from 205 to 155 min, respectively.
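Automated angle measurement reduces to geometry once the landmarks are detected: each clinical angle is the angle between two lines defined by landmark pairs. A sketch with hypothetical coordinates for a Meary-type talo-first metatarsal angle; the paper's exact landmark definitions are not reproduced here:

```python
import numpy as np

def line_angle_deg(p1, p2, q1, q2):
    """Acute angle (degrees) between line p1->p2 and line q1->q2."""
    u, v = p2 - p1, q2 - q1
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(abs(cos), 0.0, 1.0)))

# Hypothetical detected landmarks (pixel coordinates) defining the talar and
# first-metatarsal axes, as used for a Meary-type angle.
talus_head, talus_body = np.array([120.0, 200.0]), np.array([180.0, 170.0])
mt1_base, mt1_head = np.array([120.0, 205.0]), np.array([40.0, 230.0])
angle = line_angle_deg(talus_body, talus_head, mt1_base, mt1_head)
print(f"talo-first metatarsal angle: {angle:.1f} deg")
```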


Subject(s)
Flatfoot; Male; Young Adult; Humans; Flatfoot/diagnostic imaging; Flatfoot/surgery; Reproducibility of Results; Radiography; Neural Networks, Computer; Weight-Bearing
8.
Urol Int ; 107(6): 591-594, 2023.
Article in English | MEDLINE | ID: mdl-36996784

ABSTRACT

Partial nephrectomy (PN) is a common surgery for small renal masses. The goal is to remove the mass completely while preserving renal function. A precise incision is, therefore, important. However, no specific method for surgical incision in PN exists, although there are several guides for bony structures using three-dimensional (3D) printing methods. Therefore, we tested the 3D printing method to create a surgical guide for PN. We describe the workflow to make the guide, which comprises computed tomography data acquisition and segmentation, incision line creation, surgical guide design, and its use during surgery. The guide was designed with a mesh structure that could be fixed to the renal parenchyma, indicating the projected incision line. During the operation, the 3D-printed surgical guide accurately indicated the incision line, without distortion. Intraoperative sonography was performed to locate the renal mass, which confirmed that the guide was well placed. The mass was completely removed, and the surgical margin was negative. No inflammation or immune reaction occurred during and 1 month after the operation. This surgical guide proved useful during PN for indicating the incision line and was easy to handle, without complications. We therefore recommend this tool for PN to improve surgical outcomes.
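The segmentation-to-guide step of such a workflow typically extracts a printable surface mesh from the labeled CT volume. A sketch using marching cubes and an STL export, with a synthetic sphere standing in for the segmented anatomy; the scikit-image and trimesh libraries are assumptions here, not the authors' toolchain:

```python
import numpy as np
from skimage import measure
import trimesh

# Hypothetical binary segmentation volume standing in for the CT-derived labels;
# a sphere here for illustration.
grid = np.indices((64, 64, 64)).astype(float)
dist = np.sqrt(((grid - 32.0) ** 2).sum(axis=0))
segmentation = (dist < 20).astype(np.uint8)

# Extract a surface mesh and export it for 3D printing.
verts, faces, _, _ = measure.marching_cubes(segmentation, level=0.5,
                                            spacing=(1.0, 1.0, 1.0))  # 1 mm voxels
mesh = trimesh.Trimesh(vertices=verts, faces=faces)
mesh.export("surgical_guide.stl")
print(f"exported mesh with {len(mesh.faces)} faces")
```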


Subject(s)
Kidney Neoplasms; Humans; Kidney Neoplasms/diagnostic imaging; Kidney Neoplasms/surgery; Nephrectomy/methods; Kidney/diagnostic imaging; Kidney/surgery; Tomography, X-Ray Computed; Printing, Three-Dimensional
9.
J Craniofac Surg ; 34(1): 159-167, 2023.
Article in English | MEDLINE | ID: mdl-36100964

ABSTRACT

The surgical resection margin in skin cancer is traditionally determined by the lesion's surface boundary without 3-dimensional information. Computed tomography (CT) can offer additional information, such as tumor invasion and the exact cancer extent. This study aimed to demonstrate the clinical application of and to evaluate the safety and accuracy of resection guides for skin cancer treatment. This prospective randomized comparison of skin cancer resection with (guide group; n = 34) or without (control group; n = 28) resection guide use was conducted between February 2020 and November 2021. Patients with squamous cell carcinoma or basal cell carcinoma were included. In the guide group, based on CT images, the surgical margin was defined, and a 3-dimensional-printed resection guide was fabricated. The intraoperative frozen biopsy results and distance from tumor boundary to resection margin were measured. The margin involvement rates were 8.8% and 17.9% in the guide and control groups, respectively. The margin involvement rate was nonsignificantly higher in the control group as compared with the guide group (P = .393). The margin distances of squamous cell carcinoma were 2.3 ± 0.8 and 3.4 ± 1.6 mm (P = .01) and those of basal cell carcinoma were 2.8 ± 1.0 and 4.7 ± 3.2 mm in the guide and control groups, respectively (P = .015). Margin distance was significantly lower in the guide group than in the control group. The resection guide demonstrated similar safety to traditional surgical excision but enabled minimal removal of normal tissue by precisely estimating the tumor border on CT scans.
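The margin-distance comparisons above are two-sample tests. A sketch using Welch's t-test on synthetic distances drawn to mirror the reported squamous cell carcinoma means, SDs, and group sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical margin distances (mm) mirroring the reported SCC summary statistics.
guide = rng.normal(2.3, 0.8, size=34)
control = rng.normal(3.4, 1.6, size=28)

# Welch's t-test (unequal variances) comparing the two groups.
t, p = stats.ttest_ind(guide, control, equal_var=False)
print(f"t = {t:.2f}, P = {p:.3f}")
```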


Subject(s)
Carcinoma, Basal Cell; Carcinoma, Squamous Cell; Skin Neoplasms; Humans; Carcinoma, Basal Cell/diagnostic imaging; Carcinoma, Basal Cell/surgery; Carcinoma, Squamous Cell/diagnostic imaging; Carcinoma, Squamous Cell/surgery; Carcinoma, Squamous Cell/pathology; Computer Simulation; Feasibility Studies; Margins of Excision; Prospective Studies; Skin Neoplasms/diagnostic imaging; Skin Neoplasms/surgery
10.
J Digit Imaging ; 36(5): 2003-2014, 2023 10.
Article in English | MEDLINE | ID: mdl-37268839

ABSTRACT

In medicine, confounding variables in a generalized linear model are often adjusted for; however, such variables have not yet been exploited in non-linear deep learning models. Sex plays an important role in bone age estimation, and non-linear deep learning models have reported performance comparable to that of human experts. Therefore, we investigate the properties of using confounding variables in a non-linear deep learning model for bone age estimation in pediatric hand X-rays. The RSNA Pediatric Bone Age Challenge (2017) dataset is used to train the deep learning models. The RSNA test dataset is used for internal validation, and 227 pediatric hand X-ray images with bone age, chronological age, and sex information from Asan Medical Center (AMC) are used for external validation. U-Net based autoencoder, U-Net multi-task learning (MTL), and auxiliary-accelerated MTL (AA-MTL) models are chosen. Bone age estimates adjusted by input, adjusted by output prediction, and without adjustment for the confounding variables are compared. Additionally, ablation studies for model size, auxiliary task hierarchy, and multiple tasks are conducted. Correlation and Bland-Altman plots between ground truth and model-predicted bone ages are evaluated. Averaged saliency maps based on image registration are superimposed on representative images according to puberty stage. In the RSNA test dataset, adjusting by input shows the best performance regardless of model size, with mean absolute errors (MAEs) of 5.740, 5.478, and 5.434 months for the U-Net backbone, U-Net MTL, and AA-MTL models, respectively. However, in the AMC dataset, the AA-MTL model that adjusts the confounding variable by prediction shows the best performance, with an MAE of 8.190 months, whereas the other models perform best when adjusting the confounding variables by input. Ablation studies of task hierarchy reveal no significant differences in the results on the RSNA dataset. However, predicting the confounding variable in the second encoder layer and estimating bone age in the bottleneck layer shows the best performance on the AMC dataset. Ablation studies of multiple tasks reveal that leveraging confounding variables plays an important role regardless of the number of tasks. To estimate bone age from pediatric X-rays, the clinical setting and the balance among model size, task hierarchy, and confounding-adjustment method play important roles in performance and generalizability; therefore, proper methods of adjusting for confounding variables are required to train improved deep learning-based models.
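Adjusting a confounding variable "by input" amounts to feeding it to the network alongside the image. A minimal PyTorch sketch of that idea: a toy encoder whose features are concatenated with a learned sex embedding before the regression head. The architecture is illustrative, not the paper's U-Net MTL models:

```python
import torch
import torch.nn as nn

class BoneAgeNet(nn.Module):
    """Toy regressor that adjusts for sex "by input": the confounding variable
    is embedded and concatenated with image features before the head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.sex_embed = nn.Embedding(2, 8)   # 0 = female, 1 = male
        self.head = nn.Linear(32 + 8, 1)      # predicted bone age (months)

    def forward(self, image, sex):
        feats = torch.cat([self.encoder(image), self.sex_embed(sex)], dim=1)
        return self.head(feats)

model = BoneAgeNet()
out = model(torch.randn(4, 1, 128, 128), torch.tensor([0, 1, 1, 0]))
print(out.shape)  # torch.Size([4, 1])
```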


Subject(s)
Deep Learning; Radiology; Humans; Child; X-Rays; Confounding Factors, Epidemiologic; Radiography
11.
J Digit Imaging ; 36(4): 1760-1769, 2023 08.
Article in English | MEDLINE | ID: mdl-36914855

ABSTRACT

Generative adversarial networks (GANs) in medicine are valuable techniques for augmenting unbalanced rare data, detecting anomalies, and avoiding patient privacy issues. However, there have been limits to generating high-quality endoscopic images with various characteristics, such as peristalsis, viewpoints, light sources, and mucous patterns. This study used the progressive growing of GAN (PGGAN) on a normal-distribution dataset to confirm the ability to generate high-quality gastrointestinal images and investigated what barriers PGGAN faces in generating endoscopic images. We trained the PGGAN with 107,060 gastroscopy images from 4165 normal patients to generate highly realistic 512 × 512-pixel images. For the evaluation, visual Turing tests were conducted with 19 endoscopists on 100 real and 100 synthetic images to distinguish the authenticity of the images. The endoscopists were divided into three groups based on their years of clinical experience for subgroup analysis. The overall accuracy, sensitivity, and specificity of the 19 endoscopists were 61.3%, 70.3%, and 52.4%, respectively. The mean accuracies of the three endoscopist groups were 62.4% (Group I), 59.8% (Group II), and 59.1% (Group III), which were not considered significantly different. There were no statistically significant differences according to location within the stomach, although real images containing the anatomical landmark of the pylorus had higher detection sensitivity. The images generated by PGGAN were highly realistic and difficult to distinguish, regardless of the endoscopists' expertise; however, GANs that better represent rugal folds and mucous-membrane texture still need to be established.


Subject(s)
Gastroscopy; Medicine; Humans; Privacy; Image Processing, Computer-Assisted
12.
J Digit Imaging ; 36(3): 902-910, 2023 06.
Article in English | MEDLINE | ID: mdl-36702988

ABSTRACT

Training deep learning models on medical images depends heavily on experts' expensive and laborious manual labels. In addition, these images and labels, and even the models themselves, are not widely publicly accessible and suffer from various kinds of bias and imbalance. In this paper, CheSS, a chest X-ray model pre-trained via self-supervised contrastive learning, is proposed to learn rich representations of chest radiographs (CXRs). Our contribution is a publicly accessible pretrained model trained on a 4.8-M CXR dataset using self-supervised contrastive learning, together with its validation on various downstream tasks, including 6-class disease classification on an internal dataset, disease classification on CheXpert, bone suppression, and nodule generation. Compared with a model trained from scratch, we achieved a 28.5% increase in accuracy on the 6-class classification test dataset. On the CheXpert dataset, we achieved a 1.3% increase in mean area under the receiver operating characteristic curve on the full dataset and an 11.4% increase when using only 1% of the data in a stress-test manner. On bone suppression with perceptual loss, compared with an ImageNet-pretrained model, we improved the peak signal-to-noise ratio from 34.99 to 37.77, the structural similarity index measure from 0.976 to 0.977, and the root-mean-square error from 4.410 to 3.301. Finally, on nodule generation, we improved the Fréchet inception distance from 24.06 to 17.07. Our study showed the decent transferability of CheSS weights, which can help researchers overcome data imbalance, data shortage, and inaccessibility of medical image datasets. CheSS weights are available at https://github.com/mi2rl/CheSS .
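The contrastive objective behind models of this kind is commonly the NT-Xent (normalized temperature-scaled cross-entropy) loss over two augmented views of each image. A sketch of that loss, assuming a SimCLR-style setup; the exact CheSS recipe may differ:

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """NT-Xent loss over two augmented views z1, z2 of the same batch
    (a standard contrastive objective; an assumed stand-in for CheSS's loss)."""
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim.masked_fill_(mask, float("-inf"))                # exclude self-pairs
    # The positive for sample i is its other view: i + n (mod 2n).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent(z1, z2).item())
```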


Subject(s)
X-Rays; Humans; ROC Curve; Radiography; Signal-To-Noise Ratio
13.
J Stroke Cerebrovasc Dis ; 32(11): 107348, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37783139

ABSTRACT

BACKGROUND: Air pollutant concentrations in South Korea vary greatly by region and over time. To assess temporal and spatial associations between long-term air pollution and mortality from stroke subtypes, we studied ischemic stroke (IS), intracerebral hemorrhage (ICH), and subarachnoid hemorrhage (SAH). METHODS: This was an observational study conducted in South Korea from 2001 to 2018. Concentrations of carbon monoxide (CO), nitrogen dioxide (NO2), sulfur dioxide (SO2), and particulate matter ≤10 µm in diameter (PM10) were determined from 332 stations. Average air pollutant concentrations in each district were determined by distance-weighted linear interpolation. The nationwide stroke mortality rates in 249 districts were obtained from the Korean Statistical Information Service. Time intervals were divided into three consecutive 6-year periods: 2001-2006, 2007-2012, and 2013-2018. RESULTS: The concentrations of air pollutants gradually decreased from 2001 to 2018, along with decreases in IS and ICH mortality rates. However, mortality rates associated with SAH remained constant. From 2001-2006, NO2 (adjusted odds ratio [aOR]: 1.13, 95% confidence interval: 1.08-1.19), SO2 (aOR: 1.10, 1.07-1.13), and PM10 (aOR: 1.12, 1.06-1.18) concentrations were associated with IS mortality, and SO2 (aOR: 1.07, 1.02-1.13) and PM10 (aOR: 1.11, 1.06-1.22) concentrations were associated with SAH-associated mortality. Air pollution was no longer associated with stroke mortality from 2007 onward, as air pollution concentrations continued to decline. Throughout the entire 18-year period, ICH-associated mortality was not associated with air pollution. CONCLUSIONS: Considering temporal and spatial trends, high concentrations of air pollutants were most likely to be associated with IS mortality. Our results strengthen the existing evidence of the deleterious effects of air pollution on IS mortality.
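Distance-weighted linear interpolation of station measurements to district centroids can be read as inverse-distance weighting with power 1. A sketch under that assumption, with synthetic station data:

```python
import numpy as np

def idw(station_xy, station_values, query_xy, power=1.0, eps=1e-9):
    """Distance-weighted interpolation of pollutant concentrations from
    monitoring stations to district centroids (power=1 gives linear weighting)."""
    d = np.linalg.norm(station_xy[None, :, :] - query_xy[:, None, :], axis=2)
    w = 1.0 / (d + eps) ** power
    return (w * station_values[None, :]).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
stations = rng.uniform(0, 100, size=(332, 2))     # station coordinates (km)
pm10 = rng.uniform(20, 80, size=332)              # hypothetical PM10 (ug/m3)
districts = rng.uniform(0, 100, size=(249, 2))    # district centroids
print(idw(stations, pm10, districts)[:5])
```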


Subject(s)
Air Pollutants; Air Pollution; Stroke; Humans; Nitrogen Dioxide/adverse effects; Air Pollution/adverse effects; Air Pollutants/adverse effects; Republic of Korea/epidemiology; Stroke/diagnosis
14.
Radiology ; 302(1): 187-197, 2022 01.
Article in English | MEDLINE | ID: mdl-34636634

ABSTRACT

Background Evaluation of interstitial lung disease (ILD) at CT is a challenging task that requires experience and is subject to substantial interreader variability. Purpose To investigate whether a proposed content-based image retrieval (CBIR) of similar chest CT images by using deep learning can aid in the diagnosis of ILD by readers with different levels of experience. Materials and Methods This retrospective study included patients with confirmed ILD after multidisciplinary discussion and available CT images identified between January 2000 and December 2015. The database was composed of four disease classes: usual interstitial pneumonia (UIP), nonspecific interstitial pneumonia (NSIP), cryptogenic organizing pneumonia, and chronic hypersensitivity pneumonitis. Eighty patients were selected as queries from the database. The proposed CBIR retrieved the top three similar CT images with diagnoses from the database by comparing the extent and distribution of different regional disease patterns quantified by a deep learning algorithm. Eight readers with varying experience interpreted the query CT images and provided their most probable diagnosis in two reading sessions 2 weeks apart, before and after applying CBIR. Diagnostic accuracy was analyzed by using McNemar test and generalized estimating equation, and interreader agreement was analyzed by using Fleiss κ. Results A total of 288 patients were included (mean age, 58 years ± 11 [standard deviation]; 145 women). After applying CBIR, the overall diagnostic accuracy improved in all readers (before CBIR, 46.1% [95% CI: 37.1, 55.3]; after CBIR, 60.9% [95% CI: 51.8, 69.3]; P < .001). In terms of disease category, the diagnostic accuracy improved after applying CBIR in UIP (before vs after CBIR, 52.4% vs 72.8%, respectively; P < .001) and NSIP cases (before vs after CBIR, 42.9% vs 61.6%, respectively; P < .001). Interreader agreement improved after CBIR (before vs after CBIR Fleiss κ, 0.32 vs 0.47, respectively; P = .005). Conclusion The proposed content-based image retrieval system for chest CT images with deep learning improved the diagnostic accuracy of interstitial lung disease and interreader agreement in readers with different levels of experience. © RSNA, 2021 Online supplemental material is available for this article. See also the editorial by Wielpütz in this issue.
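At retrieval time, a CBIR system of this kind reduces to nearest-neighbour search over per-case feature vectors quantifying regional disease-pattern extent. A sketch using cosine similarity, with synthetic feature vectors; the 36-dimensional layout is an illustrative assumption:

```python
import numpy as np

def top_k_similar(query_vec, database_vecs, k=3):
    """Return indices and scores of the k most similar cases by cosine similarity."""
    q = query_vec / np.linalg.norm(query_vec)
    db = database_vecs / np.linalg.norm(database_vecs, axis=1, keepdims=True)
    sims = db @ q
    order = np.argsort(sims)[::-1][:k]
    return order, sims[order]

# Hypothetical per-case feature vectors: extent of each regional disease
# pattern (e.g., reticulation, honeycombing, ground-glass opacity) per lung zone.
rng = np.random.default_rng(0)
database = rng.random((288, 36))
query = rng.random(36)
idx, sims = top_k_similar(query, database)
print("top-3 cases:", idx, "similarities:", np.round(sims, 3))
```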


Subject(s)
Deep Learning; Lung Diseases, Interstitial/diagnostic imaging; Radiographic Image Interpretation, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Diagnosis, Differential; Female; Humans; Lung/diagnostic imaging; Male; Middle Aged; Reproducibility of Results; Retrospective Studies
15.
Semin Respir Crit Care Med ; 43(6): 946-960, 2022 12.
Article in English | MEDLINE | ID: mdl-36174647

ABSTRACT

Recently, interest and advances in artificial intelligence (AI) including deep learning for medical images have surged. As imaging plays a major role in the assessment of pulmonary diseases, various AI algorithms have been developed for chest imaging. Some of these have been approved by governments and are now commercially available in the marketplace. In the field of chest radiology, there are various tasks and purposes that are suitable for AI: initial evaluation/triage of certain diseases, detection and diagnosis, quantitative assessment of disease severity and monitoring, and prediction for decision support. While AI is a powerful technology that can be applied to medical imaging and is expected to improve our current clinical practice, some obstacles must be addressed for the successful implementation of AI in workflows. Understanding and becoming familiar with the current status and potential clinical applications of AI in chest imaging, as well as remaining challenges, would be essential for radiologists and clinicians in the era of AI. This review introduces the potential clinical applications of AI in chest imaging and also discusses the challenges for the implementation of AI in daily clinical practice and future directions in chest imaging.


Subject(s)
Artificial Intelligence; Radiology; Humans; Radiology/methods; Radiologists; Diagnostic Imaging; Lung/diagnostic imaging
16.
J Korean Med Sci ; 37(36): e271, 2022 Sep 19.
Article in English | MEDLINE | ID: mdl-36123960

ABSTRACT

BACKGROUND: To propose fully automatic segmentation of the left atrium using active learning with a limited dataset of late gadolinium enhancement cardiac magnetic resonance imaging (LGE-CMRI). METHODS: An active learning framework was developed to segment the left atrium in cardiac LGE-CMRI. Patients (n = 98) with atrial fibrillation from the Korea University Anam Hospital were enrolled. First, 20 cases were delineated for ground truths by two experts and used to train a draft model. Second, those 20 cases plus 50 new cases, corrected in a human-in-the-loop manner after prediction by the draft model, were used to train the next model. Third, all 98 cases (the 70 cases from the second step and 28 new cases) were used for training. An additional 20 LGE-CMRI scans were evaluated at each step. RESULTS: The Dice coefficients for the three steps were 0.85 ± 0.06, 0.89 ± 0.02, and 0.90 ± 0.02, respectively. The biases (95% confidence intervals) in the Bland-Altman plots for each step were 6.36% (-14.90 to 27.61), 6.21% (-9.62 to 22.03), and 2.68% (-8.57 to 13.93). Deep active learning-based annotation times were 218 ± 31 seconds, 36.70 ± 18 seconds, and 36.56 ± 15 seconds, respectively. CONCLUSION: Deep active learning reduced annotation time and enabled efficient training on limited LGE-CMRI data.
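The three-step human-in-the-loop procedure can be sketched as a loop: train on the labeled pool, predict on new cases, have experts correct the predictions, and retrain on the union. A toy, self-contained version in which the "model" is a per-voxel majority vote and "expert correction" simply restores the true mask; the real study trains a segmentation network at each step:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum() + 1e-9)

def train(masks):          # stand-in for fitting the segmentation network
    return np.mean(masks, axis=0) > 0.5

def predict(model, n):     # stand-in for inference on n new cases
    return np.repeat(model[None], n, axis=0)

rng = np.random.default_rng(0)
base = rng.random((64, 64)) > 0.5                 # shared "anatomy"
noise = rng.random((118, 64, 64)) > 0.95
truth = np.logical_xor(base[None], noise)         # per-case variation

# Step 1: experts label 20 cases; train a draft model.
labeled = list(truth[:20])
model = train(np.array(labeled))

# Steps 2-3: predict on new cases (50, then 28), correct in a
# human-in-the-loop manner, and retrain on the union.
for new in (truth[20:70], truth[70:98]):
    drafts = predict(model, len(new))             # proposals shown to the expert
    corrected = new                               # expert corrections restore truth
    labeled.extend(corrected)
    model = train(np.array(labeled))

test = truth[98:]                                 # 20 held-out evaluation cases
print("mean Dice on held-out cases:", np.mean([dice(model, t) for t in test]).round(3))
```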


Subject(s)
Contrast Media; Gadolinium; Heart Atria/diagnostic imaging; Heart Atria/pathology; Humans; Magnetic Resonance Imaging/methods; Neural Networks, Computer
17.
J Korean Med Sci ; 37(31): e244, 2022 Aug 08.
Article in English | MEDLINE | ID: mdl-35942557

ABSTRACT

BACKGROUND: To deliver therapeutics into the brain, it is imperative to overcome the issue of the blood-brain barrier (BBB). One way to circumvent the BBB is to administer therapeutics directly into the brain parenchyma. To enhance treatment efficacy for chronic neurodegenerative disorders, repeated administration to the target location is required; however, this increases the number of operations that must be performed. In this study, we developed the IntraBrain Injector (IBI), a new implantable device for repeated delivery of therapeutics into the brain parenchyma. METHODS: We designed and fabricated the IBI with medical-grade materials and evaluated its efficacy and safety in 9 beagles. The trajectory of the IBI to the hippocampus was simulated prior to surgery, and the device was implanted using a 3D-printed adaptor and surgical guides. Ferumoxytol-labeled mesenchymal stem cells (MSCs) were injected into the hippocampus via the IBI, and magnetic resonance images were taken before and after administration to analyze the accuracy of repeated injection. RESULTS: Comparing the planned and actual insertion trajectories of the IBI to the hippocampus yielded a similarity of 0.990 ± 0.001 (mean ± standard deviation), confirming precise targeting. Multiple administrations of ferumoxytol-labeled MSCs into the hippocampus using the IBI were both feasible and successful (success rate of 76.7%). The safety of initial IBI implantation, repeated administration of therapeutics, and long-term implantation were all evaluated in this study. CONCLUSION: Precise and repeated delivery of therapeutics into the brain parenchyma can be achieved via IBI implantation without additional surgeries.


Subject(s)
Ferrosoferric Oxide; Mesenchymal Stem Cells; Animals; Brain/diagnostic imaging; Brain/surgery; Dogs; Humans; Imaging, Three-Dimensional; Magnetic Resonance Imaging/methods
18.
Am J Orthod Dentofacial Orthop ; 162(2): e53-e62, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35654686

ABSTRACT

INTRODUCTION: This study aimed to evaluate a 3-dimensional (3D) U-Net-based convolutional neural network model for the fully automatic segmentation of regional pharyngeal volumes of interest (VOIs) in cone-beam computed tomography scans, and to compare the accuracy of model performance across different skeletal patterns presenting with various pharyngeal dimensions. METHODS: Two hundred sixteen cone-beam computed tomography scans of adult patients were randomly divided into training (n = 100), validation (n = 16), and test (n = 100) datasets. We trained the 3D U-Net model for fully automatic segmentation of pharyngeal VOIs and their measurements: nasopharyngeal, velopharyngeal, glossopharyngeal, and hypopharyngeal sections as well as total pharyngeal airway space (PAS). The test datasets were subdivided according to sagittal and vertical skeletal patterns. Segmentation performance was assessed by Dice similarity coefficient, volumetric similarity, precision, and recall values, compared with the ground truth created by one expert's manual processing using semiautomatic software. RESULTS: The proposed model achieved highly accurate performance, showing a mean Dice similarity coefficient of 0.928 ± 0.023, volumetric similarity of 0.928 ± 0.023, precision of 0.925 ± 0.030, and recall of 0.921 ± 0.029 for total PAS segmentation. The performance showed region-specific differences, revealing lower accuracy in the glossopharyngeal and hypopharyngeal sections than in the upper sections (P <0.001). However, the accuracy of model performance at each pharyngeal VOI showed no significant difference according to sagittal or vertical skeletal patterns. CONCLUSIONS: The 3D convolutional neural network's performance for region-specific PAS analysis is promising as a substitute for laborious and time-consuming manual analysis in every skeletal and pharyngeal pattern.
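The reported metrics are all derivable from the voxelwise confusion counts of the predicted and ground-truth masks; volumetric similarity, in particular, is commonly defined as 1 - |FP - FN| / (2TP + FP + FN). A sketch on synthetic 3D masks:

```python
import numpy as np

def seg_metrics(pred, truth):
    """Dice, volumetric similarity, precision, and recall for binary 3D masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dsc = 2 * tp / (2 * tp + fp + fn)
    vs = 1 - abs(fp - fn) / (2 * tp + fp + fn)   # volumetric similarity
    return dict(DSC=dsc, VS=vs, precision=tp / (tp + fp), recall=tp / (tp + fn))

rng = np.random.default_rng(0)
truth = rng.random((64, 64, 64)) > 0.7
pred = np.logical_xor(truth, rng.random(truth.shape) > 0.98)  # imperfect prediction
print({k: round(float(v), 3) for k, v in seg_metrics(pred, truth).items()})
```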


Subject(s)
Image Processing, Computer-Assisted; Neural Networks, Computer; Adult; Cone-Beam Computed Tomography; Humans; Image Processing, Computer-Assisted/methods; Pharynx/diagnostic imaging; Software
19.
Am J Orthod Dentofacial Orthop ; 161(4): e361-e371, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35074216

ABSTRACT

INTRODUCTION: The purpose of this study was to evaluate the accuracy of auto-identification of posteroanterior (PA) cephalometric landmarks using a cascade convolutional neural network (CNN) algorithm and PA cephalogram images of varying quality from multiple centers nationwide. METHODS: Of the 2798 PA cephalograms from 9 university hospitals, 2418 images (2075 training set and 343 validation set) were used to train the CNN algorithm for auto-identification of 16 PA cephalometric landmarks. Subsequently, 99 pretreatment images from the remaining 380 test-set images were used to evaluate the accuracy of auto-identification by the CNN algorithm, compared with identification by a human examiner (gold standard) using V-Ceph 8.0 (Ostem, Seoul, South Korea). Pretreatment images were used to eliminate the effects of orthodontic brackets, tubes and wires, surgical plates, and surgical screws. A paired t test was performed to compare the x- and y-coordinates of each landmark. The point-to-point error and the successful detection rate (within a 2.0 mm range) were calculated. RESULTS: The number of landmarks without a significant difference between the location identified by the human examiner and that identified by the CNN algorithm was 8 for the x-coordinate and 5 for the y-coordinate. The mean point-to-point error was 1.52 mm. A low point-to-point error (<1.0 mm) was observed at the left and right antegonion (0.96 mm and 0.99 mm, respectively), and a high point-to-point error (>2.0 mm) was observed at the maxillary right first molar root apex (2.18 mm). The mean successful detection rate of auto-identification was 83.3%. CONCLUSIONS: The cascade CNN algorithm for auto-identification of PA cephalometric landmarks showed potential as an effective alternative to manual identification.
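The point-to-point error and successful detection rate (SDR) are straightforward to compute once predicted and ground-truth landmark coordinates are available. A sketch on synthetic coordinates for 16 landmarks across 99 images, with an assumed pixel spacing:

```python
import numpy as np

def point_to_point_errors(pred_xy, true_xy, pixel_spacing_mm=0.1):
    """Euclidean distance (mm) between predicted and ground-truth landmarks.
    The 0.1 mm/pixel spacing is an illustrative assumption."""
    return np.linalg.norm(pred_xy - true_xy, axis=-1) * pixel_spacing_mm

rng = np.random.default_rng(0)
true_xy = rng.uniform(0, 2000, size=(99, 16, 2))             # 16 landmarks, 99 images
pred_xy = true_xy + rng.normal(0, 12.0, size=true_xy.shape)  # hypothetical detections

err = point_to_point_errors(pred_xy, true_xy)
sdr_2mm = np.mean(err <= 2.0)
print(f"mean point-to-point error: {err.mean():.2f} mm, SDR(2 mm): {sdr_2mm:.1%}")
```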


Subject(s)
Algorithms; Neural Networks, Computer; Anatomic Landmarks; Cephalometry/methods; Humans; Radiography; Reproducibility of Results
20.
Eur J Orthod ; 44(1): 66-77, 2022 01 25.
Article in English | MEDLINE | ID: mdl-34379120

ABSTRACT

OBJECTIVES: The aim of the study was to evaluate the accuracy of a cascaded two-stage convolutional neural network (CNN) model in detecting upper airway (UA) soft-tissue landmarks, in comparison with skeletal landmarks, on lateral cephalometric images. MATERIALS AND METHODS: The dataset contained 600 lateral cephalograms of adult orthodontic patients, and the ground-truth positions of 16 landmarks (7 skeletal and 9 UA landmarks) were obtained from a 500-image learning dataset. We trained a U-Net with an EfficientNetB0 backbone through a region of interest-centred circular segmentation labelling process. Mean distance errors (MDEs, mm) of the CNN algorithm were compared with those of human examiners. Successful detection rates (SDRs, per cent), assessed within 1-4 mm precision ranges, were compared between skeletal and UA landmarks. RESULTS: The proposed model achieved MDEs of 0.80 ± 0.55 mm for skeletal landmarks and 1.78 ± 1.21 mm for UA landmarks. The mean SDRs for UA landmarks were 72.22 per cent for the 2 mm range and 92.78 per cent for the 4 mm range, contrasted with those for skeletal landmarks, amounting to 93.43 and 98.71 per cent, respectively. Compared with the mean interexaminer difference, however, the model showed higher detection accuracy for geometrically constructed UA landmarks on the nasopharynx (AD2 and Ss) but lower accuracy for anatomically located UA landmarks on the tongue (Td) and soft palate (Sb and St). CONCLUSION: The proposed CNN model suggests the availability of automated cephalometric UA assessment integrated with dentoskeletal and facial analysis.
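A "region of interest-centred circular segmentation label" can be read as a binary disk drawn around each ground-truth landmark, turning landmark detection into a segmentation task whose predicted region's centroid gives the landmark. A sketch of label generation under that reading; the image size, radius, and coordinates are illustrative:

```python
import numpy as np

def circular_label(shape, center_xy, radius_px):
    """Binary disk label centred on a landmark, one channel per landmark."""
    yy, xx = np.indices(shape)
    cx, cy = center_xy
    return ((xx - cx) ** 2 + (yy - cy) ** 2) <= radius_px ** 2

# Hypothetical (x, y) landmark positions on a 768x768 cephalogram; the network
# is trained to segment each disk, and the detected landmark is taken as the
# predicted region's centroid.
landmarks = [(120, 340), (512, 200)]
labels = np.stack([circular_label((768, 768), p, radius_px=20) for p in landmarks])
print(labels.shape, labels.sum(axis=(1, 2)))   # pixels per disk
```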


Subject(s)
Face; Neural Networks, Computer; Adult; Algorithms; Cephalometry; Humans; Palate, Soft/diagnostic imaging