Results 1 - 20 of 54
1.
Ophthalmol Ther ; 13(10): 2789-2797, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39214946

ABSTRACT

INTRODUCTION: The aim of this study was to evaluate the efficacy and safety of escalating the dosage of intravitreal brolucizumab in patients with refractory neovascular age-related macular degeneration (AMD). METHODS: This retrospective study included 17 eyes of 17 patients with refractory AMD treated with high-dose brolucizumab (12 mg/0.1 ml) for over 12 months. Patients initially received at least one anti-vascular endothelial growth factor (anti-VEGF) agent and were switched to standard-dose brolucizumab (6 mg/0.05 ml). Those who showed a suboptimal response to standard-dose treatment had their dosage of brolucizumab escalated. RESULTS: Visual acuity was maintained from 68.3 ± 3.4 letters to 70.7 ± 3.2 letters after 12 months of high-dose treatment (P = 0.128). Central subfield thickness was 343.7 ± 17.0 µm before high-dose treatment and 316.7 ± 18.5 µm at 12 months (P = 0.083). The proportions of patients with subretinal fluid and serous pigment epithelial detachment significantly decreased from 82.4% to 41.2% and from 52.9% to 17.6%, respectively, after high-dose treatment (P = 0.039 and P = 0.031, respectively). The treatment interval extended from 7.2 ± 2.4 weeks to 10.2 ± 2.2 weeks after switching to standard-dose brolucizumab (P < 0.001) and was maintained at 13.5 ± 2.8 weeks after increasing the dose (P = 0.154). No severe ocular adverse events were observed. CONCLUSIONS: High-dose brolucizumab was effective in patients who did not respond to standard-dose brolucizumab after switching from previous anti-VEGF agents. Increasing the dosage could offer sustained disease control and reduce the treatment burden for patients with refractory AMD.

2.
Sci Rep ; 14(1): 16600, 2024 07 18.
Article in English | MEDLINE | ID: mdl-39025919

ABSTRACT

This study constructed deep learning models using plain skull radiograph images to accurately predict the postnatal age of infants under 12 months. Using the trained models, we then evaluated, through gradient-weighted class activation mapping, the feasibility of employing major changes visible in skull X-ray images for assessing postnatal cranial development. We developed DenseNet-121 and EfficientNet-v2-M convolutional neural network models to analyze 4933 skull X-ray images collected from 1343 infants. Notably, allowing for a ± 1 month error margin, DenseNet-121 reached a maximum corrected accuracy of 79.4% for anteroposterior (AP) views (average: 78.0 ± 1.5%) and 84.2% for lateral views (average: 81.1 ± 2.9%). EfficientNet-v2-M reached a maximum corrected accuracy of 79.1% for AP views (average: 77.0 ± 2.3%) and 87.3% for lateral views (average: 85.1 ± 2.5%). Saliency maps identified critical discriminative areas in skull radiographs, including the coronal, sagittal, and metopic sutures in AP skull X-ray images, and the lambdoid suture and cortical bone density in lateral images, marking them as indicators for evaluating cranial development. These findings highlight the precision of deep learning in estimating infant age through non-invasive methods, suggesting progress toward clinical diagnostic and developmental assessment tools.
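The "± 1 month corrected accuracy" used above reduces to a simple tolerance-based score; a minimal sketch with hypothetical predictions (not data from the study):

```python
def corrected_accuracy(pred_months, true_months, tolerance=1):
    """Share of predictions within ±tolerance months of the true postnatal age."""
    hits = sum(abs(p - t) <= tolerance for p, t in zip(pred_months, true_months))
    return hits / len(true_months)

# hypothetical model outputs for five infants (ages in months)
preds = [3, 5, 7, 10, 12]
truth = [4, 5, 9, 10, 11]
print(corrected_accuracy(preds, truth))  # 4 of 5 within ±1 month -> 0.8
```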


Subject(s)
Deep Learning , Skull , Humans , Infant , Skull/diagnostic imaging , Skull/growth & development , Male , Female , Infant, Newborn , Neural Networks, Computer , Radiography/methods , Image Processing, Computer-Assisted/methods
3.
Sci Rep ; 14(1): 16111, 2024 07 12.
Article in English | MEDLINE | ID: mdl-38997328

ABSTRACT

This retrospective study aimed to compare the outcomes of modified double-flanged sutureless scleral fixation versus sutured scleral fixation. Medical records of 65 eyes from 65 patients who underwent double-flanged scleral fixation (flange group) or conventional scleral fixation (suture group) between 2021 and 2022 were reviewed. Visual and refractive outcomes, as well as postoperative complications, were compared 1, 2, and 6 months after surgery. We included 31 eyes in the flange group and 34 eyes in the suture group. At 6 months postoperatively, the flange group showed better uncorrected visual acuity (0.251 ± 0.328 vs. 0.418 ± 0.339 logMAR, P = 0.041) and a smaller myopic shift (- 0.74 ± 0.93 vs. - 1.33 ± 1.15 diopter, P = 0.007) compared to the suture group. The flange group did not experience any instances of iris capture, while the suture group had iris capture in 10 eyes (29.4%; P < 0.001). In the flange group, all intraocular lenses remained centered, whereas in the suture group, they were decentered in 8 eyes (23.5%; P = 0.005). The double-flanged technique not only prevented iris capture and decentration of the intraocular lens but also reduced myopic shift by enhancing the stability of the intraocular lens.


Asunto(s)
Esclerótica , Técnicas de Sutura , Agudeza Visual , Humanos , Esclerótica/cirugía , Masculino , Femenino , Persona de Mediana Edad , Estudios Retrospectivos , Anciano , Resultado del Tratamiento , Suturas , Implantación de Lentes Intraoculares/métodos , Implantación de Lentes Intraoculares/efectos adversos , Procedimientos Quirúrgicos sin Sutura/métodos , Adulto , Complicaciones Posoperatorias/etiología
4.
Curr Oncol ; 31(4): 2278-2288, 2024 04 18.
Article in English | MEDLINE | ID: mdl-38668072

ABSTRACT

Background: Accurate detection of axillary lymph node (ALN) metastases in breast cancer is crucial for clinical staging and treatment planning. This study aims to develop a deep learning model using clinical implication-applied preprocessed computed tomography (CT) images to enhance the prediction of ALN metastasis in breast cancer patients. Methods: A total of 1128 axial CT images of ALN (538 malignant and 590 benign lymph nodes) were collected from 523 breast cancer patients who underwent preoperative CT scans between January 2012 and July 2022 at Hallym University Medical Center. To develop an optimal deep learning model for distinguishing metastatic ALN from benign ALN, a CT image preprocessing protocol with clinical implications and two different cropping methods (fixed size crop [FSC] method and adjustable square crop [ASC] method) were employed. The images were analyzed using three different convolutional neural network (CNN) architectures (ResNet, DenseNet, and EfficientNet). Ensemble methods combining the two best-performing CNN models, one from each cropping method, were applied to generate the final result. Results: For the two different cropping methods, DenseNet consistently outperformed ResNet and EfficientNet. The area under the receiver operating characteristic curve (AUROC) for DenseNet, using the FSC and ASC methods, was 0.934 and 0.939, respectively. The ensemble model, which combines the performance of the DenseNet121 architecture for both cropping methods, delivered outstanding results with an AUROC of 0.968, an accuracy of 0.938, a sensitivity of 0.980, and a specificity of 0.903. Furthermore, distinct trends observed in gradient-weighted class activation mapping images with the two cropping methods suggest that our deep learning model not only evaluates the lymph node itself, but also distinguishes subtler changes in lymph node margin and adjacent soft tissue, which often elude human interpretation.
Conclusions: This research demonstrates the promising performance of a deep learning model in accurately detecting malignant ALNs in breast cancer patients using CT images. The integration of clinical considerations into image processing and the utilization of ensemble methods further improved diagnostic precision.
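The ensemble step, averaging each model's predicted malignancy probabilities and scoring by AUROC, can be sketched as follows; the probabilities and labels are hypothetical, and the AUROC here uses the standard Mann-Whitney formulation rather than the authors' exact pipeline:

```python
def auroc(scores, labels):
    """AUROC via the Mann-Whitney formulation: the probability that a random
    positive is scored above a random negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical per-node malignancy probabilities from the FSC- and ASC-trained models
fsc_probs = [0.9, 0.2, 0.7, 0.4]
asc_probs = [0.8, 0.3, 0.6, 0.1]
labels = [1, 0, 1, 0]
ensembled = [(a + b) / 2 for a, b in zip(fsc_probs, asc_probs)]
print(auroc(ensembled, labels))  # -> 1.0
```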


Asunto(s)
Axila , Neoplasias de la Mama , Aprendizaje Profundo , Metástasis Linfática , Tomografía Computarizada por Rayos X , Humanos , Neoplasias de la Mama/patología , Neoplasias de la Mama/diagnóstico por imagen , Femenino , Metástasis Linfática/diagnóstico por imagen , Tomografía Computarizada por Rayos X/métodos , Persona de Mediana Edad , Ganglios Linfáticos/patología , Ganglios Linfáticos/diagnóstico por imagen , Adulto , Anciano
5.
Ann Coloproctol ; 40(1): 13-26, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38414120

ABSTRACT

PURPOSE: The integration of artificial intelligence (AI) and magnetic resonance imaging in rectal cancer has the potential to enhance diagnostic accuracy by identifying subtle patterns and aiding tumor delineation and lymph node assessment. According to our systematic review focusing on convolutional neural networks, AI-driven tumor staging and the prediction of treatment response facilitate tailored treatment strategies for patients with rectal cancer. METHODS: This paper summarizes the current landscape of AI in the imaging field of rectal cancer, emphasizing the performance reporting design based on the quality of the dataset, model performance, and external validation. RESULTS: AI-driven tumor segmentation has demonstrated promising results using various convolutional neural network models. AI-based predictions of staging and treatment response have exhibited potential as auxiliary tools for personalized treatment strategies. Some studies have indicated performance superior to that of conventional models in predicting microsatellite instability and KRAS status, offering noninvasive and cost-effective alternatives for identifying genetic mutations. CONCLUSION: Image-based AI studies for rectal cancer have shown acceptable diagnostic performance but face several challenges, including limited dataset sizes with standardized data, the need for multicenter studies, and the absence of oncologic relevance and external validation for clinical implementation. Overcoming these pitfalls and hurdles is essential for the feasible integration of AI models in clinical settings for rectal cancer, warranting further research.

6.
Sci Rep ; 13(1): 22237, 2023 12 14.
Article in English | MEDLINE | ID: mdl-38097669

ABSTRACT

Subconjunctival hemorrhage (SCH) is a benign eye condition that is often noticeable and leads to medical attention. Despite previous studies investigating the relationship between SCH and cardiovascular diseases, the relationship between SCH and bleeding disorders remains controversial. In order to gain further insight into this association, a nationwide cohort study was conducted using data from the National Health Insurance Service-National Sample Cohort version 2.0 from 2006 to 2015. The study defined SCH using a diagnostic code and compared the incidence and risk factors of intracerebral hemorrhage (ICH) and gastrointestinal (GI) bleeding in 36,772 SCH individuals and 147,088 propensity score (PS)-matched controls without SCH. The results showed that SCH was associated with a lower risk of ICH (HR = 0.76, 95% CI = 0.622-0.894, p = 0.002) and GI bleeding (HR = 0.816, 95% CI = 0.690-0.965, p = 0.018) when compared to the PS-matched control group. This reduced risk was more pronounced in females and in the older age group (≥ 50 years), but not observed in males or younger age groups. In conclusion, SCH does not increase the risk of ICH or major GI bleeding and is associated with a decreased incidence of these events in females and individuals aged ≥ 50 years.
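Propensity score matching of the kind used here (36,772 SCH cases to 147,088 controls, a 1:4 ratio) is often implemented as greedy nearest-neighbour matching on the estimated score; a minimal sketch, with hypothetical scores and a caliper chosen purely for illustration:

```python
def match_controls(case_ps, control_ps, ratio=4, caliper=0.05):
    """Greedy 1:ratio nearest-neighbour matching on the propensity score.
    Returns {case index: [matched control indices]}; each control is used once."""
    available = dict(enumerate(control_ps))
    matches = {}
    for i, ps in enumerate(case_ps):
        chosen = []
        for _ in range(ratio):
            if not available:
                break
            j = min(available, key=lambda k: abs(available[k] - ps))
            if abs(available[j] - ps) > caliper:
                break  # nothing close enough within the caliper: stop early
            chosen.append(j)
            del available[j]
        matches[i] = chosen
    return matches

# toy example: one case and five candidate controls; control 2 falls outside the caliper
print(match_controls([0.5], [0.48, 0.52, 0.9, 0.51, 0.49]))
```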


Asunto(s)
Enfermedades de la Conjuntiva , Hemorragia del Ojo , Trastornos Hemorrágicos , Masculino , Femenino , Humanos , Anciano , Estudios de Cohortes , Hemorragia del Ojo/epidemiología , Hemorragia del Ojo/etiología , Hemorragia Cerebral , Hemorragia Gastrointestinal/epidemiología , Hemorragia Gastrointestinal/etiología , Factores de Riesgo , Enfermedades de la Conjuntiva/epidemiología , Enfermedades de la Conjuntiva/etiología
7.
J Clin Med ; 12(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37240542

ABSTRACT

This study aimed to investigate the clinical features and risk factors of uveitis in Korean children with juvenile idiopathic arthritis (JIA). The medical records of JIA patients diagnosed between 2006 and 2019 and followed up for ≥1 year were retrospectively reviewed, and various factors including laboratory findings were analyzed for the risk of developing uveitis. JIA-associated uveitis (JIA-U) developed in 30 (9.8%) of 306 JIA patients. The mean age at the first uveitis development was 12.4 ± 5.7 years, which was 5.6 ± 3.7 years after the JIA diagnosis. The common JIA subtypes in the uveitis group were oligoarthritis-persistent (33.3%) and enthesitis-related arthritis (30.0%). The uveitis group had more baseline knee joint involvement (76.7% vs. 51.4%), which increased the risk of JIA-U during follow-up (p = 0.008). Patients with the oligoarthritis-persistent subtype developed JIA-U more frequently than those without it (20.0% vs. 7.8%; p = 0.016). The final visual acuity of JIA-U was tolerable (0.041 ± 0.103 logMAR). In Korean children with JIA, JIA-U may be associated with the oligoarthritis-persistent subtype and knee joint involvement.

8.
Drug Saf ; 46(7): 647-660, 2023 07.
Article in English | MEDLINE | ID: mdl-37243963

ABSTRACT

INTRODUCTION: With the availability of retrospective pharmacovigilance data, the common data model (CDM) has been identified as an efficient approach towards anonymized multicenter analysis; however, the establishment of a suitable model for individual medical systems and applications supporting their analysis is a challenge. OBJECTIVE: The aim of this study was to construct a specialized Korean CDM (K-CDM) for pharmacovigilance systems based on a clinical scenario to detect adverse drug reactions (ADRs). METHODS: De-identified patient records (n = 5,402,129) from 13 institutions were converted to the K-CDM. From 2005 to 2017, 37,698,535 visits, 39,910,849 conditions, 259,594,727 drug exposures, and 30,176,929 procedures were recorded. The K-CDM, which comprises three layers, is compatible with existing models and is potentially adaptable to extended clinical research. Local codes for electronic medical records (EMRs), including diagnosis, drug prescriptions, and procedures, were mapped using standard vocabulary. Distributed queries based on clinical scenarios were developed and applied to K-CDM through decentralized or distributed networks. RESULTS: Meta-analysis of drug relative risk ratios from ten institutions revealed that non-steroidal anti-inflammatory drugs (NSAIDs) increased the risk of gastrointestinal hemorrhage by twofold compared with aspirin, and non-vitamin K anticoagulants decreased cerebrovascular bleeding risk by 0.18-fold compared with warfarin. CONCLUSION: These results are similar to those from previous studies and are conducive to new research, thereby demonstrating the feasibility of K-CDM for pharmacovigilance. However, the low quality of original EMR data, incomplete mapping, and heterogeneity between institutions reduced the validity of the analysis, thus necessitating continuous calibration among researchers, clinicians, and the government.
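A meta-analysis of site-level relative risks can be sketched as fixed-effect inverse-variance pooling on the log scale; the per-site estimates and standard errors below are hypothetical, and the study's exact pooling method is not specified in the abstract:

```python
import math

def pooled_rr(rrs, ses):
    """Fixed-effect (inverse-variance) pooling of per-site log relative risks.
    `ses` are the standard errors of the log-RRs reported by each institution."""
    weights = [1 / se ** 2 for se in ses]
    log_pooled = sum(w * math.log(rr) for w, rr in zip(weights, rrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    lo = math.exp(log_pooled - 1.96 * se_pooled)
    hi = math.exp(log_pooled + 1.96 * se_pooled)
    return math.exp(log_pooled), (lo, hi)

# hypothetical site-level estimates for NSAID vs. aspirin GI-bleeding risk
rr, ci = pooled_rr([2.1, 1.9, 2.0], [0.15, 0.20, 0.10])
print(round(rr, 2), ci)
```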


Asunto(s)
Registros Electrónicos de Salud , Farmacovigilancia , Humanos , Sistemas de Registro de Reacción Adversa a Medicamentos , Electrónica , Estudios Multicéntricos como Asunto , República de Corea/epidemiología , Estudios Retrospectivos
9.
Sci Rep ; 13(1): 4103, 2023 03 13.
Article in English | MEDLINE | ID: mdl-36914694

ABSTRACT

Artificial intelligence as a screening tool for eyelid lesions will be helpful for early diagnosis of eyelid malignancies and proper decision-making. This study aimed to evaluate the performance of a deep learning model in differentiating eyelid lesions using clinical eyelid photographs in comparison with human ophthalmologists. We included 4954 photographs from 928 patients in this retrospective cross-sectional study. Images were classified into three categories: malignant lesion, benign lesion, and no lesion. Two pre-trained convolutional neural network (CNN) models, DenseNet-161 and EfficientNetV2-M architectures, were fine-tuned to classify images into three or two (malignant versus benign) categories. For a ternary classification, the mean diagnostic accuracies of the CNNs were 82.1% and 83.0% using DenseNet-161 and EfficientNetV2-M, respectively, which were inferior to those of the nine clinicians (87.0-89.5%). For the binary classification, the mean accuracies were 87.5% and 92.5% using DenseNet-161 and EfficientNetV2-M models, which were similar to those of the clinicians (85.8-90.0%). The mean AUCs of the two CNN models were 0.908 and 0.950, respectively. Gradient-weighted class activation mapping successfully highlighted the eyelid tumors on clinical photographs. Deep learning models showed a promising performance in discriminating malignant versus benign eyelid lesions on clinical photographs, reaching the level of human observers.


Asunto(s)
Aprendizaje Profundo , Humanos , Inteligencia Artificial , Estudios Retrospectivos , Estudios Transversales , Párpados
10.
Sci Rep ; 12(1): 12804, 2022 07 27.
Article in English | MEDLINE | ID: mdl-35896791

ABSTRACT

Colonoscopy is an effective tool to detect colorectal lesions and needs the support of pathological diagnosis. This study aimed to develop and validate deep learning models that automatically classify digital pathology images of colon lesions obtained from colonoscopy-related specimens. Histopathological slides of colonoscopic biopsy or resection specimens were collected and grouped into six classes by disease category: adenocarcinoma, tubular adenoma (TA), traditional serrated adenoma (TSA), sessile serrated adenoma (SSA), hyperplastic polyp (HP), and non-specific lesions. Digital photographs were taken of each pathological slide to fine-tune two pre-trained convolutional neural networks, and the model performances were evaluated. A total of 1865 images were included from 703 patients, of which 10% were used as a test dataset. For six-class classification, the mean diagnostic accuracy was 97.3% (95% confidence interval [CI], 96.0-98.6%) by DenseNet-161 and 95.9% (95% CI 94.1-97.7%) by EfficientNet-B7. The per-class area under the receiver operating characteristic curve (AUC) was highest for adenocarcinoma (1.000; 95% CI 0.999-1.000) by DenseNet-161 and TSA (1.000; 95% CI 1.000-1.000) by EfficientNet-B7. The lowest per-class AUCs were still excellent: 0.991 (95% CI 0.983-0.999) for HP by DenseNet-161 and 0.995 for SSA (95% CI 0.992-0.998) by EfficientNet-B7. Deep learning models achieved excellent performances for discriminating adenocarcinoma from non-adenocarcinoma lesions with an AUC of 0.995 or 0.998. The pathognomonic area for each class was appropriately highlighted in digital images by saliency map, particularly focusing on epithelial lesions. Deep learning models might be a useful tool to assist in the diagnosis of pathology slides from colonoscopy-related specimens.


Asunto(s)
Adenocarcinoma , Adenoma , Pólipos del Colon , Neoplasias Colorrectales , Aprendizaje Profundo , Adenocarcinoma/diagnóstico por imagen , Adenocarcinoma/patología , Adenoma/diagnóstico por imagen , Adenoma/patología , Pólipos del Colon/diagnóstico por imagen , Pólipos del Colon/patología , Colonoscopía/métodos , Neoplasias Colorrectales/diagnóstico por imagen , Neoplasias Colorrectales/patología , Humanos
11.
J Clin Med ; 11(12)2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35743380

ABSTRACT

PURPOSE: We investigated whether a deep learning algorithm applied to retinal fundoscopic images could predict cerebral white matter hyperintensity (WMH), as represented by a modified Fazekas scale (FS), on brain magnetic resonance imaging (MRI). METHODS: Participants who had undergone brain MRI and health-screening fundus photography at Hallym University Sacred Heart Hospital between 2010 and 2020 were consecutively included. The subjects were divided based on the presence of WMH, then classified into three groups according to the FS grade (0 vs. 1 vs. 2+) using age matching. Two pre-trained convolutional neural networks were fine-tuned and evaluated for prediction performance using 10-fold cross-validation. RESULTS: A total of 3726 fundus photographs from 1892 subjects were included, of which 905 fundus photographs from 462 subjects were included in the age-matched balanced dataset. In predicting the presence of WMH, the mean area under the receiver operating characteristic curve was 0.736 ± 0.030 for DenseNet-201 and 0.724 ± 0.026 for EfficientNet-B7. For the prediction of FS grade, the mean accuracies reached 41.4 ± 5.7% with DenseNet-201 and 39.6 ± 5.6% with EfficientNet-B7. The deep learning models focused on the macula and retinal vasculature to detect an FS of 2+. CONCLUSIONS: Cerebral WMH might be partially predicted by non-invasive fundus photography via deep learning, which may suggest an eye-brain association.
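The 10-fold cross-validation used for evaluation keeps each FS grade represented in every fold; a minimal stratified k-fold sketch (the labels are hypothetical, and k=3 is used only to keep the toy example small):

```python
import random
from collections import defaultdict

def stratified_kfold(labels, k=10, seed=0):
    """Yield (train_idx, val_idx) pairs with each class spread evenly over k folds."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        rng.shuffle(idxs)                    # avoid ordering artifacts
        for pos, i in enumerate(idxs):
            folds[pos % k].append(i)         # round-robin keeps classes balanced
    for f in range(k):
        val = folds[f]
        train = [i for g in range(k) if g != f for i in folds[g]]
        yield train, val

# hypothetical labels: FS grades 0 / 1 / 2+ for 30 fundus photographs
labels = [0] * 12 + [1] * 9 + [2] * 9
for train, val in stratified_kfold(labels, k=3):
    print(len(train), len(val))  # prints "20 10" for each of the 3 folds
```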

12.
J Clin Med ; 11(9)2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35566432

ABSTRACT

PURPOSE: We aimed to investigate orbital wall fracture incidence and risk factors in the general Korean population. METHODS: The Korea National Health Insurance Service-National Sample Cohort dataset was analyzed to find subjects with an orbital wall fracture between 2011 and 2015 (based on the diagnosis code) and to identify incident cases involving a preceding disease-free period of 8 years. The incidence of orbital wall fracture in the general population was estimated, and the type of orbital wall fracture was categorized. Sociodemographic risk factors were also examined using Cox regression analysis. RESULTS: Among 1,080,309 cohort subjects, 2415 individuals with newly diagnosed orbital wall fractures were identified. The overall incidence of orbital wall fractures was estimated as 46.19 (95% CI: 44.37-48.06) per 100,000 person-years. The incidence was high at 10-29 and 80+ years old and showed a male predominance with an average male-to-female ratio of 3.33. The most common type was isolated inferior orbital wall fracture (59.4%), followed by isolated medial orbital wall fracture (23.7%), combination fracture (15.0%), and naso-orbito-ethmoid fracture (1.5%). Of the fracture patients, 648 subjects (26.8%) underwent orbital wall fracture repair surgeries. Male sex, rural residence, and low income were associated with an increased risk of orbital wall fractures. CONCLUSIONS: The incidence of orbital wall fractures in Korea varied according to age groups and was positively associated with male sex, rural residency, and low income. The most common fracture type was an isolated inferior orbital wall fracture.
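An incidence rate per 100,000 person-years with a confidence interval can be computed as below; the person-years figure is back-solved from the reported 46.19 rate purely for illustration, and the normal approximation used here will differ slightly from an exact Poisson CI:

```python
import math

def incidence_rate(cases, person_years, per=100_000):
    """Crude incidence rate with a normal-approximation 95% CI on the rate scale."""
    rate = cases / person_years
    se = math.sqrt(cases) / person_years   # Poisson SE of the event count
    return rate * per, (rate - 1.96 * se) * per, (rate + 1.96 * se) * per

# person-years below is back-solved from the reported rate, purely for illustration
rate, lo, hi = incidence_rate(2415, 2415 / 46.19 * 100_000)
print(round(rate, 2), round(lo, 2), round(hi, 2))
```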

13.
Diagnostics (Basel) ; 12(2)2022 Feb 21.
Article in English | MEDLINE | ID: mdl-35204638

ABSTRACT

Artificial intelligence has enabled the automated diagnosis of several cancer types. We aimed to develop and validate deep learning models that automatically classify cervical intraepithelial neoplasia (CIN) based on histological images. Microscopic images of CIN3, CIN2, CIN1, and non-neoplasm were obtained. The performances of two pre-trained convolutional neural network (CNN) models adopting DenseNet-161 and EfficientNet-B7 architectures were evaluated and compared with those of pathologists. The dataset comprised 1106 images from 588 patients; images of 10% of patients were included in the test dataset. The mean accuracies for the four-class classification were 88.5% (95% confidence interval [CI], 86.3-90.6%) by DenseNet-161 and 89.5% (95% CI, 83.3-95.7%) by EfficientNet-B7, which were similar to human performance (93.2% and 89.7%). The mean per-class area under the receiver operating characteristic curve values by EfficientNet-B7 were 0.996, 0.990, 0.971, and 0.956 in the non-neoplasm, CIN3, CIN1, and CIN2 groups, respectively. The class activation map detected the diagnostic area for CIN lesions. In the three-class classification of CIN2 and CIN3 as one group, the mean accuracies of DenseNet-161 and EfficientNet-B7 increased to 91.4% (95% CI, 88.8-94.0%), and 92.6% (95% CI, 90.4-94.9%), respectively. CNN-based deep learning is a promising tool for diagnosing CIN lesions on digital histological images.

14.
Clin Exp Emerg Med ; 8(2): 120-127, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34237817

ABSTRACT

OBJECTIVE: Recent studies have suggested that deep-learning models can satisfactorily assist in fracture diagnosis. We aimed to evaluate the performance of two such models in wrist fracture detection. METHODS: We collected image data of patients who visited the emergency department with wrist trauma. A dataset extracted from January 2018 to May 2020 was split into training (90%) and test (10%) datasets, and two types of convolutional neural networks (i.e., DenseNet-161 and ResNet-152) were trained to detect wrist fractures. Gradient-weighted class activation mapping was used to highlight the regions of radiograph scans that contributed to the decision of the model. Performance of the convolutional neural network models was evaluated using the area under the receiver operating characteristic curve. RESULTS: For model training, we used 4,551 radiographs from 798 patients and 4,443 radiographs from 1,481 patients with and without fractures, respectively. The remaining 10% (300 radiographs from 100 patients with fractures and 690 radiographs from 230 patients without fractures) was used as a test dataset. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of DenseNet-161 and ResNet-152 in the test dataset were 90.3%, 90.3%, 80.3%, 95.6%, and 90.3% and 88.6%, 88.4%, 76.9%, 94.7%, and 88.5%, respectively. The area under the receiver operating characteristic curves of DenseNet-161 and ResNet-152 for wrist fracture detection were 0.962 and 0.947, respectively. CONCLUSION: We demonstrated that DenseNet-161 and ResNet-152 models could help detect wrist fractures in the emergency room with satisfactory performance.
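The reported sensitivity, specificity, positive/negative predictive values, and accuracy all derive from one 2×2 confusion matrix; a sketch using counts approximately back-computed from the DenseNet-161 test-set figures (the exact counts are an assumption, not stated in the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic metrics from a 2x2 confusion matrix."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# counts roughly reconstructed from 300 fracture / 690 non-fracture test radiographs
m = diagnostic_metrics(tp=271, fp=67, fn=29, tn=623)
print({k: round(v, 3) for k, v in m.items()})
```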

15.
Sci Rep ; 11(1): 13850, 2021 07 05.
Article in English | MEDLINE | ID: mdl-34226638

ABSTRACT

Uncontrolled diabetes has been associated with progression of diabetic retinopathy (DR) in several studies. Therefore, we aimed to investigate systemic and ophthalmic factors related to worsening of DR even after completion of panretinal photocoagulation (PRP). We retrospectively reviewed DR patients who had completed PRP in at least one eye with a 3-year follow-up. A total of 243 eyes of 243 subjects (mean age 52.6 ± 11.6 years) were enrolled. Among them, 52 patients (21.4%) showed progression of DR after PRP (progression group), and the other 191 (78.6%) patients had stable DR (non-progression group). The progression group had higher proportion of proliferative DR (P = 0.019); lower baseline visual acuity (P < 0.001); and higher platelet count (P = 0.048), hemoglobin (P = 0.044), and hematocrit (P = 0.042) than the non-progression group. In the multivariate logistic regression analysis for progression of DR, baseline visual acuity (HR: 0.053, P < 0.001) and platelet count (HR: 1.215, P = 0.031) were identified as risk factors for progression. Consequently, we propose that patients with low visual acuity or high platelet count are more likely to have progressive DR despite PRP and require careful observation. Also, the evaluation of hemorheological factors including platelet counts before PRP can be considered useful in predicting the prognosis of DR.


Asunto(s)
Retinopatía Diabética/epidemiología , Coagulación con Láser/efectos adversos , Fotocoagulación/efectos adversos , Retina/diagnóstico por imagen , Adulto , Coroides/patología , Coroides/efectos de la radiación , Retinopatía Diabética/diagnóstico por imagen , Retinopatía Diabética/etiología , Retinopatía Diabética/patología , Progresión de la Enfermedad , Femenino , Humanos , Masculino , Persona de Mediana Edad , Recuento de Plaquetas , Retina/patología , Retina/efectos de la radiación , Agudeza Visual/fisiología , Agudeza Visual/efectos de la radiación
16.
Medicine (Baltimore) ; 100(7): e24756, 2021 Feb 19.
Article in English | MEDLINE | ID: mdl-33607821

ABSTRACT

ABSTRACT: This study was conducted to develop a convolutional neural network (CNN)-based model to predict the sex and age of patients by identifying unique unknown features from paranasal sinus (PNS) X-ray images. We employed a retrospective study design and used anonymized patient imaging data. Two CNN models, adopting ResNet-152 and DenseNet-169 architectures, were trained to predict sex and age groups (20-39, 40-59, 60+ years). The area under the curve (AUC), algorithm accuracy, sensitivity, and specificity were assessed. Class-activation map (CAM) was used to detect deterministic areas. A total of 4160 PNS X-ray images were collected from 4160 patients. The PNS X-ray images of patients aged ≥20 years were retrieved from the picture archiving and communication database system of our institution. The classification performances in predicting the sex (male vs female) and 3 age groups (20-39, 40-59, 60+ years) for each established CNN model were evaluated. For sex prediction, ResNet-152 performed slightly better (accuracy = 98.0%, sensitivity = 96.9%, specificity = 98.7%, and AUC = 0.939) than DenseNet-169. CAM indicated that maxillary sinuses (males) and ethmoid sinuses (females) were major factors in identifying sex. Meanwhile, for age prediction, the DenseNet-169 model was slightly more accurate in predicting age groups (77.6 ± 1.5% vs. 76.3 ± 1.1%). CAM suggested that the maxillary sinus and the periodontal area were primary factors in identifying age groups. Our deep learning model could predict sex and age based on PNS X-ray images. Therefore, it can assist in reducing the risk of patient misidentification in clinics.


Asunto(s)
Aprendizaje Profundo/estadística & datos numéricos , Senos Paranasales/diagnóstico por imagen , Radiografía/métodos , Adulto , Anciano , Algoritmos , Área Bajo la Curva , Manejo de Datos , Bases de Datos Factuales , Femenino , Humanos , Masculino , Seno Maxilar/diagnóstico por imagen , Persona de Mediana Edad , Redes Neurales de la Computación , Valor Predictivo de las Pruebas , Estudios Retrospectivos , Sensibilidad y Especificidad
17.
Eye (Lond) ; 35(11): 3012-3019, 2021 11.
Article in English | MEDLINE | ID: mdl-33414536

ABSTRACT

AIMS: To investigate the incidence and presumed aetiologies of fourth cranial nerve (CN4) palsy in Korea. METHODS: Using the nationally representative dataset of the Korea National Health Insurance Service-National Sample Cohort from 2006 to 2015, newly developed CN4 palsy cases confirmed by a preceding disease-free period of ≥4 years were identified. The presumed aetiology of CN4 palsy was evaluated based on comorbidities around the CN4 palsy diagnosis. RESULTS: Among the 1,108,292 cohort subjects, CN4 palsy newly developed in 390 patients during the 10-year follow-up, and the overall incidence of CN4 palsy was 3.74 per 100,000 person-years (95% confidence interval, 3.38-4.12). The incidence of CN4 palsy showed a male preponderance in nearly all age groups, and the overall male-to-female ratio was 2.30. A bimodality by age-group was observed, with two peaks at 0-4 years and at 75-79 years. The most common presumed aetiologies were vascular (51.3%), congenital (20.0%), and idiopathic (18.5%). The incidence rate of a first peak for 0-4 years of age was 6.17 per 100,000 person-years, and cases in this group were congenital. The second peak incidence rate for 75-79 years of age was 11.81 per 100,000 person-years, and the main cause was vascular disease. Strabismus surgery was performed in 48 (12.3%) patients, most of whom (72.9%) were younger than 20 years. CONCLUSION: The incidence of CN4 palsy has a male predominance in Koreans and shows bimodal peaks by age. The aetiology of CN4 palsy varies according to age-groups.


Asunto(s)
Enfermedades del Nervio Troclear , Anciano de 80 o más Años , Preescolar , Estudios de Cohortes , Femenino , Humanos , Incidencia , Lactante , Recién Nacido , Masculino , República de Corea/epidemiología , Estudios Retrospectivos , Enfermedades del Nervio Troclear/diagnóstico , Enfermedades del Nervio Troclear/epidemiología , Enfermedades del Nervio Troclear/etiología
18.
J Pers Med ; 10(4)2020 Nov 06.
Article in English | MEDLINE | ID: mdl-33172076

ABSTRACT

Mammography plays an important role in screening for breast cancer among females, and artificial intelligence has enabled the automated detection of diseases on medical images. This study aimed to develop a deep learning model that detects breast cancer in digital mammograms of various densities and to evaluate its performance against previous studies. From 1501 subjects who underwent digital mammography between February 2007 and May 2015, craniocaudal and mediolateral view mammograms were included and concatenated for each breast, ultimately producing 3002 merged images. Two convolutional neural networks were trained to detect any malignant lesion on the merged images. Performance was tested using 301 merged images from 284 subjects and compared to a meta-analysis covering 12 previous deep learning studies. The mean area under the receiver-operating characteristic curve (AUC) for detecting breast cancer in each merged mammogram was 0.952 ± 0.005 with DenseNet-169 and 0.954 ± 0.020 with EfficientNet-B5. Performance for malignancy detection decreased as breast density increased (density A, mean AUC = 0.984 vs. density D, mean AUC = 0.902 by DenseNet-169). When patients' age was used as a covariate for malignancy detection, performance showed little change (mean AUC, 0.953 ± 0.005). The mean sensitivity and specificity of DenseNet-169 (87% and 88%, respectively) surpassed the mean values (81% and 82%, respectively) obtained in the meta-analysis. Deep learning can work efficiently in screening for breast cancer in digital mammograms of various densities, with the greatest benefit in breasts of lower parenchymal density.
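Concatenating the craniocaudal and mediolateral views of one breast into a single network input, as described above, can be sketched with array concatenation. The image dimensions and the side-by-side layout are assumptions for illustration; the abstract does not specify them:

```python
import numpy as np

# Stand-in grayscale views of one breast; real inputs would be the
# preprocessed craniocaudal (CC) and mediolateral (MLO) mammograms.
H, W = 512, 256
cc = np.zeros((H, W), dtype=np.float32)
mlo = np.zeros((H, W), dtype=np.float32)

# Merge the two views along the width axis into one model input.
merged = np.concatenate([cc, mlo], axis=1)
print(merged.shape)  # (512, 512)
```

One merged array per breast yields the 2-views-in-1 input described in the abstract (3002 merged images from 1501 subjects).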

19.
Sci Rep ; 10(1): 14803, 2020 09 09.
Article in English | MEDLINE | ID: mdl-32908182

ABSTRACT

Febrile neutropenia (FN) is one of the most concerning complications of chemotherapy, and its prediction remains difficult. This study aimed to identify risk factors for FN and to build prediction models using machine learning algorithms. Medical records of hospitalized patients who underwent chemotherapy after surgery for breast cancer between May 2002 and September 2018 were selectively reviewed for model development. Demographic, clinical, pathological, and therapeutic data were analyzed to identify risk factors for FN. Using machine learning algorithms, prediction models were developed and their performance evaluated. Of 933 selected inpatients with a mean age of 51.8 ± 10.7 years, FN developed in 409 (43.8%) patients. FN incidence differed significantly according to age, staging, taxane-based regimen, and blood count 5 days after chemotherapy. A logistic regression model built on these findings achieved an area under the curve (AUC) of 0.870; machine learning improved the AUC to 0.908. Machine learning thus improves the prediction of FN in patients undergoing chemotherapy for breast cancer compared to the conventional statistical model. In these high-risk patients, primary prophylaxis with granulocyte colony-stimulating factor could be considered.
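The abstract compares models by AUC (0.870 for logistic regression vs. 0.908 for machine learning). As a reminder of what that metric measures, here is a minimal sketch of AUC computed directly as the Mann-Whitney probability that a randomly chosen positive (FN) case is scored above a randomly chosen negative case; the `auc` helper and the scores are illustrative, not the study's:

```python
def auc(scores_pos, scores_neg):
    """AUC as the probability that a positive case outranks a
    negative case, counting ties as half a win (Mann-Whitney U)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical predicted FN risks for 3 FN and 3 non-FN patients:
print(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]))  # 8 wins of 9 pairs ≈ 0.889
```

An AUC of 0.908 therefore means the model ranks a true FN case above a non-FN case in roughly 91% of such pairs.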


Subject(s)
Breast Neoplasms/drug therapy , Febrile Neutropenia/epidemiology , Aged , Algorithms , Antineoplastic Agents/adverse effects , Antineoplastic Agents/therapeutic use , Antineoplastic Combined Chemotherapy Protocols , Bridged Aromatic Hydrocarbons/adverse effects , Bridged Aromatic Hydrocarbons/therapeutic use , Female , Granulocyte Colony-Stimulating Factor/metabolism , Humans , Incidence , Inpatients/statistics & numerical data , Logistic Models , Machine Learning , Male , Middle Aged , Republic of Korea , Risk Factors , Taxoids/adverse effects , Taxoids/therapeutic use
20.
Sci Rep ; 10(1): 13652, 2020 08 12.
Article in English | MEDLINE | ID: mdl-32788635

ABSTRACT

Colposcopy is widely used to detect cervical cancers, but developing countries lack the experienced physicians needed for an accurate diagnosis. Artificial intelligence (AI) has recently shown remarkable promise in computer-aided diagnosis. In this study, we developed and validated deep learning models to automatically classify cervical neoplasms on colposcopic photographs. Pre-trained convolutional neural networks were fine-tuned for two grading systems: the cervical intraepithelial neoplasia (CIN) system and the Lower Anogenital Squamous Terminology (LAST) system. The multi-class classification accuracies of the networks for the CIN system in the test dataset were 48.6 ± 1.3% by Inception-ResNet-v2 and 51.7 ± 5.2% by ResNet-152; the accuracies for the LAST system were 71.8 ± 1.8% and 74.7 ± 1.8%, respectively. The area under the curve (AUC) for discriminating high-risk from low-risk lesions by ResNet-152 was 0.781 ± 0.020 for the CIN system and 0.708 ± 0.024 for the LAST system. Lesions requiring biopsy were also detected efficiently (AUC, 0.947 ± 0.030 by ResNet-152) and were highlighted meaningfully on attention maps. These results indicate the potential of AI for automated reading of colposcopic photographs.
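The accuracies above are reported as mean ± standard deviation across repeated evaluations. A minimal sketch of how such a figure is produced from per-run accuracies; the run values here are invented for illustration, not taken from the study:

```python
import statistics

# Hypothetical test accuracies from five training/evaluation runs
runs = [0.47, 0.49, 0.50, 0.48, 0.49]

mean = statistics.mean(runs)    # arithmetic mean across runs
sd = statistics.stdev(runs)     # sample standard deviation (n - 1)
print(f"{mean * 100:.1f} ± {sd * 100:.1f}%")  # 48.6 ± 1.1%
```

Note that `statistics.stdev` uses the sample (n - 1) denominator; `statistics.pstdev` would give the population figure, so which one a paper reports matters for small run counts.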


Subject(s)
Colposcopy/methods , Deep Learning , Diagnosis, Computer-Assisted/methods , Neural Networks, Computer , Uterine Cervical Dysplasia/diagnosis , Uterine Cervical Neoplasms/classification , Uterine Cervical Neoplasms/diagnosis , Adolescent , Adult , Aged , Aged, 80 and over , Artificial Intelligence , Case-Control Studies , Female , Humans , Middle Aged , Retrospective Studies , Young Adult