Results 1 - 20 of 51
1.
Curr Oncol ; 31(4): 2278-2288, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38668072

ABSTRACT

Background: Accurate detection of axillary lymph node (ALN) metastases in breast cancer is crucial for clinical staging and treatment planning. This study aims to develop a deep learning model using clinical implication-applied preprocessed computed tomography (CT) images to enhance the prediction of ALN metastasis in breast cancer patients. Methods: A total of 1128 axial CT images of ALN (538 malignant and 590 benign lymph nodes) were collected from 523 breast cancer patients who underwent preoperative CT scans between January 2012 and July 2022 at Hallym University Medical Center. To develop an optimal deep learning model for distinguishing metastatic ALN from benign ALN, a CT image preprocessing protocol with clinical implications and two different cropping methods (fixed size crop [FSC] method and adjustable square crop [ASC] method) were employed. The images were analyzed using three different convolutional neural network (CNN) architectures (ResNet, DenseNet, and EfficientNet). An ensemble method combining the two best-performing CNN architectures, one from each cropping method, was applied to generate the final result. Results: For the two different cropping methods, DenseNet consistently outperformed ResNet and EfficientNet. The area under the receiver operating characteristic curve (AUROC) for DenseNet, using the FSC and ASC methods, was 0.934 and 0.939, respectively. The ensemble model, which combines the outputs of the DenseNet121 architecture for both cropping methods, delivered outstanding results with an AUROC of 0.968, an accuracy of 0.938, a sensitivity of 0.980, and a specificity of 0.903. Furthermore, distinct trends observed in gradient-weighted class activation mapping images with the two cropping methods suggest that our deep learning model not only evaluates the lymph node itself but also detects subtler changes in the lymph node margin and adjacent soft tissue, which often elude human interpretation.
Conclusions: This research demonstrates the promising performance of a deep learning model in accurately detecting malignant ALNs in breast cancer patients using CT images. The integration of clinical considerations into image processing and the utilization of ensemble methods further improved diagnostic precision.
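The abstract does not give implementation details for the ensemble step; a minimal sketch of the soft-voting scheme it describes (averaging the malignancy probabilities of the FSC- and ASC-trained DenseNet121 models, then scoring by AUROC) might look like the following, with all data and variable names hypothetical:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def ensemble_auroc(p_fsc, p_asc, labels):
    # Soft voting: average the malignancy probabilities produced by the
    # FSC-trained and ASC-trained models, then score the ensemble by AUROC.
    p_ens = (np.asarray(p_fsc, dtype=float) + np.asarray(p_asc, dtype=float)) / 2.0
    return p_ens, roc_auc_score(labels, p_ens)

# Toy probabilities for six lymph nodes (1 = malignant, 0 = benign).
labels = [1, 1, 1, 0, 0, 0]
p_fsc = [0.9, 0.7, 0.4, 0.6, 0.2, 0.1]   # fixed-size-crop model
p_asc = [0.8, 0.6, 0.7, 0.3, 0.4, 0.2]   # adjustable-square-crop model
p_ens, auc = ensemble_auroc(p_fsc, p_asc, labels)
```

In this toy example the FSC model alone misranks one node, while averaging the two models' outputs ranks all nodes correctly; the same mechanism lets an ensemble smooth out errors that individual models make.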


Subjects
Axilla; Breast Neoplasms; Deep Learning; Lymphatic Metastasis; Tomography, X-Ray Computed; Humans; Breast Neoplasms/pathology; Breast Neoplasms/diagnostic imaging; Female; Lymphatic Metastasis/diagnostic imaging; Tomography, X-Ray Computed/methods; Middle Aged; Lymph Nodes/pathology; Lymph Nodes/diagnostic imaging; Adult; Aged
2.
Ann Coloproctol ; 40(1): 13-26, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38414120

ABSTRACT

PURPOSE: The integration of artificial intelligence (AI) and magnetic resonance imaging in rectal cancer has the potential to enhance diagnostic accuracy by identifying subtle patterns and aiding tumor delineation and lymph node assessment. According to our systematic review focusing on convolutional neural networks, AI-driven tumor staging and the prediction of treatment response facilitate tailored treatment strategies for patients with rectal cancer. METHODS: This paper summarizes the current landscape of AI in the imaging field of rectal cancer, emphasizing the performance reporting design based on the quality of the dataset, model performance, and external validation. RESULTS: AI-driven tumor segmentation has demonstrated promising results using various convolutional neural network models. AI-based predictions of staging and treatment response have exhibited potential as auxiliary tools for personalized treatment strategies. Some studies have indicated performance superior to that of conventional models in predicting microsatellite instability and KRAS status, offering noninvasive and cost-effective alternatives for identifying genetic mutations. CONCLUSION: Image-based AI studies for rectal cancer have shown acceptable diagnostic performance but face several challenges, including limited dataset sizes with standardized data, the need for multicenter studies, and the absence of oncologic relevance and external validation for clinical implementation. Overcoming these pitfalls and hurdles is essential for the feasible integration of AI models in clinical settings for rectal cancer, warranting further research.

3.
Sci Rep ; 13(1): 22237, 2023 12 14.
Article in English | MEDLINE | ID: mdl-38097669

ABSTRACT

Subconjunctival hemorrhage (SCH) is a benign eye condition that is often noticeable and leads to medical attention. Despite previous studies investigating the relationship between SCH and cardiovascular diseases, the relationship between SCH and bleeding disorders remains controversial. In order to gain further insight into this association, a nationwide cohort study was conducted using data from the National Health Insurance Service-National Sample Cohort version 2.0 from 2006 to 2015. The study defined SCH using a diagnostic code and compared the incidence and risk factors of intracerebral hemorrhage (ICH) and gastrointestinal (GI) bleeding in 36,772 SCH individuals and 147,088 propensity score (PS)-matched controls without SCH. The results showed that SCH was associated with a lower risk of ICH (HR = 0.76, 95% CI = 0.622-0.894, p = 0.002) and GI bleeding (HR = 0.816, 95% CI = 0.690-0.965, p = 0.018) when compared to the PS-matched control group. This reduced risk was more pronounced in females and in the older age group (≥ 50 years), but was not observed in males or younger age groups. In conclusion, SCH does not increase the risk of ICH or major GI bleeding and is associated with a decreased incidence in females and individuals aged ≥ 50 years.
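The abstract does not state how the 1:4 propensity score matching was performed; purely as an illustration (the study's actual procedure is unknown, and these scores are invented), greedy nearest-neighbour matching on an already-estimated propensity score could be sketched as:

```python
import numpy as np

def greedy_match(ps_treated, ps_control, ratio=4):
    """Greedy 1:`ratio` nearest-neighbour matching on the propensity
    score; each control can be matched at most once."""
    ps_control = np.asarray(ps_control, dtype=float)
    used = np.zeros(len(ps_control), dtype=bool)
    matches = {}
    for i, p in enumerate(ps_treated):
        d = np.abs(ps_control - p)
        d[used] = np.inf                  # skip already-matched controls
        picked = np.argsort(d)[:ratio]
        used[picked] = True
        matches[i] = sorted(picked.tolist())
    return matches

# Toy scores: 2 exposed subjects, 10 candidate controls, 1:4 matching.
treated = [0.30, 0.70]
controls = [0.10, 0.28, 0.31, 0.33, 0.29, 0.72, 0.69, 0.71, 0.68, 0.95]
m = greedy_match(treated, controls)
```

Greedy matching is order-dependent; optimal (e.g. network-flow) matching is a common alternative when control overlap is tight.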


Subjects
Conjunctival Diseases; Eye Hemorrhage; Hemorrhagic Disorders; Male; Female; Humans; Aged; Cohort Studies; Eye Hemorrhage/epidemiology; Eye Hemorrhage/etiology; Cerebral Hemorrhage; Gastrointestinal Hemorrhage/epidemiology; Gastrointestinal Hemorrhage/etiology; Risk Factors; Conjunctival Diseases/epidemiology; Conjunctival Diseases/etiology
4.
J Clin Med ; 12(10)2023 May 12.
Article in English | MEDLINE | ID: mdl-37240542

ABSTRACT

This study aimed to investigate the clinical features and risk factors of uveitis in Korean children with juvenile idiopathic arthritis (JIA). The medical records of JIA patients diagnosed between 2006 and 2019 and followed up for ≥1 year were retrospectively reviewed, and various factors including laboratory findings were analyzed for the risk of developing uveitis. JIA-associated uveitis (JIA-U) developed in 30 (9.8%) of 306 JIA patients. The mean age at the first uveitis development was 12.4 ± 5.7 years, which was 5.6 ± 3.7 years after the JIA diagnosis. The common JIA subtypes in the uveitis group were oligoarthritis-persistent (33.3%) and enthesitis-related arthritis (30.0%). The uveitis group had more baseline knee joint involvement (76.7% vs. 51.4%), which increased the risk of JIA-U during follow-up (p = 0.008). Patients with the oligoarthritis-persistent subtype developed JIA-U more frequently than those without it (20.0% vs. 7.8%; p = 0.016). The final visual acuity of JIA-U was tolerable (0.041 ± 0.103 logMAR). In Korean children with JIA, JIA-U may be associated with the oligoarthritis-persistent subtype and knee joint involvement.

5.
Drug Saf ; 46(7): 647-660, 2023 07.
Article in English | MEDLINE | ID: mdl-37243963

ABSTRACT

INTRODUCTION: With the availability of retrospective pharmacovigilance data, the common data model (CDM) has been identified as an efficient approach towards anonymized multicenter analysis; however, the establishment of a suitable model for individual medical systems and applications supporting their analysis is a challenge. OBJECTIVE: The aim of this study was to construct a specialized Korean CDM (K-CDM) for pharmacovigilance systems based on a clinical scenario to detect adverse drug reactions (ADRs). METHODS: De-identified patient records (n = 5,402,129) from 13 institutions were converted to the K-CDM. From 2005 to 2017, 37,698,535 visits, 39,910,849 conditions, 259,594,727 drug exposures, and 30,176,929 procedures were recorded. The K-CDM, which comprises three layers, is compatible with existing models and is potentially adaptable to extended clinical research. Local codes for electronic medical records (EMRs), including diagnoses, drug prescriptions, and procedures, were mapped using standard vocabulary. Distributed queries based on clinical scenarios were developed and applied to the K-CDM through decentralized or distributed networks. RESULTS: Meta-analysis of drug relative risk ratios from ten institutions revealed that non-steroidal anti-inflammatory drugs (NSAIDs) increased the risk of gastrointestinal hemorrhage by twofold compared with aspirin, and non-vitamin K anticoagulants decreased cerebrovascular bleeding risk by 0.18-fold compared with warfarin. CONCLUSION: These results are similar to those from previous studies and are conducive to new research, thereby demonstrating the feasibility of the K-CDM for pharmacovigilance. However, the low quality of original EMR data, incomplete mapping, and heterogeneity between institutions reduced the validity of the analysis, thus necessitating continuous calibration among researchers, clinicians, and the government.
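The pooling method behind the meta-analysis of per-institution relative risks is not specified in the abstract; a standard fixed-effect (inverse-variance) pooling of log relative risks, shown here with invented per-site numbers, would look like:

```python
import math

def pooled_log_rr(log_rrs, ses):
    """Fixed-effect (inverse-variance) pooling of per-institution log
    relative risks, as commonly used to combine distributed CDM results."""
    w = [1.0 / se ** 2 for se in ses]                 # inverse-variance weights
    pooled = sum(wi * lr for wi, lr in zip(w, log_rrs)) / sum(w)
    se_pooled = math.sqrt(1.0 / sum(w))
    return pooled, se_pooled

# Toy: three sites each reporting RR ~ 2 for NSAID GI bleeding vs aspirin.
log_rrs = [math.log(1.8), math.log(2.1), math.log(2.2)]
ses = [0.20, 0.25, 0.40]
lp, se = pooled_log_rr(log_rrs, ses)
rr = math.exp(lp)                                     # pooled relative risk
```

When between-site heterogeneity is material (as the conclusion warns), a random-effects model such as DerSimonian-Laird would replace the fixed-effect weights.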


Subjects
Electronic Health Records; Pharmacovigilance; Humans; Adverse Drug Reaction Reporting Systems; Electronics; Multicenter Studies as Topic; Republic of Korea/epidemiology; Retrospective Studies
6.
Sci Rep ; 13(1): 4103, 2023 03 13.
Article in English | MEDLINE | ID: mdl-36914694

ABSTRACT

Artificial intelligence as a screening tool for eyelid lesions will be helpful for early diagnosis of eyelid malignancies and proper decision-making. This study aimed to evaluate the performance of a deep learning model in differentiating eyelid lesions using clinical eyelid photographs in comparison with human ophthalmologists. We included 4954 photographs from 928 patients in this retrospective cross-sectional study. Images were classified into three categories: malignant lesion, benign lesion, and no lesion. Two pre-trained convolutional neural network (CNN) models, DenseNet-161 and EfficientNetV2-M architectures, were fine-tuned to classify images into three or two (malignant versus benign) categories. For the ternary classification, the mean diagnostic accuracies of the CNNs were 82.1% and 83.0% using DenseNet-161 and EfficientNetV2-M, respectively, which were inferior to those of the nine clinicians (87.0-89.5%). For the binary classification, the mean accuracies were 87.5% and 92.5% using the DenseNet-161 and EfficientNetV2-M models, which were similar to those of the clinicians (85.8-90.0%). The mean AUCs of the two CNN models were 0.908 and 0.950, respectively. Gradient-weighted class activation mapping successfully highlighted the eyelid tumors on clinical photographs. Deep learning models showed a promising performance in discriminating malignant versus benign eyelid lesions on clinical photographs, reaching the level of human observers.
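The gap between the ternary and binary accuracies can be illustrated by collapsing three-class labels to malignant-vs-benign; a toy sketch (label encoding and data hypothetical, not the study's) follows:

```python
import numpy as np

# Hypothetical label encoding: 0 = no lesion, 1 = benign, 2 = malignant.
def accuracy(y_true, y_pred):
    return float((np.asarray(y_true) == np.asarray(y_pred)).mean())

y_true = np.array([0, 1, 2, 2, 1, 0, 2, 1])
y_pred = np.array([1, 2, 2, 1, 1, 0, 2, 1])

acc3 = accuracy(y_true, y_pred)               # ternary accuracy

# Binary task: drop 'no lesion' images, then malignant (1) vs benign (0).
keep = y_true != 0
acc2 = accuracy((y_true[keep] == 2).astype(int),
                (y_pred[keep] == 2).astype(int))
```

Here the binary accuracy exceeds the ternary one simply because errors involving the dropped third class no longer count against the model.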


Subjects
Deep Learning; Humans; Artificial Intelligence; Retrospective Studies; Cross-Sectional Studies; Eyelids
7.
Sci Rep ; 12(1): 12804, 2022 07 27.
Article in English | MEDLINE | ID: mdl-35896791

ABSTRACT

Colonoscopy is an effective tool to detect colorectal lesions and needs the support of pathological diagnosis. This study aimed to develop and validate deep learning models that automatically classify digital pathology images of colon lesions obtained from colonoscopy-related specimens. Histopathological slides of colonoscopic biopsy or resection specimens were collected and grouped into six classes by disease category: adenocarcinoma, tubular adenoma (TA), traditional serrated adenoma (TSA), sessile serrated adenoma (SSA), hyperplastic polyp (HP), and non-specific lesions. Digital photographs were taken of each pathological slide to fine-tune two pre-trained convolutional neural networks, and the model performances were evaluated. A total of 1865 images were included from 703 patients, of which 10% were used as a test dataset. For six-class classification, the mean diagnostic accuracy was 97.3% (95% confidence interval [CI], 96.0-98.6%) by DenseNet-161 and 95.9% (95% CI 94.1-97.7%) by EfficientNet-B7. The per-class area under the receiver operating characteristic curve (AUC) was highest for adenocarcinoma (1.000; 95% CI 0.999-1.000) by DenseNet-161 and TSA (1.000; 95% CI 1.000-1.000) by EfficientNet-B7. The lowest per-class AUCs were still excellent: 0.991 (95% CI 0.983-0.999) for HP by DenseNet-161 and 0.995 (95% CI 0.992-0.998) for SSA by EfficientNet-B7. Deep learning models achieved excellent performances for discriminating adenocarcinoma from non-adenocarcinoma lesions with an AUC of 0.995 or 0.998. The pathognomonic area for each class was appropriately highlighted in digital images by saliency maps, particularly focusing on epithelial lesions. Deep learning models might be a useful tool to support the diagnosis of pathologic slides of colonoscopy-related specimens.
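Per-class AUCs like those reported are typically computed one-vs-rest from the network's softmax outputs; a sketch with a hypothetical class ordering and toy probabilities:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Class order here is an assumption made for illustration.
CLASSES = ["adenocarcinoma", "TA", "TSA", "SSA", "HP", "non-specific"]

def per_class_auc(y_true, probs):
    # One-vs-rest AUC for each histologic class from softmax outputs:
    # class k's score is column k, positives are images labeled k.
    y_true = np.asarray(y_true)
    probs = np.asarray(probs)
    return {c: roc_auc_score((y_true == k).astype(int), probs[:, k])
            for k, c in enumerate(CLASSES)}

# Two toy images per class, with the model fairly confident on each.
y_true = [k for k in range(6) for _ in range(2)]
probs = np.full((12, 6), 0.02)
for i, k in enumerate(y_true):
    probs[i, k] = 0.90
aucs = per_class_auc(y_true, probs)
```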


Subjects
Adenocarcinoma; Adenoma; Colonic Polyps; Colorectal Neoplasms; Deep Learning; Adenocarcinoma/diagnostic imaging; Adenocarcinoma/pathology; Adenoma/diagnostic imaging; Adenoma/pathology; Colonic Polyps/diagnostic imaging; Colonic Polyps/pathology; Colonoscopy/methods; Colorectal Neoplasms/diagnostic imaging; Colorectal Neoplasms/pathology; Humans
8.
J Clin Med ; 11(12)2022 Jun 09.
Article in English | MEDLINE | ID: mdl-35743380

ABSTRACT

PURPOSE: We investigated whether a deep learning algorithm applied to retinal fundoscopic images could predict cerebral white matter hyperintensity (WMH), as represented by a modified Fazekas scale (FS), on brain magnetic resonance imaging (MRI). METHODS: Participants who had undergone brain MRI and health-screening fundus photography at Hallym University Sacred Heart Hospital between 2010 and 2020 were consecutively included. The subjects were divided based on the presence of WMH, then classified into three groups according to the FS grade (0 vs. 1 vs. 2+) using age matching. Two pre-trained convolutional neural networks were fine-tuned and evaluated for prediction performance using 10-fold cross-validation. RESULTS: A total of 3726 fundus photographs from 1892 subjects were included, of which 905 fundus photographs from 462 subjects were included in the age-matched balanced dataset. In predicting the presence of WMH, the mean area under the receiver operating characteristic curve was 0.736 ± 0.030 for DenseNet-201 and 0.724 ± 0.026 for EfficientNet-B7. For the prediction of FS grade, the mean accuracies reached 41.4 ± 5.7% with DenseNet-201 and 39.6 ± 5.6% with EfficientNet-B7. The deep learning models focused on the macula and retinal vasculature to detect an FS of 2+. CONCLUSIONS: Cerebral WMH might be partially predicted by non-invasive fundus photography via deep learning, which may suggest an eye-brain association.
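The 10-fold cross-validation on the age-matched three-grade dataset can be sketched with a stratified splitter so that every fold preserves the Fazekas grade balance (toy data, not the study's):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy stand-in for the age-matched dataset: 30 photographs, 10 per
# Fazekas grade (0, 1, 2+ coded as 0/1/2).
y = np.repeat([0, 1, 2], 10)
X = np.arange(len(y)).reshape(-1, 1)          # placeholder features

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
fold_class_counts = []
for train_idx, test_idx in skf.split(X, y):
    # Stratification keeps one image of each grade in every test fold.
    fold_class_counts.append(np.bincount(y[test_idx], minlength=3).tolist())
```

Note that when several photographs come from the same subject (3726 images from 1892 subjects here), folds should be split by subject rather than by image to avoid leakage; that grouping step is omitted from this sketch.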

9.
J Clin Med ; 11(9)2022 Apr 21.
Article in English | MEDLINE | ID: mdl-35566432

ABSTRACT

PURPOSE: We aimed to investigate orbital wall fracture incidence and risk factors in the general Korean population. METHODS: The Korea National Health Insurance Service-National Sample Cohort dataset was analyzed to find subjects with an orbital wall fracture between 2011 and 2015 (based on the diagnosis code) and to identify incident cases involving a preceding disease-free period of 8 years. The incidence of orbital wall fracture in the general population was estimated, and the type of orbital wall fracture was categorized. Sociodemographic risk factors were also examined using Cox regression analysis. RESULTS: Among 1,080,309 cohort subjects, 2415 individuals with newly diagnosed orbital wall fractures were identified. The overall incidence of orbital wall fractures was estimated as 46.19 (95% CI: 44.37-48.06) per 100,000 person-years. The incidence was high in the 10-29 and 80+ year age groups and showed a male predominance, with an average male-to-female ratio of 3.33. The most common type was isolated inferior orbital wall fracture (59.4%), followed by isolated medial orbital wall fracture (23.7%), combination fracture (15.0%), and naso-orbito-ethmoid fracture (1.5%). Of the fracture patients, 648 subjects (26.8%) underwent orbital wall fracture repair surgeries. Male sex, rural residence, and low income were associated with an increased risk of orbital wall fractures. CONCLUSIONS: The incidence of orbital wall fractures in Korea varied according to age group and was positively associated with male sex, rural residency, and low income. The most common fracture type was an isolated inferior orbital wall fracture.
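As a sanity check on the reported rate, an incidence rate per 100,000 person-years with a normal-approximation Poisson confidence interval can be computed as below; the person-time is back-calculated from the reported figures, and the approximate interval reproduces the published CI (44.37-48.06) to within a few hundredths:

```python
import math

def incidence_rate_ci(cases, person_years, z=1.96, per=100_000):
    """Incidence rate with a normal-approximation CI for a Poisson
    count (adequate for large case counts such as n = 2415)."""
    rate = cases / person_years
    se = math.sqrt(cases) / person_years      # SE of the rate, Poisson count
    return rate * per, (rate - z * se) * per, (rate + z * se) * per

# Person-time back-calculated from the reported 46.19 per 100,000 PY.
cases = 2415
person_years = cases / (46.19 / 100_000)
rate, lo, hi = incidence_rate_ci(cases, person_years)
```

The paper may have used an exact Poisson interval instead; for counts in the thousands the two methods agree closely.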

10.
Diagnostics (Basel) ; 12(2)2022 Feb 21.
Article in English | MEDLINE | ID: mdl-35204638

ABSTRACT

Artificial intelligence has enabled the automated diagnosis of several cancer types. We aimed to develop and validate deep learning models that automatically classify cervical intraepithelial neoplasia (CIN) based on histological images. Microscopic images of CIN3, CIN2, CIN1, and non-neoplasm were obtained. The performances of two pre-trained convolutional neural network (CNN) models adopting DenseNet-161 and EfficientNet-B7 architectures were evaluated and compared with those of pathologists. The dataset comprised 1106 images from 588 patients; images of 10% of patients were included in the test dataset. The mean accuracies for the four-class classification were 88.5% (95% confidence interval [CI], 86.3-90.6%) by DenseNet-161 and 89.5% (95% CI, 83.3-95.7%) by EfficientNet-B7, which were similar to human performance (93.2% and 89.7%). The mean per-class area under the receiver operating characteristic curve values by EfficientNet-B7 were 0.996, 0.990, 0.971, and 0.956 in the non-neoplasm, CIN3, CIN1, and CIN2 groups, respectively. The class activation map detected the diagnostic area for CIN lesions. In the three-class classification of CIN2 and CIN3 as one group, the mean accuracies of DenseNet-161 and EfficientNet-B7 increased to 91.4% (95% CI, 88.8-94.0%), and 92.6% (95% CI, 90.4-94.9%), respectively. CNN-based deep learning is a promising tool for diagnosing CIN lesions on digital histological images.

11.
Clin Exp Emerg Med ; 8(2): 120-127, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34237817

ABSTRACT

OBJECTIVE: Recent studies have suggested that deep-learning models can satisfactorily assist in fracture diagnosis. We aimed to evaluate the performance of two such models in wrist fracture detection. METHODS: We collected image data of patients who visited the emergency department with wrist trauma. A dataset extracted from January 2018 to May 2020 was split into training (90%) and test (10%) datasets, and two types of convolutional neural networks (i.e., DenseNet-161 and ResNet-152) were trained to detect wrist fractures. Gradient-weighted class activation mapping was used to highlight the regions of radiograph scans that contributed to the decision of the model. Performance of the convolutional neural network models was evaluated using the area under the receiver operating characteristic curve. RESULTS: For model training, we used 4,551 radiographs from 798 patients and 4,443 radiographs from 1,481 patients with and without fractures, respectively. The remaining 10% (300 radiographs from 100 patients with fractures and 690 radiographs from 230 patients without fractures) was used as a test dataset. The sensitivity, specificity, positive predictive value, negative predictive value, and accuracy of DenseNet-161 and ResNet-152 in the test dataset were 90.3%, 90.3%, 80.3%, 95.6%, and 90.3% and 88.6%, 88.4%, 76.9%, 94.7%, and 88.5%, respectively. The areas under the receiver operating characteristic curves of DenseNet-161 and ResNet-152 for wrist fracture detection were 0.962 and 0.947, respectively. CONCLUSION: We demonstrated that DenseNet-161 and ResNet-152 models could help detect wrist fractures in the emergency room with satisfactory performance.
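The reported test-set metrics follow directly from a 2×2 confusion table; with hypothetical counts chosen to roughly match the DenseNet-161 figures (300 fracture and 690 non-fracture radiographs), the computation is:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV, and accuracy from a 2x2
    confusion table of fracture / no-fracture calls."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts consistent with the DenseNet-161 test figures
# (271/300 fractures detected, 623/690 normals correctly cleared).
m = diagnostic_metrics(tp=271, fp=67, fn=29, tn=623)
```

With these counts, sensitivity and specificity both come out near 90.3% and PPV near 80.3%, matching the abstract; PPV is lower than sensitivity because non-fracture radiographs outnumber fracture ones in the test set.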

12.
Sci Rep ; 11(1): 13850, 2021 07 05.
Article in English | MEDLINE | ID: mdl-34226638

ABSTRACT

Uncontrolled diabetes has been associated with progression of diabetic retinopathy (DR) in several studies. Therefore, we aimed to investigate systemic and ophthalmic factors related to worsening of DR even after completion of panretinal photocoagulation (PRP). We retrospectively reviewed DR patients who had completed PRP in at least one eye with a 3-year follow-up. A total of 243 eyes of 243 subjects (mean age 52.6 ± 11.6 years) were enrolled. Among them, 52 patients (21.4%) showed progression of DR after PRP (progression group), and the other 191 (78.6%) patients had stable DR (non-progression group). The progression group had a higher proportion of proliferative DR (P = 0.019); lower baseline visual acuity (P < 0.001); and higher platelet count (P = 0.048), hemoglobin (P = 0.044), and hematocrit (P = 0.042) than the non-progression group. In the multivariate logistic regression analysis for progression of DR, baseline visual acuity (HR: 0.053, P < 0.001) and platelet count (HR: 1.215, P = 0.031) were identified as risk factors for progression. Consequently, we propose that patients with low visual acuity or a high platelet count are more likely to have progressive DR despite PRP and require careful observation. Also, the evaluation of hemorheological factors, including platelet count, before PRP can be considered useful in predicting the prognosis of DR.
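A multivariate logistic regression like the one reported (where the quoted "HR" values correspond to exponentiated coefficients, i.e. odds ratios) can be sketched on synthetic data; the effect directions are invented to mirror those reported (higher visual acuity protective, higher platelet count harmful):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 300
# Invented stand-ins for the two reported risk factors.
va = rng.uniform(0.1, 1.0, n)          # baseline visual acuity (protective)
plt_z = rng.normal(0.0, 1.0, n)        # standardized platelet count (risk)
logit = 1.0 - 3.0 * va + 1.0 * plt_z   # true generating model (assumed)
prog = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(np.column_stack([va, plt_z]), prog)
coef_va, coef_plt = model.coef_[0]
odds_ratios = np.exp(model.coef_[0])   # analogous to the abstract's "HR"s
```

The fitted odds ratio below 1 for visual acuity and above 1 for platelet count reproduce the qualitative pattern of the reported estimates (0.053 and 1.215).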


Subjects
Diabetic Retinopathy/epidemiology; Laser Coagulation/adverse effects; Light Coagulation/adverse effects; Retina/diagnostic imaging; Adult; Choroid/pathology; Choroid/radiation effects; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/etiology; Diabetic Retinopathy/pathology; Disease Progression; Female; Humans; Male; Middle Aged; Platelet Count; Retina/pathology; Retina/radiation effects; Visual Acuity/physiology; Visual Acuity/radiation effects
13.
Medicine (Baltimore) ; 100(7): e24756, 2021 Feb 19.
Article in English | MEDLINE | ID: mdl-33607821

ABSTRACT

ABSTRACT: This study was conducted to develop a convolutional neural network (CNN)-based model to predict the sex and age of patients by identifying unique unknown features from paranasal sinus (PNS) X-ray images. We employed a retrospective study design and used anonymized patient imaging data. Two CNN models, adopting ResNet-152 and DenseNet-169 architectures, were trained to predict sex and age groups (20-39, 40-59, 60+ years). The area under the curve (AUC), algorithm accuracy, sensitivity, and specificity were assessed. Class-activation map (CAM) was used to detect deterministic areas. A total of 4160 PNS X-ray images were collected from 4160 patients. The PNS X-ray images of patients aged ≥20 years were retrieved from the picture archiving and communication database system of our institution. The classification performances in predicting the sex (male vs female) and 3 age groups (20-39, 40-59, 60+ years) for each established CNN model were evaluated. For sex prediction, ResNet-152 performed slightly better (accuracy = 98.0%, sensitivity = 96.9%, specificity = 98.7%, and AUC = 0.939) than DenseNet-169. CAM indicated that maxillary sinuses (males) and ethmoid sinuses (females) were major factors in identifying sex. Meanwhile, for age prediction, the DenseNet-169 model was slightly more accurate in predicting age groups (77.6 ± 1.5% vs 76.3 ± 1.1%). CAM suggested that the maxillary sinus and the periodontal area were primary factors in identifying age groups. Our deep learning model could predict sex and age based on PNS X-ray images. Therefore, it can assist in reducing the risk of patient misidentification in clinics.


Subjects
Deep Learning/statistics & numerical data; Paranasal Sinuses/diagnostic imaging; Radiography/methods; Adult; Aged; Algorithms; Area Under Curve; Data Management; Databases, Factual; Female; Humans; Male; Maxillary Sinus/diagnostic imaging; Middle Aged; Neural Networks, Computer; Predictive Value of Tests; Retrospective Studies; Sensitivity and Specificity
14.
Eye (Lond) ; 35(11): 3012-3019, 2021 11.
Article in English | MEDLINE | ID: mdl-33414536

ABSTRACT

AIMS: To investigate the incidence and presumed aetiologies of fourth cranial nerve (CN4) palsy in Korea. METHODS: Using the nationally representative dataset of the Korea National Health Insurance Service-National Sample Cohort from 2006 to 2015, newly developed CN4 palsy cases confirmed by a preceding disease-free period of ≥4 years were identified. The presumed aetiology of CN4 palsy was evaluated based on comorbidities around the CN4 palsy diagnosis. RESULTS: Among the 1,108,292 cohort subjects, CN4 palsy newly developed in 390 patients during the 10-year follow-up, and the overall incidence of CN4 palsy was 3.74 per 100,000 person-years (95% confidence interval, 3.38-4.12). The incidence of CN4 palsy showed a male preponderance in nearly all age groups, and the overall male-to-female ratio was 2.30. A bimodal distribution by age group was observed, with two peaks at 0-4 years and at 75-79 years. The most common presumed aetiologies were vascular (51.3%), congenital (20.0%), and idiopathic (18.5%). The incidence rate of the first peak, at 0-4 years of age, was 6.17 per 100,000 person-years, and cases in this group were congenital. The second peak incidence rate, at 75-79 years of age, was 11.81 per 100,000 person-years, and the main cause was vascular disease. Strabismus surgery was performed in 48 (12.3%) patients, most of whom (72.9%) were younger than 20 years. CONCLUSION: The incidence of CN4 palsy has a male predominance in Koreans and shows bimodal peaks by age. The aetiology of CN4 palsy varies according to age group.


Subjects
Trochlear Nerve Diseases; Aged, 80 and over; Child, Preschool; Cohort Studies; Female; Humans; Incidence; Infant; Infant, Newborn; Male; Republic of Korea/epidemiology; Retrospective Studies; Trochlear Nerve Diseases/diagnosis; Trochlear Nerve Diseases/epidemiology; Trochlear Nerve Diseases/etiology
15.
J Pers Med ; 10(4)2020 Nov 06.
Article in English | MEDLINE | ID: mdl-33172076

ABSTRACT

Mammography plays an important role in screening for breast cancer in females, and artificial intelligence has enabled the automated detection of diseases on medical images. This study aimed to develop a deep learning model detecting breast cancer in digital mammograms of various densities and to evaluate the model performance compared to previous studies. From 1501 subjects who underwent digital mammography between February 2007 and May 2015, craniocaudal and mediolateral view mammograms were included and concatenated for each breast, ultimately producing 3002 merged images. Two convolutional neural networks were trained to detect any malignant lesion on the merged images. The performances were tested using 301 merged images from 284 subjects and compared to a meta-analysis including 12 previous deep learning studies. The mean area under the receiver-operating characteristic curve (AUC) for detecting breast cancer in each merged mammogram was 0.952 ± 0.005 by DenseNet-169 and 0.954 ± 0.020 by EfficientNet-B5. The performance for malignancy detection decreased as breast density increased (density A, mean AUC = 0.984 vs. density D, mean AUC = 0.902 by DenseNet-169). When patients' age was used as a covariate for malignancy detection, the performance showed little change (mean AUC, 0.953 ± 0.005). The mean sensitivity and specificity of DenseNet-169 (87% and 88%, respectively) surpassed the mean values (81% and 82%, respectively) obtained in a meta-analysis. Deep learning would work efficiently in screening for breast cancer in digital mammograms of various densities, which could be maximized in breasts with lower parenchymal density.
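The view-merging step (concatenating the craniocaudal and mediolateral views of each breast into one image) is not specified in detail in the abstract; one plausible sketch, assuming side-by-side concatenation with zero-padding to a common height, is:

```python
import numpy as np

def merge_views(cc, mlo):
    """Concatenate the craniocaudal (CC) and mediolateral (MLO) views of
    one breast side-by-side, zero-padding the shorter view so the two
    arrays share a height. The exact merging scheme is an assumption."""
    h = max(cc.shape[0], mlo.shape[0])

    def pad(img):
        out = np.zeros((h, img.shape[1]), dtype=img.dtype)
        out[:img.shape[0]] = img
        return out

    return np.hstack([pad(cc), pad(mlo)])

# Toy grayscale views with different heights.
cc = np.ones((4, 3), dtype=np.uint8)           # 4x3 CC view
mlo = np.full((6, 2), 2, dtype=np.uint8)       # 6x2 MLO view
merged = merge_views(cc, mlo)                  # 6x5 merged input image
```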

16.
Sci Rep ; 10(1): 14803, 2020 09 09.
Article in English | MEDLINE | ID: mdl-32908182

ABSTRACT

Febrile neutropenia (FN) is one of the most concerning complications of chemotherapy, and its prediction remains difficult. This study aimed to reveal the risk factors for, and build prediction models of, FN using machine learning algorithms. Medical records of hospitalized patients who underwent chemotherapy after surgery for breast cancer between May 2002 and September 2018 were selectively reviewed for development of the models. Demographic, clinical, pathological, and therapeutic data were analyzed to identify risk factors for FN. Using machine learning algorithms, prediction models were developed and evaluated for performance. Of 933 selected inpatients with a mean age of 51.8 ± 10.7 years, FN developed in 409 (43.8%) patients. There was a significant difference in FN incidence according to age, staging, taxane-based regimen, and blood count 5 days after chemotherapy. The area under the curve (AUC) of a logistic regression model built on these findings was 0.870; machine learning improved the AUC to 0.908. Machine learning thus improves the prediction of FN in patients undergoing chemotherapy for breast cancer compared to the conventional statistical model. In these high-risk patients, primary prophylaxis with granulocyte colony-stimulating factor could be considered.
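The comparison of a conventional logistic regression model against a machine learning model can be sketched on synthetic data; gradient boosting stands in here for the study's (unspecified) best-performing algorithm, and the event rate is set near the reported 43.8%:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the FN cohort (933 inpatients, ~44% events).
X, y = make_classification(n_samples=900, n_features=10, n_informative=6,
                           weights=[0.56], random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, stratify=y,
                                      random_state=0)

lr = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
gb = GradientBoostingClassifier(random_state=0).fit(Xtr, ytr)
auc_lr = roc_auc_score(yte, lr.predict_proba(Xte)[:, 1])
auc_gb = roc_auc_score(yte, gb.predict_proba(Xte)[:, 1])
```

Whether the tree ensemble beats the linear model depends on how nonlinear the true risk surface is; the study's reported gain (0.870 to 0.908) suggests meaningful feature interactions in the FN data.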


Subjects
Breast Neoplasms/drug therapy; Febrile Neutropenia/epidemiology; Aged; Algorithms; Antineoplastic Agents/adverse effects; Antineoplastic Agents/therapeutic use; Antineoplastic Combined Chemotherapy Protocols; Bridged-Ring Aromatic Hydrocarbons/adverse effects; Bridged-Ring Aromatic Hydrocarbons/therapeutic use; Female; Granulocyte Colony-Stimulating Factor/metabolism; Humans; Incidence; Inpatients/statistics & numerical data; Logistic Models; Machine Learning; Male; Middle Aged; Republic of Korea; Risk Factors; Taxoids/adverse effects; Taxoids/therapeutic use
17.
Sci Rep ; 10(1): 13652, 2020 08 12.
Article in English | MEDLINE | ID: mdl-32788635

ABSTRACT

Colposcopy is widely used to detect cervical cancers, but the experienced physicians needed for an accurate diagnosis are lacking in developing countries. Artificial intelligence (AI) has recently been used in computer-aided diagnosis, showing remarkable promise. In this study, we developed and validated deep learning models to automatically classify cervical neoplasms on colposcopic photographs. Pre-trained convolutional neural networks were fine-tuned for two grading systems: the cervical intraepithelial neoplasia (CIN) system and the lower anogenital squamous terminology (LAST) system. The multi-class classification accuracies of the networks for the CIN system in the test dataset were 48.6 ± 1.3% by Inception-Resnet-v2 and 51.7 ± 5.2% by Resnet-152. The accuracies for the LAST system were 71.8 ± 1.8% and 74.7 ± 1.8%, respectively. The area under the curve (AUC) for discriminating high-risk lesions from low-risk lesions by Resnet-152 was 0.781 ± 0.020 for the CIN system and 0.708 ± 0.024 for the LAST system. The lesions requiring biopsy were also detected efficiently (AUC, 0.947 ± 0.030 by Resnet-152) and presented meaningfully on attention maps. These results may indicate the potential of AI for automated reading of colposcopic photographs.


Subjects
Colposcopy/methods; Deep Learning; Diagnosis, Computer-Assisted/methods; Neural Networks, Computer; Uterine Cervical Dysplasia/diagnosis; Uterine Cervical Neoplasms/classification; Uterine Cervical Neoplasms/diagnosis; Adolescent; Adult; Aged; Aged, 80 and over; Artificial Intelligence; Case-Control Studies; Female; Humans; Middle Aged; Retrospective Studies; Young Adult
18.
J Clin Med ; 9(6)2020 Jun 15.
Article in English | MEDLINE | ID: mdl-32549190

ABSTRACT

Endoscopic resection is recommended for gastric neoplasms confined to the mucosa or superficial submucosa. The determination of invasion depth is based on gross morphology assessed in endoscopic images or on endoscopic ultrasound; these methods have limited accuracy and are subject to inter-observer variability. Several studies have developed deep learning (DL) algorithms that classify the invasion depth of gastric cancers. However, these algorithms are intended for use after a definite diagnosis of gastric cancer, which is not always feasible across the spectrum of gastric neoplasms. This study aimed to establish a DL algorithm that accurately predicts submucosal invasion in endoscopic images of gastric neoplasms. Pre-trained convolutional neural network models were fine-tuned with 2899 white-light endoscopic images, and the prediction models were subsequently validated with an external dataset of 206 images. In the internal test, the mean area under the curve for discriminating submucosal invasion was 0.887 (95% confidence interval: 0.849-0.924) with the DenseNet-161 network. In the external test, the mean area under the curve reached 0.887 (0.863-0.910). A clinical simulation showed that 6.7% of patients who underwent gastrectomy in the external test would have been accurately qualified by the algorithm as candidates for endoscopic resection, avoiding unnecessary surgery. The established DL algorithm is therefore useful for predicting submucosal invasion in endoscopic images of gastric neoplasms.
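The clinical-simulation step can be sketched as a simple thresholding rule: patients whose predicted probability of submucosal invasion falls below a cutoff are flagged as potential candidates for endoscopic resection rather than gastrectomy. The probabilities and the 0.5 threshold below are illustrative assumptions, not values from the study:

```python
def resection_candidates(probs, threshold=0.5):
    """Flag patients predicted NOT to have submucosal invasion
    (probability below the threshold) as potential candidates for
    endoscopic resection; return their indices and the spared fraction."""
    flagged = [i for i, p in enumerate(probs) if p < threshold]
    return flagged, len(flagged) / len(probs)

# Hypothetical invasion probabilities for 10 gastrectomy patients
probs = [0.91, 0.84, 0.30, 0.77, 0.62, 0.95, 0.88, 0.45, 0.71, 0.83]
idx, frac = resection_candidates(probs)
print(idx, frac)   # prints [2, 7] 0.2
```

In the study's simulation the analogous fraction was 6.7% of gastrectomy patients in the external test; in practice the threshold would be chosen to keep the false-sparing rate acceptably low.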

19.
Ophthalmic Epidemiol ; 27(6): 460-467, 2020 12.
Article in English | MEDLINE | ID: mdl-32506973

ABSTRACT

PURPOSE: This study aimed to determine the incidence, prevalence, and etiologies of third cranial nerve (CN3) palsy in Koreans. METHODS: Data were collected from the National Health Insurance Service-National Sample Cohort (NHIS-NSC) database of South Korea and analyzed. Incident CN3 palsy cases in the cohort population were defined as those occurring after an initial washout period of at least 4 years. The incidence and prevalence were analyzed by sex, age group, and year. The etiologies of CN3 palsy were evaluated using comorbidities. RESULTS: Of 1,108,253 subjects, 387 patients were newly diagnosed with CN3 palsy between 2006 and 2015. The incidence of CN3 palsy was 3.71 per 100,000 person-years (95% confidence interval, 3.35-4.09). The incidence of CN3 palsy increased with age and accelerated after the age of 60 years. The mean male-to-female incidence ratio was 1.16. The main cause was presumed to be vascular disease (52.7%), followed by idiopathic causes (25.8%), intracranial neoplasm (7.8%), unruptured cerebral aneurysm (5.4%), and trauma (5.2%). CONCLUSIONS: The incidence of CN3 palsy in Koreans increased with age and peaked between 75 and 79 years. The main cause of CN3 palsy was vascular disease.
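An incidence rate of this form is the case count divided by accumulated person-years, and its 95% confidence interval is commonly computed on the log scale under a Poisson assumption. A minimal sketch: the abstract gives 387 cases but not the person-year denominator, so the figure below is an assumed value chosen only to be consistent with the reported rate of 3.71 per 100,000 person-years:

```python
import math

def incidence_per_100k(cases, person_years, z=1.96):
    """Incidence rate per 100,000 person-years with a 95% CI
    computed on the log scale (standard Poisson approximation)."""
    rate = cases / person_years * 100_000
    half_width = z / math.sqrt(cases)          # SE of log(rate) is 1/sqrt(cases)
    return rate, rate * math.exp(-half_width), rate * math.exp(half_width)

# 387 incident cases; 10,430,000 person-years is an assumed denominator
rate, lo, hi = incidence_per_100k(387, 10_430_000)
print(f"{rate:.2f} ({lo:.2f}-{hi:.2f})")       # prints 3.71 (3.36-4.10)
```

This reproduces the reported interval to within rounding, which suggests the study used an approximation of this kind.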


Subjects
Oculomotor Nerve Diseases, Oculomotor Nerve, Aged, Cohort Studies, Female, Humans, Incidence, Male, Middle Aged, Paralysis, Republic of Korea, Retrospective Studies
20.
J Clin Med ; 9(5)2020 May 24.
Article in English | MEDLINE | ID: mdl-32456309

ABSTRACT

Background: Classification of colorectal neoplasms during colonoscopic examination is important to avoid unnecessary endoscopic biopsy or resection. This study aimed to develop and validate deep learning models that automatically classify colorectal lesions histologically on white-light colonoscopy images. Methods: White-light colonoscopy images of colorectal lesions with pathological diagnoses were collected and classified into seven categories: stage T1-4 colorectal cancer (CRC), high-grade dysplasia (HGD), tubular adenoma (TA), and non-neoplasms. The images were then re-classified into four categories: advanced CRC, early CRC/HGD, TA, and non-neoplasms. Two convolutional neural network models were trained, and their performance was evaluated on an internal test dataset and an external validation dataset. Results: In total, 3828 images were collected from 1339 patients. The mean accuracies of the ResNet-152 model for the seven-category and four-category classifications were 60.2% and 67.3% on the internal test dataset, and 74.7% and 79.2% on the external validation dataset (240 images), respectively. In the external validation, ResNet-152 outperformed two endoscopists in four-category classification and showed a higher mean area under the curve (AUC) for detecting TA+ lesions (0.818) than the worst-performing endoscopist. The mean AUC for detecting HGD+ lesions reached 0.876 with Inception-ResNet-v2. Conclusions: The deep learning models showed promising performance in classifying colorectal lesions on white-light colonoscopy images and could help endoscopists build optimal treatment strategies.
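The re-classification step above amounts to collapsing the seven histologic labels into four coarser groups before scoring accuracy. A minimal sketch — the exact grouping (advanced CRC = T2-T4, early CRC/HGD = T1 CRC plus HGD) and the example labels are assumptions for illustration, not taken from the paper:

```python
# Assumed mapping from the seven histologic categories to four groups
TO_FOUR = {
    "T4": "advanced CRC", "T3": "advanced CRC", "T2": "advanced CRC",
    "T1": "early CRC/HGD", "HGD": "early CRC/HGD",
    "TA": "TA", "non-neoplasm": "non-neoplasm",
}

def four_class_accuracy(true7, pred7):
    """Accuracy after collapsing seven-category labels into four:
    a prediction counts as correct if it lands in the right group,
    even when the fine-grained category is wrong."""
    hits = sum(TO_FOUR[t] == TO_FOUR[p] for t, p in zip(true7, pred7))
    return hits / len(true7)

# Hypothetical labels and predictions for five lesions
true7 = ["T3", "T1", "HGD", "TA", "non-neoplasm"]
pred7 = ["T2", "HGD", "TA", "TA", "non-neoplasm"]
print(four_class_accuracy(true7, pred7))   # prints 0.8
```

This explains why the four-category accuracies (67.3% internal, 79.2% external) exceed the seven-category ones: errors within a group, such as predicting T2 for a T3 cancer, are forgiven.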
