Results 1 - 20 of 26
1.
Curr Oncol; 31(4): 2278-2288, 2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38668072

ABSTRACT

Background: Accurate detection of axillary lymph node (ALN) metastases in breast cancer is crucial for clinical staging and treatment planning. This study aimed to develop a deep learning model using computed tomography (CT) images preprocessed according to clinical considerations to enhance the prediction of ALN metastasis in breast cancer patients. Methods: A total of 1128 axial CT images of ALNs (538 malignant and 590 benign lymph nodes) were collected from 523 breast cancer patients who underwent preoperative CT scans between January 2012 and July 2022 at Hallym University Medical Center. To develop an optimal deep learning model for distinguishing metastatic from benign ALNs, a CT image preprocessing protocol with clinical implications and two different cropping methods (the fixed size crop [FSC] method and the adjustable square crop [ASC] method) were employed. The images were analyzed using three convolutional neural network (CNN) architectures (ResNet, DenseNet, and EfficientNet). An ensemble method that combines the two best-performing CNN architectures, one from each cropping method, was applied to generate the final result. Results: With both cropping methods, DenseNet consistently outperformed ResNet and EfficientNet. The area under the receiver operating characteristic curve (AUROC) for DenseNet was 0.934 with the FSC method and 0.939 with the ASC method. The ensemble model, which combines the DenseNet121 predictions from both cropping methods, delivered outstanding results, with an AUROC of 0.968, an accuracy of 0.938, a sensitivity of 0.980, and a specificity of 0.903. Furthermore, distinct trends observed in gradient-weighted class activation mapping images with the two cropping methods suggest that the deep learning model not only evaluates the lymph node itself but also detects subtler changes in the lymph node margin and adjacent soft tissue, which often elude human interpretation. Conclusions: This research demonstrates the promising performance of a deep learning model in accurately detecting malignant ALNs in breast cancer patients using CT images. The integration of clinical considerations into image processing and the use of ensemble methods further improved diagnostic precision.
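For readers who want to see how such a probability-averaging ensemble can be evaluated, the sketch below combines two models' per-image malignancy probabilities and reports AUROC, accuracy, sensitivity, and specificity. It is a minimal Python illustration, assuming the two DenseNet121 outputs are already available as arrays; the toy data, variable names, and 0.5 threshold are not taken from the paper.

```python
# Minimal sketch of a two-model probability-averaging ensemble.
# Assumes per-image malignancy probabilities from the FSC-trained and
# ASC-trained models are already available; names and threshold are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def ensemble_metrics(p_fsc, p_asc, y_true, threshold=0.5):
    """Average two probability vectors and report AUROC, accuracy,
    sensitivity, and specificity."""
    p_ens = (np.asarray(p_fsc) + np.asarray(p_asc)) / 2.0
    y_pred = (p_ens >= threshold).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return {
        "auroc": roc_auc_score(y_true, p_ens),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

if __name__ == "__main__":
    # Toy data only; not the study's data.
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=100)
    p1 = np.clip(y * 0.6 + rng.normal(0.2, 0.2, size=100), 0, 1)
    p2 = np.clip(y * 0.5 + rng.normal(0.25, 0.2, size=100), 0, 1)
    print(ensemble_metrics(p1, p2, y))
```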


Subjects
Axilla, Breast Neoplasms, Deep Learning, Lymphatic Metastasis, Tomography, X-Ray Computed, Humans, Breast Neoplasms/pathology, Breast Neoplasms/diagnostic imaging, Female, Lymphatic Metastasis/diagnostic imaging, Tomography, X-Ray Computed/methods, Middle Aged, Lymph Nodes/pathology, Lymph Nodes/diagnostic imaging, Adult, Aged
2.
Ann Coloproctol; 40(1): 13-26, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38414120

ABSTRACT

PURPOSE: The integration of artificial intelligence (AI) and magnetic resonance imaging in rectal cancer has the potential to enhance diagnostic accuracy by identifying subtle patterns and aiding tumor delineation and lymph node assessment. According to our systematic review focusing on convolutional neural networks, AI-driven tumor staging and prediction of treatment response facilitate tailored treatment strategies for patients with rectal cancer. METHODS: This paper summarizes the current landscape of AI in rectal cancer imaging, emphasizing performance reporting design based on dataset quality, model performance, and external validation. RESULTS: AI-driven tumor segmentation has demonstrated promising results using various convolutional neural network models. AI-based predictions of staging and treatment response have shown potential as auxiliary tools for personalized treatment strategies. Some studies have reported performance superior to that of conventional models in predicting microsatellite instability and KRAS status, offering noninvasive and cost-effective alternatives for identifying genetic mutations. CONCLUSION: Image-based AI studies for rectal cancer have shown acceptable diagnostic performance but face several challenges, including limited dataset sizes with standardized data, the need for multicenter studies, and the absence of oncologic relevance and external validation for clinical implementation. Overcoming these pitfalls and hurdles is essential for the feasible integration of AI models in clinical settings for rectal cancer, warranting further research.

3.
Sci Rep; 13(1): 4103, 2023 Mar 13.
Article in English | MEDLINE | ID: mdl-36914694

ABSTRACT

Artificial intelligence as a screening tool for eyelid lesions will be helpful for early diagnosis of eyelid malignancies and proper decision-making. This study aimed to evaluate the performance of a deep learning model in differentiating eyelid lesions on clinical eyelid photographs in comparison with human ophthalmologists. We included 4954 photographs from 928 patients in this retrospective cross-sectional study. Images were classified into three categories: malignant lesion, benign lesion, and no lesion. Two pre-trained convolutional neural network (CNN) models, with DenseNet-161 and EfficientNetV2-M architectures, were fine-tuned to classify images into three or two (malignant versus benign) categories. For the ternary classification, the mean diagnostic accuracies of the CNNs were 82.1% and 83.0% using DenseNet-161 and EfficientNetV2-M, respectively, which were inferior to those of the nine clinicians (87.0-89.5%). For the binary classification, the mean accuracies were 87.5% and 92.5% using the DenseNet-161 and EfficientNetV2-M models, respectively, which were similar to those of the clinicians (85.8-90.0%). The mean AUCs of the two CNN models were 0.908 and 0.950, respectively. Gradient-weighted class activation mapping successfully highlighted the eyelid tumors on clinical photographs. Deep learning models showed promising performance in discriminating malignant from benign eyelid lesions on clinical photographs, reaching the level of human observers.
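The fine-tuning step described above (replacing the classifier head of a pre-trained network and training it on labeled photographs) can be sketched as follows. This is a generic PyTorch/torchvision outline, assuming a hypothetical `eyelid_photos/train` folder organized by class; the transforms and hyperparameters are illustrative, not the authors' protocol.

```python
# Sketch of fine-tuning a pre-trained DenseNet-161 for a three-class task
# (malignant / benign / no lesion). Paths and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = datasets.ImageFolder("eyelid_photos/train", transform=transform)
train_dl = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.densenet161(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, 3)  # 3 classes
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(10):
    model.train()
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```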


Subjects
Deep Learning, Humans, Artificial Intelligence, Retrospective Studies, Cross-Sectional Studies, Eyelids
4.
Sci Rep; 12(1): 12804, 2022 Jul 27.
Article in English | MEDLINE | ID: mdl-35896791

ABSTRACT

Colonoscopy is an effective tool to detect colorectal lesions, but it requires the support of pathological diagnosis. This study aimed to develop and validate deep learning models that automatically classify digital pathology images of colon lesions obtained from colonoscopy-related specimens. Histopathological slides of colonoscopic biopsy or resection specimens were collected and grouped into six classes by disease category: adenocarcinoma, tubular adenoma (TA), traditional serrated adenoma (TSA), sessile serrated adenoma (SSA), hyperplastic polyp (HP), and non-specific lesions. Digital photographs were taken of each pathological slide to fine-tune two pre-trained convolutional neural networks, and the model performances were evaluated. A total of 1865 images from 703 patients were included, of which 10% were used as a test dataset. For six-class classification, the mean diagnostic accuracy was 97.3% (95% confidence interval [CI], 96.0-98.6%) by DenseNet-161 and 95.9% (95% CI, 94.1-97.7%) by EfficientNet-B7. The per-class area under the receiver operating characteristic curve (AUC) was highest for adenocarcinoma (1.000; 95% CI, 0.999-1.000) by DenseNet-161 and for TSA (1.000; 95% CI, 1.000-1.000) by EfficientNet-B7. The lowest per-class AUCs were still excellent: 0.991 (95% CI, 0.983-0.999) for HP by DenseNet-161 and 0.995 (95% CI, 0.992-0.998) for SSA by EfficientNet-B7. The deep learning models achieved excellent performance in discriminating adenocarcinoma from non-adenocarcinoma lesions, with an AUC of 0.995 or 0.998. The pathognomonic area for each class was appropriately highlighted in the digital images by saliency maps, which particularly focused on epithelial lesions. Deep learning models may be a useful tool to assist in the diagnosis of pathologic slides of colonoscopy-related specimens.
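The per-class AUC figures quoted above come from treating each class as a one-vs-rest binary problem. A minimal sketch of that computation with scikit-learn is shown below; the softmax outputs and labels are synthetic placeholders.

```python
# Sketch of per-class (one-vs-rest) AUC computation for a six-class
# pathology classifier. Probabilities and labels are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

CLASSES = ["adenocarcinoma", "TA", "TSA", "SSA", "HP", "non-specific"]

def per_class_auc(y_true, y_prob):
    """y_true: (n,) integer labels; y_prob: (n, 6) softmax probabilities."""
    y_bin = label_binarize(y_true, classes=list(range(len(CLASSES))))
    return {
        name: roc_auc_score(y_bin[:, i], y_prob[:, i])
        for i, name in enumerate(CLASSES)
    }

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.integers(0, 6, size=200)
    logits = rng.normal(size=(200, 6))
    logits[np.arange(200), y] += 2.0          # make predictions informative
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    for cls, auc in per_class_auc(y, probs).items():
        print(f"{cls}: {auc:.3f}")
```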


Subjects
Adenocarcinoma, Adenoma, Colonic Polyps, Colorectal Neoplasms, Deep Learning, Adenocarcinoma/diagnostic imaging, Adenocarcinoma/pathology, Adenoma/diagnostic imaging, Adenoma/pathology, Colonic Polyps/diagnostic imaging, Colonic Polyps/pathology, Colonoscopy/methods, Colorectal Neoplasms/diagnostic imaging, Colorectal Neoplasms/pathology, Humans
5.
Diagnostics (Basel); 12(2), 2022 Feb 21.
Article in English | MEDLINE | ID: mdl-35204638

ABSTRACT

Artificial intelligence has enabled the automated diagnosis of several cancer types. We aimed to develop and validate deep learning models that automatically classify cervical intraepithelial neoplasia (CIN) based on histological images. Microscopic images of CIN3, CIN2, CIN1, and non-neoplasm were obtained. The performances of two pre-trained convolutional neural network (CNN) models adopting DenseNet-161 and EfficientNet-B7 architectures were evaluated and compared with those of pathologists. The dataset comprised 1106 images from 588 patients; images of 10% of patients were included in the test dataset. The mean accuracies for the four-class classification were 88.5% (95% confidence interval [CI], 86.3-90.6%) by DenseNet-161 and 89.5% (95% CI, 83.3-95.7%) by EfficientNet-B7, which were similar to human performance (93.2% and 89.7%). The mean per-class area under the receiver operating characteristic curve values by EfficientNet-B7 were 0.996, 0.990, 0.971, and 0.956 in the non-neoplasm, CIN3, CIN1, and CIN2 groups, respectively. The class activation map detected the diagnostic area for CIN lesions. In the three-class classification of CIN2 and CIN3 as one group, the mean accuracies of DenseNet-161 and EfficientNet-B7 increased to 91.4% (95% CI, 88.8-94.0%), and 92.6% (95% CI, 90.4-94.9%), respectively. CNN-based deep learning is a promising tool for diagnosing CIN lesions on digital histological images.

6.
Clin Exp Emerg Med; 8(2): 120-127, 2021 Jun.
Article in English | MEDLINE | ID: mdl-34237817

ABSTRACT

OBJECTIVE: Recent studies have suggested that deep learning models can satisfactorily assist in fracture diagnosis. We aimed to evaluate the performance of two such models in wrist fracture detection. METHODS: We collected image data of patients who visited the emergency department with wrist trauma. A dataset extracted from January 2018 to May 2020 was split into training (90%) and test (10%) datasets, and two types of convolutional neural networks (DenseNet-161 and ResNet-152) were trained to detect wrist fractures. Gradient-weighted class activation mapping was used to highlight the regions of the radiographs that contributed to the model's decision. The performance of the convolutional neural network models was evaluated using the area under the receiver operating characteristic curve. RESULTS: For model training, we used 4,551 radiographs from 798 patients with fractures and 4,443 radiographs from 1,481 patients without fractures. The remaining 10% (300 radiographs from 100 patients with fractures and 690 radiographs from 230 patients without fractures) was used as the test dataset. In the test dataset, the sensitivity, specificity, positive predictive value, negative predictive value, and accuracy were 90.3%, 90.3%, 80.3%, 95.6%, and 90.3% for DenseNet-161 and 88.6%, 88.4%, 76.9%, 94.7%, and 88.5% for ResNet-152, respectively. The areas under the receiver operating characteristic curve of DenseNet-161 and ResNet-152 for wrist fracture detection were 0.962 and 0.947, respectively. CONCLUSION: We demonstrated that the DenseNet-161 and ResNet-152 models can help detect wrist fractures in the emergency room with satisfactory performance.
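Gradient-weighted class activation mapping (Grad-CAM), used above to visualize the regions driving the model's decision, weights the last convolutional feature maps by the gradients of the target-class score. A minimal PyTorch sketch is given below; the fine-tuned weights, preprocessing, and toy input are assumptions rather than the authors' exact setup.

```python
# Minimal Grad-CAM sketch for a binary (fracture vs. normal) ResNet-152
# classifier, using a hook on the last convolutional stage.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet152(weights="IMAGENET1K_V1")
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # fracture vs. no fracture
model.eval()

store = {}

def fwd_hook(module, inputs, output):
    store["act"] = output                               # feature maps (N, C, H, W)
    output.register_hook(lambda g: store.__setitem__("grad", g))

model.layer4.register_forward_hook(fwd_hook)

def grad_cam(image, target_class=1):
    """image: (1, 3, H, W) float tensor; returns an H x W heatmap array."""
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)        # (1, C, 1, 1)
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam.squeeze().detach().numpy()

heatmap = grad_cam(torch.randn(1, 3, 224, 224))   # toy radiograph-sized input
print(heatmap.shape)                              # (224, 224)
```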

7.
Eye (Lond); 35(11): 3012-3019, 2021 Nov.
Article in English | MEDLINE | ID: mdl-33414536

ABSTRACT

AIMS: To investigate the incidence and presumed aetiologies of fourth cranial nerve (CN4) palsy in Korea. METHODS: Using the nationally representative dataset of the Korea National Health Insurance Service-National Sample Cohort from 2006 to 2015, newly developed CN4 palsy cases confirmed by a preceding disease-free period of ≥4 years were identified. The presumed aetiology of CN4 palsy was evaluated based on comorbidities around the time of the CN4 palsy diagnosis. RESULTS: Among the 1,108,292 cohort subjects, CN4 palsy newly developed in 390 patients during the 10-year follow-up, and the overall incidence of CN4 palsy was 3.74 per 100,000 person-years (95% confidence interval, 3.38-4.12). The incidence of CN4 palsy showed a male preponderance in nearly all age groups, and the overall male-to-female ratio was 2.30. A bimodal distribution by age group was observed, with peaks at 0-4 years and 75-79 years. The most common presumed aetiologies were vascular (51.3%), congenital (20.0%), and idiopathic (18.5%). The incidence rate at the first peak (0-4 years of age) was 6.17 per 100,000 person-years, and cases in this group were congenital. The incidence rate at the second peak (75-79 years of age) was 11.81 per 100,000 person-years, and the main cause was vascular disease. Strabismus surgery was performed in 48 (12.3%) patients, most of whom (72.9%) were younger than 20 years. CONCLUSION: The incidence of CN4 palsy shows a male predominance in Koreans and bimodal peaks by age. The aetiology of CN4 palsy varies across age groups.
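The incidence estimates quoted above (cases per 100,000 person-years with a 95% confidence interval) can be reproduced in form with an exact Poisson interval. The sketch below shows the calculation; the person-time value is a made-up placeholder chosen only to roughly match the reported rate.

```python
# Sketch of an incidence-rate calculation with an exact (Garwood) Poisson 95% CI.
from scipy.stats import chi2

def incidence_rate(cases, person_years, per=100_000, alpha=0.05):
    """Return the rate per `per` person-years and exact Poisson confidence limits."""
    rate = cases / person_years * per
    lower = chi2.ppf(alpha / 2, 2 * cases) / 2 / person_years * per
    upper = chi2.ppf(1 - alpha / 2, 2 * (cases + 1)) / 2 / person_years * per
    return rate, lower, upper

# Example: 390 incident cases over a hypothetical 10,430,000 person-years.
rate, lo, hi = incidence_rate(390, 10_430_000)
print(f"{rate:.2f} per 100,000 PY (95% CI {lo:.2f}-{hi:.2f})")
```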


Subjects
Trochlear Nerve Diseases, Aged, 80 and over, Child, Preschool, Cohort Studies, Female, Humans, Incidence, Infant, Infant, Newborn, Male, Republic of Korea/epidemiology, Retrospective Studies, Trochlear Nerve Diseases/diagnosis, Trochlear Nerve Diseases/epidemiology, Trochlear Nerve Diseases/etiology
8.
J Pers Med; 10(4), 2020 Nov 06.
Article in English | MEDLINE | ID: mdl-33172076

ABSTRACT

Mammography plays an important role in screening for breast cancer among females, and artificial intelligence has enabled the automated detection of diseases on medical images. This study aimed to develop a deep learning model for detecting breast cancer in digital mammograms of various densities and to evaluate its performance against previous studies. From 1501 subjects who underwent digital mammography between February 2007 and May 2015, craniocaudal and mediolateral view mammograms were included and concatenated for each breast, ultimately producing 3002 merged images. Two convolutional neural networks were trained to detect any malignant lesion on the merged images. The performances were tested using 301 merged images from 284 subjects and compared to a meta-analysis including 12 previous deep learning studies. The mean area under the receiver operating characteristic curve (AUC) for detecting breast cancer in each merged mammogram was 0.952 ± 0.005 by DenseNet-169 and 0.954 ± 0.020 by EfficientNet-B5. The performance for malignancy detection decreased as breast density increased (density A, mean AUC = 0.984 vs. density D, mean AUC = 0.902 by DenseNet-169). When patients' age was used as a covariate for malignancy detection, the performance showed little change (mean AUC, 0.953 ± 0.005). The mean sensitivity and specificity of DenseNet-169 (87% and 88%, respectively) surpassed the mean values (81% and 82%, respectively) reported in the meta-analysis. Deep learning can thus work efficiently for breast cancer screening in digital mammograms of various densities, and its performance may be maximized in breasts with lower parenchymal density.
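The view-merging step described above, concatenating the craniocaudal and mediolateral images of one breast into a single input, can be sketched as below. File names and the target height are assumptions.

```python
# Sketch of merging the craniocaudal (CC) and mediolateral (ML) views of one
# breast side by side after resizing both to a common height.
from PIL import Image

def merge_views(cc_path, ml_path, height=512):
    """Return a single grayscale image with CC and ML views side by side."""
    cc = Image.open(cc_path).convert("L")
    ml = Image.open(ml_path).convert("L")
    cc = cc.resize((int(cc.width * height / cc.height), height))
    ml = ml.resize((int(ml.width * height / ml.height), height))
    merged = Image.new("L", (cc.width + ml.width, height))
    merged.paste(cc, (0, 0))
    merged.paste(ml, (cc.width, 0))
    return merged

# Hypothetical usage:
# merged = merge_views("patient001_CC.png", "patient001_ML.png")
# merged.save("patient001_merged.png")
```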

9.
Sci Rep; 10(1): 14803, 2020 Sep 09.
Article in English | MEDLINE | ID: mdl-32908182

ABSTRACT

Febrile neutropenia (FN) is one of the most concerning complications of chemotherapy, and its prediction remains difficult. This study aimed to identify risk factors for FN and to build prediction models of FN using machine learning algorithms. Medical records of hospitalized patients who underwent chemotherapy after surgery for breast cancer between May 2002 and September 2018 were selectively reviewed for model development. Demographic, clinical, pathological, and therapeutic data were analyzed to identify risk factors for FN. Using machine learning algorithms, prediction models were developed and evaluated for performance. Of 933 selected inpatients with a mean age of 51.8 ± 10.7 years, FN developed in 409 (43.8%) patients. There was a significant difference in FN incidence according to age, staging, taxane-based regimen, and blood count 5 days after chemotherapy. A logistic regression model built on these findings had an area under the curve (AUC) of 0.870, which machine learning improved to 0.908. Machine learning thus improves the prediction of FN in patients undergoing chemotherapy for breast cancer compared with the conventional statistical model. In these high-risk patients, primary prophylaxis with granulocyte colony-stimulating factor could be considered.
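The comparison above, a conventional logistic regression versus a machine learning model on the same risk factors, can be sketched as follows. Gradient boosting is used here only as a stand-in because the abstract does not name the best-performing algorithm; the feature names and synthetic data are illustrative.

```python
# Sketch contrasting logistic regression with a machine-learning model
# (gradient boosting as a stand-in) on synthetic, illustrative risk factors.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 900
X = pd.DataFrame({
    "age": rng.normal(52, 11, n),
    "stage": rng.integers(1, 4, n),
    "taxane_regimen": rng.integers(0, 2, n),
    "anc_day5": rng.normal(2.0, 1.0, n),    # blood count 5 days after chemo
})
logit = (-1 + 0.03 * (X["age"] - 52) + 0.4 * X["stage"]
         + 0.8 * X["taxane_regimen"] - 1.2 * X["anc_day5"])
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("logistic regression", LogisticRegression(max_iter=1000)),
                  ("gradient boosting", GradientBoostingClassifier())]:
    clf.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```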


Subjects
Breast Neoplasms/drug therapy, Febrile Neutropenia/epidemiology, Aged, Algorithms, Antineoplastic Agents/adverse effects, Antineoplastic Agents/therapeutic use, Antineoplastic Combined Chemotherapy Protocols, Bridged Aromatic Hydrocarbons/adverse effects, Bridged Aromatic Hydrocarbons/therapeutic use, Female, Granulocyte Colony-Stimulating Factor/metabolism, Humans, Incidence, Inpatients/statistics & numerical data, Logistic Models, Machine Learning, Male, Middle Aged, Republic of Korea, Risk Factors, Taxoids/adverse effects, Taxoids/therapeutic use
10.
Sci Rep; 10(1): 13652, 2020 Aug 12.
Article in English | MEDLINE | ID: mdl-32788635

ABSTRACT

Colposcopy is widely used to detect cervical cancers, but experienced physicians, who are needed for an accurate diagnosis, are lacking in developing countries. Artificial intelligence (AI) has recently been used in computer-aided diagnosis, showing remarkable promise. In this study, we developed and validated deep learning models to automatically classify cervical neoplasms on colposcopic photographs. Pre-trained convolutional neural networks were fine-tuned for two grading systems: the cervical intraepithelial neoplasia (CIN) system and the lower anogenital squamous terminology (LAST) system. The multi-class classification accuracies of the networks for the CIN system in the test dataset were 48.6 ± 1.3% by Inception-Resnet-v2 and 51.7 ± 5.2% by Resnet-152. The accuracies for the LAST system were 71.8 ± 1.8% and 74.7 ± 1.8%, respectively. The area under the curve (AUC) for discriminating high-risk lesions from low-risk lesions by Resnet-152 was 0.781 ± 0.020 for the CIN system and 0.708 ± 0.024 for the LAST system. Lesions requiring biopsy were also detected efficiently (AUC, 0.947 ± 0.030 by Resnet-152) and were meaningfully highlighted on attention maps. These results suggest the potential of applying AI to the automated reading of colposcopic photographs.


Subjects
Colposcopy/methods, Deep Learning, Diagnosis, Computer-Assisted/methods, Neural Networks, Computer, Uterine Cervical Dysplasia/diagnosis, Uterine Cervical Neoplasms/classification, Uterine Cervical Neoplasms/diagnosis, Adolescent, Adult, Aged, Aged, 80 and over, Artificial Intelligence, Case-Control Studies, Female, Humans, Middle Aged, Retrospective Studies, Young Adult
11.
J Clin Med; 9(6), 2020 Jun 15.
Article in English | MEDLINE | ID: mdl-32549190

ABSTRACT

Endoscopic resection is recommended for gastric neoplasms confined to the mucosa or superficial submucosa. The determination of invasion depth is based on gross morphology assessed in endoscopic images or on endoscopic ultrasound. These methods have limited accuracy and are subject to inter-observer variability. Several studies have developed deep learning (DL) algorithms for classifying the invasion depth of gastric cancers. Nevertheless, these algorithms are intended to be used after a definite diagnosis of gastric cancer, which is not always feasible across the range of gastric neoplasms. This study aimed to establish a DL algorithm for accurately predicting submucosal invasion in endoscopic images of gastric neoplasms. Pre-trained convolutional neural network models were fine-tuned with 2899 white-light endoscopic images. The prediction models were subsequently validated with an external dataset of 206 images. In the internal test, the mean area under the curve for discriminating submucosal invasion was 0.887 (95% confidence interval: 0.849-0.924) with the DenseNet-161 network. In the external test, the mean area under the curve reached 0.887 (0.863-0.910). A clinical simulation showed that 6.7% of patients who underwent gastrectomy in the external test set would have been accurately qualified by the established algorithm as candidates for potential endoscopic resection, avoiding unnecessary surgery. The established DL algorithm proves useful for predicting submucosal invasion in endoscopic images of gastric neoplasms.

12.
Ophthalmic Epidemiol; 27(6): 460-467, 2020 Dec.
Article in English | MEDLINE | ID: mdl-32506973

ABSTRACT

PURPOSE: This study aimed to determine the incidence, prevalence, and etiologies of third cranial nerve (CN3) palsy in Koreans. METHODS: Data were collected from the National Health Insurance Service-National Sample Cohort (NHIS-NSC) database of South Korea and analyzed. Incident CN3 palsy subjects in the cohort population were defined as cases occurring after the initial 4-year or longer washout period. The incidence and prevalence were analyzed by sex, age group, and year. The etiologies of CN3 palsy were evaluated using comorbidities. RESULTS: Of 1,108,253 subjects, 387 patients were newly diagnosed with CN3 palsy between 2006 and 2015. The incidence of CN3 palsy was 3.71 per 100,000 person-years (95% confidence interval, 3.35-4.09). The incidence of CN3 palsy increased with age and accelerated after the age of 60 years. The mean male-to-female incidence ratio was 1.16. The main cause was presumed to be vascular disease (52.7%), followed by idiopathic causes (25.8%), intracranial neoplasm (7.8%), unruptured cerebral aneurysm (5.4%), and trauma (5.2%). CONCLUSIONS: The incidence of CN3 palsy in Koreans increased with age and peaked between 75 and 79 years. The main cause of CN3 palsy was vascular disease.


Subjects
Oculomotor Nerve Diseases, Oculomotor Nerve, Aged, Cohort Studies, Female, Humans, Incidence, Male, Middle Aged, Paralysis, Republic of Korea, Retrospective Studies
13.
J Clin Med; 9(5), 2020 May 24.
Article in English | MEDLINE | ID: mdl-32456309

ABSTRACT

Background: Classification of colorectal neoplasms during colonoscopic examination is important to avoid unnecessary endoscopic biopsy or resection. This study aimed to develop and validate deep learning models that automatically classify colorectal lesions histologically on white-light colonoscopy images. Methods: White-light colonoscopy images of colorectal lesions with available pathological results were collected and classified into seven categories: T1, T2, T3, and T4 colorectal cancer (CRC), high-grade dysplasia (HGD), tubular adenoma (TA), and non-neoplasms. The images were then re-classified into four categories: advanced CRC, early CRC/HGD, TA, and non-neoplasms. Two convolutional neural network models were trained, and their performances were evaluated on an internal test dataset and an external validation dataset. Results: In total, 3828 images were collected from 1339 patients. The mean accuracies of the ResNet-152 model for the seven-category and four-category classifications were 60.2% and 67.3% on the internal test dataset, and 74.7% and 79.2% on the external validation dataset (240 images), respectively. In the external validation, ResNet-152 outperformed two endoscopists for the four-category classification and showed a higher mean area under the curve (AUC) for detecting TA+ lesions (0.818) than the worst-performing endoscopist. The mean AUC for detecting HGD+ lesions reached 0.876 with Inception-ResNet-v2. Conclusions: A deep learning model showed promising performance in classifying colorectal lesions on white-light colonoscopy images; such a model could help endoscopists build optimal treatment strategies.

14.
Am J Gastroenterol; 115(1): 70-72, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31770118

ABSTRACT

Most colorectal polyps are diminutive, and malignancy in these polyps is uncommon, especially for those in the rectosigmoid. However, many diminutive polyps are still being resected to determine whether they are adenomas or serrated/hyperplastic polyps. Resecting all diminutive polyps is not cost-effective. Therefore, gastroenterologists have proposed optical diagnosis using image-enhanced endoscopy for polyp characterization. These technologies have achieved favorable outcomes but are not widely available. Artificial intelligence has been used in clinical medicine to classify lesions. Here, artificial intelligence technology for the characterization of colorectal polyps is discussed in the context of decision-making for diminutive colorectal polyps.


Subjects
Artificial Intelligence, Colonic Polyps/diagnosis, Colonoscopy/methods, Decision Making, Disease Management, Practice Guidelines as Topic, Colon/diagnostic imaging, Humans, Narrow Band Imaging/methods, Rectum/diagnostic imaging
15.
Sci Rep; 9(1): 18419, 2019 Dec 05.
Article in English | MEDLINE | ID: mdl-31804597

ABSTRACT

We aimed to investigate the incidence, prevalence, and etiology of sixth cranial nerve (CN6) palsy in the general Korean population. The nationally representative dataset of the Korea National Health Insurance Service-National Sample Cohort from 2006 through 2015 was analyzed. The incidence and prevalence of CN6 palsy were estimated in the cohort population, with incident cases of CN6 palsy confirmed by a preceding disease-free period of ≥4 years. The etiologies of CN6 palsy were presumed based on comorbid conditions. Among the 1,108,256 cohort subjects, CN6 palsy developed in 486 patients during the 10-year follow-up. The overall incidence of CN6 palsy was estimated to be 4.66 per 100,000 person-years (95% confidence interval [CI], 4.26-5.08) in the general population. The incidence increased with age, accelerating after 60 years of age and peaking at 70-74 years of age. The mean male-to-female incidence ratio was estimated at 1.41 in the whole population, and the incidence and prevalence of CN6 palsy showed an increasing trend over the study period. The surgical incidence for CN6 palsy was only 0.19 per 100,000 person-years (95% CI, 0.12-0.29). The etiologies were presumed to be vascular (56.6%), idiopathic (27.2%), neoplastic (5.6%), and traumatic (4.9%). In conclusion, the incidence of CN6 palsy increases with age, peaking at around 70 years, and shows a mild male predominance in Koreans.


Subjects
Abducens Nerve Diseases/epidemiology, Cerebrovascular Disorders/epidemiology, Craniocerebral Trauma/epidemiology, Neoplasms/epidemiology, Abducens Nerve Diseases/diagnosis, Abducens Nerve Diseases/etiology, Abducens Nerve Diseases/pathology, Adolescent, Adult, Aged, Aged, 80 and over, Cerebrovascular Disorders/complications, Cerebrovascular Disorders/diagnosis, Cerebrovascular Disorders/pathology, Child, Child, Preschool, Craniocerebral Trauma/complications, Craniocerebral Trauma/diagnosis, Craniocerebral Trauma/pathology, Female, Humans, Incidence, Infant, Infant, Newborn, Male, Middle Aged, National Health Programs/statistics & numerical data, Neoplasms/complications, Neoplasms/diagnosis, Neoplasms/pathology, Prevalence, Republic of Korea/epidemiology, Retrospective Studies, Risk Factors
16.
J Clin Sleep Med; 15(9): 1293-1301, 2019 Sep 15.
Article in English | MEDLINE | ID: mdl-31538600

ABSTRACT

STUDY OBJECTIVES: Several studies have reported an association between obstructive sleep apnea (OSA) and neuro-otologic diseases, such as Ménière's disease and sudden sensorineural hearing loss (SSNHL). However, the exact relationship between OSA and these diseases has not been fully evaluated. Therefore, the aim of this study was to investigate the prospective link between OSA and Ménière's disease or SSNHL. METHODS: We used a nationwide cohort sample of data for 2002-2013 representing approximately 1 million patients. The OSA group (n = 942) included patients diagnosed between 2004 and 2006; the comparison group was selected using propensity score matching (n = 3,768). We investigated Ménière's disease and SSNHL events over a 9-year follow-up period. Survival analysis, the log-rank test, and Cox proportional hazards regression models were used to calculate incidence, survival rate, and hazard ratios for each group. RESULTS: In the OSA group, the observed follow-up for the Ménière's disease and SSNHL analyses was 7,854.4 and 7,876.3 person-years, respectively. Cox proportional hazards analysis revealed no overall association between OSA and the risk of subsequent Ménière's disease or SSNHL. In a subgroup analysis, OSA was independently associated with a two-fold higher incidence of subsequent Ménière's disease in female and middle-aged patients, compared with their counterparts without OSA. However, no significant association between OSA and SSNHL was found, even in the subgroup analysis. CONCLUSIONS: Our findings suggest that OSA is associated with an increased incidence of Ménière's disease in female or middle-aged patients. However, there was no association between OSA and SSNHL. CITATION: Kim J-Y, Ko I, Cho B-J, Kim D-K. Association of obstructive sleep apnea with the risk of Ménière's disease and sudden sensorineural hearing loss: a study using data from the Korean National Health Insurance Service. J Clin Sleep Med. 2019;15(9):1293-1301.
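The survival analysis described above can be sketched with a Cox proportional hazards model comparing the OSA and matched comparison cohorts. The DataFrame below is synthetic and its columns are assumptions, not fields from the NHIS sample cohort.

```python
# Sketch of a Cox proportional hazards model for time to Ménière's disease
# in an OSA-exposed vs. matched comparison cohort (synthetic data).
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(7)
n = 4710                                      # 942 OSA + 3,768 matched comparison
df = pd.DataFrame({
    "osa": np.r_[np.ones(942), np.zeros(3768)].astype(int),
    "age": rng.normal(50, 12, n),
    "female": rng.integers(0, 2, n),
})
# Synthetic follow-up times (years) and event indicator for illustration only.
baseline_hazard = 0.004 * np.exp(0.3 * df["osa"])
df["duration"] = rng.exponential(1 / baseline_hazard).clip(max=9.0)
df["meniere_event"] = (df["duration"] < 9.0).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="meniere_event")
cph.print_summary()   # hazard ratio for 'osa' with 95% CI
```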


Subjects
Hearing Loss, Sensorineural/epidemiology, Meniere Disease/epidemiology, Sleep Apnea, Obstructive/epidemiology, Adult, Age Factors, Aged, Cohort Studies, Comorbidity, Female, Humans, Incidence, Male, Middle Aged, National Health Programs, Prospective Studies, Republic of Korea/epidemiology, Risk Factors, Sex Factors
17.
Endoscopy; 51(12): 1121-1129, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31443108

ABSTRACT

BACKGROUND: Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist's role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images. METHODS: Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high-grade dysplasia, low-grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classification performance of the models was evaluated using a test dataset and a prospective validation dataset. RESULTS: A total of 5017 images were collected from 1269 patients, among which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-Resnet-v2 model reached 84.6%. The mean areas under the curve (AUCs) of the model for differentiating gastric cancer and neoplasm were 0.877 and 0.927, respectively. In prospective validation, the Inception-Resnet-v2 model showed lower performance than the endoscopist with the best performance (five-category accuracy 76.4% vs. 87.6%; cancer 76.0% vs. 97.5%; neoplasm 73.5% vs. 96.5%; P < 0.001). However, there was no statistical difference between the Inception-Resnet-v2 model and the endoscopist with the worst performance in the differentiation of gastric cancer (accuracy 76.0% vs. 82.0%) and neoplasm (AUC 0.776 vs. 0.865). CONCLUSION: The evaluated deep learning models have potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.
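One plausible way to compare the model's and an endoscopist's accuracies on the same 200 prospective images is a McNemar test on paired correct/incorrect indicators, sketched below. The abstract does not state which test the authors used, and the counts are invented.

```python
# Sketch of a McNemar test on paired correct/incorrect predictions for the
# same 200 images (illustrative counts, not the study's results).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# Rows: model correct / incorrect; columns: endoscopist correct / incorrect.
table = np.array([[140, 13],
                  [35, 12]])
result = mcnemar(table, exact=False, correction=True)
print(f"statistic={result.statistic:.2f}, p-value={result.pvalue:.4f}")
```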


Subjects
Deep Learning/statistics & numerical data, Endoscopy/methods, Stomach Neoplasms, Databases, Factual/statistics & numerical data, Diagnosis, Differential, Early Detection of Cancer, Humans, Image Processing, Computer-Assisted/methods, Neoplasm Grading, Neoplasm Staging, Neural Networks, Computer, ROC Curve, Reproducibility of Results, Stomach Neoplasms/classification, Stomach Neoplasms/diagnostic imaging, Stomach Neoplasms/pathology
18.
JAMA Otolaryngol Head Neck Surg; 145(4): 313-319, 2019 Apr 01.
Article in English | MEDLINE | ID: mdl-30730537

ABSTRACT

Importance: Chronic rhinosinusitis (CRS) is associated with a decreased quality of life, affecting physical and emotional aspects of daily function, the latter of which can manifest as depression and anxiety. Objective: To evaluate the risk of depression and anxiety in CRS, depending on the CRS phenotype (CRS without nasal polyps [CRSsNP] and CRS with nasal polyps [CRSwNP]). Design, Setting, and Participants: This retrospective nationwide cohort study used population-based insurance data (consisting of data from approximately 1 million patients). The study population included 16,224 patients with CRS and 32,448 individuals without CRS, with propensity score matching between groups according to sociodemographic factors and enrollment year. Data were collected from January 1, 2002, through December 31, 2013, and analyzed from July 1 through November 15, 2018. Main Outcomes and Measures: Survival analysis, the log-rank test, and Cox proportional hazards regression models were used to calculate the incidence, survival rate, and hazard ratio (HR) of depression and anxiety for each group. Results: Among the 48,672 individuals included in the study population (58.8% female), the overall incidence of depression during the 11-year follow-up was 1.51-fold higher in the CRS group than in the non-CRS group (24.2 vs 16.0 per 1000 person-years; adjusted HR, 1.54; 95% CI, 1.48-1.61). The incidence of anxiety was also higher in the CRS group than in the comparison group (42.2 vs 27.8 per 1000 person-years; adjusted HR, 1.57; 95% CI, 1.52-1.62). Moreover, the adjusted HRs of developing depression (CRSsNP, 1.61 [95% CI, 1.54-1.69]; CRSwNP, 1.41 [95% CI, 1.32-1.50]) and anxiety (CRSsNP, 1.63 [95% CI, 1.57-1.69]; CRSwNP, 1.45 [95% CI, 1.38-1.52]) were greater in patients with CRSsNP than in those with CRSwNP. Conclusions and Relevance: This observational study suggests that CRS is associated with an increased incidence of depression and anxiety. Specifically, patients without nasal polyps had a higher risk of developing depression and anxiety than those with nasal polyps.


Subjects
Anxiety/epidemiology, Depression/epidemiology, Nasal Polyps/psychology, Rhinitis/psychology, Sinusitis/psychology, Adult, Aged, Chronic Disease, Female, Humans, Incidence, Male, Middle Aged, Nasal Polyps/complications, National Health Programs, Propensity Score, Proportional Hazards Models, Quality of Life, Republic of Korea, Retrospective Studies, Rhinitis/complications, Rhinitis/mortality, Sinusitis/complications, Sinusitis/mortality, Survival Rate
19.
Medicine (Baltimore); 97(50): e13656, 2018 Dec.
Article in English | MEDLINE | ID: mdl-30558063

ABSTRACT

Both extremely long and short sleep durations have been associated with an increased risk of numerous health problems. This study examined the association between self-reported sleep duration and the reporting of musculoskeletal pain in the adult Korean population. This study included data from 17,108 adults aged ≥50 years, obtained from the Korea National Health and Nutrition Examination Survey 2010-2012 and 2013-2015. Self-reported daily hours slept and the presence of musculoskeletal pain in the knee joint, hip joint, or low back were examined. Participants were stratified into 5 groups by their sleep duration: ≤5, 6, 7, 8, or ≥9 h. Multivariate logistic regression analysis was performed, adjusting for covariates including age, sex, marital status, smoking, alcohol use, family income level, education, physical exercise, body mass index (BMI), and stress level. A U-shaped relationship was observed between sleep duration and the presence of musculoskeletal pain. After adjusting for covariates, a sleep duration of ≤5 h or ≥9 h was significantly associated with musculoskeletal pain experienced for more than 30 days over a 3-month period. We also found that the presence of multi-site musculoskeletal pain was significantly higher among those who slept for ≤5 h or ≥9 h than among those who slept for 7 h. These findings suggest that either short or long sleep duration is associated with musculoskeletal pain among Korean adults.
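The adjusted analysis described above, musculoskeletal pain regressed on sleep-duration category with 7 h as the reference plus covariates, can be sketched with statsmodels as below. The data are synthetic and the covariate list is abbreviated.

```python
# Sketch of a multivariate logistic model of pain vs. sleep-duration category
# (reference: 7 h) with a few adjustment covariates, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
sleep = rng.choice(["<=5", "6", "7", "8", ">=9"], size=n)
age = rng.normal(62, 8, n)
bmi = rng.normal(24, 3, n)
# U-shaped synthetic risk: higher at the extremes of sleep duration.
risk = 0.02 * (age - 62) + 0.05 * (bmi - 24) + np.select(
    [sleep == "<=5", sleep == ">=9"], [0.6, 0.5], default=0.0)
pain = (rng.random(n) < 1 / (1 + np.exp(-(-0.5 + risk)))).astype(int)

df = pd.DataFrame({
    "pain": pain,
    "sleep": pd.Categorical(sleep, categories=["7", "<=5", "6", "8", ">=9"]),
    "age": age,
    "bmi": bmi,
})

model = smf.logit("pain ~ C(sleep) + age + bmi", data=df).fit()
print(np.exp(model.params))        # odds ratios relative to 7 h of sleep
print(np.exp(model.conf_int()))    # 95% confidence intervals
```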


Subjects
Musculoskeletal Pain/psychology, Self Report/statistics & numerical data, Sleep/physiology, Aged, Body Mass Index, Cross-Sectional Studies, Female, Hip Joint/pathology, Humans, Knee Joint/pathology, Low Back Pain/pathology, Low Back Pain/psychology, Male, Middle Aged, Musculoskeletal Pain/epidemiology, Republic of Korea/epidemiology, Time Factors
20.
Curr Eye Res; 43(8): 1052-1060, 2018 Aug.
Article in English | MEDLINE | ID: mdl-29718719

ABSTRACT

PURPOSE: To investigate risk factors for high myopia in the general Korean population. METHODS: In this nationwide population study, the dataset of the Korea National Health and Nutrition Examination Survey 2008-2012 was analyzed. The study cohort included 11,703 participants, aged 25-49 years, who had undergone neither refractive nor cataract surgery. The associations of demographic, socioeconomic, behavioral, and systemic variables with high myopia were investigated. RESULTS: The mean participant age was 37.9 ± 6.8 years, and the prevalence of high myopia (≤ -6.0 D) was 7.0 ± 0.3% in the study population. The right eyes (-1.76 ± 0.03 D) were more myopic than the left eyes (-1.70 ± 0.03 D; P < 0.001). In multivariate logistic regression analysis, high myopia was associated with age (odds ratio [OR], 0.97 per 1-year increase) and female sex (OR, 1.24). Other identified risk factors included an education level of university graduation or higher (OR, 1.91), the presence of hypertension (OR, 1.69), and serum glucose level (OR, 1.01 per 1 mg/dL). Sunlight exposure of ≥5 h/day (OR, 0.67) and serum 25-hydroxyvitamin D level (OR, 0.97 per 1 ng/mL) showed a protective effect against high myopia. CONCLUSION: High myopia is associated with younger age, female sex, high education level, less sunlight exposure, and some other systemic conditions.


Subjects
Myopia, Degenerative/epidemiology, Nutrition Surveys/methods, Refraction, Ocular/physiology, Adult, Female, Humans, Male, Middle Aged, Myopia, Degenerative/physiopathology, Prevalence, Republic of Korea/epidemiology, Risk Factors, Socioeconomic Factors