Results 1 - 4 of 4
1.
BMC Ophthalmol ; 22(1): 483, 2022 Dec 12.
Article in English | MEDLINE | ID: mdl-36510171

ABSTRACT

BACKGROUND: To verify the efficacy of automatic screening and classification of glaucoma with a deep learning system (DLS).

METHODS: A cross-sectional, retrospective study in a tertiary referral hospital. Patients with a healthy optic disc, high-tension glaucoma, or normal-tension glaucoma were enrolled; complicated non-glaucomatous optic neuropathy was excluded. Colour and red-free fundus images were collected for development of the DLS and comparison of their efficacy. A convolutional neural network with the pre-trained EfficientNet-b0 model was selected for machine learning. Glaucoma screening (binary) and ternary classification, with or without additional demographics (age, gender, high myopia), were evaluated, followed by construction of confusion matrices and heatmaps. Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, and F1 score were the main outcome measures.

RESULTS: Two hundred and twenty-two cases (421 eyes) were enrolled, with 1851 images in total (1207 normal and 644 glaucomatous discs). The training and test sets comprised 1539 and 312 images, respectively. Without demographic data, the AUC, accuracy, precision, sensitivity, F1 score, and specificity of the DLS in eye-based glaucoma screening were 0.98, 0.91, 0.86, 0.86, 0.86, and 0.94 on the test set. The same outcome measures for eye-based ternary classification without demographic data were 0.94, 0.87, 0.87, 0.87, 0.87, and 0.94, respectively. Adding demographics had no significant impact on efficacy, but establishing a linkage between eyes and images improved performance. The confusion matrices and heatmaps suggested that retinal lesions and photograph quality could affect classification. Colour fundus images played a major role in glaucoma classification compared with red-free fundus images.
CONCLUSIONS: Promising results, with high AUC and specificity, were obtained in distinguishing normal optic nerves from glaucomatous fundus images and in further classification.
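The screening metrics this record reports (accuracy, precision, sensitivity, specificity, F1) all derive from a binary confusion matrix. A minimal pure-Python sketch; the counts below are illustrative only, not the study's data:

```python
def binary_metrics(tp, fp, fn, tn):
    """Compute standard screening metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)      # recall / true-positive rate
    specificity = tn / (tn + fp)      # true-negative rate
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity, "f1": f1}

# Hypothetical counts for a glaucoma-vs-normal test set (not the paper's data)
m = binary_metrics(tp=90, fp=10, fn=15, tn=197)
```

Precision and sensitivity can agree (as in the paper's 0.86/0.86) whenever false positives and false negatives happen to balance.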


Subjects
Deep Learning , Glaucoma , Optic Disk , Humans , Case-Control Studies , Retrospective Studies , Cross-Sectional Studies , Optic Disk/diagnostic imaging , Optic Disk/pathology , Fundus Oculi , Glaucoma/pathology , ROC Curve
2.
J Med Internet Res ; 23(6): e25247, 2021 06 08.
Article in English | MEDLINE | ID: mdl-34100770

ABSTRACT

BACKGROUND: Dysphonia affects quality of life by interfering with communication. However, laryngoscopic examination is expensive and not readily accessible in primary care units, and experienced laryngologists are required to achieve an accurate diagnosis.

OBJECTIVE: This study sought to detect various vocal fold diseases through pathological voice recognition using artificial intelligence.

METHODS: We collected 189 normal voice samples and 552 samples from individuals with voice disorders, including vocal atrophy (n=224), unilateral vocal paralysis (n=50), organic vocal fold lesions (n=248), and adductor spasmodic dysphonia (n=30). The 741 samples were divided into 2 sets: a training set of 593 samples and a testing set of 148 samples. A convolutional neural network approach was applied to train the model, and its findings were compared with those of human specialists.

RESULTS: The convolutional neural network model achieved a sensitivity of 0.66, a specificity of 0.91, and an overall accuracy of 66.9% for distinguishing normal voice, vocal atrophy, unilateral vocal paralysis, organic vocal fold lesions, and adductor spasmodic dysphonia. By comparison, the overall accuracy rates of the human specialists were 60.1% and 56.1% for the 2 laryngologists and 51.4% and 43.2% for the 2 general ear, nose, and throat doctors.

CONCLUSIONS: Voice alone could be used to recognize common vocal fold diseases through a deep learning approach after training with our Mandarin pathological voice database. This artificial intelligence approach could be clinically useful for screening general vocal fold disease by voice, for example as part of a quick survey or general health examination, and could be applied through telemedicine in areas where primary care units lack laryngoscopic capabilities.
It could also support physicians in prescreening, reserving invasive examinations for cases in which automatic recognition or listening proves problematic, or in which professional analysis of other clinical examination results raises doubts about the presence of pathologies.
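The abstract does not describe the preprocessing pipeline, but CNN-based voice models typically consume fixed-length, overlapping windows of the raw signal. A minimal framing sketch under that assumption (the frame and hop sizes below are hypothetical, not taken from the study):

```python
def frame_signal(samples, frame_len, hop):
    """Split a 1-D sample sequence into overlapping fixed-length frames,
    dropping any trailing partial frame."""
    if frame_len <= 0 or hop <= 0:
        raise ValueError("frame_len and hop must be positive")
    return [samples[i:i + frame_len]
            for i in range(0, len(samples) - frame_len + 1, hop)]

# 1 s of fake audio at 16 kHz, 25 ms frames with a 10 ms hop (assumed values)
frames = frame_signal(list(range(16000)), frame_len=400, hop=160)
```

Each frame (or a spectrogram computed from it) would then be a single CNN input, and per-frame predictions aggregated into one label per recording.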


Subjects
Deep Learning , Vocal Cords , Artificial Intelligence , Humans , Quality of Life , Voice Recognition
3.
Crit Care ; 24(1): 478, 2020 07 31.
Article in English | MEDLINE | ID: mdl-32736589

ABSTRACT

BACKGROUND: Cardiac surgery-associated acute kidney injury (CSA-AKI) is a major complication that increases morbidity and mortality after cardiac surgery. Most established prediction models are limited in their analysis of nonlinear relationships and fail to fully consider intraoperative variables, which represent the acute response to surgery. This study therefore applied an artificial intelligence-based machine learning approach, through perioperative data-driven learning, to predict CSA-AKI.

METHODS: A total of 671 patients undergoing cardiac surgery from August 2016 to August 2018 were enrolled. AKI following cardiac surgery was defined according to the Kidney Disease: Improving Global Outcomes (KDIGO) criteria. The variables used for analysis included demographic characteristics, clinical condition, preoperative biochemistry data, preoperative medication, and intraoperative variables such as time-series hemodynamic changes. The machine learning methods used were logistic regression, support vector machine (SVM), random forest (RF), extreme gradient boosting (XGBoost), and an ensemble (RF + XGBoost). Model performance was evaluated using the area under the receiver operating characteristic curve (AUC), and SHapley Additive exPlanation (SHAP) values were used to explain the prediction models.

RESULTS: CSA-AKI developed in 163 patients (24.3%) during the first postoperative week. Among single models, RF exhibited the greatest AUC (0.839, 95% confidence interval [CI] 0.772-0.898), whereas the AUC of the ensemble model (RF + XGBoost) was higher still (0.843, 95% CI 0.778-0.899). The top 3 most influential features in the RF importance matrix plot were intraoperative urine output, units of packed red blood cells (pRBCs) transfused during surgery, and preoperative hemoglobin level.
The SHAP summary plot was used to illustrate the positive or negative effects of the top 20 features in the RF model, and a SHAP dependence plot was used to explain how a single feature affects the output of the RF prediction model.

CONCLUSIONS: In this study, machine learning methods were successfully established to predict CSA-AKI risk following cardiac surgery, enabling optimization of postoperative treatment strategies to minimize postoperative complications.
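The AUC used to compare these models, and the soft-voting idea behind an RF + XGBoost ensemble, can both be sketched in plain Python. The scores below are toy values, not the study's data:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank (Mann-Whitney) formulation:
    the probability that a random positive outscores a random negative,
    counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble(prob_a, prob_b):
    """Soft-voting ensemble: average two models' predicted probabilities."""
    return [(a + b) / 2 for a, b in zip(prob_a, prob_b)]

# Toy CSA-AKI risk scores from two hypothetical models
y     = [1, 1, 0, 0, 1, 0]
rf    = [0.9, 0.6, 0.4, 0.2, 0.5, 0.7]
xgb   = [0.8, 0.7, 0.3, 0.1, 0.6, 0.4]
combo = ensemble(rf, xgb)
```

Averaging probabilities can lift AUC above either base model when the two models make partially uncorrelated errors, which is consistent with the ensemble's slight edge reported here.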


Subjects
Acute Kidney Injury/epidemiology , Cardiac Surgical Procedures/adverse effects , Machine Learning , Models, Statistical , Postoperative Complications/epidemiology , Aged , Female , Humans , Male , Middle Aged , ROC Curve , Reproducibility of Results , Retrospective Studies , Risk Assessment/methods
4.
IEEE Trans Med Imaging ; 41(10): 2828-2847, 2022 10.
Article in English | MEDLINE | ID: mdl-35507621

ABSTRACT

Age-related macular degeneration (AMD) is the leading cause of visual impairment among the elderly worldwide. Early detection of AMD is of great importance, as the vision loss caused by this disease is irreversible and permanent. Color fundus photography is the most cost-effective imaging modality for screening for retinal disorders. Cutting-edge deep learning-based algorithms have recently been developed to automatically detect AMD from fundus images. However, there is still a lack of comprehensive annotated datasets and standard evaluation benchmarks. To address this issue, we set up the Automatic Detection challenge on Age-related Macular degeneration (ADAM), held as a satellite event of the ISBI 2020 conference. The ADAM challenge consisted of four tasks covering the main aspects of detecting and characterizing AMD from fundus images: detection of AMD, detection and segmentation of the optic disc, localization of the fovea, and detection and segmentation of lesions. As part of the challenge, we released a comprehensive dataset of 1200 fundus images with AMD diagnostic labels, pixel-wise segmentation masks for both the optic disc and AMD-related lesions (drusen, exudates, hemorrhages, and scars, among others), and the coordinates of the macular fovea. A uniform evaluation framework was built to enable fair comparison of different models on this dataset. During the challenge, 610 results were submitted for online evaluation, and 11 teams ultimately participated in the onsite challenge. This paper introduces the challenge, the dataset, and the evaluation methods, summarizes the participating methods, and analyzes their results for each task. In particular, we observed that ensembling strategies and the incorporation of clinical domain knowledge were key to improving the performance of the deep learning models.
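The challenge's exact scoring protocol is defined by its organizers, but segmentation and landmark-localization tasks like these are commonly evaluated with the Dice coefficient and Euclidean distance, respectively. A minimal sketch under that assumption (the masks and coordinates are toy values):

```python
import math

def dice(mask_a, mask_b):
    """Dice similarity between two binary masks given as flat 0/1 lists:
    2*|A∩B| / (|A|+|B|), with two empty masks counted as a perfect match."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2 * inter / total if total else 1.0

def fovea_error(pred, truth):
    """Euclidean distance between predicted and reference fovea coordinates."""
    return math.dist(pred, truth)

# Toy 4-pixel masks and (x, y) coordinates, illustrative only
d = dice([1, 1, 0, 0], [1, 0, 0, 0])     # overlap of 1 pixel out of 3 labeled
e = fovea_error((3.0, 4.0), (0.0, 0.0))
```

Averaging such per-image scores over the whole test set gives one comparable number per team per task, which is the role the paper's uniform evaluation framework plays.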


Subjects
Macular Degeneration , Aged , Diagnostic Techniques, Ophthalmological , Fundus Oculi , Humans , Macular Degeneration/diagnostic imaging , Photography/methods , Reproducibility of Results