Results 1 - 15 of 15
1.
Cardiovasc Diabetol ; 23(1): 296, 2024 Aug 10.
Article in English | MEDLINE | ID: mdl-39127709

ABSTRACT

BACKGROUND: Cardiac autonomic neuropathy (CAN) in diabetes mellitus (DM) is independently associated with cardiovascular (CV) events and CV death. Diagnosis of this complication of DM is time-consuming and not routinely performed in clinical practice, in contrast to fundus retinal imaging, which is accessible and routinely performed. Whether artificial intelligence (AI) utilizing retinal images collected through diabetic eye screening can provide an efficient diagnostic method for CAN is unknown. METHODS: This was a single-center, observational study in a cohort of patients with DM as a part of the Cardiovascular Disease in Patients with Diabetes: The Silesia Diabetes-Heart Project (NCT05626413). To diagnose CAN, we used standard CV autonomic reflex tests. In this analysis we implemented AI-based deep learning techniques with non-mydriatic 5-field color fundus imaging to identify patients with CAN. Two experiments were developed utilizing Multiple Instance Learning, primarily with ResNet 18 as the backbone network. Models underwent training and validation prior to testing on an unseen image set. RESULTS: In an analysis of 2275 retinal images from 229 patients, the ResNet 18 backbone model demonstrated robust diagnostic capabilities in the binary classification of CAN, correctly identifying 93% of CAN cases and 89% of non-CAN cases within the test set. The model achieved an area under the receiver operating characteristic curve (AUCROC) of 0.87 (95% CI 0.74-0.97). For distinguishing between definite or severe stages of CAN (dsCAN), the ResNet 18 model accurately classified 78% of dsCAN cases and 93% of cases without dsCAN, with an AUCROC of 0.94 (95% CI 0.86-1.00). An alternate backbone model, ResWide 50, showed enhanced sensitivity at 89% for dsCAN, but with a marginally lower AUCROC of 0.91 (95% CI 0.73-1.00). CONCLUSIONS: AI-based algorithms utilising retinal images can identify patients with CAN with high accuracy.
AI analysis of fundus images to detect CAN may be implemented in routine clinical practice to identify patients at the highest CV risk. TRIAL REGISTRATION: This is a part of the Silesia Diabetes-Heart Project (Clinical-Trials.gov Identifier: NCT05626413).
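In a multiple-instance learning setup like the one described above, each patient's five fundus fields form a "bag" of instances, and per-image scores are pooled into one patient-level prediction. A minimal sketch of that pooling step (the pooling rules and scores here are illustrative assumptions, not the authors' exact architecture):

```python
import numpy as np

def mil_bag_prediction(instance_scores, pooling="max"):
    """Aggregate per-image (instance) CAN probabilities into a single
    patient-level (bag) prediction -- the core idea of multiple
    instance learning.  The pooling rule is an illustrative choice."""
    scores = np.asarray(instance_scores, dtype=float)
    if pooling == "max":        # bag is positive if any instance is
        return float(scores.max())
    if pooling == "mean":       # smoother, more noise-tolerant
        return float(scores.mean())
    raise ValueError(f"unknown pooling: {pooling}")

# e.g. hypothetical scores for five fundus fields from one patient
fields = [0.10, 0.20, 0.85, 0.30, 0.15]
assert mil_bag_prediction(fields, "max") == 0.85
```

Max pooling flags a patient if any single field looks abnormal; mean pooling trades sensitivity for robustness to a single noisy field.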


Subject(s)
Deep Learning , Diabetic Neuropathies , Predictive Value of Tests , Humans , Male , Female , Middle Aged , Aged , Diabetic Neuropathies/diagnosis , Diabetic Neuropathies/physiopathology , Diabetic Neuropathies/diagnostic imaging , Diabetic Neuropathies/etiology , Reproducibility of Results , Diabetic Retinopathy/diagnosis , Diabetic Retinopathy/diagnostic imaging , Diabetic Retinopathy/epidemiology , Image Interpretation, Computer-Assisted , Autonomic Nervous System/physiopathology , Autonomic Nervous System/diagnostic imaging , Fundus Oculi , Heart Diseases/diagnostic imaging , Heart Diseases/diagnosis , Adult , Artificial Intelligence
2.
Malar J ; 22(1): 139, 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37101295

ABSTRACT

BACKGROUND: Cerebral malaria (CM) continues to present a major health challenge, particularly in sub-Saharan Africa. CM is associated with a characteristic malarial retinopathy (MR) with diagnostic and prognostic significance. Advances in retinal imaging have allowed researchers to better characterize the changes seen in MR and to make inferences about the pathophysiology of the disease. The study aimed to explore the role of retinal imaging in diagnosis and prognostication in CM; to establish insights into the pathophysiology of CM from retinal imaging; and to identify future research directions. METHODS: The literature was systematically reviewed using the African Index Medicus, MEDLINE, Scopus and Web of Science databases. A total of 35 full texts were included in the final analysis. The descriptive nature of the included studies and their heterogeneity precluded meta-analysis. RESULTS: Available research clearly shows retinal imaging is useful both as a clinical tool for the assessment of CM and as a scientific instrument to aid the understanding of the condition. Modalities which can be performed at the bedside, such as fundus photography and optical coherence tomography, are best positioned to take advantage of artificial intelligence-assisted image analysis, unlocking the clinical potential of retinal imaging for real-time diagnosis in low-resource environments where extensively trained clinicians may be few in number, and for guiding adjunctive therapies as they develop. CONCLUSIONS: Further research into retinal imaging technologies in CM is justified. In particular, co-ordinated interdisciplinary work shows promise in unpicking the pathophysiology of a complex disease.


Subject(s)
Malaria, Cerebral , Retinal Diseases , Humans , Artificial Intelligence , Retina/diagnostic imaging , Retinal Diseases/diagnostic imaging , Tomography, Optical Coherence/methods
3.
Diabetologia ; 65(3): 457-466, 2022 03.
Article in English | MEDLINE | ID: mdl-34806115

ABSTRACT

AIMS/HYPOTHESIS: We aimed to develop an artificial intelligence (AI)-based deep learning algorithm (DLA) applying attribution methods without image segmentation to corneal confocal microscopy images and to accurately classify peripheral neuropathy (or the lack thereof). METHODS: The AI-based DLA utilised convolutional neural networks with data augmentation to increase the algorithm's generalisability. The algorithm was trained using a high-end graphics processor for 300 epochs on 329 corneal nerve images and tested on 40 images (1 image/participant). Participants consisted of healthy volunteer (HV) participants (n = 90) and participants with type 1 diabetes (n = 88), type 2 diabetes (n = 141) and prediabetes (n = 50) (defined as impaired fasting glucose, impaired glucose tolerance or a combination of both), and were classified into HV, those without neuropathy (PN-) (n = 149) and those with neuropathy (PN+) (n = 130). For the AI-based DLA, a modified residual neural network called ResNet-50 was developed and used to extract features from images and perform classification. The algorithm was tested on 40 participants (15 HV, 13 PN-, 12 PN+). The attribution methods gradient-weighted class activation mapping (Grad-CAM), Guided Grad-CAM and occlusion sensitivity displayed the areas within the image that had the greatest impact on the decision of the algorithm. RESULTS: The results were as follows: HV: recall of 1.0 (95% CI 1.0, 1.0), precision of 0.83 (95% CI 0.65, 1.0), F1-score of 0.91 (95% CI 0.79, 1.0); PN-: recall of 0.85 (95% CI 0.62, 1.0), precision of 0.92 (95% CI 0.73, 1.0), F1-score of 0.88 (95% CI 0.71, 1.0); PN+: recall of 0.83 (95% CI 0.58, 1.0), precision of 1.0 (95% CI 1.0, 1.0), F1-score of 0.91 (95% CI 0.74, 1.0). The features displayed by the attribution methods demonstrated more corneal nerves in HV, a reduction in corneal nerves for PN- and an absence of corneal nerves for PN+ images.
CONCLUSIONS/INTERPRETATION: We demonstrate promising results in the rapid classification of peripheral neuropathy using a single corneal image. A large-scale multicentre validation study is required to assess the utility of AI-based DLA in screening and diagnostic programmes for diabetic neuropathy.
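Of the attribution methods named above, occlusion sensitivity is the simplest to illustrate: slide a masking patch over the image, re-score the occluded copy, and record how much the model's output drops. A toy sketch (the "model" here is a stand-in scoring function, not the paper's ResNet-50):

```python
import numpy as np

def occlusion_sensitivity(image, model_fn, patch=8, fill=0.0):
    """Occlusion-sensitivity map: occlude one square patch at a time
    and record the drop in the model's score.  High values mark the
    regions the model relies on for its decision."""
    h, w = image.shape
    base = model_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - model_fn(occluded)
    return heat

# toy "model": mean brightness of the top-left 8x8 quadrant
toy = lambda im: float(im[:8, :8].mean())
img = np.ones((16, 16))
heat = occlusion_sensitivity(img, toy, patch=8)
# only occluding the top-left patch changes this toy model's output
```

Grad-CAM and Guided Grad-CAM instead use the network's own gradients, so they need no repeated forward passes, but the occlusion map is model-agnostic.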


Subject(s)
Diabetes Mellitus, Type 2 , Diabetic Neuropathies , Prediabetic State , Artificial Intelligence , Diabetic Neuropathies/diagnosis , Humans , Microscopy, Confocal/methods , Prediabetic State/diagnosis
4.
Front Med (Lausanne) ; 11: 1421439, 2024.
Article in English | MEDLINE | ID: mdl-39081694

ABSTRACT

We introduce a novel AI-driven approach to unsupervised fundus image registration utilizing our Generalized Polynomial Transformation (GPT) model. Through the GPT, we establish a foundational model capable of simulating diverse polynomial transformations, trained on a large synthetic dataset to encompass a broad range of transformation scenarios. Additionally, our hybrid pre-processing strategy aims to streamline the learning process by offering model-focused input. We evaluated our model's effectiveness on the publicly available AREDS dataset using standard metrics such as image-level and parameter-level analyses. Linear regression analysis reveals an average Pearson correlation coefficient (R) of 0.9876 across all quadratic transformation parameters. Image-level evaluation, comprising qualitative and quantitative analyses, showcases significant improvements in Structural Similarity Index (SSIM) and Normalized Cross Correlation (NCC) scores, indicating its robust performance. Notably, precise matching of the optic disc and vessel locations with minimal global distortion is observed. These findings underscore the potential of GPT-based approaches in image registration methodologies, promising advancements in diagnosis, treatment planning, and disease monitoring in ophthalmology and beyond.
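The "quadratic transformation parameters" above describe a degree-2 polynomial warp: each output coordinate is a polynomial in (x, y) with six coefficients, twelve parameters in total for 2-D. A sketch of applying such a warp to point coordinates (the basis ordering is an assumption for illustration, not the paper's exact parameterisation):

```python
import numpy as np

def quadratic_warp(points, px, py):
    """Apply a degree-2 (quadratic) polynomial transformation to 2-D
    points.  Each output coordinate is a linear combination of the
    basis [1, x, y, x*x, x*y, y*y], giving 6 coefficients per axis."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    basis = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y], axis=1)
    return np.stack([basis @ px, basis @ py], axis=1)

# identity warp: x' = x, y' = y
px = np.array([0, 1, 0, 0, 0, 0], float)
py = np.array([0, 0, 1, 0, 0, 0], float)
pts = np.array([[0.2, 0.4], [0.5, 0.1]])
assert np.allclose(quadratic_warp(pts, px, py), pts)
```

Registering two fundus images then amounts to regressing the twelve coefficients (as the GPT model does) and resampling one image through the warped grid.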

5.
Med Image Anal ; 95: 103183, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38692098

ABSTRACT

Automated segmentation is a challenging task in medical image analysis that usually requires a large amount of manually labeled data. However, most current supervised learning based algorithms suffer from insufficient manual annotations, posing a significant difficulty for accurate and robust segmentation. In addition, most current semi-supervised methods lack explicit representations of geometric structure and semantic information, restricting segmentation accuracy. In this work, we propose a hybrid framework to learn polygon vertices, region masks, and their boundaries in a weakly/semi-supervised manner that significantly advances geometric and semantic representations. Firstly, we propose multi-granularity learning of explicit geometric structure constraints via polygon vertices (PolyV) and pixel-wise region (PixelR) segmentation masks in a semi-supervised manner. Secondly, we propose eliminating boundary ambiguity by using an explicit contrastive objective to learn a discriminative feature space of boundary contours at the pixel level with limited annotations. Thirdly, we exploit task-specific clinical domain knowledge to make the clinical function assessment differentiable end-to-end. The ground truth of the clinical function assessment, in turn, can serve as auxiliary weak supervision for PolyV and PixelR learning. We evaluate the proposed framework on two tasks: optic disc (OD) and cup (OC) segmentation along with vertical cup-to-disc ratio (vCDR) estimation in fundus images, and left ventricle (LV) segmentation at end-diastolic and end-systolic frames along with ejection fraction (LVEF) estimation in two-dimensional echocardiography images. Experiments on nine large-scale datasets for the two tasks under different label settings demonstrate our model's superior performance on segmentation and clinical function assessment.


Subject(s)
Algorithms , Humans , Image Interpretation, Computer-Assisted/methods , Echocardiography
6.
Article in English | MEDLINE | ID: mdl-39321005

ABSTRACT

Optical coherence tomography angiography (OCTA) plays a crucial role in quantifying and analyzing retinal vascular diseases. However, the limited field of view (FOV) inherent in most commercial OCTA imaging systems poses a significant challenge for clinicians, restricting the ability to analyze larger retinal regions at high resolution. Automatic stitching of OCTA scans of adjacent regions may provide a promising solution to extend the region of interest. However, commonly used stitching algorithms face difficulties in achieving effective alignment due to the noise, artifacts and dense vasculature present in OCTA images. To address these challenges, we propose a novel retinal OCTA image stitching network, named MR2-Net, which integrates multi-scale representation learning and dynamic location guidance. In the first stage, an image registration network with progressive multi-resolution feature fusion is proposed to derive deep semantic information effectively. Additionally, we introduce a dynamic guidance strategy to locate the foveal avascular zone (FAZ) and constrain registration errors in overlapping vascular regions. In the second stage, an image fusion network based on multiple mask constraints and adjacent image aggregation (AIA) strategies is developed to further eliminate the artifacts in the overlapping areas of stitched images, thereby achieving precise vessel alignment. To validate the effectiveness of our method, we conduct a series of experiments on two delicately constructed datasets, i.e., OPTOVUE-OCTA and SVision-OCTA. Experimental results demonstrate that our method outperforms other image stitching methods and effectively generates high-quality wide-field OCTA images, achieving structural similarity index (SSIM) scores of 0.8264 and 0.8014 on the two datasets, respectively.
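The SSIM scores reported above compare the stitched output against a reference. A simplified single-window ("global") form of SSIM can be sketched as follows; the paper presumably uses the standard windowed variant, so this is only a minimal illustration of the metric itself:

```python
import numpy as np

def global_ssim(a, b, L=1.0):
    """Single-window SSIM between two images with intensities in
    [0, L]: compares luminance, contrast and structure in one shot.
    The usual metric averages this over small sliding windows."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))

rng = np.random.default_rng(0)
img = rng.random((32, 32))
assert np.isclose(global_ssim(img, img), 1.0)   # identical images -> 1
assert global_ssim(img, 1 - img) < 0.5          # inverted image -> low
```

SSIM near 1 indicates the stitched mosaic preserves vessel structure; misaligned overlaps depress it through the covariance term.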

7.
J Clin Med ; 12(4)2023 Feb 06.
Article in English | MEDLINE | ID: mdl-36835819

ABSTRACT

Diabetic peripheral neuropathy (DPN) is the leading cause of neuropathy worldwide resulting in excess morbidity and mortality. We aimed to develop an artificial intelligence deep learning algorithm to classify the presence or absence of peripheral neuropathy (PN) in participants with diabetes or pre-diabetes using corneal confocal microscopy (CCM) images of the sub-basal nerve plexus. A modified ResNet-50 model was trained to perform the binary classification of PN (PN+) versus no PN (PN-) based on the Toronto consensus criteria. A dataset of 279 participants (149 PN-, 130 PN+) was used to train (n = 200), validate (n = 18), and test (n = 61) the algorithm, utilizing one image per participant. The dataset consisted of participants with type 1 diabetes (n = 88), type 2 diabetes (n = 141), and pre-diabetes (n = 50). The algorithm was evaluated using diagnostic performance metrics and attribution-based methods (gradient-weighted class activation mapping (Grad-CAM) and Guided Grad-CAM). In detecting PN+, the AI-based DLA achieved a sensitivity of 0.91 (95%CI: 0.79-1.0), a specificity of 0.93 (95%CI: 0.83-1.0), and an area under the curve (AUC) of 0.95 (95%CI: 0.83-0.99). Our deep learning algorithm demonstrates excellent results for the diagnosis of PN using CCM. A large-scale prospective real-world study is required to validate its diagnostic efficacy prior to implementation in screening and diagnostic programmes.

8.
Transl Vis Sci Technol ; 12(5): 14, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37184500

ABSTRACT

Purpose: To evaluate a novel deep learning algorithm to distinguish between eyes that may or may not have a graft detachment based on pre-Descemet membrane endothelial keratoplasty (DMEK) anterior segment optical coherence tomography (AS-OCT) images. Methods: Retrospective cohort study. A multiple-instance learning artificial intelligence (MIL-AI) model using a ResNet-101 backbone was designed. AS-OCT images were split into training and testing sets. The MIL-AI model was trained and validated on the training set. Model performance and heatmaps were calculated from the testing set. Classification performance metrics included F1 score (harmonic mean of recall and precision), specificity, sensitivity, and area under the curve (AUC). Finally, MIL-AI performance was compared to manual classification by an experienced ophthalmologist. Results: In total, 9466 images of 74 eyes (128 images per eye) were included in the study. Images from 50 eyes were used to train and validate the MIL-AI system, while the remaining 24 eyes were used as the test set to determine its performance and generate heatmaps for visualization. The performance metrics on the test set (95% confidence interval) were as follows: F1 score, 0.77 (0.57-0.91); precision, 0.67 (0.44-0.88); specificity, 0.45 (0.15-0.75); sensitivity, 0.92 (0.73-1.00); and AUC, 0.63 (0.52-0.86). MIL-AI performance was more sensitive (92% vs. 31%) but less specific (45% vs. 64%) than the ophthalmologist's. Conclusions: The MIL-AI model predicts, with high sensitivity, which eyes may have post-DMEK graft detachment requiring rebubbling. Larger-scale clinical trials are warranted to validate the model. Translational Relevance: MIL-AI models represent an opportunity for implementation in routine DMEK suitability screening.


Subject(s)
Corneal Diseases , Deep Learning , Descemet Stripping Endothelial Keratoplasty , Humans , Endothelium, Corneal/transplantation , Tomography, Optical Coherence/methods , Retrospective Studies , Artificial Intelligence , Visual Acuity , Descemet Stripping Endothelial Keratoplasty/methods , Corneal Diseases/surgery
9.
IEEE Trans Med Imaging ; 42(2): 416-429, 2023 02.
Article in English | MEDLINE | ID: mdl-36044486

ABSTRACT

Glaucoma is a progressive eye disease that results in permanent vision loss, and the vertical cup-to-disc ratio (vCDR) in colour fundus images is essential in glaucoma screening and assessment. Previous fully supervised convolutional neural networks segment the optic disc (OD) and optic cup (OC) from colour fundus images and then calculate the vCDR offline. However, they rely on a large set of labeled masks for training, which is expensive and time-consuming to acquire. To address this, we propose a weakly and semi-supervised graph-based network that investigates geometric associations and domain knowledge between segmentation probability maps (PM), modified signed distance function representations (mSDF), and boundary region of interest characteristics (B-ROI) in three aspects. Firstly, we propose a novel Dual Adaptive Graph Convolutional Network (DAGCN) to reason about the long-range features of the PM and the mSDF w.r.t. the regional uniformity. Secondly, we propose a dual consistency regularization-based semi-supervised learning paradigm. The regional consistency between the PM and the mSDF, and the marginal consistency between the derived B-ROI from each of them, boost the proposed model's performance due to the inherent geometric associations. Thirdly, we exploit the task-specific domain knowledge via the oval shapes of the OD & OC, for which a differentiable vCDR estimating layer is proposed. Furthermore, without additional annotations, the supervision on vCDR serves as weak supervision for the segmentation tasks. Experiments on six large-scale datasets demonstrate our model's superior performance on OD & OC segmentation and vCDR estimation. The implementation code has been made available at https://github.com/smallmax00/Dual_Adaptive_Graph_Reasoning.
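The vCDR itself is a simple geometric quantity: the ratio of the cup's vertical extent to the disc's vertical extent. Computed offline from binary masks it reduces to a few lines (the paper's contribution is estimating it through a differentiable layer instead; this is only the plain definition):

```python
import numpy as np

def vertical_cdr(disc_mask, cup_mask):
    """Vertical cup-to-disc ratio from binary segmentation masks:
    vertical extent of the optic cup divided by that of the disc."""
    def vertical_extent(mask):
        rows = np.any(mask > 0, axis=1)      # rows containing the region
        idx = np.flatnonzero(rows)
        return idx[-1] - idx[0] + 1 if idx.size else 0
    disc_h = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc_h if disc_h else 0.0

# toy masks: disc spans 6 rows, cup spans 3 rows
disc = np.zeros((10, 10)); disc[2:8, 2:8] = 1
cup = np.zeros((10, 10)); cup[3:6, 3:6] = 1
assert vertical_cdr(disc, cup) == 0.5
```

A vCDR well above ~0.6 is a common red flag in glaucoma screening, which is why supervision on this single scalar can usefully constrain the two segmentation masks.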


Subject(s)
Glaucoma , Optic Disk , Humans , Optic Disk/diagnostic imaging , Glaucoma/diagnostic imaging , Fundus Oculi , Neural Networks, Computer , Diagnostic Techniques, Ophthalmological
10.
Front Med (Lausanne) ; 10: 1113030, 2023.
Article in English | MEDLINE | ID: mdl-37680621

ABSTRACT

Background: The automatic analysis of medical images has the potential to improve diagnostic accuracy while reducing the strain on clinicians. Current methods for analyzing 3D-like imaging data, such as computerized tomography imaging, often treat each image slice individually and so may fail to model the relationship between slices appropriately. Methods: Our proposed method utilizes a mixed-effects model within the deep learning framework to model the relationship between slices. We externally validated this method on a data set taken from a different country and compared our results against other proposed methods. We evaluated the discrimination, calibration, and clinical usefulness of our model using a range of measures. Finally, we carried out a sensitivity analysis to demonstrate our method's robustness to noise and missing data. Results: In the external geographic validation set our model showed excellent performance with an AUROC of 0.930 (95%CI: 0.914, 0.947), with sensitivity, specificity, PPV, and NPV of 0.778 (0.720, 0.828), 0.882 (0.853, 0.908), 0.744 (0.686, 0.797), and 0.900 (0.872, 0.924) at the 0.5 probability cut-off point. Our model also maintained good calibration in the external validation dataset, while other methods showed poor calibration. Conclusion: Deep learning can reduce stress on healthcare systems by automatically screening CT imaging for COVID-19. Our method showed improved generalizability in external validation compared to previously published methods. However, deep learning models must be robustly assessed using various performance measures and externally validated in each setting. In addition, best practice guidelines for developing and reporting predictive models are vital for the safe adoption of such models.
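The sensitivity, specificity, PPV and NPV reported at the 0.5 cut-off all follow directly from a 2x2 confusion table. A minimal sketch of that computation (toy labels and probabilities, purely for illustration):

```python
def binary_metrics(y_true, y_prob, cutoff=0.5):
    """Sensitivity, specificity, PPV and NPV of binary predictions
    thresholded at a probability cut-off."""
    pred = [int(p >= cutoff) for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, pred) if t == 1 and p == 0)
    return {"sensitivity": tp / (tp + fn),   # recall on the positives
            "specificity": tn / (tn + fp),   # recall on the negatives
            "ppv": tp / (tp + fp),           # precision
            "npv": tn / (tn + fn)}

m = binary_metrics([1, 1, 0, 0], [0.9, 0.4, 0.2, 0.6])
assert m["sensitivity"] == 0.5 and m["specificity"] == 0.5
```

Because PPV and NPV depend on prevalence, they can shift between the development and external validation populations even when sensitivity and specificity are stable, which is one reason external validation matters.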

11.
Med Image Anal ; 84: 102722, 2023 02.
Article in English | MEDLINE | ID: mdl-36574737

ABSTRACT

Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph-based (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with an arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far, with 7,768 slices from COVID-19 patients, and have used it to train a 2D segmentation model to segment the lungs from individual slices and mask the lungs as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice to automatically select a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis, benefiting from the graph convolution network's strength in tackling cross-granularity relationships. Experimental results on the three largest COVID-19 CT datasets demonstrated that our model can produce reliable and accurate COVID-19 predictions using CT volumes with any number of slices, outperforming existing approaches in terms of learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
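The slice-selection idea above — keep slices whose repeated predictions are confident and consistent — can be sketched roughly as follows. This is a hedged stand-in for the UC-MIL scoring (variance as uncertainty, distance from 0.5 as consensus), not the authors' exact formulation:

```python
import numpy as np

def select_reliable_slices(slice_preds, k):
    """Pick the k CT slices whose repeated predictions are most
    reliable: low variance across runs (uncertainty) and a mean far
    from the 0.5 coin-flip point (consensus).  The scoring rule is an
    illustrative assumption."""
    preds = np.asarray(slice_preds, dtype=float)   # (n_slices, n_runs)
    uncertainty = preds.var(axis=1)
    consensus = np.abs(preds.mean(axis=1) - 0.5)
    score = consensus - uncertainty                # higher = more reliable
    return np.argsort(-score)[:k]

runs = [[0.90, 0.92, 0.88],   # confident and consistent
        [0.50, 0.10, 0.90],   # wildly inconsistent
        [0.55, 0.45, 0.50]]   # consistent but uninformative
assert select_reliable_slices(runs, 1)[0] == 0
```

Selecting a fixed number of reliable slices gives the downstream graph model a constant-size input regardless of how many slices the original CT volume contains.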


Subject(s)
COVID-19 Testing , COVID-19 , Humans , Consensus , Uncertainty , COVID-19/diagnostic imaging , Tomography, X-Ray Computed
12.
J Empir Res Hum Res Ethics ; 17(3): 373-381, 2022 07.
Article in English | MEDLINE | ID: mdl-35068259

ABSTRACT

This study determined the effectiveness of three deidentification methods: a) a black box to obscure facial landmarks, b) a letterbox view to display restricted facial landmarks, and c) a half letterbox view. Facial images of well-known celebrities were used to create a series of decreasingly deidentified images, which were displayed to participants in a structured interview session. 55.5% of images were recognised when all facial features were covered using a black box, leaving only the hair and neck exposed. The letterbox view proved more effective, reaching over 50% recognition only once the periorbital region, eyebrows, and forehead were visible. The half letterbox was the most effective, requiring the nose to be revealed before recognition exceeded 50%, and should be the option of choice where appropriate. These findings provide valuable information for informed consent discussions, and we recommend that consent-to-publish forms stipulate the deidentification method that will be used.


Subject(s)
Confidentiality , Data Anonymization , Cross-Sectional Studies , Humans , Informed Consent , Pilot Projects , Publishing
13.
IEEE Trans Med Imaging ; 41(3): 690-701, 2022 03.
Article in English | MEDLINE | ID: mdl-34714742

ABSTRACT

Segmentation is a fundamental task in biomedical image analysis. Unlike the existing region-based dense pixel classification methods or boundary-based polygon regression methods, we build a novel graph neural network (GNN) based deep learning framework with multiple graph reasoning modules to explicitly leverage both region and boundary features in an end-to-end manner. The mechanism extracts discriminative region and boundary features, referred to as initialized region and boundary node embeddings, using a proposed Attention Enhancement Module (AEM). The weighted links between cross-domain nodes (region and boundary feature domains) in each graph are defined in a data-dependent way, which retains both global and local cross-node relationships. The iterative message aggregation and node update mechanism can enhance the interaction between each graph reasoning module's global semantic information and local spatial characteristics. Our model, in particular, is capable of concurrently addressing region and boundary feature reasoning and aggregation at several different feature levels due to the proposed multi-level feature node embeddings in different parallel graph reasoning modules. Experiments on two types of challenging datasets demonstrate that our method outperforms state-of-the-art approaches for segmentation of polyps in colonoscopy images and of the optic disc and optic cup in colour fundus images. The trained models will be made available at: https://github.com/smallmax00/Graph_Region_Boudnary.


Subject(s)
Neural Networks, Computer , Optic Disk , Fundus Oculi , Image Processing, Computer-Assisted , Semantics
14.
J Clin Med ; 11(20)2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36294519

ABSTRACT

Corneal confocal microscopy (CCM) is a rapid, non-invasive, in vivo ophthalmic imaging technique that images the cornea. Historically, it was utilised in the diagnosis and clinical management of corneal epithelial and stromal disorders. However, over the past 20 years, CCM has been increasingly used to image sub-basal small nerve fibres in a variety of peripheral neuropathies and central neurodegenerative diseases. CCM has been used to identify subclinical nerve damage and to predict the development of diabetic peripheral neuropathy (DPN). The complex structure of the corneal sub-basal nerve plexus can be readily analysed through nerve segmentation with manual or automated quantification of parameters such as corneal nerve fibre length (CNFL), nerve fibre density (CNFD), and nerve branch density (CNBD). Large quantities of 2D corneal nerve images lend themselves to the application of artificial intelligence (AI)-based deep learning algorithms (DLA). Indeed, DLA have demonstrated performance comparable to manual, but superior to automated, quantification of corneal nerve morphology. Recently, our end-to-end classification with a 3-class AI model demonstrated high sensitivity and specificity in differentiating healthy volunteers from people with and without peripheral neuropathy. We believe there is significant scope and need to apply AI to help differentiate between peripheral neuropathies and central neurodegenerative disorders. AI has significant potential to enhance the diagnostic and prognostic utility of CCM in the management of both peripheral and central neurodegenerative diseases.

15.
IEEE J Biomed Health Inform ; 24(10): 2776-2786, 2020 10.
Article in English | MEDLINE | ID: mdl-32750973

ABSTRACT

Fast and accurate diagnosis is essential for the efficient and effective control of the COVID-19 pandemic that is currently disrupting the whole world. Despite the prevalence of the COVID-19 outbreak, relatively few diagnostic images are openly available for developing automatic diagnosis algorithms. Traditional deep learning methods often struggle when data are highly unbalanced, with many cases in one class and only a few in another; new methods must be developed to overcome this challenge. We propose a novel activation function based on the generalized extreme value (GEV) distribution from extreme value theory, which improves performance over the traditional sigmoid activation function when one class significantly outweighs the other. We demonstrate the proposed activation function on a publicly available dataset and externally validate it on a dataset consisting of 1,909 healthy chest X-rays and 84 COVID-19 X-rays. The proposed method achieves an improved area under the receiver operating characteristic curve (DeLong's p-value < 0.05) compared to the sigmoid activation. Our method is also demonstrated on a dataset of healthy and pneumonia vs. COVID-19 X-rays and a set of computerized tomography images, achieving improved sensitivity. The proposed GEV activation function significantly improves upon the previously used sigmoid activation for binary classification. This new paradigm is expected to play a significant role in the fight against COVID-19 and other diseases for which relatively few training cases are available.
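The GEV activation replaces the symmetric sigmoid with the (asymmetric) GEV cumulative distribution function, exp(-(1 + xi*z)^(-1/xi)), which reduces to the Gumbel CDF exp(-exp(-z)) as the shape parameter xi approaches 0. A minimal sketch of the function itself (how the paper parameterises or learns xi is not detailed in the abstract, so treat the details as assumptions):

```python
import math

def gev_activation(z, xi=0.0):
    """GEV CDF used as an output activation for imbalanced binary
    classification.  xi = 0 gives the Gumbel case exp(-exp(-z));
    outside the distribution's support the CDF saturates at 0 or 1."""
    if xi == 0.0:
        return math.exp(-math.exp(-z))
    t = 1.0 + xi * z
    if t <= 0.0:                      # outside the support
        return 0.0 if xi > 0 else 1.0
    return math.exp(-t ** (-1.0 / xi))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# both map logits to (0, 1), but the GEV curve is asymmetric about z = 0
assert abs(gev_activation(0.0) - math.exp(-1.0)) < 1e-12   # about 0.368
assert abs(sigmoid(0.0) - 0.5) < 1e-12
```

The asymmetry is the point: near the decision boundary the GEV curve rises at different rates on each side, which can shift more of the model's resolving power toward the rare class than the symmetric sigmoid does.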


Subject(s)
Algorithms , Betacoronavirus , Clinical Laboratory Techniques/methods , Coronavirus Infections/diagnosis , Pandemics , Pneumonia, Viral/diagnosis , Bayes Theorem , COVID-19 , COVID-19 Testing , Clinical Laboratory Techniques/statistics & numerical data , Computational Biology , Coronavirus Infections/diagnostic imaging , Coronavirus Infections/epidemiology , Databases, Factual/statistics & numerical data , Deep Learning , Humans , Neural Networks, Computer , Pneumonia, Viral/diagnostic imaging , Pneumonia, Viral/epidemiology , Radiographic Image Interpretation, Computer-Assisted/methods , SARS-CoV-2 , Tomography, X-Ray Computed/statistics & numerical data