Results 1 - 12 of 12
1.
Malar J ; 22(1): 139, 2023 Apr 26.
Article in English | MEDLINE | ID: mdl-37101295

ABSTRACT

BACKGROUND: Cerebral malaria (CM) continues to present a major health challenge, particularly in sub-Saharan Africa. CM is associated with a characteristic malarial retinopathy (MR) with diagnostic and prognostic significance. Advances in retinal imaging have allowed researchers to better characterize the changes seen in MR and to make inferences about the pathophysiology of the disease. The study aimed to explore the role of retinal imaging in the diagnosis and prognostication of CM, to draw insights into the pathophysiology of CM from retinal imaging, and to identify future research directions. METHODS: The literature was systematically reviewed using the African Index Medicus, MEDLINE, Scopus and Web of Science databases. A total of 35 full texts were included in the final analysis. The descriptive nature and heterogeneity of the included studies precluded meta-analysis. RESULTS: Available research clearly shows that retinal imaging is useful both as a clinical tool for the assessment of CM and as a scientific instrument to aid understanding of the condition. Modalities which can be performed at the bedside, such as fundus photography and optical coherence tomography, are best positioned to take advantage of artificial intelligence-assisted image analysis, unlocking the clinical potential of retinal imaging for real-time diagnosis in low-resource environments where extensively trained clinicians may be few in number, and for guiding adjunctive therapies as they develop. CONCLUSIONS: Further research into retinal imaging technologies in CM is justified. In particular, co-ordinated interdisciplinary work shows promise in unpicking the pathophysiology of this complex disease.


Subjects
Cerebral Malaria, Retinal Diseases, Humans, Artificial Intelligence, Retina/diagnostic imaging, Retinal Diseases/diagnostic imaging, Optical Coherence Tomography/methods
2.
Diabetologia ; 65(3): 457-466, 2022 03.
Article in English | MEDLINE | ID: mdl-34806115

ABSTRACT

AIMS/HYPOTHESIS: We aimed to develop an artificial intelligence (AI)-based deep learning algorithm (DLA), applying attribution methods without image segmentation, to corneal confocal microscopy images in order to accurately classify peripheral neuropathy (or its absence). METHODS: The AI-based DLA utilised convolutional neural networks with data augmentation to increase the algorithm's generalisability. The algorithm was trained using a high-end graphics processor for 300 epochs on 329 corneal nerve images and tested on 40 images (1 image/participant). Participants consisted of healthy volunteer (HV) participants (n = 90) and participants with type 1 diabetes (n = 88), type 2 diabetes (n = 141) and prediabetes (n = 50) (defined as impaired fasting glucose, impaired glucose tolerance or a combination of both), and were classified into HV, those without neuropathy (PN-) (n = 149) and those with neuropathy (PN+) (n = 130). For the AI-based DLA, a modified residual neural network called ResNet-50 was developed and used to extract features from images and perform classification. The algorithm was tested on 40 participants (15 HV, 13 PN-, 12 PN+). The attribution methods gradient-weighted class activation mapping (Grad-CAM), Guided Grad-CAM and occlusion sensitivity displayed the areas within the image that had the greatest impact on the decision of the algorithm. RESULTS: The results were as follows: HV: recall of 1.0 (95% CI 1.0, 1.0), precision of 0.83 (95% CI 0.65, 1.0), F1-score of 0.91 (95% CI 0.79, 1.0); PN-: recall of 0.85 (95% CI 0.62, 1.0), precision of 0.92 (95% CI 0.73, 1.0), F1-score of 0.88 (95% CI 0.71, 1.0); PN+: recall of 0.83 (95% CI 0.58, 1.0), precision of 1.0 (95% CI 1.0, 1.0), F1-score of 0.91 (95% CI 0.74, 1.0). The features displayed by the attribution methods demonstrated more corneal nerves in HV images, a reduction in corneal nerves in PN- images and an absence of corneal nerves in PN+ images. CONCLUSIONS/INTERPRETATION: We demonstrate promising results in the rapid classification of peripheral neuropathy using a single corneal image. A large-scale multicentre validation study is required to assess the utility of the AI-based DLA in screening and diagnostic programmes for diabetic neuropathy.
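For illustration, a minimal Grad-CAM sketch in PyTorch for a classifier like the one described; the ResNet-50 with a three-way HV/PN-/PN+ head, the hooked layer, and the image tensor are assumptions for the sketch, not the authors' implementation:

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(num_classes=3)           # stand-in for the trained HV / PN- / PN+ classifier
model.eval()

features, grads = {}, {}
model.layer4.register_forward_hook(lambda m, i, o: features.update(maps=o))
model.layer4.register_full_backward_hook(lambda m, gi, go: grads.update(maps=go[0]))

image = torch.randn(1, 3, 224, 224)       # placeholder for a preprocessed CCM image
logits = model(image)
target = logits.argmax(dim=1).item()      # explain the predicted class
logits[0, target].backward()

weights = grads["maps"].mean(dim=(2, 3), keepdim=True)      # global-average-pooled gradients
cam = F.relu((weights * features["maps"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)    # normalise to [0, 1] for overlay
```

The normalised map can then be overlaid on the input image to show which corneal nerve regions drove the prediction.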


Subjects
Type 2 Diabetes Mellitus, Diabetic Neuropathies, Prediabetic State, Artificial Intelligence, Diabetic Neuropathies/diagnosis, Humans, Confocal Microscopy/methods, Prediabetic State/diagnosis
3.
Med Image Anal ; 95: 103183, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38692098

ABSTRACT

Automated segmentation is a challenging task in medical image analysis that usually requires a large amount of manually labeled data. However, most current supervised learning based algorithms suffer from insufficient manual annotations, posing a significant difficulty for accurate and robust segmentation. In addition, most current semi-supervised methods lack explicit representations of geometric structure and semantic information, restricting segmentation accuracy. In this work, we propose a hybrid framework to learn polygon vertices, region masks, and their boundaries in a weakly/semi-supervised manner that significantly advances geometric and semantic representations. Firstly, we propose multi-granularity learning of explicit geometric structure constraints via polygon vertices (PolyV) and pixel-wise region (PixelR) segmentation masks in a semi-supervised manner. Secondly, we propose eliminating boundary ambiguity by using an explicit contrastive objective to learn a discriminative feature space of boundary contours at the pixel level with limited annotations. Thirdly, we exploit the task-specific clinical domain knowledge to differentiate the clinical function assessment end-to-end. The ground truth of clinical function assessment, on the other hand, can serve as auxiliary weak supervision for PolyV and PixelR learning. We evaluate the proposed framework on two tasks, including optic disc (OD) and cup (OC) segmentation along with vertical cup-to-disc ratio (vCDR) estimation in fundus images; left ventricle (LV) segmentation at end-diastolic and end-systolic frames along with ejection fraction (LVEF) estimation in two-dimensional echocardiography images. Experiments on nine large-scale datasets of the two tasks under different label settings demonstrate our model's superior performance on segmentation and clinical function assessment.
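As a small illustration of the clinical-function output used in the fundus task, a sketch of vCDR computation from binary optic-disc and optic-cup masks; the tallest-column proxy for vertical diameter is an assumption made for simplicity, not the paper's exact definition:

```python
import numpy as np

def vertical_diameter(mask: np.ndarray) -> int:
    """Tallest column of foreground pixels, a simple proxy for vertical diameter."""
    return int(np.max(np.sum(mask > 0, axis=0)))

def vcdr(disc_mask: np.ndarray, cup_mask: np.ndarray) -> float:
    """Vertical cup-to-disc ratio from binary OD and OC masks of shape (H, W)."""
    disc = vertical_diameter(disc_mask)
    cup = vertical_diameter(cup_mask)
    return cup / disc if disc > 0 else float("nan")
```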


Subjects
Algorithms, Humans, Computer-Assisted Image Interpretation/methods, Echocardiography
4.
IEEE Trans Med Imaging ; 42(2): 416-429, 2023 02.
Article in English | MEDLINE | ID: mdl-36044486

ABSTRACT

Glaucoma is a progressive eye disease that results in permanent vision loss, and the vertical cup-to-disc ratio (vCDR) in colour fundus images is essential in glaucoma screening and assessment. Previous fully supervised convolutional neural networks segment the optic disc (OD) and optic cup (OC) from colour fundus images and then calculate the vCDR offline. However, they rely on a large set of labeled masks for training, which is expensive and time-consuming to acquire. To address this, we propose a weakly and semi-supervised graph-based network that investigates geometric associations and domain knowledge between segmentation probability maps (PM), modified signed distance function representations (mSDF), and boundary region of interest characteristics (B-ROI) in three aspects. Firstly, we propose a novel Dual Adaptive Graph Convolutional Network (DAGCN) to reason about the long-range features of the PM and the mSDF w.r.t. regional uniformity. Secondly, we propose a dual consistency regularization-based semi-supervised learning paradigm. The regional consistency between the PM and the mSDF, and the marginal consistency between the B-ROI derived from each of them, boost the proposed model's performance due to the inherent geometric associations. Thirdly, we exploit task-specific domain knowledge via the oval shapes of the OD and OC, for which a differentiable vCDR-estimating layer is proposed. Furthermore, without additional annotations, the supervision on vCDR serves as weak supervision for the segmentation tasks. Experiments on six large-scale datasets demonstrate our model's superior performance on OD and OC segmentation and vCDR estimation. The implementation code has been made available at https://github.com/smallmax00/Dual_Adaptive_Graph_Reasoning.
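For the differentiable vCDR layer mentioned above, one common relaxation computes soft column heights from the OD and OC probability maps and divides the resulting soft vertical diameters. The sketch below shows that idea only and is not necessarily the paper's exact layer:

```python
import torch
import torch.nn as nn

class SoftVCDR(nn.Module):
    """Differentiable vCDR from OD and OC probability maps of shape (B, 1, H, W)."""

    def __init__(self, eps: float = 1e-6):
        super().__init__()
        self.eps = eps

    def soft_vertical_diameter(self, prob: torch.Tensor) -> torch.Tensor:
        col_height = prob.sum(dim=2)            # (B, 1, W): soft foreground count per column
        diameter, _ = col_height.max(dim=2)     # (B, 1): tallest column as the vertical diameter
        return diameter.squeeze(1)              # (B,)

    def forward(self, od_prob: torch.Tensor, oc_prob: torch.Tensor) -> torch.Tensor:
        d_disc = self.soft_vertical_diameter(od_prob)
        d_cup = self.soft_vertical_diameter(oc_prob)
        return d_cup / (d_disc + self.eps)      # (B,) vCDR estimates, differentiable end-to-end
```

Because every operation is differentiable, a loss between the predicted and clinical vCDR can back-propagate into the segmentation branch even for images without mask annotations, which is one way the weak supervision described above could be wired in.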


Subjects
Glaucoma, Optic Disc, Humans, Optic Disc/diagnostic imaging, Glaucoma/diagnostic imaging, Fundus Oculi, Neural Networks (Computer), Ophthalmic Diagnostic Techniques
5.
J Clin Med ; 12(4)2023 Feb 06.
Article in English | MEDLINE | ID: mdl-36835819

ABSTRACT

Diabetic peripheral neuropathy (DPN) is the leading cause of neuropathy worldwide, resulting in excess morbidity and mortality. We aimed to develop an artificial intelligence deep learning algorithm to classify the presence or absence of peripheral neuropathy (PN) in participants with diabetes or pre-diabetes using corneal confocal microscopy (CCM) images of the sub-basal nerve plexus. A modified ResNet-50 model was trained to perform the binary classification of PN (PN+) versus no PN (PN-) based on the Toronto consensus criteria. A dataset of 279 participants (149 PN-, 130 PN+) was used to train (n = 200), validate (n = 18), and test (n = 61) the algorithm, utilizing one image per participant. The dataset consisted of participants with type 1 diabetes (n = 88), type 2 diabetes (n = 141), and pre-diabetes (n = 50). The algorithm was evaluated using diagnostic performance metrics and attribution-based methods (gradient-weighted class activation mapping (Grad-CAM) and Guided Grad-CAM). In detecting PN+, the deep learning algorithm achieved a sensitivity of 0.91 (95% CI: 0.79-1.0), a specificity of 0.93 (95% CI: 0.83-1.0), and an area under the curve (AUC) of 0.95 (95% CI: 0.83-0.99). Our deep learning algorithm demonstrates excellent results for the diagnosis of PN using CCM. A large-scale prospective real-world study is required to validate its diagnostic efficacy prior to implementation in screening and diagnostic programmes.
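A minimal sketch of adapting a ResNet-50 to the binary PN+/PN- task; the single-channel input layer, pretrained weights, and optimiser settings are assumptions for illustration (a recent torchvision is assumed), not the authors' exact modifications:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

model = resnet50(weights=ResNet50_Weights.DEFAULT)   # ImageNet-pretrained backbone
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # greyscale CCM input
model.fc = nn.Linear(model.fc.in_features, 2)        # PN- vs PN+ head

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```

Replicating the greyscale image across three channels instead of replacing conv1 is an equally common choice that keeps the pretrained first-layer weights.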

6.
Transl Vis Sci Technol ; 12(5): 14, 2023 05 01.
Article in English | MEDLINE | ID: mdl-37184500

ABSTRACT

Purpose: To evaluate a novel deep learning algorithm to distinguish between eyes that may or may not have a graft detachment based on pre-Descemet membrane endothelial keratoplasty (DMEK) anterior segment optical coherence tomography (AS-OCT) images. Methods: Retrospective cohort study. A multiple-instance learning artificial intelligence (MIL-AI) model using a ResNet-101 backbone was designed. AS-OCT images were split into training and testing sets. The MIL-AI model was trained and validated on the training set. Model performance and heatmaps were calculated from the testing set. Classification performance metrics included F1 score (harmonic mean of recall and precision), specificity, sensitivity, and area under curve (AUC). Finally, MIL-AI performance was compared to manual classification by an experienced ophthalmologist. Results: In total, 9466 images of 74 eyes (128 images per eye) were included in the study. Images from 50 eyes were used to train and validate the MIL-AI system, while the remaining 24 eyes were used as the test set to determine its performance and generate heatmaps for visualization. The performance metrics on the test set (95% confidence interval) were as follows: F1 score, 0.77 (0.57-0.91); precision, 0.67 (0.44-0.88); specificity, 0.45 (0.15-0.75); sensitivity, 0.92 (0.73-1.00); and AUC, 0.63 (0.52-0.86). MIL-AI performance was more sensitive (92% vs. 31%) but less specific (45% vs. 64%) than the ophthalmologist's performance. Conclusions: The MIL-AI predicts with high sensitivity the eyes that may have post-DMEK graft detachment requiring rebubbling. Larger-scale clinical trials are warranted to validate the model. Translational Relevance: MIL-AI models represent an opportunity for implementation in routine DMEK suitability screening.
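A sketch of one common multiple-instance aggregation head (attention-based MIL pooling, Ilse et al., 2018) over per-slice features of one eye; the ResNet-101 feature dimension and the attention head are assumptions and may differ from the MIL-AI model described above:

```python
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    """Aggregate per-slice features of one eye into a single detachment logit."""

    def __init__(self, feat_dim: int = 2048, hidden: int = 256):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, instance_feats: torch.Tensor):
        # instance_feats: (n_slices, feat_dim) backbone features of one eye's AS-OCT slices
        scores = self.attention(instance_feats)                # (n_slices, 1)
        weights = torch.softmax(scores, dim=0)                 # attention over slices
        bag_feat = (weights * instance_feats).sum(dim=0)       # (feat_dim,) bag representation
        return self.classifier(bag_feat), weights.squeeze(1)   # eye-level logit + slice weights

head = AttentionMILHead()
slice_features = torch.randn(128, 2048)     # placeholder ResNet-101 features for one eye
logit, slice_weights = head(slice_features)
```

The per-slice attention weights also give a simple way to visualise which slices drove the eye-level decision, analogous to the heatmaps reported above.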


Subjects
Corneal Diseases, Deep Learning, Descemet Stripping Endothelial Keratoplasty, Humans, Corneal Endothelium/transplantation, Optical Coherence Tomography/methods, Retrospective Studies, Artificial Intelligence, Visual Acuity, Descemet Stripping Endothelial Keratoplasty/methods, Corneal Diseases/surgery
7.
Med Image Anal ; 84: 102722, 2023 02.
Article in English | MEDLINE | ID: mdl-36574737

ABSTRACT

Coronavirus disease (COVID-19) has caused a worldwide pandemic, putting millions of people's health and lives in jeopardy. Detecting infected patients early on chest computed tomography (CT) is critical in combating COVID-19. Harnessing uncertainty-aware consensus-assisted multiple instance learning (UC-MIL), we propose to diagnose COVID-19 using a new bilateral adaptive graph-based (BA-GCN) model that can use both 2D and 3D discriminative information in 3D CT volumes with an arbitrary number of slices. Given the importance of lung segmentation for this task, we have created the largest manual annotation dataset so far, with 7,768 slices from COVID-19 patients, and have used it to train a 2D segmentation model to segment the lungs from individual slices and mask the lungs as the regions of interest for the subsequent analyses. We then used the UC-MIL model to estimate the uncertainty of each prediction and the consensus between multiple predictions on each CT slice, in order to automatically select a fixed number of CT slices with reliable predictions for the subsequent model reasoning. Finally, we adaptively constructed a BA-GCN with vertices from different granularity levels (2D and 3D) to aggregate multi-level features for the final diagnosis, benefiting from the graph convolutional network's ability to tackle cross-granularity relationships. Experimental results on the three largest COVID-19 CT datasets demonstrated that our model can produce reliable and accurate COVID-19 predictions using CT volumes with any number of slices, outperforming existing approaches in terms of learning and generalisation ability. To promote reproducible research, we have made the datasets, including the manual annotations and the cleaned CT dataset, as well as the implementation code, available at https://doi.org/10.5281/zenodo.6361963.
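As a simplified illustration of uncertainty- and consensus-driven slice selection, the sketch below scores each CT slice from repeated stochastic predictions (for example Monte Carlo dropout passes) and keeps the most reliable ones; the scoring rule and function names are illustrative, not the UC-MIL formulation:

```python
import numpy as np

def select_reliable_slices(slice_probs: np.ndarray, k: int = 16) -> np.ndarray:
    """slice_probs: (n_slices, n_samples) COVID probabilities from repeated stochastic passes."""
    eps = 1e-8
    mean_p = slice_probs.mean(axis=1)
    # Predictive entropy: low when the averaged prediction is confident.
    entropy = -(mean_p * np.log(mean_p + eps) + (1 - mean_p) * np.log(1 - mean_p + eps))
    # Consensus: high when the repeated predictions for a slice agree with each other.
    consensus = 1.0 - slice_probs.std(axis=1)
    reliability = consensus - entropy                 # simple combined reliability score
    return np.argsort(reliability)[::-1][:k]          # indices of the k most reliable slices
```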


Subjects
COVID-19 Testing, COVID-19, Humans, Consensus, Uncertainty, COVID-19/diagnostic imaging, X-Ray Computed Tomography
8.
Front Med (Lausanne) ; 10: 1113030, 2023.
Article in English | MEDLINE | ID: mdl-37680621

ABSTRACT

Background: The automatic analysis of medical images has the potential to improve diagnostic accuracy while reducing the strain on clinicians. Current methods for analyzing 3D-like imaging data, such as computerized tomography imaging, often treat each image slice independently, which may fail to appropriately model the relationship between slices. Methods: Our proposed method utilizes a mixed-effects model within the deep learning framework to model the relationship between slices. We externally validated this method on a data set taken from a different country and compared our results against other proposed methods. We evaluated the discrimination, calibration, and clinical usefulness of our model using a range of measures. Finally, we carried out a sensitivity analysis to demonstrate our method's robustness to noise and missing data. Results: In the external geographic validation set, our model showed excellent performance, with an AUROC of 0.930 (95% CI: 0.914, 0.947) and a sensitivity, specificity, PPV, and NPV of 0.778 (0.720, 0.828), 0.882 (0.853, 0.908), 0.744 (0.686, 0.797), and 0.900 (0.872, 0.924), respectively, at the 0.5 probability cut-off point. Our model also maintained good calibration in the external validation dataset, while other methods showed poor calibration. Conclusion: Deep learning can reduce stress on healthcare systems by automatically screening CT imaging for COVID-19. Our method showed improved generalizability in external validation compared to previously published methods. However, deep learning models must be robustly assessed using various performance measures and externally validated in each setting. In addition, best practice guidelines for developing and reporting predictive models are vital for the safe adoption of such models.
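For reference, the threshold-based metrics quoted above can be computed as follows; variable names are illustrative and the function assumes both classes are present:

```python
import numpy as np

def threshold_metrics(y_true: np.ndarray, y_prob: np.ndarray, cutoff: float = 0.5) -> dict:
    """Sensitivity, specificity, PPV, and NPV at a fixed probability cut-off."""
    y_pred = (y_prob >= cutoff).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }
```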

9.
J Empir Res Hum Res Ethics ; 17(3): 373-381, 2022 07.
Article in English | MEDLINE | ID: mdl-35068259

ABSTRACT

This study determined the effectiveness of three deidentification methods: use of (a) a black box to obscure facial landmarks, (b) a letterbox view to display restricted facial landmarks, and (c) a half letterbox view. Facial images of well-known celebrities were used to create a series of decreasingly deidentified images, which were displayed to participants in a structured interview session. Overall, 55.5% of images were recognised when all facial features were covered using a black box, leaving only the hair and neck exposed. The letterbox view proved more effective, reaching over 50% recognition only once the periorbital region, eyebrows, and forehead were visible. The half letterbox was the most effective, requiring the nose to be revealed before recognition exceeded 50%, and should be the option of choice where appropriate. These findings provide valuable information for informed consent discussions, and we recommend that consent-to-publish forms stipulate the deidentification method that will be used.


Subjects
Confidentiality, Data Anonymization, Cross-Sectional Studies, Humans, Informed Consent, Pilot Projects, Publishing
10.
IEEE Trans Med Imaging ; 41(3): 690-701, 2022 03.
Article in English | MEDLINE | ID: mdl-34714742

ABSTRACT

Segmentation is a fundamental task in biomedical image analysis. Unlike the existing region-based dense pixel classification methods or boundary-based polygon regression methods, we build a novel graph neural network (GNN) based deep learning framework with multiple graph reasoning modules to explicitly leverage both region and boundary features in an end-to-end manner. The mechanism extracts discriminative region and boundary features, referred to as initialized region and boundary node embeddings, using a proposed Attention Enhancement Module (AEM). The weighted links between cross-domain nodes (region and boundary feature domains) in each graph are defined in a data-dependent way, which retains both global and local cross-node relationships. The iterative message aggregation and node update mechanism can enhance the interaction between each graph reasoning module's global semantic information and local spatial characteristics. Our model, in particular, is capable of concurrently addressing region and boundary feature reasoning and aggregation at several different feature levels due to the proposed multi-level feature node embeddings in different parallel graph reasoning modules. Experiments on two types of challenging datasets demonstrate that our method outperforms state-of-the-art approaches for segmentation of polyps in colonoscopy images and of the optic disc and optic cup in colour fundus images. The trained models will be made available at: https://github.com/smallmax00/Graph_Region_Boudnary.
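A generic sketch of one data-dependent graph-reasoning step over region and boundary node embeddings, intended only to illustrate the cross-node message-passing pattern; the paper's Attention Enhancement Module and multi-level graph modules are more elaborate:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGraphReasoning(nn.Module):
    """One message-passing step over concatenated region and boundary node embeddings."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (N, dim) region and boundary node embeddings
        sim = nodes @ nodes.t()                                  # data-dependent pairwise affinity
        adj = F.softmax(sim / nodes.shape[1] ** 0.5, dim=-1)     # row-normalised soft adjacency
        messages = adj @ nodes                                   # aggregate neighbour information
        return nodes + F.relu(self.proj(messages))               # residual node update
```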


Subjects
Neural Networks (Computer), Optic Disc, Fundus Oculi, Computer-Assisted Image Processing, Semantics
11.
J Clin Med ; 11(20)2022 Oct 20.
Article in English | MEDLINE | ID: mdl-36294519

ABSTRACT

Corneal confocal microscopy (CCM) is a rapid, non-invasive, in vivo ophthalmic imaging technique that images the cornea. Historically, it was utilised in the diagnosis and clinical management of corneal epithelial and stromal disorders. However, over the past 20 years, CCM has been increasingly used to image sub-basal small nerve fibres in a variety of peripheral neuropathies and central neurodegenerative diseases. CCM has been used to identify subclinical nerve damage and to predict the development of diabetic peripheral neuropathy (DPN). The complex structure of the corneal sub-basal nerve plexus can be readily analysed through nerve segmentation with manual or automated quantification of parameters such as corneal nerve fibre length (CNFL), nerve fibre density (CNFD), and nerve branch density (CNBD). Large quantities of 2D corneal nerve images lend themselves to the application of artificial intelligence (AI)-based deep learning algorithms (DLA). Indeed, DLAs have demonstrated performance comparable to manual quantification and superior to automated quantification of corneal nerve morphology. Recently, our end-to-end classification with a 3-class AI model demonstrated high sensitivity and specificity in differentiating healthy volunteers from people with and without peripheral neuropathy. We believe there is significant scope and need to apply AI to help differentiate between peripheral neuropathies and also central neurodegenerative disorders. AI has significant potential to enhance the diagnostic and prognostic utility of CCM in the management of both peripheral and central neurodegenerative diseases.
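For illustration, a rough sketch of the automated quantification mentioned above, computing CNFL, CNFD, and CNBD proxies from a binary nerve segmentation of a single CCM frame; the frame area and pixel size are assumed values, and the skeleton-based segment and branch-point counts are crude stand-ins for validated nerve-tracing software:

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.measure import label
from skimage.morphology import skeletonize

def nerve_metrics(nerve_mask: np.ndarray, frame_area_mm2: float = 0.16, px_mm: float = 0.001) -> dict:
    """Crude CNFL / CNFD / CNBD proxies from a binary nerve mask (H, W)."""
    skel = skeletonize(nerve_mask > 0)
    cnfl = skel.sum() * px_mm / frame_area_mm2                   # total fibre length (mm) per mm^2
    cnfd = label(skel, connectivity=2).max() / frame_area_mm2    # connected fibre segments per mm^2
    neighbours = convolve(skel.astype(int), np.ones((3, 3), int), mode="constant") - skel
    cnbd = np.sum(skel & (neighbours >= 3)) / frame_area_mm2     # branch-point count per mm^2
    return {"CNFL": cnfl, "CNFD": cnfd, "CNBD": cnbd}
```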

12.
IEEE J Biomed Health Inform ; 24(10): 2776-2786, 2020 10.
Article in English | MEDLINE | ID: mdl-32750973

ABSTRACT

Fast and accurate diagnosis is essential for the efficient and effective control of the COVID-19 pandemic that is currently disrupting the whole world. Despite the prevalence of the COVID-19 outbreak, relatively few diagnostic images are openly available to develop automatic diagnosis algorithms. Traditional deep learning methods often struggle when data is highly unbalanced, with many cases in one class and only a few cases in another; new methods must be developed to overcome this challenge. We propose a novel activation function based on the generalized extreme value (GEV) distribution from extreme value theory, which improves performance over the traditional sigmoid activation function when one class significantly outweighs the other. We demonstrate the proposed activation function on a publicly available dataset and externally validate it on a dataset consisting of 1,909 healthy chest X-rays and 84 COVID-19 X-rays. The proposed method achieves an improved area under the receiver operating characteristic curve (DeLong's p-value < 0.05) compared to the sigmoid activation. Our method is also demonstrated on a dataset of healthy and pneumonia vs. COVID-19 X-rays and a set of computerized tomography images, achieving improved sensitivity. The proposed GEV activation function significantly improves upon the previously used sigmoid activation for binary classification. This new paradigm is expected to play a significant role in the fight against COVID-19 and other diseases for which relatively few training cases are available.
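A sketch of a GEV-CDF output activation with learnable location, scale, and shape parameters, used as a drop-in replacement for the sigmoid on a single logit; the parameterisation is an assumption and the Gumbel limiting case (shape approaching zero) is not handled separately:

```python
import torch
import torch.nn as nn

class GEVActivation(nn.Module):
    """Maps a real-valued logit to (0, 1) via the GEV cumulative distribution function."""

    def __init__(self):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(1))          # location
        self.log_sigma = nn.Parameter(torch.zeros(1))   # log scale, keeps sigma > 0
        self.xi = nn.Parameter(torch.full((1,), 0.1))   # shape, initialised away from zero

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        sigma = self.log_sigma.exp()
        z = 1.0 + self.xi * (x - self.mu) / sigma
        z = torch.clamp(z, min=1e-6)                    # stay inside the GEV support
        return torch.exp(-z.pow(-1.0 / self.xi))        # GEV CDF, a probability in (0, 1)

# Usage after the network's final linear layer producing one logit per image:
# prob_covid = GEVActivation()(logit)
```

The asymmetry of the GEV CDF (unlike the symmetric sigmoid) is what gives the minority class more room in the output space when the classes are heavily imbalanced.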


Subjects
Algorithms, Betacoronavirus, Clinical Laboratory Techniques/methods, Coronavirus Infections/diagnosis, Pandemics, Viral Pneumonia/diagnosis, Bayes Theorem, COVID-19, COVID-19 Testing, Clinical Laboratory Techniques/statistics & numerical data, Computational Biology, Coronavirus Infections/diagnostic imaging, Coronavirus Infections/epidemiology, Factual Databases/statistics & numerical data, Deep Learning, Humans, Neural Networks (Computer), Viral Pneumonia/diagnostic imaging, Viral Pneumonia/epidemiology, Computer-Assisted Radiographic Image Interpretation/methods, SARS-CoV-2, X-Ray Computed Tomography/statistics & numerical data