Results 1 - 7 of 7
1.
Comput Biol Med ; 172: 108246, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38471350

ABSTRACT

Diabetic retinopathy (DR) is a severe ocular complication of diabetes that can lead to vision damage and even blindness. Traditional deep convolutional neural networks (CNNs) used for DR grading face two primary challenges: (1) insensitivity to minority classes due to imbalanced data distribution, and (2) neglect of the relationship between the left and right eyes, since training uses the fundus image of only one eye without differentiating between them. To tackle these challenges, we propose the DRGCNN (DR Grading CNN) model. To address the imbalanced data distribution, our model adopts a more balanced strategy by allocating an equal number of channels to the feature maps representing each DR category. Furthermore, we introduce a CAM-EfficientNetV2-M encoder dedicated to encoding input retinal fundus images for feature vector generation. Our encoder has 52.88 M parameters, fewer than RegNet_y_16gf (80.57 M) and EfficientNetB7 (63.79 M), yet achieves a higher kappa value. Additionally, to take advantage of the binocular relationship, we input fundus images from both of a patient's eyes into the network for feature fusion during training. We achieve a kappa value of 86.62% on the EyePACS dataset and 86.16% on the Messidor-2 dataset. Experimental results on these representative DR datasets demonstrate the exceptional performance of our DRGCNN model, establishing it as a highly competitive intelligent classification model in the field. The code is available at https://github.com/Fat-Hai/DRGCNN.


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnostic imaging; Neural Networks, Computer; Fundus Oculi
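The kappa scores this abstract reports on EyePACS are conventionally the quadratic weighted variant, which penalizes grading errors by the squared distance between classes; the abstract does not state the weighting, so this NumPy sketch of quadratic weighted kappa is an assumption:

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_classes=5):
    """Quadratic weighted kappa between two integer gradings in 0..n_classes-1."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    O = np.zeros((n_classes, n_classes))          # observed confusion matrix
    for t, p in zip(y_true, y_pred):
        O[t, p] += 1
    idx = np.arange(n_classes)
    # quadratic penalty grows with distance between assigned grades
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_classes - 1) ** 2
    # agreement expected by chance, from the marginal grade distributions
    E = np.outer(O.sum(axis=1), O.sum(axis=0)) / O.sum()
    return 1.0 - (W * O).sum() / (W * E).sum()

# Perfect agreement yields kappa = 1.0
print(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]))
```

A plain accuracy metric would treat grading a severe case as mild the same as an off-by-one error; the quadratic weighting is what makes kappa the standard choice for ordinal DR grades.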
2.
J Invest Dermatol ; 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909840

ABSTRACT

Precise evaluation of repigmentation in vitiligo patients is crucial for monitoring treatment efficacy and enhancing patient satisfaction. This study aimed to develop a computer-aided system for assessing repigmentation rates in vitiligo patients, providing valuable insights for clinical practice. A retrospective study was conducted at the Dermatology Department of Shenzhen People's Hospital between June 2019 and November 2022. Pre- and post-treatment images of vitiligo lesions under Wood's lamp were collected from 833 participants stratified by sex, age, and pigmentation pattern. Our results demonstrated that the marginal pigmentation pattern exhibited a higher repigmentation rate of 72% compared with the central non-follicular pattern at 45%. Males had a slightly higher average repigmentation rate (0.37) than females (0.33). Among age groups, individuals aged 0-20 years showed the highest average repigmentation rate at 0.41, while the oldest age group (61-80 years) displayed the lowest rate at 0.25. Analysis of multiple visits identified the marginal pattern as the most prevalent (60%), with a mean repigmentation rate of 40%. This study introduced a computational system for evaluating vitiligo repigmentation rates, improving our understanding of patient responses and ultimately contributing to better clinical care.
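The abstract does not give the formula behind its repigmentation rates; a plausible formulation, stated here purely as an assumption, is the fraction of the baseline depigmented area that is no longer depigmented after treatment, computed from segmented lesion masks:

```python
import numpy as np

def repigmentation_rate(pre_mask, post_mask):
    """Hypothetical repigmentation rate: fraction of the baseline lesion
    area that has repigmented. pre_mask/post_mask are boolean arrays where
    True marks depigmented skin segmented from Wood's-lamp images."""
    pre_area = pre_mask.sum()
    if pre_area == 0:
        return 0.0  # no lesion at baseline; nothing to repigment
    return 1.0 - post_mask.sum() / pre_area

pre = np.zeros((10, 10), dtype=bool); pre[2:8, 2:8] = True    # 36-px lesion
post = np.zeros((10, 10), dtype=bool); post[4:7, 4:7] = True  # 9 px remain
print(round(repigmentation_rate(pre, post), 2))  # 0.75
```

The actual system presumably also registers the pre- and post-treatment images before comparing masks; that step is omitted here.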

3.
Sci Rep ; 14(1): 19285, 2024 08 20.
Article in English | MEDLINE | ID: mdl-39164445

ABSTRACT

Age-related macular degeneration (AMD) and diabetic macular edema (DME) are significant causes of blindness worldwide, and their prevalence is steadily increasing as populations age. Early diagnosis and prevention are therefore crucial for effective treatment. Classification of macular OCT images is a widely used method for assessing retinal lesions. However, OCT image classification faces two main challenges: incomplete image feature extraction and lack of prominence of important positional features. To address these challenges, we propose a deep learning neural network model called MSA-Net, which incorporates our multi-scale architecture and a spatial attention mechanism. The multi-scale architecture is based on depthwise separable convolution, which ensures comprehensive feature extraction across multiple scales while minimizing the growth of model parameters. The spatial attention mechanism aims to highlight important positional features in the images, emphasizing the representation of macular-region features in OCT images. We test MSA-Net on the NEH dataset and the UCSD dataset, performing three-class (CNV, DRUSEN, and NORMAL) and four-class (CNV, DRUSEN, DME, and NORMAL) classification tasks. On the NEH dataset, the accuracy, sensitivity, and specificity are 98.1%, 97.9%, and 98.0%, respectively. After fine-tuning on the UCSD dataset, the accuracy, sensitivity, and specificity are 96.7%, 96.7%, and 98.9%, respectively. Experimental results demonstrate the excellent classification performance and generalization ability of our model compared with previous models and recent well-known OCT classification models, establishing it as a highly competitive intelligent classification approach in the field of macular degeneration.


Subjects
Deep Learning; Macular Degeneration; Neural Networks, Computer; Tomography, Optical Coherence; Humans; Macular Degeneration/diagnostic imaging; Macular Degeneration/classification; Macular Degeneration/pathology; Tomography, Optical Coherence/methods; Macular Edema/diagnostic imaging; Macular Edema/classification; Macular Edema/pathology; Diabetic Retinopathy/diagnostic imaging; Diabetic Retinopathy/classification; Diabetic Retinopathy/pathology; Diabetic Retinopathy/diagnosis; Image Processing, Computer-Assisted/methods
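The abstract's claim that depthwise separable convolution minimizes parameter growth follows from a simple count: a standard k×k convolution couples every input channel to every output channel, while the separable form factorizes this into a per-channel spatial filter plus a 1×1 pointwise mix. A small sketch of the arithmetic (bias terms omitted):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution: k*k*c_in per output filter."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """Depthwise k x k conv (one spatial filter per input channel)
    followed by a 1 x 1 pointwise conv mixing channels."""
    return k * k * c_in + c_in * c_out

# Example layer: 3x3 kernel, 64 input channels, 128 output channels
print(conv_params(3, 64, 128))                 # 73728
print(depthwise_separable_params(3, 64, 128))  # 8768
```

The ratio is roughly 1/c_out + 1/k², so a multi-scale architecture can afford several parallel kernel sizes without the parameter blow-up a standard convolution would incur.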
4.
Sci Rep ; 14(1): 11588, 2024 05 21.
Article in English | MEDLINE | ID: mdl-38773207

ABSTRACT

Current assessment methods for diabetic foot ulcers (DFUs) lack objectivity and consistency, posing a significant risk to diabetes patients, including the potential for amputations. This highlights the urgent need for improved diagnostic tools and care standards in the field. To address this issue, this study developed and evaluated the Smart Diabetic Foot Ulcer Scoring System, ScoreDFUNet, which incorporates artificial intelligence (AI) and image analysis techniques to enhance the precision and consistency of DFU assessment. ScoreDFUNet precisely categorizes DFU images into "ulcer," "infection," "normal," and "gangrene" areas, achieving a noteworthy accuracy of 95.34% on the test set, with high precision, recall, and F1 scores. Comparative evaluations with dermatologists confirm that our algorithm consistently surpasses the performance of junior and mid-level dermatologists and closely matches the assessments of senior dermatologists, and rigorous analyses, including Bland-Altman plots and significance testing, validate its robustness and reliability. This innovative AI system presents a valuable tool for healthcare professionals and can significantly improve care standards in DFU assessment.


Subjects
Algorithms; Artificial Intelligence; Diabetic Foot; Diabetic Foot/diagnosis; Diabetic Foot/pathology; Humans; Reproducibility of Results; Image Processing, Computer-Assisted/methods; Severity of Illness Index
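The accuracy, precision, recall, and F1 figures the abstract cites all derive from one multi-class confusion matrix; as a generic illustration (not the paper's own evaluation code), the per-class metrics can be computed like this:

```python
import numpy as np

def per_class_metrics(cm):
    """Accuracy plus per-class precision, recall, and F1 from a confusion
    matrix where cm[i, j] counts true class i predicted as class j."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    precision = tp / cm.sum(axis=0)   # correct / all predicted as class
    recall = tp / cm.sum(axis=1)      # correct / all truly in class
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = tp.sum() / cm.sum()
    return accuracy, precision, recall, f1

# Toy 2-class matrix: 50 true/0 missed for class 0; 10 missed for class 1
cm = [[50, 0], [10, 40]]
acc, p, r, f1 = per_class_metrics(cm)
print(round(acc, 2))  # 0.9
```

For a four-class problem such as ulcer/infection/normal/gangrene, these per-class values are typically macro-averaged into the single figures reported.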
5.
Med Image Anal ; 92: 103061, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38086235

ABSTRACT

The Segment Anything Model (SAM) is the first foundation model for general image segmentation and has achieved impressive results on various natural image segmentation tasks. However, medical image segmentation (MIS) is more challenging because of complex modalities, fine anatomical structures, uncertain and complex object boundaries, and wide-ranging object scales. To fully validate SAM's performance on medical data, we collected and sorted 53 open-source datasets and built a large medical segmentation dataset with 18 modalities, 84 objects, 125 object-modality paired targets, 1050K 2D images, and 6033K masks. We comprehensively analyzed different models and strategies on this COSMOS 1050K dataset. Our main findings are as follows: (1) SAM showed remarkable performance on some specific objects but was unstable, imperfect, or even failed entirely in other situations. (2) SAM with the large ViT-H showed better overall performance than with the small ViT-B. (3) SAM performed better with manual hints, especially boxes, than in Everything mode. (4) SAM could assist human annotation with high labeling quality and less time. (5) SAM was sensitive to randomness in the center-point and tight-box prompts and may suffer a serious performance drop. (6) SAM performed better than interactive methods given one or a few points, but was outpaced as the number of points increased. (7) SAM's performance correlated with different factors, including boundary complexity and intensity differences. (8) Fine-tuning SAM on specific medical tasks could improve its average Dice performance by 4.39% and 6.68% for ViT-B and ViT-H, respectively. Codes and models are available at: https://github.com/yuhoo0302/Segment-Anything-Model-for-Medical-Images. We hope that this comprehensive report can help researchers explore the potential of SAM applications in MIS and guide how to appropriately use and develop SAM.


Subjects
Diagnostic Imaging; Image Processing, Computer-Assisted; Humans; Image Processing, Computer-Assisted/methods
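The Dice improvements in finding (8) refer to the standard Dice similarity coefficient, twice the overlap of prediction and ground truth divided by their combined size. A minimal NumPy sketch for binary masks:

```python
import numpy as np

def dice(pred, gt, eps=1e-7):
    """Dice similarity coefficient between two binary masks.
    eps guards against division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)

a = np.zeros((8, 8), dtype=bool); a[:4] = True   # top half predicted
b = np.zeros((8, 8), dtype=bool); b[2:6] = True  # middle band ground truth
print(round(dice(a, b), 2))  # 0.5
```

Unlike pixel accuracy, Dice is insensitive to the large background area that dominates most medical images, which is why it is the default metric for MIS benchmarks like the one described here.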
6.
J Med Imaging (Bellingham) ; 6(3): 034004, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31572745

ABSTRACT

A color fundus image is an image of the inner wall of the eyeball taken with a fundus camera. Doctors can observe retinal vessel changes in such images, and these changes can be used to diagnose many serious diseases such as atherosclerosis, glaucoma, and age-related macular degeneration. Automated segmentation of retinal vessels can facilitate more efficient diagnosis of these diseases. We propose an improved U-net architecture to segment retinal vessels. A multiscale input layer and dense blocks are introduced into the conventional U-net so that the network can make use of richer spatial context information. The proposed method is evaluated on the public DRIVE dataset, achieving a sensitivity of 0.8199 and an accuracy of 0.9561. Segmentation results are especially improved for thin blood vessels, which are difficult to detect because of their low contrast with background pixels.
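A multiscale input layer typically feeds progressively downsampled copies of the input image into successive encoder levels, so each level sees the raw image at its own resolution alongside learned features; the paper does not give its exact downsampling scheme, so this average-pooling pyramid is an illustrative assumption:

```python
import numpy as np

def avg_pool2(img):
    """2x2 average pooling on a single-channel image (even height/width)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def input_pyramid(img, levels=4):
    """Progressively downsampled copies of the input, one per encoder
    level, to be concatenated with that level's feature maps."""
    pyramid = [img]
    for _ in range(levels - 1):
        pyramid.append(avg_pool2(pyramid[-1]))
    return pyramid

scales = input_pyramid(np.random.rand(64, 64))
print([s.shape for s in scales])  # [(64, 64), (32, 32), (16, 16), (8, 8)]
```

In a U-net each pyramid level would be concatenated channel-wise with the corresponding encoder feature map, giving deeper levels direct access to image intensities that pooling would otherwise have washed out, which is one way thin, low-contrast vessels can be preserved.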

7.
Comput Med Imaging Graph ; 55: 78-86, 2017 01.
Article in English | MEDLINE | ID: mdl-27665058

ABSTRACT

Automatic exudate segmentation in colour retinal fundus images is an important task in computer-aided diagnosis and screening systems for diabetic retinopathy. In this paper, we present a location-to-segmentation strategy for automatic exudate segmentation in colour retinal fundus images, comprising three stages: anatomic structure removal, exudate location, and exudate segmentation. In the anatomic structure removal stage, a matched-filter-based main vessel segmentation method and a saliency-based optic disk segmentation method are proposed; the main vessels and optic disk are then removed to eliminate the adverse effects they bring to the second stage. In the location stage, we train a random forest classifier to classify patches into two classes, exudate patches and exudate-free patches, using histograms of completed local binary patterns to describe the texture structure of the patches. Finally, local variance, a size prior on exudate regions, and a local contrast prior are used to segment the exudate regions out of the patches classified as exudate patches in the location stage. We evaluate our method at both the exudate level and the image level. For exudate-level evaluation, we test our method on the e-ophtha EX dataset, which provides pixel-level annotation from specialists. The experimental results show that our method achieves 76% sensitivity and 75% positive predictive value (PPV), both of which significantly outperform state-of-the-art methods. For image-level evaluation, we test our method on DiaRetDB1 and achieve competitive performance compared with state-of-the-art methods.


Subjects
Color; Diabetic Retinopathy/diagnostic imaging; Exudates and Transudates/diagnostic imaging; Fundus Oculi; Image Interpretation, Computer-Assisted/methods; Optic Disk/diagnostic imaging; Retinal Vessels/diagnostic imaging; Algorithms; Humans; Pattern Recognition, Automated/methods
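The texture features in the location stage build on local binary patterns; the paper uses the *completed* LBP variant, but the core encoding it extends is the basic 3x3 LBP shown here, where each neighbour contributes one bit of an 8-bit code depending on whether it is at least as bright as the centre pixel:

```python
import numpy as np

def lbp_code(patch):
    """Basic 3x3 local binary pattern code for the centre pixel of a
    3x3 patch: bit i is set when neighbour i >= the centre value."""
    c = patch[1, 1]
    # 8 neighbours taken clockwise from the top-left corner
    neighbours = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                  patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    return sum(1 << i for i, n in enumerate(neighbours) if n >= c)

flat = np.full((3, 3), 5.0)
print(lbp_code(flat))  # 255 (every neighbour >= centre)
```

A histogram of these codes over all pixels in a patch gives the texture descriptor fed to the random forest; CLBP augments this sign information with magnitude and centre-intensity components.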