Results 1 - 6 of 6
1.
PLoS One; 19(8): e0306794, 2024.
Article in English | MEDLINE | ID: mdl-39110715

ABSTRACT

BACKGROUND AND OBJECTIVES: To develop and test VMseg, a new image processing algorithm performing automatic segmentation of retinal non-perfusion in widefield OCT-Angiography images, in order to estimate the non-perfusion index in diabetic patients. METHODS: We included diabetic patients with severe non-proliferative or proliferative diabetic retinopathy. We acquired images using the PlexElite 9000 OCT-A device with a photomontage of 5 images of size 12 x 12 mm. We then developed VMseg, a Python algorithm for non-perfusion detection, which binarizes a variance map calculated through convolution and morphological operations. We used 70% of our data set (development set) to fine-tune the algorithm parameters (convolution and morphological parameters, binarization thresholds) and evaluated the algorithm performance on the remaining 30% (test set). The obtained automatic segmentations were compared to a ground truth corresponding to manual segmentation by a retina expert, and the inference processing time was estimated. RESULTS: We included 51 eyes of 30 patients (27 with severe non-proliferative, 24 with proliferative diabetic retinopathy). Using the optimal parameters found on the development set to tune the algorithm, the mean Dice coefficient for the test set was 0.683 (SD = 0.175). We found a higher Dice coefficient for images with a larger area of retinal non-perfusion (rs = 0.722, p < 10⁻⁴). There was a strong correlation (rs = 0.877, p < 10⁻⁴) between VMseg-estimated non-perfusion indexes and indexes estimated using the ground truth segmentation. The Bland-Altman plot revealed that 3 eyes (5.9%) were significantly under-segmented by VMseg. CONCLUSION: We developed VMseg, an automatic algorithm for retinal non-perfusion segmentation on 12 x 12 mm OCT-A widefield photomontages.
This simple algorithm was fast at inference, segmented images at full resolution in the native OCT-A format, and was accurate enough for automatic estimation of the retinal non-perfusion index in diabetic patients with diabetic retinopathy.
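The variance-map pipeline described in the abstract can be sketched as follows. This is a hypothetical re-implementation, not the authors' code: the window size, binarization threshold, and structuring-element size are illustrative stand-ins for the parameters tuned on the development set.

```python
import numpy as np
from scipy.ndimage import uniform_filter, binary_opening, binary_closing

def variance_map_segmentation(image, window=7, threshold=0.01, struct_size=3):
    """Binarize a local-variance map to flag low-variance (non-perfused) regions."""
    img = image.astype(float)
    # Local variance via convolution: E[x^2] - E[x]^2 over a sliding window.
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    variance = mean_sq - mean ** 2
    # Low variance = homogeneous (avascular) signal -> candidate non-perfusion.
    mask = variance < threshold
    # Morphological cleanup of the binary mask.
    structure = np.ones((struct_size, struct_size), dtype=bool)
    mask = binary_opening(mask, structure=structure)
    mask = binary_closing(mask, structure=structure)
    return mask

def non_perfusion_index(mask):
    """Fraction of the field of view flagged as non-perfused."""
    return mask.mean()
```

On a synthetic image with a textured (perfused) background and a flat (avascular) patch, the flat patch is flagged and the index reflects its relative area.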


Subjects
Algorithms; Diabetic Retinopathy; Tomography, Optical Coherence; Humans; Diabetic Retinopathy/diagnostic imaging; Tomography, Optical Coherence/methods; Female; Male; Middle Aged; Aged; Image Processing, Computer-Assisted/methods; Retinal Vessels/diagnostic imaging; Retina/diagnostic imaging; Retina/pathology; Angiography/methods; Fluorescein Angiography/methods
2.
BMJ Open; 14(4): e084574, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38626974

ABSTRACT

INTRODUCTION: An important obstacle in the fight against diabetic retinopathy (DR) is the use of a classification system based on old imaging techniques and insufficient data to accurately predict its evolution. New imaging techniques generate valuable new data, but we lack an adapted classification based on these data. The main objective of the Évaluation Intelligente de la Rétinopathie Diabétique (Intelligent Evaluation of DR, EviRed) project is to develop and validate a system assisting the ophthalmologist in decision-making during DR follow-up by improving the prediction of its evolution. METHODS AND ANALYSIS: A cohort of up to 5000 patients with diabetes will be recruited from 18 diabetology departments and 14 ophthalmology departments, in public or private hospitals in France, and followed for an average of 2 years. Each year, systemic health data as well as ophthalmological data will be collected. Both eyes will be imaged using different imaging modalities including widefield photography, optical coherence tomography (OCT) and OCT-angiography. The EviRed cohort will be divided into two groups: one group will be randomly selected in each stratum during the inclusion period to be representative of the general diabetic population, and its data will be used to validate the algorithms (validation cohort). The data for the remaining patients (training cohort) will be used to train the algorithms. ETHICS AND DISSEMINATION: The study protocol was approved by the French South-West and Overseas Ethics Committee 4 on 28 August 2020 (CPP2020-07-060b/2020-A01725-34/20.06.16.41433). Prior to the start of the study, each patient will provide written informed consent documenting his or her agreement to participate in the clinical trial. Results of this research will be disseminated in peer-reviewed publications and conference presentations. The database will also be available for further study or development that could benefit patients.
TRIAL REGISTRATION NUMBER: NCT04624737.
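The stratum-wise random selection of the validation cohort can be sketched as below. This is a minimal illustration only: the stratum labels and the validation fraction are assumptions for the example, not values from the protocol.

```python
import random
from collections import defaultdict

def stratified_split(patients, stratum_of, validation_fraction=0.2, seed=42):
    """Randomly select a validation cohort within each stratum so it stays
    representative of the overall population; all remaining patients form
    the training cohort."""
    rng = random.Random(seed)
    by_stratum = defaultdict(list)
    for p in patients:
        by_stratum[stratum_of(p)].append(p)
    validation, training = [], []
    for _stratum, members in sorted(by_stratum.items()):
        members = members[:]          # avoid shuffling the caller's lists
        rng.shuffle(members)
        k = round(len(members) * validation_fraction)
        validation.extend(members[:k])
        training.extend(members[k:])
    return validation, training
```

Because the selection happens inside each stratum, the validation cohort preserves the strata proportions of the full cohort.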


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Humans; Male; Female; Diabetic Retinopathy/diagnostic imaging; Artificial Intelligence; Prospective Studies; Retina; Algorithms
3.
Comput Biol Med; 177: 108635, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38796881

ABSTRACT

Multimodal medical imaging plays a pivotal role in clinical diagnosis and research, as it combines information from various imaging modalities to provide a more comprehensive understanding of the underlying pathology. Recently, deep learning-based multimodal fusion techniques have emerged as powerful tools for improving medical image classification. This review offers a thorough analysis of the developments in deep learning-based multimodal fusion for medical classification tasks. We explore the complementary relationships among prevalent clinical modalities and outline three main fusion schemes for multimodal classification networks: input fusion, intermediate fusion (encompassing single-level fusion, hierarchical fusion, and attention-based fusion), and output fusion. By evaluating the performance of these fusion techniques, we provide insight into the suitability of different network architectures for various multimodal fusion scenarios and application domains. Furthermore, we delve into challenges related to network architecture selection, handling incomplete multimodal data, and the potential limitations of multimodal fusion. Finally, we spotlight the promising future of Transformer-based multimodal fusion techniques and give recommendations for future research in this rapidly evolving field.
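The distinction between the three fusion schemes can be illustrated with a minimal sketch. The `encode` and `classify` functions below are toy elementwise stand-ins for trained networks, not any architecture from the review; only the placement of the fusion step is the point.

```python
import numpy as np

def encode(x):
    """Toy per-modality feature extractor (stand-in for a trained encoder)."""
    return np.tanh(x)

def classify(features):
    """Toy decision head producing one score per sample (stand-in)."""
    return features.mean(axis=-1)

def input_fusion(mod_a, mod_b):
    """Fuse raw modalities before any processing, e.g. channel concatenation."""
    return classify(encode(np.concatenate([mod_a, mod_b], axis=-1)))

def intermediate_fusion(mod_a, mod_b):
    """Encode each modality separately, then fuse the learned features."""
    return classify(np.concatenate([encode(mod_a), encode(mod_b)], axis=-1))

def output_fusion(mod_a, mod_b):
    """Run a full pipeline per modality, then fuse the decisions (late fusion)."""
    return 0.5 * (classify(encode(mod_a)) + classify(encode(mod_b)))
```

In a real network the fusion point determines how much cross-modal interaction the model can learn: input fusion allows interaction everywhere, output fusion only at the decision level.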


Subjects
Deep Learning; Multimodal Imaging; Humans; Multimodal Imaging/methods; Image Interpretation, Computer-Assisted/methods; Image Processing, Computer-Assisted/methods
4.
Sci Rep; 14(1): 11723, 2024 05 22.
Article in English | MEDLINE | ID: mdl-38778145

ABSTRACT

In the realm of ophthalmology, precise measurement of tear film break-up time (TBUT) plays a crucial role in diagnosing dry eye disease (DED). This study aims to introduce an automated approach utilizing artificial intelligence (AI) to mitigate subjectivity and enhance the reliability of TBUT measurement. We employed a dataset of 47 slit lamp videos for development, while a test dataset of 20 slit lamp videos was used to evaluate the proposed approach. The multistep approach for TBUT estimation involves a Dual-Task Siamese Network that classifies video frames into tear film breakup or non-breakup categories. Subsequently, a postprocessing step applies a Gaussian filter to smooth the instantaneous breakup/non-breakup predictions. Applying a threshold to the smoothed predictions identifies the initiation of tear film breakup. On the evaluation dataset, our proposed method achieved precise breakup/non-breakup classification of video frames, with an Area Under the Curve of 0.870. At the video level, we observed a strong Pearson correlation coefficient (r) of 0.81 between TBUT assessments conducted using our approach and the ground truth. These findings underscore the potential of AI-based approaches in quantifying TBUT, presenting a promising avenue for advancing diagnostic methodologies in ophthalmology.
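The smoothing-and-threshold postprocessing step can be sketched as follows, assuming per-frame breakup probabilities from the classifier. The `sigma` and `threshold` values are illustrative, not the tuned parameters from the study.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def estimate_tbut(frame_probs, fps, sigma=5.0, threshold=0.5):
    """Estimate tear film break-up time from per-frame breakup probabilities.

    Smooth the instantaneous breakup/non-breakup predictions with a Gaussian
    filter, then return the time (in seconds) of the first frame whose
    smoothed probability crosses the threshold.
    """
    smoothed = gaussian_filter1d(np.asarray(frame_probs, dtype=float), sigma=sigma)
    above = np.flatnonzero(smoothed >= threshold)
    if above.size == 0:
        return None  # no breakup detected in the video
    return above[0] / fps  # frame index -> seconds
```

The Gaussian filter suppresses isolated misclassified frames, so a single spurious "breakup" frame early in the video does not produce an artificially short TBUT.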


Subjects
Deep Learning; Dry Eye Syndromes; Tears; Dry Eye Syndromes/diagnosis; Humans; Reproducibility of Results; Video Recording
5.
Artif Intell Med; 149: 102803, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38462293

ABSTRACT

Diabetic Retinopathy (DR), an ocular complication of diabetes, is a leading cause of blindness worldwide. Traditionally, DR is monitored using Color Fundus Photography (CFP), a widespread 2-D imaging modality. However, DR classifications based on CFP have poor predictive power, resulting in suboptimal DR management. Optical Coherence Tomography Angiography (OCTA) is a recent 3-D imaging modality offering enhanced structural and functional information (blood flow) with a wider field of view. This paper investigates automatic DR severity assessment using 3-D OCTA. A straightforward solution to this task is a 3-D neural network classifier. However, 3-D architectures have numerous parameters and typically require many training samples. A lighter solution is to use 2-D neural network classifiers processing 2-D en-face (or frontal) projections and/or 2-D cross-sectional slices. Such an approach mimics the way ophthalmologists analyze OCTA acquisitions: (1) en-face flow maps are often used to detect avascular zones and neovascularization, and (2) cross-sectional slices are commonly analyzed to detect macular edema, for instance. However, arbitrary data reduction or selection might result in information loss. Two complementary strategies are thus proposed to optimally summarize OCTA volumes with 2-D images: (1) a parametric en-face projection optimized through deep learning and (2) a cross-sectional slice selection process controlled through gradient-based attribution. The full summarization and DR classification pipeline is trained end to end. The automatic 2-D summary can be displayed in a viewer or printed in a report to support the decision. We show that the proposed 2-D summarization and classification pipeline outperforms direct 3-D classification, with the advantage of improved interpretability.
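The idea of a parametric en-face projection can be sketched with a depth-weighted average. This is a conceptual illustration only: in the paper the projection is optimized end to end with the classifier, whereas here the softmax-normalized depth weights are just fixed inputs standing in for learned parameters.

```python
import numpy as np

def parametric_enface_projection(volume, depth_logits):
    """Collapse an OCTA volume (depth, H, W) into a 2-D en-face image using a
    parametric depth weighting.

    A fixed mean projection is the special case of uniform weights; learning
    depth_logits lets the network emphasize the most informative retinal
    layers.
    """
    w = np.exp(depth_logits - depth_logits.max())
    w = w / w.sum()                              # softmax over depth
    return np.tensordot(w, volume, axes=(0, 0))  # weighted average along depth
```

With uniform logits this reduces to a plain mean projection; with a sharply peaked logit it approximates selecting a single slab.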


Subjects
Diabetes Mellitus; Diabetic Retinopathy; Humans; Diabetic Retinopathy/diagnostic imaging; Fluorescein Angiography/methods; Retinal Vessels/diagnostic imaging; Tomography, Optical Coherence/methods; Cross-Sectional Studies
6.
Sci Rep; 13(1): 23099, 2023 12 28.
Article in English | MEDLINE | ID: mdl-38155189

ABSTRACT

Quantitative Gait Analysis (QGA) is considered an objective measure of gait performance. In this study, we aim to design an artificial intelligence that can efficiently predict the progression of gait quality using kinematic data obtained from QGA. For this purpose, we use a gait database collected from 734 patients with gait disorders. As the patient walks, kinematic data are collected during the gait session. These data are processed to generate the Gait Profile Score (GPS) for each gait cycle. Tracking potential GPS variations enables the detection of changes in gait quality, and our work is driven by predicting such future variations. Two approaches were considered: signal-based and image-based. The signal-based approach uses raw gait cycles, while the image-based one employs a two-dimensional Fast Fourier Transform (2D FFT) representation of gait cycles. Several architectures were developed, and the obtained Area Under the Curve (AUC) was above 0.72 for both approaches. To the best of our knowledge, our study is the first to apply neural networks to gait prediction tasks.
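The image-based input described above can be sketched as follows. The log-magnitude and fftshift normalization steps are common choices for feeding spectra to a network, assumed here for illustration rather than taken from the study's preprocessing.

```python
import numpy as np

def gait_cycle_to_fft_image(cycle):
    """Turn a gait cycle (time_steps x kinematic_channels) into a 2-D FFT
    magnitude image suitable as input to an image-based network."""
    spectrum = np.fft.fft2(cycle)                 # 2-D discrete Fourier transform
    magnitude = np.abs(np.fft.fftshift(spectrum)) # center the zero frequency
    return np.log1p(magnitude)                    # compress dynamic range
```

The resulting image has the same shape as the input cycle, so a stack of cycles maps directly to a batch of single-channel images.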


Subjects
Artificial Intelligence; Gait Analysis; Humans; Gait Analysis/methods; Gait; Neural Networks, Computer; Fourier Analysis; Biomechanical Phenomena