1.
J Imaging Inform Med; 37(1): 107-122, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38343245

ABSTRACT

Central Serous Chorioretinopathy (CSC) is a retinal disorder caused by the accumulation of fluid, resulting in vision distortion. The diagnosis of this disease is typically performed through Optical Coherence Tomography (OCT) imaging, which displays any fluid buildup between the retinal layers. Currently, these fluid regions are detected manually by visual inspection, a time-consuming and subjective process that can be prone to errors. A series of six deep learning-based automatic segmentation architectural configurations of different levels of complexity were trained and compared in order to determine the most suitable model for the automatic segmentation of CSC-related lesions in OCT images. The best-performing models were then evaluated in an external validation study. Furthermore, an intra- and inter-expert analysis was conducted to compare the manual segmentation performed by expert ophthalmologists with the automatic segmentation provided by the models. The best-performing configuration achieved a mean Dice coefficient of 0.868 ± 0.056 on the internal test set. On the external validation set, these models reached a level of agreement with human experts of up to 0.960 in terms of the Kappa coefficient, compared with a value of 0.951 for the agreement between the human experts themselves. Overall, the models agreed better with either of the human experts than the experts did with each other, suggesting that automatic segmentation models for the detection of CSC-related lesions in OCT imaging can be useful tools for assessing this disease, reducing the workload of manual inspection and leading to a more robust and objective diagnostic method.
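The evaluation above reports Dice overlap and Kappa-based agreement between segmentations. The article itself does not include code; the following is a minimal sketch of how these two metrics can be computed for a pair of binary fluid masks with NumPy, using a toy example in place of real OCT annotations.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def cohen_kappa(pred: np.ndarray, target: np.ndarray) -> float:
    """Cohen's kappa over per-pixel binary labels (agreement beyond chance)."""
    pred, target = pred.astype(bool).ravel(), target.astype(bool).ravel()
    observed = (pred == target).mean()
    p_pred, p_target = pred.mean(), target.mean()
    expected = p_pred * p_target + (1 - p_pred) * (1 - p_target)
    return (observed - expected) / (1 - expected)

# Toy example: two partially overlapping 8x8 fluid masks.
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
print(dice_coefficient(a, b), cohen_kappa(a, b))
```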

2.
IEEE J Biomed Health Inform; 27(11): 5483-5494, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37682646

ABSTRACT

Retinal Optical Coherence Tomography (OCT) allows the non-invasive direct observation of the central nervous system, enabling the measurement and extraction of biomarkers from neural tissue that can be helpful in the assessment of ocular, systemic and Neurological Disorders (ND). Deep learning models can be trained to segment the retinal layers for biomarker extraction. However, the onset of ND can affect the neural tissue, degrading the performance of models that were not exposed to images displaying signs of disease during training. We present a fully automatic approach for retinal layer segmentation in multiple neurodegenerative disorder scenarios, using an annotated dataset of patients with the most prevalent NDs: Alzheimer's disease, Parkinson's disease, multiple sclerosis and essential tremor, along with healthy controls. Furthermore, we present a two-part, comprehensive study on the effects of ND on the performance of these models. The results show that images of healthy patients may not be sufficient for the robust training of automated segmentation models intended for the analysis of ND patients, and that using images representative of different NDs can increase model performance. These results indicate that the presence or absence of ND patients in datasets should be taken into account when training deep learning models for retinal layer segmentation, and that the proposed approach can provide a valuable tool for robust and reliable diagnosis across multiple ND scenarios.
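The central point of this study is that training data should cover the different neurodegenerative cohorts rather than only healthy controls. As an illustration only (the article does not describe its training pipeline at this level of detail), the following PyTorch sketch combines hypothetical per-cohort datasets and re-weights sampling so that smaller disorder cohorts are not drowned out by the larger control set; all data here are synthetic stand-ins.

```python
import torch
from torch.utils.data import TensorDataset, ConcatDataset, WeightedRandomSampler, DataLoader

# Hypothetical stand-ins for per-cohort OCT datasets (B-scans and layer masks);
# a real pipeline would load annotated images for each cohort instead.
def toy_cohort(n: int) -> TensorDataset:
    return TensorDataset(torch.randn(n, 1, 64, 64), torch.randint(0, 5, (n, 64, 64)))

cohorts = {"control": toy_cohort(40), "alzheimer": toy_cohort(10),
           "parkinson": toy_cohort(10), "ms": toy_cohort(10), "tremor": toy_cohort(10)}

combined = ConcatDataset(list(cohorts.values()))

# Weight samples inversely to cohort size so each disorder is drawn with similar
# frequency during training, instead of letting the control set dominate.
weights = torch.cat([torch.full((len(ds),), 1.0 / len(ds)) for ds in cohorts.values()])
sampler = WeightedRandomSampler(weights, num_samples=len(combined), replacement=True)
loader = DataLoader(combined, batch_size=8, sampler=sampler)

images, masks = next(iter(loader))
print(images.shape, masks.shape)  # torch.Size([8, 1, 64, 64]) torch.Size([8, 64, 64])
```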


Subject(s)
Multiple Sclerosis, Parkinson Disease, Humans, Retina, Optical Coherence Tomography/methods
3.
Quant Imaging Med Surg; 13(5): 2846-2859, 2023 May 01.
Article in English | MEDLINE | ID: mdl-37179949

ABSTRACT

Background: Glaucoma is the leading global cause of irreversible blindness. Glaucoma patients experience a progressive deterioration of the retinal nervous tissue that begins with a loss of peripheral vision. An early diagnosis is essential to prevent blindness. Ophthalmologists measure the deterioration caused by this disease by assessing the retinal layers in different regions of the eye, using different optical coherence tomography (OCT) scanning patterns to extract images, generating views of multiple parts of the retina. These images are used to measure the thickness of the retinal layers in different regions. Methods: We present two approaches for the multi-region segmentation of the retinal layers in OCT images of glaucoma patients. These approaches can extract the relevant anatomical structures for glaucoma assessment from three different OCT scan patterns: circumpapillary circle scans, macular cube scans and optic disc (OD) radial scans. By employing transfer learning to take advantage of the visual patterns present in a related domain, these approaches use state-of-the-art segmentation modules to achieve a robust, fully automatic segmentation of the retinal layers. The first approach exploits inter-view similarities by using a single module to segment all of the scan patterns, treating them as a single domain. The second approach uses view-specific modules for the segmentation of each scan pattern, automatically detecting the suitable module to analyse each image. Results: The proposed approaches produced satisfactory results, with the first approach achieving a Dice coefficient of 0.85±0.06 and the second 0.87±0.08 for all segmented layers. The first approach produced the best results for the radial scans, while the view-specific second approach achieved the best results for the better-represented circle and cube scan patterns. Conclusions: To the best of our knowledge, this is the first proposal in the literature for the multi-view segmentation of the retinal layers of glaucoma patients, demonstrating the applicability of machine learning-based systems for aiding in the diagnosis of this relevant pathology.
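The second, view-specific approach described in the Methods routes each image to the segmentation module trained for its scan pattern, after automatically detecting that pattern. The following is a highly simplified, hypothetical sketch of such a routing scheme in PyTorch; the tiny classifier and segmenter networks are placeholders and do not reflect the actual architectures used in the article.

```python
import torch
import torch.nn as nn

VIEWS = ["circle", "cube", "radial"]  # the three scan patterns considered

class TinySegmenter(nn.Module):
    """Placeholder per-view segmentation module producing per-pixel layer logits."""
    def __init__(self, n_layers: int = 5):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_layers, 1))
    def forward(self, x):
        return self.net(x)

# Small classifier that identifies the scan pattern of an incoming image.
view_classifier = nn.Sequential(nn.Conv2d(1, 8, 3, stride=2), nn.ReLU(),
                                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                nn.Linear(8, len(VIEWS)))
segmenters = nn.ModuleDict({v: TinySegmenter() for v in VIEWS})

def segment(image: torch.Tensor) -> torch.Tensor:
    """Detect the view of a single image, then apply the matching segmenter."""
    view_idx = view_classifier(image).argmax(dim=1).item()
    return segmenters[VIEWS[view_idx]](image)

mask_logits = segment(torch.randn(1, 1, 128, 128))
print(mask_logits.shape)  # torch.Size([1, 5, 128, 128])
```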

4.
Med Biol Eng Comput; 61(5): 1093-1112, 2023 May.
Article in English | MEDLINE | ID: mdl-36680707

ABSTRACT

In optical coherence tomography (OCT), there is a trade-off between scanning time and image quality, leading to a scarcity of high-quality data. OCT platforms provide different scanning presets that produce visually distinct images, limiting their compatibility. In this work, a fully automatic methodology for the unpaired visual conversion between the two most prevalent scanning presets is proposed. Using contrastive unpaired translation generative adversarial architectures, low-quality images acquired with the faster Macular Cube preset can be converted to the visual style of high-visibility Seven Lines scans and vice versa. This modifies the visual appearance of the OCT images generated by each preset while preserving the natural tissue structure. The quality of the original and synthetically generated images was compared using BRISQUE, with the synthetic images achieving scores very similar to those of original images of their target preset. The generative models were also validated in automatic and expert separability tests, demonstrating that they were able to replicate the genuine look of the original images. This methodology has the potential to create multi-preset datasets with which to train robust computer-aided diagnosis systems, exposing them to the visual features of the different presets they may encounter in real clinical scenarios without having to obtain additional data.
Graphical Abstract: Unpaired mutual conversion between scanning presets. Two generative adversarial models are trained for the conversion of OCT images into images of another scanning preset, replicating the visual features that characterise said preset.
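The conversion between presets relies on unpaired adversarial training: a generator learns to render images of one preset in the visual style of the other, while a discriminator tries to tell translated images from genuine ones. The sketch below shows one such adversarial update in PyTorch using toy networks and random tensors in place of OCT scans; it omits the patch-wise contrastive (PatchNCE) term that characterises contrastive unpaired translation, so it should be read as a simplified illustration rather than the article's actual method.

```python
import torch
import torch.nn as nn

# Toy generator (Cube -> Seven Lines style) and patch discriminator.
generator = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())
discriminator = nn.Sequential(nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 1, 4, stride=2, padding=1))  # patch logits

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
adv = nn.MSELoss()  # least-squares GAN objective

cube_batch = torch.randn(4, 1, 64, 64)         # source preset (toy data)
seven_lines_batch = torch.randn(4, 1, 64, 64)  # target preset (toy data)

# Discriminator step: real target images -> 1, translated images -> 0.
fake = generator(cube_batch).detach()
pred_real, pred_fake = discriminator(seven_lines_batch), discriminator(fake)
d_loss = adv(pred_real, torch.ones_like(pred_real)) + adv(pred_fake, torch.zeros_like(pred_fake))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: translated images should fool the discriminator.
pred = discriminator(generator(cube_batch))
g_loss = adv(pred, torch.ones_like(pred))
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
print(float(d_loss), float(g_loss))
```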


Subject(s)
Computer-Assisted Diagnosis, Optical Coherence Tomography, Optical Coherence Tomography/methods, Computer-Assisted Image Processing/methods
5.
Comput Med Imaging Graph; 98: 102068, 2022 Jun.
Article in English | MEDLINE | ID: mdl-35489237

ABSTRACT

BACKGROUND AND OBJECTIVES: The Epiretinal Membrane (ERM) is an ocular disease that can cause visual distortions and irreversible vision loss. Preserving the patient's sight relies on an early diagnosis and on determining the location of the ERM so that it can be treated and potentially removed. In this context, the visual inspection of the images to screen for ERM signs is a costly and subjective process. METHODS: In this work, we propose and study three end-to-end, fully automatic approaches for the simultaneous segmentation and screening of ERM signs in Optical Coherence Tomography images. These convolutional approaches exploit a multi-task learning setting, leveraging inter-task complementarity to guide the training process. The proposed architectures are combined with three different state-of-the-art reference encoder architectures in order to provide an exhaustive study of the suitability of each approach for these tasks. Furthermore, these architectures work in an end-to-end manner, significantly simplifying the development process since they can be trained directly from annotated images without the need for a series of purpose-specific steps. RESULTS: In terms of segmentation, the proposed models obtained a precision of 0.760 ± 0.050, a sensitivity of 0.768 ± 0.210 and a specificity of 0.945 ± 0.011. For the screening task, these models achieved a precision of 0.963 ± 0.068, a sensitivity of 0.816 ± 0.162 and a specificity of 0.983 ± 0.068. The obtained results show that these multi-task approaches can perform competitively with, or even outperform, single-task methods tailored for either the segmentation or the screening of the ERM. CONCLUSIONS: These results highlight the advantages of using complementary knowledge related to the segmentation and screening tasks in the diagnosis of this relevant pathology, constituting the first proposal to address the diagnosis of the ERM from a multi-task perspective.
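A multi-task setup of the kind described above shares an encoder between a per-pixel segmentation head and an image-level screening head, and optimises the sum of both losses so that each task can inform the other. The following PyTorch sketch is illustrative only: the encoder, heads and unweighted loss combination are assumptions, not the configurations evaluated in the article.

```python
import torch
import torch.nn as nn

class MultiTaskERM(nn.Module):
    """Shared encoder feeding a segmentation head and a screening head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.seg_head = nn.Conv2d(32, 1, 1)                     # per-pixel ERM logits
        self.cls_head = nn.Sequential(nn.AdaptiveAvgPool2d(1),  # image-level screening logit
                                      nn.Flatten(), nn.Linear(32, 1))

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

model = MultiTaskERM()
images = torch.randn(4, 1, 64, 64)                         # toy B-scans
seg_target = torch.randint(0, 2, (4, 1, 64, 64)).float()   # toy ERM masks
cls_target = torch.randint(0, 2, (4, 1)).float()           # toy screening labels

seg_logits, cls_logits = model(images)
# Combined loss: both tasks contribute gradients to the shared encoder.
loss = (nn.functional.binary_cross_entropy_with_logits(seg_logits, seg_target)
        + nn.functional.binary_cross_entropy_with_logits(cls_logits, cls_target))
loss.backward()
print(float(loss))
```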


Asunto(s)
Membrana Epirretinal , Diagnóstico Precoz , Membrana Epirretinal/diagnóstico por imagen , Humanos , Tomografía de Coherencia Óptica/métodos