Results 1 - 11 of 11
1.
Article in English | MEDLINE | ID: mdl-39056571

ABSTRACT

PURPOSE: To utilise ganglion cell-inner plexiform layer (GCIPL) measurements acquired using widefield optical coherence tomography (OCT) scans spanning 55° × 45° to explore the link between co-localised structural parameters and clinical visual field (VF) data. METHODS: Widefield OCT scans acquired from 311 healthy, 268 glaucoma suspect and 269 glaucoma eyes were segmented to generate GCIPL thickness measurements. Estimated ganglion cell (GC) counts, calculated from GCIPL measurements, were plotted against 24-2 SITA Faster VF thresholds, and regression models were computed with data categorised by diagnosis and VF status. Classification of locations as VF defective or non-defective using GCIPL parameters computed across eccentricity- and hemifield-dependent clusters was assessed by analysing areas under receiver operating characteristic curves (AUROCCs). Sensitivities and specificities were calculated per diagnostic category. RESULTS: Segmented linear regression models between GC counts and VF thresholds demonstrated higher variability in VF defective locations relative to non-defective locations (mean absolute error 6.10-9.93 dB and 1.43-1.91 dB, respectively). AUROCCs from cluster-wide GCIPL parameters were similar across methods centrally (p = 0.06-0.84) but significantly greater peripherally, especially when considering classification of more central locations (p < 0.0001). Across diagnoses, cluster-wide GCIPL parameters demonstrated variable sensitivities and specificities (0.36-0.93 and 0.65-0.98, respectively), with the highest specificities observed across healthy eyes (0.73-0.98). CONCLUSIONS: Quantitative prediction of VF thresholds from widefield OCT is affected by high variability at VF defective locations. Prediction of VF status based on cluster-wide GCIPL parameters from widefield OCT could become useful to aid clinical decision-making in appropriately targeting VF assessments.
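The classification analysis above hinges on the AUROC statistic. As a minimal, illustrative sketch (not the study's implementation, and using made-up scores rather than GCIPL data), the AUROC can be computed directly from its rank-based definition:

```python
def auroc(scores, labels):
    """Area under the ROC curve via its Mann-Whitney interpretation:
    the probability that a randomly chosen defective location (label 1)
    receives a higher abnormality score than a non-defective one
    (label 0), counting ties as one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical abnormality scores (e.g. derived from GCIPL thinning)
# paired with VF defect labels for six locations:
area = auroc([0.9, 0.8, 0.6, 0.4, 0.3, 0.2], [1, 1, 0, 1, 0, 0])
```

A value of 1.0 would mean the structural parameter perfectly separates defective from non-defective locations; 0.5 is chance level.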

2.
Biomed Opt Express ; 15(4): 2262-2280, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38633090

ABSTRACT

OCT is a widely used clinical ophthalmic imaging technique, but the presence of speckle noise can obscure important pathological features and hinder accurate segmentation. This paper presents a novel method for denoising optical coherence tomography (OCT) images using a combination of texture loss and generative adversarial networks (GANs). Previous approaches have integrated deep learning techniques, starting with denoising Convolutional Neural Networks (CNNs) that employed pixel-wise losses. While effective in reducing noise, these methods often introduced a blurring effect in the denoised OCT images. To address this, perceptual losses were introduced, improving denoising performance and overall image quality. Building on these advancements, our research focuses on designing an image reconstruction GAN that generates OCT images with textural similarity to the gold standard, the averaged OCT image. We utilize the PatchGAN discriminator approach as a texture loss to enhance the quality of the reconstructed OCT images. We also compare the performance of UNet and ResNet as generators in the conditional GAN (cGAN) setting, as well as compare PatchGAN with the Wasserstein GAN. Using real clinical foveal-centered OCT retinal scans of children with normal vision, our experiments demonstrate that the combination of PatchGAN and UNet achieves superior performance (PSNR = 32.50) compared to recently proposed methods such as SiameseGAN (PSNR = 31.02). Qualitative experiments involving six masked clinical ophthalmologists also favor the reconstructed OCT images with PatchGAN texture loss. In summary, this paper introduces a novel method for denoising OCT images by incorporating texture loss within a GAN framework. The proposed approach outperforms existing methods and is well-received by clinical experts, offering promising advancements in OCT image reconstruction and facilitating accurate clinical interpretation.
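The PSNR figures quoted above (32.50 vs. 31.02) follow from a standard definition. A minimal sketch, assuming 8-bit images flattened into plain Python lists (illustrative only, not the paper's code):

```python
import math

def psnr(reference, denoised, max_val=255.0):
    # Peak signal-to-noise ratio in dB between a reference image (here,
    # the averaged "gold standard" OCT scan) and a denoised output, both
    # given as flat lists of pixel intensities.
    mse = sum((r - d) ** 2 for r, d in zip(reference, denoised)) / len(reference)
    if mse == 0.0:
        return math.inf  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Higher values indicate the denoised image is numerically closer to the average of repeated scans; note the abstract also reports qualitative clinician grading, since PSNR alone can reward over-smoothing.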

3.
Ophthalmic Physiol Opt ; 44(2): 457-471, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37990841

ABSTRACT

PURPOSE: To describe variations in ganglion cell-inner plexiform layer (GCIPL) thickness in a healthy cohort from widefield optical coherence tomography (OCT) scans. METHODS: Widefield OCT scans spanning 55° × 45° were acquired from 470 healthy eyes. The GCIPL was automatically segmented using deep learning methods. Thickness measurements were extracted after correction for warpage and retinal tilt. Multiple linear regression analysis was applied to discern trends between global GCIPL thickness and age, axial length and sex. To further characterise age-related change, hierarchical and two-step cluster algorithms were applied to identify locations sharing similar ageing properties, and rates of change were quantified using regression analyses with data pooled by cluster analysis outcomes. RESULTS: Declines in widefield GCIPL thickness with age, increasing axial length and female sex were observed (parameter estimates -0.053, -0.436 and -0.464, p-values <0.001, <0.001 and 0.02, respectively). Cluster analyses revealed concentric, slightly nasally displaced, horseshoe patterns of age-related change in the GCIPL, with up to four statistically distinct clusters outside the macula. Linear regression analyses revealed significant ageing decline in GCIPL thickness across all clusters, with faster rates of change observed at central locations when expressed as absolute (slope = -0.19 centrally vs. -0.04 to -0.12 peripherally) and percentage rates of change (slope = -0.001 centrally vs. -0.0005 peripherally). CONCLUSIONS: Normative variations in GCIPL thickness from widefield OCT with age, axial length and sex were noted, highlighting factors worth considering in further developments. Widefield OCT has promising potential to facilitate quantitative detection of abnormal GCIPL outside standard fields of view.


Subject(s)
Macula Lutea , Optical Coherence Tomography , Humans , Female , Optical Coherence Tomography/methods , Retinal Ganglion Cells , Nerve Fibers , Retina
4.
Clin Exp Optom ; 106(5): 466-475, 2023 07.
Article in English | MEDLINE | ID: mdl-35999058

ABSTRACT

Deep learning (DL) represents a paradigm-shifting, burgeoning field of research with emerging clinical applications in optometry. Unlike traditional programming, which relies on human-set specific rules, DL works by exposing the algorithm to a large amount of annotated data and allowing the software to develop its own set of rules (i.e. learn) by adjusting the parameters inside the model (network) during a training process in order to complete the task on its own. One major limitation of traditional programming is that, with complex tasks, it may require an extensive set of rules to accurately complete the assignment. Additionally, traditional programming can be susceptible to human bias from programmer experience. With the dramatic increase in the amount and the complexity of clinical data, DL has been utilised to automate data analysis and thus to assist clinicians in patient management. This review will present the latest advances in DL, for managing posterior eye diseases as well as DL-based solutions for patients with vision loss.


Subject(s)
Deep Learning , Optic Nerve Diseases , Humans , Visual Field Tests , Visual Fields , Retinal Ganglion Cells , Optic Nerve Diseases/therapy
5.
Br J Ophthalmol ; 107(5): 614-620, 2023 05.
Article in English | MEDLINE | ID: mdl-34815236

ABSTRACT

BACKGROUND: Conjunctival ultraviolet autofluorescence (CUVAF) is a method of detecting conjunctival damage related to ultraviolet radiation exposure. In cross-sectional studies, CUVAF area is positively associated with self-reported time spent outdoors and pterygium and negatively associated with myopia; however, longitudinal studies are scarce. AIMS: To use a novel deep learning-based tool to assess 8-year change in CUVAF area in young adults, investigate factors associated with this change and identify the number of new onset pterygia. METHODS: A deep learning-based CUVAF tool was developed to measure CUVAF area. CUVAF area and pterygium status were assessed at three study visits: baseline (participants were approximately 20 years old) and at 7-year and 8-year follow-ups. Participants self-reported sun protection behaviours and ocular history. RESULTS: CUVAF data were available for 1497 participants from at least one study visit; 633 (43%) participants had complete CUVAF data. Mean CUVAF areas at baseline and the 7-year and 8-year follow-ups were 48.4, 39.3 and 37.7 mm2, respectively. There was a decrease in mean CUVAF area over time (change in total CUVAF area=-0.96 mm2 per year (95% CI: -1.07 to -0.86)). For participants who wore sunglasses ≥1/2 of the time, CUVAF area decreased by an additional -0.42 mm2 per year (95% CI: -0.72 to -0.12) on average. Fourteen (1.5%) participants developed a pterygium. CONCLUSIONS: In this young adult cohort, CUVAF area declined over an 8-year period. Wearing sunglasses was associated with a faster reduction in CUVAF area. Deep learning-based models can assist in accurate and efficient measurement of CUVAF area.
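The reported annual change in CUVAF area comes from longitudinal modelling. As a simplified, hypothetical sketch (ordinary least squares for a single participant, rather than the study's cohort-level approach), a per-participant rate of change can be estimated from visit times and measured areas:

```python
def ols_slope(years, areas):
    # Least-squares slope of CUVAF area (mm^2) against time (years):
    # cov(x, y) / var(x), fitted over one participant's visits, e.g.
    # baseline, 7-year and 8-year follow-ups.
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(areas) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, areas))
    den = sum((x - mean_x) ** 2 for x in years)
    return num / den

# Hypothetical participant with shrinking CUVAF area across three visits:
rate = ols_slope([0.0, 7.0, 8.0], [45.0, 38.5, 37.8])  # mm^2 per year
```

A negative slope corresponds to the decline in CUVAF area described in the abstract; the study's figure of -0.96 mm² per year is a cohort estimate, not reproduced by this toy example.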


Subject(s)
Pterygium , Young Adult , Humans , Adult , Pterygium/diagnosis , Ultraviolet Rays/adverse effects , Sunlight/adverse effects , Cross-Sectional Studies , Optical Imaging/methods , Conjunctiva
6.
J Optom ; 15 Suppl 1: S1-S11, 2022.
Article in English | MEDLINE | ID: mdl-36241526

ABSTRACT

Optical coherence tomography (OCT) has revolutionized ophthalmic clinical practice and research, as a result of the high-resolution images that the method is able to capture in a fast, non-invasive manner. Although clinicians can interpret OCT images qualitatively, the ability to quantitatively and automatically analyse these images represents a key goal for eye care by providing clinicians with immediate and relevant metrics to inform best clinical practice. The range of applications and methods to analyse OCT images is rich and rapidly expanding. With the advent of deep learning methods, the field has experienced significant progress with state-of-the-art-performance for several OCT image analysis tasks. Generative adversarial networks (GANs) represent a subfield of deep learning that allows for a range of novel applications not possible in most other deep learning methods, with the potential to provide more accurate and robust analyses. In this review, the progress in this field and clinical impact are reviewed and the potential future development of applications of GANs to OCT image processing are discussed.


Subject(s)
Computer-Assisted Image Processing , Optical Coherence Tomography , Humans , Computer-Assisted Image Processing/methods
7.
Sci Rep ; 12(1): 14888, 2022 09 01.
Article in English | MEDLINE | ID: mdl-36050364

ABSTRACT

Deep learning methods have enabled a fast, accurate and automated approach for retinal layer segmentation in posterior segment OCT images. Due to the success of semantic segmentation methods adopting the U-Net, a wide range of variants and improvements have been developed and applied to OCT segmentation. Unfortunately, the relative performance of these methods is difficult to ascertain for OCT retinal layer segmentation due to a lack of comprehensive comparative studies, and a lack of proper matching between networks in previous comparisons, as well as the use of different OCT datasets between studies. In this paper, a detailed and unbiased comparison is performed between eight U-Net architecture variants across four different OCT datasets from a range of different populations, ocular pathologies, acquisition parameters, instruments and segmentation tasks. The U-Net architecture variants evaluated include some which have not been previously explored for OCT segmentation. Using the Dice coefficient to evaluate segmentation performance, minimal differences were noted between most of the tested architectures across the four datasets. Using an extra convolutional layer per pooling block gave a small improvement in segmentation performance for all architectures across all four datasets. This finding highlights the importance of careful architecture comparison (e.g. ensuring networks are matched using an equivalent number of layers) to obtain a true and unbiased performance assessment of fully semantic models. Overall, this study demonstrates that the vanilla U-Net is sufficient for OCT retinal layer segmentation and that state-of-the-art methods and other architectural changes are potentially unnecessary for this particular task, especially given the associated increased complexity and slower speed for the marginal performance gains observed. 
Given that the U-Net model and its variants represent one of the most commonly applied image segmentation methods, the consistent findings across several datasets here are likely to translate to many other OCT datasets and studies. This will provide significant value by saving time and cost in experimentation and model development, as well as reducing inference time in practice through the selection of simpler models.
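Segmentation performance in the comparison above is scored with the Dice coefficient; a minimal sketch over flattened binary masks (illustrative only):

```python
def dice(pred, truth):
    # Dice similarity coefficient between two binary masks, flattened to
    # 0/1 lists: 2|A ∩ B| / (|A| + |B|). 1.0 means perfect overlap;
    # two empty masks are conventionally treated as a perfect match.
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0
```

In a layer-segmentation setting, each retinal layer's predicted pixel mask would be compared against the manual ground-truth mask in this way, and scores averaged across B-scans.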


Subject(s)
Deep Learning , Computer-Assisted Image Processing , Computer-Assisted Image Processing/methods , Retina/diagnostic imaging
8.
J Biomed Opt ; 26(4)2021 04.
Article in English | MEDLINE | ID: mdl-33893726

ABSTRACT

SIGNIFICANCE: Speckle noise is an inherent limitation of optical coherence tomography (OCT) images that makes clinical interpretation challenging. The recent emergence of deep learning could offer a reliable method to reduce noise in OCT images. AIM: We sought to investigate the use of deep features (VGG) to limit the effect of blurriness and increase perceptual sharpness and to evaluate its impact on the performance of OCT image denoising (DnCNN). APPROACH: Fifty-one macula-centered OCT pairs were used in training of the network. Another set of 20 OCT pairs was used for testing. The DnCNN model was cascaded with a VGG network that acted as a perceptual loss function instead of the traditional losses of L1 and L2. The VGG network remained fixed during the training process. We focused on the individual layers of the VGG-16 network to decipher the contribution of each distinctive layer as a loss function to produce denoised OCT images that were perceptually sharp and that preserved the faint features (retinal layer boundaries) essential for interpretation. The peak signal-to-noise ratio (PSNR), edge-preserving index, and no-reference image sharpness/blurriness [perceptual sharpness index (PSI), just noticeable blur (JNB), and spectral and spatial sharpness measure (S3)] metrics were used to compare deep feature losses with the traditional losses. RESULTS: The deep feature loss produced images with high perceptual sharpness measures at the cost of less smoothness (PSNR) in OCT images. The deep feature loss outperformed the traditional losses (L1 and L2) for all of the evaluation metrics except for PSNR. The PSI, S3, and JNB estimates of deep feature loss performance were 0.31, 0.30, and 16.53, respectively. For the L1 and L2 losses, the PSI, S3, and JNB were 0.21 and 0.21, 0.17 and 0.16, and 14.46 and 14.34, respectively. CONCLUSIONS: We demonstrate the potential of deep feature loss in denoising OCT images.
Our preliminary findings suggest research directions for further investigation.
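Schematically, the perceptual (deep feature) loss replaces a pixel-wise L1/L2 penalty with the same distance computed between fixed feature-extractor activations. The sketch below is purely illustrative: `extract_features` stands in for a chosen VGG-16 layer, which is not implemented here.

```python
def l2_loss(a, b):
    # Mean squared error between two flat vectors (pixels or features).
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def perceptual_loss(extract_features, denoised, clean):
    # Deep feature ("VGG") loss: run both images through a FIXED feature
    # extractor and penalise the distance in feature space rather than
    # pixel space, which favours perceptual sharpness over smoothness.
    return l2_loss(extract_features(denoised), extract_features(clean))
```

With an identity extractor this degenerates to the ordinary L2 loss, which makes clear that the choice of feature layer is what distinguishes the two objectives.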


Subject(s)
Computer-Assisted Image Processing , Optical Coherence Tomography , Neural Networks (Computer) , Retina/diagnostic imaging , Signal-to-Noise Ratio
9.
Transl Vis Sci Technol ; 9(11): 12, 2020 10.
Article in English | MEDLINE | ID: mdl-33133774

ABSTRACT

Purpose: To use a deep learning model to develop a fully automated method (fully semantic network and graph search [FS-GS]) of retinal segmentation for optical coherence tomography (OCT) images from patients with Stargardt disease. Methods: Eighty-seven manually segmented (ground truth) OCT volume scan sets (5171 B-scans) from 22 patients with Stargardt disease were used for training, validation and testing of a novel retinal boundary detection approach (FS-GS) that combines a fully semantic deep learning segmentation method, which generates a per-pixel class prediction map, with a graph-search method to extract retinal boundary positions. The performance was evaluated using the mean absolute boundary error and the differences in two clinical metrics (retinal thickness and volume) compared with the ground truth. The performance of a separate deep learning method and two publicly available software algorithms was also evaluated against the ground truth. Results: FS-GS showed an excellent agreement with the ground truth, with a boundary mean absolute error of 0.23 and 1.12 pixels for the internal limiting membrane and the base of retinal pigment epithelium or Bruch's membrane, respectively. The mean differences in thickness and volume across the central 6 mm zone were 2.10 µm and 0.059 mm3, respectively. The performance of the proposed method was more accurate and consistent than the publicly available OCTExplorer and AURA tools. Conclusions: The FS-GS method delivers good performance in segmentation of OCT images of pathologic retina in Stargardt disease. Translational Relevance: Deep learning models can provide a robust method for retinal segmentation and support a high-throughput analysis pipeline for measuring retinal thickness and volume in Stargardt disease.
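The boundary errors quoted (0.23 and 1.12 pixels) are mean absolute differences between predicted and manually traced boundary positions. A minimal sketch, assuming one boundary row index per A-scan column and a hypothetical axial pixel scale (the true scale is instrument-dependent):

```python
def boundary_mae(predicted, ground_truth):
    # Mean absolute boundary position error in pixels. Each list holds
    # the boundary's row index for every A-scan column of a B-scan.
    return sum(abs(p - g) for p, g in zip(predicted, ground_truth)) / len(predicted)

def mean_thickness(top, bottom, microns_per_pixel=3.9):
    # Mean retinal thickness in microns between two boundaries (e.g. ILM
    # and Bruch's membrane). The 3.9 um/pixel scale is an assumed,
    # illustrative value only.
    return sum((b - t) * microns_per_pixel for t, b in zip(top, bottom)) / len(top)
```

Clinical metrics such as thickness and volume follow directly from segmented boundary positions in this way, which is why small boundary errors matter.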


Subject(s)
Deep Learning , Optical Coherence Tomography , Humans , Retina/diagnostic imaging , Retinal Pigment Epithelium , Stargardt Disease
10.
Sci Rep ; 9(1): 13298, 2019 09 16.
Article in English | MEDLINE | ID: mdl-31527630

ABSTRACT

The analysis of the choroid in the eye is crucial for our understanding of a range of ocular diseases and physiological processes. Optical coherence tomography (OCT) imaging provides the ability to capture highly detailed cross-sectional images of the choroid yet only a very limited number of commercial OCT instruments provide methods for automatic segmentation of choroidal tissue. Manual annotation of the choroidal boundaries is often performed but this is impractical due to the lengthy time taken to analyse large volumes of images. Therefore, there is a pressing need for reliable and accurate methods to automatically segment choroidal tissue boundaries in OCT images. In this work, a variety of patch-based and fully-convolutional deep learning methods are proposed to accurately determine the location of the choroidal boundaries of interest. The effect of network architecture, patch-size and contrast enhancement methods was tested to better understand the optimal architecture and approach to maximize performance. The results are compared with manual boundary segmentation used as a ground-truth, as well as with a standard image analysis technique. Results of total retinal layer segmentation are also presented for comparison purposes. The findings presented here demonstrate the benefit of deep learning methods for segmentation of the chorio-retinal boundary analysis in OCT images.


Subject(s)
Choroid/anatomy & histology , Choroid/diagnostic imaging , Computer-Assisted Image Processing/methods , Optical Coherence Tomography/methods , Humans , Neural Networks (Computer) , Retinal Pigment Epithelium/anatomy & histology , Support Vector Machine
11.
Biomed Opt Express ; 9(11): 5759-5777, 2018 Nov 01.
Article in English | MEDLINE | ID: mdl-30460160

ABSTRACT

The manual segmentation of individual retinal layers within optical coherence tomography (OCT) images is a time-consuming task and is prone to errors. The investigation into automatic segmentation methods that are both efficient and accurate has seen a variety of approaches proposed. In particular, recent machine learning approaches have focused on the use of convolutional neural networks (CNNs). Traditionally applied to sequential data, recurrent neural networks (RNNs) have recently demonstrated success in the area of image analysis, primarily due to their usefulness in extracting temporal features from sequences of images or volumetric data. However, their potential use in OCT retinal layer segmentation has not previously been reported, and their direct application for extracting spatial features from individual 2D images has been limited. This paper proposes the use of a recurrent neural network trained as a patch-based image classifier (retinal boundary classifier) with a graph search (RNN-GS) to segment seven retinal layer boundaries in OCT images from healthy children and three retinal layer boundaries in OCT images from patients with age-related macular degeneration (AMD). The optimal architecture configuration to maximize classification performance is explored. The results demonstrate that a RNN is a viable alternative to a CNN for image classification tasks in the case where the images exhibit a clear sequential structure. Compared to a CNN, the RNN showed a slightly superior average generalization classification accuracy. Secondly, in terms of segmentation, the RNN-GS performed competitively against a previously proposed CNN based method (CNN-GS) with respect to both accuracy and consistency. These findings apply to both normal and AMD data. Overall, the RNN-GS method yielded superior mean absolute errors in terms of the boundary position with an average error of 0.53 pixels (normal) and 1.17 pixels (AMD).
The methodology and results described in this paper may assist the future investigation of techniques within the area of OCT retinal segmentation and highlight the potential of RNN methods for OCT image analysis.
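The graph-search stage pairs the classifier's per-pixel boundary scores with a shortest-path extraction. A minimal dynamic-programming sketch (illustrative; the paper's actual graph construction may differ), where lower cost marks a more likely boundary pixel and the path may shift by at most one row between neighbouring columns:

```python
def trace_boundary(cost):
    # Minimum-cost left-to-right path through a per-pixel boundary cost
    # map (rows x cols), constrained to move at most one row between
    # adjacent columns. Returns the boundary row index per column.
    rows, cols = len(cost), len(cost[0])
    acc = [[0.0] * cols for _ in range(rows)]   # accumulated path cost
    back = [[0] * cols for _ in range(rows)]    # backtracking pointers
    for r in range(rows):
        acc[r][0] = cost[r][0]
    for c in range(1, cols):
        for r in range(rows):
            # Cheapest predecessor among the reachable rows r-1, r, r+1.
            best_r = min(range(max(0, r - 1), min(rows, r + 2)),
                         key=lambda rr: acc[rr][c - 1])
            acc[r][c] = cost[r][c] + acc[best_r][c - 1]
            back[r][c] = best_r
    # Backtrack from the cheapest endpoint in the last column.
    r = min(range(rows), key=lambda rr: acc[rr][cols - 1])
    path = [r]
    for c in range(cols - 1, 0, -1):
        r = back[r][c]
        path.append(r)
    return path[::-1]
```

In the RNN-GS pipeline, such a cost map would be derived from the boundary classifier's outputs (e.g. one minus the predicted boundary probability), so the smoothness constraint repairs isolated classification errors.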
