Results 1 - 11 of 11
1.
Commun Med (Lond) ; 4(1): 68, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600290

ABSTRACT

BACKGROUND: In vivo imaging of the human retina using adaptive optics optical coherence tomography (AO-OCT) has transformed medical imaging by enabling visualization of 3D retinal structures at cellular-scale resolution, including the retinal pigment epithelial (RPE) cells, which are essential for maintaining visual function. However, because noise inherent to the imaging process (e.g., speckle) makes it difficult to visualize RPE cells from a single volume acquisition, a large number of 3D volumes are typically averaged to improve contrast, substantially increasing the acquisition duration and reducing overall imaging throughput. METHODS: Here, we introduce the parallel discriminator generative adversarial network (P-GAN), an artificial intelligence (AI) method designed to recover speckle-obscured cellular features from a single AO-OCT volume, circumventing the need to acquire a large number of volumes for averaging. The combination of two parallel discriminators in P-GAN provides additional feedback to the generator, allowing it to more faithfully recover both local and global cellular structures. Imaging data from 8 eyes of 7 participants were used in this study. RESULTS: We show that P-GAN not only improves RPE cell contrast by 3.5-fold, but also shortens the end-to-end time required to visualize RPE cells by 99-fold, thereby enabling large-scale imaging of cells in the living human eye. RPE cell spacing measured across a large set of AI-recovered images from 3 participants was in agreement with expected normative ranges. CONCLUSIONS: The results demonstrate the potential of AI-assisted imaging to overcome a key limitation of RPE imaging and make it more accessible in a routine clinical setting.
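The averaging bottleneck that P-GAN circumvents can be illustrated with a toy simulation (not the paper's code): fully developed speckle is commonly modeled as multiplicative, exponentially distributed noise, which drives the single-frame signal-to-noise ratio down, and averaging N independent acquisitions recovers it roughly as the square root of N. The signal values below are arbitrary illustrative numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = np.full(10_000, 2.0)  # idealized noise-free reflectance

def acquire():
    # fully developed speckle modeled as multiplicative,
    # exponentially distributed noise (a standard simplification)
    return signal * rng.exponential(1.0, signal.shape)

single = acquire()
averaged = np.mean([acquire() for _ in range(100)], axis=0)

def snr(x):
    return x.mean() / x.std()

print(f"single-frame SNR:  {snr(single):.2f}")
print(f"100-frame average: {snr(averaged):.2f}")  # roughly 10x higher
```

This is why acquisition time scales with the number of volumes averaged, and why recovering cells from a single volume yields such a large end-to-end speedup.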


The retinal pigment epithelium (RPE) is a single layer of cells within the eye that is crucial for vision. These cells become unhealthy in many eye diseases, which can result in vision problems, including blindness. Imaging RPE cells in living human eyes is time-consuming and difficult with current technology. Our method substantially speeds up RPE imaging by incorporating artificial intelligence, enabling larger areas of the eye to be imaged more efficiently. Our method could potentially be used in the future during routine eye tests. This could lead to earlier detection and treatment of eye diseases, and the prevention of some causes of blindness.

3.
Commun Biol ; 5(1): 893, 2022 09 13.
Article in English | MEDLINE | ID: mdl-36100689

ABSTRACT

Choroideremia is an X-linked, blinding retinal degeneration with progressive loss of photoreceptors, retinal pigment epithelial (RPE) cells, and choriocapillaris. To study the extent to which these layers are disrupted in affected males and female carriers, we performed multimodal adaptive optics imaging to better visualize the in vivo pathogenesis of choroideremia in the living human eye. We demonstrate the presence of subclinical, widespread enlarged RPE cells in all subjects imaged. In the fovea, the last area to be affected in choroideremia, we found greater disruption to the RPE than to either the photoreceptor or choriocapillaris layers. The unexpected finding of patches of photoreceptors that were fluorescently-labeled, but structurally and functionally normal, suggests that the RPE blood barrier function may be altered in choroideremia. Finally, we introduce a strategy for detecting enlarged cells using conventional ophthalmic imaging instrumentation. These findings establish that there is subclinical polymegathism of RPE cells in choroideremia.


Subjects
Choroideremia, Retinal Degeneration, Choroid/diagnostic imaging, Choroideremia/genetics, Choroideremia/pathology, Female, Humans, Male, Optics and Photonics, Retinal Cone Photoreceptor Cells, Retinal Degeneration/pathology
4.
Invest Ophthalmol Vis Sci ; 63(8): 27, 2022 07 08.
Article in English | MEDLINE | ID: mdl-35900727

ABSTRACT

Purpose: To assess the structure of cone photoreceptors and retinal pigment epithelial (RPE) cells in vitelliform macular dystrophy (VMD) arising from various genetic etiologies. Methods: Multimodal adaptive optics (AO) imaging was performed in 11 patients with VMD using a custom-assembled instrument. Non-confocal split detection and AO-enhanced indocyanine green were used to visualize the cone photoreceptor and RPE mosaics, respectively. Cone and RPE densities were measured and compared across BEST1-, PRPH2-, IMPG1-, and IMPG2-related VMD. Results: Within macular lesions associated with VMD, both cone and RPE densities were reduced below normal, to 37% of normal cone density (eccentricity 0.2 mm) and to 8.4% of normal RPE density (eccentricity 0.5 mm). Outside of lesions, cone and RPE densities were slightly reduced (both to 92% of normal values), but with a high degree of variability in the individual measurements. Comparison of juxtalesional cone and RPE measurements (<1 mm from the lesion edge) revealed significant differences in RPE density across the four genes (P < 0.05). Overall, cones were affected to a greater extent than RPE in patients with IMPG1 and IMPG2 pathogenic variants, but RPE was affected more than cones in BEST1 and PRPH2 VMD. This trend was observed even in contralateral eyes from a subset of five patients who presented with macular lesions in only one eye. Conclusions: Assessment of cones and RPE in retinal locations outside of the macular lesions reveals a pattern of cone and RPE disruption that appears to be gene dependent in VMD. These findings provide insight into the cellular pathogenesis of disease in VMD.


Subjects
Vitelliform Macular Dystrophy, Bestrophins/genetics, Extracellular Matrix Proteins/genetics, Eye Proteins/chemistry, Eye Proteins/genetics, Humans, Optics and Photonics, Proteoglycans/genetics, Retinal Cone Photoreceptor Cells/pathology, Retinal Pigment Epithelium/pathology, Optical Coherence Tomography/methods, Vitelliform Macular Dystrophy/diagnosis, Vitelliform Macular Dystrophy/genetics
5.
Optica ; 8(3): 333-343, 2021 Mar 20.
Article in English | MEDLINE | ID: mdl-34504903

ABSTRACT

Adaptive optics scanning light ophthalmoscopy (AOSLO) allows non-invasive visualization of the living human eye at the microscopic scale, but even with correction of the ocular wavefront aberrations over a large pupil, the smallest cells in the photoreceptor mosaic cannot always be resolved. Here, we synergistically combine annular pupil illumination with sub-Airy disk confocal detection to demonstrate a 33% improvement in transverse resolution (from 2.36 to 1.58 µm) and a 13% axial resolution enhancement (from 37 to 32 µm), an important step towards the study of the complete photoreceptor mosaic in health and disease. Interestingly, annular pupil illumination also enhanced the visualization of the photoreceptor mosaic in non-confocal detection schemes such as split detection AOSLO, providing a strategy for enhanced multimodal imaging of the cone and rod photoreceptor mosaic.
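The quoted percentage gains follow directly from the before/after resolution figures; a quick arithmetic check (illustrative only, not from the paper):

```python
def improvement_pct(before, after):
    # fractional reduction in the resolvable feature size, as a percentage
    return 100.0 * (before - after) / before

lateral = improvement_pct(2.36, 1.58)  # transverse resolution, in µm
axial = improvement_pct(37.0, 32.0)    # axial resolution, in µm
print(f"transverse: {lateral:.0f}%")   # ~33%
print(f"axial:      {axial:.1f}%")     # ~13.5%, quoted as 13%
```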

6.
Biomed Opt Express ; 12(3): 1449-1466, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33796365

ABSTRACT

In vivo imaging of human retinal pigment epithelial (RPE) cells has been demonstrated through multiple adaptive optics (AO)-based modalities. However, whether consistent and complete information regarding the cellular structure of the RPE mosaic is obtained across these modalities remains uncertain due to limited comparisons performed in the same eye. Here, an imaging platform combining multimodal AO-scanning light ophthalmoscopy (AO-SLO) with AO-optical coherence tomography (AO-OCT) is developed to make a side-by-side comparison of the same RPE cells imaged across four modalities: AO-darkfield, AO-enhanced indocyanine green (AO-ICG), AO-infrared autofluorescence (AO-IRAF), and AO-OCT. Co-registered images were acquired in five subjects, including one patient with choroideremia. Multimodal imaging provided multiple perspectives of the RPE mosaic that were used to explore variations in RPE cell contrast in a subject-, location-, and even cell-dependent manner. Estimated cell-to-cell spacing and density were found to be consistent both across modalities and with normative data. Multimodal images from a patient with choroideremia illustrate the benefit of using multiple modalities to infer the cellular structure of the RPE mosaic in an affected eye, in which disruptions to the RPE mosaic may locally alter the signal strength, visibility of individual RPE cells, or even source of contrast in unpredictable ways.
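Cell-to-cell spacing and areal density are interconvertible under the usual assumption of a locally hexagonal mosaic. The paper does not specify its exact estimator, so the conversion below is a generic sketch, and the 6000 cells/mm² input is an illustrative macular RPE density, not a value from the study.

```python
import math

def hex_spacing_um(density_cells_per_mm2):
    # center-to-center spacing of an ideal hexagonal (triangular) lattice
    # with the given areal density; converted from mm to µm
    spacing_mm = math.sqrt(2.0 / (math.sqrt(3.0) * density_cells_per_mm2))
    return 1000.0 * spacing_mm

print(f"{hex_spacing_um(6000):.1f} um")  # ~13.9 µm for 6000 cells/mm^2
```

Agreement of this derived spacing across modalities is one way consistency of the underlying mosaic estimates can be checked.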

7.
IEEE Trans Med Imaging ; 40(10): 2820-2831, 2021 10.
Article in English | MEDLINE | ID: mdl-33507868

ABSTRACT

Data annotation is a fundamental precursor to establishing the large training sets needed to effectively apply deep learning methods to medical image analysis. For cell segmentation, obtaining high-quality annotations is an expensive process that usually requires manual grading by experts. This work introduces an approach to efficiently generate annotated images, called "A-GANs", created by combining an active cell appearance model (ACAM) with conditional generative adversarial networks (C-GANs). ACAM is a statistical model that captures a realistic range of cell characteristics and is used to ensure that the image statistics of generated cells are guided by real data. C-GANs utilize cell contours generated by ACAM to produce cells that match the input contours. By pairing ACAM-generated contours with the corresponding A-GAN-generated images, high-quality annotated images can be efficiently generated. Experimental results on adaptive optics (AO) retinal images showed that A-GANs robustly synthesize realistic, artificial images whose cell distributions are exquisitely specified by ACAM. The cell segmentation performance using as few as 64 manually-annotated real AO images combined with 248 artificially-generated images from A-GANs was similar to that obtained using 248 manually-annotated real images alone (Dice coefficients of 88% for both). Finally, application to rare diseases, in which images exhibit never-before-seen characteristics, demonstrated improvements in cell segmentation without the need to incorporate manual annotations from these new retinal images. Overall, A-GANs introduce a methodology for generating high-quality annotated data that statistically captures the characteristics of any desired dataset and can be used to more efficiently train deep-learning-based medical image analysis applications.
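The Dice coefficient used here (and in several of the other abstracts) to score segmentation overlap is straightforward to compute for binary masks; a minimal sketch:

```python
import numpy as np

def dice(pred, truth):
    # Dice = 2|A ∩ B| / (|A| + |B|) for two binary masks
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum())

a = np.array([[1, 1], [0, 0]])
b = np.array([[1, 0], [1, 0]])
print(dice(a, b))  # 0.5 (one overlapping pixel, two pixels in each mask)
```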


Subjects
Computer-Assisted Image Processing, Computer Neural Networks, Algorithms, Retina/diagnostic imaging
8.
Article in English | MEDLINE | ID: mdl-35464297

ABSTRACT

High-quality data labeling is essential for improving the accuracy of deep learning applications in medical imaging. However, noisy images are not only under-represented in training datasets, but the labels assigned to them also tend to be of low quality, and this problem is exacerbated by traditional data augmentation strategies. Because real-world images contain noise, these gaps can lead to unexpected drops in algorithm performance. In this paper, we present a non-traditional, purposeful data augmentation method that transfers high-quality automated labels into noisy image regions for incorporation into the training dataset. The approach is based on paired images of the same cells, in which variable image noise causes cell segmentation failures. Iteratively updating the cell segmentation model with accurate labels of noisy image areas improved the Dice coefficient from 77% to 86%. This was achieved by adding only 3.4% more cells to the training dataset, showing that local label transfer through graph matching is an effective augmentation strategy for improving segmentation.
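The core idea of transferring labels between paired images of the same cells can be sketched as a correspondence problem on cell centroids. The paper uses graph matching; the greedy nearest-centroid pairing below is a simplified stand-in for illustration, and the `max_dist` threshold and all coordinates are hypothetical.

```python
import numpy as np

def transfer_labels(clean_centroids, noisy_centroids, max_dist=5.0):
    # Greedily pair each cell found in the clean image with the nearest
    # unclaimed cell in the noisy image (a simplified stand-in for the
    # graph matching the paper uses); returns (clean_idx, noisy_idx) pairs.
    noisy = np.asarray(noisy_centroids, dtype=float)
    pairs = []
    taken = set()
    for i, c in enumerate(np.asarray(clean_centroids, dtype=float)):
        d = np.linalg.norm(noisy - c, axis=1)
        for j in np.argsort(d):
            if d[j] > max_dist:
                break  # remaining candidates are even farther away
            if j not in taken:
                taken.add(int(j))
                pairs.append((i, int(j)))
                break
    return pairs

clean = [[0, 0], [10, 10]]
noisy = [[1, 0], [10, 11], [50, 50]]
print(transfer_labels(clean, noisy))  # [(0, 0), (1, 1)]
```

Matched cells then carry their high-quality labels from the clean image into the corresponding noisy regions of the training set.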

9.
IEEE J Biomed Health Inform ; 24(12): 3520-3528, 2020 12.
Article in English | MEDLINE | ID: mdl-32750947

ABSTRACT

Retinal pigment epithelial (RPE) cells play an important role in nourishing retinal neurosensory photoreceptor cells, and numerous blinding diseases are associated with RPE defects. Their fluorescence signature can now be visualized in the living human eye using adaptive optics (AO) imaging combined with indocyanine green (ICG), which motivates us to develop an automated RPE detection method to improve the quantitative evaluation of RPE status in patients. This paper proposes a spatially-aware, Dense-LinkNet-based regression approach to improve the detection of in vivo fluorescent cell patterns, achieving precision, recall, and F1-Score of 93.6 ± 4.3%, 81.4 ± 9.5%, and 86.7 ± 5.7%, respectively. These results demonstrate the utility of incorporating spatial inputs into a deep learning-based regression framework for cell detection.
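The reported F1-score is the harmonic mean of precision and recall. Computing it from the mean precision and recall above gives roughly 87.1% rather than the quoted 86.7%, which is expected: the paper's figure is an average of per-image F1-scores, not the F1 of the averages.

```python
def f1_score(precision, recall):
    # harmonic mean of precision and recall
    return 2.0 * precision * recall / (precision + recall)

print(f"{100 * f1_score(0.936, 0.814):.1f}%")  # ~87.1% from the mean P/R
```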


Subjects
Deep Learning, Fluorescent Dyes, Computer-Assisted Image Processing/methods, Optical Imaging/methods, Retinal Pigment Epithelium, Ophthalmic Diagnostic Techniques, Fluorescent Dyes/analysis, Fluorescent Dyes/chemistry, Humans, Indocyanine Green/analysis, Retinal Pigment Epithelium/cytology, Retinal Pigment Epithelium/diagnostic imaging
10.
Med Image Comput Comput Assist Interv ; 11764: 201-208, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31696163

ABSTRACT

Data augmentation is an important strategy for enlarging training datasets in deep learning-based medical image analysis. This is because large, annotated medical datasets are not only difficult and costly to generate, but also quickly become obsolete due to rapid advances in imaging technology. Image-to-image conditional generative adversarial networks (C-GAN) offer a potential solution for data augmentation. However, the annotations used as inputs to C-GAN are typically based only on shape information, which can result in undesirable intensity distributions in the artificially-created images. In this paper, we introduce an active cell appearance model (ACAM) that measures statistical distributions of both shape and intensity, and we use this model to guide C-GAN to generate more realistic images, an approach we call A-GAN. A-GAN provides an effective means of conveying anisotropic intensity information to C-GAN, and its statistical model determines how transformations are applied during augmentation. Traditional augmentation approaches based on arbitrary transformations can introduce unrealistic shape variations that are not representative of real data; A-GAN is designed to ameliorate this. To validate the effectiveness of A-GAN for data augmentation, we assessed its performance on cell analysis in adaptive optics retinal imaging, a rapidly-changing medical imaging modality. Compared to C-GAN, A-GAN achieved stability in fewer iterations, and the cell detection and segmentation accuracy achieved with A-GAN augmentation was higher than that achieved with C-GAN. These findings demonstrate the potential for A-GAN to substantially improve existing data augmentation methods in medical image analysis.

11.
Ophthalmic Med Image Anal (2019) ; 11855: 86-94, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31701095

ABSTRACT

Direct visualization of photoreceptor cells, specialized neurons in the eye that sense light, can be achieved using adaptive optics (AO) retinal imaging. Evaluating photoreceptor cell morphology in retinal diseases is important for monitoring the onset and progression of blindness, and segmentation of these cells is a critical first step. Most segmentation approaches focus on cell region extraction without directly considering cell boundary localization. This makes it difficult to track cells that have ambiguous boundaries, which result from low image contrast, anisotropic cell regions, or densely-packed cells whose boundaries appear to touch each other, all characteristics of the AO images considered here. To address these challenges, we developed AOSegNet, a method that uses a multi-channel U-Net to predict the spatial probabilities of the cell boundary and to obtain cell centroid and region distribution information as a means of facilitating cell segmentation. The five-color theorem guarantees the separation of any touching cells. Finally, a region-based level set algorithm that combines all of these visual cues is used to achieve subpixel cell segmentation. Five-fold cross-validation on 428 high-resolution retinal images from 23 human subjects showed that AOSegNet substantially outperformed the only other existing approach, with Dice coefficients [%] of 84.7 versus 78.4 and average symmetric contour distances [µm] of 0.59 versus 0.80.
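The average symmetric contour distance used for evaluation can be sketched for contours discretized into point sets; a minimal brute-force version (the paper's exact implementation details, such as contour sampling density, are not specified):

```python
import numpy as np

def avg_symmetric_contour_distance(a, b):
    # mean of nearest-neighbour distances taken in both directions between
    # two contours given as (N, 2) arrays of boundary points
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

a = [[0, 0], [1, 0], [2, 0]]
b = [[0, 1], [1, 1], [2, 1]]  # the same contour shifted by 1 unit
print(avg_symmetric_contour_distance(a, b))  # 1.0
```

Averaging in both directions makes the metric symmetric, so neither contour is privileged as the "reference".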
