1.
BMC Ophthalmol; 24(1): 387, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227901

ABSTRACT

BACKGROUND: To analyse and compare the grading of diabetic retinopathy (DR) severity level using standard 35° ETDRS 7-fields photography and the CLARUS™ 500 ultra-widefield imaging system. METHODS: A cross-sectional analysis of retinal images of patients with type 2 diabetes (n = 160 eyes) was performed. All patients underwent 7-fields colour fundus photography (CFP) at 35° on a standard Topcon TRC-50DX® camera, and ultra-widefield (UWF) imaging at 200° on a CLARUS™ 500 (ZEISS, Dublin, CA, USA) by an automatic montage of two 133° images (nasal and temporal). The 35° 7-fields photographs were graded by two graders according to the Early Treatment Diabetic Retinopathy Study (ETDRS) protocol. For the CLARUS UWF images, a prototype 7-fields grid was applied using the CLARUS review software, and the same ETDRS grading procedures were performed inside that area only. DR severity level was compared between the two methods to evaluate the agreement between the imaging techniques. RESULTS: Images of 160 eyes from 83 diabetic patients were considered for analysis. According to the 35° ETDRS 7-fields images, 22 eyes were graded DR severity level 10-20, 64 eyes level 35, 41 eyes level 43, 21 eyes level 47, 7 eyes level 53, and 5 eyes level 61. The same DR severity level was reached with CLARUS 500 UWF images in 92 eyes (57%), showing almost perfect agreement (k > 0.80) with the 7-fields 35° technique. Fifty-seven eyes (36%) showed a higher DR level with CLARUS UWF images, mostly due to better visualization of haemorrhages and a higher detection rate of intraretinal microvascular abnormalities (IRMA). Only 11 eyes (7%) showed a lower severity level with the CLARUS UWF system, due to artifacts or media opacities that precluded the correct evaluation of DR lesions. CONCLUSIONS: The UWF CLARUS 500 device showed nearly perfect agreement with standard 35° 7-fields images across all ETDRS severity levels.
Moreover, CLARUS images showed an increased ability to detect haemorrhages and IRMA, helping with a finer evaluation of lesions and demonstrating that a UWF photograph can be used to grade ETDRS severity level with better visualization than the standard 7-fields images. TRIAL REGISTRATION: Approved by the AIBILI - Association for Innovation and Biomedical Research on Light and Image Ethics Committee for Health under number CEC/009/17-EYEMARKER.
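The agreement statistic reported above (k > 0.80) is Cohen's kappa, which corrects raw percent agreement for chance. The sketch below is a plain-Python unweighted kappa over exact severity-level matches; the labels are hypothetical illustrations, not the study's data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa: chance-corrected agreement between two
    equal-length label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from the marginal label frequencies of each method
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ETDRS severity levels from the two imaging methods
seven_field = [20, 35, 35, 43, 47, 35, 53, 61, 35, 43]
uwf_grid    = [20, 35, 43, 43, 47, 35, 53, 61, 35, 43]
kappa = cohens_kappa(seven_field, uwf_grid)   # > 0.80, "almost perfect"
```

A weighted kappa (penalizing near-misses less than distant disagreements) would be the natural refinement for ordinal severity scales.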


Subjects
Diabetes Mellitus, Type 2, Diabetic Retinopathy, Photography, Severity of Illness Index, Humans, Diabetic Retinopathy/diagnosis, Diabetic Retinopathy/diagnostic imaging, Cross-Sectional Studies, Female, Male, Middle Aged, Photography/methods, Aged, Diabetes Mellitus, Type 2/complications, Fundus Oculi, Diagnostic Techniques, Ophthalmological, Adult, Reproducibility of Results
2.
Int J Retina Vitreous; 10(1): 9, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263402

ABSTRACT

BACKGROUND: Automated identification of spectral-domain optical coherence tomography (SD-OCT) features can improve retina clinic workflow efficiency by detecting pathologic findings. The purpose of this study was to test a deep learning (DL)-based algorithm for the identification of idiopathic full-thickness macular hole (IFTMH) features and stages of severity in SD-OCT B-scans. METHODS: In this cross-sectional study, subjects diagnosed solely with either IFTMH or posterior vitreous detachment (PVD) were identified, excluding secondary causes of macular holes, any concurrent maculopathies, and incomplete records. SD-OCT scans (512 × 128) from all subjects were acquired with CIRRUS™ HD-OCT (ZEISS, Dublin, CA) and reviewed for quality. To establish a ground-truth classification, each SD-OCT B-scan was labeled by two trained graders and adjudicated by a retina specialist when applicable. Two test sets were built based on different gold-standard classification methods. The sensitivity, specificity and accuracy of the algorithm in identifying IFTMH features in SD-OCT B-scans were determined. Spearman's correlation was run to examine whether the algorithm's probability score was associated with the severity stages of IFTMH. RESULTS: Six hundred and one SD-OCT cube scans from 601 subjects (299 with IFTMH and 302 with PVD) were used. A total of 76,928 individual SD-OCT B-scans were labeled gradable by the algorithm and yielded an accuracy of 88.5% (test set 1, 33,024 B-scans) and 91.4% (test set 2, 43,904 B-scans) in identifying SD-OCT features of IFTMHs. A Spearman's correlation coefficient of 0.15 was achieved between the algorithm's probability score and the stages of the 299 IFTMH cubes studied (47 [15.7%] stage 2, 56 [18.7%] stage 3 and 196 [65.6%] stage 4). CONCLUSIONS: The DL-based algorithm was able to accurately detect IFTMH features on individual SD-OCT B-scans in both test sets.
However, the correlation between the algorithm's probability score and IFTMH severity stage was low. The algorithm may serve as a clinical decision support tool that assists with the identification of IFTMHs. Further training is necessary for the algorithm to identify the stages of IFTMHs.
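The stage-association test above uses Spearman's rank correlation, i.e. Pearson correlation computed on ranks, with tied values receiving their average rank. A minimal stdlib implementation follows; the probability scores and stages are hypothetical, not the study's data.

```python
def average_ranks(xs):
    """1-based ranks; tied values share the average of their positions."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2 + 1           # average 1-based rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical algorithm probability scores vs. macular hole stages
scores = [0.20, 0.35, 0.55, 0.40, 0.80, 0.90]
stages = [2, 2, 3, 3, 4, 4]
rho = spearman(scores, stages)
```

A rho of 0.15, as reported above, indicates the score carries little ordinal information about stage, which motivates the authors' call for further training.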

3.
Ophthalmol Retina; 7(2): 127-141, 2023 Feb.
Article in English | MEDLINE | ID: mdl-35970318

ABSTRACT

PURPOSE: To present a deep learning algorithm for segmentation of geographic atrophy (GA) using en face swept-source OCT (SS-OCT) images that is accurate and reproducible for the assessment of GA growth over time. DESIGN: Retrospective review of images obtained as part of a prospective natural history study. SUBJECTS: Patients with GA (n = 90), patients with early or intermediate age-related macular degeneration (n = 32), and healthy controls (n = 16). METHODS: An automated algorithm using scan volume data to generate 3 image inputs characterizing the main OCT features of GA (hypertransmission in the sub-retinal pigment epithelium [sub-RPE] slab, regions of RPE loss, and loss of retinal thickness) was trained on 126 images (93 with GA and 33 without GA, from the same number of eyes) using a fivefold cross-validation method and data augmentation techniques. It was tested on an independent set of one hundred eighty 6 × 6-mm² macular SS-OCT scans, consisting of 3 repeated scans of 30 eyes with GA at baseline and follow-up, as well as 45 images obtained from 42 eyes without GA. MAIN OUTCOME MEASURES: The GA area, enlargement rate of GA area, square root of GA area, and square root of the enlargement rate of GA area were calculated using the automated algorithm and compared with ground-truth calculations performed by 2 manual graders. The repeatability of these measurements was determined using intraclass correlation coefficients (ICCs). RESULTS: There were no significant differences in the GA areas, enlargement rates of GA area, square roots of GA area, or square roots of the enlargement rates of GA area between the graders and the automated algorithm. The algorithm showed high repeatability, with ICCs of 0.99 and 0.94 for the GA area measurements and the enlargement rates of GA area, respectively. The repeatability limit for the GA area measurements made by grader 1, grader 2, and the automated algorithm was 0.28, 0.33, and 0.92 mm², respectively.
CONCLUSIONS: Compared with manual methods, the proposed deep learning-based automated algorithm for GA segmentation using en face SS-OCT images was able to accurately delineate GA and produce reproducible measurements of the enlargement rates of GA.
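The square root of GA area appears as an outcome measure above because square-root-transformed growth rates are less dependent on baseline lesion size than raw area growth. A minimal sketch with hypothetical areas (not study data):

```python
import math

def sqrt_enlargement_rate(baseline_mm2, followup_mm2, years):
    """Square-root-transformed GA enlargement rate, in mm/year:
    the change in the square root of lesion area per unit time."""
    return (math.sqrt(followup_mm2) - math.sqrt(baseline_mm2)) / years

# Hypothetical lesion growing from 4.0 mm2 to 6.25 mm2 over one year
rate = sqrt_enlargement_rate(4.0, 6.25, 1.0)   # 0.5 mm/year
```

For a roughly circular lesion, sqrt(area) is proportional to the radius, so this transform reports growth of the lesion border rather than of its area.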


Subjects
Deep Learning, Geographic Atrophy, Humans, Geographic Atrophy/diagnosis, Fluorescein Angiography, Prospective Studies, Tomography, Optical Coherence/methods, Retinal Pigment Epithelium
4.
Stud Health Technol Inform; 173: 500-5, 2012.
Article in English | MEDLINE | ID: mdl-22357044

ABSTRACT

Translational science requires the use of mouse models for the characterization of disease and the evaluation of treatment therapies. However, scientists often lack comprehensive training in the systemic and regional anatomy of the mouse, which limits their ability to perform studies involving complex interventional procedures. We present our methodologies for the development, evaluation, and dissemination of an interactive 3D mouse atlas that includes designs for presenting emulation of procedural technique. We also present the novel integration of super-resolution imaging techniques, depth-of-field interactive volume rendering of large data, and the seamless delivery of remote visualization and interaction to thin clients.


Subjects
Anatomy, Computer Simulation, Image Processing, Computer-Assisted/methods, Animals, Imaging, Three-Dimensional, Mice
5.
Biomed Opt Express; 12(9): 5387-5399, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34692189

ABSTRACT

This work explores a student-teacher framework that leverages unlabeled images to train lightweight deep learning models with fewer parameters to perform fast automated detection of optical coherence tomography B-scans of interest. Twenty-seven lightweight models (LWMs) from four families of models were trained on expert-labeled B-scans (∼70 K) as either "abnormal" or "normal", which established a baseline performance for the models. The LWMs were then trained from random initialization using a student-teacher framework to incorporate a large number of unlabeled B-scans (∼500 K). A pre-trained ResNet50 model served as the teacher network. The ResNet50 teacher model achieved 96.0% validation accuracy, and the validation accuracy achieved by the LWMs ranged from 89.6% to 95.1%. The best-performing LWMs were 2.53 to 4.13 times faster than ResNet50 (0.109 s to 0.178 s vs. 0.452 s). All LWMs benefited from enlarging the training set with unlabeled B-scans in the student-teacher framework, with several models achieving validation accuracy of 96.0% or higher. The three best-performing models achieved sensitivity and specificity comparable to the teacher network in two hold-out test sets. We demonstrated the effectiveness of a student-teacher framework for training fast LWMs for automated detection of B-scans of interest, leveraging unlabeled, routinely available data.
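The student-teacher mechanism described above — a teacher model pseudo-labels the unlabeled pool, and a student is trained on the labeled plus pseudo-labeled data — can be illustrated with a toy 1-D threshold classifier. Everything below is a hypothetical sketch of the training scheme, not the paper's ResNet50/LWM setup.

```python
def fit_threshold(xs, ys):
    """Toy 'model': pick the threshold t maximizing accuracy of the
    rule predict(x) = 1 if x >= t else 0, by brute force over values."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(xs)):
        acc = sum((x >= t) == bool(y) for x, y in zip(xs, ys)) / len(xs)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Hypothetical data: a small labeled set and a larger unlabeled pool
labeled_x = [0.1, 0.2, 0.8, 0.9]
labeled_y = [0, 0, 1, 1]
unlabeled_x = [0.15, 0.3, 0.7, 0.85]

# 1. Fit the teacher on labeled data only
teacher_t = fit_threshold(labeled_x, labeled_y)
# 2. Teacher pseudo-labels the unlabeled pool
pseudo_y = [int(x >= teacher_t) for x in unlabeled_x]
# 3. Student trains on labeled + pseudo-labeled data
student_t = fit_threshold(labeled_x + unlabeled_x, labeled_y + pseudo_y)
```

In practice the student is a smaller, faster network than the teacher, and the benefit comes from the much larger effective training set rather than from the toy geometry shown here.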

7.
Int J Biomed Imaging; 2013: 820874, 2013.
Article in English | MEDLINE | ID: mdl-24174930

ABSTRACT

3D isotropic imaging at high spatial resolution (30-100 microns) is important for comparing mouse phenotypes, but it is limited by long acquisition times and is not possible in many in vivo settings. Super-resolution reconstruction (SRR) is a postprocessing technique proposed to improve spatial resolution in the slice-select direction using multiple 2D multislice acquisitions; any 2D multislice acquisition can be used for SRR. In this study, the effects of using three different low-resolution acquisition geometries (orthogonal, rotational, and shifted) on SRR images were evaluated and compared to a known standard. Iterative back projection was used for the reconstruction of all three acquisition geometries. The results indicate that super-resolution reconstructed images based on orthogonally acquired low-resolution images had higher SNR and CNR, in less acquisition time, than those based on rotational and shifted acquisition geometries. However, interpolation artifacts were observed in SRR images based on orthogonal acquisition geometry, particularly when the slice thickness was greater than six times the in-plane voxel size. Reconstructions based on rotational geometry appeared smoother than those based on orthogonal geometry, but took twice as long to acquire as the orthogonal LR images.
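Iterative back projection, the reconstruction method named above, can be sketched in 1-D for the shifted-acquisition geometry: simulate each low-resolution acquisition from the current high-resolution estimate, then project the residual error back onto the contributing HR samples. The forward model (adjacent-pair averaging) and the signal are hypothetical; note this toy system is underdetermined, so the estimate is made consistent with the acquisitions rather than recovering a unique truth.

```python
def downsample(hr, shift):
    """Simulated low-resolution acquisition: average adjacent HR sample
    pairs, starting at offset `shift` (the shifted geometry)."""
    return [(hr[i] + hr[i + 1]) / 2.0 for i in range(shift, len(hr) - 1, 2)]

def iterative_back_projection(lr_sets, hr_len, iters=500):
    """Fuse shifted LR acquisitions into one HR estimate by repeatedly
    back-projecting the simulation error for each acquisition in turn."""
    est = [0.0] * hr_len
    for _ in range(iters):
        for shift, lr in lr_sets:
            sim = downsample(est, shift)
            for k, (measured, simulated) in enumerate(zip(lr, sim)):
                err = measured - simulated
                est[shift + 2 * k] += err        # spread the error onto the
                est[shift + 2 * k + 1] += err    # two contributing HR samples
    return est

truth = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # hypothetical HR signal
lr_a = downsample(truth, 0)              # two shifted LR acquisitions
lr_b = downsample(truth, 1)
est = iterative_back_projection([(0, lr_a), (1, lr_b)], len(truth))
```

Each inner update is an exact projection onto one averaging constraint, so the loop is alternating projections between the two acquisitions' constraint sets and converges to an estimate that reproduces both LR images.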
