Results 1 - 6 of 6
1.
Int J Retina Vitreous ; 10(1): 9, 2024 Jan 23.
Article in English | MEDLINE | ID: mdl-38263402

ABSTRACT

BACKGROUND: Automated identification of spectral domain optical coherence tomography (SD-OCT) features can improve retina clinic workflow efficiency by detecting pathologic findings. The purpose of this study was to test a deep learning (DL)-based algorithm for the identification of idiopathic full-thickness macular hole (IFTMH) features and stages of severity in SD-OCT B-scans.

METHODS: In this cross-sectional study, subjects diagnosed solely with either IFTMH or posterior vitreous detachment (PVD) were identified, excluding secondary causes of macular holes, any concurrent maculopathies, and incomplete records. SD-OCT scans (512 × 128) from all subjects were acquired with CIRRUS™ HD-OCT (ZEISS, Dublin, CA) and reviewed for quality. To establish a ground-truth classification, each SD-OCT B-scan was labeled by two trained graders and adjudicated by a retina specialist when necessary. Two test sets were built based on different gold-standard classification methods. The sensitivity, specificity, and accuracy of the algorithm in identifying IFTMH features in SD-OCT B-scans were determined. Spearman's correlation was run to examine whether the algorithm's probability score was associated with the severity stages of IFTMH.

RESULTS: Six hundred one SD-OCT cube scans from 601 subjects (299 with IFTMH and 302 with PVD) were used. A total of 76,928 individual SD-OCT B-scans were labeled gradable by the algorithm, and the algorithm achieved an accuracy of 88.5% (test set 1, 33,024 B-scans) and 91.4% (test set 2, 43,904 B-scans) in identifying SD-OCT features of IFTMHs. A Spearman's correlation coefficient of 0.15 was achieved between the algorithm's probability score and the stages of the 299 IFTMH cubes studied (47 [15.7%] stage 2, 56 [18.7%] stage 3, and 196 [65.6%] stage 4).

CONCLUSIONS: The DL-based algorithm accurately detected IFTMH features on individual SD-OCT B-scans in both test sets. However, the correlation between the algorithm's probability score and IFTMH severity stage was low. The algorithm may serve as a clinical decision support tool that assists with the identification of IFTMHs. Further training is necessary for the algorithm to identify stages of IFTMHs.
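The stage analysis above rests on Spearman's rank correlation between probability scores and ordinal severity stages. A minimal sketch of how such a coefficient is computed (the data below are hypothetical, not the study's):

```python
import numpy as np

def rankdata(a):
    """Assign ranks 1..n, averaging the ranks of tied values."""
    a = np.asarray(a, dtype=float)
    order = np.argsort(a)
    ranks = np.empty(len(a), dtype=float)
    ranks[order] = np.arange(1, len(a) + 1)
    for v in np.unique(a):
        tied = a == v
        ranks[tied] = ranks[tied].mean()
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank-transformed data."""
    rx, ry = rankdata(x), rankdata(y)
    return float(np.corrcoef(rx, ry)[0, 1])

# Hypothetical algorithm probability scores vs. macular hole stages (2, 3, or 4)
scores = [0.55, 0.62, 0.71, 0.80, 0.90, 0.65]
stages = [2, 2, 3, 4, 4, 3]
rho = spearman(scores, stages)
```

A rho near 0.15, as reported, indicates only a weak monotone relationship between score and stage.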

3.
Ophthalmol Retina ; 7(2): 127-141, 2023 02.
Article in English | MEDLINE | ID: mdl-35970318

ABSTRACT

PURPOSE: To present a deep learning algorithm for segmentation of geographic atrophy (GA) using en face swept-source OCT (SS-OCT) images that is accurate and reproducible for the assessment of GA growth over time.

DESIGN: Retrospective review of images obtained as part of a prospective natural history study.

SUBJECTS: Patients with GA (n = 90), patients with early or intermediate age-related macular degeneration (n = 32), and healthy controls (n = 16).

METHODS: An automated algorithm using scan volume data to generate 3 image inputs characterizing the main OCT features of GA (hypertransmission in the sub-retinal pigment epithelium [sub-RPE] slab, regions of RPE loss, and loss of retinal thickness) was trained on 126 images (93 with GA and 33 without GA, from the same number of eyes) using fivefold cross-validation and data augmentation techniques. It was tested on an independent set of 180 6 × 6-mm² macular SS-OCT scans, consisting of 3 repeated scans of 30 eyes with GA at baseline and follow-up, as well as 45 images obtained from 42 eyes without GA.

MAIN OUTCOME MEASURES: The GA area, enlargement rate of GA area, square root of GA area, and square root of the enlargement rate of GA area were calculated by the automated algorithm and compared with ground-truth measurements performed by 2 manual graders. The repeatability of these measurements was determined using intraclass correlation coefficients (ICCs).

RESULTS: There were no significant differences between the graders and the automated algorithm in the GA areas, enlargement rates of GA area, square roots of GA area, or square roots of the enlargement rates of GA area. The algorithm showed high repeatability, with ICCs of 0.99 and 0.94 for the GA area measurements and the enlargement rates of GA area, respectively. The repeatability limits for the GA area measurements made by grader 1, grader 2, and the automated algorithm were 0.28, 0.33, and 0.92 mm², respectively.

CONCLUSIONS: When compared with manual methods, this deep learning-based automated algorithm for GA segmentation using en face SS-OCT images accurately delineated GA and produced reproducible measurements of GA enlargement rates.
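The square-root transform of GA area mentioned above is a common way to express enlargement rates so they depend less on baseline lesion size. A minimal sketch with hypothetical numbers (not study data):

```python
import math

def sqrt_enlargement_rate(baseline_mm2, followup_mm2, years):
    """Enlargement rate on the square-root scale (mm/year): the change in the
    square root of lesion area divided by the follow-up interval."""
    return (math.sqrt(followup_mm2) - math.sqrt(baseline_mm2)) / years

# Hypothetical lesion: 4.0 mm^2 at baseline, 6.25 mm^2 one year later
rate = sqrt_enlargement_rate(4.0, 6.25, 1.0)  # (2.5 - 2.0) / 1 = 0.5 mm/year
```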


Subject(s)
Deep Learning; Geographic Atrophy; Humans; Geographic Atrophy/diagnosis; Fluorescein Angiography; Prospective Studies; Tomography, Optical Coherence/methods; Retinal Pigment Epithelium
4.
Biomed Opt Express ; 12(9): 5387-5399, 2021 Sep 01.
Article in English | MEDLINE | ID: mdl-34692189

ABSTRACT

This work explores a student-teacher framework that leverages unlabeled images to train lightweight deep learning models with fewer parameters to perform fast automated detection of optical coherence tomography B-scans of interest. Twenty-seven lightweight models (LWMs) from four families of models were trained on expert-labeled B-scans (∼70 K) as either "abnormal" or "normal", which established a baseline performance for the models. The LWMs were then trained from random initialization using a student-teacher framework to incorporate a large number of unlabeled B-scans (∼500 K). A pre-trained ResNet50 model served as the teacher network. The ResNet50 teacher model achieved 96.0% validation accuracy, and the validation accuracy achieved by the LWMs ranged from 89.6% to 95.1%. The best performing LWMs were 2.53 to 4.13 times faster than ResNet50 (0.109 s to 0.178 s vs. 0.452 s). All LWMs benefited from increasing the training set by including unlabeled B-scans in the student-teacher framework, with several models achieving validation accuracy of 96.0% or higher. The three best-performing models achieved sensitivity and specificity comparable to the teacher network in two hold-out test sets. We demonstrated the effectiveness of a student-teacher framework for training fast LWMs for automated detection of B-scans of interest, leveraging unlabeled, routinely available data.
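The student-teacher idea above can be sketched at a toy scale: a teacher fit on labeled data pseudo-labels the unlabeled pool, and the student is trained on the combined set. This sketch substitutes a 1-D nearest-centroid classifier for the CNNs, and all data are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D features: "normal" clusters near 0, "abnormal" near 3
X_lab = np.array([0.1, -0.2, 3.1, 2.9])
y_lab = np.array([0, 0, 1, 1])
X_unlab = rng.normal(loc=np.repeat([0.0, 3.0], 50), scale=0.3)  # 100 unlabeled samples

# Teacher: nearest-centroid classifier fit on the small labeled set
centroids = np.array([X_lab[y_lab == k].mean() for k in (0, 1)])
pseudo = np.argmin(np.abs(X_unlab[:, None] - centroids[None, :]), axis=1)

# Student: refit on labeled + teacher-pseudo-labeled data combined
X_all = np.concatenate([X_lab, X_unlab])
y_all = np.concatenate([y_lab, pseudo])
student_centroids = np.array([X_all[y_all == k].mean() for k in (0, 1)])
```

The student's class centroids end up estimated from far more data than the labeled set alone provides, which is the benefit the paper reports at CNN scale.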

5.
Int J Biomed Imaging ; 2013: 820874, 2013.
Article in English | MEDLINE | ID: mdl-24174930

ABSTRACT

3D isotropic imaging at high spatial resolution (30-100 microns) is important for comparing mouse phenotypes. 3D imaging at high spatial resolutions is limited by long acquisition times and is not possible in many in vivo settings. Super resolution reconstruction (SRR) is a postprocessing technique that has been proposed to improve spatial resolution in the slice-select direction using multiple 2D multislice acquisitions. Any 2D multislice acquisition can be used for SRR. In this study, the effects of using three different low-resolution (LR) acquisition geometries (orthogonal, rotational, and shifted) on SRR images were evaluated and compared to a known standard. Iterative back projection was used for the reconstruction of all three acquisition geometries. The results of the study indicate that super resolution reconstructed images based on orthogonally acquired low-resolution images resulted in reconstructed images with higher SNR and CNR in less acquisition time than those based on rotational and shifted acquisition geometries. However, interpolation artifacts were observed in SRR images based on orthogonal acquisition geometry, particularly when the slice thickness was greater than six times the in-plane voxel size. Reconstructions based on rotational geometry appeared smoother than those based on orthogonal geometry, but they required twice the acquisition time of the orthogonal LR images.
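Iterative back projection, the reconstruction method named above, alternates between simulating the low-resolution acquisition from the current high-resolution estimate and back-projecting the residual error. A minimal 1-D sketch (a toy block-average acquisition model, not the paper's MRI forward model):

```python
import numpy as np

def downsample(hr, factor):
    # Simulated acquisition: block-average groups of `factor` HR samples
    return hr.reshape(-1, factor).mean(axis=1)

def iterative_back_projection(lr, factor, n_iter=50, step=1.0):
    # Initial HR estimate: nearest-neighbor upsampling of the LR signal
    hr = np.repeat(lr, factor).astype(float)
    for _ in range(n_iter):
        err = lr - downsample(hr, factor)    # residual in LR space
        hr += step * np.repeat(err, factor)  # back-project residual into HR space
    return hr

# Hypothetical 1-D "ground truth" and its simulated low-resolution acquisition
truth = np.sin(np.linspace(0, np.pi, 16))
lr = downsample(truth, 4)
recon = iterative_back_projection(lr, 4)
```

At convergence the reconstruction is consistent with the observed LR data (re-downsampling it reproduces `lr`); in practice multiple shifted, rotated, or orthogonal LR stacks constrain the estimate further, which is what the acquisition-geometry comparison in the study is about.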

6.
Stud Health Technol Inform ; 173: 500-5, 2012.
Article in English | MEDLINE | ID: mdl-22357044

ABSTRACT

Translational science requires the use of mouse models for the characterization of disease and evaluation of treatment therapies. However, scientists often lack comprehensive training in the systemic and regional anatomy of the mouse, which limits their ability to perform studies involving complex interventional procedures. We present our methodologies for the development, evaluation, and dissemination of an interactive 3D mouse atlas that includes designs for emulating procedural techniques. We present the novel integration of super-resolution imaging techniques, depth-of-field interactive volume rendering of large data, and seamless delivery of remote visualization and interaction to thin clients.


Subject(s)
Anatomy; Computer Simulation; Image Processing, Computer-Assisted/methods; Animals; Imaging, Three-Dimensional; Mice