Results 1 - 19 of 19
1.
Am J Ophthalmol ; 269: 181-188, 2024 Aug 31.
Article in English | MEDLINE | ID: mdl-39218386

ABSTRACT

PURPOSE: This study aims to quantify the volume of intraretinal fluid (IRF), subretinal fluid (SRF), and subretinal pigment epithelium (sub-RPE) fluid in treatment-naïve Type 3 macular neovascularization (MNV) eyes with age-related macular degeneration (AMD) and to investigate the correlation of these fluid volumes with visual acuity (VA) outcomes at baseline and following anti-vascular endothelial growth factor (VEGF) treatment. DESIGN: Retrospective, clinical cohort study. METHODS: In this study, we analyzed patients diagnosed with exudative AMD and treatment-naïve Type 3 MNV undergoing a loading dose of anti-VEGF therapy. Using a validated deep-learning segmentation strategy, we processed optical coherence tomography (OCT) B-scans to segment and quantify IRF (i.e., both in the inner and outer retina), SRF, and sub-RPE fluid volumes at baseline. The study correlated baseline fluid volumes with baseline and short-term VA outcomes post-loading dose of anti-VEGF injections. RESULTS: Forty-six eyes from 46 patients were included in this study. Visual acuity was 0.51 ± 0.30 LogMAR at baseline and 0.33 ± 0.20 LogMAR after the loading dose of anti-VEGF (P = .001). Visual acuity at the follow-up visit was 0.40 ± 0.17 LogMAR in patients with no complete resolution of retinal fluid and 0.31 ± 0.20 LogMAR in eyes without retinal fluid after treatment (P = .225). In the multivariable analysis, the IRF volume in the inner retina (P = .032) and the distance of the MNV from the fovea (P = .037) were predictors of visual acuity at baseline. The baseline IRF volume in the inner retina also predicted the visual acuity at follow-up (P = .023). CONCLUSION: The present study highlights the fluid volume in the inner retina as a crucial predictor of short-term visual outcomes in Type 3 MNV, underscoring the detrimental effect of IRF on neuroretinal structures.

2.
Eye (Lond) ; 38(3): 537-544, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37670143

ABSTRACT

PURPOSE: To validate a deep learning algorithm for automated intraretinal fluid (IRF), subretinal fluid (SRF) and neovascular pigment epithelium detachment (nPED) segmentations in neovascular age-related macular degeneration (nAMD). METHODS: In this IRB-approved study, optical coherence tomography (OCT) data from 50 patients (50 eyes) with exudative nAMD were retrospectively analysed. Two models, A1 and A2, were created based on gradings from two masked readers, R1 and R2. Area under the curve (AUC) values gauged detection performance, and quantification between readers and models was evaluated using Dice and correlation (R2) coefficients. RESULTS: The deep learning-based algorithms had high accuracies for all fluid types between all models and readers: per B-scan IRF AUCs were 0.953, 0.932, 0.990, 0.942 for comparisons A1-R1, A1-R2, A2-R1 and A2-R2, respectively; SRF AUCs were 0.984, 0.974, 0.987, 0.979; and nPED AUCs were 0.963, 0.969, 0.961 and 0.966. Similarly, the R2 coefficients for IRF were 0.973, 0.974, 0.889 and 0.973; SRF were 0.928, 0.964, 0.965 and 0.998; and nPED were 0.908, 0.952, 0.839 and 0.905. The Dice coefficients for IRF averaged 0.702, 0.667, 0.649 and 0.631; for SRF were 0.699, 0.651, 0.692 and 0.701; and for nPED were 0.636, 0.703, 0.719 and 0.775. In an inter-observer comparison between manual readers R1 and R2, the R2 coefficient was 0.968 for IRF, 0.960 for SRF, and 0.906 for nPED, with Dice coefficients of 0.692, 0.660 and 0.784 for the same features. CONCLUSIONS: Our deep learning-based method applied on nAMD can segment critical OCT features with performance akin to manual grading.
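The Dice coefficients reported above measure voxel-overlap agreement between a model segmentation and a reader's grading. A minimal sketch of the metric on toy binary masks (the arrays below are illustrative stand-ins, not study data):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity between two binary segmentation masks."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

# Toy 4x4 masks standing in for one fluid class on a single B-scan
model  = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
reader = np.array([[0, 1, 0, 0], [0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(model, reader), 3))  # → 0.75
```

A Dice of 1.0 means pixel-perfect overlap; the 0.6-0.8 values in the abstract are typical for thin, irregular fluid pockets, where small boundary disagreements cost proportionally more overlap.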


Subject(s)
Deep Learning , Macular Degeneration , Retinal Detachment , Wet Macular Degeneration , Humans , Tomography, Optical Coherence/methods , Retrospective Studies , Subretinal Fluid , Macular Degeneration/drug therapy , Wet Macular Degeneration/diagnostic imaging , Wet Macular Degeneration/drug therapy , Angiogenesis Inhibitors/therapeutic use , Ranibizumab/therapeutic use , Intravitreal Injections
3.
Retina ; 43(3): 433-443, 2023 03 01.
Article in English | MEDLINE | ID: mdl-36705991

ABSTRACT

PURPOSE: To evaluate a prototype home optical coherence tomography device and automated analysis software for detection and quantification of retinal fluid relative to manual human grading in a cohort of patients with neovascular age-related macular degeneration. METHODS: Patients undergoing anti-vascular endothelial growth factor therapy were enrolled in this prospective observational study. In 136 optical coherence tomography scans from 70 patients using the prototype home optical coherence tomography device, fluid segmentation was performed using automated analysis software and compared with manual gradings across all retinal fluid types using receiver-operating characteristic curves. The Dice similarity coefficient was used to assess the accuracy of segmentations, and correlation of fluid areas quantified end point agreement. RESULTS: Fluid detection per B-scan had area under the receiver-operating characteristic curves of 0.95, 0.97, and 0.98 for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid, respectively. On a per volume basis, the values for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid were 0.997, 0.998, and 0.998, respectively. The average Dice similarity coefficient values across all B-scans were 0.64, 0.73, and 0.74, and the coefficients of determination were 0.81, 0.93, and 0.97 for intraretinal fluid, subretinal fluid, and subretinal pigment epithelium fluid, respectively. CONCLUSION: Home optical coherence tomography device images assessed using the automated analysis software showed excellent agreement to manual human grading.
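The per-B-scan AUCs above summarize how well a continuous per-scan fluid score separates reader-positive from reader-negative B-scans. A minimal rank-based (Mann-Whitney) AUC sketch, with hypothetical scores and labels:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen positive B-scan outscores a randomly chosen negative one."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # positive beats negative
    ties = (pos[:, None] == neg[None, :]).sum()     # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# Hypothetical per-B-scan fluid scores and reader ground truth
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0]
print(round(roc_auc(scores, labels), 3))
```

An AUC of 0.5 is chance-level ranking and 1.0 is perfect separation, which is why the 0.95-0.98 values reported above indicate near-perfect per-scan fluid detection.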


Subject(s)
Macular Degeneration , Wet Macular Degeneration , Humans , Tomography, Optical Coherence/methods , Retina , Subretinal Fluid , Software , Macular Degeneration/diagnosis , Angiogenesis Inhibitors
4.
Ophthalmic Surg Lasers Imaging Retina ; 53(4): 208-214, 2022 04.
Article in English | MEDLINE | ID: mdl-35417293

ABSTRACT

BACKGROUND AND OBJECTIVE: To determine whether an automated artificial intelligence (AI) model could assess macular hole (MH) volume on swept-source optical coherence tomography (OCT) images. PATIENTS AND METHODS: This was a proof-of-concept consecutive case series. Patients with an idiopathic full-thickness MH undergoing pars plana vitrectomy surgery with 1 year of follow-up were considered for inclusion. MHs were manually graded by a vitreoretinal surgeon from preoperative OCT images to delineate MH volume. This information was used to train a fully three-dimensional convolutional neural network for automatic segmentation. The main outcome was the correlation of manual MH volume to automated volume segmentation. RESULTS: The correlation between manual and automated MH volume was R2 = 0.94 (n = 24). Automated MH volume demonstrated a higher correlation to change in visual acuity from preoperative to the postoperative 1-year time point compared with the minimum linear diameter (volume: R2 = 0.53; minimum linear diameter: R2 = 0.39). CONCLUSION: MH automated volume segmentation on OCT imaging demonstrated high correlation to manual MH volume measurements. [Ophthalmic Surg Lasers Imaging Retina. 2022;53(4):208-214.].
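Once a 3-D segmentation of the macular hole exists, the volume being correlated above reduces to voxel count times per-voxel volume. A sketch under assumed (hypothetical) OCT voxel spacings, which vary by device and scan protocol:

```python
import numpy as np

def segmented_volume_mm3(mask, voxel_dims_mm):
    """Volume of a binary 3-D segmentation: voxel count x voxel volume."""
    mask = np.asarray(mask, dtype=bool)
    dx, dy, dz = voxel_dims_mm
    return mask.sum() * dx * dy * dz

# Toy volume: 128 B-scans x 512 A-scans x 1024 axial pixels
mask = np.zeros((128, 512, 1024), dtype=bool)
mask[60:68, 250:262, 400:480] = True  # toy macular-hole region
vol = segmented_volume_mm3(mask, (0.047, 0.0117, 0.0019))  # assumed spacings
print(round(vol, 4))
```

In contrast, the minimum linear diameter is a single caliper measurement on one B-scan, which is one plausible reason the volumetric measure tracked the 1-year acuity change more closely.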


Subject(s)
Deep Learning , Retinal Perforations , Artificial Intelligence , Humans , Retinal Perforations/diagnostic imaging , Retinal Perforations/surgery , Retrospective Studies , Tomography, Optical Coherence/methods , Vitrectomy/methods
5.
PLoS One ; 17(2): e0262111, 2022.
Article in English | MEDLINE | ID: mdl-35157713

ABSTRACT

PURPOSE: To evaluate the predictive ability of a deep learning-based algorithm to determine long-term best-corrected distance visual acuity (BCVA) outcomes in neovascular age-related macular degeneration (nARMD) patients using baseline swept-source optical coherence tomography (SS-OCT) and OCT-angiography (OCT-A) data. METHODS: In this phase IV, retrospective, proof-of-concept, single center study, SS-OCT data from 17 previously treated nARMD eyes was used to assess retinal layer thicknesses, as well as quantify intraretinal fluid (IRF), subretinal fluid (SRF), and serous pigment epithelium detachments (PEDs) using a novel deep learning-based, macular fluid segmentation algorithm. Baseline OCT and OCT-A morphological features and fluid measurements were correlated using the Pearson correlation coefficient (PCC) to changes in BCVA from baseline to week 52. RESULTS: Total retinal fluid (IRF, SRF and PED) volume at baseline had the strongest correlation to improvement in BCVA at month 12 (PCC = 0.652, p = 0.005). Fluid was subsequently sub-categorized into IRF, SRF and PED, with PED volume having the next highest correlation (PCC = 0.648, p = 0.005) to BCVA improvement. Average total retinal thickness in isolation demonstrated poor correlation (PCC = 0.334, p = 0.189). When two features, mean choroidal neovascular membrane (CNVM) size and total fluid volume, were combined and correlated with visual outcomes, the highest correlation increased to PCC = 0.695 (p = 0.002). CONCLUSIONS: In isolation, total fluid volume most closely correlates with change in BCVA values between baseline and week 52. In combination with complementary information from OCT-A, an improvement in the linear correlation score was observed. Average total retinal thickness provided a lower correlation, and thus provides a lower predictive outcome than alternative metrics assessed.
Clinically, a machine-learning approach to analyzing fluid metrics in combination with lesion size may provide an advantage in personalizing therapy and predicting BCVA outcomes at week 52.
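The Pearson correlation coefficient (PCC) used throughout this abstract can be computed directly from paired measurements. A sketch with made-up fluid volumes and BCVA changes (not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient of two paired samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return (xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum())

# Hypothetical baseline total fluid volumes (mm^3) vs 52-week BCVA change (letters)
fluid = [0.02, 0.15, 0.08, 0.30, 0.05, 0.22]
dbcva = [2, 9, 5, 12, 3, 10]
print(round(pearson_r(fluid, dbcva), 3))
```

With only 17 eyes, as in the study, individual PCC values carry wide uncertainty, which is consistent with the abstract's framing of the work as proof of concept.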


Subject(s)
Deep Learning , Subretinal Fluid/physiology , Tomography, Optical Coherence , Adult , Humans , Intravitreal Injections , Macular Degeneration/diagnosis , Macular Degeneration/diagnostic imaging , Macular Degeneration/drug therapy , Proof of Concept Study , Receptors, Vascular Endothelial Growth Factor/therapeutic use , Recombinant Fusion Proteins/therapeutic use , Retina/diagnostic imaging , Retina/physiology , Retinal Detachment/pathology , Retrospective Studies , Visual Acuity
6.
Sci Rep ; 11(1): 21688, 2021 11 04.
Article in English | MEDLINE | ID: mdl-34737384

ABSTRACT

Axonal loss is the main determinant of disease progression in multiple sclerosis (MS). This study aimed to assess the utility of corneal confocal microscopy (CCM) in detecting corneal axonal loss in different courses of MS. The results were confirmed by two independent segmentation methods. 72 subjects (144 eyes) [clinically isolated syndrome (n = 9); relapsing-remitting MS (n = 20); secondary-progressive MS (n = 22); and age-matched, healthy controls (n = 21)] underwent CCM and assessment of their disability status. Two independent algorithms (ACCMetrics and Voxeleron deepNerve) were used to quantify corneal nerve fiber density (CNFD) (ACCMetrics only), corneal nerve fiber length (CNFL) and corneal nerve fractal dimension (CNFrD). Data are expressed as mean ± standard deviation with 95% confidence interval (CI). Compared to controls, patients with MS had significantly lower CNFD (34.76 ± 5.57 vs. 19.85 ± 6.75 fibers/mm², 95% CI - 18.24 to - 11.59, P < .0001), CNFL [for ACCMetrics: 19.75 ± 2.39 vs. 12.40 ± 3.30 mm/mm², 95% CI - 8.94 to - 5.77, P < .0001; for deepNerve: 21.98 ± 2.76 vs. 14.40 ± 4.17 mm/mm², 95% CI - 9.55 to - 5.6, P < .0001] and CNFrD [for ACCMetrics: 1.52 ± 0.02 vs. 1.45 ± 0.04, 95% CI - 0.09 to - 0.05, P < .0001; for deepNerve: 1.29 ± 0.03 vs. 1.19 ± 0.07, 95% CI - 0.13 to - 0.07, P < .0001]. Corneal nerve parameters were comparably reduced in different courses of MS. There was excellent reproducibility between the algorithms. Significant corneal axonal loss is detected in different courses of MS, including patients with clinically isolated syndrome.
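The 95% confidence intervals quoted above have the usual mean-difference form. A normal-approximation sketch using the reported CNFD group means and SDs with the subject counts (21 controls, 51 MS patients); it will not exactly reproduce the paper's interval, which may be based on per-eye counts or a t quantile:

```python
import math

def mean_diff_ci(m1, sd1, n1, m2, sd2, n2, z=1.96):
    """Approximate 95% CI for the difference of two group means
    (normal approximation, unequal variances)."""
    diff = m2 - m1
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return diff - z * se, diff + z * se

# Controls vs. MS, CNFD (fibers/mm^2); group sizes taken at the subject level
lo, hi = mean_diff_ci(34.76, 5.57, 21, 19.85, 6.75, 51)
print(round(lo, 2), round(hi, 2))
```

The interval excludes zero by a wide margin, matching the reported P < .0001 for the CNFD group difference.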


Subject(s)
Cornea/diagnostic imaging , Cornea/innervation , Multiple Sclerosis/physiopathology , Adult , Axons/physiology , Biomarkers , Cornea/metabolism , Disease Progression , Female , Humans , Male , Microscopy, Confocal/methods , Middle Aged , Multiple Sclerosis/metabolism , Nerve Fibers , Reproducibility of Results
7.
Cornea ; 40(5): 635-642, 2021 May 01.
Article in English | MEDLINE | ID: mdl-33528225

ABSTRACT

PURPOSE: To characterize corneal subbasal nerve plexus features of normal and simian immunodeficiency virus (SIV)-infected macaques by combining in vivo corneal confocal microscopy (IVCM) with automated assessments using deep learning-based methods customized for macaques. METHODS: IVCM images were collected from both male and female age-matched rhesus and pigtailed macaques housed at the Johns Hopkins University breeding colony using the Heidelberg HRTIII with Rostock Corneal Module. We also obtained repeat IVCM images of 12 SIV-infected animals including preinfection and 10-day post-SIV infection time points. All IVCM images were analyzed using a deep convolutional neural network architecture developed specifically for macaque studies. RESULTS: Deep learning-based segmentation of subbasal nerves in IVCM images from macaques demonstrated that corneal nerve fiber length and fractal dimension measurements did not differ between species, but pigtailed macaques had significantly higher baseline corneal nerve fiber tortuosity than rhesus macaques (P = 0.005). Neither sex nor age of macaques was associated with differences in any of the assessed corneal subbasal nerve parameters. In the SIV/macaque model of human immunodeficiency virus, acute SIV infection induced significant decreases in both corneal nerve fiber length and fractal dimension (P = 0.01 and P = 0.008, respectively). CONCLUSIONS: The combination of IVCM and robust objective deep learning analysis is a powerful tool to track sensory nerve damage, enabling early detection of neuropathy. Adapting deep learning analyses to clinical corneal nerve assessments will improve monitoring of small sensory nerve fiber damage in numerous clinical settings including human immunodeficiency virus.


Subject(s)
Cornea/innervation , Deep Learning , Eye Infections, Viral/diagnosis , Microscopy, Confocal , Nerve Fibers/pathology , Simian Acquired Immunodeficiency Syndrome/diagnosis , Simian Immunodeficiency Virus/pathogenicity , Trigeminal Nerve Diseases/diagnosis , Acute Disease , Animals , Cornea/diagnostic imaging , Disease Models, Animal , Eye Infections, Viral/virology , Female , Humans , Macaca mulatta , Macaca nemestrina , Male , Middle Aged , Nerve Fibers/virology , Neural Networks, Computer , RNA, Viral/genetics , Real-Time Polymerase Chain Reaction , Simian Acquired Immunodeficiency Syndrome/virology , Simian Immunodeficiency Virus/genetics , Trigeminal Nerve Diseases/virology
8.
Transl Vis Sci Technol ; 9(2): 12, 2020 02 18.
Article in English | MEDLINE | ID: mdl-32704418

ABSTRACT

Purpose: The purpose of this study was to develop a 3D deep learning system from spectral domain optical coherence tomography (SD-OCT) macular cubes to differentiate between referable and nonreferable cases for glaucoma applied to real-world datasets to understand how this would affect the performance. Methods: There were 2805 Cirrus optical coherence tomography (OCT) macula volumes (Macula protocol 512 × 128) of 1095 eyes from 586 patients at a single site that were used to train a fully 3D convolutional neural network (CNN). Referable glaucoma included true glaucoma, pre-perimetric glaucoma, and high-risk suspects, based on qualitative fundus photographs, visual fields, OCT reports, and clinical examinations, including intraocular pressure (IOP) and treatment history as the binary (two class) ground truth. The curated real-world dataset did not include eyes with retinal disease or nonglaucomatous optic neuropathies. The cubes were first homogenized using layer segmentation with the Orion Software (Voxeleron) to achieve standardization. The algorithm was tested on two separate external validation sets from different glaucoma studies, comprised of Cirrus macular cube scans of 505 and 336 eyes, respectively. Results: The area under the receiver operating characteristic (AUROC) curve for the development dataset for distinguishing referable glaucoma was 0.88 for our CNN using homogenization, 0.82 without homogenization, and 0.81 for a CNN architecture from the existing literature. For the external validation datasets, which had different glaucoma definitions, the AUCs were 0.78 and 0.95, respectively. The performance of the model across myopia severity distribution has been assessed in the dataset from the United States and was found to have an AUC of 0.85, 0.92, and 0.95 in the severe, moderate, and mild myopia, respectively. 
Conclusions: A 3D deep learning algorithm trained on macular OCT volumes without retinal disease to detect referable glaucoma performs better with retinal segmentation preprocessing and performs reasonably well across all levels of myopia. Translational Relevance: Interpretation of OCT macula volumes based on normative data color distributions is highly influenced by population demographics and characteristics, such as refractive error, as well as the size of the normative database. Referable glaucoma, in this study, was chosen to include cases that should be seen by a specialist. This study is unique because it uses multimodal patient data for the glaucoma definition, and includes all severities of myopia as well as validates the algorithm with international data to understand generalizability potential.


Subject(s)
Deep Learning , Glaucoma , Macula Lutea , Optic Nerve Diseases , Glaucoma/diagnosis , Humans , Macula Lutea/diagnostic imaging , Tomography, Optical Coherence
9.
Eye Vis (Lond) ; 7: 27, 2020.
Article in English | MEDLINE | ID: mdl-32420401

ABSTRACT

BACKGROUND: To develop and validate a deep learning-based approach to the fully-automated analysis of macaque corneal sub-basal nerves using in vivo confocal microscopy (IVCM). METHODS: IVCM was used to collect 108 images from 35 macaques. 58 of the images from 22 macaques were used to evaluate different deep convolutional neural network (CNN) architectures for the automatic analysis of sub-basal nerves relative to manual tracings. The remaining images were used to independently assess correlations and inter-observer performance relative to three readers. RESULTS: Correlation scores using the coefficient of determination between readers and the best CNN averaged 0.80. For inter-observer comparison, intraclass correlation coefficients (ICCs) between the three expert readers and the automated approach were 0.75, 0.85 and 0.92. The ICC between all four observers was 0.84, the same as the average between the CNN and individual readers. CONCLUSIONS: Deep learning-based segmentation of sub-basal nerves in IVCM images shows high to very high correlation to manual segmentations in macaque data and is indistinguishable across readers. As quantitative measurements of corneal sub-basal nerves are important biomarkers for disease screening and management, the reported work offers utility to a variety of research and clinical studies using IVCM.

10.
Invest Ophthalmol Vis Sci ; 60(2): 712-722, 2019 02 01.
Article in English | MEDLINE | ID: mdl-30786275

ABSTRACT

Purpose: To develop and assess a method for predicting the likelihood of converting from early/intermediate to advanced wet age-related macular degeneration (AMD) using optical coherence tomography (OCT) imaging and methods of deep learning. Methods: Seventy-one eyes of 71 patients with confirmed early/intermediate AMD with contralateral wet AMD were imaged with OCT three times over 2 years (baseline, year 1, year 2). These eyes were divided into two groups: eyes that had not converted to wet AMD (n = 40) at year 2 and those that had (n = 31). Two deep convolutional neural networks (CNN) were evaluated using 5-fold cross validation on the OCT data at baseline to attempt to predict which eyes would convert to advanced AMD at year 2: (1) VGG16, a popular CNN for image recognition was fine-tuned, and (2) a novel, simplified CNN architecture was trained from scratch. Preprocessing was added in the form of a segmentation-based normalization to reduce variance in the data and improve performance. Results: Our new architecture, AMDnet, with preprocessing, achieved an area under the receiver operating characteristic (ROC) curve (AUC) of 0.89 at the B-scan level and 0.91 for volumes. Results for VGG16, an established CNN architecture, with preprocessing were 0.82 for B-scans/0.87 for volumes versus 0.66 for B-scans/0.69 for volumes without preprocessing. Conclusions: A CNN with layer segmentation-based preprocessing shows strong predictive power for the progression of early/intermediate AMD to advanced AMD. Use of the preprocessing was shown to improve performance regardless of the network architecture.
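The 5-fold cross validation used above partitions the 71 eyes into five held-out folds, so every eye is tested exactly once. A minimal index-splitting sketch (the CNN training and evaluation themselves are omitted):

```python
import random

def kfold_indices(n_items, k=5, seed=0):
    """Shuffle item indices and split them into k near-equal folds."""
    idx = list(range(n_items))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

folds = kfold_indices(71, k=5)  # 71 eyes, as in the study
for held_out in range(5):
    test_idx = folds[held_out]
    train_idx = [i for j, f in enumerate(folds) if j != held_out for i in f]
    # train CNN on train_idx, evaluate on test_idx (omitted here)
    assert len(train_idx) + len(test_idx) == 71  # every eye used once per split
print([len(f) for f in folds])
```

In practice the split should be done at the patient level when a dataset contains both eyes of some patients, so that no patient appears in both the training and held-out folds.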


Subject(s)
Deep Learning , Diagnosis, Computer-Assisted/methods , Wet Macular Degeneration/diagnosis , Aged , Aged, 80 and over , Disease Progression , Female , Follow-Up Studies , Humans , Machine Learning , Male , Middle Aged , Neural Networks, Computer , Pilot Projects , ROC Curve , Tomography, Optical Coherence/methods
11.
Eye (Lond) ; 33(3): 428-434, 2019 03.
Article in English | MEDLINE | ID: mdl-30310161

ABSTRACT

PURPOSE: To evaluate longitudinally volume changes in inner and outer retinal layers in early and intermediate age-related macular degeneration (AMD) compared to healthy control eyes using optical coherence tomography (OCT). METHODS: 71 eyes with AMD and 31 control eyes were imaged at two time points: baseline and after 2 years. Automated OCT layer segmentation was performed using Orion™. This software is able to measure volumes of retinal layers with distinct boundaries including Retinal Nerve Fibre Layer (RNFL), Ganglion Cell-Inner Plexiform Layer (GCIPL), Inner Nuclear Layer (INL), Outer Plexiform Layer (OPL), Outer Nuclear Layer (ONL), Photoreceptors (PR) and Retinal Pigment Epithelium-Bruch's Membrane complex (RPE-BM). The mean retinal layer volumes and volume changes at 2 years were compared between groups. RESULTS: Mean GCIPL and INL volumes were lower, while PR and RPE-BM volumes were higher in AMD eyes than controls at baseline (all P < 0.05) and year 2 (all P < 0.05). In AMD eyes, RNFL and ONL volumes decreased by 0.0232 (P = 0.033) and 0.0851 (P = 0.001), respectively. In contrast, OPL and RPE-BM volumes increased in AMD eyes by 0.0391 (P < 0.001) and 0.0209 (P < 0.001), respectively. Moreover, there were significant differences in longitudinal volume change of OPL (P = 0.02), ONL (P = 0.008) and RPE-BM (P = 0.02) between AMD eyes and controls. CONCLUSIONS: There were abnormal retinal layer volumes and volume changes in eyes with early and intermediate AMD.


Subject(s)
Bruch Membrane/pathology , Macular Degeneration/pathology , Retina/pathology , Retinal Ganglion Cells/pathology , Retinal Pigment Epithelium/pathology , Aged , Bruch Membrane/diagnostic imaging , Disease Progression , Female , Humans , Macular Degeneration/diagnostic imaging , Macular Degeneration/physiopathology , Male , Middle Aged , Retina/diagnostic imaging , Retrospective Studies , Tomography, Optical Coherence , Visual Acuity
12.
Graefes Arch Clin Exp Ophthalmol ; 254(3): 561-7, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26016810

ABSTRACT

PURPOSE: To characterise the changes of the retinal layers in patients with acute anterior ischaemic optic neuropathy (AION), aiming to identify imaging markers for predicting the residual visual function. METHODS: This was a retrospective review of consecutive patients with unilateral AION from January 2010 to December 2013. We analysed affected eyes at baseline and 1 month later, compared to fellow healthy eyes. Utilising novel image analysis software, we conducted algorithmic segmentation in layers and division in early treatment of diabetic retinopathy study (ETDRS) quadrants of optical coherence tomography images of the macula. Pearson product moment regression analysis of retinal layer thickness and best corrected visual acuity (BCVA) in logMAR units and mean deviation of the SITA 24-2 visual field (VF) were carried out at the 1-month time point. RESULTS: Twenty eyes from 20 patients were included and compared to 20 healthy fellow eyes. At baseline, we found a significantly increased mean thickness of the retinal nerve fibre layer (RNFL) of 42.2 µm (±6.7SD) in AION eyes compared to 37.9 µm (±4.2 SD) in healthy eyes (p = 0.002). The outer nuclear layer (ONL) was also significantly thickened at 96.6 µm (±7.2 SD) compared to 90.8 µm (±5.7 SD) in the fellow eye (p < 0.001). After 1 month, the RNFL and the ganglion cell layer (GCL) were thinned 17.7 % [to 31.2 µm (±6.4 SD), p < 0.001] and 19.3 % [to 66.5 µm (±7.0 SD), p < 0.001] compared to the contralateral eye. Additionally, the ONL remained thickened at 96.7 µm (±7.0 SD, p < 0.001). At baseline, we found a significant correlation between the ONL thickness and the VF (r = -0.482, p = 0.005) and the BCVA at discharge (r = 0.552, p < 0.001), indicating that a thicker ONL correlates with poorer visual function. The GCL thickness also correlates with the BCVA at discharge (r = 0.411, p = 0.02), where a thinner GCL predicts worse BCVA. 
At the 1-month time point, the GCL thinning was correlated with both the VF (r = 0.471, p = 0.005) and the BCVA (r = -0.456, p = 0.007), indicating worse visual function. CONCLUSIONS: Changes in the thickness of different layers of the retina occur early in the course of AION and evolve over time, resulting in the atrophy of the GCL and RNFL. ONL thickening at baseline is associated with visual dysfunction. Thinning of the GCL after 1 month correlates with poorer VF and BCVA at 1 month after acute AION.


Subject(s)
Nerve Fibers/pathology , Optic Neuropathy, Ischemic/physiopathology , Retinal Ganglion Cells/pathology , Visual Acuity/physiology , Visual Fields/physiology , Acute Disease , Aged , Arteritis/physiopathology , Female , Humans , Male , Middle Aged , Retrospective Studies , Tomography, Optical Coherence
13.
Article in English | MEDLINE | ID: mdl-17354804

ABSTRACT

Computer-aided detection (CAD) has become increasingly common in recent years as a tool in catching breast cancer in its early, more treatable stages. More and more breast centers are using CAD as studies continue to demonstrate its effectiveness. As the technology behind CAD improves, so do its results and its impact on society. In trying to improve the sensitivity and specificity of CAD algorithms, a good deal of work has been done on feature extraction, the generation of mathematical representations of mammographic features which can help distinguish true cancerous lesions from false ones. One feature that is not currently seen in the literature that physicians rely on in making their decisions is location within the breast. This is a difficult feature to calculate as it requires a good deal of prior knowledge as well as some way of accounting for the tremendous variability present in breast shapes. In this paper, we present a method for the generation and implementation of a probabilistic breast cancer atlas. We then validate this method on data from the Digital Database for Screening Mammography (DDSM).
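At its simplest, a probabilistic atlas of the kind described can be built by averaging spatially normalized binary lesion masks into a per-pixel occurrence probability, which can then serve as a location feature for a CAD classifier. A toy sketch (the registration to a common breast coordinate frame is assumed to have already happened; this is not the paper's full method):

```python
import numpy as np

def build_probability_atlas(lesion_masks):
    """Average registered binary lesion masks into a per-pixel
    probability map of lesion occurrence."""
    stack = np.stack([np.asarray(m, dtype=float) for m in lesion_masks])
    return stack.mean(axis=0)

# Toy 5x5 'registered' masks standing in for normalized mammograms
masks = [np.zeros((5, 5)) for _ in range(4)]
for m in masks[:3]:
    m[2, 2] = 1          # lesions recur near the same normalized location
masks[3][0, 4] = 1       # one outlier location
atlas = build_probability_atlas(masks)
print(atlas[2, 2], atlas[0, 4])
```

Looking up a candidate lesion's atlas value then gives a prior probability for that location, which is the location feature the paragraph argues is missing from existing CAD feature sets.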


Subject(s)
Algorithms , Anatomy, Artistic/methods , Breast Neoplasms/diagnostic imaging , Imaging, Three-Dimensional/methods , Information Storage and Retrieval/methods , Mammography/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Artificial Intelligence , Computer Simulation , Data Interpretation, Statistical , Databases, Factual , Female , Humans , Image Enhancement/methods , Medical Illustration , Models, Anatomic , Models, Biological , Models, Statistical , Pattern Recognition, Automated/methods , Reproducibility of Results , Sensitivity and Specificity , Signal Processing, Computer-Assisted
14.
Med Phys ; 32(9): 2870-80, 2005 Sep.
Article in English | MEDLINE | ID: mdl-16266101

ABSTRACT

Computation of digitally reconstructed radiograph (DRR) images is the rate-limiting step in most current intensity-based algorithms for the registration of three-dimensional (3D) images to two-dimensional (2D) projection images. This paper introduces and evaluates the progressive attenuation field (PAF), which is a new method to speed up DRR computation. A PAF is closely related to an attenuation field (AF). A major difference is that a PAF is constructed on the fly as the registration proceeds; it does not require any precomputation time, nor does it make any prior assumptions of the patient pose or limit the permissible range of patient motion. A PAF effectively acts as a cache memory for projection values once they are computed, rather than as a lookup table for precomputed projections like standard AFs. We use a cylindrical attenuation field parametrization, which is better suited for many medical applications of 2D-3D registration than the usual two-plane parametrization. The computed attenuation values are stored in a hash table for time-efficient storage and access. Using clinical gold-standard spine image data sets from five patients, we demonstrate consistent speedups of intensity-based 2D-3D image registration using PAF DRRs by a factor of 10 over conventional ray casting DRRs with no decrease of registration accuracy or robustness.
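The PAF's on-the-fly caching of projection values in a hash table can be sketched as memoization keyed on discretized ray parameters: the first request for a ray bin pays the full ray-casting cost, and repeat requests during registration are table lookups. The ray-casting function and parametrization below are illustrative stand-ins, not the paper's cylindrical parametrization:

```python
def make_cached_projector(cast_ray):
    """Wrap an expensive ray-casting function with a hash-table cache,
    mimicking how a PAF stores attenuation values on first use."""
    cache = {}
    stats = {"misses": 0}

    def project(ray_params):
        key = tuple(round(p, 3) for p in ray_params)  # discretized ray bin
        if key not in cache:
            stats["misses"] += 1          # full ray cast only on first request
            cache[key] = cast_ray(ray_params)
        return cache[key]

    return project, stats

# Stand-in for summing attenuation along one ray through the CT volume
expensive = lambda ray: sum(ray) * 0.1
project, stats = make_cached_projector(expensive)
for _ in range(3):
    project((0.5, 1.25, 2.0))  # the same ray is re-requested as poses repeat
print(stats["misses"])
```

Because nearby poses during optimization re-request many of the same ray bins, the cache hit rate climbs as registration proceeds, which is the mechanism behind the reported order-of-magnitude speedup without any precomputation.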


Subject(s)
Imaging, Three-Dimensional , Radiographic Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed , Algorithms , Humans , Spine/diagnostic imaging
15.
IEEE Trans Med Imaging ; 24(11): 1441-54, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16279081

ABSTRACT

Generation of digitally reconstructed radiographs (DRRs) is computationally expensive and is typically the rate-limiting step in the execution time of intensity-based two-dimensional to three-dimensional (2D-3D) registration algorithms. We address this computational issue by extending the technique of light field rendering from the computer graphics community. The extension of light fields, which we call attenuation fields (AFs), allows most of the DRR computation to be performed in a preprocessing step; after this precomputation step, DRRs can be generated substantially faster than with conventional ray casting. We derive expressions for the physical sizes of the two planes of an AF necessary to generate DRRs for a given X-ray camera geometry and all possible object motion within a specified range. Because an AF is a ray-based data structure, it is substantially more memory efficient than a huge table of precomputed DRRs because it eliminates the redundancy of replicated rays. Nonetheless, an AF can require substantial memory, which we address by compressing it using vector quantization. We compare DRRs generated using AFs (AF-DRRs) to those generated using ray casting (RC-DRRs) for a typical C-arm geometry and computed tomography images of several anatomic regions. They are quantitatively very similar: the median peak signal-to-noise ratio of AF-DRRs versus RC-DRRs is greater than 43 dB in all cases. We perform intensity-based 2D-3D registration using AF-DRRs and RC-DRRs and evaluate registration accuracy using gold-standard clinical spine image data from four patients. The registration accuracy and robustness of the two methods is virtually identical whereas the execution speed using AF-DRRs is an order of magnitude faster.
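The peak signal-to-noise ratio used above to compare AF-DRRs with RC-DRRs is a simple function of the mean squared error between the two images. A sketch on toy images (the arrays are illustrative; the >43 dB figure is the paper's reported median):

```python
import numpy as np

def psnr_db(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images."""
    ref = np.asarray(reference, dtype=float)
    tst = np.asarray(test, dtype=float)
    mse = np.mean((ref - tst) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

# Toy stand-ins for an RC-DRR and its AF-DRR approximation
rc = np.full((64, 64), 120.0)
af = rc + np.random.default_rng(0).normal(0.0, 1.0, rc.shape)  # small error
print(round(psnr_db(rc, af), 1))
```

PSNR rises as the approximation error shrinks, so a median above 43 dB means the vector-quantized attenuation field reproduces the ray-cast DRRs with only small per-pixel deviations.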


Subject(s)
Algorithms; Imaging, Three-Dimensional/methods; Radiographic Image Enhancement/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Spine/diagnostic imaging; Subtraction Technique; Surgery, Computer-Assisted/methods; Computer Systems; Humans; Reproducibility of Results; Scattering, Radiation; Sensitivity and Specificity; Signal Processing, Computer-Assisted; Spine/surgery
16.
IEEE Trans Med Imaging ; 24(11): 1455-68, 2005 Nov.
Article in English | MEDLINE | ID: mdl-16279082

ABSTRACT

Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate: 1.7 s average processing time per frame and projection device. Results using PI were slightly more accurate, but required on average 5.4 s per frame. These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
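The backprojection-and-combine step, turning per-view in-plane 2-D shifts into one 3-D motion estimate, reduces for pure translations under an orthographic approximation to a linear least-squares problem. A minimal sketch under those assumptions (the paper handles full rigid motion and perspective projection; names are illustrative):

```python
import numpy as np

def backproject_motion(view_bases, shifts_2d):
    """Combine in-plane 2-D displacements measured in several X-ray views
    into a single 3-D translation by linear least squares.

    view_bases : list of (2, 3) arrays, the image-plane axes of each view
    shifts_2d  : list of length-2 arrays, the tracked 2-D shift in each view
    """
    A = np.vstack(view_bases)   # each view contributes two linear equations
    b = np.concatenate([np.asarray(s, float) for s in shifts_2d])
    delta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return delta
```

With two or more non-coplanar views the stacked system is full rank, so the out-of-plane component that any single projection cannot observe is recovered from the other views.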


Subject(s)
Algorithms; Imaging, Three-Dimensional/methods; Motion; Neuronavigation/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Radiosurgery/methods; Subtraction Technique; Artifacts; Artificial Intelligence; Computer Systems; Humans; Radiographic Image Enhancement/methods; Reproducibility of Results; Sensitivity and Specificity
17.
Acad Radiol ; 12(1): 37-50, 2005 Jan.
Article in English | MEDLINE | ID: mdl-15691724

ABSTRACT

RATIONALE AND OBJECTIVES: The two-dimensional (2D)-three dimensional (3D) registration of a computed tomography image to one or more x-ray projection images has a number of image-guided therapy applications. In general, fiducial marker-based methods are fast, accurate, and robust, but marker implantation is not always possible, is often considered too invasive to be clinically acceptable, and entails risk. There is also the unresolved issue of whether it is acceptable to leave markers permanently implanted. Intensity-based registration methods do not require the use of markers and can be automated because such geometric features as points and surfaces do not need to be segmented from the images. However, for spine images, intensity-based methods are susceptible to local optima in the cost function and thus need initial transformations that are close to the correct transformation. MATERIALS AND METHODS: In this report, we propose a hybrid similarity measure for 2D-3D registration that is a weighted combination of an intensity-based similarity measure (mutual information) and a point-based measure using one fiducial marker. We evaluate its registration accuracy and robustness by using gold-standard clinical spine image data from four patients. RESULTS: Mean registration errors for successful registrations for the four patients were 1.3 and 1.1 mm for the intensity-based and hybrid similarity measures, respectively. Whereas the percentage of successful intensity-based registrations (registration error < 2.5 mm) decreased rapidly as the initial transformation moved farther from the correct transformation, the incorporation of a single marker produced successful registrations more than 99% of the time independent of the initial transformation. CONCLUSION: The use of one fiducial marker reduces 2D-3D spine image registration error slightly and improves robustness substantially. The findings are potentially relevant for image-guided therapy. If one marker is sufficient to obtain clinically acceptable registration accuracy and robustness, as the preliminary results using the proposed hybrid similarity measure suggest, the marker can be placed on a spinous process, which could be accomplished without penetrating muscle or using fluoroscopic guidance, and such a marker could be removed relatively easily.
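The hybrid measure the abstract describes, an intensity term combined with a single marker's reprojection distance, can be sketched as follows. The weight and scale factors here are illustrative placeholders, not the paper's values, and the histogram-based mutual information is a generic estimator rather than the authors' exact implementation:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information between two equally shaped images."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)   # marginal over image a
    py = p.sum(axis=0, keepdims=True)   # marginal over image b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

def hybrid_similarity(fixed, drr, marker_2d, marker_proj, weight=0.5, scale=0.1):
    """Weighted combination (to be maximised) of an intensity term and the
    reprojection distance of a single fiducial marker.

    marker_2d   : detected 2-D marker position in the fixed X-ray image
    marker_proj : projection of the 3-D marker under the candidate transform
    """
    dist = np.linalg.norm(np.asarray(marker_2d, float) - np.asarray(marker_proj, float))
    return weight * mutual_information(fixed, drr) - (1.0 - weight) * scale * dist
```

The point term penalises transformations that project the marker far from its detected 2-D position, which is what pulls the optimiser out of the local optima that a pure intensity measure is prone to for spine images.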


Subject(s)
Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted/methods; Tomography, X-Ray Computed/methods; Algorithms; Calibration; Cervical Vertebrae/diagnostic imaging; Equipment Design; Humans; Image Processing, Computer-Assisted/instrumentation; Radiosurgery/instrumentation; Radiosurgery/methods; Spinal Diseases/surgery; Spine/diagnostic imaging; Surgery, Computer-Assisted/instrumentation; Thoracic Vertebrae/diagnostic imaging
18.
IEEE Trans Med Imaging ; 23(8): 983-94, 2004 Aug.
Article in English | MEDLINE | ID: mdl-15338732

ABSTRACT

It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. We have previously shown this to be true for atlas-based segmentation of biomedical images. The conventional method for combining individual classifiers weights each classifier equally (vote or sum rule fusion). In this paper, we propose two methods that estimate the performances of the individual classifiers and combine the individual classifiers by weighting them according to their estimated performance. The two methods are multiclass extensions of an expectation-maximization (EM) algorithm for ground truth estimation of binary classification based on decisions of multiple experts (Warfield et al., 2004). The first method performs parameter estimation independently for each class with a subsequent integration step. The second method considers all classes simultaneously. We demonstrate the efficacy of these performance-based fusion methods by applying them to atlas-based segmentations of three-dimensional confocal microscopy images of bee brains. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. In a second evaluation study, multiple actual atlas-based segmentations are combined and their accuracies computed by comparing them to a manual segmentation. 
We demonstrate in both evaluation studies that segmentations produced by combining multiple individual registration-based segmentations are more accurate for the two classifier fusion methods we propose, which weight the individual classifiers according to their EM-based performance estimates, than for simple sum rule fusion, which weights each classifier equally.
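The performance-based fusion idea, estimating each classifier's reliability by EM and weighting its votes accordingly, can be sketched for the simultaneous multiclass case roughly as below. This is a simplified STAPLE-style EM written for illustration, not the authors' exact formulation; the initialisation, iteration count, and smoothing constants are assumptions:

```python
import numpy as np

def em_label_fusion(decisions, n_classes, n_iter=20):
    """Multiclass EM ground-truth estimation from several segmentations.

    decisions : (n_raters, n_voxels) integer label array
    Returns the fused labels and each rater's estimated confusion matrix
    theta[r][k, l] = P(rater r assigns label l | true label k).
    """
    R, N = decisions.shape
    K = n_classes
    # initialise confusion matrices near the identity, flat class prior
    theta = np.full((R, K, K), 0.1 / (K - 1))
    for r in range(R):
        np.fill_diagonal(theta[r], 0.9)
    prior = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior P(truth = k | all raters' labels), per voxel
        logw = np.log(prior)[None, :].repeat(N, axis=0)
        for r in range(R):
            logw += np.log(theta[r][:, decisions[r]]).T   # (N, K)
        w = np.exp(logw - logw.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        # M-step: re-estimate class prior and confusion matrices
        prior = w.mean(axis=0)
        for r in range(R):
            for l in range(K):
                theta[r][:, l] = w[decisions[r] == l].sum(axis=0)
            theta[r] /= theta[r].sum(axis=1, keepdims=True) + 1e-12
        theta = np.clip(theta, 1e-6, None)
    return w.argmax(axis=1), theta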


Subject(s)
Algorithms; Brain/cytology; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated; Subtraction Technique; Anatomy, Artistic; Animals; Bees; Cluster Analysis; Computer Simulation; Image Enhancement/methods; Information Storage and Retrieval/methods; Likelihood Functions; Medical Illustration; Microscopy, Confocal/methods; Models, Biological; Models, Statistical; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted
19.
Inf Process Med Imaging ; 18: 210-21, 2003 Jul.
Article in English | MEDLINE | ID: mdl-15344459

ABSTRACT

It is well known in the pattern recognition community that the accuracy of classifications obtained by combining decisions made by independent classifiers can be substantially higher than the accuracy of the individual classifiers. In order to combine multiple segmentations we introduce two extensions to an expectation maximization (EM) algorithm for ground truth estimation based on multiple experts (Warfield et al., MICCAI 2002). The first method repeatedly applies the Warfield algorithm with a subsequent integration step. The second method is a multi-label extension of the Warfield algorithm. Both extensions integrate multiple segmentations into one that is closer to the unknown ground truth than the individual segmentations. In atlas-based image segmentation, multiple classifiers arise naturally by applying different registration methods to the same atlas, or the same registration method to different atlases, or both. We perform a validation study designed to quantify the success of classifier combination methods in atlas-based segmentation. By applying random deformations, a given ground truth atlas is transformed into multiple segmentations that could result from imperfect registrations of an image to multiple atlas images. We demonstrate that a segmentation produced by combining multiple individual registration-based segmentations is more accurate for the two EM methods we propose than for simple label averaging.
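For contrast, the "simple label averaging" baseline that both EM extensions are compared against is just a per-voxel majority vote over the individual segmentations. A minimal sketch (the tie-breaking rule is an assumption for illustration):

```python
import numpy as np

def label_vote(decisions):
    """Per-voxel majority vote over multiple segmentations.

    decisions : (n_raters, n_voxels) integer label array
    Ties are resolved in favour of the lowest label (argmax convention).
    """
    R, N = decisions.shape
    K = decisions.max() + 1
    counts = np.zeros((K, N), dtype=int)
    for r in range(R):
        np.add.at(counts, (decisions[r], np.arange(N)), 1)  # tally each rater's label
    return counts.argmax(axis=0)
```

Every rater carries equal weight here, which is precisely the assumption the performance-weighted EM methods relax.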


Subject(s)
Algorithms; Brain/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Models, Biological; Pattern Recognition, Automated; Subtraction Technique; Animals; Atlases as Topic; Bees; Computer Simulation; Expert Systems; Image Enhancement/methods; Likelihood Functions; Models, Statistical; Reproducibility of Results; Sensitivity and Specificity