ABSTRACT
Regenerative therapies show promise in reversing sight loss caused by degenerative eye diseases. Their precise subretinal delivery can be facilitated by robotic systems alongside Intra-operative Optical Coherence Tomography (iOCT). However, the real-time retinal layer information provided by iOCT is compromised by inferior image quality. To address this limitation, we introduce an unpaired video super-resolution methodology for iOCT quality enhancement. A recurrent network is proposed to leverage temporal information from iOCT sequences and spatial information from pre-operatively acquired OCT images. Additionally, a patchwise contrastive loss enables unpaired super-resolution. Extensive quantitative analysis demonstrates that our approach outperforms existing state-of-the-art iOCT super-resolution models. Furthermore, ablation studies showcase the importance of temporal aggregation and of the contrastive loss in elevating iOCT quality. A qualitative study involving expert clinicians also confirms this improvement. This comprehensive evaluation demonstrates our method's potential to enhance iOCT image quality and thereby facilitate successful guidance of regenerative therapies.
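The patchwise contrastive loss mentioned above follows the InfoNCE formulation used in contrastive unpaired translation: an embedding of a patch from the network output should match the co-located input patch and differ from other input patches. A minimal NumPy sketch of that idea, with illustrative function and argument names not taken from the paper:

```python
import numpy as np

def patch_nce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE loss for a single patch embedding.

    query:     (d,) embedding of a patch from the super-resolved output
    positive:  (d,) embedding of the co-located patch in the input
    negatives: (n, d) embeddings of other ("negative") input patches
    """
    q = query / np.linalg.norm(query)
    p = positive / np.linalg.norm(positive)
    n = negatives / np.linalg.norm(negatives, axis=1, keepdims=True)
    logits = np.concatenate(([q @ p], n @ q)) / tau   # positive at index 0
    logits -= logits.max()                            # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                          # cross-entropy on the positive
```

In training, such a term is summed over many patch locations and feature layers, pulling each output patch toward its own input location without requiring paired HR targets.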
ABSTRACT
AIM: To explore whether novel non-invasive diagnostic technologies can identify early small nerve fibre and retinal neurovascular pathology in prediabetes. METHODS: Participants with normoglycaemia, prediabetes or type 2 diabetes underwent an exploratory cross-sectional analysis with optical coherence tomography angiography (OCT-A), handheld electroretinography (ERG), corneal confocal microscopy (CCM) and evaluation of electrochemical skin conductance (ESC). RESULTS: Seventy-five participants with normoglycaemia (n = 20), prediabetes (n = 29) and type 2 diabetes (n = 26) were studied. Compared with normoglycaemia, mean peak ERG amplitudes of retinal responses at low (16-Td·s: 4.05 µV, 95% confidence interval [95% CI] 0.96-7.13) and high (32-Td·s: 5.20 µV, 95% CI 1.54-8.86) retinal illuminance were lower in prediabetes, as were OCT-A parafoveal vessel densities in the superficial (0.051 pixels/mm², 95% CI 0.005-0.095) and deep (0.048 pixels/mm², 95% CI 0.003-0.093) retinal layers. There were no differences in CCM or ESC measurements between these two groups. Correlations with HbA1c were observed for peak ERG amplitude at 32-Td·s (r = -0.256, p = 0.028), implicit time at 32-Td·s (r = 0.422, p < 0.001) and 16-Td·s (r = 0.327, p = 0.005), OCT-A parafoveal vessel density in the superficial (r = -0.238, p = 0.049) and deep (r = -0.3, p = 0.017) retinal layers, corneal nerve fibre length (CNFL) (r = -0.293, p = 0.017), and ESC-hands (r = -0.244, p = 0.035). HOMA-IR was a predictor of CNFD (β = -0.94, 95% CI -1.66 to -0.21, p = 0.012) and CNBD (β = -5.02, 95% CI -10.01 to -0.05, p = 0.048). CONCLUSIONS: The glucose threshold for the diagnosis of diabetes is based on emergent retinopathy on fundus examination. We show that both abnormal retinal neurovascular structure (OCT-A) and function (ERG) may precede retinopathy in prediabetes; these findings require confirmation in larger, adequately powered studies.
Subjects
Diabetes Mellitus, Type 2; Prediabetic State; Retinal Diseases; Humans; Prediabetic State/diagnosis; Diabetes Mellitus, Type 2/diagnosis; Cross-Sectional Studies; Retina
ABSTRACT
PURPOSE: Intra-retinal delivery of novel sight-restoring therapies will require the precision of robotic systems accompanied by excellent visualisation of retinal layers. Intra-operative Optical Coherence Tomography (iOCT) provides cross-sectional retinal images in real time but at the cost of image quality that is insufficient for intra-retinal therapy delivery. This paper proposes a super-resolution methodology that improves iOCT image quality by leveraging the spatiotemporal consistency of incoming iOCT video streams. METHODS: To overcome the absence of ground-truth high-resolution (HR) images, we first generate HR iOCT images by fusing spatially aligned iOCT video frames. Then, we automatically assess the quality of the HR images on key retinal layers using a deep semantic segmentation model. Finally, we use image-to-image translation models (Pix2Pix and CycleGAN) to enhance the quality of low-resolution (LR) images via quality transfer from the estimated HR domain. RESULTS: Our proposed methodology generates iOCT images of improved quality according to both full-reference and no-reference metrics. A qualitative study with expert clinicians also confirms the improvement in the delineation of pertinent layers and in the reduction of artefacts. Furthermore, our approach outperforms conventional denoising filters and the learning-based state of the art. CONCLUSIONS: The results indicate that learning-based methods using the HR domain estimated through our pipeline can be used to enhance iOCT image quality. The proposed method can therefore computationally augment the capabilities of iOCT imaging, helping this modality support the vitreoretinal surgical interventions of the future.
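The HR-generation step (fusing spatially aligned video frames) rests on averaging co-registered B-scans so that uncorrelated speckle cancels while anatomy is reinforced. A minimal NumPy sketch under simplifying assumptions: rigid integer-pixel alignment found by FFT cross-correlation, whereas the paper's registration is more elaborate; the function name is illustrative:

```python
import numpy as np

def fuse_frames(frames, ref_idx=0):
    """Estimate a higher-quality frame by aligning consecutive B-scans to a
    reference frame and averaging them. Alignment is a circular integer
    shift found at the peak of the FFT cross-correlation."""
    ref = frames[ref_idx]
    fused = np.zeros_like(ref, dtype=float)
    for f in frames:
        # cross-correlation of ref with f; its argmax gives the shift
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(f))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        fused += np.roll(f, (dy, dx), axis=(0, 1))   # undo the displacement
    return fused / len(frames)
```

Averaging n aligned frames reduces uncorrelated noise variance by roughly a factor of n, which is what makes the fused estimate usable as a pseudo-HR target.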
Subjects
Retina; Optical Coherence Tomography; Cross-Sectional Studies; Humans; Retina/diagnostic imaging; Retina/surgery; Slit Lamp; Optical Coherence Tomography/methods
ABSTRACT
Regenerative therapies have recently shown potential in restoring sight lost to degenerative diseases. Their efficacy requires precise intra-retinal delivery, which can be achieved by robotic systems accompanied by high-quality visualization of retinal layers. Intra-operative Optical Coherence Tomography (iOCT) captures cross-sectional retinal images in real time but with image quality that is inadequate for intra-retinal therapy delivery. This paper proposes a two-stage super-resolution methodology that enhances the quality of low-resolution (LR) iOCT images by leveraging information from pre-operatively acquired high-resolution (HR) OCT (preOCT) images. First, we learn the degradation process from the HR to the LR domain through CycleGAN and use it to generate pseudo iOCT (LR) images from the HR preOCT ones. Then, we train a Pix2Pix model on the pairs of pseudo iOCT and preOCT images to learn the super-resolution mapping. Quantitative analysis using both full-reference and no-reference image quality metrics demonstrates that our approach clearly outperforms learning-based state-of-the-art techniques with statistical significance. Achieving iOCT image quality comparable to preOCT quality can help this medical imaging modality become established in vitreoretinal surgery without requiring expensive hardware-related system updates.
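The first stage of the two-stage pipeline amounts to manufacturing aligned training pairs: the learned HR-to-LR generator degrades every preOCT image, and each (pseudo-LR, HR) pair then supervises the paired Pix2Pix stage. A minimal sketch, where `toy_degrade` is a blur-plus-speckle stand-in for the trained CycleGAN generator and both names are illustrative:

```python
import numpy as np

def make_pseudo_pairs(hr_images, degrade):
    """Stage 1: degrade every HR preOCT image with the learned HR->LR model
    to obtain pixel-aligned (pseudo-LR, HR) pairs for paired training."""
    return [(degrade(hr), hr) for hr in hr_images]

def toy_degrade(img):
    """Stand-in for the CycleGAN generator: a 2x2 box blur followed by
    multiplicative gamma noise, qualitatively mimicking iOCT speckle."""
    rng = np.random.default_rng(0)
    blurred = img.copy().astype(float)
    blurred[:-1, :-1] = (img[:-1, :-1] + img[1:, :-1]
                         + img[:-1, 1:] + img[1:, 1:]) / 4.0
    return blurred * rng.gamma(shape=25.0, scale=1 / 25.0, size=img.shape)
```

Because the degradation is applied to the HR image itself, the pair is perfectly registered by construction, which is exactly what sidesteps the need for acquired paired data.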
ABSTRACT
This paper addresses retinal vessel segmentation on optical coherence tomography angiography (OCT-A) images of the human retina. Our approach is motivated by the need for high precision image-guided delivery of regenerative therapies in vitreo-retinal surgery. OCT-A visualizes macular vasculature, the main landmark of the surgically targeted area, at a level of detail and spatial extent unattainable by other imaging modalities. Thus, automatic extraction of detailed vessel maps can ultimately inform surgical planning. We address the task of delineation of the Superficial Vascular Plexus in 2D Maximum Intensity Projections (MIP) of OCT-A using convolutional neural networks that iteratively refine the quality of the produced vessel segmentations. We demonstrate that the proposed approach compares favourably to alternative network baselines and graph-based methodologies through extensive experimental analysis, using data collected from 50 subjects, including both individuals that underwent surgery for structural macular abnormalities and healthy subjects. Additionally, we demonstrate generalization to 3D segmentation and narrower field-of-view OCT-A. In the future, the extracted vessel maps will be leveraged for surgical planning and semi-automated intraoperative navigation in vitreo-retinal surgery.
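The iterative-refinement scheme above can be sketched independently of the CNN architecture: the model is run repeatedly, with each pass conditioned on the segmentation produced by the previous one. In this hedged sketch, `model` is any callable taking the image and the previous mask; the toy refiner used in the usage example is purely illustrative and not the paper's network:

```python
import numpy as np

def iterative_refine(image, model, steps=3):
    """Run a refinement model repeatedly, feeding each pass its own previous
    segmentation so later passes can correct earlier errors (for example,
    reconnecting vessel segments broken in the first prediction)."""
    seg = np.zeros_like(image, dtype=float)   # start from an empty mask
    for _ in range(steps):
        seg = model(image, seg)
    return seg
```

A trivial usage example: with `model = lambda im, prev: np.maximum(prev, (im > 0.5).astype(float))`, the loop converges to a simple threshold mask after the first pass; a learned refiner would instead change its output across passes.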
ABSTRACT
PURPOSE: Sustained delivery of regenerative retinal therapies by robotic systems requires intra-operative tracking of the retinal fundus. We propose a supervised deep convolutional neural network to densely predict semantic segmentation and optical flow of the retina as mutually supportive tasks, implicitly inpainting retinal flow information missing due to occlusion by surgical tools. METHODS: As manual annotation of optical flow is infeasible, we propose a flexible algorithm for generation of large synthetic training datasets on the basis of given intra-operative retinal images. We evaluate optical flow estimation by tracking a grid and sparsely annotated ground truth points on a benchmark of challenging real intra-operative clips obtained from an extensive internally acquired dataset encompassing representative vitreoretinal surgical cases. RESULTS: The U-Net-based network trained on the synthetic dataset is shown to generalise well to the benchmark of real surgical videos. When used to track retinal points of interest, our flow estimation outperforms variational baseline methods on clips containing tool motions which occlude the points of interest, as is routinely observed in intra-operatively recorded surgery videos. CONCLUSIONS: The results indicate that complex synthetic training datasets can be used to specifically guide optical flow estimation. Our proposed algorithm therefore lays the foundation for a robust system which can assist with intra-operative tracking of moving surgical targets even when occluded.
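The core of the synthetic-dataset idea is that warping an image with a generated flow field yields dense ground-truth flow for free, which no manual annotation could provide. A minimal NumPy sketch under stated simplifications: piecewise-constant smooth flow and nearest-neighbour backward warping, without the tool occlusions and photometric changes the paper's generator adds on top; the function name is illustrative:

```python
import numpy as np

def synthetic_flow_pair(image, max_disp=3.0, rng=None):
    """Build one synthetic training sample: warp `image` by a random,
    piecewise-smooth flow field and return (warped, flow). The flow used
    for warping is the dense ground truth. `image` must be 2-D with side
    lengths divisible by 8."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    # coarse random displacements, upsampled blockwise -> smooth-ish flow
    coarse = rng.uniform(-max_disp, max_disp, size=(2, h // 8, w // 8))
    flow = np.repeat(np.repeat(coarse, 8, axis=1), 8, axis=2)
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys - flow[0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - flow[1]).astype(int), 0, w - 1)
    warped = image[src_y, src_x]              # nearest-neighbour backward warp
    return warped, flow
```

A network trained on many such (image, warped, flow) samples can then be evaluated on real intra-operative clips, as done in the paper via tracked grid and ground-truth points.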