1.
Sci Rep ; 13(1): 19667, 2023 Nov 11.
Article in English | MEDLINE | ID: mdl-37952011

ABSTRACT

Recent developments in deep learning have shown success in accurately predicting the location of biological markers in Optical Coherence Tomography (OCT) volumes of patients with Age-Related Macular Degeneration (AMD) and Diabetic Retinopathy (DR). We propose a method that automatically assigns biological markers to the Early Treatment Diabetic Retinopathy Study (ETDRS) rings, requiring only B-scan-level presence annotations. We trained a neural network on 22,723 OCT B-scans of 460 eyes (433 patients) with AMD and DR, annotated with slice-level labels for Intraretinal Fluid (IRF) and Subretinal Fluid (SRF). The neural network outputs were mapped into the corresponding ETDRS rings, and class annotations and domain knowledge were incorporated into a loss function that constrains the output to biologically plausible solutions. The method was tested on a set of OCT volumes from 322 eyes (189 patients) with Diabetic Macular Edema, with slice-level SRF and IRF presence annotations for the ETDRS rings. Our method accurately predicted the presence of IRF and SRF in each ETDRS ring, outperforming previous baselines even in the most challenging scenarios. The model was also successfully applied to en-face marker segmentation and showed consistency within C-scans, despite not incorporating volume information during training. For the prediction of the IRF area we achieved a correlation coefficient of 0.946.
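To make the ring-mapping step concrete, here is a minimal sketch (toy data and hypothetical helper names, not the authors' code) of how per-pixel marker probabilities on an en-face grid can be pooled into the three standard ETDRS regions (a 1 mm central disc and 3 mm / 6 mm rings) to obtain per-ring presence flags:

```python
# Hypothetical sketch: aggregating en-face marker probabilities into ETDRS rings.
import numpy as np

def etdrs_ring_masks(shape, center, mm_per_px):
    """Boolean masks for the central 1 mm disc and the 3 mm / 6 mm rings."""
    ys, xs = np.indices(shape)
    # Radial distance of every en-face pixel from the fovea center, in mm.
    r = np.hypot(ys - center[0], xs - center[1]) * mm_per_px
    return {
        "center_1mm": r <= 0.5,
        "inner_3mm": (r > 0.5) & (r <= 1.5),
        "outer_6mm": (r > 1.5) & (r <= 3.0),
    }

def ring_presence(prob_map, masks, threshold=0.5):
    """Max-pool per-pixel probabilities inside each ring -> presence flag."""
    return {name: bool(prob_map[m].max() > threshold) for name, m in masks.items()}

# Toy example: a 128x128 en-face probability map for one marker (e.g. IRF).
rng = np.random.default_rng(0)
probs = rng.random((128, 128)) * 0.4          # mostly below threshold
probs[60:68, 60:68] = 0.9                     # simulated fluid near the fovea
masks = etdrs_ring_masks(probs.shape, center=(64, 64), mm_per_px=6.0 / 128)
print(ring_presence(probs, masks))
```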


Subject(s)
Diabetic Retinopathy , Macular Degeneration , Macular Edema , Humans , Diabetic Retinopathy/diagnostic imaging , Macular Edema/diagnostic imaging , Tomography, Optical Coherence/methods , Macular Degeneration/diagnostic imaging , Biomarkers
2.
Int J Comput Assist Radiol Surg ; 18(7): 1185-1192, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37184768

ABSTRACT

PURPOSE: Surgical scene understanding plays a critical role in the technology stack of tomorrow's intervention-assisting systems for endoscopic surgery. Tracking the endoscope pose is a key component of this, but remains challenging due to illumination conditions, deforming tissues, and the breathing motion of organs. METHOD: We propose a solution for stereo endoscopes that estimates depth and optical flow and minimizes two geometric losses for camera pose estimation. Most importantly, we introduce two learned, adaptive per-pixel weight mappings that balance the contributions according to the input image content. To do so, we train a Deep Declarative Network that combines the expressiveness of deep learning with the robustness of a novel geometry-based optimization approach. We validate our approach on the publicly available SCARED dataset and introduce a new in vivo dataset, StereoMIS, which covers a wider spectrum of typically observed surgical settings. RESULTS: Our method outperforms state-of-the-art methods on average and, more importantly, in difficult scenarios where tissue deformations and breathing motion are visible. We observed that the proposed weight mappings attenuate the contribution of pixels in ambiguous regions of the images, such as deforming tissue. CONCLUSION: We demonstrate the effectiveness of our solution for robustly estimating the camera pose in challenging endoscopic surgical scenes. Our contributions can be used to improve related tasks such as simultaneous localization and mapping (SLAM) and 3D reconstruction, thereby advancing surgical scene understanding in minimally invasive surgery.
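The paper's learned per-pixel weights feed a geometric pose optimization; as a simplified stand-in for that idea (an assumed setup, not the authors' Deep Declarative Network), the following sketch estimates a rigid camera motion from weighted 3D correspondences via the Kabsch algorithm, where down-weighting "deforming tissue" points plays the role of the learned weight maps:

```python
# Hypothetical sketch: weighted rigid pose from 3D correspondences (Kabsch).
import numpy as np

def weighted_rigid_pose(P, Q, w):
    """Least-squares R, t minimizing sum_i w_i * ||R @ P_i + t - Q_i||^2."""
    w = w / w.sum()
    p_bar = (w[:, None] * P).sum(0)                  # weighted centroids
    q_bar = (w[:, None] * Q).sum(0)
    H = (w[:, None] * (P - p_bar)).T @ (Q - q_bar)   # weighted covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

rng = np.random.default_rng(1)
P = rng.normal(size=(200, 3))                 # back-projected points, frame t
t_true = np.array([0.1, -0.05, 0.2])          # pure translation for the toy case
Q = P + t_true                                # same points seen in frame t+1
Q[:20] += rng.normal(scale=0.5, size=(20, 3)) # "deforming tissue" outliers
w = np.ones(200)
w[:20] = 1e-3                                 # a learned map would down-weight these
R, t = weighted_rigid_pose(P, Q, w)
print(np.round(t, 3))                         # close to the true translation
```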


Subject(s)
Algorithms , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Endoscopy/methods , Minimally Invasive Surgical Procedures/methods , Endoscopes
3.
Int J Comput Assist Radiol Surg ; 16(7): 1227-1236, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34143374

ABSTRACT

PURPOSE: The detection and segmentation of surgical instruments is a vital step for many applications in minimally invasive surgical robotics. Previously, the problem was tackled from a semantic segmentation perspective, yet such methods fail to provide good segmentation maps of instrument types and carry no information on the instance affiliation of each pixel. We propose to overcome this limitation with a novel instance segmentation method that first masks instruments and then classifies them into their respective types. METHODS: We introduce a novel method for instance segmentation in which a pixel-wise mask of each instance is found prior to classification. An encoder-decoder network is used to extract instrument instances, which are then classified separately using the features of the previous stages. Furthermore, we present a method to incorporate instrument priors from surgical robots. RESULTS: Experiments are performed on the robotic instrument segmentation dataset of the 2017 Endoscopic Vision Challenge. We perform a fourfold cross-validation and show an improvement of over 18% over the previous state-of-the-art. Furthermore, an ablation study highlights the importance of certain design choices, and we observe an increase of 10% over semantic segmentation methods. CONCLUSIONS: We have presented a novel instance segmentation method for surgical instruments that outperforms previous semantic segmentation-based methods. Our method further provides the more informative output of instance-level information while retaining a precise segmentation mask. Finally, we have shown that robotic instrument priors can be used to increase performance further.
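As an illustration of the mask-then-classify pattern described above, this sketch uses toy inputs: connected components stand in for the encoder-decoder instance extraction, and a stub replaces the classification head. None of the names below come from the paper.

```python
# Hypothetical sketch of the mask-then-classify pattern: instances are first
# separated on a binary mask, then each one is classified from pooled features.
import numpy as np
from scipy import ndimage

def classify_instances(binary_mask, feature_map, classifier):
    """Split a binary instrument mask into instances, classify each one."""
    labels, n = ndimage.label(binary_mask)       # connected components
    results = []
    for i in range(1, n + 1):
        inst = labels == i
        feat = feature_map[inst].mean(axis=0)    # average-pool decoder features
        results.append((inst, classifier(feat)))
    return results

# Toy stand-ins for the network outputs and the classification head.
mask = np.zeros((64, 64), dtype=bool)
mask[5:20, 5:20] = True                          # instrument instance 1
mask[40:60, 30:50] = True                        # instrument instance 2
feats = np.random.default_rng(2).random((64, 64, 8))
toy_head = lambda f: "grasper" if f.mean() > 0.5 else "scissors"
for inst_mask, cls in classify_instances(mask, feats, toy_head):
    print(inst_mask.sum(), "px ->", cls)
```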


Subject(s)
Endoscopy/instrumentation , Robotic Surgical Procedures/instrumentation , Surgical Instruments/standards , Humans , Semantics
4.
Ophthalmol Retina ; 5(7): 604-624, 2021 Jul.
Article in English | MEDLINE | ID: mdl-33971352

ABSTRACT

PURPOSE: To assess the potential of machine learning to predict low and high treatment demand in real life in patients with neovascular age-related macular degeneration (nAMD), retinal vein occlusion (RVO), and diabetic macular edema (DME) treated according to a treat-and-extend regimen (TER). DESIGN: Retrospective cohort study. PARTICIPANTS: Three hundred seventy-seven eyes (340 patients) with nAMD and 333 eyes (285 patients) with RVO or DME treated with anti-vascular endothelial growth factor (anti-VEGF) agents according to a predefined TER from 2014 through 2018. METHODS: Eyes were grouped by disease into low, moderate, and high treatment demand, defined by the average treatment interval (low, ≥10 weeks; high, ≤5 weeks; moderate, remaining eyes). Two random forest models were trained to predict the probability of the long-term treatment demand of a new patient. Both models use morphological features automatically extracted from the OCT volumes at baseline and after 2 consecutive visits, as well as patient demographic information. Evaluation of the models included 10-fold cross-validation ensuring that no patient was present in both the training set (nAMD, approximately 339; RVO and DME, approximately 300) and the test set (nAMD, approximately 38; RVO and DME, approximately 33). MAIN OUTCOME MEASURES: Mean area under the receiver operating characteristic curve (AUC) of both models; contribution to the prediction and statistical significance of the input features. RESULTS: Based on the first 3 visits, it was possible to predict low and high treatment demand in nAMD eyes and in RVO and DME eyes with similar accuracy. The distribution of low, high, and moderate demanders was 127, 42, and 208, respectively, for nAMD and 61, 50, and 222, respectively, for RVO and DME. The nAMD-trained models yielded a mean AUC of 0.79 across the 10 cross-validation folds for both low and high demand. Models for RVO and DME showed similar results, with mean AUCs of 0.76 and 0.78 for low and high demand, respectively. Even more importantly, this study revealed that low demand can be predicted reasonably well at the first visit, before the first injection. CONCLUSIONS: Machine learning classifiers can predict treatment demand and may assist in establishing patient-specific treatment plans in the near future.
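The evaluation protocol (a random forest with patient-grouped 10-fold cross-validation scored by ROC AUC) can be sketched as follows; the features and labels are entirely synthetic, and the feature dimensions are assumptions:

```python
# Hypothetical sketch: patient-grouped cross-validation of a random forest
# predicting low treatment demand from baseline OCT-derived features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(3)
n_eyes = 377
X = rng.normal(size=(n_eyes, 12))           # stand-ins for fluid volumes, age, ...
y = (X[:, 0] + 0.5 * rng.normal(size=n_eyes)) > 0.8   # toy "low demand" label
groups = rng.integers(0, 340, size=n_eyes)  # patient IDs: both eyes share one

# Grouping by patient ensures no patient appears in both training and test set.
clf = RandomForestClassifier(n_estimators=300, random_state=0)
aucs = cross_val_score(clf, X, y, groups=groups,
                       cv=GroupKFold(n_splits=10), scoring="roc_auc")
print(f"mean AUC over folds: {aucs.mean():.2f}")
```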


Subject(s)
Diabetic Retinopathy/drug therapy , Machine Learning , Macular Edema/drug therapy , Ranibizumab/administration & dosage , Retinal Vein Occlusion/drug therapy , Wet Macular Degeneration/drug therapy , Aged , Aged, 80 and over , Angiogenesis Inhibitors/administration & dosage , Diabetic Retinopathy/complications , Female , Follow-Up Studies , Humans , Intravitreal Injections , Macular Edema/etiology , Male , Middle Aged , Prognosis , Retrospective Studies , Vascular Endothelial Growth Factor A
5.
Sci Rep ; 11(1): 8621, 2021 Apr 21.
Article in English | MEDLINE | ID: mdl-33883573

ABSTRACT

In this paper we analyse the performance of machine learning methods in predicting patient information such as age or sex solely from retinal imaging modalities in a heterogeneous clinical population. Our dataset consists of N = 135,667 fundus images and N = 85,536 volumetric OCT scans. Deep learning models were trained to predict the patient's age and sex from fundus images, OCT cross sections, and OCT volumes. For sex prediction, a ROC AUC of 0.80 was achieved for fundus images, 0.84 for OCT cross sections, and 0.90 for OCT volumes. Mean absolute errors for age prediction were 6.328 years for fundus images, 5.625 years for OCT cross sections, and 4.541 years for OCT volumes. We assess the performance on OCT scans containing different biomarkers and note a peak performance of AUC = 0.88 for OCT cross sections and 0.95 for volumes when no pathology is present on the scans. Performance drops when drusen, fibrovascular pigment epithelium detachment, or geographic atrophy are present. We conclude that deep learning based methods are capable of classifying the patient's sex and age from color fundus photography and OCT for a broad spectrum of patients, irrespective of underlying disease or image quality. Better-than-chance sex prediction from fundus images appears possible only if the fovea and optic disc are visible.
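A minimal sketch of the two prediction targets (sex as a binary classification, age as a regression whose L1 loss corresponds to the reported MAE) is given below; the tiny backbone and all shapes are illustrative assumptions, not the study's architecture:

```python
# Hypothetical sketch: one backbone with a sex-classification head (trained
# with binary cross-entropy, evaluated by ROC AUC) and an age-regression head.
import torch
import torch.nn as nn

class DemographicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.sex_head = nn.Linear(32, 1)   # logit for binary sex prediction
        self.age_head = nn.Linear(32, 1)   # age in years

    def forward(self, x):
        z = self.backbone(x)
        return self.sex_head(z), self.age_head(z)

model = DemographicNet()
imgs = torch.randn(4, 3, 128, 128)          # stand-in fundus crops
sex_logit, age = model(imgs)
loss = nn.functional.binary_cross_entropy_with_logits(
    sex_logit.squeeze(1), torch.tensor([0., 1., 1., 0.])
) + nn.functional.l1_loss(age.squeeze(1), torch.tensor([63., 71., 55., 80.]))
loss.backward()                             # joint training of both heads
print(sex_logit.shape, age.shape)
```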


Subject(s)
Geographic Atrophy/pathology , Photography/methods , Retina/pathology , Tomography, Optical Coherence/methods , Aged , Diagnostic Techniques, Ophthalmological , Female , Fundus Oculi , Humans , Machine Learning , Male , Middle Aged , Optic Disk/pathology
6.
Sci Rep ; 9(1): 13605, 2019 Sep 19.
Article in English | MEDLINE | ID: mdl-31537854

ABSTRACT

In ophthalmology, retinal biological markers, or biomarkers, play a critical role in the management of chronic eye conditions and in the development of new therapeutics. While many imaging technologies in use today can visualize these markers, Optical Coherence Tomography (OCT) is often the tool of choice due to its ability to image retinal structures in three dimensions at micrometer resolution. But with widespread use in clinical routine and the growing prevalence of chronic retinal conditions, the quantity of scans acquired worldwide is surpassing the capacity of retinal specialists to inspect them in meaningful ways. Automated analysis of scans using machine learning algorithms instead provides a cost-effective and reliable alternative that can assist ophthalmologists in clinical routine and research. We present a machine learning method capable of consistently identifying a wide range of common retinal biomarkers from OCT scans. Our approach avoids the need for costly segmentation annotations and allows scans to be characterized by their biomarker distributions, which can then be used to classify scans by underlying pathology in a device-independent way.
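One plausible reading of "characterizing scans by biomarker distributions" is sketched below; the marker list, disease profiles, and nearest-neighbour rule are all illustrative assumptions, not the paper's method. Per-B-scan presence probabilities are summarized into a per-volume distribution, which is then matched against pathology profiles:

```python
# Hypothetical sketch: a volume is summarized by its biomarker distribution
# (fraction of B-scans positive per marker), then matched to a pathology
# profile by nearest neighbour.
import numpy as np

MARKERS = ["IRF", "SRF", "drusen", "GA"]
PROFILES = {                      # toy expected distributions per disease
    "AMD": np.array([0.2, 0.3, 0.6, 0.1]),
    "DME": np.array([0.7, 0.2, 0.0, 0.0]),
}

def biomarker_distribution(bscan_probs, threshold=0.5):
    """bscan_probs: (n_bscans, n_markers) per-slice presence probabilities."""
    return (bscan_probs > threshold).mean(axis=0)

def classify_volume(dist):
    return min(PROFILES, key=lambda d: np.linalg.norm(PROFILES[d] - dist))

probs = np.random.default_rng(4).random((49, len(MARKERS))) * 0.4
probs[:, 0] += 0.5                # a volume rich in IRF-positive slices
dist = biomarker_distribution(probs)
print(dict(zip(MARKERS, np.round(dist, 2))), "->", classify_volume(dist))
```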


Subject(s)
Radiographic Image Interpretation, Computer-Assisted/methods , Retinal Diseases/diagnostic imaging , Algorithms , Humans , Machine Learning , Tomography, Optical Coherence
7.
IEEE Trans Med Imaging ; 37(5): 1276-1287, 2018 May.
Article in English | MEDLINE | ID: mdl-29727290

ABSTRACT

Instrument detection, pose estimation, and tracking in surgical videos are important vision components for computer-assisted interventions. While significant advances have been made in recent years, articulation detection remains a major challenge. In this paper, we propose a deep neural network for articulated multi-instrument 2-D pose estimation, trained on detailed annotations of endoscopic and microscopic data sets. Our model is a fully convolutional detection-regression network: joints and associations between joint pairs in our instrument model are located by the detection subnetwork and subsequently refined by a regression subnetwork. Based on the model output, the poses of the instruments are inferred using maximum bipartite graph matching. Our estimation framework is powered by deep learning techniques without any direct kinematic information from a robot. It is tested on single-instrument RMIT data as well as on multi-instrument EndoVis and in vivo data, with promising results. In addition, the dataset annotations are publicly released along with our code and model.
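The final grouping step, assigning detected joints to instruments by maximum bipartite matching, can be illustrated with the Hungarian algorithm on toy affinity scores (the joint names and score matrix are assumptions, not the paper's data):

```python
# Hypothetical sketch: grouping detected joints into instruments by solving a
# maximum-weight bipartite matching on joint-pair association scores.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy association scores between 3 detected "shaft" joints and 3 "tip" joints,
# as a regression subnetwork might produce for joint-pair affinities.
scores = np.array([
    [0.9, 0.1, 0.2],
    [0.2, 0.8, 0.1],
    [0.1, 0.3, 0.7],
])
# linear_sum_assignment minimizes cost, so negate to maximize total affinity.
rows, cols = linear_sum_assignment(-scores)
for r, c in zip(rows, cols):
    print(f"shaft joint {r} <-> tip joint {c} (score {scores[r, c]:.1f})")
```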


Subject(s)
Deep Learning , Image Processing, Computer-Assisted/methods , Robotic Surgical Procedures/methods , Algorithms , Databases, Factual , Humans , Robotic Surgical Procedures/instrumentation , Surgical Instruments