1.
Article in English | MEDLINE | ID: mdl-38082703

ABSTRACT

Ophthalmic surgery, which addresses critical eye diseases such as retinal disorders, remains a formidable and arduous surgical pursuit. Nevertheless, with the advent of cutting-edge robotics and automation technology, significant advances have been made in recent years to enhance the safety and efficacy of these procedures through meticulous research and development. Ensuring the safe and effective execution of micro-surgical procedures requires stringent quality control, notably in evaluating and testing the devices used. During the development phase, these instruments must undergo extensive and continual evaluation by clinical practitioners to guarantee their safety and efficacy. Ideally, the test conditions should be identical to those of an actual operation. When testing robotic systems for ophthalmology, essential variables of the human eye, such as tissue properties and movement mechanisms, should be addressed. To minimize the discrepancy between test conditions and actual eye surgery, in this paper we propose a mechanical eye model that enables realistic evaluation of ophthalmic surgical systems. After developing virtual and physical models, the model was tested by an eye surgeon, who rated it four out of five possible points. Clinical relevance: This method minimizes discrepancies in the verification of ophthalmic surgical devices by allowing the mechanical eye model to behave similarly to the human eye, thus providing a realistic surgical procedure.


Subject(s)
Eye Diseases , Ophthalmology , Robotics , Humans , Ophthalmologic Surgical Procedures , Eye Diseases/diagnosis , Eye Diseases/surgery
2.
Article in English | MEDLINE | ID: mdl-38083453

ABSTRACT

The field of robotic microsurgery and micro-manipulation has undergone a profound evolution in recent years, particularly with regard to accuracy, precision, versatility, and dexterity. These advancements have the potential to revolutionize high-precision biomedical procedures, such as neurosurgery, vitreoretinal surgery, and cell micro-manipulation. However, a critical challenge in developing micron-precision robotic systems is accurately verifying the end-effector motion in 3D. Such verification is complicated by environmental vibrations, inaccuracy of mechanical assembly, and other physical uncertainties. To overcome these challenges, this paper proposes a novel single-camera framework that utilizes mirrors with known geometric parameters to estimate the 3D position of the microsurgical instrument. The Euclidean distance between points reconstructed by the algorithm and the robot movement recorded by highly accurate encoders is taken as the error. Our method exhibits accurate estimation with a mean absolute error of 0.044 mm when tested on a 23G surgical cannula with a diameter of 0.640 mm, operating at a resolution of 4024 × 3036 at 30 frames per second.
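The mean-absolute-error metric described above can be sketched in a few lines; the function name and toy points below are illustrative, not the paper's data or code.

```python
import numpy as np

def mean_absolute_euclidean_error(reconstructed, ground_truth):
    # Mean Euclidean distance between reconstructed 3D points (N, 3)
    # and ground-truth positions, e.g. from the robot's encoders.
    return np.linalg.norm(reconstructed - ground_truth, axis=1).mean()

# Toy points (illustrative only): each estimate is off by 0.05 mm.
est = np.array([[0.0, 0.0, 0.05], [1.0, 1.0, 1.00]])
ref = np.array([[0.0, 0.0, 0.00], [1.0, 1.0, 0.95]])
mae = mean_absolute_euclidean_error(est, ref)  # ≈ 0.05
```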


Subject(s)
Robotics , Surgery, Computer-Assisted , Microsurgery , Motion , Movement
3.
IEEE Int Conf Robot Autom ; 2023: 4724-4731, 2023.
Article in English | MEDLINE | ID: mdl-38125032

ABSTRACT

In the last decade, various robotic platforms have been introduced that can support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for the best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges. These include, but are not limited to, high demands on data processing rates and dynamic registration of these systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and iOCT systems, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation enabled by Convolutional Neural Networks (CNNs). Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss the challenges identified in this work and suggest potential solutions to further the development of such systems.

4.
Robotica ; 41(5): 1536-1549, 2023 May.
Article in English | MEDLINE | ID: mdl-37982126

ABSTRACT

Retinal surgery is widely considered a complicated and challenging task, even for specialists. Image-guided robot-assisted intervention is among the novel and promising solutions that may enhance human capabilities therein. In this paper, we demonstrate the possibility of using spotlights for 5D guidance of a microsurgical instrument. The theoretical basis of instrument localization based on the projection of a single spotlight is analyzed to deduce the position and orientation of the spotlight source. The use of multiple spotlights is also proposed to explore possible improvements to the performance boundaries. The proposed method is verified within a high-fidelity simulation environment using the 3D creation suite Blender. Experimental results show an average positioning error of 0.029 mm using a single spotlight and 0.025 mm with three spotlights, with rotational errors of 0.124 and 0.101, respectively, which shows the approach to be promising for instrument localization in retinal surgery.

5.
Biomed Opt Express ; 14(10): 5466-5483, 2023 Oct 01.
Article in English | MEDLINE | ID: mdl-37854552

ABSTRACT

With the increasing popularity of ophthalmic imaging techniques, anonymization of clinical image datasets is becoming a critical issue, especially for fundus images, which contain unique patient-specific biometric content. Toward a framework for anonymizing ophthalmic images, we propose an image-specific de-identification method for the vascular structure of retinal fundus images that preserves important clinical features such as hard exudates. Our method calculates the contribution of each latent code to the vascular structure by computing the gradient map of the generated image with respect to the latent space and then computing the overlap between the vascular mask and the gradient map. The proposed method specifically targets and effectively manipulates the latent code with the highest contribution score in vascular structures. Extensive experimental results show that our proposed method is competitive with other state-of-the-art approaches in terms of identity similarity and lesion similarity. Additionally, our approach allows for a better balance between identity similarity and lesion similarity, ensuring optimal performance in a trade-off manner.
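The overlap-based contribution score can be illustrated as the fraction of absolute-gradient mass falling inside the vessel mask, with the highest-scoring latent dimension selected for manipulation. The function names and the 2×2 toy inputs below are our own illustration, not the paper's implementation.

```python
import numpy as np

def vascular_contribution(grad_map, vessel_mask):
    # Fraction of total |gradient| mass that lies inside the vascular mask.
    g = np.abs(grad_map)
    return float(g[vessel_mask].sum() / g.sum())

def most_vascular_latent(grad_maps, vessel_mask):
    # grad_maps: (K, H, W), gradient of the generated image w.r.t. each
    # of K latent dimensions; returns the index with the highest score.
    scores = [vascular_contribution(g, vessel_mask) for g in grad_maps]
    return int(np.argmax(scores)), scores

mask = np.array([[True, False], [False, False]])
g0 = np.array([[1.0, 1.0], [1.0, 1.0]])  # gradient spread evenly
g1 = np.array([[9.0, 1.0], [0.0, 0.0]])  # gradient concentrated in vessel
idx, scores = most_vascular_latent(np.stack([g0, g1]), mask)  # idx = 1
```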

6.
J Robot Surg ; 17(6): 2735-2742, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37670151

ABSTRACT

The purpose of this study is to compare robot-assisted and manual subretinal injections in terms of successful subretinal blistering, reflux incidence, and damage to the retinal pigment epithelium (RPE). Subretinal injection was simulated on 84 ex-vivo porcine eyes, with half of the interventions carried out manually and the other half by controlling a custom-built robot in a master-slave fashion. After pars plana vitrectomy (PPV), the retinal target spot was determined under a LUMERA 700 microscope with the microscope-integrated intraoperative optical coherence tomography (iOCT) system RESCAN 700 (Carl Zeiss Meditec, Germany). For injection, a 1 ml syringe filled with perfluorocarbon liquid (PFCL) was tipped with a 40-gauge metal cannula (Incyto Co., Ltd., South Korea). In one set of trials, the needle was attached to the robot's end joint and maneuvered robotically to the retinal target site. In another set of trials, the retina was approached manually. Intraretinal cannula-tip depth was monitored continuously via iOCT. At sufficient depth, PFCL was injected into the subretinal space. iOCT images and fundus video recordings were used to evaluate the surgical outcome. Robotic injections more often resulted in successful subretinal blistering (73.7% vs. 61.8%, p > 0.05) and showed a significantly lower incidence of reflux (23.7% vs. 58.8%, p < 0.01). Although larger tip depths were achieved in successful manual trials, RPE penetration occurred in 10.5% of robotic but 26.5% of manual cases (p > 0.05). In conclusion, significantly fewer reflux incidences occurred with the use of a robot. Furthermore, RPE penetrations occurred less frequently and successful blistering more frequently when performing robotic surgery.


Subject(s)
Robotic Surgical Procedures , Robotics , Humans , Animals , Swine , Tomography, Optical Coherence/methods , Robotic Surgical Procedures/methods , Retina , Vitrectomy/methods
7.
IEEE Int Conf Robot Autom ; 2022: 7717-7723, 2022 May.
Article in English | MEDLINE | ID: mdl-36128019

ABSTRACT

Retinal surgery is a complex medical procedure that requires exceptional expertise and dexterity. For this purpose, several robotic platforms are currently under development to enable or improve the outcome of microsurgical tasks. Since the control of such robots is often designed for navigation inside the eye in proximity to the retina, successful trocar docking and insertion of the instrument into the eye represent an additional cognitive effort and therefore remain one of the open challenges in robotic retinal surgery. To address this, we present a platform for autonomous trocar docking that combines computer vision and a robotic setup. Inspired by the Cuban Colibri (hummingbird), which aligns its beak to a flower using only vision, we mount a camera onto the end-effector of a robotic system. By estimating the position and pose of the trocar, the robot is able to autonomously align and navigate the instrument towards the Trocar Entry Point (TEP) and finally perform the insertion. Our experiments show that the proposed method accurately estimates the position and pose of the trocar and achieves repeatable autonomous docking. The aim of this work is to reduce the complexity of the robotic setup prior to the surgical task and thereby increase the intuitiveness of the system's integration into the clinical workflow.

8.
Article in English | MEDLINE | ID: mdl-37396671

ABSTRACT

Subretinal injection (SI) is an ophthalmic surgical procedure that allows for the direct injection of therapeutic substances into the subretinal space to treat vitreoretinal disorders. Although this treatment has grown in popularity, various factors contribute to its difficulty. These include the retina's fragile, nonregenerative tissue, as well as hand tremor and poor visual depth perception. In this context, the use of robotic devices may reduce hand tremor and facilitate gradual and controlled SI. For the robot to successfully move to the target area, it needs to understand the spatial relationship between the attached needle and the tissue. The development of optical coherence tomography (OCT) imaging has substantially advanced the visualization of retinal structures at micron resolution. This paper introduces a novel foundation for an OCT-guided robotic steering framework that enables a surgeon to plan and select targets within the OCT volume, while the robot automatically executes the trajectories necessary to reach the selected targets. Our contribution is a novel combination of existing methods, creating an intraoperative OCT-robot registration pipeline. We combine straightforward affine transformation computations with robot kinematics and a deep neural network-determined tool-tip location in OCT. We evaluate our framework's capability in an open-sky procedure on a cadaveric pig eye and on an aluminum target board. Targeting the subretinal space of the pig eye produced encouraging results, with a mean Euclidean error of 23.8 µm.
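The "straightforward affine transformation computation" at the heart of such a registration pipeline can be sketched as a least-squares fit between corresponding point sets in the robot and OCT frames; this is a generic formulation under our own naming, not the authors' implementation.

```python
import numpy as np

def fit_affine_3d(src, dst):
    # Least-squares affine map A (3x4) with dst ≈ A @ [src; 1].
    # src, dst: (N, 3) corresponding points (N >= 4, in general position).
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # (N, 4) homogeneous
    X, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # solves src_h @ X = dst
    return X.T                                       # (3, 4)

def apply_affine(A, pts):
    return pts @ A[:, :3].T + A[:, 3]

# Toy check: recover a known scaling + translation from 10 random points.
rng = np.random.default_rng(0)
src = rng.random((10, 3))
A_true = np.array([[2.0, 0.0, 0.0, 1.0],
                   [0.0, 2.0, 0.0, -1.0],
                   [0.0, 0.0, 2.0, 0.5]])
dst = apply_affine(A_true, src)
A_est = fit_affine_3d(src, dst)
```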

9.
IEEE Robot Autom Lett ; 6(4): 7750-7757, 2021 Oct.
Article in English | MEDLINE | ID: mdl-35309100

ABSTRACT

Retinal surgery is known to be a complicated and challenging task, even for retina specialists. Image-guided robot-assisted intervention is among the novel and promising solutions that may enhance human capabilities during microsurgery. In this paper, a novel method is proposed for 3D navigation of a microsurgical instrument based on the projection of a spotlight during robot-assisted retinal surgery. To test the feasibility and effectiveness of the proposed method, a vessel-tracking task with a Remote Center of Motion (RCM) constraint is performed in a phantom by the Steady-Hand Eye Robot (SHER). The results are compared across manual tracking, cooperative-control tracking with the SHER, and spotlight-based automatic tracking with the SHER. Spotlight-based automatic tracking with the SHER reaches an average tracking error of 0.013 mm and a distance-keeping error of 0.1 mm from the desired range, demonstrating a significant improvement over the manual and cooperative-control methods alone.

10.
IEEE J Biomed Health Inform ; 24(12): 3338-3350, 2020 12.
Article in English | MEDLINE | ID: mdl-32750971

ABSTRACT

Machine learning and especially deep learning techniques are dominating medical image and data analysis. This article reviews machine learning approaches proposed for diagnosing ophthalmic diseases during the last four years. Three diseases are addressed in this survey: diabetic retinopathy, age-related macular degeneration, and glaucoma. The review covers over 60 publications and 25 public datasets and challenges related to the detection, grading, and lesion segmentation of the three considered diseases. Each section provides a summary of the public datasets and challenges related to each pathology and the current methods that have been applied to the problem. Furthermore, recent machine learning approaches for retinal vessel segmentation and methods for retinal layer and fluid segmentation are reviewed. Two main imaging modalities are considered in this survey: color fundus imaging and optical coherence tomography. Machine learning approaches that use eye measurements and visual field data for glaucoma detection are also included. Finally, the authors provide their views on, expectations for, and the limitations of the future of these techniques in clinical practice.


Subject(s)
Diagnostic Techniques, Ophthalmological , Image Interpretation, Computer-Assisted , Machine Learning , Deep Learning , Glaucoma/diagnostic imaging , Humans , Retinal Diseases/diagnostic imaging , Tomography, Optical Coherence
11.
Int Symp Med Robot ; 2020, 2020 Nov.
Article in English | MEDLINE | ID: mdl-34595483

ABSTRACT

Retinal surgery is a complex activity that can be challenging for a surgeon to perform effectively and safely. Image-guided robot-assisted surgery is one of the promising solutions that bring significant surgical enhancement in treatment outcome and reduce the physical limitations of human surgeons. In this paper, we demonstrate a novel method for 3D guidance of the instrument based on the projection of a spotlight in single microscope images. The spotlight projection mechanism is first analyzed and modeled for projection onto both a plane and a spherical surface. To test the feasibility of the proposed method, a light fiber is integrated into the instrument, which is driven by the Steady-Hand Eye Robot (SHER). The spot of light is segmented and tracked on a phantom retina using the proposed algorithm. The static calibration and dynamic test results both show that the proposed method can readily achieve a tip-to-surface distance accuracy of 0.5 mm, which is within the clinically acceptable range for intraocular visual guidance.
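Under the plane-projection model, a conical light beam hitting the surface head-on produces a circular spot whose radius grows linearly with distance, so the tip-to-surface distance follows directly from the segmented spot size. This simplified geometry (and the 45° half-angle) is our illustration only; the paper's full model also covers spherical surfaces and oblique projections.

```python
import math

def tip_to_surface_distance(spot_radius_mm, half_angle_deg):
    # For a conical beam of known half-angle hitting a plane
    # perpendicularly, the spot radius is d * tan(half_angle),
    # so the distance d is recovered by inverting that relation.
    return spot_radius_mm / math.tan(math.radians(half_angle_deg))

d = tip_to_surface_distance(0.5, 45.0)  # ≈ 0.5 mm for a 45° half-angle
```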

12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 5403-5406, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947077

ABSTRACT

This paper introduces an optimized input-device workflow to control an eye surgical robot in a simulated vitreoretinal environment. The input device is a joystick with four Degrees of Freedom (DOFs) that controls a six-DOF robot. This is achieved through a segmentation plan designed for the eye surgeon, in which the different surgical phases are defined, each involving a specific number of DOFs. The segmentation plan is divided into four surgical phases: Phase I, Approach, with three DOFs; Phase II, Introduction, with three DOFs; Phase III, Aim, with 3+1 DOFs; and Phase IV, Injection, with one DOF. Taking these phases into account, a six-DOF eye surgical robot can be controlled intuitively through a joystick with only four DOFs. In this work we show that reducing the number of DOFs decreases the complexity of robot-assisted surgery.
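The phase-wise scheme can be sketched as a lookup from surgical phase to active robot DOFs, with the joystick's four axes routed only to those DOFs. The phase names follow the abstract, but the axis names and their assignment below are illustrative, not the paper's actual mapping.

```python
# Hypothetical DOF assignment per surgical phase (axis names are ours).
PHASE_DOF_MAP = {
    "approach":     ["x", "y", "z"],                # Phase I: 3 DOFs
    "introduction": ["x", "y", "z"],                # Phase II: 3 DOFs
    "aim":          ["pitch", "yaw", "roll", "z"],  # Phase III: 3+1 DOFs
    "injection":    ["plunger"],                    # Phase IV: 1 DOF
}

def joystick_to_robot(phase, joystick_axes):
    # Route up to 4 joystick axis values onto the robot DOFs active
    # in the current phase; every other DOF receives a zero command.
    active = PHASE_DOF_MAP[phase]
    cmd = {dof: 0.0 for dofs in PHASE_DOF_MAP.values() for dof in dofs}
    for dof, value in zip(active, joystick_axes):
        cmd[dof] = value
    return cmd

cmd = joystick_to_robot("injection", [0.3, 0.0, 0.0, 0.0])
```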


Subject(s)
Ophthalmologic Surgical Procedures/instrumentation , Robotic Surgical Procedures/instrumentation , Eye , Humans
13.
Int J Comput Assist Radiol Surg ; 13(9): 1345-1355, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30054775

ABSTRACT

PURPOSE: Advances in sensing and digitalization enable us to acquire and present various heterogeneous datasets to enhance clinical decisions. Visual feedback is the dominant way of conveying such information. However, environments rich with many sources of information all presented through the same channel pose the risk of overstimulation and of missing crucial information. Augmenting the cognitive field with additional perceptual modalities such as sound is a workaround to this problem. A major challenge in auditory augmentation is the automatic generation of pleasant and ergonomic audio in complex routines, as opposed to overly simplistic feedback, to avoid alarm fatigue. METHODS: In this work, without loss of generality to other procedures, we propose a method for aural augmentation of medical procedures via automatic modification of musical pieces. RESULTS: Evaluations of this concept regarding recognizability of the conveyed information, along with qualitative aesthetics, show the potential of our method. CONCLUSION: In this paper, we proposed a novel sonification method for automatic musical augmentation of tasks within surgical procedures. Our experimental results suggest that these augmentations are aesthetically pleasing and have the potential to successfully convey useful information. This work opens a path for advanced sonification techniques in the operating room, in order to complement traditional visual displays and convey information more efficiently.
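One toy instance of such a musical modification is transposing the notes of a piece by an amount driven by a normalized process signal; this stand-in only illustrates the idea of encoding information in music and is not the authors' sonification method.

```python
def transpose_semitones(midi_notes, signal, max_shift=4):
    # Shift every note by round(signal * max_shift) semitones, where
    # signal in [0, 1] encodes the quantity being sonified.
    shift = round(signal * max_shift)
    return [n + shift for n in midi_notes]

# C major triad (MIDI 60, 64, 67) shifted up 2 semitones at signal = 0.5.
notes = transpose_semitones([60, 64, 67], 0.5)
```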


Subject(s)
Algorithms , Audiovisual Aids , Feedback, Sensory , Sound , Surgery, Computer-Assisted/methods , Vitreoretinal Surgery/methods , Humans
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2017: 1836-1839, 2017 Jul.
Article in English | MEDLINE | ID: mdl-29060247

ABSTRACT

Image-based pose measurements relative to phantoms are used for various applications, for example tracking, registration, or calibration. When highly precise measurements are needed, even changes in environmental factors influence the measurements.


Subject(s)
Phantoms, Imaging , Calibration , Radiography , Tomography, X-Ray Computed , X-Rays
15.
Annu Int Conf IEEE Eng Med Biol Soc ; 2016: 3859-3862, 2016 Aug.
Article in English | MEDLINE | ID: mdl-28269128

ABSTRACT

C-arm X-ray systems need high spatial accuracy for applications like cone-beam computed tomography and 2D/3D overlay. One way to achieve the needed precision is a model-based calibration of the C-arm system. For such a calibration, a kinematic and dynamic model of the system is constructed, whose parameters are computed from pose measurements of the C-arm. Instead of the measurement systems commonly used for model-based robot calibration, such as laser trackers, we use X-ray images of a calibration phantom to measure the C-arm pose. By directly using the imaging system, we avoid registration errors between the measurement device and the C-arm system. This new measurement technique, C-arm pose measurement by X-ray imaging, has to be evaluated to check whether its accuracy is sufficient for model-based calibration with regard to the two mentioned applications. The scope of this work is a real-world evaluation of the C-arm pose measurement accuracy with X-ray images of a calibration phantom, using relative phantom movements and a laser tracker as ground truth.
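Comparing X-ray-based pose measurements against a laser-tracker ground truth reduces to computing a pose error between homogeneous transforms; a minimal sketch of such a comparison (our formulation, not the paper's evaluation code):

```python
import numpy as np

def pose_error(T_meas, T_ref):
    # Translation error (same units as the transforms) and rotation
    # error (degrees) between two 4x4 homogeneous poses.
    dt = np.linalg.norm(T_meas[:3, 3] - T_ref[:3, 3])
    R = T_meas[:3, :3] @ T_ref[:3, :3].T          # relative rotation
    cos_a = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return dt, float(np.degrees(np.arccos(cos_a)))

# Toy check: measured pose is 1 unit off in x, with identical orientation.
T_ref = np.eye(4)
T_meas = np.eye(4)
T_meas[0, 3] = 1.0
dt, da = pose_error(T_meas, T_ref)  # dt = 1.0, da = 0.0
```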


Subject(s)
Cone-Beam Computed Tomography/instrumentation , Cone-Beam Computed Tomography/methods , Imaging, Three-Dimensional/methods , Calibration , Humans , Models, Theoretical , Phantoms, Imaging , X-Rays