Results 1 - 6 of 6
1.
Article in English | MEDLINE | ID: mdl-38083453

ABSTRACT

The field of robotic microsurgery and micro-manipulation has undergone a profound evolution in recent years, particularly with regard to accuracy, precision, versatility, and dexterity. These advancements have the potential to revolutionize high-precision biomedical procedures such as neurosurgery, vitreoretinal surgery, and cell micro-manipulation. However, a critical challenge in developing micron-precision robotic systems is accurately verifying the end-effector motion in 3D. Such verification is complicated by environmental vibrations, inaccuracies in mechanical assembly, and other physical uncertainties. To overcome these challenges, this paper proposes a novel single-camera framework that uses mirrors with known geometric parameters to estimate the 3D position of the microsurgical instrument. The Euclidean distance between the points reconstructed by the algorithm and the robot movement recorded by high-accuracy encoders is taken as the error. Our method achieves accurate estimation with a mean absolute error of 0.044 mm when tested on a 23G surgical cannula (0.640 mm diameter), operating at a resolution of 4024 × 3036 at 30 frames per second.


Subject(s)
Robotics , Surgery, Computer-Assisted , Microsurgery , Motion , Movement
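The mirror-based framework above is not spelled out in the abstract, but its core geometric idea can be sketched: a planar mirror with known parameters turns one physical camera into two views, because the mirror image of the scene is what a "virtual" camera, reflected across the mirror plane, would see; the instrument tip can then be triangulated from the direct and mirrored observations. A minimal sketch, with made-up mirror and tip coordinates (the paper's actual calibration and detection pipeline is not shown):

```python
import numpy as np

def reflect_point(p, n, d):
    """Reflect a 3D point across the mirror plane n.x + d = 0 (n a unit normal)."""
    return p - 2.0 * (np.dot(n, p) + d) * n

def triangulate(c1, r1, c2, r2):
    """Least-squares intersection of two rays (center c, unit direction r):
    minimize the sum of squared distances from the point to both rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, r in ((c1, r1), (c2, r2)):
        P = np.eye(3) - np.outer(r, r)   # projector orthogonal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Hypothetical setup: real camera at the origin, mirror plane x = 1.
n, d = np.array([1.0, 0.0, 0.0]), -1.0
c_real = np.array([0.0, 0.0, 0.0])
c_virt = reflect_point(c_real, n, d)      # virtual camera behind the mirror

tip = np.array([0.4, 0.2, 0.5])           # instrument tip to recover

# Ray from the real camera toward the tip, and the ray implied by the
# mirrored observation: from the virtual camera toward the physical tip.
r1 = tip / np.linalg.norm(tip)
r2 = (tip - c_virt) / np.linalg.norm(tip - c_virt)

est = triangulate(c_real, r1, c_virt, r2)
```

With ideal, noise-free rays the triangulation recovers the tip exactly; in practice the residual against encoder ground truth plays the role of the error metric quoted above.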
2.
IEEE Int Conf Robot Autom ; 2023: 4724-4731, 2023.
Article in English | MEDLINE | ID: mdl-38125032

ABSTRACT

In the last decade, various robotic platforms have been introduced that could support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for the best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges. These include, but are not limited to, high demands on data processing rates and the need for dynamic registration of the two systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation, enabled by Convolutional Neural Networks (CNNs). Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss challenges identified in this work and suggest potential solutions to further the development of such systems.
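The "virtual B-scan" idea, slicing a 2D image out of the 3D iOCT volume along an arbitrarily oriented plane, can be sketched with plain resampling. This is a generic illustration under assumed conventions (volume indexed z, y, x; slice defined by an origin and two in-plane direction vectors), not the paper's CNN-driven slice selection:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_bscan(volume, origin, u, v, shape):
    """Sample a 2D slice ("virtual B-scan") from a 3D volume.

    origin: voxel coordinate of the slice corner.
    u, v:   direction vectors spanning the slice plane (voxels per pixel).
    shape:  (rows, cols) of the output slice.
    """
    rows, cols = shape
    ii, jj = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    # Voxel coordinate of every slice pixel: origin + i*u + j*v.
    coords = (origin[:, None, None]
              + ii[None] * u[:, None, None]
              + jj[None] * v[:, None, None])
    # Trilinear interpolation at the (generally non-integer) coordinates.
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Toy volume whose intensity equals the z index, so an axis-aligned slice
# at constant z = 3 should come out flat.
vol = np.tile(np.arange(8.0)[:, None, None], (1, 8, 8))   # (z, y, x)
slice_ = virtual_bscan(vol,
                       origin=np.array([3.0, 0.0, 0.0]),
                       u=np.array([0.0, 1.0, 0.0]),
                       v=np.array([0.0, 0.0, 1.0]),
                       shape=(8, 8))
```

Tilting `u` and `v` out of the axis planes yields the oblique slices that make rapid pose estimation possible without scanning the whole volume.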

3.
Sci Rep ; 13(1): 5930, 2023 04 12.
Article in English | MEDLINE | ID: mdl-37045878

ABSTRACT

Despite the undeniable advantages of image-guided surgical assistance systems in terms of accuracy, such systems have not yet fully met surgeons' needs or expectations regarding usability, time efficiency, and integration into the surgical workflow. On the other hand, perceptual studies have shown that presenting independent but causally correlated information via multimodal feedback involving different sensory modalities can improve task performance. This article investigates an alternative method for computer-assisted surgical navigation, introduces a novel sonification methodology for navigated pedicle screw placement, and discusses advanced solutions based on multisensory feedback. The proposed method sonifies alignment tasks in four degrees of freedom (DOF) using frequency modulation synthesis. We compared the accuracy and execution time of the proposed sonification method with visual navigation, which is currently considered the state of the art. We conducted a phantom study in which 17 surgeons executed the pedicle screw placement task in the lumbar spine, guided by either the proposed sonification-based or the traditional visual navigation method. The results demonstrated that the proposed method is as accurate as the state of the art while reducing the surgeon's need to shift focus to visual navigation displays and away from the surgical tools and targeted anatomy during task execution.


Subject(s)
Pedicle Screws , Spinal Fusion , Surgery, Computer-Assisted , Spinal Fusion/methods , Lumbar Vertebrae/surgery , Surgery, Computer-Assisted/methods , Phantoms, Imaging
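The abstract names frequency modulation (FM) synthesis as the sound engine: a carrier sinusoid whose phase is modulated by a second sinusoid, y(t) = sin(2π f_c t + I sin(2π f_m t)), where the modulation index I controls timbre. A minimal, illustrative mapping from a single DOF's alignment error to a tone follows; the carrier/modulator frequencies and the error-to-index mapping here are invented for illustration, not taken from the paper:

```python
import numpy as np

def fm_tone(duration_s, carrier_hz, mod_hz, mod_index, sr=44100):
    """Frequency-modulation synthesis: sin(2*pi*f_c*t + I*sin(2*pi*f_m*t))."""
    t = np.arange(int(duration_s * sr)) / sr
    return np.sin(2 * np.pi * carrier_hz * t
                  + mod_index * np.sin(2 * np.pi * mod_hz * t))

def error_to_tone(alignment_error, max_error=10.0):
    """Hypothetical mapping for one DOF: larger error -> stronger modulation
    (rougher timbre); zero error -> a pure carrier tone."""
    idx = 5.0 * min(abs(alignment_error), max_error) / max_error
    return fm_tone(0.2, carrier_hz=440.0, mod_hz=110.0, mod_index=idx)

aligned = error_to_tone(0.0)   # pure 440 Hz sine: target reached
off     = error_to_tone(8.0)   # heavily modulated tone: large misalignment
```

In a four-DOF scheme, each degree of freedom would drive its own synthesis parameter, so the surgeon hears all alignment errors simultaneously without looking at a display.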
4.
J Imaging ; 9(3)2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36976107

ABSTRACT

The "Remote Interactive Surgery Platform" (RISP) is an augmented reality (AR)-based platform for surgical telementoring. It builds upon recent advances in mixed reality head-mounted displays (MR-HMD) and associated immersive visualization technologies to assist the surgeon during an operation. It enables interactive, real-time collaboration with a remote consultant by sharing the operating surgeon's field of view through the Microsoft (MS) HoloLens2 (HL2). Development of the RISP started during the Medical Augmented Reality Summer School 2021 and is still ongoing. It currently includes features such as three-dimensional annotations, bidirectional voice communication, and interactive windows to display radiographs within the sterile field. This manuscript provides an overview of the RISP and preliminary results regarding its annotation accuracy and user experience, measured with ten participants.

5.
IEEE Int Conf Robot Autom ; 2022: 7717-7723, 2022 May.
Article in English | MEDLINE | ID: mdl-36128019

ABSTRACT

Retinal surgery is a complex medical procedure that requires exceptional expertise and dexterity. For this purpose, several robotic platforms are currently under development to enable or improve the outcome of microsurgical tasks. Since the control of such robots is often designed for navigation inside the eye in proximity to the retina, successful trocar docking and insertion of the instrument into the eye represent an additional cognitive effort, and are therefore among the open challenges in robotic retinal surgery. To address this, we present a platform for autonomous trocar docking that combines computer vision and a robotic setup. Inspired by the Cuban colibri (hummingbird), which aligns its beak to a flower using vision alone, we mount a camera onto the end-effector of a robotic system. By estimating the position and pose of the trocar, the robot is able to autonomously align and navigate the instrument towards the Trocar Entry Point (TEP) and finally perform the insertion. Our experiments show that the proposed method accurately estimates the position and pose of the trocar and achieves repeatable autonomous docking. The aim of this work is to reduce the complexity of the robotic setup prior to the surgical task and thereby make the system's integration into the clinical workflow more intuitive.
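Once the trocar's entry point and axis are estimated in the camera frame, the alignment step reduces to rotating the instrument's approach axis onto the trocar axis. A generic sketch of that sub-step using Rodrigues' rotation formula, with hypothetical trocar coordinates (the paper's vision pipeline and robot kinematics are not shown):

```python
import numpy as np

def align_axis_to(target_dir):
    """Rotation matrix taking the tool's z-axis onto target_dir
    (Rodrigues' formula for the rotation between two unit vectors)."""
    z = np.array([0.0, 0.0, 1.0])
    d = target_dir / np.linalg.norm(target_dir)
    v = np.cross(z, d)               # rotation axis (unnormalized)
    c = np.dot(z, d)                 # cosine of the rotation angle
    if np.isclose(c, -1.0):          # antiparallel: rotate pi about x
        return np.diag([1.0, -1.0, -1.0])
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K / (1.0 + c)

# Hypothetical trocar entry point and axis, estimated in the camera frame.
tep = np.array([0.10, -0.03, 0.25])          # metres
trocar_axis = np.array([0.2, 0.1, 0.97])

R = align_axis_to(trocar_axis)
# The rotated z-axis now points along the trocar axis; translating the
# end-effector to `tep` would complete the alignment before insertion.
aligned_z = R @ np.array([0.0, 0.0, 1.0])
```

Position and orientation are handled separately here only for clarity; a real controller would servo both while keeping the camera's view of the trocar.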

6.
Sensors (Basel) ; 22(3)2022 Feb 02.
Article in English | MEDLINE | ID: mdl-35161880

ABSTRACT

Optical coherence tomography (OCT) is a medical imaging modality that is commonly used to diagnose retinal diseases. In recent years, linear and radial scanning patterns have been proposed to acquire three-dimensional OCT data. These patterns differ in A-scan acquisition density across the generated volumes, and thus in their suitability for the diagnosis of retinal diseases. While radial OCT volumes exhibit a higher A-scan sampling rate around the scan center, linear scans contain more information in the peripheral scan areas. In this paper, we propose a method to combine linearly and radially acquired OCT volumes into a single compound volume, which merges the advantages of both scanning patterns to increase the information that can be gained from the three-dimensional OCT data. We initially generate 3D point clouds from the linearly and radially acquired OCT volumes and use an Iterative Closest Point (ICP) variant to register both volumes. After registration, the compound volume is created by selectively exploiting linear and radial scanning data, depending on the A-scan density of the individual scans. By fusing regions from both volumes with respect to their local A-scan sampling density, we achieve improved overall anatomical OCT information in a high-resolution compound volume. We demonstrate our method on linear and radial OCT volumes for the visualization and analysis of macular holes and the surrounding anatomical structures.


Subject(s)
Retinal Perforations , Tomography, Optical Coherence , Humans
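The registration step above relies on ICP, which alternates between matching each source point to its nearest neighbor in the target cloud and solving for the best rigid transform of those matches (the Kabsch/SVD solution). A minimal point-to-point sketch on synthetic data follows; the paper uses an ICP variant whose specifics are not given in the abstract:

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(src, dst, iters=20):
    """Minimal point-to-point ICP: rigidly align src onto dst.
    Returns (R, t) such that src @ R.T + t is close to dst."""
    R, t = np.eye(3), np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        # 1. Correspondences: nearest neighbor in dst for each current point.
        _, idx = tree.query(cur)
        matched = dst[idx]
        # 2. Best rigid transform for these pairs (Kabsch / SVD).
        mu_s, mu_d = cur.mean(0), matched.mean(0)
        H = (cur - mu_s).T @ (matched - mu_d)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
        R_step = Vt.T @ D @ U.T
        t_step = mu_d - R_step @ mu_s
        # 3. Apply the incremental transform and accumulate the total pose.
        cur = cur @ R_step.T + t_step
        R, t = R_step @ R, R_step @ t + t_step
    return R, t

# Synthetic check: perturb a random cloud, then recover the pose.
rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 3))
angle = 0.1
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.03])
moved = pts @ R_true.T + t_true
R_est, t_est = icp(pts, moved)
```

After registration, the density-weighted fusion described above can sample each region of the compound volume from whichever scan (linear or radial) covers it more densely.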