Results 1 - 8 of 8
1.
IEEE Robot Autom Lett ; 9(5): 4154-4161, 2024 May.
Article in English | MEDLINE | ID: mdl-38550718

ABSTRACT

Subretinal injection is an effective method for direct delivery of therapeutic agents to treat prevalent subretinal diseases. Among the challenges for surgeons are physiological hand tremor, difficulty resolving single-micron-scale depth perception, and lack of tactile feedback. The recent introduction of intraoperative Optical Coherence Tomography (iOCT) enables precise depth information during subretinal surgery. However, even when relying on iOCT, achieving the required micron-scale precision remains a significant surgical challenge. This work presents a robot-assisted workflow for high-precision autonomous needle navigation for subretinal injection. The workflow includes online registration between robot and iOCT coordinates; tool-tip localization in iOCT coordinates using a Convolutional Neural Network (CNN); and a tool-tip planning and tracking system using real-time Model Predictive Control (MPC). The proposed workflow is validated using a silicone eye phantom and ex vivo porcine eyes. The experimental results demonstrate that both the mean error to reach the user-defined target and the mean procedure duration are within acceptable ranges. The proposed workflow achieves a 100% success rate for subretinal injection while maintaining scleral forces at the scleral insertion point below 15 mN throughout the navigation procedures.
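The abstract pairs CNN-based tool-tip localization with real-time MPC for needle tracking. As a minimal sketch of the receding-horizon idea only — not the authors' controller, and reduced to a 1-D integrator model with assumed parameters (`dt`, `lam`, `u_max` are illustrative) — one MPC step can be posed as a small regularized least-squares problem:

```python
import numpy as np

def mpc_first_input(x0, target, horizon=10, dt=0.05, lam=1e-3, u_max=0.5):
    """One receding-horizon step for the toy dynamics x_{k+1} = x_k + dt*u_k.

    Minimizes sum_k (x_k - target)^2 + lam * u_k^2 in closed form
    (regularized least squares), then returns only the FIRST input,
    clipped to the actuator limit.
    """
    L = np.tril(np.ones((horizon, horizon)))   # x_k = x0 + dt * (L @ u)_k
    A = dt * L
    b = np.full(horizon, target - x0)
    u = np.linalg.solve(A.T @ A + lam * np.eye(horizon), A.T @ b)
    return float(np.clip(u[0], -u_max, u_max))

def track(x0, target, steps=100, dt=0.05):
    """Closed-loop simulation: re-solve the MPC at every sample."""
    x = x0
    for _ in range(steps):
        x += dt * mpc_first_input(x, target)
    return x
```

Only the first optimized input is applied before the problem is re-solved at the next sample; that re-solving is what makes the scheme receding-horizon.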

2.
IEEE Int Conf Robot Autom ; 2023: 4724-4731, 2023.
Article in English | MEDLINE | ID: mdl-38125032

ABSTRACT

In the last decade, various robotic platforms have been introduced that could support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges. These include, but are not limited to, high demands on data processing rates and dynamic registration of these systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation, enabled by Convolutional Neural Networks (CNNs). Our experiments on ex vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss challenges identified in this work and suggest potential solutions to further the development of such systems.
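The "virtual B-scans" above are 2-D slices extracted from the iOCT volume at poses chosen to capture the instrument. A hedged sketch of the generic slicing operation — arbitrary-plane extraction with trilinear interpolation; the CNN-driven pose-selection logic from the paper is omitted, and all parameters are illustrative:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_bscan(volume, center, normal, size=(16, 16), spacing=1.0):
    """Sample a 2-D slice ("virtual B-scan") from a 3-D volume.

    The slice is centered at `center`, oriented perpendicular to
    `normal` (all in voxel/index coordinates), and sampled with
    trilinear interpolation via scipy's map_coordinates.
    """
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    # Build two in-plane axes orthogonal to the slice normal.
    helper = np.array([0.0, 0.0, 1.0]) if abs(n[2]) < 0.9 else np.array([1.0, 0.0, 0.0])
    u = np.cross(n, helper); u /= np.linalg.norm(u)
    v = np.cross(n, u)
    h, w = size
    ii, jj = np.meshgrid(np.arange(h) - h / 2, np.arange(w) - w / 2, indexing="ij")
    pts = (np.asarray(center, float)[:, None, None]
           + spacing * (ii * u[:, None, None] + jj * v[:, None, None]))
    return map_coordinates(volume, pts, order=1, mode="nearest")
```

Because only one thin slice is interpolated per query instead of re-processing the full volume, this kind of slicing is cheap enough for per-frame pose estimation.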

3.
IEEE Trans Med Robot Bionics ; 5(2): 230-241, 2023 May.
Article in English | MEDLINE | ID: mdl-38250652

ABSTRACT

Atherosclerosis is a medical condition that causes buildup of plaque in the blood vessels and narrowing of the arteries. Surgeons often treat this condition through angioplasty with catheter placements. Continuum guidewire robots offer significant advantages for catheter placements due to their dexterity. Tracking these guidewire robots and their surrounding workspace under fluoroscopy in real time can be useful for visualization and accurate control. This paper discusses algorithms and methods to track the shape and orientation of the guidewire and the surrounding workspaces of phantom vasculatures in real time under C-arm fluoroscopy. The shape of continuum guidewires is found through a semantic segmentation architecture based on MobileNetV2 with a Tversky loss function to deal with class imbalances. This shape is refined through medial-axis filtering and parametric curve fitting to quantitatively describe the guidewire's pose. Using a constant-curvature assumption for the guidewire's bending segments, the parameters that describe the joint variables are estimated in real time for a tendon-actuated COaxially Aligned STeerable (COAST) guidewire robot and tracked through its traversal of an aortic bifurcation phantom. The tracking accuracy is approximately 90% and execution times are within 100 ms, making this method suitable for real-time tracking.
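The Tversky loss mentioned above generalizes the Dice loss with separate weights for false negatives and false positives, which helps when foreground pixels (the thin guidewire) are vastly outnumbered by background. A minimal NumPy sketch with assumed default weights (the paper's exact values are not given here):

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    """Tversky loss for binary segmentation (soft masks in [0, 1]).

    alpha weighs false negatives, beta false positives; alpha > beta
    penalizes missed foreground pixels more heavily -- useful under
    class imbalance. alpha = beta = 0.5 recovers the Dice loss.
    """
    pred = np.asarray(pred, float).ravel()
    target = np.asarray(target, float).ravel()
    tp = np.sum(pred * target)
    fn = np.sum((1.0 - pred) * target)
    fp = np.sum(pred * (1.0 - target))
    return 1.0 - (tp + eps) / (tp + alpha * fn + beta * fp + eps)
```

In training, the same expression would be written in the deep-learning framework's tensor ops so gradients flow through `pred`.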

4.
IEEE Int Conf Robot Autom ; 2022: 7717-7723, 2022 May.
Article in English | MEDLINE | ID: mdl-36128019

ABSTRACT

Retinal surgery is a complex medical procedure that requires exceptional expertise and dexterity. For this purpose, several robotic platforms are currently under development to enable or improve the outcome of microsurgical tasks. Since the control of such robots is often designed for navigation inside the eye in proximity to the retina, successful trocar docking and insertion of the instrument into the eye represent an additional cognitive effort, and this is therefore one of the open challenges in robotic retinal surgery. To address it, we present a platform for autonomous trocar docking that combines computer vision and a robotic setup. Inspired by the Cuban Colibri (hummingbird), which aligns its beak to a flower using only vision, we mount a camera onto the end-effector of a robotic system. By estimating the position and pose of the trocar, the robot is able to autonomously align and navigate the instrument towards the Trocar Entry Point (TEP) and finally perform the insertion. Our experiments show that the proposed method is able to accurately estimate the position and pose of the trocar and achieve repeatable autonomous docking. The aim of this work is to reduce the complexity of the robotic setup prior to the surgical task and thereby increase the intuitiveness of integrating the system into the clinical workflow.
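One geometric building block of such vision-based alignment is the rotation that brings the instrument axis onto the estimated trocar axis. A sketch using Rodrigues' formula — a generic construction, not the authors' specific pipeline:

```python
import numpy as np

def rotation_between(a, b):
    """Rotation matrix mapping unit vector a onto unit vector b
    (Rodrigues' formula): e.g. rotate the instrument axis onto the
    estimated trocar axis before advancing toward the entry point.
    """
    a = np.asarray(a, float); a /= np.linalg.norm(a)
    b = np.asarray(b, float); b /= np.linalg.norm(b)
    v = np.cross(a, b)
    c = float(np.dot(a, b))
    if c < -1.0 + 1e-9:
        # Anti-parallel: 180-degree rotation about any axis perpendicular to a.
        p = np.array([0.0, 1.0, 0.0]) if abs(a[0]) > 0.9 else np.array([1.0, 0.0, 0.0])
        axis = np.cross(a, p)
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])        # cross-product matrix of v
    return np.eye(3) + K + (K @ K) / (1.0 + c)
```

In a full docking pipeline this rotation would be composed with the camera-to-robot hand-eye calibration before being commanded to the arm.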

5.
IEEE Robot Autom Lett ; 7(4): 11918-11925, 2022 Oct.
Article in English | MEDLINE | ID: mdl-36275193

ABSTRACT

Cardiovascular diseases are the leading cause of death globally and surgical treatments for these often begin with the manual placement of a long compliant wire, called a guidewire, through different vasculature. To improve procedure outcomes and reduce radiation exposure, we propose steps towards a fully automated approach for steerable guidewire navigation within vessels. In this paper, we utilize fluoroscopic images to fully reconstruct 3-D printed phantom vasculature models by using a shape-from-silhouette algorithm. The reconstruction is subsequently de-noised using a deep learning-based encoder-decoder network and morphological filtering. This volume is used to model the environment for guidewire traversal. Following this, we present a novel method to plan an optimal path for guidewire traversal in three-dimensional vascular models through the use of slice planes and a modified hybrid A-star algorithm. Finally, the developed reconstruction and planning approaches are applied to an ex vivo porcine aorta, and navigation is demonstrated through the use of a tendon-actuated COaxially Aligned STeerable guidewire (COAST).
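The paper's planner is a modified hybrid A* operating over slice planes of the reconstructed vessel volume. As a simplified stand-in that ignores the guidewire's bending constraints and works on a 2-D occupancy grid, a plain 4-connected A* illustrates the underlying search:

```python
import heapq
import itertools
import numpy as np

def astar_grid(occupancy, start, goal):
    """Plain 4-connected A* on a 2-D boolean occupancy grid -- a
    simplified stand-in for the modified hybrid A* described above.
    Returns the path as a list of (row, col) cells, or None.
    """
    rows, cols = occupancy.shape
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    tie = itertools.count()                                  # heap tiebreaker
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_best = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:
            continue                                         # already expanded
        came_from[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and not occupancy[nxt]:
                ng = g + 1
                if ng < g_best.get(nxt, float("inf")):
                    g_best[nxt] = ng
                    heapq.heappush(open_set, (ng + h(nxt), ng, next(tie), nxt, cur))
    return None
```

A hybrid variant would additionally carry heading and curvature in the state so that expansions respect the wire's minimum bend radius.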

6.
Rep U S ; 2021: 524-531, 2021.
Article in English | MEDLINE | ID: mdl-35223133

ABSTRACT

Real-time visual localization of needles is necessary for various surgical applications, including surgical automation and visual feedback. In this study, we investigate localization and autonomous robotic control of needles in the context of our magneto-suturing system. Our system holds the potential for surgical manipulation with the benefit of minimal invasiveness and reduced patient side effects. However, the nonlinear magnetic fields produce unintuitive forces and demand delicate position-based control that exceeds the capabilities of direct human manipulation. This makes automatic needle localization a necessity. Our localization method combines neural-network-based segmentation with classical techniques, and we are able to consistently locate our needle with 0.73 mm RMS error in clean environments and 2.72 mm RMS error in challenging environments with blood and occlusion. Averaged over all experimental environments, the localization RMS error is 2.16 mm. We combine this localization method with our closed-loop feedback control system to demonstrate the further applicability of localization to autonomous control. Our needle is able to follow a running suture path in (1) no blood, no tissue; (2) heavy blood, no tissue; (3) no blood, with tissue; and (4) heavy blood, with tissue environments. The tip position tracking error ranges from 2.6 mm to 3.7 mm RMS, opening the door towards autonomous suturing tasks.
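The closed-loop behavior — steering the needle tip through a sequence of suture waypoints and reporting an RMS tracking error — can be caricatured with a simple proportional law; the actual magnetic control law is far more involved, and the gain and tolerance here are purely illustrative:

```python
import numpy as np

def follow_waypoints(waypoints, x0, gain=0.4, tol=0.05, max_steps=1000):
    """Drive a point toward each waypoint in turn with the proportional
    update x <- x + gain * (wp - x). Returns the trajectory and the
    RMS of the per-step distance to the current waypoint.
    """
    x = np.asarray(x0, float).copy()
    traj, errs = [x.copy()], []
    for wp in np.asarray(waypoints, float):
        for _ in range(max_steps):
            err = wp - x
            errs.append(float(np.linalg.norm(err)))
            if errs[-1] < tol:
                break                      # close enough, next waypoint
            x = x + gain * err
            traj.append(x.copy())
    rms = float(np.sqrt(np.mean(np.square(errs))))
    return np.array(traj), rms
```

In the real system the update would go through the magnetic field model rather than acting on the position directly, and the error would come from the vision-based localizer.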

7.
IEEE Robot Autom Lett ; 6(3): 5261-5268, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34621980

ABSTRACT

The overarching goal of this work is to demonstrate the feasibility of using optical coherence tomography (OCT) to guide a robotic system to extract lens fragments from ex vivo pig eyes. A convolutional neural network (CNN) was developed to semantically segment four intraocular structures (lens material, capsule, cornea, and iris) from OCT images. The neural network was trained on images from ten pig eyes, validated on images from eight different eyes, and tested on images from another ten eyes. This segmentation algorithm was incorporated into the Intraocular Robotic Interventional Surgical System (IRISS) to realize semi-automated detection and extraction of lens material. To demonstrate the system, the semi-automated detection and extraction task was performed on seven separate ex vivo pig eyes. The developed neural network achieved a mean intersection-over-union of 78.20% on the validation set and 83.89% on the test set. Successful implementation and efficacy of the developed method were confirmed by comparing the preoperative and postoperative OCT volume scans from the seven experiments.
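The reported metric is mean intersection over union (mIoU) across the segmented classes. One plausible implementation for integer label maps (the paper's exact averaging convention is not specified here; classes absent from both masks are skipped):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union over classes for integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                     # skip classes absent from both
            ious.append(inter / union)
    return float(np.mean(ious))
```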

8.
IEEE Robot Autom Lett ; 5(3): 4859-4866, 2020 Jul.
Article in English | MEDLINE | ID: mdl-33880401

ABSTRACT

Untethered miniature robots have significant potential and promise in diverse minimally invasive medical applications inside the human body. For drug delivery and physical contraception applications inside tubular structures, it is desirable to have a miniature anchoring robot with a self-locking mechanism at a target tubular region. Moreover, the behavior of this robot should be tracked and feedback-controlled by a medical imaging-based system. As such a system has been unavailable, we report a reversible untethered anchoring robot design based on remote magnetic actuation. The current robot prototype measures 7.5 mm in diameter and 17.8 mm in length, and is made of soft polyurethane elastomer, photopolymer, and two tiny permanent magnets. Its relaxation and anchoring states can be maintained in a stable manner without supplying any control or actuation input. To control the robot's locomotion, we implement a two-dimensional (2D) ultrasound imaging-based tracking and control system, which automatically sweeps locally and updates the robot's position. With this system, we demonstrate that the robot can be controlled to follow a pre-defined 1D path with a maximum position error of 0.53 ± 0.05 mm inside a tubular phantom, where reversible anchoring could be achieved under the monitoring of ultrasound imaging.
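The "sweeps locally" tracking idea — re-detecting the robot only in a small window around its last known position instead of re-scanning the whole field — can be sketched in 1-D on an intensity profile (window size and detection-by-argmax are assumptions for illustration):

```python
import numpy as np

def local_sweep_update(intensity, last_idx, half_window=5):
    """Update a tracked position by searching only a small window of an
    intensity profile around the last known index. Cheaper than a full
    scan, and robust as long as the robot moves less than the window
    between consecutive updates.
    """
    lo = max(0, last_idx - half_window)
    hi = min(len(intensity), last_idx + half_window + 1)
    return lo + int(np.argmax(intensity[lo:hi]))
```

In the 2-D ultrasound setting the same principle applies with a small image patch and a proper robot detector in place of the 1-D argmax.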
