Results 1 - 20 of 212
1.
IEEE Robot Autom Lett ; 9(5): 4154-4161, 2024 May.
Article in English | MEDLINE | ID: mdl-38550718

ABSTRACT

Subretinal injection is an effective method for direct delivery of therapeutic agents to treat prevalent subretinal diseases. Among the challenges for surgeons are physiological hand tremor, difficulty resolving single-micron scale depth perception, and lack of tactile feedback. The recent introduction of intraoperative Optical Coherence Tomography (iOCT) provides precise depth information during subretinal surgery. However, even when relying on iOCT, achieving the required micron-scale precision remains a significant surgical challenge. This work presents a robot-assisted workflow for high-precision autonomous needle navigation for subretinal injection. The workflow includes online registration between robot and iOCT coordinates; tool-tip localization in iOCT coordinates using a Convolutional Neural Network (CNN); and a tool-tip planning and tracking system using real-time Model Predictive Control (MPC). The proposed workflow is validated using a silicone eye phantom and ex vivo porcine eyes. The experimental results demonstrate that the mean error in reaching the user-defined target and the mean procedure duration are within acceptable ranges. The proposed workflow achieves a 100% success rate for subretinal injection, while maintaining scleral forces at the scleral insertion point below 15 mN throughout the navigation procedures.
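The online registration between robot and iOCT coordinate frames is not detailed in the abstract; one common way to compute such a rigid transform from paired landmark points is the SVD-based (Kabsch) alignment sketched below. The function names and the synthetic data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def register_rigid(robot_pts: np.ndarray, ioct_pts: np.ndarray):
    """Estimate R, t mapping robot-frame points onto iOCT-frame points
    by least squares (Kabsch/SVD). Both arrays are N x 3, row-paired."""
    mu_r = robot_pts.mean(axis=0)
    mu_o = ioct_pts.mean(axis=0)
    H = (robot_pts - mu_r).T @ (ioct_pts - mu_o)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = mu_o - R @ mu_r
    return R, t

# Toy check: recover a known transform from noisy paired landmarks.
rng = np.random.default_rng(0)
pts = rng.uniform(-5, 5, size=(10, 3))               # robot-frame landmarks (mm)
R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthogonal matrix
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1                               # make it a proper rotation
t_true = np.array([1.0, -2.0, 0.5])
obs = pts @ R_true.T + t_true + rng.normal(scale=0.01, size=pts.shape)
R, t = register_rigid(pts, obs)
print("rotation error:", np.linalg.norm(R - R_true))
print("translation error (mm):", np.linalg.norm(t - t_true))
```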

2.
Int J Med Robot ; 20(1): e2618, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38536711

ABSTRACT

PURPOSE: This work presents the design and preliminary validation of a Magnetic Resonance (MR) conditional robot for lumbar injection for the treatment of lower back pain. METHODS: This is a 4-degree-of-freedom (DOF) robot that is 200 × 230 × 130 mm³ in volume and has a mass of 0.8 kg. Its light weight and compact size allow it to be affixed directly to the patient's back, establishing a rigid connection and thus reducing positional errors caused by patient movement during treatment. RESULTS: To validate the positioning accuracy of the needle by the robot, an electromagnetic (EM) tracking system and a needle with an EM sensor embedded in the tip were used for a free-space evaluation, yielding a position accuracy of 0.88 ± 0.46 mm, and for phantom mock insertions using the Loop-X CBCT scanner, yielding a target position accuracy of 3.62 ± 0.92 mm. CONCLUSION: Preliminary experiments demonstrated that the proposed robot improves on previous and existing systems in rotation range, flexible needle adjustment, and sensor protection, offering broader clinical applications.


Subject(s)
Robotics, Humans, Magnetic Resonance Imaging, Needles, Magnetic Resonance Spectroscopy, Spinal Injections
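As a small worked example of how a free-space accuracy figure such as 0.88 ± 0.46 mm can be computed from paired target and EM-sensed tip positions (the data below are hypothetical, not the study's measurements):

```python
import numpy as np

def position_error_stats(targets: np.ndarray, measured: np.ndarray):
    """Euclidean tip-position error per trial, reported as mean ± std (mm).
    Both arrays are N x 3 in the same (EM tracker) frame."""
    errors = np.linalg.norm(measured - targets, axis=1)
    return errors.mean(), errors.std(ddof=1)

# Hypothetical trials: commanded targets vs. EM-sensed tip positions (mm).
targets = np.array([[10.0, 0.0, 5.0], [12.0, 3.0, 5.0], [8.0, -2.0, 6.0]])
measured = targets + np.array([[0.4, -0.3, 0.5], [-0.6, 0.2, 0.4], [0.3, 0.5, -0.2]])
mean_err, std_err = position_error_stats(targets, measured)
print(f"free-space accuracy: {mean_err:.2f} ± {std_err:.2f} mm")
```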
3.
IEEE Trans Med Robot Bionics ; 6(1): 135-145, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38304756

ABSTRACT

Subretinal injection methods and other procedures for treating retinal conditions and diseases (many considered incurable) have been limited in scope due to limited human motor control. This study demonstrates the next-generation, cooperatively controlled Steady-Hand Eye Robot (SHER 3.0), a precise and intuitive-to-use robotic platform achieving clinical standards for targeting accuracy and resolution for subretinal injections. The system design and basic kinematics are reported, and a deflection model for the incorporated delta stage, together with validation experiments, is presented. This model optimizes the delta stage parameters, maximizing the global conditioning index and minimizing torsional compliance. Five tests measuring accuracy, repeatability, and deflection show that the optimized stage design achieves a tip accuracy of < 30 µm, tip repeatability of 9.3 µm and 0.02°, and deflections of 20-350 µm/N. Future work will use updated control models to refine tip-positioning outcomes, and the system will be tested on in vivo animal models.
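The abstract mentions maximizing the global conditioning index of the delta stage. A minimal sketch of how such an index is commonly approximated — averaging the inverse condition number of a Jacobian over sampled configurations — is shown below; the toy Jacobian is a placeholder, since the SHER 3.0 / delta-stage kinematics are not given here.

```python
import numpy as np

def global_conditioning_index(jacobian_fn, samples):
    """Approximate GCI: mean of 1/cond(J) over sampled joint configurations.
    Values near 1 indicate uniformly well-conditioned (isotropic) kinematics."""
    vals = []
    for q in samples:
        s = np.linalg.svd(jacobian_fn(q), compute_uv=False)
        vals.append(s.min() / s.max())        # inverse condition number in [0, 1]
    return float(np.mean(vals))

def toy_jacobian(q, link=(0.1, 0.1, 0.05)):
    """Planar 3R Jacobian (x, y velocity rows), purely illustrative."""
    l1, l2, l3 = link
    q1, q2, q3 = q
    s1, s12, s123 = np.sin(q1), np.sin(q1 + q2), np.sin(q1 + q2 + q3)
    c1, c12, c123 = np.cos(q1), np.cos(q1 + q2), np.cos(q1 + q2 + q3)
    return np.array([
        [-l1*s1 - l2*s12 - l3*s123, -l2*s12 - l3*s123, -l3*s123],
        [ l1*c1 + l2*c12 + l3*c123,  l2*c12 + l3*c123,  l3*c123],
    ])

rng = np.random.default_rng(1)
samples = rng.uniform(-np.pi/3, np.pi/3, size=(200, 3))
print("GCI ~", global_conditioning_index(toy_jacobian, samples))
```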

4.
Adv Sci (Weinh) ; 11(7): e2305495, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38072667

ABSTRACT

Magnetic resonance imaging (MRI) demonstrates clear advantages over other imaging modalities in neurosurgery with its ability to delineate critical neurovascular structures and cancerous tissue in high-resolution 3D anatomical roadmaps. However, its application has been limited to interventions performed based on static pre/post-operative imaging, where errors accrue from stereotactic frame setup, image registration, and brain shift. To leverage the powerful intra-operative functions of MRI, e.g., instrument tracking and monitoring of physiological changes and tissue temperature in MRI-guided bilateral stereotactic neurosurgery, a multi-stage robotic positioner is proposed. The system positions cannula/needle instruments using a lightweight (203 g) and compact (Ø97 × 81 mm) skull-mounted structure that fits within most standard imaging head coils. With an optimized soft-robotic design, the system operates in two stages: i) manual coarse adjustment performed interactively by the surgeon (workspace of ±30°); ii) automatic fine adjustment with precise (<0.2° orientation error), responsive (1.4 Hz bandwidth), and high-resolution (0.058°) soft robotic positioning. Orientation locking provides sufficient transmission stiffness (4.07 N/mm) for instrument advancement. The system's clinical workflow and accuracy are validated with lab-based (<0.8 mm) and MRI-based testing on skull phantoms (<1.7 mm) and a cadaver subject (<2.2 mm). Custom-made wireless omni-directional tracking markers facilitate robot registration under MRI.


Subject(s)
Neurosurgery, Robotics, Neurosurgical Procedures/methods, Brain, Magnetic Resonance Imaging/methods
5.
IEEE Int Conf Robot Autom ; 2023: 4661-4667, 2023.
Article in English | MEDLINE | ID: mdl-38107423

ABSTRACT

Important challenges in retinal microsurgery include prolonged operating time, inadequate force feedback, and poor depth perception due to a constrained top-down view of the surgery. The introduction of robot-assisted technology could potentially deal with such challenges and improve the surgeon's performance. Motivated by such challenges, this work develops a strategy for autonomous needle navigation in retinal microsurgery aiming to achieve precise manipulation, reduced end-to-end surgery time, and enhanced safety. This is accomplished through real-time geometry estimation and chance-constrained Model Predictive Control (MPC), resulting in high positional accuracy while keeping scleral forces within a safe level. The robotic system is validated using both open-sky and intact (with lens and partial vitreous removal) ex vivo porcine eyes. The experimental results demonstrate that the generation of safe control trajectories is robust to small motions associated with head drift. The mean navigation time and scleral force for MPC navigation experiments are 7.208 s and 11.97 mN, values that are efficient and well within acceptable safety limits. The resulting mean errors along lateral directions of the retina are below 0.06 mm, which is below the typical hand tremor amplitude in retinal microsurgery.
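The chance-constrained MPC formulation is not reproducible from the abstract alone; the sketch below shows a much-simplified, deterministic receding-horizon controller for a point-kinematics tip model, with a per-step speed bound standing in for the scleral-force constraint. All dynamics, gains, and bounds are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def mpc_step(x0, target, horizon=10, dt=0.1, v_max=0.5, w_u=0.01):
    """One MPC solve for a point-kinematics tip model x_{k+1} = x_k + u_k*dt.
    Minimizes tracking cost over the horizon with a per-step speed bound,
    a crude stand-in for the paper's chance constraints on scleral force."""
    n = 3

    def rollout(u_flat):
        u = u_flat.reshape(horizon, n)
        x, cost = np.array(x0, dtype=float), 0.0
        for k in range(horizon):
            x = x + u[k] * dt
            cost += np.sum((x - target) ** 2) + w_u * np.sum(u[k] ** 2)
        return cost

    u0 = np.zeros(horizon * n)
    bounds = [(-v_max, v_max)] * (horizon * n)
    res = minimize(rollout, u0, method="L-BFGS-B", bounds=bounds)
    return res.x.reshape(horizon, n)[0]      # apply only the first control

x = np.array([0.0, 0.0, 0.0])                # current tip position (mm)
goal = np.array([1.0, 0.5, -2.0])            # user-defined target (mm)
for _ in range(40):                           # receding-horizon loop
    u = mpc_step(x, goal)
    x = x + u * 0.1
print("final tip error (mm):", np.linalg.norm(x - goal))
```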

6.
IEEE Int Conf Robot Autom ; 2023: 4724-4731, 2023.
Article in English | MEDLINE | ID: mdl-38125032

ABSTRACT

In the last decade, various robotic platforms have been introduced that could support delicate retinal surgeries. Concurrently, to provide semantic understanding of the surgical area, recent advances have enabled microscope-integrated intraoperative Optical Coherence Tomography (iOCT) with high-resolution 3D imaging at near video rate. The combination of robotics and semantic understanding enables task autonomy in robotic retinal surgery, such as for subretinal injection. This procedure requires precise needle insertion for best treatment outcomes. However, merging robotic systems with iOCT introduces new challenges. These include, but are not limited to, high demands on data processing rates and the need for dynamic registration of the two systems during the procedure. In this work, we propose a framework for autonomous robotic navigation for subretinal injection, based on intelligent real-time processing of iOCT volumes. Our framework consists of an instrument pose estimation method, an online registration between the robotic and iOCT systems, and trajectory planning tailored for navigation to an injection target. We also introduce intelligent virtual B-scans, a volume slicing approach for rapid instrument pose estimation, which is enabled by Convolutional Neural Networks (CNNs). Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method. Finally, we discuss identified challenges in this work and suggest potential solutions to further the development of such systems.
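A "virtual B-scan" is essentially an arbitrarily oriented 2D slice resampled from the iOCT volume. Below is a minimal sketch of such a slicing step; the plane parameterization, sizes, and toy volume are assumptions, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def virtual_bscan(volume, center, u_axis, v_axis, size=(128, 128), spacing=1.0):
    """Resample a 2D slice ("virtual B-scan") from a 3D volume.
    center: 3-vector (voxel coords); u_axis/v_axis: orthonormal in-plane directions."""
    h, w = size
    us = (np.arange(w) - w / 2) * spacing
    vs = (np.arange(h) - h / 2) * spacing
    uu, vv = np.meshgrid(us, vs)
    # Voxel coordinates of every slice pixel: center + u*u_axis + v*v_axis
    coords = (center[:, None, None]
              + uu[None] * np.asarray(u_axis)[:, None, None]
              + vv[None] * np.asarray(v_axis)[:, None, None])
    return map_coordinates(volume, coords, order=1, mode="nearest")

# Toy iOCT volume with a bright plane; slice it along a chosen plane.
vol = np.zeros((64, 64, 64), dtype=np.float32)
vol[:, 30, :] = 1.0
bscan = virtual_bscan(vol,
                      center=np.array([32.0, 30.0, 32.0]),
                      u_axis=np.array([1.0, 0.0, 0.0]),
                      v_axis=np.array([0.0, 0.0, 1.0]),
                      size=(64, 64))
print(bscan.shape, bscan.max())
```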

7.
Int Symp Med Robot ; 2023, 2023 Apr.
Article in English | MEDLINE | ID: mdl-38031531

ABSTRACT

This paper investigates the possibility of robotically performing in situ needle manipulations to correct the needle tip position in the setting of robot-assisted, MRI-guided spinal injections, where real-time MRI images cannot be effectively used to guide the needle. Open-loop control of the needle tip is derived from finite element simulation, and the proposed method is tested with ex vivo animal muscle tissues and validated by cone beam computed tomography. Preliminary results show promise for performing needle tip correction in situ to improve needle insertion accuracy when real-time feedback is not readily available.

8.
Robotica ; 41(5): 1536-1549, 2023 May.
Article in English | MEDLINE | ID: mdl-37982126

ABSTRACT

Retinal surgery is widely considered to be a complicated and challenging task even for specialists. Image-guided robot-assisted intervention is among the novel and promising solutions that may enhance human capabilities therein. In this paper, we demonstrate the possibility of using spotlights for 5D guidance of a microsurgical instrument. The theoretical basis of instrument localization from the projection of a single spotlight is analyzed to deduce the position and orientation of the spotlight source. The use of multiple spotlights is also proposed to explore possible further improvements in the performance limits. The proposed method is verified within a high-fidelity simulation environment using the 3D creation suite Blender. Experimental results show that the average positioning error is 0.029 mm using a single spotlight and 0.025 mm with three spotlights, while the rotational errors are 0.124 and 0.101, respectively, indicating that the approach is promising for instrument localization in retinal surgery.

9.
ArXiv ; 2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37731661

ABSTRACT

Flexible needle insertion procedures are common in minimally invasive surgery for diagnosing and treating prostate cancer. Bevel-tip needles give physicians the ability to steer the needle during long insertions to avoid vital anatomical structures and reduce post-operative patient discomfort. To provide needle placement feedback to the physician, sensors are embedded into needles to determine the real-time 3D shape of the needle during operation without needing to visualize it intra-operatively. Extensive research in fiber optics has produced a range of biocompatible, MRI-compatible optical shape sensors that provide real-time shape feedback, such as single-core and multicore fiber Bragg gratings. In this paper, we directly compare single-core fiber-based and multicore fiber-based needle shape sensing using identically constructed, four-active-area sensorized bevel-tip needles inserted into phantom and ex-vivo tissue on the same experimental platform. We found that for shape sensing in phantom tissue, the two needles showed no statistically significant difference in performance (p = 0.164), whereas in ex-vivo real tissue the single-core fiber sensorized needle significantly outperformed the multicore fiber configuration (p = 0.0005). This paper also presents the experimental platform and method for directly comparing these optical shape sensors for the needle shape-sensing task, and provides direction, insight, and considerations for future work on constructively optimizing sensorized needles.
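The abstract reports p-values but does not name the statistical test; a paired comparison of per-insertion errors, for example with a paired t-test on hypothetical data, could look like the following sketch.

```python
import numpy as np
from scipy import stats

# Hypothetical per-insertion shape errors (mm) for the two sensorized needles,
# paired by insertion trial on the same experimental platform.
single_core = np.array([0.52, 0.61, 0.48, 0.70, 0.55, 0.66, 0.59, 0.50])
multicore   = np.array([0.58, 0.66, 0.47, 0.75, 0.60, 0.73, 0.64, 0.57])

t_stat, p_value = stats.ttest_rel(single_core, multicore)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("difference between fiber configurations is statistically significant")
else:
    print("no statistically significant difference at the 0.05 level")
```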

10.
IEEE Robot Autom Lett ; 8(3): 1343-1350, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37637101

ABSTRACT

An in situ needle manipulation technique used by physicians when performing spinal injections is modeled to study its effect on needle shape and needle tip position. A mechanics-based model is proposed and solved using the finite element method. A test setup is presented to mimic the needle manipulation motion. Tissue phantoms made from plastisol, as well as porcine skeletal muscle samples, are used to evaluate the model accuracy against medical images. The effects of different compression models and model parameters on model accuracy are studied, and the effect of needle-tissue interaction on the needle remote center of motion is examined. With the correct combination of compression model and model parameters, the model simulation is able to predict needle tip position with submillimeter accuracy.

11.
Med Phys ; 50(10): 6433-6453, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37633836

ABSTRACT

BACKGROUND: Widely used cone-beam computed tomography (CBCT)-guided irradiators have limitations in localizing soft tissue targets growing in a low-contrast environment. This hinders small animal irradiators from achieving precise focal irradiation. PURPOSE: To advance image guidance for soft tissue targeting, we developed a commercial-grade bioluminescence tomography-guided system (BLT, MuriGlo) for pre-clinical radiation research. We characterized the system performance and demonstrated its capability in target localization. We expect this study to provide a comprehensive guideline for the community in utilizing the BLT system for radiation studies. METHODS: MuriGlo consists of four mirrors, filters, a lens, and a charge-coupled device (CCD) camera, enabling a compact imaging platform and multi-projection, multi-spectral BLT. A newly developed mouse bed allows animals to be imaged in MuriGlo and transferred to a small animal radiation research platform (SARRP) for CBCT imaging and BLT-guided irradiation. Methods and tools were developed to evaluate the CCD response linearity, minimal detectable signal, focusing, spatial resolution, distortion, and uniformity. A transparent polycarbonate plate covering the middle of the mouse bed was used to support and image animals from underneath the bed. We investigated its effect on 2D bioluminescence images and 3D BLT reconstruction accuracy, and studied its dosimetric impact along with the rest of the mouse bed. A method based on the pinhole camera model was developed to map multi-projection bioluminescence images onto the object surface generated from the CBCT image. The mapped bioluminescence images were used as the input data for the optical reconstruction. To account for free-space light propagation from the object surface to the optical detector, a spectral derivative (SD) method was implemented for BLT reconstruction. We assessed the use of the SD data (ratio imaging of adjacent wavelengths) in mitigating the defocus and non-uniformity seen in the images. A mouse phantom was used to validate the data mapping. The phantom and an in vivo glioblastoma model were utilized to demonstrate the accuracy of BLT target localization. RESULTS: The CCD response shows good linearity with < 0.6% residual from a linear fit. The minimal detectable level is 972 counts at 10 × 10 binning. The focal plane position is within the range of 13-18 mm above the mouse bed. The spatial resolution of 2D optical imaging is < 0.3 mm at the Rayleigh criterion. Within the region of interest, the image uniformity is within 5% variation, and the image shift due to distortion is within 0.3 mm. The transparent plate caused < 6% light attenuation. The use of the SD imaging data can effectively mitigate defocus, image non-uniformity, and the plate attenuation, supporting accurate multi-spectral BLT reconstruction. The bed causes < 0.5% attenuation of the delivered dose. The accuracy of data mapping from the 2D bioluminescence images to the CBCT image is within 0.7 mm. Our phantom test shows the BLT system can localize a bioluminescent target within 1 mm with an optimal threshold, and only 0.2 mm deviation was observed between the cases with and without the transparent plate. The same localization accuracy is maintained for the in vivo GBM model. CONCLUSIONS: This work is the first systematic study characterizing the commercial BLT-guided system. The information and methods developed will be useful for the community in utilizing the imaging system for image-guided radiation research.
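One building block named above is the pinhole-camera mapping of bioluminescence images onto the CBCT-derived surface. A generic sketch of projecting surface points into an image with a pinhole model and sampling intensities is given below; the calibration matrices and data are placeholders, not the MuriGlo calibration.

```python
import numpy as np

def project_pinhole(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole model:
    p ~ K (R X + t). Returns Nx2 pixel coords and the camera-frame depth."""
    cam = points_3d @ R.T + t                 # world -> camera frame
    z = cam[:, 2]
    uv = cam @ K.T                            # apply intrinsics
    return uv[:, :2] / z[:, None], z

def sample_image(image, uv):
    """Nearest-neighbour lookup of image intensity at projected pixels."""
    u = np.clip(np.round(uv[:, 0]).astype(int), 0, image.shape[1] - 1)
    v = np.clip(np.round(uv[:, 1]).astype(int), 0, image.shape[0] - 1)
    return image[v, u]

# Placeholder calibration: focal length 1200 px, principal point at image centre.
K = np.array([[1200.0, 0.0, 256.0],
              [0.0, 1200.0, 256.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 100.0])   # surface ~100 mm from camera
surface_pts = np.array([[0.0, 0.0, 0.0], [5.0, -3.0, 1.0], [-4.0, 2.0, -1.0]])
image = np.random.default_rng(2).random((512, 512)).astype(np.float32)
uv, depth = project_pinhole(surface_pts, K, R, t)
print(sample_image(image, uv))   # intensity assigned to each surface vertex
```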

12.
Int Symp Med Robot ; 2023, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37292169

ABSTRACT

Bevel-tip needles are commonly utilized in percutaneous medical interventions where a curved insertion trajectory is required. To avoid deviation from the intended trajectory, needle shape sensing and tip localization are crucial for providing the operator with feedback. There is an abundance of previous work investigating the medical application of fiber Bragg grating (FBG) sensors, but most works select only one specific type of fiber among the many available sensor options to integrate into their hardware designs. In this work, we compare two different types of FBG sensors under identical conditions and the same application, namely, acting as the sensor for needle insertion shape reconstruction. We built a three-channel single-core needle and a seven-channel multicore fiber (MCF) needle and discuss the pros and cons of both constructions for shape-sensing experiments in constant-curvature jigs. The overall needle tip error is 1.23 mm for the single-core needle and 2.08 mm for the multicore needle.

13.
IEEE Trans Robot ; 39(2): 1373-1387, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37377922

ABSTRACT

Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from the robots relies heavily on accurate sensing of surgery states (e.g., instrument tip localization and tool-to-tissue interaction forces). Many of the existing tool tip localization methods require preoperative frame registrations or instrument calibrations. In this study, using an iterative approach that combines vision- and force-based methods, we develop calibration- and registration-independent (RI) algorithms to provide online estimates of instrument stiffness (least squares and adaptive). The estimates are then combined with a state-space model based on the forward kinematics (FWK) of the Steady-Hand Eye Robot (SHER) and Fiber Bragg Grating (FBG) sensor measurements. This is accomplished using a Kalman Filtering (KF) approach to improve the deflected instrument tip position estimates during robot-assisted eye surgery. The conducted experiments demonstrate that when the online RI stiffness estimates are used, the instrument tip localization results surpass those obtained from pre-operative offline stiffness calibrations.
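A minimal linear Kalman filter of the kind alluded to above — fusing a forward-kinematics motion prediction with an FBG-derived tip measurement — is sketched below. The state/measurement models and noise levels are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

class TipKalmanFilter:
    """Minimal linear Kalman filter: state = 3D tip position; prediction uses
    the robot's forward-kinematics increment, update uses an FBG-derived tip
    position measurement (identity state-transition and measurement models)."""
    def __init__(self, q=1e-4, r=1e-2):
        self.x = np.zeros(3)              # tip position estimate (mm)
        self.P = np.eye(3)                # estimate covariance
        self.Q = q * np.eye(3)            # process noise
        self.R = r * np.eye(3)            # measurement noise

    def predict(self, delta_fwk):
        # Propagate with the forward-kinematics increment of the robot.
        self.x = self.x + delta_fwk
        self.P = self.P + self.Q

    def update(self, z):
        # z: tip position implied by the FBG deflection measurement (mm).
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)     # Kalman gain (H = I)
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(3) - K) @ self.P

kf = TipKalmanFilter()
rng = np.random.default_rng(3)
true_tip = np.zeros(3)
for _ in range(50):
    step = np.array([0.02, 0.0, -0.01])             # commanded motion per cycle
    true_tip = true_tip + step
    kf.predict(step + rng.normal(scale=1e-3, size=3))
    kf.update(true_tip + rng.normal(scale=0.05, size=3))
print("estimation error (mm):", np.linalg.norm(kf.x - true_tip))
```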

14.
Med Phys ; 50(6): 3418-3434, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36841948

ABSTRACT

BACKGROUND: In breast CT, scattered photons form a large portion of the acquired signal, adversely impacting image quality throughout the frequency response of the imaging system. Prior studies provided evidence for a new image acquisition design, dubbed Narrow Beam Breast CT (NB-bCT), for preventing scatter acquisition. PURPOSE: Here, we report the design, implementation, and initial characterization of the first NB-bCT prototype. METHODS: The imaging system's apparatus is composed of two primary assemblies: a dynamic fluence modulator (collimator) and a photon-counting line detector. The design of the assemblies enables them to operate in lockstep during image acquisition, converting sourced x-rays into a moving narrow beam. During a projection, this narrow beam sweeps the entire fan angle coverage of the imaging system. Each assembly comprises a metal housing, a sensory system, and a robotic system. A controller unit handles their relative movements. To study the impact of fluence modulation on the signal received at the detector, three physical breast phantoms, representative of small, average, and large breasts, were developed and imaged, and the acquired projections were analyzed. The scatter acquisition in each projection as a function of breast phantom size was investigated. The imaging system's spatial resolution at the center and periphery of the field of view was measured. RESULTS: Minimal acquisition of scattered rays occurs during image acquisition with NB-bCT; the resulting minimal scatter-to-primary ratios for the small, average, and large breast phantoms were 0.05, 0.07, and 0.9, respectively. A system spatial resolution of 5.2 lp/mm at 10% max MTF and 2.9 lp/mm at 50% max MTF was achieved at the center of the field of view, with minimal loss toward the corner (5.0 lp/mm at 10% max MTF and 2.5 lp/mm at 50% max MTF). CONCLUSION: The disclosed development, implementation, and characterization of a physical NB-bCT prototype system demonstrates a new method of CT-based image acquisition that yields high spatial resolution while minimizing scatter components in the acquired projections. This methodology holds promise for high-resolution CT imaging applications in which reduction of scatter contamination is desirable.


Subject(s)
X-Ray Computed Tomography, X-Ray Computed Tomography/methods, Imaging Phantoms, Radiation Scattering
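As a worked example of how resolution figures such as "5.2 lp/mm at 10% max MTF" are read off a measured MTF curve by interpolation (the curve below is synthetic, not the prototype's measurement):

```python
import numpy as np

def resolution_at_mtf(freqs_lpmm, mtf, level):
    """Spatial frequency (lp/mm) at which a monotonically decreasing MTF
    drops to `level` (e.g. 0.10 or 0.50) of its maximum."""
    mtf = np.asarray(mtf) / np.max(mtf)
    # Interpolate frequency as a function of MTF; flip so the MTF axis increases.
    return float(np.interp(level, mtf[::-1], freqs_lpmm[::-1]))

# Synthetic MTF curve (Gaussian-like fall-off), purely illustrative.
freqs = np.linspace(0.0, 8.0, 200)           # lp/mm
mtf = np.exp(-(freqs / 3.4) ** 2)
print(f"{resolution_at_mtf(freqs, mtf, 0.50):.1f} lp/mm at 50% MTF")
print(f"{resolution_at_mtf(freqs, mtf, 0.10):.1f} lp/mm at 10% MTF")
```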
15.
IEEE Sens J ; 23(12): 12915-12929, 2023 Jun 15.
Article in English | MEDLINE | ID: mdl-38558829

ABSTRACT

Continuum dexterous manipulators (CDMs) are suitable for performing tasks in constrained environments due to their high dexterity and maneuverability. Despite the inherent advantages of CDMs in minimally invasive surgery, real-time control of a CDM's shape during nonconstant-curvature bending is still challenging. This study presents a novel approach for the design and fabrication of a large-deflection fiber Bragg grating (FBG) shape sensor embedded within the lumens inside the walls of a CDM with a large instrument channel. The shape sensor consisted of two fibers, each with three FBG nodes. A shape-sensing model was introduced to reconstruct the centerline of the CDM based on FBG wavelengths. Different experiments, including shape sensor tests and CDM shape reconstruction tests, were conducted to assess the overall shape-sensing accuracy. The FBG sensor evaluation revealed a linear curvature-wavelength relationship, with large-curvature detection of 0.045 mm and a wavelength shift of up to 5.50 nm at a 90° bending angle in both bending directions. The CDM shape reconstruction experiments in a free environment demonstrated a shape-tracking accuracy of 0.216 ± 0.126 mm for positive/negative deflections. The CDM shape reconstruction error for three cases of bending with obstacles was 0.436 ± 0.370 mm for the proximal case, 0.485 ± 0.418 mm for the middle case, and 0.312 ± 0.261 mm for the distal case. This study indicates the adequate performance of the FBG sensor and the effectiveness of the model for tracking the shape of a large-deflection CDM with nonconstant-curvature bending for minimally invasive orthopedic applications.
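Below is a minimal sketch of the two-step idea behind such FBG shape sensing: a linear wavelength-to-curvature calibration followed by integration of curvature along arc length to reconstruct a (here planar) centerline. The calibration constant, node spacings, and wavelength shifts are hypothetical, not the paper's values.

```python
import numpy as np

def curvature_from_wavelength(dlambda_nm, k_cal=0.012):
    """Linear calibration: curvature (1/mm) ~ k_cal * wavelength shift (nm).
    k_cal is a placeholder calibration constant, found experimentally."""
    return k_cal * np.asarray(dlambda_nm)

def reconstruct_centerline(curvatures, node_s, total_len, ds=0.5):
    """Planar centerline from sparse curvature samples at arc lengths node_s:
    interpolate curvature along the CDM, then integrate the tangent angle."""
    s = np.arange(0.0, total_len + ds, ds)
    kappa = np.interp(s, node_s, curvatures)
    theta = np.cumsum(kappa) * ds                 # bending angle along length
    x = np.cumsum(np.cos(theta)) * ds
    y = np.cumsum(np.sin(theta)) * ds
    return np.column_stack([x, y])

# Hypothetical wavelength shifts (nm) at three FBG nodes along a 35 mm CDM.
dlam = [1.8, 2.6, 3.1]
node_positions = [8.0, 18.0, 28.0]                # arc length of each node (mm)
kappa_nodes = curvature_from_wavelength(dlam)
centerline = reconstruct_centerline(kappa_nodes, node_positions, total_len=35.0)
tip = centerline[-1]
print(f"reconstructed tip at ({tip[0]:.1f}, {tip[1]:.1f}) mm")
```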

16.
ROMAN ; 2023: 2359-2365, 2023 Aug.
Article in English | MEDLINE | ID: mdl-38347956

ABSTRACT

Cooperative robots for intraocular surgery allow surgeons to perform vitreoretinal surgery with high precision and stability. Several robot structural designs have shown capabilities to perform these surgeries. This research investigates the comparative performance of a serial and a parallel cooperatively controlled robot in completing a retinal vessel-following task, with a focus on human-robot interaction performance and user experience. Our results indicate that despite differences in robot structure and in interaction forces and torques, the two robots exhibited similar levels of performance in terms of general robot-to-patient interaction and average operating time. These findings have implications for the development and implementation of surgical robotics, suggesting that both serial and parallel cooperatively controlled robots can be effective for vitreoretinal surgery tasks.

17.
Front Oncol ; 12: 996537, 2022.
Article in English | MEDLINE | ID: mdl-36237341

ABSTRACT

Purpose: In this study, we aim to further evaluate the accuracy of ultrasound tracking of intra-fraction pancreatic tumor motion during radiotherapy in a phantom-based study. Methods: Twelve patients with pancreatic cancer who were treated with stereotactic body radiation therapy were enrolled in this study. The displacement points of the respiratory cycle were acquired from 4DCT and transferred to a motion platform to mimic realistic breathing movements in our phantom study. An ultrasound abdominal phantom was placed and fixed on the motion platform. The ground truth of phantom movement was recorded by tracking an optical tracker attached to the phantom. A tumor inside the phantom was the tracking target. In the evaluation, the monitoring results from the ultrasound system were compared with the phantom motion recorded by the infrared camera. Differences between the infrared-monitored motion and the ultrasound-tracked motion were analyzed by calculating the root-mean-square error. Results: For 82.2% of the ultrasound-tracked motion, the difference between the ultrasound-tracked displacement and the infrared-monitored motion was within 0.5 mm; 0.7% of the ultrasound tracking failed to track accurately (difference > 2.5 mm). By linear regression analysis, these differences between ultrasound-tracked and infrared-monitored motion do not correlate with respiratory displacement, velocity, or acceleration. Conclusions: The highly accurate monitoring results of this phantom study indicate that the ultrasound tracking system is a potential method for real-time target monitoring, allowing more accurate delivery of radiation doses.
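A short sketch of the evaluation computations described above — absolute differences, the root-mean-square error, and a linear-regression check of error against respiratory velocity — on synthetic traces standing in for the ultrasound and infrared signals:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
t = np.linspace(0.0, 60.0, 1500)                      # 60 s at 25 Hz
infrared = 5.0 * np.sin(2 * np.pi * t / 4.0)          # ground-truth motion (mm)
ultrasound = infrared + rng.normal(scale=0.3, size=t.size)  # tracked motion (mm)

diff = np.abs(ultrasound - infrared)
rmse = np.sqrt(np.mean((ultrasound - infrared) ** 2))
print(f"RMSE = {rmse:.2f} mm")
print(f"fraction within 0.5 mm: {np.mean(diff <= 0.5):.1%}")
print(f"fraction of tracking failures (>2.5 mm): {np.mean(diff > 2.5):.2%}")

# Does the tracking error correlate with respiratory velocity? (linear regression)
velocity = np.gradient(infrared, t)
slope, intercept, r, p, _ = stats.linregress(velocity, diff)
print(f"error vs. velocity: r = {r:.3f}, p = {p:.3f}")
```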

18.
Int Symp Med Robot ; 2022, 2022 Apr.
Article in English | MEDLINE | ID: mdl-36212509

ABSTRACT

Vitreoretinal surgery requires dexterity and force sensitivity from the clinician. A system to cooperatively control an integrated surgical robot for high dexterity manipulation within the eye's vitreous space was developed and validated in simulation. The system is composed of a 2 degrees of freedom (DoF) snake-like continuum manipulator that is attached to the end-effector of a 5-DoF rigid robot arm. It is capable of receiving position and orientation commands from a 5-DoF input device in real-time, as well as following pre-planned trajectories. The manipulator is moved to each target pose in real-time, using an optimization method to calculate the inverse kinematics solution. Constraints on the position and orientation ensure the target pose does not harm the patient within the vitreous space, enabling the robot to safely assist the clinician with vitreoretinal surgery when operating in real-time. The simulation demonstrates the system's feasibility and benefits over the existing non-dexterous system.
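The abstract describes resolving target poses into joint commands by optimization with safety constraints. A much-simplified sketch using a toy planar chain (standing in for the 5-DOF arm plus 2-DoF continuum manipulator, whose kinematics are not given here) and a generic workspace constraint:

```python
import numpy as np
from scipy.optimize import minimize

def fk_planar(q, links=(50.0, 40.0, 25.0)):
    """Tip position of a toy planar 3R chain (mm), standing in for the
    rigid-arm + continuum-segment kinematics, which are not given here."""
    angles = np.cumsum(q)
    x = np.sum(np.array(links) * np.cos(angles))
    y = np.sum(np.array(links) * np.sin(angles))
    return np.array([x, y])

def solve_ik(target, q0, workspace_radius=80.0):
    """Minimize tip-position error subject to keeping the tip inside a bounded
    workspace - a crude proxy for the constraints that keep the instrument
    safely within the vitreous space."""
    def cost(q):
        return np.sum((fk_planar(q) - target) ** 2)
    cons = [{"type": "ineq",
             "fun": lambda q: workspace_radius - np.linalg.norm(fk_planar(q))}]
    res = minimize(cost, q0, method="SLSQP", constraints=cons)
    return res.x, np.sqrt(res.fun)

target = np.array([60.0, 30.0])
q, err = solve_ik(target, q0=np.array([0.3, 0.3, 0.3]))
print("joint solution (rad):", np.round(q, 3), "tip error (mm):", round(err, 3))
```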

19.
Biomed Opt Express ; 13(9): 4970-4989, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36187243

ABSTRACT

Due to low imaging contrast, a widely-used cone-beam computed tomography-guided small animal irradiator is less adept at localizing in vivo soft tissue targets. Bioluminescence tomography (BLT), which combines a model of light propagation through tissue with an optimization algorithm, can recover a spatially resolved tomographic volume for an internal bioluminescent source. We built a novel mobile BLT system for a small animal irradiator to localize soft tissue targets for radiation guidance. In this study, we elaborate its configuration and features that are indispensable for accurate image guidance. Phantom and in vivo validations show the BLT system can localize targets with accuracy within 1 mm. With the optimal choice of threshold and margin for target volume, BLT can provide a distinctive opportunity for investigators to perform conformal biology-guided irradiation to malignancy.
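At its core, BLT reconstruction inverts a linear light-propagation model for the internal source distribution. A minimal sketch using non-negative least squares with a made-up sensitivity matrix is shown below; a real system would obtain this matrix from a diffusion-based forward model, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical forward model: surface measurements y = A @ x, where A maps
# source strength at each interior voxel to detected surface fluence.
rng = np.random.default_rng(5)
n_meas, n_voxels = 120, 60
A = rng.random((n_meas, n_voxels)) * np.exp(-rng.random((n_meas, n_voxels)) * 3)

x_true = np.zeros(n_voxels)
x_true[25:28] = 4.0                                      # compact luminescent target
y = A @ x_true + rng.normal(scale=0.01, size=n_meas)     # noisy measurements

# Non-negative least squares keeps the recovered source physically meaningful.
x_rec, residual = nnls(A, y)
print("recovered support (voxels):", np.flatnonzero(x_rec > 0.5 * x_rec.max()))
print("residual norm:", round(residual, 4))
```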

20.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 4397-4401, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36086006

ABSTRACT

The determination of flexible needle shape during insertion is critical for planning and validation in minimally invasive surgical percutaneous procedures. In this paper, we validate a needle shape-sensing method using fiber Bragg grating (FBG) sensors over sequential needle insertion lengths in gel phantom and real tissue. Experiments on a four-active area, FBG-sensorized needle were performed in both isotropic simulated tissue and inhomogeneous animal tissue with computed tomography (CT) as the ground truth of the needle shape. The results show that the needle shape obtained from the FBG sensors has an overall consistent accuracy in real tissue in comparison to the phantom gel. The results validate a viable 3D needle shape-sensing model and reconstruction method over various insertion depths in comparison to the needle shapes determined from CT in both gel phantom and real tissue.


Subject(s)
Needles, X-Ray Computed Tomography, Animals, Minimally Invasive Surgical Procedures, Imaging Phantoms