Results 1 - 20 of 174
1.
Article in English | MEDLINE | ID: mdl-38753135

ABSTRACT

PURPOSE: Preoperative imaging plays a pivotal role in sinus surgery, where CTs offer patient-specific insights into complex anatomy, enabling real-time intraoperative navigation to complement endoscopic imaging. However, surgery elicits anatomical changes that are not represented in the preoperative model, leaving an increasingly inaccurate basis for navigation as surgery progresses. METHODS: We propose the first vision-based approach to updating the preoperative 3D anatomical model using intraoperative endoscopic video in navigated sinus surgery, where relative camera poses are known. We compare intraoperative monocular depth estimates against preoperative depth renders to identify modified regions. The new depths are integrated in these regions through volumetric fusion in a truncated signed distance function (TSDF) representation to generate an intraoperative 3D model that reflects tissue manipulation. RESULTS: We quantitatively evaluate our approach by sequentially updating models for a five-step surgical progression in an ex vivo specimen. We compute the error between correspondences from the updated model and ground-truth intraoperative CT in the region of anatomical modification. The resulting models show a decrease in error during surgical progression, as opposed to increasing error when no update is employed. CONCLUSION: Our findings suggest that preoperative 3D anatomical models can be updated using intraoperative endoscopic video in navigated sinus surgery. Future work will investigate improvements to monocular depth estimation as well as removing the need for external navigation systems. The resulting ability to continuously update the patient model may provide surgeons with a more precise understanding of the current anatomical state and paves the way toward a digital twin paradigm for sinus surgery.
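
To make the volumetric fusion step concrete, the following minimal sketch shows a classic TSDF depth integration of the kind the abstract describes. It is an illustrative reconstruction, not the authors' code; the function name, grid parameters, truncation distance, and the simple weight-1-per-observation averaging are all assumptions.

```python
import numpy as np

def integrate(tsdf, weights, origin, voxel, depth, K, T_cam_world, trunc=2.0):
    """One TSDF fusion step (illustrative sketch, not the paper's code).

    tsdf, weights: (X, Y, Z) arrays; origin: world position of voxel (0,0,0);
    voxel: edge length in mm; depth: (H, W) endoscopic depth map in mm;
    K: 3x3 intrinsics; T_cam_world: 4x4 camera-from-world pose.
    """
    # World coordinates of all voxel centers, shape (N, 3).
    grid = np.indices(tsdf.shape).reshape(3, -1).T * voxel + origin
    cam = (T_cam_world[:3, :3] @ grid.T).T + T_cam_world[:3, 3]
    z = cam[:, 2]
    uv = (K @ cam.T).T
    ok = z > 1e-6                               # voxels in front of the camera
    u = np.zeros(z.shape, int)
    v = np.zeros(z.shape, int)
    u[ok] = np.round(uv[ok, 0] / z[ok]).astype(int)
    v[ok] = np.round(uv[ok, 1] / z[ok]).astype(int)
    H, W = depth.shape
    ok &= (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros(z.shape)
    d[ok] = depth[v[ok], u[ok]]                 # observed depth along each ray
    sdf = np.clip((d - z) / trunc, -1.0, 1.0)   # truncated signed distance
    upd = (ok & (d > 0) & (d - z > -trunc)).reshape(tsdf.shape)
    sdf = sdf.reshape(tsdf.shape)
    # Weighted running average, weight 1 per observation.
    tsdf[upd] = (tsdf[upd] * weights[upd] + sdf[upd]) / (weights[upd] + 1.0)
    weights[upd] += 1.0
```

Restricting the update mask to the regions flagged as modified by the depth comparison would yield the selective model update the abstract describes.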

2.
Article in English | MEDLINE | ID: mdl-38775904

ABSTRACT

PURPOSE: Monocular SLAM algorithms are the key enabling technology for image-based surgical navigation systems in endoscopic procedures. Due to the visual feature scarcity and unique lighting conditions encountered in endoscopy, classical SLAM approaches perform inconsistently. Many recent approaches to endoscopic SLAM rely on deep learning models. They show promising results when optimized on singular domains such as arthroscopy, sinus endoscopy, colonoscopy, or laparoscopy, but are limited by an inability to generalize to different domains without retraining. METHODS: To address this generality issue, we propose OneSLAM, a monocular SLAM algorithm for surgical endoscopy that works out of the box across several endoscopic domains, including sinus endoscopy, colonoscopy, arthroscopy, and laparoscopy. Our pipeline builds upon robust tracking-any-point (TAP) foundation models to reliably track sparse correspondences across multiple frames and runs local bundle adjustment to jointly optimize camera poses and a sparse 3D reconstruction of the anatomy. RESULTS: We compare the performance of our method against three strong baselines previously proposed for monocular SLAM in endoscopy and general scenes. OneSLAM achieves performance better than or comparable to existing approaches tailored to the specific data in all four tested domains, generalizing across domains without retraining. CONCLUSION: OneSLAM benefits from the convincing performance of TAP foundation models and generalizes to endoscopic sequences of different anatomies, all while demonstrating performance better than or comparable to domain-specific SLAM approaches. Future research on global loop closure will investigate how to reliably detect loops in endoscopic scenes to reduce accumulated drift and enhance long-term navigation capabilities.
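
The local bundle adjustment step can be illustrated with a small self-contained toy: given 2D tracks (as a TAP model would supply), jointly refine camera poses and 3D points by nonlinear least squares. This is a generic sketch on synthetic data, not the OneSLAM pipeline; all names and values are illustrative.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def project(params, n_cams, n_pts, K):
    # params: concatenated camera poses (axis-angle + translation, 6 each)
    # followed by 3D point coordinates (3 each).
    poses = params[: n_cams * 6].reshape(n_cams, 6)
    pts = params[n_cams * 6 :].reshape(n_pts, 3)
    preds = []
    for pose in poses:
        R = Rotation.from_rotvec(pose[:3]).as_matrix()
        cam = pts @ R.T + pose[3:]            # world -> camera frame
        uv = cam @ K.T
        preds.append(uv[:, :2] / uv[:, 2:3])  # perspective division
    return np.stack(preds)                     # (n_cams, n_pts, 2)

def residuals(params, obs, n_cams, n_pts, K):
    # obs: (n_cams, n_pts, 2) tracked point locations (e.g., TAP output).
    return (project(params, n_cams, n_pts, K) - obs).ravel()

# Example: refine a perturbed initialization for 2 cameras and 20 points.
rng = np.random.default_rng(0)
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
pts_gt = rng.uniform([-1, -1, 4], [1, 1, 6], (20, 3))
poses_gt = np.array([[0, 0, 0, 0, 0, 0], [0, 0.1, 0, -0.2, 0, 0]], float)
gt = np.concatenate([poses_gt.ravel(), pts_gt.ravel()])
obs = project(gt, 2, 20, K)
x0 = gt + rng.normal(0, 0.01, gt.shape)
sol = least_squares(residuals, x0, args=(obs, 2, 20, K))
```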

3.
Article in English | MEDLINE | ID: mdl-38642297

ABSTRACT

PURPOSE: Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience. METHODS: We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon. RESULTS: Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants (p < 0.001). It also had a significant effect on non-surgeons' mental model of the overall task; novice participants' estimation of the mental demand and effort required for the surgeon's task increased after training, while their perception of overall performance decreased (p < 0.05), indicating a gap in understanding based solely on observation. This phenomenon was also present for a professional C-arm technologist. CONCLUSION: Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches that are not possible outside of a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner. As workflows grow increasingly sophisticated, we see VR curricula as being able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.

4.
Article in English | MEDLINE | ID: mdl-38488231

ABSTRACT

OBJECTIVE: To use microscopic video-based tracking of laryngeal surgical instruments to investigate the effect of robot assistance on instrument tremor. STUDY DESIGN: Experimental trial. SETTING: Tertiary academic medical center. METHODS: In this randomized crossover trial, 36 videos were recorded from 6 surgeons performing left and right cordectomies on cadaveric pig larynges. These recordings captured 3 distinct conditions: without robotic assistance, with robot-assisted scissors, and with robot-assisted graspers. To assess tool tremor, we employed computer vision-based algorithms for tracking the surgical tools. Absolute tremor bandpower and normalized path length were utilized as quantitative measures, and Wilcoxon rank sum exact tests were employed for statistical comparisons between trials. Additionally, surveys were administered to assess the perceived ease of use of the robotic system. RESULTS: Absolute tremor bandpower showed a significant decrease when using robot-assisted instruments compared to freehand instruments (P = .012). Normalized path length significantly decreased with robot-assisted compared to freehand trials (P = .001). For the scissors, robot-assisted trials resulted in a significant decrease in absolute tremor bandpower (P = .002) and normalized path length (P < .001). For the graspers, there was no significant difference in absolute tremor bandpower (P = .4), but there was a significantly lower normalized path length in the robot-assisted trials (P = .03). CONCLUSION: This study demonstrated that computer vision-based approaches can be used to assess tool motion in simulated microlaryngeal procedures. The results suggest that robot assistance is capable of reducing instrument tremor.
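
The two tremor measures are not defined in the abstract; the sketch below shows one plausible reading: bandpower of tip motion in a physiological tremor band via a Welch PSD, and path length normalized by net displacement. The function names, the 4-12 Hz band, and the normalization choice are assumptions, not the study's published definitions.

```python
import numpy as np
from scipy.signal import welch

def absolute_tremor_bandpower(traj, fs, band=(4.0, 12.0)):
    # traj: (N, 2) tracked tool-tip positions; fs: video frame rate in Hz.
    # Sum of PSD power within an assumed physiological-tremor band.
    power = 0.0
    for axis in range(traj.shape[1]):
        f, pxx = welch(traj[:, axis] - traj[:, axis].mean(), fs=fs)
        mask = (f >= band[0]) & (f <= band[1])
        power += pxx[mask].sum() * (f[1] - f[0])  # rectangle-rule integral
    return power

def normalized_path_length(traj):
    # Total distance traveled by the tip divided by net displacement;
    # larger values indicate more extraneous (e.g., tremulous) motion.
    steps = np.linalg.norm(np.diff(traj, axis=0), axis=1)
    net = np.linalg.norm(traj[-1] - traj[0])
    return steps.sum() / max(net, 1e-9)
```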

5.
IEEE Trans Med Robot Bionics ; 6(1): 135-145, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38304756

ABSTRACT

Subretinal injection methods and other procedures for treating retinal conditions and diseases (many considered incurable) have been limited in scope due to limited human motor control. This study demonstrates the next-generation, cooperatively controlled Steady-Hand Eye Robot (SHER 3.0), a precise and intuitive-to-use robotic platform achieving clinical standards for targeting accuracy and resolution in subretinal injections. The system design and basic kinematics are reported, and a deflection model for the incorporated delta stage, along with validation experiments, is presented. This model optimizes the delta stage parameters, maximizing the global conditioning index and minimizing torsional compliance. Five tests measuring accuracy, repeatability, and deflection show that the optimized stage design achieves a tip accuracy of <30 µm, tip repeatability of 9.3 µm and 0.02°, and deflections between 20 and 350 µm/N. Future work will use updated control models to refine tip-positioning outcomes, and the system will be tested on in vivo animal models.

6.
Adv Sci (Weinh) ; 11(7): e2305495, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38072667

ABSTRACT

Magnetic resonance imaging (MRI) demonstrates clear advantages over other imaging modalities in neurosurgery with its ability to delineate critical neurovascular structures and cancerous tissue in high-resolution 3D anatomical roadmaps. However, its application has been limited to interventions performed based on static pre-/post-operative imaging, where errors accrue from stereotactic frame setup, image registration, and brain shift. To leverage the powerful intra-operative functions of MRI (e.g., instrument tracking, monitoring of physiological changes and tissue temperature) in MRI-guided bilateral stereotactic neurosurgery, a multi-stage robotic positioner is proposed. The system positions cannula/needle instruments using a lightweight (203 g) and compact (Ø97 × 81 mm) skull-mounted structure that fits within most standard imaging head coils. With an optimized soft robotics design, the system operates in two stages: i) manual coarse adjustment performed interactively by the surgeon (workspace of ±30°); ii) automatic fine adjustment with precise (<0.2° orientation error), responsive (1.4 Hz bandwidth), and high-resolution (0.058°) soft robotic positioning. Orientation locking provides sufficient transmission stiffness (4.07 N/mm) for instrument advancement. The system's clinical workflow and accuracy are validated with lab-based (<0.8 mm) and MRI-based testing on skull phantoms (<1.7 mm) and a cadaver subject (<2.2 mm). Custom-made wireless omni-directional tracking markers facilitate robot registration under MRI.


Subjects
Neurosurgery, Robotics, Neurosurgical Procedures/methods, Brain, Magnetic Resonance Imaging/methods
7.
Int J Comput Assist Radiol Surg ; 19(2): 199-208, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37610603

ABSTRACT

PURPOSE: Effective robot-assisted laparoscopic prostatectomy requires the integration of a transrectal ultrasound (TRUS) imaging system, the most widely used modality in prostate imaging. However, manual manipulation of the ultrasound transducer during the procedure significantly interferes with the surgery. We therefore propose an image co-registration algorithm based on a photoacoustic marker (PM) method, in which ultrasound/photoacoustic (US/PA) images are registered to the endoscopic camera images, ultimately enabling the TRUS transducer to automatically track the surgical instrument. METHODS: An optimization-based algorithm is proposed to co-register the images from the two different imaging modalities. The algorithm incorporates the principle of light propagation and an uncertainty model of PM detection to improve its stability and accuracy. It is validated using our previously developed US/PA image-guided system with a da Vinci surgical robot. RESULTS: The target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstration, the proposed algorithm achieved sub-centimeter accuracy, which is acceptable in clinical practice (1.15 ± 0.29 mm in the experimental evaluation). The result is also comparable with our previous approach (1.05 ± 0.37 mm), and the proposed method can be implemented with a standard white-light stereo camera and does not require highly accurate localization of the PM. CONCLUSION: The proposed frame registration algorithm enables a simple yet efficient integration of a commercial US/PA imaging system into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic wave propagation and laser excitation, contributing to automated US/PA image-guided surgical intervention applications.
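
As a rough illustration of the co-registration idea, the sketch below solves the underlying rigid alignment between corresponding marker positions in the two frames using the standard SVD (Arun/Kabsch) method and evaluates TRE. The paper's actual optimization additionally models light propagation and PM detection uncertainty, which this sketch omits.

```python
import numpy as np

def rigid_register(src, dst):
    # src, dst: (N, 3) corresponding marker positions in the two frames
    # (e.g., ultrasound frame and camera frame).
    mu_s, mu_d = src.mean(0), dst.mean(0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                 # enforce a proper rotation (det = +1)
    t = mu_d - R @ mu_s
    return R, t

def target_registration_error(R, t, targets_src, targets_dst):
    # TRE: distance between mapped targets and their true positions,
    # evaluated at points not used for the registration itself.
    mapped = targets_src @ R.T + t
    return np.linalg.norm(mapped - targets_dst, axis=1)
```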


Subjects
Laparoscopy, Prostatic Neoplasms, Robotics, Computer-Assisted Surgery, Male, Humans, Three-Dimensional Imaging/methods, Ultrasonography/methods, Computer-Assisted Surgery/methods, Algorithms, Prostatectomy/methods, Prostatic Neoplasms/surgery
8.
Int J Comput Assist Radiol Surg ; 19(1): 51-59, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37347346

ABSTRACT

PURPOSE: A virtual reality (VR) system, where surgeons can practice procedures on virtual anatomies, is a scalable and cost-effective alternative to cadaveric training. The fully digitized virtual surgeries can also be used to assess the surgeon's skills using measurements that are otherwise hard to collect in reality. Thus, we present the Fully Immersive Virtual Reality System (FIVRS) for skull-base surgery, which combines surgical simulation software with a high-fidelity hardware setup. METHODS: FIVRS allows surgeons to follow normal clinical workflows inside the VR environment. FIVRS uses advanced rendering designs and drilling algorithms for realistic bone ablation. A head-mounted display with ergonomics similar to that of surgical microscopes is used to improve immersiveness. Extensive multi-modal data are recorded for post-analysis, including eye gaze, motion, force, and video of the surgery. A user-friendly interface is also designed to ease the learning curve of using FIVRS. RESULTS: We present results from a user study involving surgeons with various levels of expertise. The preliminary data recorded by FIVRS differentiate between participants with different levels of expertise, promising future research on automatic skill assessment. Furthermore, informal feedback from the study participants about the system's intuitiveness and immersiveness was positive. CONCLUSION: We present FIVRS, a fully immersive VR system for skull-base surgery. FIVRS features a realistic software simulation coupled with modern hardware for improved realism. The system is completely open source and provides feature-rich data in an industry-standard format.


Subjects
Virtual Reality, Humans, Computer Simulation, Software, User-Computer Interface, Clinical Competence, Skull/surgery
9.
IEEE Trans Med Imaging ; 43(1): 275-285, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37549070

ABSTRACT

Image-based 2D/3D registration is a critical technique for fluoroscopy-guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shaped similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters and is trained using an innovative double-backward gradient-driven loss function. We compare against the most popular learning-based pose regression methods in the literature and use the well-established CMA-ES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE), and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMA-ES is 4.4 mm with an SR of 65.6% in simulation, and 2.2 mm with an SR of 73.2% on real data. The CMA-ES SRs without ProST registration are 28.5% and 36.0% in simulation and on real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function, which vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.
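
The "double backward" training signal is the distinctive ingredient here: the loss is defined on the gradient of the similarity with respect to pose, so training requires differentiating through a gradient. The deliberately tiny PyTorch toy below demonstrates that mechanic on a 1D registration problem; it is not the ProST network, and the convex reference (here simply (t - t*)²) stands in for whatever convex target the full method uses.

```python
import torch

# 1D toy of gradient-supervised similarity learning ("double backward").
x = torch.linspace(-5.0, 5.0, 200)
t_star = 2.0
target = torch.exp(-(x - t_star) ** 2)           # "fixed image"

w = torch.tensor(1.0, requires_grad=True)        # stand-in for network weights
opt = torch.optim.Adam([w], lr=0.05)
for step in range(200):
    t = torch.empty(1).uniform_(-4.0, 4.0).requires_grad_(True)  # random pose
    moving = torch.exp(-(x - t) ** 2)            # "moving image" at pose t
    sim = w * ((moving - target) ** 2).mean()    # learnable similarity
    # First backward pass: d(similarity)/d(pose), kept in the graph.
    (g,) = torch.autograd.grad(sim, t, create_graph=True)
    g_ref = 2.0 * (t.detach() - t_star)          # gradient of convex (t - t*)^2
    loss = ((g - g_ref) ** 2).sum()
    opt.zero_grad()
    loss.backward()                              # second backward pass
    opt.step()
```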


Subjects
Three-Dimensional Imaging, Pelvis, Three-Dimensional Imaging/methods, Fluoroscopy/methods, Software, Algorithms
10.
IEEE Robot Autom Lett ; 8(3): 1287-1294, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37997605

ABSTRACT

This paper introduces the first integrated real-time intraoperative surgical guidance system in which the endoscope camera of a da Vinci surgical robot and a transrectal ultrasound (TRUS) transducer are co-registered using photoacoustic markers detected in both fluorescence (FL) and photoacoustic (PA) imaging. The co-registered system enables the TRUS transducer to track the laser spot illuminated by a pulsed laser diode attached to the surgical instrument, providing both FL and PA images of the surgical region of interest (ROI). As a result, the generated photoacoustic marker is visualized and localized in the da Vinci endoscopic FL images, and the corresponding tracking is conducted by rotating the TRUS transducer to display the PA image of the marker. A quantitative evaluation revealed average registration and tracking errors of 0.84 mm and 1.16°, respectively. This study shows that co-registered photoacoustic marker tracking can be effectively deployed intraoperatively using TRUS+PA imaging, providing functional guidance of the surgical ROI.

11.
IEEE Robot Autom Lett ; 8(3): 1343-1350, 2023 Mar.
Article in English | MEDLINE | ID: mdl-37637101

ABSTRACT

An in situ needle manipulation technique used by physicians when performing spinal injections is modeled to study its effect on needle shape and needle tip position. A mechanics-based model is proposed and solved using the finite element method. A test setup is presented to mimic the needle manipulation motion. Tissue phantoms made from plastisol, as well as porcine skeletal muscle samples, are used to evaluate the model's accuracy against medical images. The effects of different compression models and model parameters on model accuracy are studied, and the effect of needle-tissue interaction on the needle's remote center of motion is examined. With the correct combination of compression model and model parameters, the simulation is able to predict the needle tip position with submillimeter accuracy.

12.
Article in English | MEDLINE | ID: mdl-37555199

ABSTRACT

Robotic X-ray C-arm imaging systems can precisely achieve any position and orientation relative to the patient. Informing the system what pose exactly corresponds to a desired view, however, is challenging. Currently, these systems are operated by the surgeon using joysticks, but this interaction paradigm is not necessarily effective because users may be unable to efficiently actuate more than a single axis of the system simultaneously. Moreover, novel robotic imaging systems, such as the Brainlab Loop-X, allow for independent source and detector movements, adding even more complexity. To address this challenge, we consider complementary interfaces for the surgeon to command robotic X-ray systems effectively. Specifically, we consider three interaction paradigms: (1) the use of a pointer to specify the principal ray of the desired view relative to the anatomy; (2) the same pointer, but combined with a mixed reality environment to synchronously render digitally reconstructed radiographs from the tool's pose; and (3) the same mixed reality environment, but with a virtual X-ray source instead of the pointer. Initial human-in-the-loop evaluation with an attending trauma surgeon indicates that mixed reality interfaces for robotic X-ray system control are promising and may contribute to substantially reducing the number of X-ray images acquired solely during "fluoro hunting" for the desired view or standard plane.
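
As a sketch of the geometry behind paradigm (1), the snippet below converts a pointer-defined principal ray into source and detector placements for an idealized C-arm. The interface (a point on the anatomy plus a ray direction) and the distance parameters are assumptions for illustration; real systems such as the Loop-X expose their own positioning interfaces.

```python
import numpy as np

def carm_pose_from_principal_ray(p, d, sdd=1200.0, src_to_point=700.0):
    # p: point on the anatomy (mm); d: desired principal-ray direction;
    # sdd: source-to-detector distance; src_to_point: source standoff.
    p = np.asarray(p, float)
    d = np.asarray(d, float)
    d /= np.linalg.norm(d)                    # unit principal-ray direction
    source = p - src_to_point * d             # X-ray source behind the point
    detector_center = source + sdd * d        # detector across the anatomy
    # Build a detector frame: normal = d, with an arbitrary in-plane basis.
    up = np.array([0.0, 0.0, 1.0])
    if abs(d @ up) > 0.99:                    # avoid a degenerate cross product
        up = np.array([0.0, 1.0, 0.0])
    u = np.cross(up, d)
    u /= np.linalg.norm(u)
    v = np.cross(d, u)
    return source, detector_center, np.stack([u, v, d], axis=1)
```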

13.
Int J Comput Assist Radiol Surg ; 18(7): 1303-1310, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37266885

ABSTRACT

PURPOSE: Tracking the 3D motion of the surgical tool and the patient anatomy is a fundamental requirement for computer-assisted skull-base surgery. The estimated motion can be used both for intra-operative guidance and for downstream skill analysis. Recovering such motion solely from surgical videos is desirable, as it is compliant with current clinical workflows and instrumentation. METHODS: We present Tracker of Anatomy and Tool (TAToo). TAToo jointly tracks the rigid 3D motion of the patient skull and surgical drill from stereo microscopic videos. TAToo estimates motion via an iterative optimization process in an end-to-end differentiable form. For robust tracking performance, TAToo adopts a probabilistic formulation and enforces geometric constraints at the object level. RESULTS: We validate TAToo on simulation data, where ground-truth motion is available, as well as on anthropomorphic phantom data, where optical tracking provides a strong baseline. We report sub-millimeter and millimeter inter-frame tracking accuracy for the skull and drill, respectively, with rotation errors below [Formula: see text]. We further illustrate how TAToo may be used in a surgical navigation setting. CONCLUSIONS: We present TAToo, which simultaneously tracks the surgical tool and the patient anatomy in skull-base surgery. TAToo directly predicts the motion from surgical videos, without the need for any markers. Our results show that the performance of TAToo compares favorably to competing approaches. Future work will include fine-tuning of our depth network to reach the 1 mm clinical accuracy goal desired for surgical applications in the skull base.


Subjects
Neurosurgical Procedures, Computer-Assisted Surgery, Humans, Neurosurgical Procedures/methods, Computer-Assisted Surgery/methods, Computer Simulation, Skull Base/diagnostic imaging, Skull Base/surgery
14.
IEEE Trans Robot ; 39(2): 1373-1387, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37377922

ABSTRACT

Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from robots relies heavily on accurate sensing of surgical states (e.g., instrument tip localization and tool-to-tissue interaction forces). Many existing tool-tip localization methods require preoperative frame registrations or instrument calibrations. In this study, using an iterative approach and combining vision- and force-based methods, we develop calibration- and registration-independent (RI) algorithms that provide online estimates of instrument stiffness (least squares and adaptive). The estimates are then combined with a state-space model based on the forward kinematics (FWK) of the Steady-Hand Eye Robot (SHER) and fiber Bragg grating (FBG) sensor measurements. This is accomplished using a Kalman filtering (KF) approach to improve the deflected instrument tip position estimates during robot-assisted eye surgery. The conducted experiments demonstrate that when the online RI stiffness estimates are used, the instrument tip localization results surpass those obtained from preoperative offline stiffness calibrations.
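
The KF fusion step can be pictured with a minimal linear Kalman filter in which forward kinematics drives the prediction and a deflection-corrected (e.g., FBG-derived) tip position serves as the measurement. The dimensions, identity process/measurement models, and noise levels below are illustrative, not the paper's formulation.

```python
import numpy as np

class TipKF:
    """Minimal linear Kalman filter for tool-tip position (sketch only)."""

    def __init__(self, q=1e-4, r=1e-2):
        self.x = np.zeros(3)          # estimated tip position (mm)
        self.P = np.eye(3)            # estimate covariance
        self.Q = q * np.eye(3)        # process noise (kinematic model)
        self.R = r * np.eye(3)        # measurement noise (FBG + stiffness)

    def predict(self, delta_fwk):
        # Propagate by the commanded tip motion from robot forward kinematics.
        self.x = self.x + delta_fwk
        self.P = self.P + self.Q

    def update(self, z):
        # z: tip measurement after stiffness-based deflection correction.
        S = self.P + self.R
        K = self.P @ np.linalg.inv(S)            # Kalman gain
        self.x = self.x + K @ (z - self.x)
        self.P = (np.eye(3) - K) @ self.P
```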

15.
Int J Comput Assist Radiol Surg ; 18(7): 1329-1334, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37162733

ABSTRACT

PURPOSE: The use of robotic continuum manipulators has been proposed to facilitate less-invasive orthopedic surgical procedures. While tools and strategies have been developed, critical challenges such as system control and intra-operative guidance remain under-addressed. Simulation tools can help solve these challenges, but several gaps limit their utility for orthopedic surgical systems, particularly those with continuum manipulators. Herein, a simulation platform that addresses these gaps is presented as a tool to better understand and solve challenges in minimally invasive orthopedic procedures. METHODS: An open-source surgical simulation software package was developed in which a continuum manipulator can interact with any volume model, for example to drill bone volumes segmented from a 3D computed tomography (CT) image. Paired simulated X-ray images of the scene can also be generated. Compared to previous works, tool-anatomy interactions use a physics-based approach, which leads to more stable behavior and wider procedure applicability. A new method for representing low-level volumetric drilling behavior is also introduced to capture material variability within bone as well as patient-specific properties from a CT image. RESULTS: Interactions between the simulated manipulator and volumetric obstacle models were shown to be similar to those between a physical continuum manipulator and phantom bone. High-level material- and tool-driven behavior was shown to emerge directly from the improved low-level interactions, rather than requiring manual programming. CONCLUSION: This platform is a promising tool for developing and investigating control algorithms for tasks such as curved drilling. The generation of simulated X-ray images that correspond to the scene is useful for developing and validating image guidance models. The improvements to volumetric drilling let users better tune behavior for specific tools and procedures and enable research to improve surgical simulation model fidelity. This platform will be used to develop and test control algorithms for image-guided curved drilling procedures in the femur.
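
One way to picture a low-level volumetric drilling representation is a per-step voxel erosion modulated by local intensity, so denser bone resists removal; the toy below sketches that idea. The erosion law, parameter names, and rates are assumptions, not the platform's actual model.

```python
import numpy as np

def drill_step(volume, spacing, tip_mm, radius_mm=2.0, rate=0.2):
    # volume: 3D array of remaining material (e.g., normalized CT intensity);
    # spacing: voxel size in mm; tip_mm: drill tip position in mm.
    idx = np.indices(volume.shape).reshape(3, -1).T * spacing
    dist = np.linalg.norm(idx - tip_mm, axis=1).reshape(volume.shape)
    inside = dist < radius_mm                 # voxels under the drill tip
    # Harder (denser) voxels lose proportionally less material per step,
    # capturing material variability within the bone.
    hardness = np.clip(volume, 0.0, 1.0)
    volume[inside] -= rate * (1.0 - 0.5 * hardness[inside])
    np.clip(volume, 0.0, None, out=volume)    # no negative material
    return volume
```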


Subjects
Orthopedic Procedures, Orthopedics, Robotics, Humans, Computer Simulation, Orthopedic Procedures/methods, Algorithms
16.
Int J Comput Assist Radiol Surg ; 18(7): 1135-1142, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37160580

ABSTRACT

PURPOSE: Recent advances in computer vision and machine learning have resulted in endoscopic video-based solutions for dense reconstruction of the anatomy. To effectively use these systems in surgical navigation, a reliable image-based technique is required to constantly track the endoscopic camera's position within the anatomy, despite frequent removal and re-insertion. In this work, we investigate the use of recent learning-based keypoint descriptors for six degree-of-freedom camera pose estimation in intraoperative endoscopic sequences and under changes in anatomy due to surgical resection. METHODS: Our method employs a dense structure-from-motion (SfM) reconstruction of the preoperative anatomy, obtained with a state-of-the-art patient-specific learning-based descriptor. During the reconstruction step, each estimated 3D point is associated with a descriptor. This information is employed in the intraoperative sequences to establish 2D-3D correspondences for Perspective-n-Point (PnP) camera pose estimation. We evaluate this method on six intraoperative sequences that include anatomical modifications, obtained from two cadaveric subjects. RESULTS: This approach led to translation and rotation errors of 3.9 mm and 0.2 radians, respectively, with 21.86% of cameras localized, averaged over the six sequences. In comparison to an additional learning-based descriptor (HardNet++), the selected descriptor achieves a better percentage of localized cameras with similar pose estimation performance. We further discuss potential error causes and limitations of the proposed approach. CONCLUSION: Patient-specific learning-based descriptors can relocalize images that are well distributed across the inspected anatomy, even where the anatomy is modified. However, camera relocalization in endoscopic sequences remains a persistently challenging problem, and future research is necessary to increase the robustness and accuracy of this technique.
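
The descriptor-based relocalization step maps naturally onto a standard match-then-PnP pipeline; the sketch below uses OpenCV's brute-force matcher and RANSAC PnP as a generic stand-in. The matching setup and threshold are illustrative, and the learned descriptors themselves are assumed to be precomputed.

```python
import cv2
import numpy as np

def relocalize(kp_2d, desc_2d, pts_3d, desc_3d, K):
    # kp_2d: (M, 2) intraoperative keypoints; desc_2d: (M, D) descriptors;
    # pts_3d: (N, 3) SfM points; desc_3d: (N, D) descriptors stored with the
    # preoperative reconstruction; K: 3x3 camera intrinsics.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc_2d.astype(np.float32),
                            desc_3d.astype(np.float32))
    obj = np.float32([pts_3d[m.trainIdx] for m in matches])   # 3D side
    img = np.float32([kp_2d[m.queryIdx] for m in matches])    # 2D side
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        obj, img, K, distCoeffs=None, reprojectionError=4.0)
    return (rvec, tvec, inliers) if ok else None
```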


Subjects
Endoscopy, Computer-Assisted Surgery, Humans, Endoscopy/methods, Rotation
17.
Int J Comput Assist Radiol Surg ; 18(6): 1077-1084, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37160583

ABSTRACT

PURPOSE: Digital twins are virtual replicas of real-world objects and processes, and they have potential applications in surgical procedures, such as enhancing situational awareness. We introduce Twin-S, a digital twin framework designed specifically for skull-base surgeries. METHODS: Twin-S is a novel framework that combines high-precision optical tracking and real-time simulation, making it possible to integrate it into image-guided interventions. To guarantee accurate representation, Twin-S employs calibration routines to ensure that the virtual model precisely reflects all real-world processes. Twin-S models and tracks key elements of skull-base surgery, including surgical tools, patient anatomy, and surgical cameras. Importantly, Twin-S mirrors real-world drilling and updates the virtual model at a frame rate of 28 fps. RESULTS: Our evaluation of Twin-S demonstrates its accuracy, with an average error of 1.39 mm during the drilling process. Our study also highlights the benefits of Twin-S, such as its ability to provide augmented surgical views derived from the continuously updated virtual model, thus offering additional situational awareness to the surgeon. CONCLUSION: We present Twin-S, a digital twin environment for skull-base surgery. Twin-S captures real-world surgical progress and updates the virtual model in real time through the use of modern tracking technologies. Future research that integrates vision-based techniques could further increase the accuracy of Twin-S.


Subjects
Computer-Assisted Surgery, Humans, Computer-Assisted Surgery/methods, Three-Dimensional Imaging/methods, Neurosurgical Procedures, Computer Simulation, Skull Base/surgery
18.
Int J Comput Assist Radiol Surg ; 18(7): 1167-1174, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37171660

ABSTRACT

PURPOSE: Robotic assistance in otologic surgery can reduce the task load of operating surgeons during the removal of bone around critical structures in the lateral skull base. However, safe deployment into the anatomical passageways necessitates the development of advanced sensing capabilities to actively limit the interaction forces between the surgical tools and critical anatomy. METHODS: We introduce a surgical drill equipped with a force sensor that is capable of measuring accurate tool-tissue interaction forces to enable force control and feedback to surgeons. The design, calibration, and validation of the force-sensing surgical drill, mounted on a cooperatively controlled surgical robot, are described in this work. RESULTS: The force measurements at the tip of the surgical drill are validated with raw-egg drilling experiments, where a force sensor mounted below the egg serves as ground truth. The average root mean square errors for the point and path drilling experiments are 41.7 (±12.2) mN and 48.3 (±13.7) mN, respectively. CONCLUSION: The force-sensing prototype measures forces with sub-millinewton resolution, and the results demonstrate that the calibrated force-sensing drill generates accurate force measurements with minimal error compared to the measured drill forces. The development of such sensing capabilities is crucial for the safe use of robotic systems in a clinical context.
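
A common way to realize the kind of calibration the abstract mentions is an affine least-squares fit from raw gauge channels to reference forces, followed by an RMSE evaluation against ground truth; the sketch below shows that pattern. The channel count, affine form, and names are assumptions, not the published calibration procedure.

```python
import numpy as np

def calibrate(readings, forces_ref):
    # readings: (N, G) raw sensor channels; forces_ref: (N, 3) reference
    # forces recorded simultaneously (e.g., from a sensor under the egg).
    A = np.c_[readings, np.ones(len(readings))]   # affine term for offsets
    C, *_ = np.linalg.lstsq(A, forces_ref, rcond=None)
    return C                                      # calibration matrix (G+1, 3)

def apply(C, readings):
    return np.c_[readings, np.ones(len(readings))] @ C

def rmse(C, readings, forces_ref):
    # Per-axis root mean square error on held-out validation data.
    err = apply(C, readings) - forces_ref
    return np.sqrt((err ** 2).mean(axis=0))
```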


Subjects
Robotic Surgical Procedures, Robotics, Computer-Assisted Surgery, Humans, Mastoidectomy, Computer-Assisted Surgery/methods, Feedback
19.
Int J Comput Assist Radiol Surg ; 18(7): 1201-1208, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37213057

ABSTRACT

PURPOSE: Percutaneous fracture fixation involves multiple X-ray acquisitions to determine adequate tool trajectories in bony anatomy. In order to reduce time spent adjusting the X-ray imager's gantry, avoid excess acquisitions, and anticipate inadequate trajectories before penetrating bone, we propose an autonomous system for intra-operative feedback that combines robotic X-ray imaging and machine learning for automated image acquisition and interpretation, respectively. METHODS: Our approach reconstructs an appropriate trajectory from a two-image sequence, where the optimal second viewpoint is determined based on analysis of the first image. A deep neural network detects the tool and corridor, here a K-wire and the superior pubic ramus, respectively, in these radiographs. The reconstructed corridor and K-wire pose are compared to determine the likelihood of cortical breach, and both are visualized for the clinician in a mixed reality environment that is spatially registered to the patient and delivered by an optical see-through head-mounted display. RESULTS: We assess the upper bounds on system performance through in silico evaluation across 11 CTs with fractures present, in which the corridor and K-wire are adequately reconstructed. In post hoc analysis of radiographs across 3 cadaveric specimens, our system determines the appropriate trajectory to within 2.8 ± 1.3 mm and 2.7 ± 1.8°. CONCLUSION: An expert user study with an anthropomorphic phantom demonstrates how our autonomous, integrated system requires fewer images and less movement to guide and confirm adequate placement compared to current clinical practice. Code and data are available.
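
The breach-likelihood comparison reduces to simple line geometry: the angle between the K-wire axis and the reconstructed corridor axis, and the wire tip's perpendicular offset from the corridor centerline. The sketch below implements that comparison; the threshold values and function names are illustrative, not the paper's criteria.

```python
import numpy as np

def trajectory_deviation(wire_tip, wire_dir, corr_pt, corr_dir):
    # wire_tip: K-wire tip position; wire_dir: K-wire axis direction;
    # corr_pt, corr_dir: a point on and the direction of the corridor axis.
    wire_dir = wire_dir / np.linalg.norm(wire_dir)
    corr_dir = corr_dir / np.linalg.norm(corr_dir)
    angle = np.degrees(np.arccos(np.clip(abs(wire_dir @ corr_dir), -1, 1)))
    # Perpendicular distance of the wire tip from the corridor axis.
    rel = wire_tip - corr_pt
    offset = np.linalg.norm(rel - (rel @ corr_dir) * corr_dir)
    return angle, offset

def likely_breach(angle_deg, offset_mm, corridor_radius_mm, max_angle=5.0):
    # Assumed criterion: flag if the wire diverges too much in angle or
    # its tip leaves the corridor cross-section.
    return angle_deg > max_angle or offset_mm > corridor_radius_mm
```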


Subjects
Bone Fractures, Three-Dimensional Imaging, Humans, X-Rays, Three-Dimensional Imaging/methods, Fluoroscopy/methods, X-Ray Computed Tomography/methods, Bone Fractures/diagnostic imaging, Bone Fractures/surgery, Fracture Fixation, Internal Fracture Fixation/methods
20.
IEEE Trans Med Robot Bionics ; 5(1): 18-29, 2023 Feb.
Article in English | MEDLINE | ID: mdl-37213937

ABSTRACT

Minimally invasive Osteoporotic Hip Augmentation (OHA) by injection of bone cement is a potential treatment option to reduce the risk of hip fracture. This treatment can significantly benefit from a computer-assisted planning and execution system to optimize the pattern of cement injection. We present a novel robotic system for the execution of OHA, consisting of a 6-DOF robotic arm with an integrated drilling and injection component. The minimally invasive procedure is performed by registering the robot and preoperative images to the surgical scene using multiview image-based 2D/3D registration with no external fiducials attached to the body. The performance of the system is evaluated through experimental sawbone studies as well as cadaveric experiments with intact soft tissues. In the cadaver experiments, distance errors of 3.28 mm and 2.64 mm for the entry and target points and an orientation error of 2.30° are measured. Moreover, a mean surface distance error of 2.13 mm with a translational error of 4.47 mm is reported between the injected and planned cement profiles. The experimental results demonstrate the first application of the proposed Robot-Assisted combined Drilling and Injection System (RADIS), incorporating biomechanical planning and intraoperative fiducial-less 2D/3D registration, on human cadavers with intact soft tissues.
