Results 1 - 20 of 175
1.
IEEE Trans Robot ; 39(2): 1373-1387, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37377922

ABSTRACT

Notable challenges during retinal surgery lend themselves to robotic assistance, which has proven beneficial in providing safe, steady-hand manipulation. Efficient assistance from robots relies heavily on accurate sensing of surgical state (e.g., instrument tip localization and tool-to-tissue interaction forces). Many existing tool-tip localization methods require preoperative frame registrations or instrument calibrations. In this study, using an iterative approach that combines vision- and force-based methods, we develop calibration- and registration-independent (RI) algorithms that provide online estimates of instrument stiffness (least squares and adaptive). These estimates are then combined with a state-space model based on the forward kinematics (FWK) of the Steady-Hand Eye Robot (SHER) and Fiber Bragg Grating (FBG) sensor measurements, using a Kalman Filtering (KF) approach to improve estimation of the deflected instrument tip position during robot-assisted eye surgery. The conducted experiments demonstrate that when the online RI stiffness estimates are used, instrument tip localization results surpass those obtained from preoperative offline stiffness calibrations.
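The fusion idea in this abstract, combining a model prediction with noisy sensor readings via a Kalman filter, can be illustrated with a minimal scalar sketch. All numbers (noise variances, the "true" deflection) are illustrative assumptions, not the paper's actual SHER/FBG model:

```python
# Minimal scalar Kalman filter sketch: fuse a model prediction of
# instrument-tip deflection with noisy sensor readings.
# All numeric values below are illustrative, not from the paper.

def kalman_update(x, p, z, q, r):
    """One predict + update step for a static-state scalar KF.

    x, p : prior state estimate and its variance
    z    : new measurement
    q, r : process and measurement noise variances (assumed known)
    """
    p = p + q                  # predict: state assumed static, variance grows
    k = p / (p + r)            # Kalman gain
    x = x + k * (z - x)        # correct with the measurement innovation
    p = (1.0 - k) * p          # updated variance
    return x, p

def filter_series(measurements, q=1e-4, r=0.25):
    x, p = measurements[0], 1.0
    for z in measurements[1:]:
        x, p = kalman_update(x, p, z, q, r)
    return x

# Noisy readings scattered around a true deflection of 2.0 mm
zs = [2.3, 1.8, 2.1, 1.9, 2.2, 2.0, 1.95, 2.05]
est = filter_series(zs)
```

The filtered estimate converges toward the true value faster than any single reading, which is the property the paper exploits when blending stiffness-based predictions with FBG measurements.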

2.
Proc IEEE Inst Electr Electron Eng ; 110(7): 993-1011, 2022 Jul.
Article in English | MEDLINE | ID: mdl-35911127

ABSTRACT

Surgical robots have been widely adopted, with over 4000 in daily clinical use. However, these are telerobots, fully controlled by skilled human surgeons. Introducing "surgeon-assist" capabilities (some forms of autonomy) has the potential to reduce tedium and increase consistency, analogous to driver-assist functions for lane keeping, cruise control, and parking. This article examines the scientific and technical background of robotic autonomy in surgery and some of its ethical, social, and legal implications. We describe several surgical tasks that have been automated in laboratory settings, along with research concepts and trends.

3.
Sensors (Basel) ; 22(14)2022 Jul 17.
Article in English | MEDLINE | ID: mdl-35891016

ABSTRACT

Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most widely adopted tools for research and prototyping. Similarly, for robotics, the open-source middleware suite Robot Operating System (ROS) is the standard development framework. In the past, several "ad hoc" attempts have been made to bridge the two tools; however, all rely on middleware and custom interfaces, and none has succeeded in bridging access to the full suite of tools provided by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, which was designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module was developed to enable real-time visualization of robots, accommodate different robot configurations, and facilitate data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system performance, and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development that reduces the need for custom interfaces and time-intensive platform setup.


Subjects
Robotics, Diagnostic Imaging, Reactive Oxygen Species, Software
4.
IEEE Trans Robot ; 38(2): 1213-1229, 2022 Apr.
Article in English | MEDLINE | ID: mdl-35633946

ABSTRACT

This article presents a dexterous robotic system for autonomous debridement of osteolytic bone lesions in confined spaces. The proposed system is distinguished from state-of-the-art orthopedic systems because it combines a rigid-link robot with a continuum manipulator (CM) that enhances reach into difficult-to-access spaces often encountered in surgery. The CM is equipped with flexible debriding instruments and fiber Bragg grating sensors. The surgeon plans on the patient's preoperative computed tomography, and the robotic system performs the task autonomously under the surgeon's supervision. An optimization-based controller generates control commands on the fly to execute the task while satisfying physical and safety constraints. The system design and controller are discussed, and extensive simulation, phantom, and human cadaver experiments are carried out to evaluate performance, workspace, and dexterity in confined spaces. The mean and standard deviation of target placement are 0.5 mm and 0.18 mm, and in the treatment of hip osteolysis the robotic system covers 91% of the workspace behind an acetabular implant, compared to the 54% achieved by conventional rigid tools.

5.
IEEE Sens J ; 21(3): 3066-3076, 2021 Feb 01.
Article in English | MEDLINE | ID: mdl-33746624

ABSTRACT

This article proposes a data-driven, learning-based approach for shape sensing and Distal-end Position Estimation (DPE) of a surgical Continuum Manipulator (CM) in constrained environments using Fiber Bragg Grating (FBG) sensors. The proposed approach uses only the sensory data from an unmodeled, uncalibrated sensor embedded in the CM to estimate the shape and DPE. It serves as an alternative to the conventional mechanics-based, sensor-model-dependent approach, which relies on several geometrical assumptions about the sensor and the CM. Unlike the conventional approach, where the shape is reconstructed from the proximal to the distal end of the device, we propose a reversed approach: the distal-end position is estimated first and, given this information, the shape is then reconstructed from the distal to the proximal end. The proposed methodology yields more accurate DPE by avoiding the accumulation of integration errors incurred by conventional approaches. We study three data-driven models, namely a linear regression model, a Deep Neural Network (DNN), and a Temporal Neural Network (TNN), and compare DPE and shape reconstruction results. Additionally, we test both approaches (data-driven and model-dependent) against internal and external disturbances to the CM and its environment, such as the incorporation of flexible medical instruments into the CM and contacts with obstacles in the task space. Using the data-driven (DNN) and model-dependent approaches, the following maximum absolute errors are observed for DPE: 0.78 mm and 2.45 mm in free bending motion, 0.11 mm and 3.20 mm with flexible instruments, and 1.22 mm and 3.19 mm with task-space obstacles, indicating superior performance of the proposed data-driven approach compared to the conventional one.
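The simplest of the three data-driven models in this abstract, the linear regression, can be sketched as an ordinary least-squares map from raw sensor readings to tip position with no mechanics-based sensor model. The 3-channel "FBG readings" and the hidden linear relationship below are synthetic assumptions for illustration only:

```python
# Sketch of the data-driven idea: fit a linear map from raw FBG sensor
# readings to distal-end position, bypassing any mechanics-based model.
# The sensor data and "true" coefficients below are synthetic.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(200, 3))            # raw readings (200 samples, 3 FBG nodes)
true_coef = np.array([0.8, -0.3, 0.5])   # hidden linear relationship (assumed)
y = W @ true_coef + rng.normal(scale=0.01, size=200)  # tip position (mm)

# Ordinary least squares: coef = argmin ||W c - y||^2
coef, *_ = np.linalg.lstsq(W, y, rcond=None)
pred = W @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

In the paper, the DNN and TNN replace this linear map with learned nonlinear ones; the training principle (fit sensor-to-position directly from data) is the same.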

6.
IEEE ASME Trans Mechatron ; 26(1): 369-380, 2021 Feb.
Article in English | MEDLINE | ID: mdl-34025108

ABSTRACT

This paper presents the development and experimental evaluation of a redundant robotic system for the less-invasive treatment of osteolysis (bone degradation) behind the acetabular implant during total hip replacement revision surgery. The system comprises a rigid-link positioning robot and a Continuum Dexterous Manipulator (CDM) equipped with highly flexible debriding tools and a Fiber Bragg Grating (FBG)-based sensor. The robot and the continuum manipulator are controlled concurrently via an optimization-based framework using the Tip Position Estimation (TPE) from the FBG sensor as feedback. Performance of the system is evaluated on a setup consisting of an acetabular cup and a saw-bone phantom simulating the bone behind the cup. Experiments consist of performing the surgical procedure on this phantom setup. CDM TPE using FBGs, target location placement, cutting performance, and the ability of the concurrent control algorithm to achieve the desired tasks are evaluated. The mean and standard deviation of the CDM TPE from the FBG sensor and the robotic system are 0.50 mm and 0.18 mm, respectively. Using the developed surgical system, accurate positioning and successful cutting of desired straight-line and curvilinear paths on saw-bone phantoms of different densities behind the cup are demonstrated. Compared to conventional rigid tools, the workspace reach behind the acetabular cup is 2.47 times greater when using the developed robotic system.

7.
IEEE Trans Autom Sci Eng ; 18(1): 299-310, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33746641

ABSTRACT

The treatment of malaria is a global health challenge that stands to benefit from the widespread introduction of a vaccine for the disease. A method has been developed to create a live-organism vaccine using the sporozoites (SPZ) of the parasite Plasmodium falciparum (Pf), which are concentrated in the salivary glands of infected mosquitoes. Current manual dissection methods to obtain these PfSPZ are not efficient enough for large-scale vaccine production. We propose an improved dissection procedure and a mechanical fixture that increase the rate of mosquito dissection and help to deskill this stage of the production process. We further demonstrate the automation of a key step in this production process: the picking and placing of mosquitoes from a staging apparatus into a dissection assembly. This unit test of a robotic mosquito pick-and-place system is performed using a custom-designed micro-gripper attached to a four-degree-of-freedom (4-DOF) robot under the guidance of a computer vision system. Mosquitoes are autonomously grasped and pulled to a pair of notched dissection blades to remove the head of the mosquito, allowing access to the salivary glands. Placement into these blades is adapted based on output from computer vision to accommodate the unique anatomy and orientation of each grasped mosquito. In this pilot test of the system on 50 mosquitoes, we demonstrate 100% grasping accuracy and 90% accuracy in placing the mosquito with its neck within the blade notches such that the head can be removed. This is a promising result for this difficult and non-standard pick-and-place task. NOTE TO PRACTITIONERS: Automated processes could help increase malaria vaccine production to global scale. Currently, production requires technicians to manually dissect mosquitoes, a process that is slow, tedious, and requires a lengthy training regimen. This paper presents an improved manual fixture and procedure that reduces technician training time. Further, an approach to automate this dissection process is proposed, and the critical step of robotic manipulation of the mosquito with the aid of computer vision is demonstrated. Our approach may serve as a useful example of system design and integration for practitioners who seek to perform new and challenging pick-and-place tasks with small, non-uniform, and highly deformable objects.

8.
IEEE Trans Robot ; 36(1): 222-239, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32661460

ABSTRACT

In this article, we present a novel stochastic algorithm called Simultaneous Sensor Calibration and Deformation Estimation (SCADE) to address the problem of modeling the deformation behavior of a generic continuum manipulator (CM) in free and obstructed environments. In SCADE, using a novel mathematical formulation, we introduce an a priori model-independent filtering algorithm to fuse the continuous but inaccurate measurements of an embedded sensor (e.g., magnetic or piezoelectric sensors) with the intermittent but accurate data of an external imaging system (e.g., optical trackers or cameras). The main motivation of this article is the crucial need for accurate shape/position estimation of a CM used in a surgical intervention. In these robotic procedures, the CM is typically equipped with an embedded sensing unit (ESU), while an external imaging modality (e.g., an ultrasound or fluoroscopy machine) is also available at the surgical site. The results of two different sets of prior experiments in free and obstructed environments were used to evaluate the efficacy of the SCADE algorithm. The experiments were performed with a CM specifically designed for orthopaedic interventions, equipped with an inaccurate Fiber Bragg Grating (FBG) ESU, and an overhead camera. The results demonstrated the successful performance of the SCADE algorithm in simultaneously estimating the unknown deformation behavior of the unmodeled CM and identifying the time-varying drift of the poorly calibrated FBG sensing unit. Moreover, the results showed that SCADE markedly outperformed FBG-based estimation of the CM's tip position.

9.
IEEE Sens J ; 17(11): 3526-3541, 2017 Jun 01.
Article in English | MEDLINE | ID: mdl-28736508

ABSTRACT

In vitreoretinal surgery, membrane peeling is a prototypical task in which a layer of fibrous tissue is delaminated off the retina with a micro-forceps by applying very fine forces that are mostly imperceptible to the surgeon. Previously, we developed sensitized ophthalmic surgery tools based on fiber Bragg grating (FBG) strain sensors, which were shown to precisely detect forces at the instrument's tip in the two degrees of freedom perpendicular to the tool axis. This paper presents a new design that employs an additional sensor to also capture the tensile force along the tool axis. The grasping functionality is provided via a compact motorized unit. To compute forces, we investigate two distinct fitting methods: a linear regression and a nonlinear fit based on second-order Bernstein polynomials. We carry out experiments to test the repeatability of sensor outputs, calibrate the sensor, and validate its performance. Results demonstrate sensor wavelength repeatability within 2 pm. Although the linear method provides sufficient accuracy in measuring transverse forces, in the axial direction it produces a root mean square (rms) error of over 3 mN even for a confined magnitude and direction of forces. The nonlinear method, on the other hand, provides a more consistent and accurate measurement of both the transverse and axial forces over the entire force range (0-25 mN). Validation on random samples shows that our tool with the nonlinear force computation method can predict 3-D forces with an rms error under 0.15 mN in the transverse plane and within 2 mN accuracy in the axial direction.
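The second-order Bernstein-polynomial fit mentioned in this abstract can be sketched as a least-squares fit in the degree-2 Bernstein basis. The sensor-to-force data below are synthetic, not the tool's actual calibration:

```python
# Sketch of a second-order Bernstein-polynomial calibration fit, the kind
# of nonlinear method the abstract contrasts with linear regression.
# The calibration data below are synthetic.
import numpy as np

def bernstein2_design(t):
    """Design matrix of the three degree-2 Bernstein basis functions on [0, 1]:
    B0 = (1-t)^2, B1 = 2t(1-t), B2 = t^2."""
    t = np.asarray(t, dtype=float)
    return np.stack([(1 - t) ** 2, 2 * t * (1 - t), t ** 2], axis=1)

# Synthetic data: force is a smooth nonlinear function of the normalized
# sensor signal t (here exactly quadratic, so the basis can represent it).
t = np.linspace(0.0, 1.0, 50)
force = 25.0 * t ** 2            # "true" force in mN (assumed shape)

A = bernstein2_design(t)
coef, *_ = np.linalg.lstsq(A, force, rcond=None)  # Bernstein control points
fit = A @ coef
max_err = float(np.max(np.abs(fit - force)))
```

Because t^2 equals the third Bernstein basis function, the fit here is exact; on real sensor data the basis gives a smooth low-order approximation that captures curvature a purely linear model misses.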

10.
Sensors (Basel) ; 17(10)2017 Sep 23.
Article in English | MEDLINE | ID: mdl-28946634

ABSTRACT

Retinal vein cannulation is a technically demanding surgical procedure in which therapeutic agents are injected into the retinal veins to treat occlusions. The clinical feasibility of this approach has been largely limited by the technical challenges associated with performing the procedure. Among the challenges to successful vein cannulation are identifying the moment of venous puncture, achieving cannulation of the micro-vessel, and maintaining cannulation throughout drug delivery. Recent advances in medical robotics and in sensing of tool-tissue interaction forces have the potential to address each of these challenges, as well as to prevent tissue trauma, minimize complications, diminish surgeon effort, and ultimately promote successful retinal vein cannulation. In this paper, we develop an assistive system combining a handheld micromanipulator, called "Micron", with a force-sensing microneedle. Using this system, we examine two distinct methods of precisely detecting the instant of venous puncture: one based on measured tool-tissue interaction forces and the other on the tracked position of the needle tip. In addition to the existing tremor-canceling function of Micron, a new control method is implemented to actively compensate for unintended movements of the operator and to keep the cannulation device securely inside the vein following cannulation. To demonstrate the capabilities and performance of our uniquely upgraded system, we present a multi-user artificial phantom study with subjects at three different surgical skill levels. Results show that our puncture detection algorithm, when combined with the active positive holding feature, enables sustained cannulation, which is most evident in smaller veins. Notably, the active holding function significantly attenuates tool motion in the vein, thereby reducing trauma during cannulation.
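A common signature of vessel puncture in force-sensing instruments is a sharp drop in tool-tissue force after a steady rise. The detector below is a hypothetical illustration of that idea, not the paper's algorithm; the threshold and force trace are invented:

```python
# Illustrative force-based puncture detection: flag the first sample whose
# force falls sharply below the running maximum, a typical signature of
# membrane or vessel puncture. Threshold and trace are invented.

def detect_puncture(forces, drop=1.0):
    """Return the index of the first sample whose force falls by more than
    `drop` (mN) relative to the running maximum, or None if no drop occurs."""
    peak = forces[0]
    for i, f in enumerate(forces):
        peak = max(peak, f)
        if peak - f > drop:
            return i
    return None

# Force ramps up as the needle indents the vein wall, then drops at puncture.
trace = [0.5, 1.0, 2.0, 3.5, 5.0, 6.2, 2.1, 1.8]
idx = detect_puncture(trace, drop=2.0)
```

In practice such a detector would run on filtered force data and be tuned against the position-based method the abstract mentions as the alternative.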


Subjects
Catheterization/instrumentation, Catheterization/methods, Micromanipulation/instrumentation, Needles, Retinal Vein/surgery, Robotics, Humans
11.
IEEE ASME Trans Mechatron ; 22(6): 2440-2448, 2017 Dec.
Article in English | MEDLINE | ID: mdl-29628753

ABSTRACT

In this study, we built and tested a handheld, motion-guided micro-forceps system using common-path swept-source optical coherence tomography (CP-SSOCT) for highly accurate depth-controlled epiretinal membranectomy. A touch sensor and two motors were used in the forceps design to minimize the motion artifact inherent in squeezing the tool handle to actuate grasping, and to independently control the depth of the tool tip. Smart motion-monitoring and guiding algorithms were devised to provide precise and intuitive freehand control. We compared the involuntary tool-tip motion occurring while grasping with a standard manual micro-forceps against that with our touch-sensor-activated micro-forceps. The results showed that our touch-sensor-based, motor-actuated tool can significantly attenuate the motion artifact during grasping (119.81 µm with our device versus 330.73 µm with the standard micro-forceps). By activating the CP-SSOCT-based depth-locking feature, the erroneous tool-tip motion can be further reduced to 5.11 µm. We evaluated the performance of our device against the standard instrument in terms of elapsed time, number of grasping attempts, and maximum depth of damage created on the substrate surface while picking up small pieces of fiber (Ø 125 µm) from a soft polymer surface. The results indicate that all metrics significantly improved when using our device; of note, the average elapsed time, number of grasping attempts, and maximum depth of damage were reduced by 25%, 31%, and 75%, respectively.

12.
IEEE Sens J ; 15(10): 5494-5503, 2015 Oct.
Article in English | MEDLINE | ID: mdl-27761103

ABSTRACT

Dexterous continuum manipulators (DCMs) can greatly increase the reachable region and steerability in minimally and less invasive surgery. Many such procedures require the DCM to produce large deflections, and real-time control of the DCM shape requires sensors that accurately detect and report them. We propose a novel large-deflection shape sensor to track the shape of a 35 mm DCM designed for less invasive treatment of osteolysis. Two shape sensors, each with three fiber Bragg grating sensing nodes, are embedded within the DCM, with the sensors' distal ends fixed to the DCM. The DCM centerline is computed from the centerlines of the two sensor curves. An experimental platform was built and several groups of experiments were carried out, including free bending and three cases of bending with obstacles. For each experiment, the DCM drive cable was pulled with a precise linear slide stage, the DCM centerline was calculated, and a 2D camera image was captured for verification. The shape reconstructed from the shape sensors is compared with ground truth generated by a 2D-3D registration between the camera image and the 3D DCM model. Results show that the distal tip tracking accuracy is 0.40 ± 0.30 mm for free bending, and 0.61 ± 0.15 mm, 0.93 ± 0.05 mm, and 0.23 ± 0.10 mm for the three cases of bending with obstacles. The data suggest that FBG arrays can accurately characterize the shape of large-deflection DCMs.
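Shape reconstruction from FBG nodes typically means integrating curvature along the arc length to recover the centerline. The planar sketch below illustrates that integration for a constant-curvature bend; the segment length, curvature profile, and discretization are illustrative assumptions, not the paper's reconstruction pipeline:

```python
# Sketch of centerline reconstruction: integrate curvature samples (as FBG
# nodes would provide) along the arc length to recover a planar curve.
# Discretization and curvature values are illustrative.
import math

def reconstruct_centerline(curvatures, ds):
    """Integrate the tangent angle (theta' = kappa) over arc length in 2D."""
    x, y, theta = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for k in curvatures:
        theta += k * ds                 # tangent angle update
        x += math.cos(theta) * ds       # step along the new tangent
        y += math.sin(theta) * ds
        pts.append((x, y))
    return pts

# Constant curvature bends the manipulator along a circular arc:
# a 35 mm segment (as for the DCM in the abstract) at radius 35 mm.
n, length, radius = 100, 35.0, 35.0
pts = reconstruct_centerline([1.0 / radius] * n, length / n)
tip_x, tip_y = pts[-1]
```

For this arc the closed-form tip is (35 sin 1, 35 (1 - cos 1)) ≈ (29.45, 16.09) mm; the small residual of the discrete integration is the kind of accumulated error the previous abstract's distal-first approach is designed to avoid.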

13.
IEEE Trans Med Imaging ; 43(1): 275-285, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37549070

ABSTRACT

Image-based 2D/3D registration is a critical technique for fluoroscopy-guided surgical interventions. Conventional intensity-based 2D/3D registration approaches suffer from a limited capture range due to the presence of local minima in hand-crafted image similarity functions. In this work, we aim to extend the 2D/3D registration capture range with a fully differentiable deep network framework that learns to approximate a convex-shaped similarity function. The network uses a novel Projective Spatial Transformer (ProST) module that has unique differentiability with respect to 3D pose parameters, and it is trained using an innovative double-backward gradient-driven loss function. We compare against the most popular learning-based pose regression methods in the literature and use the well-established CMA-ES intensity-based registration as a benchmark. We report registration pose error, target registration error (TRE), and success rate (SR) with a threshold of 10 mm for mean TRE. For the pelvis anatomy, the median TRE of ProST followed by CMA-ES is 4.4 mm with an SR of 65.6% in simulation, and 2.2 mm with an SR of 73.2% on real data. The CMA-ES SRs without ProST registration are 28.5% and 36.0% in simulation and on real data, respectively. Our results suggest that the proposed ProST network learns a practical similarity function that vastly extends the capture range of conventional intensity-based 2D/3D registration. We believe that the unique differentiable property of ProST has the potential to benefit related 3D medical imaging research applications. The source code is available at https://github.com/gaocong13/Projective-Spatial-Transformers.


Subjects
Three-Dimensional Imaging, Pelvis, Three-Dimensional Imaging/methods, Fluoroscopy/methods, Software, Algorithms
14.
Int J Comput Assist Radiol Surg ; 19(1): 51-59, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37347346

ABSTRACT

PURPOSE: A virtual reality (VR) system, in which surgeons can practice procedures on virtual anatomies, is a scalable and cost-effective alternative to cadaveric training. Fully digitized virtual surgeries can also be used to assess the surgeon's skills using measurements that are otherwise hard to collect in reality. We therefore present the Fully Immersive Virtual Reality System (FIVRS) for skull-base surgery, which combines surgical simulation software with a high-fidelity hardware setup. METHODS: FIVRS allows surgeons to follow normal clinical workflows inside the VR environment. FIVRS uses advanced rendering designs and drilling algorithms for realistic bone ablation. A head-mounted display with ergonomics similar to those of surgical microscopes is used to improve immersiveness. Extensive multi-modal data are recorded for post-analysis, including eye gaze, motion, force, and video of the surgery. A user-friendly interface is also designed to ease the learning curve of using FIVRS. RESULTS: We present results from a user study involving surgeons with various levels of expertise. The preliminary data recorded by FIVRS differentiate between participants of different expertise levels, which is promising for future research on automatic skill assessment. Furthermore, informal feedback from the study participants about the system's intuitiveness and immersiveness was positive. CONCLUSION: We present FIVRS, a fully immersive VR system for skull-base surgery. FIVRS features realistic software simulation coupled with modern hardware for improved realism. The system is completely open source and provides feature-rich data in an industry-standard format.


Subjects
Virtual Reality, Humans, Computer Simulation, Software, User-Computer Interface, Clinical Competence, Skull/surgery
15.
IEEE Trans Med Robot Bionics ; 6(1): 135-145, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38304756

ABSTRACT

Subretinal injection methods and other procedures for treating retinal conditions and diseases (many considered incurable) have been limited in scope by the limits of human motor control. This study demonstrates the next-generation, cooperatively controlled Steady-Hand Eye Robot (SHER 3.0), a precise and intuitive-to-use robotic platform that achieves clinical standards for targeting accuracy and resolution for subretinal injections. The system design and basic kinematics are reported, and a deflection model for the incorporated delta stage, along with validation experiments, is presented. This model optimizes the delta stage parameters, maximizing the global conditioning index and minimizing torsional compliance. Five tests measuring accuracy, repeatability, and deflection show that the optimized stage design achieves a tip accuracy of < 30 µm, a tip repeatability of 9.3 µm and 0.02°, and deflections of 20-350 µm/N. Future work will use updated control models to refine tip-positioning outcomes and will test the system in in vivo animal models.

16.
Int J Comput Assist Radiol Surg ; 19(2): 199-208, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37610603

ABSTRACT

PURPOSE: To achieve effective robot-assisted laparoscopic prostatectomy, integrating a transrectal ultrasound (TRUS) imaging system, the most widely used modality in prostate imaging, is essential. However, manual manipulation of the ultrasound transducer during the procedure significantly interferes with the surgery. We therefore propose an image co-registration algorithm based on a photoacoustic marker (PM) method, in which ultrasound/photoacoustic (US/PA) images are registered to the endoscopic camera images, ultimately enabling the TRUS transducer to automatically track the surgical instrument. METHODS: An optimization-based algorithm is proposed to co-register the images from the two different imaging modalities. The principle of light propagation and an uncertainty in PM detection are incorporated in this algorithm to improve its stability and accuracy. The algorithm is validated using a previously developed US/PA image-guided system with a da Vinci surgical robot. RESULTS: The target registration error (TRE) is measured to evaluate the proposed algorithm. In both simulation and experimental demonstration, the proposed algorithm achieved sub-centimeter accuracy, which is acceptable in clinical practice (1.15 ± 0.29 mm in the experimental evaluation). The result is also comparable with our previous approach (1.05 ± 0.37 mm), and the proposed method can be implemented with a normal white-light stereo camera and does not require highly accurate localization of the PM. CONCLUSION: The proposed frame registration algorithm enables a simple yet efficient integration of a commercial US/PA imaging system into the laparoscopic surgical setting by leveraging the characteristic properties of acoustic wave propagation and laser excitation, contributing to automated US/PA image-guided surgical intervention applications.


Subjects
Laparoscopy, Prostatic Neoplasms, Robotics, Computer-Assisted Surgery, Male, Humans, Three-Dimensional Imaging/methods, Ultrasonography/methods, Computer-Assisted Surgery/methods, Algorithms, Prostatectomy/methods, Prostatic Neoplasms/surgery
17.
Otolaryngol Head Neck Surg ; 171(1): 188-196, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38488231

ABSTRACT

OBJECTIVE: Use microscopic video-based tracking of laryngeal surgical instruments to investigate the effect of robot assistance on instrument tremor. STUDY DESIGN: Experimental trial. SETTING: Tertiary Academic Medical Center. METHODS: In this randomized cross-over trial, 36 videos were recorded from 6 surgeons performing left and right cordectomies on cadaveric pig larynges. These recordings captured 3 distinct conditions: without robotic assistance, with robot-assisted scissors, and with robot-assisted graspers. To assess tool tremor, we employed computer vision-based algorithms for tracking surgical tools. Absolute tremor bandpower and normalized path length were utilized as quantitative measures. Wilcoxon rank sum exact tests were employed for statistical analyses and comparisons between trials. Additionally, surveys were administered to assess the perceived ease of use of the robotic system. RESULTS: Absolute tremor bandpower showed a significant decrease when using robot-assisted instruments compared to freehand instruments (P = .012). Normalized path length significantly decreased with robot-assisted compared to freehand trials (P = .001). For the scissors, robot-assisted trials resulted in a significant decrease in absolute tremor bandpower (P = .002) and normalized path length (P < .001). For the graspers, there was no significant difference in absolute tremor bandpower (P = .4), but there was a significantly lower normalized path length in the robot-assisted trials (P = .03). CONCLUSION: This study demonstrated that computer-vision-based approaches can be used to assess tool motion in simulated microlaryngeal procedures. The results suggest that robot assistance is capable of reducing instrument tremor.
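The normalized path length metric used in this tremor study can be illustrated with one plausible formulation: total distance traveled by the tracked tool tip divided by the straight-line start-to-end distance, so a perfectly steady sweep scores 1.0. The exact definition used in the study may differ, and the trajectories below are synthetic:

```python
# One plausible formulation of a "normalized path length" tremor metric:
# total tool-tip travel divided by the straight-line displacement.
# Higher values indicate more jitter. Trajectories are synthetic.
import math

def normalized_path_length(points):
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return path / direct

steady = [(0, 0), (1, 0), (2, 0), (3, 0)]          # straight sweep: ratio 1.0
tremulous = [(0, 0), (1, 0.5), (2, -0.5), (3, 0)]  # same endpoints, with jitter
```

Applied to tool-tip tracks extracted by the computer-vision pipeline, a drop in this ratio under robot assistance would correspond to the reduced tremor the study reports.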


Subjects
Microsurgery, Robotic Surgical Procedures, Swine, Animals, Robotic Surgical Procedures/methods, Microsurgery/methods, Tremor/surgery, Cross-Over Studies, Video Recording, Cadaver, Humans
18.
Int J Comput Assist Radiol Surg ; 19(7): 1259-1266, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38775904

ABSTRACT

PURPOSE: Monocular SLAM algorithms are the key enabling technology for image-based surgical navigation systems in endoscopic procedures. Due to the visual feature scarcity and unique lighting conditions encountered in endoscopy, classical SLAM approaches perform inconsistently. Many recent approaches to endoscopic SLAM rely on deep learning models. They show promising results when optimized for singular domains such as arthroscopy, sinus endoscopy, colonoscopy, or laparoscopy, but are limited by an inability to generalize to different domains without retraining. METHODS: To address this generality issue, we propose OneSLAM, a monocular SLAM algorithm for surgical endoscopy that works out of the box across several endoscopic domains, including sinus endoscopy, colonoscopy, arthroscopy, and laparoscopy. Our pipeline builds upon robust tracking-any-point (TAP) foundation models to reliably track sparse correspondences across multiple frames, and runs local bundle adjustment to jointly optimize camera poses and a sparse 3D reconstruction of the anatomy. RESULTS: We compare the performance of our method against three strong baselines previously proposed for monocular SLAM in endoscopy and general scenes. OneSLAM presents better or comparable performance than existing approaches targeted to specific data in all four tested domains, generalizing across domains without the need for retraining. CONCLUSION: OneSLAM benefits from the convincing performance of TAP foundation models while generalizing to endoscopic sequences of different anatomies, all while demonstrating better or comparable performance than domain-specific SLAM approaches. Future research on global loop closure will investigate how to reliably detect loops in endoscopic scenes to reduce accumulated drift and enhance long-term navigation capabilities.


Subjects
Algorithms, Endoscopy, Humans, Endoscopy/methods, Three-Dimensional Imaging/methods, Computer-Assisted Surgery/methods, Computer-Assisted Image Processing/methods
19.
Article in English | MEDLINE | ID: mdl-38922721

ABSTRACT

OBJECTIVE: Segmentation, the partitioning of patient imaging into multiple labeled segments, has several potential clinical benefits, but performed manually it is tedious and resource intensive. Automated deep learning (DL)-based segmentation methods can streamline the process. The objective of this study was to evaluate a label-efficient DL pipeline that requires only a small number of annotated scans for semantic segmentation of sinonasal structures in CT scans. STUDY DESIGN: Retrospective cohort study. SETTING: Academic institution. METHODS: Forty CT scans were used in this study, including 16 scans in which the nasal septum (NS), inferior turbinate (IT), maxillary sinus (MS), and optic nerve (ON) were manually annotated using open-source software. A label-efficient DL framework was trained jointly on a few manually labeled scans and the remaining unlabeled scans. Quantitative analysis was then performed to determine the number of annotated scans needed to achieve submillimeter average surface distances (ASDs). RESULTS: Our findings reveal that merely four labeled scans are necessary to achieve median submillimeter ASDs for the large sinonasal structures (NS, 0.96 mm; IT, 0.74 mm; MS, 0.43 mm), whereas eight scans are required for the smaller ON (0.80 mm). CONCLUSION: We have evaluated a label-efficient pipeline for segmentation of sinonasal structures. Empirical results demonstrate that automated DL methods can achieve submillimeter accuracy using a small number of labeled CT scans. Our pipeline has the potential to improve pre-operative planning workflows, robotic and image-guided navigation systems, computer-assisted diagnosis, and the construction of statistical shape models to quantify population variations. LEVEL OF EVIDENCE: N/A.
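The average surface distance (ASD) used to judge these segmentations is, in essence, the mean nearest-neighbor distance between two boundary point sets, symmetrized over both directions. The brute-force sketch below uses toy 2D contours (square vertices), not CT surfaces:

```python
# Sketch of the average surface distance (ASD) metric: symmetrized mean
# nearest-neighbor distance between two boundary point sets.
# The point sets here are toy 2D contours, not CT surface meshes.
import math

def avg_surface_distance(a, b):
    def one_way(src, dst):
        return sum(min(math.dist(p, q) for q in dst) for p in src) / len(src)
    return 0.5 * (one_way(a, b) + one_way(b, a))

# Two unit squares offset by 0.1 mm along x (vertices only, for brevity):
# every point's nearest neighbor is 0.1 mm away, so the ASD is 0.1 mm.
sq1 = [(0, 0), (1, 0), (1, 1), (0, 1)]
sq2 = [(0.1, 0), (1.1, 0), (1.1, 1), (0.1, 1)]
asd = avg_surface_distance(sq1, sq2)
```

Real implementations densely sample the segmentation boundary and use spatial indexing (e.g., a k-d tree) instead of this O(n·m) scan, but the metric is the same.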

20.
Int J Comput Assist Radiol Surg ; 19(6): 1213-1222, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38642297

ABSTRACT

PURPOSE: Teamwork in surgery depends on a shared mental model of success, i.e., a common understanding of objectives in the operating room. A shared model leads to increased engagement among team members and is associated with fewer complications and overall better outcomes for patients. However, clinical training typically focuses on role-specific skills, leaving individuals to acquire a shared model indirectly through on-the-job experience. METHODS: We investigate whether virtual reality (VR) cross-training, i.e., exposure to other roles, can enhance a shared mental model for non-surgeons more directly. Our study focuses on X-ray-guided pelvic trauma surgery, a procedure where successful communication depends on the shared model between the surgeon and a C-arm technologist. We present a VR environment supporting both roles and evaluate a cross-training curriculum in which non-surgeons swap roles with the surgeon. RESULTS: Exposure to the surgical task resulted in higher engagement with the C-arm technologist role in VR, as measured by the mental demand and effort expended by participants (p < 0.001). It also has a significant effect on non-surgeons' mental model of the overall task; novice participants' estimation of the mental demand and effort required for the surgeon's task increases after training, while their perception of overall performance decreases (p < 0.05), indicating a gap in understanding based solely on observation. This phenomenon was also present for a professional C-arm technologist. CONCLUSION: Until now, VR applications for clinical training have focused on virtualizing existing curricula. We demonstrate how novel approaches that are not possible outside a virtual environment, such as role swapping, may enhance the shared mental model of surgical teams by contextualizing each individual's role within the overall task in a time- and cost-efficient manner. As workflows grow increasingly sophisticated, we see VR curricula as able to directly foster a shared model for success, ultimately benefiting patient outcomes through more effective teamwork in surgery.


Subjects
Patient Care Team, Virtual Reality, Humans, Female, Male, Curriculum, Clinical Competence, Adult, Computer-Assisted Surgery/methods, Computer-Assisted Surgery/education, Surgeons/education, Surgeons/psychology