Results 1 - 20 of 66
1.
Sensors (Basel); 22(14), 2022 Jul 17.
Article in English | MEDLINE | ID: mdl-35891016

ABSTRACT

Developing image-guided robotic systems requires access to flexible, open-source software. For image guidance, the open-source medical imaging platform 3D Slicer is one of the most widely adopted tools for research and prototyping. Similarly, for robotics, the open-source middleware suite Robot Operating System (ROS) is the standard development framework. Several ad hoc attempts have been made in the past to bridge the two tools; however, they all rely on middleware and custom interfaces, and none provides access to the full suite of tools offered by ROS or 3D Slicer. Therefore, in this paper, we present the SlicerROS2 module, designed for the direct use of ROS2 packages and libraries within 3D Slicer. The module enables real-time visualization of robots, accommodates different robot configurations, and facilitates data transfer in both directions (between ROS and Slicer). We demonstrate the system on multiple robots with different configurations, evaluate the system's performance, and discuss an image-guided robotic intervention that can be prototyped with this module. This module can serve as a starting point for clinical system development, reducing the need for custom interfaces and time-intensive platform setup.


Subjects
Robotics, Diagnostic Imaging, Reactive Oxygen Species, Software
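The abstract describes data transfer in both directions between ROS and Slicer. One concrete detail any such bridge must handle is unit and representation conversion: ROS poses use meters and xyzw quaternions, while 3D Slicer transforms are 4×4 homogeneous matrices in millimeters. The following is an illustrative sketch of that conversion, not SlicerROS2's actual code; coordinate-axis conventions (e.g. an RAS axis permutation) may also be needed in practice and are omitted here.

```python
import numpy as np

def quat_to_matrix(qx, qy, qz, qw):
    """Convert a unit quaternion (ROS xyzw order) to a 3x3 rotation matrix."""
    n = np.sqrt(qx*qx + qy*qy + qz*qz + qw*qw)
    qx, qy, qz, qw = qx/n, qy/n, qz/n, qw/n
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])

def ros_pose_to_slicer_matrix(position_m, quaternion_xyzw):
    """Build a 4x4 homogeneous transform, converting ROS meters to Slicer millimeters."""
    T = np.eye(4)
    T[:3, :3] = quat_to_matrix(*quaternion_xyzw)
    T[:3, 3] = 1000.0 * np.asarray(position_m)  # m -> mm
    return T
```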
2.
IEEE ASME Trans Mechatron; 26(1): 369-380, 2021 Feb.
Article in English | MEDLINE | ID: mdl-34025108

ABSTRACT

This paper presents the development and experimental evaluation of a redundant robotic system for the less-invasive treatment of osteolysis (bone degradation) behind the acetabular implant during total hip replacement revision surgery. The system comprises a rigid-link positioning robot and a Continuum Dexterous Manipulator (CDM) equipped with highly flexible debriding tools and a Fiber Bragg Grating (FBG)-based sensor. The robot and the continuum manipulator are controlled concurrently via an optimization-based framework using the Tip Position Estimation (TPE) from the FBG sensor as feedback. Performance of the system is evaluated on a setup that consists of an acetabular cup and a saw-bone phantom simulating the bone behind the cup. Experiments consist of performing the surgical procedure on the simulated phantom setup. CDM TPE using FBGs, target location placement, cutting performance, and the capability of the concurrent control algorithm to achieve the desired tasks are evaluated. Mean and standard deviation of the CDM TPE from the FBG sensor and the robotic system are 0.50 mm and 0.18 mm, respectively. Using the developed surgical system, accurate positioning and successful cutting of desired straight-line and curvilinear paths on saw-bone phantoms behind the cup with different densities are demonstrated. Compared to conventional rigid tools, the workspace reach behind the acetabular cup is 2.47 times greater when using the developed robotic system.
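The abstract mentions an optimization-based framework that concurrently controls the positioning robot and the CDM, but gives no formulation. A common building block for this kind of redundancy resolution — offered here purely as a hypothetical sketch, not the paper's actual method — is a damped least-squares resolved-rate step, which maps a desired tip displacement to a joint increment while penalizing large joint motion:

```python
import numpy as np

def dls_step(J, dx, damping=0.01):
    """One damped least-squares resolved-rate step: returns the joint
    increment dq minimizing ||J dq - dx||^2 + damping^2 * ||dq||^2."""
    m = J.shape[0]
    return J.T @ np.linalg.solve(J @ J.T + (damping**2) * np.eye(m), dx)
```

With a stacked Jacobian covering both the rigid-link robot and the continuum segment, repeated steps of this form drive the combined tip toward a target along a planned path.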

3.
J Appl Clin Med Phys; 18(4): 84-96, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28574192

ABSTRACT

PURPOSE: Stereotactic body radiation therapy (SBRT) allows high radiation doses to be delivered to pancreatic tumors with limited toxicity. Nevertheless, the respiratory motion of the pancreas introduces major uncertainty during SBRT. Ultrasound imaging is a non-ionizing, non-invasive, real-time technique for intrafraction monitoring, but no suitable configuration has been available for placing the ultrasound probe during pancreas SBRT. METHODS AND MATERIALS: An arm-bridge system was designed and built. A CT scan of the bridge-held ultrasound probe was acquired and fused to the CTs of ten previously treated pancreatic SBRT patients as virtual simulation CTs. Both step-and-shoot intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT) planning were performed on the virtual simulation CTs. The accuracy of our tracking algorithm was evaluated with a programmable motion phantom executing simulated breath-hold 3D movement. An IRB-approved volunteer study was also performed to evaluate the feasibility of the system setup: three healthy subjects underwent the same patient setup required for pancreas SBRT with active breath control (ABC), and 4D ultrasound images were acquired for monitoring. Ten breath-hold cycles were monitored for both the phantom and the volunteers. For the phantom study, the target motion tracked by ultrasound was compared with the motion tracked by an infrared camera; for the volunteer study, the reproducibility of ABC breath-hold was assessed. RESULTS: The volunteer study showed that the arm-bridge system allows placement of an ultrasound probe, and ultrasound monitoring showed better than 2 mm reproducibility of ABC breath-hold in healthy volunteers. The phantom monitoring accuracy is 0.14 ± 0.08 mm, 0.04 ± 0.1 mm, and 0.25 ± 0.09 mm in the three directions. For dosimetry, 100% of virtual simulation plans passed the protocol criteria.
CONCLUSIONS: Our ultrasound system can potentially be used for real-time monitoring during pancreas SBRT without compromising planning quality. The phantom study showed high monitoring accuracy of the system, and the volunteer study demonstrated the feasibility of the clinical workflow.


Subjects
Organ Motion, Pancreatic Neoplasms/diagnostic imaging, Pancreatic Neoplasms/radiotherapy, Radiosurgery/methods, Radiotherapy Planning, Computer-Assisted, Respiration, Ultrasonography, Interventional/methods, Algorithms, Feasibility Studies, Humans, Phantoms, Imaging, Radiotherapy, Intensity-Modulated, Reproducibility of Results
4.
Sensors (Basel); 15(7): 16448-65, 2015 Jul 08.
Article in English | MEDLINE | ID: mdl-26184191

ABSTRACT

Optical tracking provides relatively high accuracy over a large workspace but requires line-of-sight between the camera and the markers, which may be difficult to maintain in actual applications. In contrast, inertial sensing does not require line-of-sight but is subject to drift, which may cause large cumulative errors, especially during the measurement of position. To handle cases where some or all of the markers are occluded, this paper proposes an inertial and optical sensor fusion approach in which the bias of the inertial sensors is estimated when the optical tracker provides full six degree-of-freedom (6-DOF) pose information. As long as the position of at least one marker can be tracked by the optical system, the 3-DOF position can be combined with the orientation estimated from the inertial measurements to recover the full 6-DOF pose information. When all the markers are occluded, the position tracking relies on the inertial sensors that are bias-corrected by the optical tracking system. Experiments are performed with an augmented reality head-mounted display (ARHMD) that integrates an optical tracking system (OTS) and inertial measurement unit (IMU). Experimental results show that under partial occlusion conditions, the root mean square errors (RMSE) of orientation and position are 0.04° and 0.134 mm, and under total occlusion conditions for 1 s, the orientation and position RMSE are 0.022° and 0.22 mm, respectively. Thus, the proposed sensor fusion approach can provide reliable 6-DOF pose under long-term partial occlusion and short-term total occlusion conditions.


Subjects
Optics and Photonics/instrumentation, Cadaver, Equipment Design, Humans
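The bias-estimation idea in this abstract can be sketched in one degree of freedom: while the optical tracker provides a pose, the gyro bias is estimated from the discrepancy between the gyro rate and the optically derived rate; during occlusion, the bias-corrected rate is integrated. The toy Python version below is an assumption-laden illustration (the exponential smoothing constant and update structure are not the paper's filter):

```python
class BiasCorrectedGyro:
    """1-DOF sketch: estimate gyro bias while the optical tracker sees the
    markers, then integrate the bias-corrected rate during occlusion."""

    def __init__(self, alpha=0.05):
        self.alpha = alpha      # smoothing factor for the bias estimate
        self.bias = 0.0
        self.angle = 0.0        # orientation estimate (rad)

    def update(self, gyro_rate, dt, optical_angle=None):
        if optical_angle is not None:
            # Optical pose available: trust it, and refine the bias estimate
            # from the discrepancy between gyro rate and optical rate.
            optical_rate = (optical_angle - self.angle) / dt
            self.bias += self.alpha * ((gyro_rate - optical_rate) - self.bias)
            self.angle = optical_angle
        else:
            # Occluded: dead-reckon with the bias-corrected gyro rate.
            self.angle += (gyro_rate - self.bias) * dt
        return self.angle
```

The same structure generalizes to 3-DOF orientation (quaternion integration) and, with an accelerometer bias model, to position dead-reckoning during total occlusion.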
5.
Healthc Technol Lett; 11(2-3): 179-188, 2024.
Article in English | MEDLINE | ID: mdl-38638499

ABSTRACT

Surgical robotics has revolutionized the field of surgery, facilitating complex procedures in operating rooms. However, the current teleoperation systems often rely on bulky consoles, which limit the mobility of surgeons. This restriction reduces surgeons' awareness of the patient during procedures and narrows the range of implementation scenarios. To address these challenges, an alternative solution is proposed: a mixed reality-based teleoperation system. This system leverages hand gestures, head motion tracking, and speech commands to enable the teleoperation of surgical robots. The implementation focuses on the da Vinci research kit (dVRK) and utilizes the capabilities of Microsoft HoloLens 2. The system's effectiveness is evaluated through camera navigation tasks and peg transfer tasks. The results indicate that, in comparison to manipulator-based teleoperation, the system demonstrates comparable viability in endoscope teleoperation. However, it falls short in instrument teleoperation, highlighting the need for further improvements in hand gesture recognition and video display quality.

6.
Int J Comput Assist Radiol Surg; 19(6): 1147-1155, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38598140

ABSTRACT

PURPOSE: This paper evaluates user performance in telesurgical tasks with the da Vinci Research Kit (dVRK), comparing unilateral teleoperation, bilateral teleoperation with force sensors and sensorless force estimation. METHODS: A four-channel teleoperation system with disturbance observers and sensorless force estimation with learning-based dynamic compensation was developed. Palpation experiments were conducted with 12 users who tried to locate tumors hidden in tissue phantoms with their fingers or through handheld or teleoperated laparoscopic instruments with visual, force sensor, or sensorless force estimation feedback. In a peg transfer experiment with 10 users, the contribution of sensorless haptic feedback with/without learning-based dynamic compensation was assessed using NASA TLX surveys, measured free motion speeds and forces, environment interaction forces as well as experiment completion times. RESULTS: The first study showed a 30% increase in accuracy in detecting tumors with sensorless haptic feedback over visual feedback with only a 5-10% drop in accuracy when compared with sensor feedback or direct instrument contact. The second study showed that sensorless feedback can help reduce interaction forces due to incidental contacts by about 3 times compared with unilateral teleoperation. The cost is an increase in free motion forces and physical effort. We show that it is possible to improve this with dynamic compensation. CONCLUSION: We demonstrate the benefits of sensorless haptic feedback in teleoperated surgery systems, especially with dynamic compensation, and that it can improve surgical performance without hardware modifications.


Subjects
Robotic Surgical Procedures, Humans, Robotic Surgical Procedures/methods, Robotic Surgical Procedures/instrumentation, Phantoms, Imaging, Equipment Design, Telemedicine/instrumentation, Palpation/methods, Palpation/instrumentation, User-Computer Interface, Feedback, Robotics/instrumentation, Robotics/methods, Laparoscopy/methods, Laparoscopy/instrumentation
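Sensorless force estimation, as studied in this entry, rests on comparing the measured joint torque against an identified dynamic model: the residual the model cannot explain is attributed to external contact. A one-DOF sketch follows; the paper's actual estimator adds disturbance observers and learned dynamic compensation, which are omitted here.

```python
def estimate_external_torque(tau_measured, qdd, qd, inertia, damping, gravity_torque):
    """Sensorless estimate for one joint: the external torque is whatever part
    of the measured torque the identified dynamic model cannot explain.

    tau_model = inertia * qdd + damping * qd + gravity_torque
    """
    tau_model = inertia * qdd + damping * qd + gravity_torque
    return tau_measured - tau_model
```

In a real system this residual is low-pass filtered and mapped through the manipulator Jacobian to a Cartesian tip force before being rendered to the operator.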
7.
Int J Comput Assist Radiol Surg; 19(1): 51-59, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37347346

ABSTRACT

PURPOSE: A virtual reality (VR) system, where surgeons can practice procedures on virtual anatomies, is a scalable and cost-effective alternative to cadaveric training. The fully digitized virtual surgeries can also be used to assess the surgeon's skills using measurements that are otherwise hard to collect in reality. Thus, we present the Fully Immersive Virtual Reality System (FIVRS) for skull-base surgery, which combines surgical simulation software with a high-fidelity hardware setup. METHODS: FIVRS allows surgeons to follow normal clinical workflows inside the VR environment. FIVRS uses advanced rendering designs and drilling algorithms for realistic bone ablation. A head-mounted display with ergonomics similar to that of surgical microscopes is used to improve immersiveness. Extensive multi-modal data are recorded for post-analysis, including eye gaze, motion, force, and video of the surgery. A user-friendly interface is also designed to ease the learning curve of using FIVRS. RESULTS: We present results from a user study involving surgeons with various levels of expertise. The preliminary data recorded by FIVRS differentiate between participants with different levels of expertise, promising future research on automatic skill assessment. Furthermore, informal feedback from the study participants about the system's intuitiveness and immersiveness was positive. CONCLUSION: We present FIVRS, a fully immersive VR system for skull-base surgery. FIVRS features a realistic software simulation coupled with modern hardware for improved realism. The system is completely open source and provides feature-rich data in an industry-standard format.


Subjects
Virtual Reality, Humans, Computer Simulation, Software, User-Computer Interface, Clinical Competence, Skull/surgery
8.
Int J Comput Assist Radiol Surg; 19(7): 1273-1280, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38816649

ABSTRACT

PURPOSE: Skull base surgery demands exceptional precision when removing bone in the lateral skull base. Robotic assistance can alleviate the effect of human sensory-motor limitations. However, the stiffness and inertia of the robot can significantly impact the surgeon's perception and control of the tool-to-tissue interaction forces. METHODS: We present a situationally aware force control technique aimed at regulating interaction forces during robot-assisted skull base drilling. The contextual interaction information derived from the digital twin environment is used to enhance sensory perception and suppress undesired high forces. RESULTS: To validate our approach, we conducted initial feasibility experiments involving one medical student and two engineering students. The experiment focused on further drilling around critical structures following cortical mastoidectomy. The results demonstrate that robotic assistance coupled with the proposed control scheme effectively limited undesired interaction forces compared to robotic assistance without the proposed force control. CONCLUSIONS: The proposed force control technique shows promise in significantly reducing undesired interaction forces during robot-assisted skull base surgery. These findings contribute to ongoing efforts to enhance surgical precision and safety in complex procedures involving the lateral skull base.


Subjects
Robotic Surgical Procedures, Skull Base, Humans, Skull Base/surgery, Robotic Surgical Procedures/methods, Feasibility Studies, Mastoidectomy/methods
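The abstract does not specify the control law used to "suppress undesired high forces." One simple scheme of this general kind — offered as a hypothetical illustration, not the paper's method — tapers the commanded tool velocity between a soft force threshold and a hard limit, so motion into tissue slows smoothly and stops before the limit is reached:

```python
def limit_interaction(v_cmd, f_measured, f_soft=2.0, f_max=5.0):
    """Scale a commanded tool velocity down as the measured tool-tissue force
    (in newtons, values assumed for illustration) rises from a soft threshold
    toward a hard limit. Returns the modified velocity command."""
    f = abs(f_measured)
    if f <= f_soft:
        return v_cmd          # low force: pass the command through
    if f >= f_max:
        return 0.0            # at or beyond the hard limit: stop
    # Linear taper between the soft threshold and the hard limit.
    return v_cmd * (f_max - f) / (f_max - f_soft)
```

A situationally aware variant, as the abstract suggests, would adjust `f_soft`/`f_max` based on the drill's proximity to critical structures in the digital twin.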
9.
Stud Health Technol Inform; 184: 363-9, 2013.
Article in English | MEDLINE | ID: mdl-23400185

ABSTRACT

We present the design of a self-contained head-mounted surgical navigation system, which consists of an optical tracking system and an optical see-through head-mounted display (HMD). While the current prototype is bulky, we envision a more compact solution via the eventual integration of the tracking camera(s) into the HMD goggles. Rather than attempting to accurately overlay preoperative models onto the field of view, we adopted a simpler approach of displaying a small "picture-in-picture" virtual view in the HMD. We believe this approach will provide suitable assistance for some image-guided procedures, such as tumor resection, while improving the ergonomics by reducing the need for the surgeon to look away from the patient to view an external monitor. We report the results of initial experiments performed with this system, while preparing for a more clinically realistic study.


Subjects
Head Movements, Head Protective Devices, Man-Machine Systems, Surgery, Computer-Assisted/instrumentation, User-Computer Interface, Equipment Design, Equipment Failure Analysis, Humans
10.
Med Phys; 50(6): 3418-3434, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36841948

ABSTRACT

BACKGROUND: In breast CT, scattered photons form a large portion of the acquired signal, adversely impacting image quality throughout the frequency response of the imaging system. Prior studies provided evidence for a new image acquisition design, dubbed Narrow Beam Breast CT (NB-bCT), in preventing scatter acquisition. PURPOSE: Here, we report the design, implementation, and initial characterization of the first NB-bCT prototype. METHODS: The imaging system's apparatus is composed of two primary assemblies: a dynamic fluence modulator (collimator) and a photon-counting line detector. The design enables the assemblies to operate in lockstep during image acquisition, converting sourced x-rays into a moving narrow beam. During a projection, this narrow beam sweeps the entire fan-angle coverage of the imaging system. Each assembly comprises a metal housing, a sensory system, and a robotic system; a controller unit handles their relative movements. To study the impact of fluence modulation on the signal received at the detector, three physical breast phantoms, representative of small, average, and large breasts, were developed and imaged, and the acquired projections were analyzed. The scatter acquisition in each projection as a function of breast phantom size was investigated, and the imaging system's spatial resolution at the center and periphery of the field of view was measured. RESULTS: Minimal acquisition of scattered rays occurs during image acquisition with NB-bCT: the scatter-to-primary ratios for the small, average, and large breast phantoms were 0.05, 0.07, and 0.9, respectively. A system spatial resolution of 5.2 lp/mm at 10% max MTF and 2.9 lp/mm at 50% max MTF at the center of the field of view was achieved, with minimal loss toward the corner (5.0 lp/mm at 10% max MTF and 2.5 lp/mm at 50% max MTF).
CONCLUSION: The development, implementation, and characterization of a physical NB-bCT prototype demonstrate a new method of CT-based image acquisition that yields high spatial resolution while minimizing scatter components in acquired projections. This methodology holds promise for high-resolution CT imaging applications in which reduction of scatter contamination is desirable.


Subjects
Tomography, X-Ray Computed, Tomography, X-Ray Computed/methods, Phantoms, Imaging, Scattering, Radiation
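The reported resolutions (e.g. 5.2 lp/mm at 10% max MTF) are read off a measured modulation transfer function curve. A minimal sketch of that readout by linear interpolation is shown below, assuming a monotonically decreasing sampled MTF; the sample values in the test are invented for illustration, not the paper's measurements.

```python
import numpy as np

def mtf_resolution(freqs_lp_mm, mtf, level):
    """Spatial frequency (lp/mm) at which a monotonically decreasing sampled
    MTF curve first drops to `level` of its maximum, by linear interpolation."""
    mtf = np.asarray(mtf, dtype=float) / np.max(mtf)  # normalize to max = 1
    for i in range(1, len(mtf)):
        if mtf[i] <= level:
            f0, f1 = freqs_lp_mm[i - 1], freqs_lp_mm[i]
            m0, m1 = mtf[i - 1], mtf[i]
            return f0 + (m0 - level) * (f1 - f0) / (m0 - m1)
    return freqs_lp_mm[-1]  # curve never reaches the level within the samples
```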
11.
IEEE Trans Med Robot Bionics; 5(4): 966-977, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38779126

ABSTRACT

As one of the most commonly performed spinal interventions in routine clinical practice, lumbar punctures are usually done with only hand palpation and trial-and-error. Failures can prolong procedure time and introduce complications such as cerebrospinal fluid leaks and headaches. Therefore, an effective needle insertion guidance method is desired. In this work, we present a complete lumbar puncture guidance system with the integration of (1) a wearable mechatronic ultrasound imaging device, (2) volume-reconstruction and bone surface estimation algorithms and (3) two alternative augmented reality user interfaces for needle guidance, including a HoloLens-based and a tablet-based solution. We conducted a quantitative evaluation of the end-to-end navigation accuracy, which shows that our system can achieve an overall needle navigation accuracy of 2.83 mm and 2.76 mm for the Tablet-based and the HoloLens-based solutions, respectively. In addition, we conducted a preliminary user study to qualitatively evaluate the effectiveness and ergonomics of our system on lumbar phantoms. The results show that users were able to successfully reach the target in an average of 1.12 and 1.14 needle insertion attempts for Tablet-based and HoloLens-based systems, respectively, exhibiting the potential to reduce the failure rates of lumbar puncture procedures with the proposed lumbar-puncture guidance.

12.
Int J Comput Assist Radiol Surg; 17(5): 903-910, 2022 May.
Article in English | MEDLINE | ID: mdl-35384551

ABSTRACT

PURPOSE: Using the da Vinci Research Kit (dVRK), we propose and experimentally demonstrate transfer learning (Xfer) of dynamics between different configurations and robots distributed around the world. This can extend recent research using neural networks to estimate the dynamics of the patient side manipulator (PSM) to provide accurate external end-effector force estimation, by adapting it to different robots and instruments, and in different configurations, with additional forces applied on the instruments as they pass through the trocar. METHODS: The goal of the learned models is to predict internal joint torques during robot motion. First, exhaustive training is performed during free-space (FS) motion, using several configurations to include gravity effects. Second, to adapt to different setups, a limited amount of training data is collected and then the neural network is updated through Xfer. RESULTS: Xfer can adapt a FS network trained on one robot, in one configuration, with a particular instrument, to provide comparable joint torque estimation for a different robot, in a different configuration, using a different instrument, and inserted through a trocar. The robustness of this approach is demonstrated with multiple PSMs (sampled from the dVRK community), instruments, configurations and trocar ports. CONCLUSION: Xfer provides significant improvements in prediction errors without the need for complete training from scratch and is robust over a wide range of robots, kinematic configurations, surgical instruments, and patient-specific setups.


Subjects
Robotics, Biomechanical Phenomena, Humans, Neural Networks, Computer, Surgical Instruments, Torque
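The Xfer idea — train exhaustively on free-space data, then adapt to a new robot or setup from a small amount of data by updating from the pretrained weights rather than from scratch — can be miniaturized to a linear torque model. The sketch below is a deliberately simplified toy (the paper uses neural networks), but it shows the pretrain-then-fine-tune structure:

```python
import numpy as np

def pretrain(X, y):
    """Fit a linear joint-torque model on abundant free-space data
    (ordinary least squares)."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def fine_tune(w, X_new, y_new, lr=0.1, steps=200):
    """Adapt the pretrained weights to a new robot/configuration/instrument
    from a small dataset, via gradient descent starting from the
    free-space solution instead of a random initialization."""
    w = w.copy()
    n = len(y_new)
    for _ in range(steps):
        grad = X_new.T @ (X_new @ w - y_new) / n
        w -= lr * grad
    return w
```

Starting from the pretrained weights means the few new samples only need to correct the setup-specific differences (gravity configuration, instrument, trocar forces), mirroring the paper's finding that limited data suffices for adaptation.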
13.
IEEE Trans Vis Comput Graph; 28(7): 2550-2562, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33170780

ABSTRACT

Head-mounted loupes can increase the user's visual acuity to observe the details of an object. Optical see-through head-mounted displays (OST-HMDs), on the other hand, can provide virtual augmentations registered with real objects. In this article, we propose AR-Loupe, which combines the advantages of loupes and OST-HMDs to offer augmented reality in the user's magnified field of vision. Specifically, AR-Loupe integrates a commercial OST-HMD, Magic Leap One, and binocular Galilean magnifying loupes, with customized 3D-printed attachments. We model the combination of the user's eye, the OST-HMD screen, and the optical loupe as a pinhole camera. The calibration of AR-Loupe involves interactive view segmentation and an adapted version of the stereo single point active alignment method (Stereo-SPAAM). We conducted a two-phase multi-user study to evaluate AR-Loupe. Users were able to achieve sub-millimeter accuracy (0.82 mm) on average, significantly smaller than with normal AR guidance (1.49 mm). The mean calibration time was 268.46 s. With the increased size of real objects through optical magnification and the registered augmentation, AR-Loupe can aid users in high-precision tasks with better visual acuity and higher accuracy.


Subjects
Augmented Reality, Smart Glasses, Calibration, Computer Graphics, User-Computer Interface
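SPAAM-style calibration treats the eye-display(-loupe) combination as a pinhole camera and solves for a 3×4 projection matrix from 3D-2D point correspondences collected by alignment. The sketch below is the generic direct linear transform (DLT) that underlies such calibrations, not the paper's adapted Stereo-SPAAM; the test data are invented.

```python
import numpy as np

def dlt_projection(world_pts, image_pts):
    """Estimate a 3x4 projection matrix from >= 6 non-degenerate 3D-2D
    correspondences via the direct linear transform: stack two homogeneous
    equations per point and take the null vector from the SVD."""
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(rows, dtype=float))
    P = Vt[-1].reshape(3, 4)        # right-singular vector of smallest sigma
    return P / np.linalg.norm(P)    # fix the arbitrary scale

def project(P, X):
    """Apply a projection matrix to a 3D point, returning 2D screen coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]
```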
14.
Int J Comput Assist Radiol Surg; 17(5): 911-920, 2022 May.
Article in English | MEDLINE | ID: mdl-35334043

ABSTRACT

PURPOSE: Ultrasound-guided spine interventions often suffer from the insufficient visualization of key anatomical structures due to the complex shapes of the self-shadowing vertebrae. Therefore, we propose an ultrasound imaging paradigm, AutoInFocus (automatic insonification optimization with controlled ultrasound), to improve the key structure visibility. METHODS: A phased-array probe is used in conjunction with a motion platform to image a controlled workspace, and the resulting images from multiple insonification angles are combined to reveal the target anatomy. This idea is first evaluated in simulation and then realized as a robotic platform and a miniaturized patch device. A spine phantom (CIRS) and its CT scan were used in the evaluation experiments to quantitatively and qualitatively analyze the advantages of the proposed method over the traditional approach. RESULTS: We showed in simulation that the proposed system setup increased the visibility of interspinous space boundary, a key feature for lumbar puncture guidance, from 44.13 to 67.73% on average, and the 3D spine surface coverage from 14.31 to 35.87%, compared to traditional imaging setup. We also demonstrated the feasibility of both robotic and patch-based realizations in a spine phantom study. CONCLUSION: This work lays the foundation for a new imaging paradigm that leverages redundant and controlled insonification to allow for imaging optimization of the complex vertebrae anatomy, making it possible for high-quality visualization of key anatomies during ultrasound-guided spine interventions.


Subjects
Spine, Tomography, X-Ray Computed, Humans, Phantoms, Imaging, Spine/diagnostic imaging, Ultrasonography/methods, Ultrasonography, Interventional/methods
15.
Front Oncol; 12: 996537, 2022.
Article in English | MEDLINE | ID: mdl-36237341

ABSTRACT

Purpose: In this study, we aim to further evaluate the accuracy of ultrasound tracking of intra-fraction pancreatic tumor motion during radiotherapy through a phantom-based study. Methods: Twelve patients with pancreatic cancer who were treated with stereotactic body radiation therapy were enrolled in this study. The displacement points of the respiratory cycle were acquired from 4DCT and transferred to a motion platform to mimic realistic breathing movements in our phantom study. An ultrasound abdominal phantom was placed and fixed in the motion platform. The ground truth of the phantom movement was recorded by tracking an optical tracker attached to the phantom, and one tumor inside the phantom served as the tracking target. The monitoring results from the ultrasound system were compared with the phantom motion recorded by the infrared camera, and the differences were analyzed by calculating the root-mean-square error. Results: 82.2% of the ultrasound tracking measurements were within 0.5 mm of the infrared-monitored motion, while 0.7% failed to track accurately (difference > 2.5 mm). By linear regression analysis, these differences do not correlate with respiratory displacement, velocity, or acceleration. Conclusions: The highly accurate monitoring results of this phantom study show that the ultrasound tracking system is a potential method for real-time target monitoring, allowing more accurate delivery of radiation doses.
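The agreement figures reported here (RMSE of the differences and the fraction of samples within a tolerance) are straightforward to compute from two synchronized displacement traces. A minimal sketch, with invented sample values in the test:

```python
import numpy as np

def tracking_agreement(us_pos, ir_pos, tol_mm=0.5):
    """Compare two synchronized 1-D displacement traces (e.g. ultrasound vs.
    infrared ground truth): return (RMSE, fraction of samples within tol_mm)."""
    d = np.abs(np.asarray(us_pos, dtype=float) - np.asarray(ir_pos, dtype=float))
    rmse = float(np.sqrt(np.mean(d**2)))
    frac_within = float(np.mean(d <= tol_mm))
    return rmse, frac_within
```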

16.
Stud Health Technol Inform; 163: 476-8, 2011.
Article in English | MEDLINE | ID: mdl-21335842

ABSTRACT

We developed a fully automatic segmentation method for volumetric CT (computed tomography) datasets to support construction of a statistical atlas for the study of allometric laws of the lung. The proposed segmentation method, AISLE (Automated ITK-Snap based on Level-set), is based on the level-set implementation of an existing semi-automatic segmentation program, ITK-Snap. AISLE can segment the lung field without human interaction and provide intermediate graphical results as desired. Preliminary experimental results show that the proposed method achieves accurate segmentation, in terms of the volumetric overlap metric, compared with ground-truth segmentation performed by a radiologist.


Subjects
Imaging, Three-Dimensional/methods, Lung/diagnostic imaging, Lung/growth & development, Pattern Recognition, Automated/methods, Radiographic Image Interpretation, Computer-Assisted/methods, Subtraction Technique, Tomography, X-Ray Computed/methods, Algorithms, Artificial Intelligence, Humans, Lung/anatomy & histology, Radiographic Image Enhancement/methods
17.
Stud Health Technol Inform; 163: 479-85, 2011.
Article in English | MEDLINE | ID: mdl-21335843

ABSTRACT

Navigation devices have been essential components of image-guided surgery (IGS), including laparoscopic surgery. We propose a wireless hybrid navigation device that integrates miniature inertial sensors and electromagnetic sensing units for tracking instruments both inside and outside the human body. The proposed system is free of the constraints of line-of-sight and entangling sensor wires. The main functional (sensor) part of the hybrid tracker measures only about 15 mm by 15 mm. We identify the sensor models and develop sensor fusion algorithms for the proposed system to obtain optimal estimates of position and orientation (pose). Proof-of-concept experimental results show that the proposed hardware and software system can meet the defined tracking requirements in terms of tracking accuracy, latency, and robustness to environmental interference.


Subjects
Acceleration, Laparoscopes, Magnetics/instrumentation, Telemetry/instrumentation, Transducers, Equipment Design, Equipment Failure Analysis, Systems Integration
18.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 4836-4839, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34892292

ABSTRACT

Functional medical imaging systems can provide insights into brain activity during various tasks, but most current imaging systems are bulky devices that are not compatible with many human movements. Our motivating application is to perform Positron Emission Tomography (PET) imaging of subjects during sitting, upright standing and locomotion studies on a treadmill. The proposed long-term solution is to construct a robotic system that can support an imaging system surrounding the subject's head, and then move the system to accommodate natural motion. This paper presents the first steps toward this approach, which are to analyze human head motion, determine initial design parameters for the robotic system, and verify the concept in simulation.


Subjects
Robotic Surgical Procedures, Robotics, Brain/diagnostic imaging, Humans, Motion, Positron-Emission Tomography
19.
Int J Comput Assist Radiol Surg; 16(5): 779-787, 2021 May.
Article in English | MEDLINE | ID: mdl-33759079

ABSTRACT

PURPOSE: Multi- and cross-modal learning consolidates information from multiple data sources which may offer a holistic representation of complex scenarios. Cross-modal learning is particularly interesting, because synchronized data streams are immediately useful as self-supervisory signals. The prospect of achieving self-supervised continual learning in surgical robotics is exciting as it may enable lifelong learning that adapts to different surgeons and cases, ultimately leading to a more general machine understanding of surgical processes. METHODS: We present a learning paradigm using synchronous video and kinematics from robot-mediated surgery. Our approach relies on an encoder-decoder network that maps optical flow to the corresponding kinematics sequence. Clustering on the latent representations reveals meaningful groupings for surgeon gesture and skill level. We demonstrate the generalizability of the representations on the JIGSAWS dataset by classifying skill and gestures on tasks not used for training. RESULTS: For tasks seen in training, we report a 59 to 70% accuracy in surgical gestures classification. On tasks beyond the training setup, we note a 45 to 65% accuracy. Qualitatively, we find that unseen gestures form clusters in the latent space of novice actions, which may enable the automatic identification of novel interactions in a lifelong learning scenario. CONCLUSION: From predicting the synchronous kinematics sequence, optical flow representations of surgical scenes emerge that separate well even for new tasks that the model had not seen before. While the representations are useful immediately for a variety of tasks, the self-supervised learning paradigm may enable research in lifelong and user-specific learning.


Subjects
Gestures, Robotic Surgical Procedures, Surgeons, Algorithms, Biomechanical Phenomena, Humans, Learning, Machine Learning, Reproducibility of Results, Robotics, Video Recording
20.
Front Robot AI; 8: 747917, 2021.
Article in English | MEDLINE | ID: mdl-34926590

ABSTRACT

Approaches to robotic manufacturing, assembly, and servicing of in-space assets range from autonomous operation to direct teleoperation, with many forms of semi-autonomous teleoperation in between. Because most approaches require one or more human operators at some level, it is important to explore the control and visualization interfaces available to those operators, taking into account the challenges due to significant telemetry time delay. We consider one motivating application of remote teleoperation, which is ground-based control of a robot on-orbit for satellite servicing. This paper presents a model-based architecture that: 1) improves visualization and situation awareness, 2) enables more effective human/robot interaction and control, and 3) detects task failures based on anomalous sensor feedback. We illustrate elements of the architecture by drawing on 10 years of our research in this area. The paper further reports the results of several multi-user experiments to evaluate the model-based architecture, on ground-based test platforms, for satellite servicing tasks subject to round-trip communication latencies of several seconds. The most significant performance gains were obtained by enhancing the operators' situation awareness via improved visualization and by enabling them to precisely specify intended motion. In contrast, changes to the control interface, including model-mediated control or an immersive 3D environment, often reduced the reported task load but did not significantly improve task performance. Considering the challenges of fully autonomous intervention, we expect that some form of teleoperation will continue to be necessary for robotic in-situ servicing, assembly, and manufacturing tasks for the foreseeable future. We propose that effective teleoperation can be enabled by modeling the remote environment, providing operators with a fused view of the real environment and virtual model, and incorporating interfaces and control strategies that enable interactive planning, precise operation, and prompt detection of errors.
