Results 1 - 9 of 9
1.
J Robot Surg ; 14(4): 579-583, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31555957

ABSTRACT

With the advent of precision surgery, there have been attempts to integrate imaging with robotic systems to guide sound oncologic resections while preserving critical structures. In the confined space of transoral robotic surgery (TORS), this offers great potential given the proximity of critical structures. In this cadaveric experiment, we describe the use of a 3D virtual model, displayed in the surgeon's console alongside the surgical field, to facilitate image-guided surgery at the oropharynx, where soft tissue deformation is significant. The 3D model was registered to the maxillary dentition, enabling a real-time image overlay of the internal carotid artery system that proved qualitatively accurate on cadaveric dissection. Overall, this shows that virtual models and image overlays can be useful in image-guided TORS when approaching different sites in head and neck surgery.
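Conceptually, an overlay like the one described reduces to mapping registered model points into the camera frame and projecting them onto the console image. The following is a minimal sketch of that projection step; the registration transform and camera intrinsics are illustrative values, not parameters from the study.

```python
import numpy as np

# Rigid registration (rotation R, translation t) mapping model
# coordinates into the camera frame; identity rotation here is
# purely illustrative.
R = np.eye(3)
t = np.array([0.0, 0.0, 100.0])  # model placed 100 mm in front of the camera

# Simple pinhole intrinsics: focal lengths and principal point, in pixels.
fx, fy, cx, cy = 800.0, 800.0, 320.0, 240.0

def project(points_mm):
    """Project Nx3 model points (mm) into pixel coordinates."""
    cam = points_mm @ R.T + t            # model frame -> camera frame
    u = fx * cam[:, 0] / cam[:, 2] + cx  # perspective divide
    v = fy * cam[:, 1] / cam[:, 2] + cy
    return np.stack([u, v], axis=1)

# A point on the camera's optical axis lands at the principal point.
print(project(np.array([[0.0, 0.0, 0.0]])))  # [[320. 240.]]
```

In a real system, R and t would come from the dentition-based registration and the intrinsics from endoscope calibration; the projected points are then drawn over the live stereo video.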


Subject(s)
Augmented Reality; Oropharynx/surgery; Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Cadaver; Carotid Artery, Internal; Humans; Imaging, Three-Dimensional; Male; Models, Anatomic; Oropharynx/blood supply; Robotic Surgical Procedures/instrumentation; Surgery, Computer-Assisted/instrumentation
2.
J Robot Surg ; 9(4): 311-4, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26530843

ABSTRACT

The inability to integrate surgical navigation systems into current surgical robots is one reason for the slow development of robotic endoscopic skull base surgery. We describe an experiment adapting current technologies for real-time navigation during transoral robotic nasopharyngectomy. A cone-beam CT was acquired with a robotic C-arm after injecting contrast into the common carotid artery. A 3D reconstruction of the skull, with the internal carotid artery (ICA) highlighted in red, was projected on the console. Robotic nasopharyngectomy was then performed. Fluoroscopy was performed with the C-arm, and the fluoroscopic image was overlaid on the reconstructed skull image. The relationship of the robotic instruments to the bony landmarks and the ICA could then be viewed in real time, acting as a surgical navigation system. Navigation during robotic skull base surgery is feasible with available technologies and can increase its safety.


Subject(s)
Carotid Artery, Common/anatomy & histology; Cone-Beam Computed Tomography/methods; Fluoroscopy/methods; Nasopharynx/surgery; Robotic Surgical Procedures/methods; Software; Contrast Media; Feasibility Studies; Humans
3.
J Robot Surg ; 9(3): 223-33, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26531203

ABSTRACT

In transoral robotic surgery, preoperative image data do not reflect the large deformations of the operative workspace caused by perioperative setup. To address this challenge, we explore image guidance with cone-beam computed tomographic angiography to guide the dissection of critical vascular landmarks and the resection of base-of-tongue neoplasms with adequate margins in transoral robotic surgery. We identify critical vascular landmarks from perioperative C-arm imaging to augment the stereoscopic view of a da Vinci Si robot, in addition to incorporating visual feedback from relative tool positions. Experiments resecting base-of-tongue mock tumors were conducted on a series of ex vivo and in vivo animal models, comparing the proposed video-augmentation workflow to standard non-augmented practice and to alternative, fluoroscopy-based image guidance. Accurate identification of registered, augmented critical anatomy during controlled arterial dissection and en bloc mock tumor resection was possible with the augmented reality system. The proposed image-guided robotic system also achieved improved resection ratios of mock tumor margins (1.00) compared with control scenarios (0.0) and alternative methods of image guidance (0.58). The experimental results show the feasibility of the proposed workflow and the advantages of cone-beam CT image guidance through video augmentation of the primary stereo endoscopy, as compared to control and alternative navigation methods.


Subject(s)
Cone-Beam Computed Tomography/methods; Oral Surgical Procedures/methods; Robotic Surgical Procedures/methods; Animals; Feasibility Studies; Phantoms, Imaging; Swine; Tongue/surgery; User-Computer Interface
4.
Int J Med Robot ; 11(1): 67-79, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24623371

ABSTRACT

BACKGROUND: Intraoperative application of tomographic imaging techniques provides a means of visual servoing for objects beneath the surface of organs. METHODS: This survey focuses on therapeutic and diagnostic medical applications in which tomographic imaging is used for visual servoing. To this end, a comprehensive search of electronic databases was completed for the period 2000-2013. RESULTS: Existing techniques and products are categorized and studied based on imaging modality and medical application. This part complements Part I of the survey, which covers visual servoing techniques using endoscopic imaging and direct vision. CONCLUSION: The main challenges in visual servoing based on tomographic images are identified. 'Supervised automation of medical robotics' is found to be a major trend in this field, and ultrasound is the most commonly used tomographic modality for visual servoing.


Subject(s)
Robotic Surgical Procedures/methods; Surgery, Computer-Assisted/methods; Tomography/methods; Algorithms; Fluoroscopy/methods; Humans; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Surveys and Questionnaires; Tomography, X-Ray Computed/methods; Ultrasonography/methods
5.
Int J Comput Assist Radiol Surg ; 10(8): 1239-52, 2015 Aug.
Article in English | MEDLINE | ID: mdl-25503592

ABSTRACT

PURPOSE: C-arm radiographs are commonly used for intraoperative image guidance in surgical interventions. Fluoroscopy is a cost-effective real-time modality, although image quality can vary greatly depending on the target anatomy. Cone-beam computed tomography (CBCT) scans are sometimes available, so 2D-3D registration is needed for intra-procedural guidance. C-arm radiographs were registered to CBCT scans and used for 3D localization of peritumoral fiducials during a minimally invasive thoracic intervention with a da Vinci Si robot. METHODS: Intensity-based 2D-3D registration of intraoperative radiographs to CBCT was performed. The feasible range of X-ray projections achievable by a C-arm positioned around a da Vinci Si surgical robot, configured for robotic wedge resection, was determined using phantom models. Experiments were conducted on synthetic phantoms and animals imaged with an OEC 9600 and a Siemens Artis zeego, representing the spectrum of C-arm systems currently available for clinical use. RESULTS: The image guidance workflow was feasible using either an optically tracked OEC 9600 or a Siemens Artis zeego C-arm, with an angular difference of Δθ ≈ 30°. The two C-arm systems provided mean TRE ≤ 2.5 mm and ≤ 2.0 mm, respectively (i.e., comparable to standard clinical intraoperative navigation systems). CONCLUSIONS: C-arm 3D localization from dual 2D-3D registered radiographs was feasible and applicable for intraoperative image guidance during da Vinci robotic thoracic interventions using the proposed workflow. Tissue deformation studies and in vivo experiments are required before clinical evaluation of this system.
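The accuracy metric reported above, target registration error (TRE), is conventionally the Euclidean distance between fiducial positions after registration and their ground-truth positions, averaged over targets. A minimal sketch with made-up fiducial coordinates (not data from the study):

```python
import numpy as np

def mean_tre(registered, truth):
    """Mean target registration error: average Euclidean distance (mm)
    between Nx3 registered fiducial positions and ground truth."""
    return float(np.mean(np.linalg.norm(registered - truth, axis=1)))

# Illustrative example: two fiducials displaced by 1 mm and 2 mm.
truth = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 0.0]])
registered = truth + np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
print(mean_tre(registered, truth))  # 1.5
```

A reported "mean TRE ≤ 2.5 mm" means this quantity, computed over the study's fiducial targets, stayed within 2.5 mm.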


Subject(s)
Cone-Beam Computed Tomography/methods; Imaging, Three-Dimensional/methods; Robotics; Surgery, Computer-Assisted/methods; Animals; Fluoroscopy/methods; Phantoms, Imaging; Radiographic Image Enhancement/methods
6.
Surg Endosc ; 28(7): 2227-35, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24488352

ABSTRACT

BACKGROUND: Conventional laparoscopes provide a flat representation of the three-dimensional (3D) operating field and are incapable of visualizing internal structures located beneath visible organ surfaces. Computed tomography (CT) and magnetic resonance (MR) images are difficult to fuse in real time with laparoscopic views due to the deformable nature of soft-tissue organs. Utilizing emerging camera technology, we have developed a real-time stereoscopic augmented-reality (AR) system for laparoscopic surgery by merging live laparoscopic ultrasound (LUS) with stereoscopic video. The system creates two new visual cues: (1) perception of true depth with improved understanding of 3D spatial relationships among anatomical structures, and (2) visualization of critical internal structures along with a more comprehensive visualization of the operating field. METHODS: The stereoscopic AR system has been designed for near-term clinical translation with seamless integration into the existing surgical workflow. It is composed of a stereoscopic vision system, a LUS system, and an optical tracker. Specialized software processes streams of imaging data from the tracked devices and registers them in real time. The resulting two ultrasound-augmented video streams (one for the left and one for the right eye) give a live stereoscopic AR view of the operating field. The team conducted a series of stereoscopic AR interrogations of the liver, gallbladder, biliary tree, and kidneys in two swine. RESULTS: The preclinical studies demonstrated the feasibility of the stereoscopic AR system during in vivo procedures. Major internal structures could be easily identified. The system exhibited no observable latency, with acceptable image-to-video registration accuracy. CONCLUSIONS: We presented the first in vivo use of a complete system with stereoscopic AR visualization capability. This new capability introduces new visual cues and enhances visualization of the surgical anatomy. The system shows promise to improve the precision and expand the capacity of minimally invasive laparoscopic surgeries.
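Registering a tracked ultrasound image into a tracked camera's frame, as described above, amounts to composing rigid transforms reported by the optical tracker with a probe calibration. The sketch below shows that chaining with homogeneous 4x4 matrices; all transform values are illustrative, not calibration results from the system described.

```python
import numpy as np

def rigid(rz_deg, t):
    """Build a 4x4 homogeneous transform: rotation about z, then translation."""
    a = np.radians(rz_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]]
    T[:3, 3] = t
    return T

# Illustrative transforms (hypothetical values):
T_cam_from_tracker = rigid(0, [0, 0, 50])     # tracker frame -> camera frame
T_tracker_from_probe = rigid(90, [10, 0, 0])  # probe marker -> tracker frame
T_probe_from_image = rigid(0, [0, 0, 0])      # ultrasound image -> probe marker

# Composed chain maps an ultrasound pixel (scaled to mm, homogeneous
# coordinates) directly into the camera frame for overlay rendering.
T_cam_from_image = T_cam_from_tracker @ T_tracker_from_probe @ T_probe_from_image
p_us = np.array([5.0, 0.0, 0.0, 1.0])
p_cam = T_cam_from_image @ p_us
print(np.round(p_cam, 6))  # [10.  5. 50.  1.]
```

In the actual system, one such chain per eye (with each eye's calibrated intrinsics) yields the left and right augmented video streams.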


Subject(s)
Depth Perception; Imaging, Three-Dimensional; Laparoscopy/methods; Lighting; Surgery, Computer-Assisted/methods; Animals; Laparoscopes; Models, Animal; Phantoms, Imaging; Swine; Ultrasonography, Interventional; Video Recording
7.
JAMA Otolaryngol Head Neck Surg ; 140(3): 208-14, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24457635

ABSTRACT

IMPORTANCE: To our knowledge, this is the first reported cadaveric feasibility study of a master-slave-assisted cochlear implant procedure in the otolaryngology-head and neck surgery field using the da Vinci Si system (da Vinci Surgical System; Intuitive Surgical, Inc). We describe the surgical workflow adaptations using a minimally invasive system and image guidance integrating intraoperative cone beam computed tomography through augmented reality. OBJECTIVE: To test the feasibility of da Vinci Si-assisted cochlear implant surgery with augmented reality, with visualization of critical structures and facilitation of precise cochleostomy for electrode insertion. DESIGN AND SETTING: Cadaveric case study of bilateral cochlear implant approaches conducted at Intuitive Surgical Inc, Sunnyvale, California. INTERVENTIONS: Bilateral cadaveric mastoidectomies, posterior tympanostomies, and cochleostomies were performed using the da Vinci Si system on a single adult human donor cadaveric specimen. MAIN OUTCOMES AND MEASURES: Radiographic confirmation of successful cochleostomies, placement of a phantom cochlear implant wire, and visual confirmation of critical anatomic structures (facial nerve, cochlea, and round window) in augmented stereoendoscopy. RESULTS: With a mean surgical time of 160 minutes per side, complete bilateral cochlear implant procedures were successfully performed with no violation of critical structures, notably the facial nerve, chorda tympani, sigmoid sinus, dura, or ossicles. Augmented reality image overlay of the facial nerve, round window position, and basal turn of the cochlea was precise. Postoperative cone beam computed tomography scans confirmed successful placement of the phantom implant electrode array into the basal turn of the cochlea. CONCLUSIONS AND RELEVANCE: To our knowledge, this is the first study in the otolaryngology-head and neck surgery literature examining the use of master-slave-assisted cochleostomy with augmented reality for cochlear implants using the da Vinci Si system. The described system for cochleostomy has the potential to improve the surgeon's confidence, as well as surgical safety, efficiency, and precision, by filtering tremor. The integration of augmented reality may be valuable for surgeons dealing with complex cases of congenital anatomic abnormality, for revision cochlear implantation with distorted anatomy and poorly pneumatized mastoids, and as a method of interactive teaching. Further research into the cost-benefit ratio of da Vinci Si-assisted otologic surgery, as well as refinement of the proposed workflow, is required before considering clinical studies.


Subject(s)
Cochlear Implants; Hearing Loss/surgery; Otologic Surgical Procedures/methods; Robotics/instrumentation; Surgery, Computer-Assisted/methods; Cadaver; Cone-Beam Computed Tomography; Feasibility Studies; Hearing Loss/diagnostic imaging; Humans; Temporal Bone/diagnostic imaging; Temporal Bone/surgery
8.
Int J Med Robot ; 10(3): 263-74, 2014 Sep.
Article in English | MEDLINE | ID: mdl-24106103

ABSTRACT

BACKGROUND: Intra-operative imaging is widely used to provide visual feedback to a clinician performing a procedure. In visual servoing, surgical instruments and parts of tissue or the body are tracked by processing the acquired images. This information is then used within a control loop to manoeuvre a robotic manipulator during a procedure. METHODS: A comprehensive search of electronic databases was completed for the period 2000-2013 to survey visual servoing applications in medical robotics. The focus is on medical applications where image-based tracking is used for closed-loop control of a robotic system. RESULTS: A detailed classification and comparative study of various contributions in visual servoing using endoscopic or direct visual images is presented and summarized in tables and diagrams. CONCLUSION: The main challenges in using visual servoing for medical robotic applications are identified and potential future directions are suggested. 'Supervised automation of medical robotics' is found to be a major trend in this field.
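The closed-loop idea surveyed here, in its simplest image-based form, is a proportional control law: measure the tracked feature's image-plane error relative to a target, and command a motion that shrinks it each cycle. A minimal sketch (illustrative gain and coordinates, not from any surveyed system):

```python
import numpy as np

def servo_step(feature_px, target_px, gain=0.5):
    """One iteration of a proportional visual-servoing law: return a
    commanded image-plane displacement that reduces the feature error."""
    error = target_px - feature_px
    return gain * error  # pixels of correction this cycle

# Simulate the loop: the tracked feature converges on the target.
feature = np.array([100.0, 50.0])
target = np.array([0.0, 0.0])
for _ in range(20):
    feature = feature + servo_step(feature, target)

# With gain 0.5, the error decays geometrically by (1 - 0.5) per step.
print(np.linalg.norm(feature) < 1e-3)  # True
```

Real systems replace the direct pixel update with an image Jacobian that maps image-plane error to manipulator joint or end-effector velocities, but the feedback structure is the same.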


Subject(s)
Endoscopes; Endoscopy/instrumentation; Endoscopy/methods; Robotics/methods; Automation; Cardiac Surgical Procedures; Computers; Diagnostic Imaging; Humans; Laparoscopy/methods; Minimally Invasive Surgical Procedures/methods; Natural Orifice Endoscopic Surgery/methods; Orthopedics; Software; Surgical Instruments
9.
Int J Med Robot ; 9(4): 379-95, 2013 Dec.
Article in English | MEDLINE | ID: mdl-22736549

ABSTRACT

BACKGROUND: The development of new imaging technologies and advances in computing power have enabled physicians to perform medical interventions on the basis of high-quality 3D and/or 4D visualization of the patient's organs. Preoperative imaging has been used to plan surgery, whereas intraoperative imaging has been widely employed to provide visual feedback to a clinician performing the procedure. In the past decade, such systems have demonstrated great potential in image-guided minimally invasive procedures on organs such as the brain, heart, liver and kidneys. This article focuses on image-guided interventions in renal and hepatic surgery. METHODS: A comprehensive search of existing electronic databases was completed for the period 2000-2011. Each contribution was assessed by the authors for relevance and inclusion. The contributions were categorized by type of operation/intervention, imaging modality, and specific techniques such as image fusion, augmented reality and organ motion tracking. RESULTS: A detailed classification and comparative study of various contributions in image-guided renal and hepatic interventions is provided, and potential future directions are sketched. CONCLUSION: Based on a detailed review of the literature, potential future trends in the development of image-guided abdominal interventions are identified: growing use of image fusion and augmented reality, computer-assisted and/or robot-assisted interventions, development of more accurate registration and navigation techniques, and growing application of intraoperative magnetic resonance imaging.


Subject(s)
Hepatectomy/methods; Imaging, Three-Dimensional/methods; Nephrectomy/methods; Robotics/methods; Surgery, Computer-Assisted/methods; Humans