Results 1 - 7 of 7
2.
Int J Comput Assist Radiol Surg ; 18(6): 1061-1068, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37103728

ABSTRACT

PURPOSE: Transoral robotic surgery (TORS) using the da Vinci surgical robot is a new minimally invasive method for treating oropharyngeal tumors, but it is a challenging operation. Augmented reality (AR) based on intra-operative ultrasound (US) has the potential to enhance the visualization of the anatomy and of cancerous tumors, providing additional decision-making tools during surgery. METHODS: We propose a US-guided AR system for TORS, with the transducer placed on the neck for a transcervical view. First, we perform a novel MRI-to-transcervical 3D US registration study, comprising (i) preoperative MRI to preoperative US registration and (ii) preoperative to intraoperative US registration to account for tissue deformation due to retraction. Second, we develop a US-robot calibration method with an optical tracker and demonstrate its use in an AR system that displays anatomy models in the surgeon's console in real time. RESULTS: Our AR system achieves a projection error from the US to the stereo cameras of 27.14 and 26.03 pixels (image size 540×960) in a water bath experiment. The average target registration error (TRE) for MRI to 3D US is 8.90 mm for the 3D US transducer and 5.85 mm for freehand 3D US, and the TRE for preoperative-to-intraoperative US registration is 7.90 mm. CONCLUSION: We demonstrate the feasibility of each component of the first complete MRI-US-robot-patient registration pipeline for a proof-of-concept transcervical US-guided AR system for TORS. Our results show that transcervical 3D US is a promising technique for TORS image guidance.
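The abstract reports accuracy as target registration error (TRE) without spelling out the registration algorithm itself. As a rough illustration of how a landmark-based rigid registration and its TRE are typically computed, here is a minimal numpy sketch using the classic SVD (Procrustes) solution; all point data are made up and this is not the paper's method:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    Classic SVD solution (Arun et al., 1987).
    src, dst: (N, 3) arrays of corresponding landmark points.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                   # guard against reflection
    t = c_dst - R @ c_src
    return R, t

# Hypothetical corresponding landmarks in MRI and US coordinates (mm):
rng = np.random.default_rng(0)
mri_pts = rng.uniform(0.0, 50.0, (6, 3))
us_pts = mri_pts + [10.0, -5.0, 2.0] + rng.normal(0.0, 0.5, (6, 3))

R, t = rigid_register(mri_pts, us_pts)

# TRE: distance between mapped targets and their true US-space positions.
# (Here the landmarks double as targets; a real TRE uses held-out targets.)
tre = np.linalg.norm((mri_pts @ R.T + t) - us_pts, axis=1).mean()
print(f"mean TRE: {tre:.2f} mm")
```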


Subject(s)
Augmented Reality , Robotic Surgical Procedures , Surgery, Computer-Assisted , Humans , Robotic Surgical Procedures/methods , Surgery, Computer-Assisted/methods , Ultrasonography/methods , Ultrasonics , Imaging, Three-Dimensional/methods
3.
Telemed J E Health ; 28(8): 1199-1205, 2022 08.
Article in English | MEDLINE | ID: mdl-34935500

ABSTRACT

Background: Telemedicine use increased during the COVID-19 pandemic due to concerns for patient and provider safety. Given the initial lack of testing resources and the large geographical area served by Augusta University (AU), a telemedicine platform with up-to-date screening guidelines was implemented for COVID-19 testing in March 2020. Our objective was to understand the level of adherence to telemedicine screening guidelines for COVID-19. Methods: The study population included health care providers and patients who participated in an encounter in the AU Health Express Care virtual care program from March 22 to May 21, 2020. All encounters were intended to be for COVID-19 screening, free of charge, and available 24 h per day, 7 days per week. Screening guidelines were developed by AU based on information from the Centers for Disease Control and Prevention and the Georgia Department of Public Health. Results: Among 17,801 total encounters, 13,600 were included in the final analysis. Overall adherence to screening guidelines was 71% in the adult population and 57% in the pediatric population. When providers did not follow the guidelines, 72% determined that the patient should have a positive screen, whereas the guidelines themselves indicated a positive screen in only 52% of encounters. Provider specialty correlated significantly with guideline adherence (p = 0.002); the departments with the highest adherence were psychiatry, neurology, and ophthalmology. No significant correlation was found between guideline adherence and provider degree/position. Conclusions: This study provides proof of concept for a free telehealth screening platform during an ongoing pandemic. The screening program was effective, with participation across many specialties. Our patient population lived in zip codes with lower-than-average income, suggesting that the free telemedicine screening program successfully reached populations facing higher financial barriers to health care. Early training and experiential knowledge of telemedicine were likely key to screening guideline adherence.
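The reported association between provider specialty and guideline adherence (p = 0.002) is the kind of categorical relationship commonly tested with a chi-square test on a contingency table; the abstract does not name the test actually used, and the counts below are invented purely for illustration:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical adherent vs. non-adherent encounter counts by specialty:
counts = np.array([
    [180,  20],   # psychiatry
    [150,  25],   # neurology
    [140,  28],   # ophthalmology
    [900, 420],   # all other departments
])
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi-square = {chi2:.1f}, dof = {dof}, p = {p:.4g}")
```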


Subject(s)
COVID-19 , Telemedicine , Adult , COVID-19/epidemiology , COVID-19 Testing , Child , Health Personnel , Humans , Pandemics/prevention & control
4.
Int J Comput Assist Radiol Surg ; 16(7): 1181-1188, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34076803

ABSTRACT

PURPOSE: Intra-operative augmented reality (AR) during surgery can mitigate incomplete cancer removal by overlaying anatomical boundaries extracted from medical imaging data onto the camera image. In this paper, we present the first completely markerless AR guidance system for robot-assisted laparoscopic radical prostatectomy (RALRP) that transforms medical data from transrectal ultrasound (TRUS) to the endoscope camera image. Moreover, we reduce the total number of transformations by combining the hand-eye and camera calibrations into a single step. METHODS: Our proposed solution requires two transformations: the TRUS-to-robot transformation, [Formula: see text], and the camera projection matrix, [Formula: see text] (i.e., the transformation from the endoscope frame to the camera image frame). [Formula: see text] is estimated by the method proposed in Mohareri et al. (J Urol 193(1):302-312, 2015). [Formula: see text] is estimated by selecting corresponding 3D-2D data points in the endoscope and image coordinate frames, respectively, using a CAD model of the surgical instrument and a preoperative camera intrinsic matrix under a projective camera assumption. The parameters are estimated with the Levenberg-Marquardt algorithm. Overall mean re-projection errors (MRE) are reported on simulated data and on real data from a water bath experiment. We show that [Formula: see text] can be re-estimated if the focus is changed during surgery. RESULTS: On simulated data, we obtained an overall MRE in the range of 11.69-13.32 pixels for the monoscopic and stereo left and right cameras. For the water bath experiment, the overall MRE is in the range of 26.04-30.59 pixels for the monoscopic and stereo cameras. The overall system error from TRUS to the camera world frame is 4.05 mm. Details of the procedure are given in the supplementary material. CONCLUSION: We demonstrate a markerless AR guidance system for RALRP that requires no calibration markers and can therefore re-estimate the camera projection matrix if it changes during surgery, e.g., due to a focus change.
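The abstract gives only the outline of the projection estimation: 3D-2D correspondences fit with the Levenberg-Marquardt algorithm under a projective camera assumption. A generic sketch of that kind of fit, with hypothetical intrinsics and point data and the extrinsics as the unknowns, might look like this:

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

# Assumed (hypothetical) preoperative intrinsic matrix:
K = np.array([[1000.0, 0.0, 480.0],
              [0.0, 1000.0, 270.0],
              [0.0, 0.0, 1.0]])

def project(params, pts3d):
    """Pinhole projection with extrinsics packed as [rotvec (3), t (3)]."""
    R = Rotation.from_rotvec(params[:3]).as_matrix()
    cam = pts3d @ R.T + params[3:6]
    uv = cam @ K.T
    return uv[:, :2] / uv[:, 2:3]

def residuals(params, pts3d, pts2d):
    return (project(params, pts3d) - pts2d).ravel()

# Made-up 3D instrument points (endoscope frame, mm) and their pixel locations:
rng = np.random.default_rng(1)
pts3d = rng.uniform(-25.0, 25.0, (10, 3)) + [0.0, 0.0, 100.0]
true_params = np.array([0.05, -0.02, 0.01, 2.0, -1.0, 5.0])
pts2d = project(true_params, pts3d) + rng.normal(0.0, 0.5, (10, 2))

# method="lm" selects Levenberg-Marquardt, as named in the abstract.
fit = least_squares(residuals, x0=np.zeros(6), args=(pts3d, pts2d), method="lm")
mre = np.linalg.norm(project(fit.x, pts3d) - pts2d, axis=1).mean()
print(f"mean re-projection error: {mre:.2f} px")
```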


Subject(s)
Algorithms , Imaging, Three-Dimensional/methods , Prostate/diagnostic imaging , Prostatectomy/methods , Robotics/instrumentation , Surgery, Computer-Assisted/methods , Ultrasonography/methods , Augmented Reality , Computer Systems , Equipment Design , Equipment Failure , Humans , Male , Prostate/surgery
5.
Int J Comput Assist Radiol Surg ; 15(7): 1225-1233, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32500450

ABSTRACT

PURPOSE: Robot-assisted laparoscopic radical prostatectomy (RALRP) using the da Vinci surgical robot is a common treatment for organ-confined prostate cancer. Augmented reality (AR) can help during RALRP by showing the surgeon the location of anatomical structures and tumors from preoperative imaging. Previously, we proposed hand-eye and camera intrinsic matrix estimation procedures that can be carried out with conventional instruments within the patient during surgery, take < 3 min to perform, and fit seamlessly into the existing surgical workflow. In this paper, we describe and evaluate a complete AR guidance system for RALRP and quantify its accuracy. METHODS: Our AR system requires three transformations: the transrectal ultrasound (TRUS)-to-da Vinci transformation, the camera intrinsic matrix, and the hand-eye transformation. For evaluation, a 3D-printed cross-wire was visualized in TRUS and in the stereo endoscope in a water bath. Cross-wire points manually triangulated from the stereo images served as ground truth for evaluating the overall target registration error (TRE) between these points and the points transformed from TRUS to camera. RESULTS: After transforming the ground-truth points from the TRUS to the camera coordinate frame, the mean TRE (SD) was [Formula: see text] mm. The mean TREs (SD) in the x-, y-, and z-directions are [Formula: see text] mm, [Formula: see text] mm, and [Formula: see text] mm, respectively. CONCLUSIONS: We describe and evaluate a complete AR guidance system for RALRP that can overlay preoperative data onto the endoscope camera image, after a deformable magnetic-resonance-image-to-TRUS registration step. The procedures' seamless fit with the current surgical workflow and the low TRE demonstrate the system's compatibility and readiness for clinical translation. A detailed sensitivity study remains future work.
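As a rough illustration of the evaluation described above, the sketch below triangulates a cross-wire point from a stereo pair (linear DLT) and compares it against a stand-in for the TRUS-to-camera mapping to produce an overall and per-axis TRE. The stereo geometry and error values are invented, not the paper's:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one point from a calibrated stereo pair."""
    A = np.vstack([uv1[0] * P1[2] - P1[0],
                   uv1[1] * P1[2] - P1[1],
                   uv2[0] * P2[2] - P2[0],
                   uv2[1] * P2[2] - P2[1]])
    X = np.linalg.svd(A)[2][-1]          # null vector of A (homogeneous point)
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Hypothetical stereo rig: identical intrinsics, 5 mm horizontal baseline.
K = np.array([[1000.0, 0.0, 480.0], [0.0, 1000.0, 270.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), [[-5.0], [0.0], [0.0]]])

wire = np.array([20.0, 10.0, 120.0])                 # true cross-wire point (mm)
gt = triangulate(P1, P2, project(P1, wire), project(P2, wire))

mapped = wire + np.array([0.8, -0.5, 1.2])           # stand-in for TRUS->camera output
err = mapped - gt
print(f"TRE: {np.linalg.norm(err):.2f} mm, per-axis: {np.abs(err).round(2)}")
```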


Subject(s)
Augmented Reality , Laparoscopy/methods , Prostatectomy/methods , Prostatic Neoplasms/surgery , Robotic Surgical Procedures/methods , Humans , Imaging, Three-Dimensional/methods , Magnetic Resonance Imaging , Male , Surgery, Computer-Assisted/methods , Ultrasonography/methods
6.
Int J Comput Assist Radiol Surg ; 15(8): 1369-1377, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32430693

ABSTRACT

PURPOSE: This paper introduces the concept of using an additional intracorporeal camera for the specific goal of training and skill assessment and explores the benefits of such an approach. This camera provides an additional view of the surgical scene, and we hypothesize that the extra view improves surgical training and skill assessment in robot-assisted surgery. METHODS: We developed a multi-camera, multi-view system and conducted two user studies ([Formula: see text]) to evaluate its effectiveness for training and skill assessment. In the training study, subjects were divided into two groups: a single-view group and a dual-view group. The skill assessment study was a within-subject study in which every subject was shown single- and dual-view recordings of a surgical training task and asked to count the number of errors committed in each video. RESULTS: The results show the effectiveness of an additional intracorporeal camera view for training and skill assessment. The benefits of this view are modest for skill assessment, improving assessment accuracy by approximately 9%. For training, the additional camera view is clearly more effective: the dual-view group was 57% more accurate than the single-view group in a retention test, and 35% more accurate and 25% faster in a transfer test. CONCLUSION: A multi-camera, multi-view system has the potential to significantly improve training and moderately improve skill assessment in robot-assisted surgery. One application of our work is to include an additional camera view in existing virtual reality surgical training simulators to realize its benefits in training. The views from the additional intracorporeal camera can also be used to improve existing surgical skill assessment criteria used in training systems for robot-assisted surgery.


Subject(s)
Clinical Competence , Robotic Surgical Procedures , Humans , Virtual Reality
7.
Healthc Technol Lett ; 6(6): 255-260, 2019 Dec.
Article in English | MEDLINE | ID: mdl-32038867

ABSTRACT

Accurate medical augmented reality (AR) rendering requires two calibrations: a camera intrinsic matrix estimation and a hand-eye transformation. We present a unified, practical, markerless, real-time system to estimate both of these transformations during surgery. For camera calibration, we calibrate pre-operatively at multiple distances from the endoscope to parametrize the camera intrinsic matrix as a function of distance. Intra-operatively, we retrieve the camera parameters by estimating the distance of the surgical site from the endoscope in less than 1 s; unlike prior work, our method does not require the endoscope to be taken out of the patient. For the hand-eye calibration, as opposed to conventional methods that require identification of a marker, we make use of a tool-tip rendered in 3D. As the surgeon moves the instrument and observes the offset between the actual and the rendered tool-tip, they can select points of high visual error and manually bring the instrument tip to match the virtual tool-tip. To evaluate the hand-eye calibration, five subjects carried out the calibration procedure on a da Vinci robot. An average target registration error of approximately 7 mm was achieved with just three data points.
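The abstract's key idea, parametrizing the camera intrinsic matrix as a function of scope-to-site distance, can be sketched as a simple curve fit over a few preoperative calibrations. All numbers below are hypothetical, and the authors' exact parametrization is not given in the abstract:

```python
import numpy as np

# Hypothetical preoperative calibrations at several working distances (mm):
dist = np.array([40.0, 60.0, 80.0, 100.0, 120.0])
fx = np.array([1080.0, 1042.0, 1015.0, 996.0, 983.0])   # focal lengths (px)
fy = np.array([1078.0, 1040.0, 1013.0, 994.0, 981.0])

# Fit a smooth curve so the intrinsics become a function of distance:
fx_of_d = np.poly1d(np.polyfit(dist, fx, 2))
fy_of_d = np.poly1d(np.polyfit(dist, fy, 2))

def intrinsics_at(d, cx=480.0, cy=270.0):
    """Camera matrix K for an estimated working distance d (mm).

    The principal point is held fixed here; the abstract does not say
    which parameters the authors vary with distance.
    """
    return np.array([[fx_of_d(d), 0.0, cx],
                     [0.0, fy_of_d(d), cy],
                     [0.0, 0.0, 1.0]])

# Intra-operatively: estimate the site's distance from the scope, look up K.
print(intrinsics_at(72.5))
```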
