Results 1 - 20 of 20
1.
Article in English | MEDLINE | ID: mdl-37823976

ABSTRACT

PURPOSE: Surgical procedures take place in highly complex operating rooms (OR), involving medical staff, patients, devices and their interactions. Until now, only medical professionals have been capable of comprehending these intricate links and interactions. This work advances the field toward automated, comprehensive and semantic understanding and modeling of the OR domain by introducing semantic scene graphs (SSG) as a novel approach to describing and summarizing surgical environments in a structured and semantically rich manner. METHODS: We create the first open-source 4D SSG dataset. 4D-OR includes simulated total knee replacement surgeries captured by RGB-D sensors in a realistic OR simulation center. It includes annotations for SSGs, human and object pose, clinical roles and surgical phase labels. We introduce a neural network-based SSG generation pipeline for semantic reasoning in the OR and apply our approach to two downstream tasks: clinical role prediction and surgical phase recognition. RESULTS: We show that our pipeline can successfully reason within the OR domain. The capabilities of our scene graphs are further highlighted by their successful application to clinical role prediction and surgical phase recognition tasks. CONCLUSION: This work paves the way for multimodal holistic operating room modeling, with the potential to significantly enhance the state of the art in surgical data analysis, such as enabling more efficient and precise decision-making during surgical procedures, and ultimately improving patient safety and surgical outcomes. We release our code and dataset at github.com/egeozsoy/4D-OR.
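The scene-graph representation described above can be thought of as a set of (subject, relation, object) triplets over the entities in the room. A minimal sketch in Python; the entity and relation names below are invented for illustration, not taken from the 4D-OR annotation schema:

```python
# Minimal sketch of a semantic scene graph (SSG) for an operating room.
# Entity and relation names are illustrative placeholders, not the
# actual 4D-OR label set.

class SceneGraph:
    def __init__(self):
        self.nodes = set()   # entities present in the scene
        self.edges = set()   # (subject, relation, object) triplets

    def add_relation(self, subj, rel, obj):
        self.nodes.update([subj, obj])
        self.edges.add((subj, rel, obj))

    def relations_of(self, subj):
        """All triplets in which `subj` is the subject."""
        return [e for e in self.edges if e[0] == subj]

g = SceneGraph()
g.add_relation("head_surgeon", "cutting", "patient")
g.add_relation("assistant", "holding", "instrument")
g.add_relation("patient", "lying_on", "operating_table")

print(len(g.nodes))  # 5 distinct entities
print(g.relations_of("head_surgeon"))
```

Downstream tasks such as clinical role prediction can then be posed as reasoning over these triplets, e.g. inferring a node's role from the relations it participates in.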

2.
Minim Invasive Ther Allied Technol ; 32(4): 190-198, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37293947

ABSTRACT

Introduction: This study compares five augmented reality (AR) vasculature visualization techniques in a mixed-reality laparoscopy simulator with 50 medical professionals and analyzes their impact on the surgeon. Material and methods: The different visualization techniques' abilities to convey depth were measured using the participants' accuracy in an objective depth sorting task. Demographic data and subjective measures, such as the preference for each AR visualization technique and potential application areas, were collected with questionnaires. Results: Although differences were measured across the visualization techniques in the objective task, they were not statistically significant. In the subjective measures, however, 55% of the participants rated visualization technique II, 'Opaque with single-color Fresnel highlights', as their favorite. Participants felt that AR could be useful for various surgeries, especially complex surgeries (100%). Almost all participants agreed that AR could potentially improve surgical parameters, such as patient safety (88%), complication rate (84%), and identifying risk structures (96%). Conclusions: More studies are needed on the effect of different visualizations on task performance, as well as more sophisticated and effective visualization techniques for the operating room. With the findings of this study, we encourage the development of new study setups to advance surgical AR.


Subject(s)
Augmented Reality , Laparoscopy , Surgeons , Surgery, Computer-Assisted , Humans , Laparoscopy/methods , Surgery, Computer-Assisted/methods
3.
BMJ Surg Interv Health Technol ; 5(1): e000135, 2023.
Article in English | MEDLINE | ID: mdl-36687799

ABSTRACT

Objectives: Workplace-based assessment (WBA) is a key requirement of competency-based medical education in postgraduate surgical education. Although simulated workplace-based assessment (SWBA) has been proposed to complement WBA, it is insufficiently adopted in surgical education. In particular, approaches to criterion-referenced and automated assessment of intraoperative surgical competency in contextualized SWBA settings are missing. Main objectives were (1) application of the universal framework of intraoperative performance and exemplary adaptation to spine surgery (vertebroplasty); (2) development of computer-assisted assessment based on criterion-referenced metrics; and (3) implementation in contextualized, team-based operating room (OR) simulation, and evaluation of validity. Design: Multistage development and assessment study: (1) expert-based definition of performance indicators based on the framework's performance domains; (2) development of respective assessment metrics based on preoperative planning and intraoperative performance data; (3) implementation in mixed-reality OR simulation and assessment of surgeons operating in a confederate team. Statistical analyses included internal consistency and interdomain associations, correlations with experience, and technical and non-technical performances. Setting: Surgical simulation center. Full surgical team set-up within mixed-reality OR simulation. Participants: Eleven surgeons were recruited from two teaching hospitals. Eligibility criteria included surgical specialists in orthopedic, trauma, or neurosurgery with prior VP or kyphoplasty experience. Main outcome measures: Computer-assisted assessment of surgeons' intraoperative performance. Results: Performance scores were associated with surgeons' experience, observational assessment (Objective Structured Assessment of Technical Skill) scores and overall pass/fail ratings. Results provide strong evidence for the validity of our computer-assisted SWBA approach. Diverse indicators of surgeons' technical and non-technical performances could be quantified and captured. Conclusions: This study is the first to investigate computer-assisted assessment based on a competency framework in authentic, contextualized team-based OR simulation. Our approach discriminates surgical competency across the domains of intraoperative performance. It advances previous automated assessment based on the use of current surgical simulators in decontextualized settings. Our findings inform future use of computer-assisted multidomain competency assessments of surgeons using SWBA approaches.

4.
Int J Comput Assist Radiol Surg ; 18(8): 1345-1354, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36547767

ABSTRACT

PURPOSE: Only a few studies have evaluated Augmented Reality (AR) in in vivo simulations compared to traditional laparoscopy; further research is especially needed regarding the most effective AR visualization technique. This pilot study aims to determine, under controlled conditions on a 3D-printed phantom, whether an AR laparoscope improves surgical outcomes over conventional laparoscopy without augmentation. METHODS: We selected six surgical residents at a similar level of training and had them perform a laparoscopic task. The participants repeated the experiment three times, using different 3D phantoms and visualizations: Floating AR, Occlusion AR, and without any AR visualization (Control). Surgical performance was determined using objective measurements. Subjective measures, such as task load and potential application areas, were collected with questionnaires. RESULTS: Differences in operative time, total touching time, and SurgTLX scores showed no statistical significance ([Formula: see text]). However, when assessing the invasiveness of the simulated intervention, the comparison revealed a statistically significant difference ([Formula: see text]). Participants felt AR could be useful for various surgeries, especially for liver, sigmoid, and pancreatic resections (100%). Almost all participants agreed that AR could potentially lead to improved surgical parameters, such as operative time (83%), complication rate (83%), and identifying risk structures (83%). CONCLUSION: According to our results, AR may have great potential in visceral surgery and, based on the objective measures of the study, may improve surgeons' performance in terms of an atraumatic approach. In this pilot study, participants in the control condition consistently took more time to complete the task, had more contact with the vascular tree, were significantly more invasive, and scored higher on the SurgTLX survey than they did with AR.


Subject(s)
Augmented Reality , Laparoscopy , Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods , Pilot Projects , Laparoscopy/methods , Phantoms, Imaging
5.
IEEE Trans Vis Comput Graph ; 28(11): 3694-3704, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36048998

ABSTRACT

Our world is full of cameras, whether they are installed in the environment or integrated into mobile devices such as mobile phones or head-mounted displays. Displaying external camera views in our egocentric view with a picture-in-picture approach allows us to understand their view; however, it does not allow us to correlate their viewpoint with our perceived reality. We introduce Projective Bisector Mirrors for visualizing a camera view comprehensibly in the egocentric view of an observer with the metaphor of a virtual mirror. Our concept projects the image of a capturing camera onto the bisecting plane between the capture and the observer camera. We present extensive mathematical descriptions of this novel paradigm for multi-view visualization, discuss the effects of tracking errors, and provide concrete implementations for multiple exemplary use cases.
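The bisecting plane at the heart of this metaphor is the locus of points equidistant from the capturing and the observing camera centers: its normal is the vector between the two centers, anchored at their midpoint. A minimal geometric sketch with made-up coordinates (the paper's actual projection model is more involved):

```python
# Sketch of the bisecting plane between a capturing camera and an
# observer camera: all points equidistant from both centers.
# Plane normal n = c_obs - c_cap, anchored at the midpoint m.

def bisector_plane(c_cap, c_obs):
    n = [o - c for c, o in zip(c_cap, c_obs)]        # plane normal
    m = [(c + o) / 2 for c, o in zip(c_cap, c_obs)]  # point on the plane
    return n, m

def signed_distance(p, plane):
    """Signed distance of point p from the plane."""
    n, m = plane
    norm = sum(x * x for x in n) ** 0.5
    return sum(ni * (pi - mi) for ni, pi, mi in zip(n, p, m)) / norm

plane = bisector_plane([0.0, 0.0, 0.0], [2.0, 0.0, 0.0])
print(signed_distance([1.0, 5.0, -3.0], plane))  # 0.0: point lies on the plane
```

Projecting the capture camera's image onto this plane gives the virtual-mirror effect: both cameras see the plane from symmetric positions.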

6.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 562-565, 2022 07.
Article in English | MEDLINE | ID: mdl-36085600

ABSTRACT

Image registration is a commonly required task in computer-assisted surgical procedures. Existing registration methods in laparoscopic navigation systems suffer from several constraints, such as a lack of deformation compensation. The proposed algorithm aims to provide the surgeons with updated navigational information about the deep-seated anatomy, which considers the continuous deformations in the operating environment. We extended an initial rigid registration to a shape-preserving deformable registration pathway by incorporating user interaction and an iterative mesh editing scheme which preserves local details. The proposed deformable registration workflow was tested with phantom and animal trial datasets. A qualitative evaluation based on expert feedback demonstrated a satisfactory outcome, and commensurate execution efficiency was achieved. The improvements offered by the method, coupled with its relatively easy implementation, make it an attractive method for adoption in future pre-clinical and clinical applications of augmented reality assisted surgeries.


Subject(s)
Augmented Reality , Laparoscopy , Surgeons , Surgery, Computer-Assisted , Algorithms , Animals , Humans
7.
IEEE Trans Vis Comput Graph ; 28(5): 2190-2200, 2022 05.
Article in English | MEDLINE | ID: mdl-35148264

ABSTRACT

When two or more users attempt to collaborate in the same space with Augmented Reality, they often encounter conflicting intentions regarding the occupation of the same working area and self-positioning around it without mutual interference. Augmented Reality is a powerful tool for communicating ideas and intentions during a co-assisting task that requires multi-disciplinary expertise. To relax the constraint of physical co-location, we propose the concept of Duplicated Reality, where a digital copy of a 3D region of interest of the users' environment is reconstructed in real-time and visualized in-situ through an Augmented Reality user interface. This enables users to remotely annotate the region of interest while being co-located with others in Augmented Reality. We perform a user study to gain an in-depth understanding of the proposed method compared to an in-situ augmentation, including collaboration, effort, awareness, usability, and the quality of the task. The results indicate almost identical objective and subjective outcomes, except for a decrease in the consulting user's awareness of co-located users when using our method. The added benefit of duplicating the working area into a designated consulting area opens up new interaction paradigms to be further investigated for future co-located Augmented Reality collaboration systems.


Subject(s)
Augmented Reality , Computer Graphics , User-Computer Interface
8.
IEEE Trans Vis Comput Graph ; 28(12): 4156-4171, 2022 12.
Article in English | MEDLINE | ID: mdl-33979287

ABSTRACT

Estimating the depth of virtual content has proven to be a challenging task in Augmented Reality (AR) applications. Existing studies have shown that the visual system makes use of multiple depth cues to infer the distance of objects, occlusion being one of the most important ones. The ability to generate appropriate occlusions becomes particularly important for AR applications that require the visualization of augmented objects placed below a real surface. Examples of these applications are medical scenarios in which the visualization of anatomical information needs to be observed within the patient's body. In this regard, existing works have proposed several focus and context (F+C) approaches to aid users in visualizing this content using Video See-Through (VST) Head-Mounted Displays (HMDs). However, the implementation of these approaches in Optical See-Through (OST) HMDs remains an open question due to the additive characteristics of the display technology. In this article, we, for the first time, design and conduct a user study that compares depth estimation between VST and OST HMDs using existing in-situ visualization methods. Our results show that these visualizations cannot be directly transferred to OST displays without increasing error in depth perception tasks. To tackle this gap, we perform a structured decomposition of the visual properties of AR F+C methods to find best-performing combinations. We propose the use of chromatic shadows and hatching approaches transferred from computer graphics. In a second study, we perform a factorized analysis of these combinations, showing that varying the shading type and using colored shadows can lead to better depth estimation when using OST HMDs.


Subject(s)
Augmented Reality , Computer Graphics , Humans , User-Computer Interface , Equipment Design , Depth Perception
9.
J Imaging ; 9(1)2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36662102

ABSTRACT

Three decades after the first set of work on Medical Augmented Reality (MAR) was presented to the international community, and ten years after the deployment of the first MAR solutions into operating rooms, its exact definition, basic components, systematic design, and validation still lack a detailed discussion. This paper defines the basic components of any Augmented Reality (AR) solution and extends them to exemplary Medical Augmented Reality Systems (MARS). We use some of the original MARS applications developed at the Chair for Computer Aided Medical Procedures and deployed into medical schools for teaching anatomy and into operating rooms for telemedicine and surgical guidance throughout the last decades to identify the corresponding basic components. In this regard, the paper does not discuss all past or existing solutions but aims only at defining the principal components, discussing the particular domain modeling for MAR and its design-development-validation process, and providing exemplary cases through the past in-house developments of such solutions.

10.
IEEE Trans Vis Comput Graph ; 27(11): 4129-4139, 2021 11.
Article in English | MEDLINE | ID: mdl-34449373

ABSTRACT

A 3D Telepresence system allows users to interact with each other in a virtual, mixed, or augmented reality (VR, MR, AR) environment, creating a shared space for collaboration and communication. There are two main methods for representing users within these 3D environments. Users can be represented either as point cloud reconstruction-based avatars that resemble a physical user or as virtual character-based avatars controlled by tracking the users' body motion. This work compares both techniques to identify the differences between user representations and their fit in the reconstructed environments regarding the perceived presence, uncanny valley factors, and behavior impression. Our study uses an asymmetric VR/AR teleconsultation system that allows a remote user to join a local scene using VR. The local user observes the remote user with an AR head-mounted display, leading to facial occlusions in the 3D reconstruction. Participants perform a warm-up interaction task followed by a goal-directed collaborative puzzle task, pursuing a common goal. The local user was represented either as a point cloud reconstruction or as a virtual character-based avatar, in which case the point cloud reconstruction of the local user was masked. Our results show that the point cloud reconstruction-based avatar was superior to the virtual character avatar regarding perceived co-presence, social presence, behavioral impression, and humanness. Further, we found that the task type partly affected the perception. The point cloud reconstruction-based approach led to higher usability ratings, while objective performance measures showed no significant difference. We conclude that despite partly missing facial information, the point cloud-based reconstruction resulted in better conveyance of the user behavior and a more coherent fit into the simulation context.


Subject(s)
Remote Consultation , Computer Graphics , Humans , Motivation , Perception , User-Computer Interface
11.
J Imaging ; 7(8)2021 Aug 21.
Article in English | MEDLINE | ID: mdl-34460795

ABSTRACT

Screw placement in the correct angular trajectory is one of the most intricate tasks during spinal fusion surgery. Due to the crucial role of pedicle screw placement for the outcome of the operation, spinal navigation has been introduced into the clinical routine. Despite its positive effects on the precision and safety of the surgical procedure, local separation of the navigation information and the surgical site, combined with intricate visualizations, limits the benefits of the navigation systems. Instead of a tech-driven design, a focus on usability is required in new research approaches to enable advanced and effective visualizations. This work presents a new tool-mounted interface (TMI) for pedicle screw placement. By fixing a TMI onto the surgical instrument, the physical de-coupling of the anatomical target and navigation information is resolved. A total of 18 surgeons participated in a usability study comparing the TMI to the state-of-the-art visualization on an external screen. With the use of the TMI, significant improvements in system usability (Kruskal-Wallis test p < 0.05) were achieved. A significant reduction in mental demand and overall cognitive load, measured using a NASA-TLX (p < 0.05), was observed. Moreover, a general improvement in performance was shown by means of the surgical task time (one-way ANOVA p < 0.001).
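The NASA-TLX score mentioned above is, in its unweighted ("raw") variant, simply the mean of six workload subscale ratings. A small sketch with invented ratings; the abstract does not state which TLX variant the study used:

```python
# Sketch of an unweighted ("raw") NASA-TLX score: the mean of the six
# workload subscales, each rated 0-100. The ratings below are made up.

TLX_SCALES = ("mental", "physical", "temporal",
              "performance", "effort", "frustration")

def raw_tlx(ratings):
    missing = [s for s in TLX_SCALES if s not in ratings]
    if missing:
        raise ValueError(f"missing subscales: {missing}")
    return sum(ratings[s] for s in TLX_SCALES) / len(TLX_SCALES)

participant = {"mental": 70, "physical": 20, "temporal": 55,
               "performance": 30, "effort": 60, "frustration": 35}
print(raw_tlx(participant))  # 45.0
```

Per-condition scores like these (TMI vs. external screen) are then compared with a significance test such as the Kruskal-Wallis test reported in the abstract.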

12.
Anat Sci Educ ; 14(5): 590-604, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32892494

ABSTRACT

In the context of gross anatomy education, novel augmented reality (AR) systems have the potential to serve as complementary pedagogical tools and facilitate interactive, student-centered learning. However, there is a lack of AR systems that enable multiple students to engage in collaborative, team-based learning environments. This article presents the results of a pilot study in which first-year medical students (n = 16) had the opportunity to work with such a collaborative AR system during a full-day gross anatomy seminar. Student performance in an anatomy knowledge test, conducted after an extensive group learning session, increased significantly compared to a pre-test in both the experimental group working with the collaborative AR system (P < 0.01) and in the control group working with traditional anatomy atlases and three-dimensional (3D) models (P < 0.01). However, no significant differences were found between the test results of both groups. While the experienced mental effort during the collaborative learning session was considered rather high (5.13 ± 2.45 on a seven-point Likert scale), both qualitative and quantitative feedback during a survey as well as the results of a System Usability Scale (SUS) questionnaire (80.00 ± 13.90) outlined the potential of the collaborative AR system for increasing students' 3D understanding of topographic anatomy and its advantages over comparable AR systems for single-user experiences. Overall, these outcomes show that collaborative AR systems such as the one evaluated within this work stimulate interactive, student-centered learning in teams and have the potential to become an integral part of a modern, multi-modal anatomy curriculum.


Subject(s)
Anatomy , Augmented Reality , Students, Medical , Anatomy/education , Curriculum , Educational Measurement , Humans , Learning , Pilot Projects , Teaching
13.
Med Image Comput Comput Assist Interv ; 12265: 267-276, 2020 Oct.
Article in English | MEDLINE | ID: mdl-34085059

ABSTRACT

Intraoperative Optical Coherence Tomography (iOCT) has advanced in recent years to provide real-time high resolution volumetric imaging for ophthalmic surgery. It enables real-time 3D feedback during precise surgical maneuvers. Intraoperative 4D OCT generally exhibits lower signal-to-noise ratio compared to diagnostic OCT and visualization is complicated by instrument shadows occluding retinal tissue. Additional constraints of processing data rates upwards of 6GB/s create unique challenges for advanced visualization of 4D OCT. Prior approaches for real-time 4D iOCT rendering have been limited to applying simple denoising filters and colorization to improve visualization. We present a novel real-time rendering pipeline that provides enhanced intraoperative visualization and is specifically designed for the high data rates of 4D iOCT. We decompose the volume into a static part consisting of the retinal tissue and a dynamic part including the instrument. Aligning the static parts over time allows temporal compounding of these structures for improved image quality. We employ a translational motion model and use axial projection images to reduce the dimensionality of the alignment. A model-based instrument segmentation on the projections discriminates static from dynamic parts and is used to exclude instruments from the compounding. Our real-time rendering method combines the compounded static information with the latest iOCT data to provide a visualization which compensates instrument shadows and improves instrument visibility. We evaluate the individual parts of our pipeline on pre-recorded OCT volumes and demonstrate the effectiveness of our method on a recorded volume sequence with a moving retinal forceps.
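The translational alignment of the static structures can be illustrated with a toy 1-D version: find the shift that maximizes the correlation between two intensity profiles. The actual pipeline operates on 2-D axial projection images of the OCT volume; this sketch only shows the principle, with made-up data:

```python
# Toy sketch of translational alignment on projections: estimate the
# integer shift between two 1-D profiles by maximizing their overlap
# correlation. The real pipeline aligns 2-D axial projection images.

def best_shift(ref, moving, max_shift=5):
    """Return s such that moving[i + s] best matches ref[i]."""
    best, best_score = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        score = sum(ref[i] * moving[i + s]
                    for i in range(len(ref))
                    if 0 <= i + s < len(moving))
        if score > best_score:
            best, best_score = s, score
    return best

ref    = [0, 0, 1, 4, 1, 0, 0, 0]
moving = [0, 0, 0, 0, 1, 4, 1, 0]   # ref shifted right by 2
print(best_shift(ref, moving))      # 2
```

Once the per-frame shifts are known, the static tissue can be accumulated ("compounded") over time while the segmented instrument is excluded.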

14.
Anat Sci Educ ; 12(6): 585-598, 2019 Nov.
Article in English | MEDLINE | ID: mdl-30697948

ABSTRACT

Early exposure to radiological cross-section images during introductory anatomy and dissection courses increases students' understanding of both anatomy and radiology. Novel technologies such as augmented reality (AR) offer unique advantages for an interactive and hands-on integration with the student at the center of the learning experience. In this article, the benefits of a previously proposed AR Magic Mirror system are compared to the Anatomage, a virtual dissection table as a system for combined anatomy and radiology teaching during a two-semester gross anatomy course with 749 first-year medical students, as well as a follow-up elective course with 72 students. During the former, students worked with both systems in dedicated tutorial sessions which accompanied the anatomy lectures and provided survey-based feedback. In the elective course, participants were assigned to three groups and underwent a self-directed learning session using either Anatomage, Magic Mirror, or traditional radiology atlases. A pre- and posttest design with multiple choice questions revealed significant improvements in test scores between the two tests for both the Magic Mirror and the group using radiology atlases, while no significant differences in test scores were recorded for the Anatomage group. Furthermore, especially students with low mental rotation test (MRT) scores benefited from the Magic Mirror and Anatomage and achieved significantly higher posttest scores compared to students with a low MRT score in the theory group. Overall, the results provide supporting evidence that the Magic Mirror system achieves comparable results in terms of learning outcome to established anatomy learning tools such as Anatomage and radiology atlases.


Subject(s)
Anatomy, Cross-Sectional/education , Augmented Reality , Computer-Assisted Instruction/methods , Education, Medical, Undergraduate/methods , Radiology/education , Adolescent , Adult , Computer-Assisted Instruction/instrumentation , Curriculum , Educational Measurement/statistics & numerical data , Female , Humans , Imaging, Three-Dimensional/methods , Male , Problem-Based Learning/methods , Students, Medical/psychology , Students, Medical/statistics & numerical data , Teaching , Tomography, X-Ray Computed/methods , Young Adult
15.
IEEE Trans Vis Comput Graph ; 24(11): 2983-2992, 2018 11.
Article in English | MEDLINE | ID: mdl-30188832

ABSTRACT

Understanding, navigating, and performing goal-oriented actions in Mixed Reality (MR) environments is a challenging task and requires adequate information conveyance about the location of all virtual objects in a scene. Current Head-Mounted Displays (HMDs) have a limited field-of-view where augmented objects may be displayed. Furthermore, complex MR environments may be comprised of a large number of objects which can be distributed in the extended surrounding space of the user. This paper presents two novel techniques for visually guiding the attention of users towards out-of-view objects in HMD-based MR: the 3D Radar and the Mirror Ball. We evaluate our approaches against existing techniques during three different object collection scenarios, which simulate real-world exploratory and goal-oriented visual search tasks. To better understand how the different visualizations guide the attention of users, we analyzed the head rotation data for all techniques and introduce a novel method to evaluate and classify head rotation trajectories. Our findings provide supporting evidence that the type of visual guidance technique impacts the way users search for virtual objects in MR.


Subject(s)
Attention/physiology , Head/physiology , Imaging, Three-Dimensional/methods , Virtual Reality , Adult , Algorithms , Computer Graphics , Female , Humans , Male , Middle Aged , Young Adult
16.
Int J Comput Assist Radiol Surg ; 13(9): 1345-1355, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30054775

ABSTRACT

PURPOSE: Advances in sensing and digitalization enable us to acquire and present various heterogeneous datasets to enhance clinical decisions. Visual feedback is the dominant way of conveying such information. However, environments rich with many sources of information all presented through the same channel pose the risk of overstimulation and missing crucial information. The augmentation of the cognitive field by additional perceptual modalities such as sound is a workaround to this problem. A major challenge in auditory augmentation is the automatic generation of pleasant and ergonomic audio in complex routines, as opposed to overly simplistic feedback, to avoid alarm fatigue. METHODS: In this work, without loss of generality to other procedures, we propose a method for aural augmentation of medical procedures via automatic modification of musical pieces. RESULTS: Evaluations of this concept regarding the recognizability of the conveyed information, along with qualitative aesthetics, show the potential of our method. CONCLUSION: In this paper, we proposed a novel sonification method for automatic musical augmentation of tasks within surgical procedures. Our experimental results suggest that these augmentations are aesthetically pleasing and have the potential to successfully convey useful information. This work opens a path for advanced sonification techniques in the operating room, in order to complement traditional visual displays and convey information more efficiently.


Subject(s)
Algorithms , Audiovisual Aids , Feedback, Sensory , Sound , Surgery, Computer-Assisted/methods , Vitreoretinal Surgery/methods , Humans
17.
Int J Comput Assist Radiol Surg ; 13(9): 1335-1344, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29943226

ABSTRACT

PURPOSE: The discrepancy between continuously decreasing opportunities for clinical training and assessment and the increasing complexity of interventions in surgery has led to the development of different training and assessment options such as anatomical models, computer-based simulators, or cadaver trainings. However, trainees, following training and assessment and ultimately performing patient treatment, still face a steep learning curve. METHODS: To address this problem for C-arm-based surgery, we introduce a realistic radiation-free simulation system that combines patient-based 3D printed anatomy and simulated X-ray imaging using a physical C-arm. To explore the fidelity and usefulness of the proposed mixed-reality system for training and assessment, we conducted a user study with six surgical experts performing a facet joint injection on the simulator. RESULTS: In a technical evaluation, we show that our system simulates X-ray images accurately with an RMSE of 1.85 mm compared to real X-ray imaging. The participants expressed agreement with the overall realism of the simulation and the usefulness of the system for assessment, and strong agreement with the usefulness of such a mixed-reality system for training of novices and experts. In a quantitative analysis, we furthermore evaluated the suitability of the system for the assessment of surgical skills and gathered preliminary evidence for validity. CONCLUSION: The proposed mixed-reality simulation system facilitates a transition to C-arm-based surgery and has the potential to complement or even replace large parts of cadaver training, to provide a safe assessment environment, and to reduce the risk of errors when proceeding to patient treatment. We propose an assessment concept and outline the steps necessary to expand the system into a test instrument that provides reliable and justified assessment scores indicative of surgical proficiency, with sufficient evidence for validity.


Subject(s)
Lumbar Vertebrae/surgery , Models, Anatomic , Orthopedic Procedures/education , Simulation Training/methods , Surgery, Computer-Assisted/education , Tomography, X-Ray Computed/methods , User-Computer Interface , Cadaver , Clinical Competence , Humans , Learning Curve , Lumbar Vertebrae/diagnostic imaging , Male , Orthopedic Procedures/methods , Printing, Three-Dimensional , Surgery, Computer-Assisted/instrumentation
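The RMSE of 1.85 mm reported in this entry is a root-mean-square distance between corresponding positions in simulated and real X-ray images. A minimal sketch of the metric with made-up 2-D landmark coordinates (the paper's actual correspondence set is not specified here):

```python
# Sketch of the RMSE metric used to compare simulated and real X-ray
# images: root-mean-square Euclidean distance between corresponding
# landmark positions. Coordinates below are invented, in mm.

def rmse(points_a, points_b):
    if len(points_a) != len(points_b):
        raise ValueError("point lists must correspond pairwise")
    sq = [sum((a - b) ** 2 for a, b in zip(pa, pb))
          for pa, pb in zip(points_a, points_b)]
    return (sum(sq) / len(sq)) ** 0.5

real      = [(10.0, 20.0), (35.0, 12.0), (22.0, 40.0)]
simulated = [(11.0, 20.0), (35.0, 14.0), (22.0, 38.0)]
print(round(rmse(real, simulated), 3))  # 1.732
```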
18.
Unfallchirurg ; 121(4): 278-285, 2018 Apr.
Article in German | MEDLINE | ID: mdl-29464292

ABSTRACT

BACKGROUND: One of the main challenges for modern surgery is the effective use of the many available imaging modalities and diagnostic methods. Augmented reality systems can be used in the future to blend patient and planning information into the view of surgeons, which can improve the efficiency and safety of interventions. OBJECTIVE: In this article we present five visualization methods to integrate augmented reality displays into medical procedures and the advantages and disadvantages are explained. MATERIAL AND METHODS: Based on an extensive literature review the various existing approaches for integration of augmented reality displays into medical procedures are divided into five categories and the most important research results for each approach are presented. RESULTS: A large number of mixed and augmented reality solutions for medical interventions have been developed as research prototypes; however, only very few systems have been tested on patients. CONCLUSION: In order to integrate mixed and augmented reality displays into medical practice, highly specialized solutions need to be developed. Such systems must comply with the requirements with respect to accuracy, fidelity, ergonomics and seamless integration into the surgical workflow.


Subject(s)
Imaging, Three-Dimensional , Surgery, Computer-Assisted/instrumentation , User-Computer Interface , Equipment Design , Humans , Imaging, Three-Dimensional/instrumentation , Software Design , Systems Integration , Workflow
19.
Healthc Technol Lett ; 4(5): 179-183, 2017 Oct.
Article in English | MEDLINE | ID: mdl-29184661

ABSTRACT

Minimally invasive surgeries (MISs) are gaining popularity as alternatives to conventional open surgeries. In thoracoscopic scoliosis MIS, fluoroscopy is used to guide pedicle screw placement and to visualise the effect of the intervention on the spine curvature. However, cosmetic external appearance is the most important concern for patients, while correction of the spine and achieving coronal and sagittal trunk balance are the top priorities for surgeons. The authors present a feasibility study of the first intra-operative assistive system for scoliosis surgery, composed of a single RGBD camera affixed to a C-arm, which allows visualising in real time the effects of the surgery on the patient's trunk surface in the transverse plane. They perform three feasibility experiments, ranging from simulated data based on scoliotic patients to live acquisition from a non-scoliotic mannequin and person, all showing that the accuracy of the proposed system is comparable with the state of the art in scoliotic surface reconstruction.

20.
IEEE Trans Vis Comput Graph ; 21(12): 1427-41, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26394430

ABSTRACT

Visuo-haptic augmented reality systems enable users to see and touch digital information that is embedded in the real world. PHANToM haptic devices are often employed to provide haptic feedback. Precise co-location of computer-generated graphics and the haptic stylus is necessary to provide a realistic user experience. Previous work has focused on calibration procedures that compensate the non-linear position error caused by inaccuracies in the joint angle sensors. In this article we present a more complete procedure that additionally compensates for errors in the gimbal sensors and improves position calibration. The proposed procedure further includes software-based temporal alignment of sensor data and a method for the estimation of a reference for position calibration, resulting in increased robustness against haptic device initialization and external tracker noise. We designed our procedure to require minimal user input to maximize usability. We conducted an extensive evaluation with two different PHANToMs, two different optical trackers, and a mechanical tracker. Compared to state-of-the-art calibration procedures, our approach significantly improves the co-location of the haptic stylus. This results in higher fidelity visual and haptic augmentations, which are crucial for fine-motor tasks in areas such as medical training simulators, assembly planning tools, or rapid prototyping applications.


Subject(s)
Computer Graphics/instrumentation , Touch/physiology , User-Computer Interface , Equipment Design , Humans
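Position calibration of the kind described in this last entry can be illustrated, greatly simplified, as a least-squares fit mapping raw sensor readings to tracked reference positions. The actual procedure compensates non-linear, multi-axis joint and gimbal errors, so this 1-D linear fit with invented data is only a sketch of the fitting principle:

```python
# Toy sketch of sensor calibration by least squares: fit a linear
# correction y = a*x + b mapping raw device readings to tracked
# reference positions. Real PHANToM calibration handles non-linear,
# multi-axis errors; this shows only the closed-form 1-D case.

def fit_linear(raw, ref):
    n = len(raw)
    mx = sum(raw) / n
    my = sum(ref) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(raw, ref))
    var = sum((x - mx) ** 2 for x in raw)
    a = cov / var          # slope of the correction
    b = my - a * mx        # offset of the correction
    return a, b

raw = [0.0, 1.0, 2.0, 3.0]
ref = [0.5, 2.5, 4.5, 6.5]   # reference = 2*raw + 0.5
a, b = fit_linear(raw, ref)
print(a, b)                  # 2.0 0.5
```

Applying the fitted correction to subsequent readings co-locates the haptic stylus with the rendered graphics to first order; residual non-linear error is what the paper's fuller procedure targets.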