Results 1 - 20 of 1,637
1.
Nihon Ronen Igakkai Zasshi ; 61(3): 312-321, 2024.
Article in Japanese | MEDLINE | ID: mdl-39261101

ABSTRACT

PURPOSE: We aimed to develop a simulation program that uses virtual reality (VR) and augmented reality (AR) to let physicians and nurses experience treatment and care from the perspective of older adults with dementia who developed delirium, and to test the effectiveness of the program. METHODS: The effectiveness of the program was analyzed through free-response statements from 67 nurses (84.8%) and 12 doctors (15.2%) who participated in the program between February 16 and April 18, 2023. RESULTS: Regarding the experience of delirium from the perspective of older adults with dementia (personal experience), the following statements were extracted: "1. I do not understand where I am, the situation, or the treatment/care that is about to be given"; "2. I want the situation to be explained to me so that I can understand the reasons for my hospitalization and the treatment/care I am receiving"; "3. The eerie environment of the hospital and the high pressure of the staff made me feel anxious and fearful"; "4. Please respect my existence as I endure pain, anxiety, and loneliness"; "5. I feel relieved when doctors and nurses deal with me from my point of view"; and "6. I feel relieved when there is a familiar presence, such as a family member, or when I am called by my usual name". CONCLUSION: Specific categories of self-oriented empathy were extracted from the experience of physical restraint at night using VR and the experience of delirium using AR. This suggests the possibility of objective effects on treatment and care in future practice.


Subject(s)
Delirium , Dementia , Virtual Reality , Humans , Delirium/prevention & control , Delirium/therapy , Aged , Augmented Reality , Female , Male
2.
Sheng Wu Yi Xue Gong Cheng Xue Za Zhi ; 41(4): 684-691, 2024 Aug 25.
Article in Chinese | MEDLINE | ID: mdl-39218593

ABSTRACT

This study investigates a brain-computer interface (BCI) system based on an augmented reality (AR) environment and steady-state visual evoked potentials (SSVEP). The system is designed to facilitate the selection of real-world objects through visual gaze in real-life scenarios. By integrating object detection and AR technologies, the system augments real objects with visual enhancements, providing users with visual stimuli that induce corresponding brain signals. SSVEP technology is then used to interpret these brain signals and identify the objects that users focus on. Additionally, an adaptive dynamic time-window-based filter bank canonical correlation analysis was employed to rapidly parse the subjects' brain signals. Experimental results indicated that the system could effectively recognize SSVEP signals, achieving an average accuracy of 90.6% in visual target identification. This system extends the application of SSVEP signals to real-life scenarios, demonstrating feasibility and efficacy in assisting individuals with mobility impairments and physical disabilities in object selection tasks.
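The abstract does not detail the adaptive dynamic time-window filter bank CCA. As a rough, simplified illustration of the underlying idea only (plain CCA scoring against sine/cosine reference templates, not the authors' adaptive method; all function names are hypothetical), SSVEP target identification can be sketched as:

```python
import numpy as np

def cca_max_corr(X, Y):
    # Maximum canonical correlation between two column-centered data blocks,
    # computed via QR orthonormalization and an SVD of the cross-product.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return s[0]

def ssvep_classify(eeg, fs, freqs, n_harmonics=2):
    """Pick the stimulus frequency whose sine/cosine reference set has the
    highest canonical correlation with the EEG block (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        refs = []
        for h in range(1, n_harmonics + 1):
            refs += [np.sin(2 * np.pi * h * f * t), np.cos(2 * np.pi * h * f * t)]
        scores.append(cca_max_corr(eeg, np.column_stack(refs)))
    return freqs[int(np.argmax(scores))], scores
```

A filter bank variant would repeat this scoring over several band-passed copies of the EEG and combine the squared correlations with fixed weights.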


Subject(s)
Augmented Reality , Brain-Computer Interfaces , Electroencephalography , Evoked Potentials, Visual , Humans , Evoked Potentials, Visual/physiology , Photic Stimulation , User-Computer Interface , Algorithms
3.
Head Face Med ; 20(1): 51, 2024 Sep 21.
Article in English | MEDLINE | ID: mdl-39306659

ABSTRACT

BACKGROUND: Successfully restoring facial contours continues to pose a significant challenge for surgeons. This study aims to utilize head-mounted display-based augmented reality (AR) navigation technology for facial soft tissue defect reconstruction and to evaluate its accuracy and effectiveness, exploring its feasibility in craniofacial surgery. METHODS: HoloLens 2 was utilized to construct the AR guidance system for facial fat grafting. Twenty artificial cases with facial soft tissue defects were randomly assigned to Group A and Group B, undergoing filling surgeries with the AR guidance system and conventional methods, respectively. All postoperative three-dimensional models were superimposed onto virtual plans to evaluate the accuracy of the system versus conventional filling methods. Additionally, procedure completion time was recorded to assess system efficiency relative to conventional methods. RESULTS: The error in facial soft tissue defect reconstruction assisted by the system in Group A was 2.09 ± 0.56 mm, significantly lower than the 3.23 ± 1.15 mm observed with conventional methods in Group B (p < 0.05). Additionally, the time required for facial defect filling reconstruction using the system in Group A was 25.45 ± 2.58 min, markedly shorter than the 37.05 ± 3.34 min needed with conventional methods in Group B (p < 0.05). CONCLUSION: The visual navigation offered by the fat grafting AR guidance system presents obvious advantages in facial soft tissue defect reconstruction, facilitating enhanced precision and efficiency in these filling procedures.


Subject(s)
Adipose Tissue , Augmented Reality , Plastic Surgery Procedures , Humans , Adipose Tissue/transplantation , Plastic Surgery Procedures/methods , Female , Male , Surgery, Computer-Assisted/methods , Face/surgery , Face/diagnostic imaging , Soft Tissue Injuries/surgery , Imaging, Three-Dimensional , Adult
4.
Invest Ophthalmol Vis Sci ; 65(11): 30, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39292450

ABSTRACT

Purpose: This study aimed to investigate changes in ocular refraction and pupillary diameter during fixation on augmented reality (AR) images using a Maxwellian display. Methods: Twenty-two healthy young volunteers (average age, 20.7 ± 0.5 years) wore a Maxwellian display device in front of their right eye and fixated on an asterisk displayed on both a liquid-crystal display (real target) and a Maxwellian display (AR target) for 29 seconds (real as a baseline for 3 seconds, AR for 13 seconds, and real for 13 seconds) at distances of 5.0, 0.5, 0.33, and 0.2 m. A binocular open-view autorefractometer was used to measure the ocular refraction and pupillary diameter of the left eye. Results: Accommodative (5.0 m, 0.28 ± 0.29 diopter [D]; 0.5 m, -0.12 ± 0.35 D; 0.33 m, -0.43 ± 0.57 D; 0.2 m, -1.20 ± 0.82 D) and pupillary (5.0 m, 0.07 ± 0.22 mm; 0.5 m, -0.08 ± 0.17 mm; 0.33 m, -0.16 ± 0.20 mm; 0.2 m, -0.25 ± 0.24 mm) responses became increasingly negative as the real target distance decreased. The accommodative response was significantly and positively correlated with the pupillary response during fixation on the AR target (R2 = 0.187, P < 0.001). Conclusions: Fixating on AR images using a Maxwellian display induces accommodative and pupillary responses, and the accommodative response depends on the distance of the real target. Overall, the Maxwellian display does not completely eliminate accommodation in real space.
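For orientation, the viewing distances in this entry map directly to accommodative demand via the standard relation demand (diopters) = 1 / distance (meters); a quick sanity-check computation:

```python
# Accommodative demand (diopters) is the reciprocal of viewing distance in
# meters: a 0.2 m target demands 5 D of accommodation, a 5.0 m target 0.2 D.
distances_m = [5.0, 0.5, 0.33, 0.2]
demand_d = [round(1.0 / d, 2) for d in distances_m]
print(dict(zip(distances_m, demand_d)))  # {5.0: 0.2, 0.5: 2.0, 0.33: 3.03, 0.2: 5.0}
```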


Subject(s)
Accommodation, Ocular , Augmented Reality , Fixation, Ocular , Pupil , Refraction, Ocular , Humans , Accommodation, Ocular/physiology , Male , Female , Young Adult , Pupil/physiology , Fixation, Ocular/physiology , Refraction, Ocular/physiology , Healthy Volunteers , Vision, Binocular/physiology , Adult
5.
PLoS One ; 19(9): e0308757, 2024.
Article in English | MEDLINE | ID: mdl-39292693

ABSTRACT

Attending to the behaviors of eyewitnesses at police lineups could help to determine whether an eyewitness identification is accurate or mistaken. Eyewitness identification decision processes were explored using augmented reality holograms. Children (n = 143; mean age = 10.79 years, SD = 1.12) and adults (n = 152; mean age = 22.12 years, SD = 7.47) viewed staged crime videos and made identification decisions from sequential lineups. The lineups were presented in augmented reality. Children were less accurate than adults on the lineup task. For adults, fast response times and high post-identification confidence ratings were both reflective of identification accuracy. Fast response times were also reflective of accuracy for children; however, children's confidence ratings did not reflect the likely accuracy of their identifications. A new additional measure, the witness' proximity to the augmented reality lineup, revealed that children who made mistaken identifications moved closer to the lineup than children who correctly identified the person from the crime video. Adults who moved any distance towards the lineup were less accurate than adults who did not move at all, but beyond that, adults' proximity to the lineup was not reflective of accuracy. The findings give further evidence that behavioral indicators of deliberation and information-seeking by eyewitnesses are signals of low lineup identification reliability. The findings also suggest that when assessing the reliability of children's lineup identifications, behavioral measures are more useful than metacognitive reports.


Subject(s)
Augmented Reality , Crime , Reaction Time , Humans , Child , Female , Male , Adult , Young Adult , Adolescent , Mental Recall/physiology , Reproducibility of Results , Recognition, Psychology
6.
Comput Assist Surg (Abingdon) ; 29(1): 2357164, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39253945

ABSTRACT

Augmented Reality (AR) holds the potential to revolutionize surgical procedures by allowing surgeons to visualize critical structures within the patient's body. This is achieved through superimposing preoperative organ models onto the actual anatomy. Challenges arise from dynamic deformations of organs during surgery, making preoperative models inadequate for faithfully representing intraoperative anatomy. To enable reliable navigation in augmented surgery, modeling of intraoperative deformation to obtain an accurate alignment of the preoperative organ model with the intraoperative anatomy is indispensable. Despite the existence of various methods proposed to model intraoperative organ deformation, there are still few literature reviews that systematically categorize and summarize these approaches. This review aims to fill this gap by providing a comprehensive and technical-oriented overview of modeling methods for intraoperative organ deformation in augmented reality in surgery. Through a systematic search and screening process, 112 closely relevant papers were included in this review. By presenting the current status of organ deformation modeling methods and their clinical applications, this review seeks to enhance the understanding of organ deformation modeling in AR-guided surgery, and discuss the potential topics for future advancements.


Subject(s)
Augmented Reality , Surgery, Computer-Assisted , Humans , Surgery, Computer-Assisted/methods , Models, Anatomic , Imaging, Three-Dimensional
7.
Langenbecks Arch Surg ; 409(1): 274, 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39251463

ABSTRACT

PURPOSE: Anatomical understanding is an important basis for medical teaching, especially in a surgical context. However, the interpretation of complex vascular structures via two-dimensional visualization can be difficult, particularly for students. The objective of this study was to investigate the feasibility of a mixed reality (MxR)-assisted educational approach in vascular surgery undergraduate education, comparing an MxR-based teaching intervention with CT-based material for learning and understanding the vascular morphology of the thoracic aorta. METHODS: In a prospective randomized controlled trial, learning success and diagnostic skills following an MxR- vs. a CT-based intervention were investigated in 120 thoracic aortic visualizations. Secondary outcomes were motivation, system usability, and workload/satisfaction. Motivational factors and training experience were also assessed. Twelve students (7 females; mean age: 23 years) were randomized into two groups undergoing an educational intervention with MxR or CT. RESULTS: Evaluation of learning success showed a mean improvement of 1.17 points (max. score: 10; 95% CI: 0.36-1.97). The MxR group improved by a mean of 1.33 points [95% CI: 0.16-2.51], versus 1.0 points [95% CI: -0.71 to 2.71] in the CT group. Regarding diagnostic skills, both groups performed equally (CT group: 58.25 ± 7.86 vs. MxR group: 58.5 ± 6.60; max. score: 92.0). Eleven of 12 participants were convinced that MxR facilitated the learning of vascular morphologies. The usability of the MxR system was rated positively, and the perceived workload was low. CONCLUSION: MxR systems can be a valuable addition to vascular surgery education. Further evaluation of the technology in larger teaching settings is required. Especially with regard to the acquisition of practical skills, MxR systems offer interesting application possibilities in surgical education.


Subject(s)
Aorta, Thoracic , Education, Medical, Undergraduate , Humans , Female , Male , Pilot Projects , Aorta, Thoracic/diagnostic imaging , Aorta, Thoracic/anatomy & histology , Prospective Studies , Young Adult , Education, Medical, Undergraduate/methods , Adult , Augmented Reality , Feasibility Studies , Tomography, X-Ray Computed , Vascular Surgical Procedures/education , Clinical Competence , Anatomy/education
8.
Langenbecks Arch Surg ; 409(1): 268, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39225933

ABSTRACT

PURPOSE: Augmented reality navigation in liver surgery still faces technical challenges such as insufficient registration accuracy. This study compared registration accuracy between local and external virtual 3D liver models (vir3DLivers) generated with different rendering techniques, and between use of the left vs. right main portal vein branch (LPV vs. RPV) for landmark setting. The study further examined how registration accuracy behaves with increasing distance from the region of interest (ROI). METHODS: Retrospective registration accuracy analysis of an optical intraoperative 3D navigation system used in 13 liver tumor patients undergoing liver resection/thermal ablation. RESULTS: 109 measurements in 13 patients were performed. Registration accuracy with local and external vir3DLivers was comparable (8.76 ± 0.9 mm vs. 7.85 ± 0.9 mm; 95% CI = -0.73 to 2.55 mm; p = 0.272). Registrations via the LPV demonstrated significantly higher accuracy than via the RPV (6.2 ± 0.85 mm vs. 10.41 ± 0.99 mm; 95% CI = 2.39 to 6.03 mm; p < 0.001). There was a statistically significant but weak positive correlation between the registration error (dFeature) and the distance from the ROI (dROI) (r = 0.298; p = 0.002). CONCLUSION: Despite being based on different rendering techniques, both local and external vir3DLivers provide comparable registration accuracy, while LPV-based registrations significantly outperform RPV-based ones. Higher accuracy can be assumed within distances of up to a few centimeters around the ROI.


Subject(s)
Augmented Reality , Hepatectomy , Imaging, Three-Dimensional , Liver Neoplasms , Surgery, Computer-Assisted , Humans , Hepatectomy/methods , Male , Liver Neoplasms/surgery , Liver Neoplasms/diagnostic imaging , Female , Retrospective Studies , Middle Aged , Surgery, Computer-Assisted/methods , Aged , Portal Vein/surgery , Portal Vein/diagnostic imaging , Anatomic Landmarks , Ultrasonography, Interventional/methods
9.
Sensors (Basel) ; 24(17)2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39275397

ABSTRACT

State-of-the-art augmented reality (AR) glasses record their 3D pose in space, enabling measurements and analyses of clinical gait and balance tests. This study's objective was to evaluate concurrent validity and test-retest reliability for common clinical gait and balance tests in people with Parkinson's disease: Five Times Sit To Stand (FTSTS) and Timed Up and Go (TUG) tests. Position and orientation data were collected in 22 participants with Parkinson's disease using HoloLens 2 and Magic Leap 2 AR glasses, from which test completion durations and durations of distinct sub-parts (e.g., sit to stand, turning) were derived and compared to reference systems and over test repetitions. Regarding concurrent validity, for both tests, an excellent between-systems agreement was found for position and orientation time series (ICC(C,1) > 0.933) and test completion durations (ICC(A,1) > 0.984). Between-systems agreement for FTSTS (sub-)durations were all excellent (ICC(A,1) > 0.921). TUG turning sub-durations were excellent (turn 1, ICC(A,1) = 0.913) and moderate (turn 2, ICC(A,1) = 0.589). Regarding test-retest reliability, the within-system test-retest variation in test completion times and sub-durations was always much greater than the between-systems variation, implying that (sub-)durations may be derived interchangeably from AR and reference system data. In conclusion, AR data are of sufficient quality to evaluate gait and balance aspects in people with Parkinson's disease, with valid quantification of test completion durations and sub-durations of distinct FTSTS and TUG sub-parts.
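The ICC(A,1) statistic reported above is the two-way random-effects, absolute-agreement, single-measurement intraclass correlation. A minimal sketch of its standard ANOVA-based computation (a generic McGraw & Wong-style formulation, not the authors' analysis code; the function name is hypothetical):

```python
import numpy as np

def icc_a1(data):
    """ICC(A,1): two-way random effects, absolute agreement, single
    measurement. data: n subjects (rows) x k raters/systems (columns)."""
    n, k = data.shape
    grand = data.mean()
    row_means = data.mean(axis=1)   # per-subject means
    col_means = data.mean(axis=0)   # per-system means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects MS
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # systems MS
    sse = np.sum((data - row_means[:, None] - col_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because the formula penalizes systematic offsets between systems (through the MSC term), identical columns yield 1.0 while a constant between-system bias lowers the coefficient even when rankings agree perfectly.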


Subject(s)
Augmented Reality , Gait , Parkinson Disease , Postural Balance , Humans , Parkinson Disease/physiopathology , Postural Balance/physiology , Male , Gait/physiology , Female , Aged , Middle Aged , Reproducibility of Results , Eyeglasses
10.
Sensors (Basel) ; 24(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39275462

ABSTRACT

Gait speed is increasingly recognized as an important health indicator. However, gait analysis in clinical settings often encounters inconsistencies due to methodological variability and resource constraints. To address these challenges, GaitKeeper uses artificial intelligence (AI) and augmented reality (AR) to standardize gait speed assessments. In laboratory conditions, GaitKeeper demonstrates close alignment with the Vicon system and, in clinical environments, it strongly correlates with the Gaitrite system. The integration of a cloud-based processing platform and robust data security positions GaitKeeper as an accurate, cost-effective, and user-friendly tool for gait assessment in diverse clinical settings.


Subject(s)
Artificial Intelligence , Gait , Walking Speed , Humans , Walking Speed/physiology , Gait/physiology , Gait Analysis/methods , Gait Analysis/instrumentation , Augmented Reality , Male , Adult , Female , Mobile Applications , Algorithms
11.
Semin Vasc Surg ; 37(3): 321-325, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39277348

ABSTRACT

Extended reality has brought new opportunities for medical imaging visualization and analysis. It encompasses various subfields, including virtual reality, augmented reality, and mixed reality. Various applications have been proposed for surgical practice, as well as for education and training. The aim of this review was to summarize current applications of extended reality and augmented reality in vascular surgery, highlighting potential benefits, pitfalls, limitations, and perspectives for improvement.


Subject(s)
Augmented Reality , Vascular Surgical Procedures , Virtual Reality , Humans , Vascular Surgical Procedures/education , Clinical Competence , Surgery, Computer-Assisted , Predictive Value of Tests
12.
Sci Rep ; 14(1): 21198, 2024 09 11.
Article in English | MEDLINE | ID: mdl-39261561

ABSTRACT

Gait guidance systems that synchronize the gait rhythm with an avatar in a mixed reality (MR) environment are attracting attention owing to their rehabilitation applications. More effective gait guidance can be achieved by changing body sensations for the sense of embodiment (SoE), which refers to the feeling of owning, controlling, and being inside a body in MR. This study investigated full-body synchronous motion between a human and a virtual avatar to enhance the SoE in walking with actual position changes in the real world. The full-body motion and gait rhythm were measured using body-worn inertial measurement units and a visual avatar was provided through a transparent head-mounted display. The results showed that the SoE of the participants was enhanced under higher synchronization conditions. In addition, questionnaire results showed that the SoE in the synchronous condition was significantly higher than that in the asynchronous condition, and the SoE in the self-avatar condition was significantly higher than that in the other-avatar condition. This indicates that a higher synchronization level with the appearance of an avatar leads to a stronger SoE in the human perception mechanism, which is important for potential application in medical or other fields.


Subject(s)
Walking , Humans , Walking/physiology , Male , Female , Adult , Young Adult , Gait/physiology , Virtual Reality , User-Computer Interface , Augmented Reality , Avatar
13.
Isr J Health Policy Res ; 13(1): 46, 2024 Sep 12.
Article in English | MEDLINE | ID: mdl-39267143

ABSTRACT

BACKGROUND: In trauma response preparation for prehospital teams, the combination of Augmented Reality (AR) and Virtual Reality (VR) with manikin technologies is growing in importance for creating training scenarios that closely mirror potential real-life situations. This pilot study focused on training in airway management and intubation for trauma incidents, using a Trauma AR-VR simulator with reserve paramedics of the national EMS service (Magen David Adom) who had not practiced for up to six years and were activated during the Israel-Gaza conflict (October 2023). The trauma simulator merges the physical and virtual realms by utilizing a real manikin and instruments outfitted with sensors. This integration enables a precise one-to-one correspondence between the physical and virtual environments. Considering the importance of enhancing the preparedness of reserve paramedics to support the prehospital system in Israel, the study aims to ascertain the impact of AR-VR trauma simulator training on key perceptual attitudes such as self-efficacy, resilience, knowledge, and competency among reserve paramedics in Israel. METHODS: A quantitative questionnaire was used to gauge the influence of AR-VR training on specific psychological and skill-based metrics, including self-efficacy, resilience, medical knowledge, professional competency, confidence in performing intubations, and the perceived quality of the training experience. The methodology entailed administering a pre-training questionnaire, delivering a targeted 30-minute AR-VR training session on airway management techniques, and collecting post-training data through a parallel questionnaire to measure the training's impact. Fifteen reserve paramedics were trained, with a response rate of 80% (n = 12) in both measurements.
RESULTS: Post-training evaluations indicated a significant uptick in all measured areas, with resilience (3.717±0.611 to 4.008±0.665) and intubation confidence (3.541±0.891 to 3.833±0.608) showing particularly robust gains. The high rating of the training quality (4.438±0.419 on a scale of 5) suggests a positive response to the AR-VR integration for the enhancement of medical training. CONCLUSIONS: The application of AR-VR in the training of reserve paramedics demonstrates potential as a key tool for their swift mobilization and efficiency in crisis response. This is particularly valuable for training when quick deployment of personnel is necessary, training resources are diminished, and an 'all hands on deck' response is required.


Subject(s)
Augmented Reality , Emergency Medical Services , Virtual Reality , Humans , Pilot Projects , Israel , Emergency Medical Services/methods , Male , Adult , Surveys and Questionnaires , Female , Manikins , Clinical Competence/standards , Airway Management/methods , Emergency Medical Technicians/education , Allied Health Personnel/education , Middle Aged
14.
Comput Assist Surg (Abingdon) ; 29(1): 2403444, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39301766

ABSTRACT

Catheter-based intervention procedures involve complex maneuvers and are often performed under fluoroscopic guidance assisted by 2D and 3D echocardiography viewed on a flat screen, which inherently limits depth perception. Emerging mixed reality (MR) technologies, combined with advanced rendering techniques, offer potential enhancements in depth perception and navigational support. This study evaluates an MR-based guidance system for the atrial septal puncture (ASP) procedure using a phantom anatomical model. A novel MR-based guidance system using a modified Monte Carlo-based rendering approach for 3D echocardiographic visualization was introduced and evaluated against a standard clinical 3D echocardiographic display on a flat screen. The objective was to guide the ASP procedure by facilitating catheter placement and puncture across four specific atrial septum quadrants. To assess the system's feasibility and performance, a user study involving four experienced interventional cardiologists was conducted on a phantom model. Results show that participants accurately punctured the designated quadrant in 14 out of 16 punctures using MR and in 15 out of 16 punctures using the flat screen of the ultrasound machine. The geometric mean puncture time was 31 s for MR and 26 s for flat-screen guidance. User experience ratings indicated that MR-based guidance made it easier to navigate and to locate tenting of the atrial septum. The study demonstrates the feasibility of MR-guided atrial septal puncture. User experience data, particularly with respect to navigation, imply potential benefits for more complex procedures and for educational purposes. The observed performance difference suggests a learning curve for optimal MR utilization.
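This entry reports geometric rather than arithmetic mean puncture times; the geometric mean is the exponential of the mean of the log times, which damps the influence of a few unusually slow punctures. A minimal sketch (function name hypothetical):

```python
import math

def geometric_mean(times_s):
    # Geometric mean: exp of the mean of the logs. For skewed timing data
    # it is pulled less by outliers than the arithmetic mean.
    return math.exp(sum(math.log(t) for t in times_s) / len(times_s))
```

For example, times of 2 s and 8 s have an arithmetic mean of 5 s but a geometric mean of 4 s.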


Subject(s)
Atrial Septum , Echocardiography, Three-Dimensional , Monte Carlo Method , Phantoms, Imaging , Punctures , Humans , Atrial Septum/diagnostic imaging , Echocardiography, Three-Dimensional/methods , Surgery, Computer-Assisted/methods , Cardiac Catheterization/methods , Augmented Reality , Ultrasonography, Interventional/methods
15.
Chin Clin Oncol ; 13(4): 56, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39238344

ABSTRACT

BACKGROUND AND OBJECTIVE: The increasing popularity of three-dimensional (3D) virtual reconstructions of two-dimensional (2D) imaging in urology has led to significant technological advancements, resulting in the creation of highly accurate 3D virtual models (3DVMs) that faithfully replicate individual anatomical details. This technology enhances surgical reality, providing surgeons with highly accurate, patient-specific insights into surgical anatomy and improving preoperative surgical planning. In the uro-oncologic field, the utility of 3D virtual reconstruction has been demonstrated in nephron-sparing surgery, impacting surgical strategy and postoperative outcomes in prostate cancer (PCa). The aim of this study is to offer a thorough narrative review of the current state and application of 3D reconstructions and augmented reality (AR) in radical prostatectomy (RP). METHODS: A non-systematic literature review was conducted using Medline, PubMed, the Cochrane Database, and Embase to gather information on clinical trials, randomized controlled trials, review articles, and prospective and retrospective studies related to 3DVMs and AR in RP. The search strategy followed the PICOS (Patients, Intervention, Comparison, Outcome, Study design) criteria and was performed in January 2024. KEY CONTENT AND FINDINGS: The adoption of 3D visualization has become widespread, with applications ranging from preoperative planning to intraoperative consultations. The urological community's interest in intraoperative surgical navigation using cognitive, virtual, mixed, and augmented reality during RP is evident in a substantial body of literature, including 16 noteworthy investigations. These studies highlight the varied experiences and benefits of incorporating 3D reconstructions and AR into RP, showcasing improvements in preoperative planning, intraoperative navigation, and real-time decision-making.
CONCLUSIONS: The integration of 3DVMs and AR technologies in urological oncology, particularly in the context of RP, has shown promising advancements. These technologies provide crucial support in preoperative planning, intraoperative navigation, and real-time decision-making, significantly improving the visualization of complex anatomical structures, aiding modulation of the nerve-sparing (NS) approach, and reducing the positive surgical margin (PSM) rate. Despite positive outcomes, challenges such as small patient cohorts, lack of standardized methodologies, and concerns about costs and technology adoption persist.


Subject(s)
Augmented Reality , Imaging, Three-Dimensional , Prostatectomy , Humans , Prostatectomy/methods , Male , Imaging, Three-Dimensional/methods , Prostatic Neoplasms/surgery
16.
J Med Syst ; 48(1): 76, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39145896

ABSTRACT

Mixed Reality is a technology that has gained attention due to its unique capabilities for accessing and visualizing information. When integrated with voice control mechanisms, gestures, and even iris movement, it becomes a valuable tool for medicine. These features are particularly appealing for the operating room and surgical learning, where access to information and freedom of hand operation are fundamental. This study examines the most significant research on mixed reality in the operating room over the past five years, to identify trends, use cases, applications, and limitations. A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) guidelines to answer the research questions established using the PICO (Population, Intervention, Comparator and Outcome) framework. Although implementation of Mixed Reality applications in the operating room presents some challenges, when used appropriately, it can yield remarkable results. It can make learning easier, flatten the learning curve for several procedures, and facilitate various aspects of the surgical process. The articles' conclusions highlight the potential benefits of these innovations in surgical practice while acknowledging the challenges that must be addressed. Technical complexity, equipment costs, and steep learning curves present significant obstacles to the widespread adoption of Mixed Reality and computer-assisted evaluation. The need for more flexible approaches and comprehensive studies is underscored by the specificity of procedures and limited sample sizes. The integration of imaging modalities and innovative functionalities holds promise for clinical applications. However, it is important to consider issues related to usability, bias, and statistical analyses. Mixed Reality offers significant benefits, but open challenges such as ergonomic issues, limited field of view, and battery autonomy must still be addressed to ensure widespread acceptance.


Subject(s)
Augmented Reality , Operating Rooms , Operating Rooms/organization & administration , Humans , User-Computer Interface
17.
Sci Rep ; 14(1): 18938, 2024 08 15.
Article in English | MEDLINE | ID: mdl-39147910

ABSTRACT

Mixed reality (MR) technologies, including virtual (VR) and augmented (AR) reality, have advanced many training and skill-development applications. If successful, these technologies could be valuable for high-impact professional training, such as medical operations or sports, where physical resources may be limited or inaccessible. Despite MR's potential, it is still unclear whether repeatedly performing a task in MR affects performance on the same or related tasks in the physical environment. To investigate this issue, participants executed a series of visually guided manual pointing movements in the physical world before and after spending one hour in VR or AR performing similar movements. Results showed that, owing to the MR headsets' intrinsic perceptual geometry, movements executed in VR were shorter and movements executed in AR were longer than the veridical Euclidean distance. Crucially, the sensorimotor bias in the MR conditions also manifested in the subsequent post-test pointing task: participants transferring from VR initially undershot, whereas those transferring from AR overshot, the target in the physical environment. These findings call for careful consideration of MR-based training, because exposure to MR may perturb sensorimotor processes in the physical environment and negatively impact performance accuracy and the transfer of training from MR to UR.


Subject(s)
Psychomotor Performance , Task Performance and Analysis , Virtual Reality , Humans , Male , Female , Adult , Young Adult , Psychomotor Performance/physiology , Augmented Reality , Movement/physiology
18.
Int J Occup Saf Ergon ; 30(3): 985-994, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39108078

ABSTRACT

With substantial software developments and advances in display technologies, augmented reality (AR) applications have gained popularity. In this study, we compare classic PowerPoint and AR for two kinds of scaffolding tasks (task-lifeline assembly and hedge assembly) for users with different spatial abilities. We considered both objective and subjective measures of performance, i.e., correct rate, system usability, and the ITC-Sense of Presence Inventory (ITC-SOPI) scale. The results show that participants using AR achieved higher operating performance than those using PowerPoint. Furthermore, the users' learning effect was influenced by spatial ability when using PowerPoint: participants with high spatial ability achieved higher performance than participants with low spatial ability. However, participants trained with AR did not show significantly different operating performance across levels of spatial ability. Consequently, AR is considered a promising method for enhancing training performance.


Subject(s)
Task Performance and Analysis , Humans , Male , Female , Augmented Reality , Adult , User-Computer Interface , Young Adult
19.
J Cancer Res Ther ; 20(4): 1338-1343, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39206996

ABSTRACT

OBJECTIVES: This study aimed to evaluate the accuracy of percutaneous computed tomography (CT)-guided puncture based on machine vision and augmented reality in a phantom. MATERIALS AND METHODS: The surgical space coordinate system was established, and accurate registration was ensured using a hierarchical optimization framework. Machine vision tracking and augmented reality display technologies were used for puncture navigation. CT was performed on a phantom, and puncture paths with three different lengths were planned from the surface of the phantom to the metal ball. Puncture accuracy was evaluated by measuring the target positioning error (TPE), lateral error (LE), angular error (AE), and first success rate (FSR) based on the obtained CT images. RESULTS: A highly qualified attending interventional physician performed a total of 30 punctures using puncture navigation. For the short distance (4.5-5.5 cm), the TPE, LE, AE, and FSR were 1.90 ± 0.62 mm, 1.23 ± 0.70 mm, 1.39 ± 0.86°, and 60%, respectively. For the medium distance (9.5-10.5 cm), the TPE, LE, AE, and FSR were 2.35 ± 0.95 mm, 2.00 ± 1.07 mm, 1.20 ± 0.62°, and 40%, respectively. For the long distance (14.5-15.5 cm), the TPE, LE, AE, and FSR were 2.81 ± 1.17 mm, 2.33 ± 1.34 mm, 0.99 ± 0.55°, and 30%, respectively. CONCLUSION: The augmented reality and machine vision-based CT-guided puncture navigation system allows for precise punctures in a phantom. Further studies are needed to explore its clinical applicability.
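The abstract does not give the formulas behind TPE, LE, and AE, but common geometric definitions are: TPE as the Euclidean distance from needle tip to target, LE as the perpendicular distance from the target to the actual needle axis, and AE as the angle between planned and actual needle directions. A sketch under those assumptions (the function name, input conventions, and formulas are not taken from the paper):

```python
# Hedged sketch of common geometric definitions for the accuracy metrics
# named in the abstract (TPE, LE, AE). These are standard formulas assumed
# for illustration, not the authors' implementation.
import numpy as np

def puncture_errors(tip, target, planned_dir, actual_dir):
    """All inputs are 3-vectors; returns (TPE in mm, LE in mm, AE in degrees)."""
    tip, target = np.asarray(tip, float), np.asarray(target, float)
    u = np.asarray(actual_dir, float); u = u / np.linalg.norm(u)
    p = np.asarray(planned_dir, float); p = p / np.linalg.norm(p)
    tpe = np.linalg.norm(tip - target)                # target positioning error
    d = target - tip
    le = np.linalg.norm(d - np.dot(d, u) * u)         # off-axis (lateral) error
    ae = np.degrees(np.arccos(np.clip(np.dot(u, p), -1.0, 1.0)))  # angular error
    return tpe, le, ae
```

For example, a tip 1 mm short of the target along a perfectly aligned axis would yield TPE = 1 mm, LE = 0 mm, AE = 0°.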


Subject(s)
Augmented Reality , Phantoms, Imaging , Tomography, X-Ray Computed , Tomography, X-Ray Computed/methods , Humans , Punctures/methods , Surgery, Computer-Assisted/methods
20.
J Vis Exp ; (210)2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39185900

ABSTRACT

Augmented Reality (AR) is in high demand in medical applications. The aim of this paper is to enable automated surgical guidance using AR for Transcatheter Aortic Valve Replacement (TAVR). TAVR is an alternative to open-heart surgery that replaces the damaged valve with a new one via a catheter. In existing models, remote guidance is provided, but the surgery is not automated through AR. In this article, we deployed a spatially aligned camera connected to a motor to automate image capture in the surgical environment. The camera tracks a 2D high-resolution image of the patient's heart along with the catheter testbed. The captured images are uploaded via a mobile app to a remote surgeon who is a cardiology expert, and a 3D reconstruction is produced from the 2D image tracking. This reconstruction is viewed in a HoloLens emulator on a laptop. The surgeon can remotely inspect the 3D reconstructed images with additional transformation features, such as rotation and scaling, enabled through hand gestures. The surgeon's guidance is transmitted to the surgical environment to automate the process in real time, and the catheter testbed in the surgical field is controlled by the hand-gesture guidance of the remote surgeon. The developed prototype model demonstrates the effectiveness of remote surgical guidance through AR.
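The rotation and scaling transforms the abstract attributes to hand gestures reduce to standard linear maps applied to the reconstructed model's points. A minimal sketch, with gesture mapping, function names, and the sample point all being illustrative assumptions rather than the paper's implementation:

```python
# Hedged sketch: the rotate/scale transforms the abstract says the remote
# surgeon applies to the reconstructed 3D model via hand gestures.
# The gesture-to-parameter mapping here is an assumption for illustration.
import numpy as np

def rotate_z(points, degrees):
    """Rotate an (N, 3) point cloud about the z-axis by `degrees`."""
    t = np.radians(degrees)
    r = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0,        0.0,       1.0]])
    return points @ r.T

def scale(points, factor):
    """Uniformly scale a point cloud about the origin."""
    return points * factor

# E.g., a twist gesture mapped to a 90-degree rotation, then a pinch-out
# gesture mapped to a 2x zoom of the reconstructed model.
model = np.array([[1.0, 0.0, 0.0]])
model = scale(rotate_z(model, 90.0), 2.0)
```

In a real pipeline these matrices would be composed and applied to the mesh on each gesture update rather than recomputed per point cloud.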


Subject(s)
Augmented Reality , Transcatheter Aortic Valve Replacement , Transcatheter Aortic Valve Replacement/methods , Humans , Surgery, Computer-Assisted/methods , Imaging, Three-Dimensional/methods