Results 1 - 20 of 1,588
1.
Front Public Health ; 12: 1420367, 2024.
Article in English | MEDLINE | ID: mdl-39135928

ABSTRACT

While the metaverse is widely discussed, comprehension of its intricacies remains limited to a select few. Conceptually akin to a three-dimensional embodiment of the Internet, the metaverse facilitates simultaneous existence in both physical and virtual domains. Fundamentally, it is a visually immersive virtual environment, striving for authenticity, in which individuals engage in real-world activities such as commerce, gaming, social interaction, and leisure pursuits. The global pandemic has accelerated digital innovation across diverse sectors. Beyond strides in telehealth, payment systems, remote monitoring, and secure data exchange, substantial advancements have been achieved in artificial intelligence (AI), virtual reality (VR), augmented reality (AR), and blockchain technologies. Nevertheless, the metaverse, still in its nascent stage, continues to evolve and harbors significant potential for revolutionizing healthcare. Through integration with the Internet of Medical Devices, quantum computing, and robotics, the metaverse stands poised to redefine healthcare systems, offering enhancements in surgical precision and therapeutic modalities and thus promising profound transformations within the industry.


Subject(s)
Virtual Reality , Humans , Delivery of Health Care , Artificial Intelligence , Telemedicine , Augmented Reality , Blockchain , COVID-19
2.
Mil Med ; 189(Supplement_3): 341-349, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160862

ABSTRACT

INTRODUCTION: Decision-making is a complex process that relies on situational awareness and experience to create a potential list of actions while weighing the risks and benefits of each action. There is a paucity of data evaluating decision-making by individual service members (SMs) during the performance of team-based, military-relevant activities. Understanding individual performance and decision-making within the context of a team-based activity has the potential to aid in the detection and management of mild traumatic brain injuries and to assist with safe and timely return-to-duty decision-making. The aim of this project was to evaluate cognitive and motor performance in healthy SMs during an augmented reality, military-specific, team-based activity. MATERIALS AND METHODS: Data from 110 SMs from Fort Moore, Georgia, were analyzed for this project. Service members completed 3 augmented reality room breaching and clearing scenarios (Empty Room, Civilian/Combatant, and Incorrect Position of a unit member) with 3 avatar team members. Participants wore a Microsoft HoloLens 2 (HL2) device and used a replica M4 weapon (Haptech Defense Systems) during the scenarios. Three-dimensional position data from the HL2 headset were used to compute temporal measures of room breaching and clearing events, while the number and timing of weapon discharges were monitored by the M4. Temporal outcomes included time to enter the room, time to fire the first shot, time in the fatal funnel, and total trial time; motor outcomes were distance traveled and average movement velocity. RESULTS: Pairwise comparisons between the Incorrect Position scenario and the Civilian/Combatant scenario demonstrated no difference in time to enter the room (2.36 seconds in both scenarios). Time to fire the first shot was longer in the Civilian/Combatant scenario (0.97 vs. 0.58 seconds), while time in the fatal funnel (3.31 vs. 2.58 seconds) and time to trial completion (8.41 vs. 7.46 seconds) were significantly longer in the Incorrect Position scenario. CONCLUSIONS: Reaction time to fire the first shot, time in the fatal funnel, and total trial time reflect a change in information-processing and decision-making capabilities during military-specific, ecological, team-based scenarios when the environment inside the room is altered and avatar movements are modified. Future studies are planned to evaluate the effects of mild traumatic brain injury on specific aspects of military team performance.
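The temporal outcomes above are derived from headset position samples. As a rough illustration (not the authors' code; the region bounds and sampling layout are hypothetical), a measure such as time in the fatal funnel can be computed by summing the sampling intervals whose positions fall inside a defined volume:

```python
import numpy as np

def time_in_region(times, positions, lo, hi):
    # Sum the sampling intervals whose starting position lies inside the
    # axis-aligned box [lo, hi] (e.g. a doorway "fatal funnel" volume).
    P = np.asarray(positions, float)
    inside = np.all((P >= np.asarray(lo, float)) & (P <= np.asarray(hi, float)), axis=1)
    dt = np.diff(np.asarray(times, float))
    return float(dt[inside[:-1]].sum())
```

The same position stream yields distance traveled (summed segment lengths) and average velocity (distance over trial time).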


Subject(s)
Augmented Reality , Military Personnel , Humans , Male , Military Personnel/statistics & numerical data , Adult , Female , Decision Making , Georgia , Task Performance and Analysis
3.
J Med Syst ; 48(1): 76, 2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39145896

ABSTRACT

Mixed Reality is a technology that has gained attention due to its unique capabilities for accessing and visualizing information. When integrated with voice control, gestures, and even iris movement, it becomes a valuable tool for medicine. These features are particularly appealing for the operating room and surgical learning, where access to information and freedom of hand operation are fundamental. This study examines the most significant research on mixed reality in the operating room over the past five years to identify trends, use cases, applications, and limitations. A systematic review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to answer research questions established using the PICO (Population, Intervention, Comparator, and Outcome) framework. Although implementation of Mixed Reality applications in the operating room presents some challenges, when used appropriately it can yield remarkable results. It can make learning easier, flatten the learning curve for several procedures, and facilitate various aspects of the surgical process. The articles' conclusions highlight the potential benefits of these innovations in surgical practice while acknowledging the challenges that must be addressed. Technical complexity, equipment costs, and steep learning curves present significant obstacles to the widespread adoption of Mixed Reality and computer-assisted evaluation. The need for more flexible approaches and comprehensive studies is underscored by the specificity of procedures and limited sample sizes. The integration of imaging modalities and innovative functionalities holds promise for clinical applications. However, it is important to consider issues related to usability, bias, and statistical analyses. Mixed Reality offers significant benefits, but open challenges such as ergonomic issues, limited field of view, and battery autonomy must still be addressed to ensure widespread acceptance.


Subject(s)
Augmented Reality , Operating Rooms , Operating Rooms/organization & administration , Humans , User-Computer Interface
4.
Sci Rep ; 14(1): 18938, 2024 08 15.
Article in English | MEDLINE | ID: mdl-39147910

ABSTRACT

The popularity of mixed reality (MR) technologies, including virtual (VR) and augmented (AR) reality, has advanced many training and skill development applications. If successful, these technologies could be valuable for high-impact professional training, such as medical operations or sports, where physical resources may be limited or inaccessible. Despite MR's potential, it is still unclear whether repeatedly performing a task in MR affects performance in the same or related tasks in the physical environment. To investigate this issue, participants executed a series of visually guided manual pointing movements in the physical world before and after spending one hour in VR or AR performing similar movements. Results showed that, due to the MR headsets' intrinsic perceptual geometry, movements executed in VR were shorter and movements executed in AR were longer than the veridical Euclidean distance. Crucially, the sensorimotor bias in the MR conditions also manifested in the subsequent post-test pointing task: participants transferring from VR initially undershot, whereas those transferring from AR overshot, the target in the physical environment. These findings call for careful consideration of MR-based training, because exposure to MR may perturb sensorimotor processes in the physical environment and negatively impact performance accuracy and the transfer of training from MR to the real world.


Subject(s)
Psychomotor Performance , Task Performance and Analysis , Virtual Reality , Humans , Male , Female , Adult , Young Adult , Psychomotor Performance/physiology , Augmented Reality , Movement/physiology
5.
PLoS One ; 19(7): e0305199, 2024.
Article in English | MEDLINE | ID: mdl-39024253

ABSTRACT

Feature description is a critical task in Augmented Reality tracking. This article introduces a Convex Based Feature Descriptor (CBFD) system designed to withstand rotation, lighting, and blur variations while remaining computationally efficient. We have developed two filters capable of computing pixel intensity variations, followed by the covariance matrix of the polynomial, to describe the features. The superiority of CBFD is validated through precision, recall, computation time, and feature location distance. Additionally, we provide a solution to determine the optimal block size for describing nonlinear regions, thereby enhancing resolution. The results demonstrate that CBFD achieves an average precision of 0.97 for the test image, outperforming Superpoint, Directional Intensified Tertiary Filtering (DITF), Binary Robust Independent Elementary Features (BRIEF), Binary Robust Invariant Scalable Keypoints (BRISK), Speeded Up Robust Features (SURF), and Scale Invariant Feature Transform (SIFT), which achieve scores of 0.95, 0.92, 0.72, 0.66, 0.63, and 0.50, respectively. Noteworthy is CBFD's recall value of 0.87, representing an improvement of up to 13.6% over Superpoint, DITF, BRIEF, BRISK, SURF, and SIFT. Furthermore, the matching score for the test image is 0.975. The computation time of CBFD is 2.8 ms, at least 6.7% lower than that of the other algorithms. Finally, the plot of feature location distance illustrates that CBFD exhibits minimal distance compared to DITF and Histogram of Oriented Gradients (HOG). These results highlight the speed and robustness of CBFD across various transformations.
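The precision and recall figures reported for descriptors like CBFD are computed against ground-truth correspondences. A minimal sketch (not the CBFD implementation; the (query, reference) pair representation is an assumption):

```python
def match_metrics(predicted, ground_truth):
    # Treat matches as (query_idx, ref_idx) pairs; a predicted match is a
    # true positive iff it appears in the ground-truth correspondence set.
    pred, gt = set(predicted), set(ground_truth)
    tp = len(pred & gt)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gt) if gt else 0.0
    return precision, recall
```

Precision penalizes spurious matches; recall penalizes ground-truth correspondences the descriptor fails to recover.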


Subject(s)
Algorithms , Augmented Reality , Humans , Image Processing, Computer-Assisted/methods
8.
Sensors (Basel) ; 24(14)2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39066150

ABSTRACT

Periacetabular osteotomy (PAO) is an effective approach for the surgical treatment of developmental dysplasia of the hip (DDH). However, due to the complex anatomical structure around the hip joint and the limited field of view (FoV) during the surgery, it is challenging for surgeons to perform a PAO surgery. To solve this challenge, we propose a robot-assisted, augmented reality (AR)-guided surgical navigation system for PAO. The system mainly consists of a robot arm, an optical tracker, and a Microsoft HoloLens 2 headset, which is a state-of-the-art (SOTA) optical see-through (OST) head-mounted display (HMD). For AR guidance, we propose an optical marker-based AR registration method to estimate a transformation from the optical tracker coordinate system (COS) to the virtual space COS such that the virtual models can be superimposed on the corresponding physical counterparts. Furthermore, to guide the osteotomy, the developed system automatically aligns a bone saw with osteotomy planes planned in preoperative images. Then, it provides surgeons with not only virtual constraints to restrict movement of the bone saw but also AR guidance for visual feedback without sight diversion, leading to higher surgical accuracy and improved surgical safety. Comprehensive experiments were conducted to evaluate both the AR registration accuracy and osteotomy accuracy of the developed navigation system. The proposed AR registration method achieved an average mean absolute distance error (mADE) of 1.96 ± 0.43 mm. The robotic system achieved an average center translation error of 0.96 ± 0.23 mm, an average maximum distance of 1.31 ± 0.20 mm, and an average angular deviation of 3.77 ± 0.85°. Experimental results demonstrated both the AR registration accuracy and the osteotomy accuracy of the developed system.
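Estimating a transformation from the optical tracker COS to the virtual-space COS from paired marker positions is a rigid point-set registration problem. A minimal sketch using the standard Kabsch/SVD solution (an assumption for illustration; the paper's actual registration method may differ):

```python
import numpy as np

def rigid_transform(src, dst):
    # Kabsch: find rotation R and translation t minimizing ||R p + t - q||
    # over paired points p (tracker COS) and q (virtual COS), rows as points.
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

With the transform in hand, tracked physical models can be superimposed on their virtual counterparts in the headset.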


Subject(s)
Augmented Reality , Osteotomy , Robotic Surgical Procedures , Surgery, Computer-Assisted , Osteotomy/methods , Osteotomy/instrumentation , Humans , Robotic Surgical Procedures/methods , Robotic Surgical Procedures/instrumentation , Surgery, Computer-Assisted/methods , Acetabulum/surgery
9.
Int J Cardiol ; 412: 132330, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38964558

ABSTRACT

BACKGROUND: Using three-dimensional (3D) modalities for optimal pre-procedural planning in transcatheter aortic valve replacement (TAVR) is critical for procedural success. However, current methods rely on visualizing images on a two-dimensional screen, using shading and colors to create the illusion of 3D, potentially impeding accurate comprehension of the actual anatomical structures. In contrast, new Mixed Reality (MxR)-based software enables accurate 3D visualization, image manipulation, and quantification of measurements. AIMS: The study aims to evaluate the feasibility, reproducibility, and accuracy of measurements of the aortic valve complex obtained with new holographic MxR software (ARTICOR®, Artiness srl, Milano, Italy) compared with widely used software for pre-operative sizing and planning (3mensio Medical Imaging BV, Bilthoven, The Netherlands). METHODS: This retrospective, observational, double-center study enrolled 100 patients with severe aortic stenosis who underwent cardiac computed tomography (CCT) before TAVR. The CCT datasets of volumetric aortic valve images were analyzed using 3mensio and the newly introduced MxR-based software. RESULTS: Ninety-eight percent of the CCT datasets were successfully converted into holographic models. A higher level of agreement between the two software systems was observed for linear metrics (short, long, and average diameter), whereas agreement was lower for area, perimeter, and annulus-to-coronary-ostia distance measurements. Notably, the annulus area, annular perimeter, left ventricular outflow tract (LVOT) area, and LVOT perimeter were consistently and significantly smaller with the MxR-based software than with 3mensio. Excellent interobserver reliability was demonstrated for most measurements, especially direct linear measurements. CONCLUSIONS: Linear measurements of the aortic valve complex obtained with MxR-based software are reproducible and comparable to those from standard CCT analysis with 3mensio. MxR-based software could represent an accurate tool for the pre-procedural planning of TAVR.
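Agreement between two measurement software packages of the kind compared here is commonly quantified with Bland-Altman bias and 95% limits of agreement. A generic sketch (not the study's analysis code):

```python
import numpy as np

def bland_altman(a, b):
    # Bias (mean paired difference) and 95% limits of agreement between
    # two measurement methods applied to the same cases.
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = a - b
    bias = float(d.mean())
    sd = float(d.std(ddof=1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A systematic offset (such as the smaller annulus areas reported for the MxR software) shows up as a nonzero bias; wide limits of agreement indicate poor interchangeability even when the bias is small.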


Subject(s)
Aortic Valve Stenosis , Holography , Transcatheter Aortic Valve Replacement , Transcatheter Aortic Valve Replacement/methods , Humans , Retrospective Studies , Holography/methods , Female , Aortic Valve Stenosis/surgery , Aortic Valve Stenosis/diagnostic imaging , Male , Aged, 80 and over , Aged , Imaging, Three-Dimensional/methods , Aortic Valve/surgery , Aortic Valve/diagnostic imaging , Reproducibility of Results , Augmented Reality , Software
10.
Appl Ergon ; 120: 104340, 2024 Oct.
Article in English | MEDLINE | ID: mdl-38964218

ABSTRACT

Augmented reality (AR) environments are emerging as prominent user interfaces and are gathering significant attention. However, the associated physical strain on users presents a considerable challenge. Against this background, this study explores the impact of movement distance (MD) and target-to-user distance (TTU) on physical load during drag-and-drop (DND) tasks in an AR environment. To address this objective, a user experiment was conducted using a 5 × 5 within-subject design with MD (16, 32, 48, 64, and 80 cm) and TTU (40, 80, 120, 160, and 200 cm) as the variables. Physical load was assessed using normalized electromyography (NEMG) (%MVC) indicators of the upper-extremity muscles and the physical item of the NASA Task Load Index (NASA-TLX). The results revealed significant variations in physical load based on MD and TTU. Specifically, both NEMG and subjective physical workload values increased with increasing MD. Moreover, NEMG increased with decreasing TTU, whereas subjective physical workload scores increased with increasing TTU. Significant interaction effects of MD and TTU on NEMG were also observed. These findings suggest that considering MD and TTU when developing content for interacting with AR objects could potentially alleviate user load.
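NEMG (%MVC) normalization expresses task EMG amplitude relative to a maximum voluntary contraction (MVC) reference recording. A minimal RMS-based sketch (illustrative only; real pipelines also band-pass filter and rectify the raw signal first):

```python
import numpy as np

def percent_mvc(task_signal, mvc_signal):
    # RMS amplitude of the task EMG expressed as a percentage of the RMS
    # of a maximum-voluntary-contraction (MVC) reference recording.
    rms = lambda x: float(np.sqrt(np.mean(np.square(np.asarray(x, float)))))
    return 100.0 * rms(task_signal) / rms(mvc_signal)
```

Normalizing to %MVC makes muscle-load values comparable across muscles and participants, which is what allows the MD and TTU effects to be tested statistically.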


Subject(s)
Augmented Reality , Electromyography , Movement , Muscle, Skeletal , Task Performance and Analysis , Upper Extremity , User-Computer Interface , Humans , Upper Extremity/physiology , Male , Young Adult , Muscle, Skeletal/physiology , Female , Movement/physiology , Adult , Workload , Weight-Bearing/physiology , Virtual Reality
13.
Pan Afr Med J ; 47: 157, 2024.
Article in English | MEDLINE | ID: mdl-38974699

ABSTRACT

The integration of virtual reality (VR) and augmented reality (AR) into telerehabilitation marks a major change in healthcare practice, particularly in neurological and orthopedic rehabilitation. This essay examines the potential of VR and AR to create immersive, interactive environments that facilitate recovery. Recent developments have illustrated their ability to enhance patient engagement and outcomes, especially in addressing complex motor and cognitive rehabilitation needs. The combination of artificial intelligence (AI) with VR and AR will take rehabilitation to the next level by enabling adaptive, responsive treatment programs driven by real-time feedback and predictive analytics. Nevertheless, issues such as availability, cost, and the digital divide, among others, present major obstacles to mass adoption. This essay provides a thorough review of the current state of virtual reality and augmented reality in rehabilitation and examines the potential gains, drawbacks, and future directions.


Subject(s)
Artificial Intelligence , Augmented Reality , Telerehabilitation , Virtual Reality , Humans , Neurological Rehabilitation/methods
14.
BMC Med Educ ; 24(1): 730, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38970090

ABSTRACT

BACKGROUND: Virtual reality (VR) and augmented reality (AR) are emerging technologies that can be used for cardiopulmonary resuscitation (CPR) training. Compared to traditional face-to-face training, VR/AR-based training has the potential to reach a wider audience, but there is debate regarding its effectiveness in improving CPR quality. Therefore, we conducted a meta-analysis to assess the effectiveness of VR/AR training compared with face-to-face training. METHODS: We searched PubMed, Embase, Cochrane Library, Web of Science, CINAHL, China National Knowledge Infrastructure, and Wanfang databases from the inception of these databases up until December 1, 2023, for randomized controlled trials (RCTs) comparing VR- and AR-based CPR training to traditional face-to-face training. Cochrane's tool for assessing bias in RCTs was used to assess the methodological quality of the included studies. We pooled the data using a random-effects model with Review Manager 5.4, and assessed publication bias with Stata 11.0. RESULTS: Nine RCTs (involving 855 participants) were included, of which three were of low risk of bias. Meta-analyses showed no significant differences between VR/AR-based CPR training and face-to-face CPR training in terms of chest compression depth (mean difference [MD], -0.66 mm; 95% confidence interval [CI], -6.34 to 5.02 mm; P = 0.82), chest compression rate (MD, 3.60 compressions per minute; 95% CI, -1.21 to 8.41 compressions per minute; P = 0.14), overall CPR performance score (standardized mean difference, -0.05; 95% CI, -0.93 to 0.83; P = 0.91), as well as the proportion of participants meeting CPR depth criteria (risk ratio [RR], 0.79; 95% CI, 0.53 to 1.18; P = 0.26) and rate criteria (RR, 0.99; 95% CI, 0.72 to 1.35; P = 0.93). The Egger regression test showed no evidence of publication bias. CONCLUSIONS: Our study showed evidence that VR/AR-based training was as effective as traditional face-to-face CPR training. 
Nevertheless, there was substantial heterogeneity among the included studies, which reduced confidence in the findings. Future studies need to establish standardized VR/AR-based CPR training protocols, evaluate the cost-effectiveness of this approach, and assess its impact on actual CPR performance in real-life scenarios and patient outcomes. TRIAL REGISTRATION: CRD42023482286.
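The random-effects pooling described above (as implemented in Review Manager) is commonly the DerSimonian-Laird approach: estimate the between-study variance τ² from Cochran's Q, then reweight each study by the inverse of its total variance. A compact sketch (illustrative, not the authors' code):

```python
import math

def dersimonian_laird(effects, variances):
    # Pool per-study effect sizes (e.g. mean differences) with their
    # sampling variances under a DerSimonian-Laird random-effects model.
    w = [1.0 / v for v in variances]           # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-study variance
    wr = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    swr = sum(wr)
    pooled = sum(wi * e for wi, e in zip(wr, effects)) / swr
    se = math.sqrt(1.0 / swr)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2
```

When the studies are homogeneous (Q ≤ df), τ² is truncated to zero and the model collapses to the fixed-effect estimate.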


Subject(s)
Augmented Reality , Cardiopulmonary Resuscitation , Virtual Reality , Cardiopulmonary Resuscitation/education , Humans , Randomized Controlled Trials as Topic
15.
Sci Rep ; 14(1): 15458, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38965266

ABSTRACT

In total hip arthroplasty (THA), determining the center of rotation (COR) and diameter of the hip joint (acetabulum and femoral head) is essential to restore patient biomechanics. This study investigates on-the-fly determination of hip COR and size, using off-the-shelf augmented reality (AR) hardware. An AR head-mounted device (HMD) was configured with inside-out infrared tracking enabling the determination of surface coordinates using a handheld stylus. Two investigators examined 10 prosthetic femoral heads and cups, and 10 human femurs. The HMD calculated the diameter and COR through sphere fitting. Results were compared to data obtained from either verified prosthetic geometry or post-hoc CT analysis. Repeated single-observer measurements showed a mean diameter error of 0.63 mm ± 0.48 mm for the prosthetic heads and 0.54 mm ± 0.39 mm for the cups. Inter-observer comparison yielded mean diameter errors of 0.28 mm ± 0.71 mm and 1.82 mm ± 1.42 mm for the heads and cups, respectively. Cadaver testing found a mean COR error of 3.09 mm ± 1.18 mm and a diameter error of 1.10 mm ± 0.90 mm. Intra- and inter-observer reliability averaged below 2 mm. AR-based surface mapping using HMD proved accurate and reliable in determining the diameter of THA components with promise in identifying COR and diameter of osteoarthritic femoral heads.
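The COR and diameter are recovered by fitting a sphere to stylus-sampled surface points. A minimal algebraic least-squares sphere fit (a sketch; the HMD software's exact fitting method is not specified in the abstract):

```python
import numpy as np

def fit_sphere(points):
    # Algebraic least squares: each surface point p satisfies
    # |p|^2 = 2 c.p + (r^2 - |c|^2), linear in (c, r^2 - |c|^2).
    P = np.asarray(points, float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = x[:3]                                  # COR estimate
    radius = float(np.sqrt(x[3] + center @ center))
    return center, 2.0 * radius                     # COR and diameter
```

At least four non-coplanar points are required; in practice many stylus samples spread over the articular surface keep the fit well conditioned.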


Subject(s)
Arthroplasty, Replacement, Hip , Augmented Reality , Femur Head , Hip Prosthesis , Humans , Femur Head/surgery , Femur Head/diagnostic imaging , Arthroplasty, Replacement, Hip/instrumentation , Arthroplasty, Replacement, Hip/methods , Tomography, X-Ray Computed , Rotation , Male , Hip Joint/surgery , Hip Joint/diagnostic imaging , Female
16.
BMC Med Inform Decis Mak ; 24(1): 201, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39039522

ABSTRACT

BACKGROUND: Experts are currently investigating the potential applications of the metaverse in healthcare. The metaverse, a groundbreaking concept that arose in the early 21st century through the fusion of virtual reality and augmented reality technologies, holds promise for transforming healthcare delivery. Alongside its implementation, the issue of digital professionalism in healthcare must be addressed. Digital professionalism refers to the knowledge and skills required by healthcare specialists to navigate digital technologies effectively and ethically. This study aims to identify the core principles of digital professionalism for the use of the metaverse in healthcare. METHOD: This study utilized a qualitative design and collected data through semi-structured online interviews with 20 medical information and health informatics specialists from various countries (USA, UK, Sweden, Netherlands, Poland, Romania, Italy, Iran). Data analysis was conducted using the open coding method, wherein concepts (codes) related to the themes of digital professionalism for the metaverse in healthcare were assigned to the data. The analysis was performed using the MAXQDA software (VERBI GmbH, Berlin, Germany). RESULTS: The study revealed ten fundamental principles of digital professionalism for the metaverse in healthcare: Privacy and Security, Informed Consent, Trust and Integrity, Accessibility and Inclusion, Professional Boundaries, Evidence-Based Practice, Continuous Education and Training, Collaboration and Interoperability, Feedback and Improvement, and Regulatory Compliance. CONCLUSION: As the metaverse continues to expand and integrate itself into various industries, including healthcare, it becomes vital to establish principles of digital professionalism to ensure ethical and responsible practices. Healthcare professionals can uphold these principles to maintain ethical standards, safeguard patient privacy, and deliver effective care within the metaverse.


Subject(s)
Professionalism , Humans , Professionalism/standards , Delivery of Health Care/standards , Qualitative Research , Augmented Reality , Medical Informatics , Confidentiality/standards , Informed Consent/standards , Virtual Reality
17.
eNeuro ; 11(8)2024 Aug.
Article in English | MEDLINE | ID: mdl-39013585

ABSTRACT

The electrophysiological response to rewards recorded during laboratory tasks has been well documented, yet little is known about neural response patterns in more naturalistic settings. Here, we combined a mobile EEG system with an augmented reality headset to record event-related brain potentials (ERPs) while participants engaged in a naturalistic operant task to find rewards. Twenty-five participants were asked to navigate toward a west or east goal location marked by floating orbs; once participants reached the goal location, the orb signified a reward (5 cents) or no-reward (0 cents) outcome. Following the outcome, participants returned to a start location marked by floating purple rings, and once they were standing in the middle, a 3 s counter signaled the next trial, for a total of 200 trials. Consistent with previous research, reward feedback evoked the reward positivity, an ERP component believed to index the sensitivity of the anterior cingulate cortex to reward prediction error signals. The reward positivity peaked at ∼230 ms with a maximum at channel FCz (M = -0.695 ± 0.23 µV) and was significantly different from zero (p < 0.01). Participants took ∼3.38 s to reach the goal location and exhibited a general lose-shift response strategy (68.3 ± 3.5%) and post-error slowing. Overall, these novel findings support the idea that combining mobile EEG with augmented reality technology is a feasible way to enhance the ecological validity of human electrophysiological studies of goal-directed behavior and a step toward a new era of human cognitive neuroscience research that blurs the line between laboratory and reality.
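An ERP such as the reward positivity is obtained by averaging baseline-corrected epochs time-locked to feedback events. A single-channel sketch (illustrative; real mobile-EEG pipelines add filtering and artifact rejection, and the window parameters here are assumptions):

```python
import numpy as np

def erp_average(eeg, event_samples, fs, tmin=-0.2, tmax=0.6):
    # Cut epochs spanning [tmin, tmax) seconds around each event sample,
    # subtract each epoch's pre-event baseline mean, and average trials.
    pre, post = round(-tmin * fs), round(tmax * fs)
    epochs = []
    for s in event_samples:
        ep = np.asarray(eeg[s - pre:s + post], float)
        epochs.append(ep - ep[:pre].mean())  # baseline correction
    return np.mean(epochs, axis=0)
```

Averaging reward and no-reward epochs separately and subtracting the two waveforms isolates the reward positivity as a difference wave at FCz.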


Subject(s)
Augmented Reality , Electroencephalography , Evoked Potentials , Reward , Humans , Male , Electroencephalography/methods , Female , Young Adult , Adult , Evoked Potentials/physiology , Conditioning, Operant/physiology , Brain/physiology
18.
Hum Mov Sci ; 96: 103254, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39084100

ABSTRACT

Bilateral coordination is commonly impaired in neurodevelopmental conditions including cerebral palsy, developmental coordination disorder, and autism spectrum disorder. However, we lack objective clinical assessments that can quantify bilateral coordination in a clinically feasible manner and determine age-based norms to identify impairments. The objective of this study was to use augmented reality and computer vision to characterize bilateral reaching abilities in typically developing children. Typically developing children (n = 133) ages 6-17 years completed symmetric and asymmetric bilateral reaching tasks in an augmented reality game environment. We analyzed the number of target pairs they could reach in 50 s as well as the time lag between their hands reaching the targets. We found that performance on both tasks developed in parallel, with development slowing but not plateauing after age 12. Children performed better on the symmetric task than asymmetric, both in targets reached and with shorter hand lags. Variability between children in hand lag decreased with age. We also found gender differences with females outperforming males, which were most pronounced in the 10-11 year olds. Overall, this study demonstrates parallel development through childhood and adolescence of symmetric and asymmetric reaching abilities. Furthermore, it demonstrates the ability to quantify bilateral coordination using computer vision and augmented reality, which can be applied to assess clinical populations.


Subject(s)
Augmented Reality , Psychomotor Performance , Humans , Child , Male , Female , Adolescent , Psychomotor Performance/physiology , Video Games , Child Development/physiology , Functional Laterality , Motor Skills/physiology , Hand/physiology , Sex Factors
19.
Article in English | MEDLINE | ID: mdl-39042524

ABSTRACT

Extended reality (XR) technology combines physical reality with computer-synthesized virtuality to deliver immersive experiences to users. Virtual reality (VR) and augmented reality (AR) are two subdomains of XR with different immersion levels. Both have the potential to be combined with robot-assisted training protocols to maximize improvement in postural control. In this study, we conducted a randomized controlled experiment with sixty-three healthy subjects to compare the effectiveness of robot-assisted posture training combined with VR or AR against robotic training alone. A robotic Trunk Support Trainer (TruST) was employed to deliver assistive force at the trunk as subjects moved beyond their stability limits during training. Our results showed that both VR and AR significantly enhanced the outcomes of the TruST intervention. However, the VR group experienced more simulator sickness than the AR group, suggesting that AR is better suited to sitting posture training in conjunction with the TruST intervention. Our findings highlight the added value of XR in robot-assisted training and provide novel insights into the differences between AR and VR when integrated into a robotic training protocol. In addition, we developed a custom XR application well suited to the requirements of the TruST intervention. Our approach can be extended to other studies to develop novel XR-enhanced robotic training platforms.


Subject(s)
Augmented Reality , Robotics , Virtual Reality , Humans , Male , Female , Adult , Young Adult , Healthy Volunteers , Postural Balance/physiology , Posture/physiology , Torso/physiology , Sitting Position
20.
Arch Orthop Trauma Surg ; 144(7): 3217-3226, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38960934

ABSTRACT

PURPOSE: Patients with total knee arthroplasty (TKA) often suffer from severe postoperative pain, which seriously hinders postoperative rehabilitation. Extended reality (XR), including virtual reality, augmented reality, and mixed reality, has been increasingly used to relieve pain after TKA. The purpose of this study was to evaluate the effectiveness of XR on relieving pain after TKA. METHODS: The electronic databases including PubMed, Embase, Web of Science, Cochrane Central Register of Controlled Trials (CENTRAL), and clinicaltrials.gov were searched for studies from inception to July 20, 2023. The outcomes were pain score, anxiety score, and physiological parameters related to pain. Meta-analysis was performed using the Review Manager 5.4 software. RESULTS: Overall, 11 randomized control trials (RCTs) with 887 patients were included. The pooled results showed XR had lower pain scores (SMD = - 0.31, 95% CI [- 0.46 to - 0.16], P < 0.0001) and anxiety scores (MD = - 3.95, 95% CI [- 7.76 to - 0.13], P = 0.04) than conventional methods. The subgroup analysis revealed XR had lower pain scores within 2 weeks postoperatively (SMD = - 0.49, 95% CI [- 0.76 to - 0.22], P = 0.0004) and XR had lower pain scores when applying XR combined with conventional methods (SMD = - 0.43, 95% CI [- 0.65 to - 0.20], P = 0.0002). CONCLUSION: This systematic review and meta-analysis found applying XR could significantly reduce postoperative pain and anxiety after TKA. When XR was combined with conventional methods, postoperative pain can be effectively relieved, especially within 2 weeks after the operation. XR is an effective non-pharmacological analgesia scheme.


Subject(s)
Arthroplasty, Replacement, Knee , Pain, Postoperative , Randomized Controlled Trials as Topic , Humans , Arthroplasty, Replacement, Knee/adverse effects , Pain, Postoperative/etiology , Pain, Postoperative/prevention & control , Pain Measurement , Pain Management/methods , Augmented Reality , Treatment Outcome , Anxiety/etiology , Anxiety/prevention & control