Results 1 - 20 of 2,353
1.
BMC Med Educ ; 24(1): 730, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38970090

ABSTRACT

BACKGROUND: Virtual reality (VR) and augmented reality (AR) are emerging technologies that can be used for cardiopulmonary resuscitation (CPR) training. Compared to traditional face-to-face training, VR/AR-based training has the potential to reach a wider audience, but there is debate regarding its effectiveness in improving CPR quality. Therefore, we conducted a meta-analysis to assess the effectiveness of VR/AR training compared with face-to-face training. METHODS: We searched PubMed, Embase, Cochrane Library, Web of Science, CINAHL, China National Knowledge Infrastructure, and Wanfang databases from the inception of these databases up until December 1, 2023, for randomized controlled trials (RCTs) comparing VR- and AR-based CPR training to traditional face-to-face training. Cochrane's tool for assessing bias in RCTs was used to assess the methodological quality of the included studies. We pooled the data using a random-effects model with Review Manager 5.4, and assessed publication bias with Stata 11.0. RESULTS: Nine RCTs (involving 855 participants) were included, of which three were at low risk of bias. Meta-analyses showed no significant differences between VR/AR-based CPR training and face-to-face CPR training in terms of chest compression depth (mean difference [MD], -0.66 mm; 95% confidence interval [CI], -6.34 to 5.02 mm; P = 0.82), chest compression rate (MD, 3.60 compressions per minute; 95% CI, -1.21 to 8.41 compressions per minute; P = 0.14), overall CPR performance score (standardized mean difference, -0.05; 95% CI, -0.93 to 0.83; P = 0.91), or the proportion of participants meeting CPR depth criteria (risk ratio [RR], 0.79; 95% CI, 0.53 to 1.18; P = 0.26) and rate criteria (RR, 0.99; 95% CI, 0.72 to 1.35; P = 0.93). The Egger regression test showed no evidence of publication bias. CONCLUSIONS: Our study provided evidence that VR/AR-based training was as effective as traditional face-to-face CPR training. 
Nevertheless, there was substantial heterogeneity among the included studies, which reduced confidence in the findings. Future studies need to establish standardized VR/AR-based CPR training protocols, evaluate the cost-effectiveness of this approach, and assess its impact on actual CPR performance in real-life scenarios and patient outcomes. TRIAL REGISTRATION: CRD42023482286.
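The abstract above pools mean differences under a random-effects model (run in Review Manager). As an illustration only, the DerSimonian-Laird estimator commonly behind such pooling can be sketched in Python; the effect sizes and standard errors passed in would come from the individual trials, and any numbers used below are hypothetical, not data from this review:

```python
import math

def dersimonian_laird(effects, ses):
    """Pool per-study mean differences with DerSimonian-Laird random effects.

    effects: per-study mean differences; ses: their standard errors.
    Returns (pooled effect, 95% CI, tau^2 between-study variance).
    """
    w = [1 / s ** 2 for s in ses]                      # fixed-effect weights
    sw = sum(w)
    mu_fe = sum(wi * e for wi, e in zip(w, effects)) / sw
    q = sum(wi * (e - mu_fe) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)      # between-study variance
    wr = [1 / (s ** 2 + tau2) for s in ses]            # random-effects weights
    mu = sum(wi * e for wi, e in zip(wr, effects)) / sum(wr)
    se = math.sqrt(1 / sum(wr))
    return mu, (mu - 1.96 * se, mu + 1.96 * se), tau2
```

The returned tuple mirrors the "MD (95% CI)" format reported in the abstract.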


Subject(s)
Augmented Reality; Cardiopulmonary Resuscitation; Virtual Reality; Cardiopulmonary Resuscitation/education; Humans; Randomized Controlled Trials as Topic
2.
Heliyon ; 10(12): e32852, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975124

ABSTRACT

Nowadays, with the increase in high-rise buildings, emergency evacuation is an indispensable part of urban environment management. Because various disaster incidents occur in indoor environments, research has concentrated on ways to deal with the different difficulties of indoor emergency evacuation. Although global navigation satellite systems (GNSSs) such as the global positioning system (GPS) come in handy in outdoor spaces, they are not of much use in enclosed places, where satellite signals cannot penetrate easily. Therefore, other approaches must be considered for pedestrian navigation to cope with the indoor positioning problem. Another problem in such environments is obtaining information about the building's indoor space. The majority of studies have used prepared maps of the building, which limits their methodology to that specific study area. In this study, however, we propose an end-to-end method that takes advantage of the building information model (BIM) of the building, making it applicable to every structure that has an equivalent BIM. Moreover, we used a mixture of Wi-Fi fingerprinting and pedestrian dead reckoning (PDR), with relatively higher accuracy compared to other similar methods, for navigating the user to the exit point. To implement PDR, we used smartphone sensors to calculate user steps and direction. In addition, the navigational information was superimposed on the smartphone screen using augmented reality (AR) technology, thus communicating direction information in a user-friendly manner. Finally, the AR mobile emergency evacuation application developed was assessed with a sample audience. After an experience with the app, participants filled out a questionnaire designed in the system usability scale (SUS) format. The evaluation results showed that the app achieved acceptable usability.
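The PDR component described above (step counting plus heading from smartphone sensors) is commonly implemented as peak detection on the accelerometer magnitude followed by a position update along the current heading. A minimal sketch, with an assumed fixed step length and an arbitrary detection threshold (the paper's actual parameters are not given in the abstract):

```python
import math

STEP_LENGTH_M = 0.7   # assumed average stride; real systems calibrate per user
ACC_THRESHOLD = 10.8  # m/s^2 magnitude above which a peak counts as a step

def pdr_update(position, heading_rad, acc_magnitudes):
    """Advance a dead-reckoned (x, y) position by one detected step, if any.

    acc_magnitudes: recent accelerometer |a| samples in m/s^2.
    A sample counts as a step when it is a local maximum above the threshold.
    """
    x, y = position
    for prev, cur, nxt in zip(acc_magnitudes, acc_magnitudes[1:],
                              acc_magnitudes[2:]):
        if cur > ACC_THRESHOLD and cur > prev and cur > nxt:
            x += STEP_LENGTH_M * math.sin(heading_rad)
            y += STEP_LENGTH_M * math.cos(heading_rad)
            break  # count at most one step per window
    return (x, y)
```

In a full pipeline the heading would come from the gyroscope/compass fusion and the absolute position would be periodically corrected by the Wi-Fi fingerprinting fixes mentioned above.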

3.
Comput Struct Biotechnol J ; 24: 451-463, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38975288

ABSTRACT

This report summarises the SMARTCLAP research project, which employs a user-centred design approach to develop a revolutionary smart product-service system. The system offers personalised motivation to encourage children with cerebral palsy to participate more actively during their occupational therapy sessions, while providing paediatric occupational therapists with an optimal tool to monitor children's progress from one session to another. The product-service system developed includes a smart wearable device called DigiClap, used to interact with a serious game in an augmented reality environment. The report highlights the research methodology used to advance the technology readiness level from 4 to 6, acknowledging the contribution of the consortium team and funding source. As part of the technology's maturation, DigiClap and the respective serious game were evaluated with target users to identify the system's impact in supporting the children's overall participation and hand function, and to gather feedback from occupational therapists and caregivers on this novel technology. The outcomes of this study are discussed, highlighting limitations and lessons learned. The report also outlines future work and further funding for the sustainability of the project and for reaching other individuals who have upper-limb limitations. Ultimately, the potential of DigiClap and the overall achievements of this project are discussed.

4.
Appl Neuropsychol Adult ; : 1-4, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976768

ABSTRACT

The integration of virtual, mixed, and augmented reality technologies in cognitive neuroscience and neuropsychology represents a transformative frontier. In this Commentary, we conducted a meta-analysis of studies that explored the impact of Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR) on cognitive neuroscience and neuropsychology. Our review highlights the versatile applications of VR, ranging from spatial cognition assessments to rehabilitation for Traumatic Brain Injury. We found that MR and AR offer innovative avenues for cognitive training, particularly in memory-related disorders. The applications extend to addressing social cognition disorders and serving as therapeutic interventions for mental health issues. Collaborative efforts between neuroscientists and technology developers are crucial, with reinforcement learning and neuroimaging studies enhancing the potential for improved outcomes. Ethical considerations, including informed consent, privacy, and accessibility, demand careful attention. Our review identified common themes across the analyzed studies, including the potential of VR technologies in cognitive neuroscience and neuropsychology, the use of MR and AR in memory research, and the role of VR in neurorehabilitation and therapy.

5.
Pan Afr Med J ; 47: 157, 2024.
Article in English | MEDLINE | ID: mdl-38974699

ABSTRACT

The integration of virtual reality (VR) and augmented reality (AR) into telerehabilitation marks a major change in healthcare practice, particularly in neurological and orthopedic rehabilitation. This essay examines the potential of VR and AR to create immersive, interactive environments that facilitate recovery. Recent developments have illustrated their ability to enhance patient engagement and outcomes, especially in addressing complex motor and cognitive rehabilitation needs. The combination of artificial intelligence (AI) with VR and AR promises to take rehabilitation to the next level by enabling adaptive, responsive treatment programs driven by real-time feedback and predictive analytics. Nevertheless, issues such as availability, cost, and the digital divide, among many others, present major obstacles to mass adoption. This essay provides a thorough review of the current state of virtual reality and augmented reality in rehabilitation and examines their potential gains, drawbacks, and future directions from multiple perspectives.


Subject(s)
Artificial Intelligence; Augmented Reality; Telerehabilitation; Virtual Reality; Humans; Neurological Rehabilitation/methods
6.
ACS Nano ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38958405

ABSTRACT

Facing the challenge of information security in the current era of information technology, optical encryption based on metasurfaces presents a promising solution. However, most metasurface-based encryption techniques rely on limited decoding keys and struggle to achieve multidimensional complex encryption, which hinders progress in optical storage capacity and puts encrypted information at risk of disclosure. Here, we propose and experimentally demonstrate a multidimensional encryption system based on chip-integrated metasurfaces that simultaneously manipulates three optical parameters: wavelength, direction, and polarization. Up to eight-channel augmented reality (AR) holograms are concealed by fused near- and far-field encryption; they can be extracted only by correctly providing the three-dimensional decoding keys and are then vividly displayed to the authorized user with low crosstalk, high definition, and no zero-order speckle noise. We envision that this miniature chip-integrated metasurface strategy for multidimensional encryption offers a feasible route toward enhancing encryption capacity and information security for anticounterfeiting and optical cryptographic storage.

7.
J Neurol Surg B Skull Base ; 85(4): 363-369, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38966300

ABSTRACT

Objective: The aim of this work was the development of an augmented reality system including the functionality of conventional surgical navigation systems. Methods: An application was developed for the Microsoft HoloLens 2 augmented reality headset. It detects the position of the patient as well as the position of surgical instruments in real time and displays them within two-dimensional (2D) magnetic resonance imaging or computed tomography (CT) images. The surgical pointer instrument, including a pattern recognized by the HoloLens 2 sensors, was created with three-dimensional (3D) printing. The technical concept was demonstrated on a cadaver skull to identify anatomical landmarks. Results: With the help of the HoloLens 2 and its sensors, the real-time position of the surgical pointer instrument could be shown. The position of the 3D-printed pointer with colored pattern could be recognized within 2D CT images both when stationary and in motion at a cadaver skull. Feasibility was demonstrated for the clinical application of transsphenoidal pituitary surgery. Conclusion: The HoloLens 2 has high potential for use as a surgical navigation system. In subsequent studies, a further accuracy evaluation will be performed to obtain valid data for comparison with conventional surgical navigation systems. In addition to transsphenoidal pituitary surgery, it could also be applied in other surgical disciplines.

9.
Physiol Behav ; 283: 114623, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38959990

ABSTRACT

BACKGROUND: Exercise has positive effects on psychological well-being, with team sports often associated with superior mental health compared to individual sports. Augmented reality (AR) technology has the potential to convert solitary exercise into multi-person exercise. Given the role of oxytocin in mediating the psychological benefits of exercise and sports, this study aimed to investigate the impact of AR-based multi-person exercise on mood and salivary oxytocin levels. METHODS: Fourteen participants underwent three distinct regimens: non-exercise (Rest), standard solitary cycling exercise (Ex), and AR-based multi-person cycling exercise (Ex+AR). In both Ex and Ex+AR conditions, participants engaged in cycling at a self-regulated pace to maintain a Rating of Perceived Exertion of 10. In the Ex+AR condition, participants' avatars were projected onto a tablet screen, allowing them to cycle alongside ten other virtual avatars in an AR environment. Mood states and saliva samples were collected before and immediately after each 10-minute regimen. Subsequently, salivary oxytocin levels were measured. RESULTS: Notably, only the Ex+AR condition significantly improved mood states associated with depression-dejection and exhibited a non-significant trend toward suppressing anger-hostility in participants. Moreover, the Ex+AR condition led to a significant elevation in salivary oxytocin levels, while the Ex condition showed a non-significant trend toward an increase. However, changes in salivary oxytocin did not show a significant correlation with changes in mood states. CONCLUSIONS: These findings suggest that Ex+AR enhances mood states and promotes oxytocin release. AR-based multi-person exercise may offer greater psychological benefits compared to standard solitary exercise, although the relationship between oxytocin and mood changes remains inconclusive.

10.
Appl Ergon ; 120: 104340, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964218

ABSTRACT

Augmented reality (AR) environments are emerging as prominent user interfaces and gathering significant attention. However, the associated physical strain on users presents a considerable challenge. Against this background, this study explores the impact of movement distance (MD) and target-to-user distance (TTU) on physical load during drag-and-drop (DND) tasks in an AR environment. To address this objective, a user experiment was conducted using a 5 × 5 within-subject design with MD (16, 32, 48, 64, and 80 cm) and TTU (40, 80, 120, 160, and 200 cm) as the variables. Physical load was assessed using normalized electromyography (NEMG) (%MVC) indicators of the upper-extremity muscles and the physical item of the NASA Task Load Index (NASA-TLX). The results revealed significant variations in physical load based on MD and TTU. Specifically, both the NEMG and subjective physical workload values increased with increasing MD. Moreover, NEMG increased with decreasing TTU, whereas subjective physical workload scores increased with increasing TTU. Significant interaction effects of MD and TTU on NEMG were also observed. These findings suggest that considering MD and TTU when developing content for interacting with AR objects in AR environments could potentially alleviate user load.
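The %MVC normalization behind the NEMG indicators above expresses task-period EMG amplitude (typically RMS) relative to the amplitude recorded during a maximal voluntary contraction. A minimal sketch, with illustrative signal values rather than real EMG recordings:

```python
import math

def rms(signal):
    """Root-mean-square amplitude of a sampled signal."""
    return math.sqrt(sum(v * v for v in signal) / len(signal))

def normalized_emg(task_emg, mvc_emg):
    """Express task EMG amplitude as %MVC: task RMS relative to the RMS
    recorded during a maximal voluntary contraction (MVC) trial."""
    return 100.0 * rms(task_emg) / rms(mvc_emg)
```

Normalizing to %MVC makes muscle-load values comparable across participants and muscles, which is what allows the MD/TTU comparisons reported above.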

11.
Article in English | MEDLINE | ID: mdl-38960934

ABSTRACT

PURPOSE: Patients with total knee arthroplasty (TKA) often suffer from severe postoperative pain, which seriously hinders postoperative rehabilitation. Extended reality (XR), including virtual reality, augmented reality, and mixed reality, has been increasingly used to relieve pain after TKA. The purpose of this study was to evaluate the effectiveness of XR in relieving pain after TKA. METHODS: The electronic databases PubMed, Embase, Web of Science, Cochrane Central Register of Controlled Trials (CENTRAL), and clinicaltrials.gov were searched for studies from inception to July 20, 2023. The outcomes were pain score, anxiety score, and physiological parameters related to pain. Meta-analysis was performed using Review Manager 5.4. RESULTS: Overall, 11 randomized controlled trials (RCTs) with 887 patients were included. The pooled results showed XR had lower pain scores (SMD = -0.31, 95% CI [-0.46 to -0.16], P < 0.0001) and anxiety scores (MD = -3.95, 95% CI [-7.76 to -0.13], P = 0.04) than conventional methods. Subgroup analysis revealed XR had lower pain scores within 2 weeks postoperatively (SMD = -0.49, 95% CI [-0.76 to -0.22], P = 0.0004) and when XR was combined with conventional methods (SMD = -0.43, 95% CI [-0.65 to -0.20], P = 0.0002). CONCLUSION: This systematic review and meta-analysis found that applying XR could significantly reduce postoperative pain and anxiety after TKA. When XR was combined with conventional methods, postoperative pain was effectively relieved, especially within 2 weeks after the operation. XR is an effective non-pharmacological analgesia scheme.
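The SMDs pooled above are typically computed per study as a standardized mean difference such as Hedges' g (Cohen's d with a small-sample correction). A sketch of that per-study calculation, as an illustration rather than the review's actual software pipeline:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference between two groups with Hedges'
    small-sample correction: g = J * (m1 - m2) / pooled SD."""
    sp = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                   / (n1 + n2 - 2))          # pooled standard deviation
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    return j * d
```

Because pain scales differ across trials (VAS, NRS, etc.), standardizing each trial's mean difference this way is what makes the pooled SMD of -0.31 interpretable across studies.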

12.
Neurospine ; 21(2): 432-439, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38955520

ABSTRACT

OBJECTIVE: Spine surgeons are often at risk of radiation exposure due to intraoperative fluoroscopy, leading to health concerns such as carcinogenesis. This is due to the increasing use of percutaneous pedicle screw (PPS) in spinal surgeries, resulting from the widespread adoption of minimally invasive spine stabilization. This study aimed to elucidate the effectiveness of smart glasses (SG) in PPS insertion under fluoroscopy. METHODS: SG were used as an alternative screen for fluoroscopic images. Operators A (2-year experience in spine surgery) and B (9-year experience) inserted the PPS into the bilateral L1-5 pedicles of the lumbar model bone under fluoroscopic guidance, repeating this procedure twice with and without SG (groups SG and N-SG, respectively). Each vertebral body's insertion time, radiation dose, and radiation exposure time were measured, and the deviation in screw trajectories was evaluated. RESULTS: The groups SG and N-SG showed no significant difference in insertion time for the overall procedure and each operator. However, group SG had a significantly shorter radiation exposure time than group N-SG for the overall procedure (109.1 ± 43.5 seconds vs. 150.9 ± 38.7 seconds; p = 0.003) and operator A (100.0 ± 29.0 seconds vs. 157.9 ± 42.8 seconds; p = 0.003). The radiation dose was also significantly lower in group SG than in group N-SG for the overall procedure (1.3 ± 0.6 mGy vs. 1.7 ± 0.5 mGy; p = 0.023) and operator A (1.2 ± 0.4 mGy vs. 1.8 ± 0.5 mGy; p = 0.013). The 2 groups showed no significant difference in screw deviation. CONCLUSION: The application of SG in fluoroscopic imaging for PPS insertion holds potential as a useful method for reducing radiation exposure.

13.
Sci Rep ; 14(1): 15458, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38965266

ABSTRACT

In total hip arthroplasty (THA), determining the center of rotation (COR) and diameter of the hip joint (acetabulum and femoral head) is essential to restore patient biomechanics. This study investigates on-the-fly determination of hip COR and size, using off-the-shelf augmented reality (AR) hardware. An AR head-mounted device (HMD) was configured with inside-out infrared tracking enabling the determination of surface coordinates using a handheld stylus. Two investigators examined 10 prosthetic femoral heads and cups, and 10 human femurs. The HMD calculated the diameter and COR through sphere fitting. Results were compared to data obtained from either verified prosthetic geometry or post-hoc CT analysis. Repeated single-observer measurements showed a mean diameter error of 0.63 mm ± 0.48 mm for the prosthetic heads and 0.54 mm ± 0.39 mm for the cups. Inter-observer comparison yielded mean diameter errors of 0.28 mm ± 0.71 mm and 1.82 mm ± 1.42 mm for the heads and cups, respectively. Cadaver testing found a mean COR error of 3.09 mm ± 1.18 mm and a diameter error of 1.10 mm ± 0.90 mm. Intra- and inter-observer reliability averaged below 2 mm. AR-based surface mapping using HMD proved accurate and reliable in determining the diameter of THA components with promise in identifying COR and diameter of osteoarthritic femoral heads.
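The sphere fitting used above to recover COR and diameter from stylus surface points can be posed as a linear least-squares problem: each surface point p on a sphere with center c and radius r satisfies |p|² = 2c·p + (r² - |c|²), which is linear in the unknowns. A self-contained sketch (an illustration, not the HMD's actual implementation):

```python
import math

def fit_sphere(points):
    """Least-squares sphere fit from 3D surface points.

    Solves A u = b in the least-squares sense, with rows A = [2x, 2y, 2z, 1],
    b = x^2 + y^2 + z^2, and unknowns u = (cx, cy, cz, r^2 - |c|^2).
    Returns (center, radius).
    """
    # Accumulate the 4x4 normal equations (A^T A) u = A^T b.
    ata = [[0.0] * 4 for _ in range(4)]
    atb = [0.0] * 4
    for x, y, z in points:
        row = (2 * x, 2 * y, 2 * z, 1.0)
        b = x * x + y * y + z * z
        for i in range(4):
            atb[i] += row[i] * b
            for j in range(4):
                ata[i][j] += row[i] * row[j]
    # Solve by Gaussian elimination with partial pivoting.
    m = [ata[i] + [atb[i]] for i in range(4)]
    for col in range(4):
        piv = max(range(col, 4), key=lambda k: abs(m[k][col]))
        m[col], m[piv] = m[piv], m[col]
        for r_i in range(col + 1, 4):
            f = m[r_i][col] / m[col][col]
            for c_i in range(col, 5):
                m[r_i][c_i] -= f * m[col][c_i]
    u = [0.0] * 4
    for i in range(3, -1, -1):
        u[i] = (m[i][4] - sum(m[i][j] * u[j] for j in range(i + 1, 4))) / m[i][i]
    cx, cy, cz, k = u
    return (cx, cy, cz), math.sqrt(k + cx * cx + cy * cy + cz * cz)
```

At least four non-coplanar points are needed; in practice the stylus sweep provides many more, and the least-squares form averages out the per-point tracking noise quantified in the abstract.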


Subject(s)
Arthroplasty, Replacement, Hip; Augmented Reality; Femur Head; Hip Prosthesis; Humans; Femur Head/surgery; Femur Head/diagnostic imaging; Arthroplasty, Replacement, Hip/instrumentation; Arthroplasty, Replacement, Hip/methods; Tomography, X-Ray Computed; Rotation; Male; Hip Joint/surgery; Hip Joint/diagnostic imaging; Female
14.
J Dent ; 148: 105217, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38944264

ABSTRACT

OBJECTIVES: Tooth preparation is complicated because it requires the preparation of an abutment while simultaneously predicting the ideal shape of the tooth. This study aimed to develop and evaluate a system using augmented reality (AR) head-mounted displays (HMDs) that provide dynamic navigation capabilities for tooth preparation. METHODS: The proposed system utilizes optical see-through HMDs to overlay digital information onto the real world and enrich the user's environment. By integrating tracking algorithms and three-dimensional modeling, the system provides real-time visualization and navigation capabilities during tooth preparation by using two different visualization techniques. The experimental setup involved a comprehensive analysis of the distance to the surface and cross-sectional angles between the ideal and prepared teeth using three scenarios: traditional (without AR), overlay (AR-assisted visualization of the ideal prepared tooth), and cross-sectional (AR-assisted visualization with cross-sectional views and angular displays). RESULTS: A user study (N = 24) revealed that the cross-sectional approach was more effective for angle adjustment and reduced the occurrence of over-reduction. Additional questionnaires revealed that the AR-assisted approaches were perceived as less difficult, with the cross-sectional approach excelling in terms of performance. CONCLUSIONS: Visualization and navigation using cross-sectional approaches have the potential to support safer tooth preparation with less overreduction than traditional and overlay approaches do. The angular displays provided by the cross-sectional approach are considered helpful for tooth preparation. CLINICAL SIGNIFICANCE: The AR navigation system can assist dentists during tooth preparation and has the potential to enhance the accuracy and safety of prosthodontic treatment.

15.
Psych J ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38894509

ABSTRACT

Augmented reality (AR) technology allows virtual objects to be superimposed on the real-world environment, offering significant potential for improving cognitive assessments and rehabilitation in the field of visuospatial learning. This study examines how patients with acquired brain injury (ABI) evaluate the functions and usability of a SLAM-based smartphone AR app that assesses object-location skills. Ten ABI patients performed a spatial-recall task for four objects using the AR app. Data collected from 10 healthy participants provided reference values for the best performance. Participants' perceptions of the AR app/technology and its usability were investigated. The results indicate lower effectiveness in solving the task in the patient group, as the time needed to complete it was related to their level of impairment. The patients showed lower, yet positive, scores on factors related to app usability and acceptance (e.g., mental effort and satisfaction, respectively). More patients reported entertainment as a positive aspect of the app. Patients' perceived enjoyment was related to concentration and calm, whereas usability was associated with perceived competence, expertise, and a lower level of physical effort. For patients, the sensory aspects of the objects were related to their presence, while for healthy participants they were related to enjoyment and required effort. The results show that AR seems to be a promising tool for assessing spatial orientation in the target patient population.

16.
Cureus ; 16(5): e60479, 2024 May.
Article in English | MEDLINE | ID: mdl-38882985

ABSTRACT

BACKGROUND: We developed a 3D camera system to track motion in a surgical field. This system has the potential to introduce augmented reality (AR) systems non-invasively, eliminating the need for the invasive AR markers conventionally required. The present study was performed to verify the real-time tracking accuracy of this system, assess the feasibility of integrating it into the surgical workflow, and establish its potential to enhance the accuracy and efficiency of orthopedic procedures. METHODS: To evaluate the accuracy of AR technology using a 3D camera, a forearm bone model was created. The forearm model was captured with the 3D camera, and accuracy was verified in terms of the positional relationship with a 3D bone model created from previously imaged CT data. Images of the surgical field (capturing the actual forearm) were taken and saved in nine poses by rotating the forearm from pronation to supination. The alignment of the reference points was computed at the three points of CT versus the three points of the 3D camera, yielding a 3D rotation matrix representing the positional relationship. The original system used a stereo-vision-based 3D camera with a depth image resolution of 1280×720 pixels at 30 frames per second, a lens field of view of 64°, and a baseline of 3 cm, capable of optimally acquiring real-time 3D data at a distance of 40-60 cm from the subject. In the modified system, the following modifications were made to improve tracking performance: (1) color filter processing was changed from HSV to RGB, (2) positional detection accuracy was modified using supporting markers 8 mm in diameter, and (3) the detection of marker positions was stabilized by calculating the marker position for each frame. 
Tracking accuracy was examined with the original system and modified system for the following parameters: differences in the rotation matrix, maximum and minimum inter-reference point errors between CT-based and camera-based 3D data, and the average error for the three reference points. RESULTS: In the original system, the average difference in rotation matrices was 5.51±2.68 mm. Average minimum and maximum errors were 1.10±0.61 and 15.53±12.51 mm, respectively. The average error of reference points was 6.26±4.49 mm. In the modified system, the average difference in rotation matrices was 4.22±1.73 mm. Average minimum and maximum errors were 0.79±0.49 and 1.94±0.87 mm, respectively. The average error of reference points was 1.41±0.58 mm. In the original system, once tracking failed, it was difficult to recover tracking accuracy. This resulted in a large maximum error in supination positions. These issues were resolved by the modified system. Significant improvements were achieved in maximum errors and average errors using the modified system (P<0.05). CONCLUSION: AR technology using a 3D camera was developed. This system allows direct comparisons of 3D data from preoperative CT scans with 3D data acquired from the surgical field using a 3D camera. This method has the advantage of introducing AR into the surgical field without invasive markers.
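The alignment step described above, computing a 3D rotation from three corresponding reference points, can be sketched by building an orthonormal frame from each point triplet and composing the two frames. This is one simple alternative to an SVD-based (Kabsch) solution and is illustrative only, not the authors' implementation:

```python
import math

def _sub(a, b): return [a[i] - b[i] for i in range(3)]
def _dot(a, b): return sum(a[i] * b[i] for i in range(3))
def _cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]
def _unit(a):
    n = math.sqrt(_dot(a, a))
    return [x / n for x in a]

def frame(p1, p2, p3):
    """Right-handed orthonormal frame from three non-collinear points."""
    e1 = _unit(_sub(p2, p1))
    v = _sub(p3, p1)
    e2 = _unit([v[i] - _dot(v, e1) * e1[i] for i in range(3)])
    return [e1, e2, _cross(e1, e2)]

def rotation_between(ct_pts, cam_pts):
    """3x3 rotation R mapping CT-frame directions onto camera-frame
    directions, via R = B A^T with A, B the two point-triplet frames."""
    A, B = frame(*ct_pts), frame(*cam_pts)
    return [[sum(B[k][i] * A[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

With noisy markers a least-squares (Kabsch/SVD) fit over all correspondences would be preferable; with exactly three reference points, as described above, the two approaches coincide up to noise handling.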

17.
Beijing Da Xue Xue Bao Yi Xue Ban ; 56(3): 541-545, 2024 Jun 18.
Article in Chinese | MEDLINE | ID: mdl-38864142

ABSTRACT

OBJECTIVE: To evaluate the outcome of augmented reality technology in the learning of oral and maxillofacial anatomy. METHODS: This study was conducted on undergraduate students at Peking University School of Stomatology who were learning oral and maxillofacial anatomy. Image data were selected according to the experiment content, and important blood vessels and bone tissue structures, such as the upper and lower jaws and the neck arteries and veins, were reconstructed in 3D (three-dimensional) by digital software to generate experiment models; the reconstructed models were encrypted and stored in the cloud. The QR (quick response) code corresponding to each 3D model was scanned with a networked mobile device to obtain augmented reality images to assist experimenters in teaching and subjects in learning. Augmented reality technology was applied in both the theoretical explanation and the cadaveric dissection. Subjects' feedback was collected in the form of a post-class questionnaire to evaluate the effectiveness of augmented reality technology-assisted learning. RESULTS: In the study, 83 undergraduate students were included as subjects. Augmented reality technology could be successfully applied in the learning of oral and maxillofacial anatomy. All the subjects could scan the QR code through a connected mobile device to get the 3D anatomy model from the cloud and zoom in/out and rotate the model on the mobile device. Augmented reality technology could provide a personalized 3D model based on learners' needs and abilities. The results of the Likert scale showed that augmented reality technology was highly recognized by the students (9.19 points), and it got high scores in terms of forming a three-dimensional sense and stimulating enthusiasm for learning (9.01 and 8.85 points, respectively). 
CONCLUSION: Augmented reality technology can realize three-dimensional visualization of important structures of oral and maxillofacial anatomy and stimulate students' enthusiasm for learning. Besides, it can assist students in building three-dimensional spatial imagination of the anatomy of the oral and maxillofacial area. The application of augmented reality technology achieves a favorable effect in the learning of oral and maxillofacial anatomy.


Subject(s)
Augmented Reality; Imaging, Three-Dimensional; Humans; Imaging, Three-Dimensional/methods; Anatomy/education; Mouth/anatomy & histology; Software
18.
Article in English | MEDLINE | ID: mdl-38863654

ABSTRACT

Tracheal intubation is a crucial procedure performed in airway management to sustain life during various procedures. However, difficult airways can make intubation challenging, which is associated with increased mortality and morbidity. This is particularly important for children, in whom intubation can be especially difficult. Improved airway management will decrease the incidence of repeated attempts, decrease hypoxic injuries in patients, and decrease hospital stays, resulting in better clinical outcomes and reduced costs. Currently, 3D-printed models based on CT scans and ultrasound-guided intubation are being used or tested for device fitting and procedure guidance to increase the success rate of intubation, but both have limitations. Maintaining a 3D printing facility can be logistically inconvenient, time consuming, and expensive. Ultrasound-guided intubation can be hindered by operator dependence, limited two-dimensional visualization, and potential artifacts. In this study, we developed an augmented reality (AR) system that overlays intubation tools and internal airways, providing real-time guidance during the procedure. A child manikin was used to develop and test the AR system. Three-dimensional CT images were acquired from the manikin. Different tissues were segmented to generate 3D models that were imported into Unity to build the holograms. Phantom experiments demonstrated the AR-guided system for potential applications in tracheal intubation guidance.

19.
J Clin Med ; 13(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38892770

ABSTRACT

Augmented reality (AR) and 3D printing (3DP) are novel technologies in the orthopedic field. Over the past decade, enthusiasm for these new digital applications has driven new perspectives on improving diagnostic accuracy and sensitivity in traumatology. Currently, however, it is still difficult to quantify their value and impact in the medical-scientific field, especially in improving the diagnosis of complex fractures. Acetabular fractures have always been a challenge in orthopedics due to their volumetric complexity and low diagnostic reliability. Background/Objectives: The goal of this study was to determine whether these methods could improve learning and diagnostic accuracy for complex acetabular fractures compared to gold-standard CT (computed tomography). Methods: Orthopedic residents of our department were selected and divided into Junior (JUN) and Senior (SEN) groups. Associated acetabular fractures were included in the study, and details of these were provided as CT scans, 3DP models, and AR models displayed on a tablet screen. In a double-blind questionnaire, each resident classified every fracture. Diagnostic accuracy (DA), response time (RT), agreement (R), and confidence (C) were measured. Results: Twenty residents (JUN = 10, SEN = 10) classified five fractures. Overall DA was 26% (CT), 18% (3DP), and 29% (AR). AR-DA was superior to 3DP-DA (p = 0.048). DA means (JUN vs. SEN, respectively): CT-DA was 20% vs. 32% (p < 0.05), 3DP-DA was 12% vs. 24% (p = 0.08), and AR-DA was 28% vs. 30% (p = 0.80). Overall RT was 61.2 s (±24.6) for CT, 35.8 s (±20.1) for 3DP, and 46.7 s (±20.8) for AR. R was fairly poor between methods and groups. Overall, 3DP drew the highest confidence (65%). Conclusions: AR had the same overall DA as CT, independent of experience; 3DP showed minor differences in DA and R but was the fastest method and the one in which residents had the most confidence. Intra- and inter-observer R between methods remained very poor among residents.
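The inter-observer agreement (R) measured above is typically a chance-corrected statistic such as Cohen's kappa for two raters; the abstract does not name the exact statistic, so the following is a generic sketch rather than the study's analysis:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' category labels.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and
    p_e is the agreement expected by chance from each rater's marginals.
    """
    n = len(rater_a)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)
```

Values near 0 indicate agreement no better than chance, consistent with the "very poor" reliability reported for fracture classification above; multi-rater designs would use an extension such as Fleiss' kappa.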

20.
Cancers (Basel) ; 16(11)2024 May 23.
Article in English | MEDLINE | ID: mdl-38893106

ABSTRACT

Despite its broad use in cranial and spinal surgery, navigation support and microscope-based augmented reality (AR) have not yet found their way into posterior fossa surgery in the sitting position. While this position offers surgical benefits, navigation accuracy, and therefore the use of navigation itself, seems limited. Intraoperative ultrasound (iUS) can be applied at any time during surgery, delivering real-time images that can be used for accuracy verification and navigation updates. Within this study, its applicability in the sitting position was assessed. Data from 15 patients with lesions within the posterior fossa who underwent magnetic resonance imaging (MRI)-based navigation-supported surgery in the sitting position were retrospectively analyzed using the standard reference array and a new rigid image-based MRI-iUS co-registration. Navigation accuracy was evaluated based on the spatial overlap of the outlined lesions and the distance between corresponding landmarks in both data sets, respectively. Image-based co-registration significantly improved (p < 0.001) the spatial overlap of the outlined lesions (0.42 ± 0.30 vs. 0.65 ± 0.23) and significantly reduced (p < 0.001) the distance between corresponding landmarks (8.69 ± 6.23 mm vs. 3.19 ± 2.73 mm), allowing sufficient use of navigation and AR support. Navigated iUS can therefore serve as an easy-to-use tool to enable navigation support for posterior fossa surgery in the sitting position.
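The "spatial overlap of the outlined lesions" reported above (0.42 vs. 0.65) is consistent with an overlap coefficient such as the Dice score; the abstract does not name the metric, so this is an assumption. A minimal sketch over segmentations represented as sets of voxel indices:

```python
def dice(mask_a, mask_b):
    """Dice similarity of two binary segmentations, each given as a set
    of voxel indices: 2|A ∩ B| / (|A| + |B|), ranging from 0 to 1."""
    inter = len(mask_a & mask_b)
    return 2 * inter / (len(mask_a) + len(mask_b))
```

A value of 1 means the MRI-outlined and iUS-outlined lesions coincide exactly; the improvement from roughly 0.42 to 0.65 after image-based co-registration reflects a substantially better match of the two outlines.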
