Results 1 - 20 of 142
1.
Sci Rep ; 14(1): 15458, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38965266

ABSTRACT

In total hip arthroplasty (THA), determining the center of rotation (COR) and diameter of the hip joint (acetabulum and femoral head) is essential to restore patient biomechanics. This study investigates on-the-fly determination of hip COR and size using off-the-shelf augmented reality (AR) hardware. An AR head-mounted device (HMD) was configured with inside-out infrared tracking, enabling the determination of surface coordinates with a handheld stylus. Two investigators examined 10 prosthetic femoral heads and cups, and 10 human femurs. The HMD calculated the diameter and COR through sphere fitting. Results were compared to data obtained from either verified prosthetic geometry or post hoc CT analysis. Repeated single-observer measurements showed a mean diameter error of 0.63 mm ± 0.48 mm for the prosthetic heads and 0.54 mm ± 0.39 mm for the cups. Inter-observer comparison yielded mean diameter errors of 0.28 mm ± 0.71 mm and 1.82 mm ± 1.42 mm for the heads and cups, respectively. Cadaver testing found a mean COR error of 3.09 mm ± 1.18 mm and a diameter error of 1.10 mm ± 0.90 mm. Intra- and inter-observer reliability averaged below 2 mm. AR-based surface mapping using an HMD proved accurate and reliable in determining the diameter of THA components, with promise for identifying the COR and diameter of osteoarthritic femoral heads.
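The sphere-fitting step used to recover the COR and diameter from stylus-sampled surface points can be illustrated with a linear least-squares fit. This is a generic sketch of the standard algebraic method, not the authors' implementation; the function name is hypothetical.

```python
import numpy as np

def fit_sphere(points):
    """Least-squares sphere fit (algebraic form).

    Solves  2*c.x + d = |x|^2  for center c and d = r^2 - |c|^2,
    then recovers the radius. `points` is an (N, 3) array of surface
    coordinates, e.g. sampled with a tracked stylus.
    """
    A = np.hstack([2 * points, np.ones((len(points), 1))])
    b = (points ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius
```

Because the problem is linear in the unknowns, the fit is robust even when the stylus samples cover only a partial spherical cap, as on an in-situ femoral head.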


Subject(s)
Hip arthroplasty , Augmented reality , Femoral head , Hip prosthesis , Humans , Femoral head/surgery , Femoral head/diagnostic imaging , Hip arthroplasty/instrumentation , Hip arthroplasty/methods , Computed tomography , Rotation , Male , Hip joint/surgery , Hip joint/diagnostic imaging , Female
2.
Sensors (Basel) ; 24(11)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38894475

ABSTRACT

A significant percentage of bridges in the United States are serving beyond their 50-year design life, and many of them are in poor condition, making them vulnerable to fatigue cracks that can result in catastrophic failure. However, current fatigue crack inspection practice based on human vision is time-consuming, labor-intensive, and prone to error. We present a novel human-centered bridge inspection methodology to enhance the efficiency and accuracy of fatigue crack detection by employing advanced technologies including computer vision and augmented reality (AR). In particular, a computer vision-based algorithm is developed to enable near-real-time fatigue crack detection by analyzing structural surface motion in a short video recorded by a moving camera of the AR headset. The approach monitors structural surfaces by tracking feature points and measuring variations in distances between feature point pairs to recognize the motion pattern associated with crack opening and closing. Measuring distance changes between feature points, as opposed to the displacement changes used prior to this improvement, eliminates the need for camera motion compensation and enables reliable and computationally efficient fatigue crack detection using the nonstationary AR headset. In addition, an AR environment is created and integrated with the computer vision algorithm. The crack detection results are transmitted to the AR headset worn by the bridge inspector, where they are converted into holograms and anchored on the bridge surface in the 3D real-world environment. The AR environment also provides virtual menus to support human-in-the-loop decision-making to determine optimal crack detection parameters. This human-centered approach, with improved visualization and human-machine collaboration, aids the inspector in making well-informed decisions in the field in near real time. The proposed crack detection method is comprehensively assessed using two laboratory test setups for both in-plane and out-of-plane fatigue cracks. Finally, using the integrated AR environment, a human-centered bridge inspection is conducted to demonstrate the efficacy and potential of the proposed methodology.
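The core idea, that rigid camera or structure motion preserves inter-point distances while a breathing crack does not, can be sketched in a few lines. This is a simplified illustration of the principle, not the authors' algorithm; the function name is hypothetical.

```python
import numpy as np

def pairwise_distance_variation(tracks):
    """Range of inter-point distances for tracked feature points.

    `tracks` has shape (F, N, 2): N feature points tracked over F
    video frames. Rigid camera or structure motion preserves the
    distance between any two points, so a pair straddling an opening
    and closing fatigue crack shows a large distance variation while
    same-side pairs stay near zero.
    """
    diffs = tracks[:, :, None, :] - tracks[:, None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)             # (F, N, N)
    return dists.max(axis=0) - dists.min(axis=0)       # (N, N)
```

Pairs whose variation exceeds a chosen threshold would then be flagged as spanning a candidate crack; the threshold is a tuning parameter, not a value from the paper.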


Subject(s)
Algorithms , Augmented reality , Humans , Computer-assisted image processing/methods
3.
BMC Med Educ ; 24(1): 701, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937764

ABSTRACT

BACKGROUND: Clinical teaching during encounters with real patients lies at the heart of medical education. Mixed reality (MR) using the Microsoft HoloLens 2 (HL2) offers the potential to address several challenges, including enabling remote learning, decreasing infection control risks, facilitating greater access to medical specialties, and enhancing learning by vertical integration of basic principles to clinical application. We aimed to assess the feasibility and usability of MR using the HL2 for teaching in a busy, tertiary referral university hospital. METHODS: This prospective observational study examined the use of the HL2 to facilitate a live two-way broadcast of a clinician-patient encounter to remotely situated third- and fourth-year medical students. System Usability Scale (SUS) scores were elicited from participating medical students, the clinician, and the technician. Feedback was also elicited from participating patients. A modified Evaluation of Technology-Enhanced Learning Materials: Learner Perceptions Questionnaire (mETELM) was completed by medical students and patients. RESULTS: This was a mixed-methods prospective, observational study, undertaken in the Day of Surgery Assessment Unit. Forty-seven medical students participated. The mean SUS score was 71.4 (SD 15.4) for medical students, 75 for the clinician, and 70 for the technician, indicating good usability. The mETELM questionnaire, using a 7-point Likert scale, demonstrated that MR was perceived to be more beneficial than a PowerPoint presentation (Median = 7, Range 6-7). Opinion among the student cohort was divided as to whether the MR tutorial was as beneficial for learning as a live patient encounter would have been (Median = 5, Range 3-6). Students were positive about the prospect of incorporating MR in future tutorials (Median = 7, Range 5-7). The patients' mETELM results indicate the HL2 did not affect communication with the clinician (Median = 7, Range 7-7). The MR tutorial was preferred to a format based on small-group teaching at the bedside (Median = 6, Range 4-7). CONCLUSIONS: Our study findings indicate that MR teaching using the HL2 demonstrates good usability for delivering education to medical students, at least in a clinical setting under conditions similar to those of our study. It is also feasible to deliver teaching to remotely located students, although certain practical constraints apply, including Wi-Fi and audio quality.


Subject(s)
Feasibility studies , Medical students , Humans , Prospective studies , Medical students/psychology , Female , Male , Self report , Undergraduate medical education/methods , Adult , Young adult , Augmented reality , Distance education , Surveys and questionnaires
4.
BMC Med Educ ; 24(1): 498, 2024 May 04.
Article in English | MEDLINE | ID: mdl-38704522

ABSTRACT

BACKGROUND: Mixed reality offers potential educational advantages in the delivery of clinical teaching. Holographic artefacts can be rendered within a shared learning environment using devices such as the Microsoft HoloLens 2. In addition to facilitating remote access to clinical events, mixed reality may provide a means of sharing mental models, including the vertical and horizontal integration of curricular elements at the bedside. This study aimed to evaluate the feasibility of delivering clinical tutorials using the Microsoft HoloLens 2 and the learning efficacy achieved. METHODS: Following receipt of institutional ethical approval, tutorials on preoperative anaesthetic history taking and upper airway examination were facilitated by a tutor who wore the HoloLens device. The tutor interacted face to face with a patient, and two-way audio-visual interaction with groups of students located in a separate tutorial room was facilitated using the HoloLens 2 and Microsoft Teams. Holographic functions were employed by the tutor. The tutor completed the System Usability Scale; the tutor, technical facilitator, patients, and students provided quantitative and qualitative feedback; and three students participated in semi-structured feedback interviews. Students completed pre-tutorial, post-tutorial, and end-of-year examinations on the tutorial topics. RESULTS: Twelve patients and 78 students participated across 12 separate tutorials. Five students did not complete the examinations and were excluded from efficacy calculations. Student feedback contained 90 positive comments, including praise for the technology's ability to broadcast the tutor's point of view, and 62 negative comments, in which students noted issues with the audio-visual quality and concerns that the tutorial was not as beneficial as traditional in-person clinical tutorials. The technology and tutorial structure were viewed favourably by the tutor, facilitator, and patients. Significant improvement was observed between students' pre- and post-tutorial MCQ scores (mean 59.2% vs 84.7%, p < 0.001). CONCLUSIONS: This study demonstrates the feasibility of using the HoloLens 2 to facilitate remote bedside tutorials that incorporate holographic learning artefacts. Students' examination performance supports substantial learning of the tutorial topics. The tutorial structure was agreeable to students, patients, and tutor. Our results support the feasibility of offering effective clinical teaching and learning opportunities using the HoloLens 2. However, the technical limitations and costs of the device are significant, and further research is required to compare this tutorial format against in-person tutorials before wider rollout of this technology can be recommended.


Subject(s)
Medical students , Humans , Male , Female , Computer-assisted instruction/methods , Undergraduate medical education/methods , Feasibility studies , Educational measurement , Clinical competence , Adult , Holography , Medical history taking
5.
Sensors (Basel) ; 24(9)2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38732904

ABSTRACT

In this paper, we present a novel approach referred to as audio-based virtual-landmark HoloSLAM. This method leverages a single sound source and a microphone array to estimate the direction of a voice-printed speaker. The system allows an autonomous robot equipped with a single microphone array to navigate indoor environments, interact with specific sound sources, and simultaneously determine its own location while mapping the environment. The proposed method requires neither multiple audio sources in the environment nor sensor fusion to extract pertinent information and make accurate sound source estimations. Furthermore, the approach incorporates robotic mixed reality using the Microsoft HoloLens to superimpose landmarks, effectively mitigating the landmark-related issues of conventional audio-based landmark SLAM, particularly in situations where audio landmarks cannot be discerned, are limited in number, or are missing altogether. The paper also evaluates an active speaker detection method, demonstrating its ability to achieve high accuracy in scenarios where audio data are the sole input. Real-time experiments validate the effectiveness of this method, emphasizing its precision and comprehensive mapping capabilities. The results of these experiments showcase the accuracy and efficiency of the proposed system, surpassing the constraints associated with traditional audio-based SLAM techniques and ultimately leading to a more detailed and precise mapping of the robot's surroundings.
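Direction estimation with a microphone array typically starts from the time difference of arrival (TDOA) between microphone pairs; a common building block is GCC-PHAT, sketched below. This is a generic illustration — the abstract does not specify this method — and the function name is hypothetical.

```python
import numpy as np

def gcc_phat_delay(sig, ref, fs):
    """Estimate the delay (seconds) of `sig` relative to `ref` via GCC-PHAT.

    The phase transform whitens the cross-spectrum, sharpening the
    correlation peak. The peak lag between two microphones gives the
    TDOA, from which the source direction can be derived given the
    array geometry and the speed of sound.
    """
    n = len(sig) + len(ref)
    spec = np.fft.rfft(sig, n) * np.conj(np.fft.rfft(ref, n))
    spec /= np.abs(spec) + 1e-12                              # PHAT weighting
    cc = np.fft.irfft(spec, n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))    # center lag 0
    return (np.argmax(cc) - n // 2) / fs
```

For a two-microphone pair separated by distance d, the bearing would follow from arcsin(delay * c / d) with c the speed of sound, under a far-field assumption.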

6.
Orthod Craniofac Res ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712682

ABSTRACT

OBJECTIVE: We propose a method utilizing mixed reality (MR) goggles (HoloLens 2, Microsoft) to facilitate impacted canine alignment, as planning the traction direction and force delivery could benefit from 3D data visualization in MR. METHODS: Cone-beam CT scans featuring isometric resolution and a high signal-to-noise ratio were semi-automatically segmented in Inobitec software. The exported 3D mesh (OBJ file) was then optimized for the HoloLens 2. Using the Unreal Engine environment, we developed an application for the HoloLens 2, implementing the HoloLens SDK and UX Tools. Adjustable pointers were added for planning attachment placement, traction direction, and the point of force application. The visualization was presented to participants of a course on impacted teeth treatment, followed by a 10-question survey addressing potential advantages (5-point scale: 1 = totally agree, 5 = totally disagree). RESULTS: Of 38 respondents, 44.7% were orthodontists, 34.2% dentists, 15.8% dental students, and 5.3% dental technicians. Most respondents (44.7%) were between 35 and 44 years old, and only 1 respondent (2.6%) was 55-64 years old. Median answers were 'totally agree' for six questions (25th percentile 1, 75th percentile 2) and 'agree' for four questions (25th percentile 1, 75th percentile 2). No correlation was found between responses and age or profession. CONCLUSION: Our method generated substantial interest among clinicians. The initial responses affirm its potential benefits, supporting continued exploration of MR-based techniques for the treatment of impacted teeth. However, a recommendation for widespread use awaits validation through clinical trials.

7.
Cureus ; 16(4): e57443, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38699098

ABSTRACT

Anatomy education in the medical school curriculum has encountered considerable challenges during the last decade. The exponential growth of medical science has necessitated a review of the classical ways to teach anatomy, shortening the time students spend dissecting and allowing them to acquire critical new knowledge in other disciplines. Augmented and mixed reality technologies have developed tremendously during the last few years, offering a wide variety of possibilities for delivering anatomy education to medical students. Here, we provide a methodology to develop, deliver, and assess an anatomy laboratory course using augmented reality applications. We suggest a novel approach, based on the Microsoft HoloLens 2, to develop systematic sequences of holograms that reproduce human dissection. The laboratory sessions are prepared before classes and include a series of holograms revealing sequential layers of the human body, isolated structures, or combinations of structures forming a system or a functional unit. The in-class activities are conducted either as one group of students (n = 8-9) with a leading facilitator or as small groups of students (n = 4) with facilitators (n = 4) joining the groups for discussion. The same or different sessions may be used for the assessment of students' knowledge. Although currently in its infancy, the use of holograms will soon become a substantial part of medical education; several companies already offer a range of useful learning platforms, from anatomy education to patient encounters. By describing the holographic program at our institution, we hope to provide a roadmap for other institutions looking to implement a systematic approach to teaching anatomy through holographic dissection. This approach has several benefits, including a sequential three-dimensional (3D) presentation of the human body with varying layers of dissection, demonstrations of facilitator-selected 3D anatomical regions or specific body units, and the option for classroom or remote facilitation, with the ability for students to review each session individually.

8.
J Med Imaging (Bellingham) ; 11(3): 035002, 2024 May.
Article in English | MEDLINE | ID: mdl-38817712

ABSTRACT

Purpose: The objective of this study is to evaluate the accuracy of an augmented reality (AR) system intended to improve guidance and visualization during the subxiphoid approach for epicardial ablation. Approach: An AR application was developed to project real-time needle trajectories and patient-specific 3D organs using the HoloLens 2. Additionally, needle tracking was implemented to offer real-time feedback to the operator, facilitating needle navigation. The AR application was evaluated through three experiments: examining overlay accuracy, assessing puncture accuracy, and performing pre-clinical evaluations on a phantom. Results: The overlay accuracy of the AR system was 2.36 ± 2.04 mm, and the puncture accuracy was 1.02 ± 2.41 mm. In the pre-clinical evaluation on the phantom, needle puncture with AR guidance showed an error of 7.43 ± 2.73 mm, whereas needle puncture without AR guidance showed an error of 22.62 ± 9.37 mm. Conclusions: Overall, the AR platform has the potential to enhance the accuracy of percutaneous epicardial access for mapping and ablation of cardiac arrhythmias, thereby reducing complications and improving patient outcomes. The significance of this study lies in the potential of AR guidance to enhance the accuracy and safety of percutaneous epicardial access.

9.
Front Neurol ; 15: 1379243, 2024.
Article in English | MEDLINE | ID: mdl-38654737

ABSTRACT

Introduction: External cueing can improve gait in people with Parkinson's disease (PD), but there is a need for wearable, personalized, and flexible cueing techniques that can exploit the power of action-relevant visual cues. Augmented reality (AR) involving headsets or glasses represents a promising technology in those regards. This study examines the gait-modifying effects of real-world and AR cueing in people with PD. Methods: Twenty-one people with PD performed walking tasks augmented with either real-world or AR cues, imposing changes in gait speed, step length, crossing step length, and step height. Two AR headsets, differing in AR field-of-view (AR-FOV) size, were used to evaluate potential AR-FOV-size effects on the gait-modifying effects of AR cues, as well as on the head orientation required for interacting with them. Results: Participants modified their gait speed, step length, and crossing step length significantly in response to changes in both real-world and AR cues, with step lengths also being statistically equivalent to those imposed. Due to technical issues, step-height modulation could not be analyzed. AR-FOV size had no significant effect on gait modifications, although small differences in head orientation were observed between AR headsets when interacting with nearby objects. Conclusion: People with PD can modify their gait to AR cues as effectively as to real-world cues with state-of-the-art AR headsets, for which AR-FOV size is no longer a limiting factor. Future studies are warranted to explore the merit of a library of cue modalities and individually tailored AR cueing for facilitating gait in real-world environments.

10.
Ther Adv Urol ; 16: 17562872241232582, 2024.
Article in English | MEDLINE | ID: mdl-38464882

ABSTRACT

Background: Transperineal biopsy of magnetic resonance imaging (MRI)-detected prostate lesions is now the established technique in prostate cancer (CaP) diagnostics. Virtual Surgery Intelligence (VSI) Holomedicine by Apoqlar (Hamburg, Germany) is a mixed reality (MR)/augmented reality (AR) software platform that runs on the HoloLens 2 system (Microsoft, Redmond, USA). Multiparametric prostate MRI images were converted into 3D holograms and placed in an MR space, enabling visualization of a 3D hologram and image-assisted prostate biopsy. Objective: The Targeted Augmented Reality-GuidEd Transperineal (TARGET) study investigated the feasibility of performing AR-guided prostate biopsies in an MR framework, using the VSI platform in patients with MRI-detected prostate lesions. Methods: MRI scans of ten patients with a clinical suspicion of CaP (Prostate Imaging-Reporting and Data System, PI-RADS 4/5) were uploaded to the VSI HoloLens system. Two MR/AR-guided prostate biopsies were then acquired using the PrecisionPoint Freehand transperineal biopsy system. Cognitive fusion biopsies were performed as standard of care following the MR/AR-guided prostate biopsies. Results: All 10 patients successfully underwent MR/AR-guided prostate biopsy after the 3D MR images were overlaid on the patient's body. Prostatic tissue was obtained in all MR/AR-guided specimens. Seven patients (70%) had matching histology in both the standard and MR/AR-guided biopsies; the remaining three had ISUP (International Society of Urological Pathology) Grade 2 CaP. There were no immediate complications. Conclusion: We believe this is a world first. The initial feasibility data from the TARGET study demonstrate that MR/AR-guided prostate biopsy utilizing the VSI Holomedicine system is a viable option in CaP diagnostics. The next stage in development is to combine AR images with real-time needle insertion and to provide further data to formally appraise the sensitivity and specificity of the technique.

11.
Foot Ankle Int ; : 10711007241237532, 2024 Mar 19.
Article in English | MEDLINE | ID: mdl-38501722

ABSTRACT

BACKGROUND: Adult-acquired flatfoot deformity (AAFD) results in a loss of the medial longitudinal arch of the foot and dysfunction of the posteromedial soft tissues. Hintermann osteotomy (H-O) is often used to treat stage II AAFD. The procedure is challenging because of variations in the subtalar facets and limited intraoperative visibility. We aimed to assess the impact of augmented reality (AR) guidance on surgical accuracy and the facet violation rate. METHODS: Sixty AR-guided and 60 conventional osteotomies were performed on foot bone models. For the AR osteotomies, the ideal osteotomy plane was uploaded to a Microsoft HoloLens 1 headset, and the cut was carried out in strict accordance with the superimposed holographic plane. The conventional osteotomies were performed relying solely on the anatomy of the calcaneal lateral column. The rate and severity of facet joint violation were measured, as well as the accuracy of entry and exit points. The results were compared between AR-guided and conventional osteotomies, and between experienced and inexperienced surgeons. RESULTS: Experienced surgeons showed significantly greater accuracy for the osteotomy entry point using AR, with a mean deviation of 1.6 ± 0.9 mm (95% CI 1.26, 1.93) compared with 2.3 ± 1.3 mm (95% CI 1.87, 2.79) for the conventional method (P = .035). Inexperienced surgeons also showed improved accuracy, although not statistically significant (P = .064), with a mean deviation of 2.0 ± 1.5 mm (95% CI 1.47, 2.55) using AR compared with 2.7 ± 1.6 mm (95% CI 2.18, 3.32) for the conventional method. AR helped the experienced surgeons avoid full violation of the posterior facet (P = .011). Inexperienced surgeons had a higher rate of middle and posterior facet injury with both methods (P = .005 and .021). CONCLUSION: AR guidance during H-O was associated with improved accuracy for experienced surgeons, demonstrated by better accuracy of the osteotomy entry point. More crucially, AR guidance prevented full violation of the posterior facet in the experienced group. Further research is needed to address limitations and to test this technology on cadaver feet. Ultimately, the use of AR in surgery has the potential to improve patient and surgeon safety while minimizing radiation exposure. CLINICAL RELEVANCE: Subtalar facet injury during lateral column lengthening osteotomy represents a real problem in clinical orthopaedic practice. Because of limited intraoperative visibility and variable anatomy, this issue is hard to resolve with conventional means. This study suggests the potential of augmented reality to improve osteotomy accuracy.

12.
Sensors (Basel) ; 24(4)2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38400220

ABSTRACT

Due to their low cost and portability, using entertainment devices for indoor mapping applications has become a hot research topic. However, the impact of user behavior on indoor mapping evaluation with entertainment devices has often been overlooked in previous studies. This article aims to assess the indoor mapping performance of entertainment devices under different mapping strategies. We chose two entertainment devices, the HoloLens 2 and the iPhone 14 Pro, for our evaluation. Based on our previous mapping experience and user habits, we defined four simplified indoor mapping strategies: straight-forward mapping (SFM), left-right alternating mapping (LRAM), round-trip straight-forward mapping (RT-SFM), and round-trip left-right alternating mapping (RT-LRAM). First, we acquired triangle mesh data under each strategy with the HoloLens 2 and the iPhone 14 Pro. Then, we compared the changes in data completeness and accuracy between the different devices and indoor mapping applications. Our findings show that, compared to the iPhone 14 Pro, the triangle mesh data acquired by the HoloLens 2 have more stable accuracy under the different strategies. Notably, the triangle mesh data acquired by the HoloLens 2 under the RT-LRAM strategy can effectively compensate for missing wall and floor surfaces, mainly caused by furniture occlusion and the low frame rate of the depth-sensing camera. However, the iPhone 14 Pro is more efficient in terms of mapping completeness and can acquire a complete triangle mesh more quickly than the HoloLens 2. In summary, choosing an entertainment device for indoor mapping requires weighing specific needs and scenes: if accuracy and stability are important, the HoloLens 2 is more suitable; if efficiency and completeness are important, the iPhone 14 Pro is the better choice.
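Completeness and accuracy comparisons of this kind are commonly computed from nearest-neighbor distances between the acquired mesh vertices and a reference scan. The sketch below is a generic illustration, not the authors' evaluation code; the function name and tolerance are hypothetical.

```python
import numpy as np
from scipy.spatial import cKDTree

def mesh_quality(acquired, reference, tol=0.05):
    """Accuracy and completeness of an acquired point set vs. a reference.

    accuracy:     mean distance from each acquired vertex to its nearest
                  reference vertex (lower is better).
    completeness: fraction of reference vertices with an acquired vertex
                  within `tol` (higher is better).
    Both inputs are (N, 3) arrays, e.g. triangle-mesh vertices.
    """
    d_acq = cKDTree(reference).query(acquired)[0]
    d_ref = cKDTree(acquired).query(reference)[0]
    return d_acq.mean(), (d_ref <= tol).mean()
```

The tolerance sets how strict "covered" is for completeness; reporting both metrics captures the accuracy-versus-completeness trade-off the article observes between the two devices.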

13.
Sensors (Basel) ; 24(2)2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38257621

ABSTRACT

The steady increase in the aging population worldwide is expected to cause a shortage of doctors and therapists for older people. This demographic shift requires more efficient and automated systems for rehabilitation and physical ability evaluation. Rehabilitation using mixed reality (MR) technology has attracted much attention in recent years. MR displays virtual objects on a head-mounted see-through display that overlays the user's field of vision and allows users to manipulate them as if they existed in reality. However, previous studies applying MR to rehabilitation have been limited to tasks in which the virtual objects are static and do not interact dynamically with the surrounding environment. Therefore, in this study, we developed an application to evaluate cognitive and motor functions, with the aim of realizing a rehabilitation system that is dynamic and interacts with the surrounding environment using MR technology. The developed application enabled effective evaluation of the user's spatial cognitive ability, task skillfulness, motor function, and decision-making ability. The results indicate the usefulness and feasibility of MR technology for quantifying motor function and spatial cognition in both static and dynamic rehabilitation tasks.


Subject(s)
Augmented reality , Physicians , Spatial navigation , Humans , Aged , Aging , Cognition
14.
Neurosurg Focus ; 56(1): E13, 2024 01.
Article in English | MEDLINE | ID: mdl-38163338

ABSTRACT

OBJECTIVE: The objective of this study was to analyze the potential and convenience of using mixed reality as a teaching tool for craniovertebral junction (CVJ) anomaly pathoanatomy. METHODS: CT and CT angiography images of 2 patients with CVJ anomalies were used to construct mixed reality models in the HoloMedicine application on the HoloLens 2 headset, resulting in four viewing stations. Twenty-two participants were randomly allocated into two groups, with each participant rotating through all stations for 90 seconds, each in a different order based on their group. At every station, objective questions evaluating the understanding of CVJ pathoanatomy were answered. At the end, subjective opinion on the user experience of mixed reality was provided using a 5-point Likert scale. The objective performance of the two viewing modes was compared, and a correlation between performance and participant experience was sought. Subjective feedback was compiled and correlated with experience. RESULTS: In both groups, there was a significant improvement in median (interquartile range [IQR]) objective performance with mixed reality compared with DICOM: 1) group A: case 1, median 6 (IQR 6-7) versus 5 (IQR 3-6), p = 0.009; case 2, median 6 (IQR 6-7) versus 5 (IQR 3-6), p = 0.02; 2) group B: case 1, median 6 (IQR 5-7) versus 4 (IQR 2-5), p = 0.04; case 2, median 6 (IQR 6-7) versus 5 (IQR 3-7), p = 0.03. There was significantly higher improvement in less experienced participants in both groups for both cases: 1) group A: case 1, r = -0.8665, p = 0.0005; case 2, r = -0.8002, p = 0.03; 2) group B: case 1, r = -0.6977, p = 0.01; case 2, r = -0.7417, p = 0.009. Subjectively, mixed reality was easy to use, with less disorientation due to the visible background, and it was believed to be a useful teaching tool. CONCLUSIONS: Mixed reality is an effective teaching tool for CVJ pathoanatomy, particularly for young neurosurgeons and trainees. The versatility of mixed reality and the intuitiveness of the user experience offer many potential applications, including training, intraoperative guidance, patient counseling, and individualized medicine; consequently, mixed reality has the potential to transform neurosurgery.


Subject(s)
Augmented reality , Neurosurgery , Humans , Neurosurgical procedures/methods , Neurosurgeons , Clinical competence
15.
Neurosurg Focus ; 56(1): E11, 2024 01.
Article in English | MEDLINE | ID: mdl-38163351

ABSTRACT

OBJECTIVE: The traditional freehand placement of an external ventricular drain (EVD) relies on empirical craniometric landmarks to guide the craniostomy and subsequent passage of the EVD catheter. The diameter and trajectory of the craniostomy physically limit the possible trajectories that can be achieved during the passage of the catheter. In this study, the authors implemented a mixed reality-guided craniostomy procedure to evaluate the benefit of an optimally drilled craniostomy to the accurate placement of the catheter. METHODS: Optical marker-based tracking using an OptiTrack system was used to register the brain ventricular hologram and drilling guidance for craniostomy using a HoloLens 2 mixed reality headset. A patient-specific 3D-printed skull phantom embedded with intracranial camera sensors was developed to automatically calculate the EVD accuracy for evaluation. User trials consisted of one blind and one mixed reality-assisted craniostomy followed by a routine, unguided EVD catheter placement for each of two different drill bit sizes. RESULTS: A total of 49 participants were included in the study (mean age 23.4 years, 59.2% female). The mean distance from the catheter target improved from 18.6 ± 12.5 mm to 12.7 ± 11.3 mm (p = 0.0008) using mixed reality guidance for trials with a large drill bit and from 19.3 ± 12.7 mm to 10.1 ± 8.4 mm with a small drill bit (p < 0.0001). Accuracy using mixed reality was improved using a smaller diameter drill bit compared with a larger bit (p = 0.039). Overall, the majority of the participants were positive about the helpfulness of mixed reality guidance and the overall mixed reality experience. CONCLUSIONS: Appropriate indications and use cases for the application of mixed reality guidance to neurosurgical procedures remain an area of active inquiry. While prior studies have demonstrated the benefit of mixed reality-guided catheter placement using predrilled craniostomies, the authors demonstrate that real-time quantitative and visual feedback of a mixed reality-guided craniostomy procedure can independently improve procedural accuracy and represents an important tool for trainee education and eventual clinical implementation.
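The reported improvement for the large drill bit can be checked approximately from the summary statistics alone with an unpaired Welch test. This is a hedged illustration only: the per-condition group sizes (assumed n = 49 each) and the test itself are assumptions, and the study's own analysis (presumably paired) will not be reproduced exactly.

```python
from scipy.stats import ttest_ind_from_stats

# Reported summary statistics for the large drill bit (mm).
# n = 49 per condition is an assumption, and this unpaired Welch
# test is not the paper's analysis, so the p-value differs from
# the reported p = 0.0008 while agreeing on significance.
t, p = ttest_ind_from_stats(mean1=18.6, std1=12.5, nobs1=49,   # unguided
                            mean2=12.7, std2=11.3, nobs2=49,   # MR-guided
                            equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```

A paired test on the per-participant differences would have more power, which is consistent with the smaller p-value reported in the abstract.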


Subject(s)
Augmented Reality; Humans; Female; Young Adult; Adult; Male; Drainage/methods; Neurosurgical Procedures/methods; Cerebral Ventricles/diagnostic imaging; Cerebral Ventricles/surgery; Catheters
16.
Surg Innov ; 31(1): 48-57, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38019844

ABSTRACT

BACKGROUND: Computer-assisted surgical navigation systems are designed to improve outcomes by providing clinicians with procedural guidance information. New technologies such as mixed reality offer the potential for more intuitive, efficient, and accurate procedural guidance. The goal of this study is to assess the positional accuracy and consistency of a clinical mixed reality system that combines commercially available wireless head-mounted displays (HMDs), custom software, and localization instruments. METHODS: Independent teams using second-generation Microsoft HoloLens hardware, Medivis SurgicalAR software, and localization instruments tested the accuracy of the combined system at different institutions, times, and locations. Performance was measured with the ASTM F2554-18 consensus standard for computer-assisted surgical systems, as recognized by the U.S. FDA. A total of 288 tests were performed. RESULTS: The system demonstrated consistent results, with an average accuracy better than one millimeter (0.75 ± 0.37 mm SD). CONCLUSION: The independently acquired positional tracking accuracy exceeds that of conventional in-market surgical navigation tracking systems and meets FDA-recognized standards. Importantly, this performance was achieved at two different institutions, using an international testing standard, and with a system built from a commercially available off-the-shelf wireless head-mounted display and software.
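The sub-millimeter figure above is, in essence, the mean and standard deviation of per-point localization errors against a reference geometry, as in single-point accuracy protocols such as ASTM F2554. A hedged numpy sketch (array names are illustrative):

```python
import numpy as np

def positional_accuracy(measured_mm, reference_mm):
    # Per-point Euclidean error between tracked and reference positions,
    # summarized as (errors, mean, sample SD), all in millimetres.
    errors = np.linalg.norm(np.asarray(measured_mm) - np.asarray(reference_mm), axis=1)
    return errors, float(errors.mean()), float(errors.std(ddof=1))
```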


Subject(s)
Augmented Reality; Surgery, Computer-Assisted; United States; Surgery, Computer-Assisted/methods; Surgical Navigation Systems; United States Food and Drug Administration; Software
17.
Brain Stimul ; 16(6): 1799-1805, 2023.
Article in English | MEDLINE | ID: mdl-38135359

ABSTRACT

BACKGROUND: Connectomic modeling studies are expanding our understanding of the brain networks modulated by deep brain stimulation (DBS) therapies. However, explicit integration of these modeling results into prospective neurosurgical planning is only beginning to evolve. One challenge of employing connectomic models in patient-specific surgical planning is the inherently 3D nature of the results, which can make clinically useful data integration and visualization difficult. METHODS: We developed a holographic stereotactic neurosurgery research tool (HoloSNS) that integrates patient-specific brain models into a group-based visualization environment for interactive surgical planning using connectomic hypotheses. HoloSNS currently runs on the HoloLens 2 platform and supports remote networking between headsets, which allowed us to hold surgical planning group meetings with study co-investigators distributed across the country. RESULTS: We used HoloSNS to plan stereo-EEG and DBS electrode placements for each patient participating in a clinical trial (NCT03437928) targeting both the subcallosal cingulate and ventral capsule for the treatment of depression. Each patient model consisted of multiple components of scientific data and anatomical reconstructions of the head and brain (both patient-specific and atlas-based), far exceeding the data integration capabilities of traditional neurosurgical planning workstations. This allowed us to prospectively discuss and evaluate electrode positioning based on novel connectomic hypotheses. CONCLUSIONS: The 3D nature of the surgical procedure, the brain imaging data, and the connectomic modeling results all highlighted the utility of holographic visualization in supporting the design of unique clinical experiments exploring brain network modulation with DBS.


Subject(s)
Deep Brain Stimulation; Mental Disorders; Humans; Prospective Studies; Deep Brain Stimulation/methods; Brain/diagnostic imaging; Mental Disorders/therapy; Electroencephalography
18.
J Cardiovasc Dev Dis ; 10(11)2023 Nov 15.
Article in English | MEDLINE | ID: mdl-37998522

ABSTRACT

We sought to determine the role of a patient-specific, three-dimensional (3D) holographic vascular model in patients' medical knowledge and its influence on a more conscious informed consent process for percutaneous balloon angioplasty (PTA). Patients with peripheral arterial disease scheduled for PTA were enrolled in the study. Information regarding the primary disease, planned procedure, and informed consent was recorded in the typical fashion. Subsequently, the disease and procedure details were presented to each patient using their individual model. The patient and a medical supervisor, both equipped with mixed reality headsets, could simultaneously manipulate the hologram using gestures. The holographic 3D model was created at a 1:1 scale from computed tomography scans. The patients' knowledge was tested with a questionnaire completed before and after the interaction in the mixed reality environment. Seventy-nine patients manipulated arterial holograms in mixed reality head-mounted devices. Before the 3D holographic artery model interaction, the mean ± standard deviation knowledge test score was 2.95 ± 1.21 points; after the presentation it increased to 4.39 ± 0.82, a statistically significant difference (p < 0.0001). On a Likert scale from 1 to 5, the patients scored the 3D holographic model at 3.90 points for its usefulness in comprehending their medical condition, at 4.04 points for its helpfulness in understanding the course of surgery, and at 1.99 points for reducing procedure-related stress. On a nominal (know/don't know) scale, the patients self-assessed their knowledge of the procedure before and after the 3D model presentation at 6.29 ± 2.01 and 8.39 ± 1.54 points, respectively. The study group tolerated the use of head-mounted devices well: only one patient experienced nausea and dizziness, while four patients experienced transient eye pain. The 3D holographic arterial model aided patients' understanding of the disease and procedure, making the informed consent process more conscious, and improved the patients' self-assessed knowledge. Mixed reality headset-related complications were rare and within acceptable rates.
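The before/after questionnaire comparison above can be summarized per patient; a minimal stdlib sketch (function and variable names are illustrative, not from the study):

```python
from statistics import mean, stdev

def score_change(before, after):
    # Mean +/- sample SD for each administration of the questionnaire,
    # plus the mean per-patient gain computed from paired differences.
    gains = [a - b for a, b in zip(after, before)]
    return {
        "before": (mean(before), stdev(before)),
        "after": (mean(after), stdev(after)),
        "mean_gain": mean(gains),
    }
```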

19.
J Healthc Inform Res ; 7(4): 527-541, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37927377

ABSTRACT

Mixed reality opens interesting possibilities, as it allows physicians to interact with both the real physical environment and virtual computer-generated objects in a powerful way. A mixed reality system based on the HoloLens 2 glasses has been developed to assist cardiologists in a complex interventional procedure: ultrasound-guided femoral arterial cannulation during real-time practice in interventional cardiology. The system is divided into two modules: the transmitter module, responsible for sending medical images to the HoloLens 2 glasses, and the receiver module, hosted on the HoloLens 2, which renders those medical images and allows the practitioner to view and manage them in a 3D environment. The system was successfully used between November 2021 and August 2022 in 9 interventions by 2 different practitioners in a large public hospital in central Spain. The practitioners using the system described it as easy to use, reliable, real-time, reachable, and cost-effective, allowing a reduction in operating times, better control of errors typically associated with the interventional procedure, and opening the possibility of using the medical imagery produced in ubiquitous e-learning. These strengths and opportunities were nuanced only by the risk of potential medical complications arising from system malfunction or operator error when using the system (e.g., an unexpected momentary lag). In summary, the proposed system can be taken as a realistic proof of concept of how mixed reality technologies can support practitioners performing interventional and surgical procedures in real-time daily practice.
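The abstract does not specify the transmitter/receiver wire format; one common choice for streaming image frames to a headset client is length-prefixed framing over TCP. A hypothetical sketch of just the framing layer (names are illustrative):

```python
import struct

def frame_encode(payload: bytes) -> bytes:
    # Prefix each image payload with its 4-byte big-endian length so the
    # receiver knows exactly how many bytes belong to the frame.
    return struct.pack(">I", len(payload)) + payload

def frame_decode(stream: bytes):
    # Split one length-prefixed frame off the front of a byte stream;
    # returns (payload, remaining_bytes).
    (n,) = struct.unpack(">I", stream[:4])
    return stream[4:4 + n], stream[4 + n:]
```

On the headset side, a receiver would read 4 bytes, then the payload, then hand the decoded image to the renderer.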

20.
Sensors (Basel) ; 23(21)2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37960398

ABSTRACT

The integration of Deep Learning (DL) models with the HoloLens 2 Augmented Reality (AR) headset has enormous potential for real-time AR medical applications. Currently, most applications execute the models on an external server that communicates with the headset via Wi-Fi. This client-server architecture introduces undesirable delays and lacks reliability for real-time applications. However, due to the HoloLens 2's limited computation capabilities, running a DL model directly on the device with real-time performance is not trivial. Therefore, this study has two primary objectives: (i) to systematically evaluate two popular frameworks for executing DL models on the HoloLens 2, Unity Barracuda and Windows Machine Learning (WinML), using inference time as the primary evaluation metric; and (ii) to provide benchmark values for state-of-the-art DL models that can be integrated into different medical applications (e.g., YOLO and U-Net models). In this study, we executed DL models of various complexities and analyzed inference times ranging from a few milliseconds to seconds. Our results show that Unity Barracuda is significantly faster than WinML (p < 0.005). With our findings, we seek to provide practical guidance and reference values for future studies aiming to develop single, portable AR systems for real-time medical assistance.
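Inference-time benchmarking of the kind described, whatever the framework, generally needs warm-up runs (the first calls pay one-time allocation cost) and a robust statistic such as the median. A framework-agnostic sketch; the study itself used Unity Barracuda and WinML (C#), so this Python version is illustrative only:

```python
import statistics
import time

def benchmark_ms(run_inference, warmup=5, runs=50):
    # Discard warm-up calls, then report the median wall-clock
    # time (milliseconds) over the measured runs.
    for _ in range(warmup):
        run_inference()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        run_inference()
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(samples)
```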


Subject(s)
Augmented Reality; Deep Learning; Humans; Reproducibility of Results; Machine Learning