1 - 20 of 1,539
1.
Medicina (Kaunas) ; 60(6)2024 Jun 02.
Article En | MEDLINE | ID: mdl-38929549

Background and Objectives: Microsurgical resection with intraoperative neuromonitoring is the gold standard for acoustic neurinomas (ANs) classified as T3 or T4 tumors according to the Hannover Classification. Microscope-based augmented reality (AR) can be beneficial in cerebellopontine angle and lateral skull base surgery, since these are small areas densely packed with anatomical structures. The technology builds the 3D model automatically, sparing the surgeon the task of mentally transferring 2D microscope images into an imaginary 3D representation, which reduces the possibility of error and provides better orientation in the operative field. Materials and Methods: All patients who underwent surgery for resection of ANs in our department were included in this study. Clinical outcomes in terms of postoperative neurological deficits and complications were evaluated, as well as neuroradiological outcomes for tumor remnants and recurrence. Results: A total of 43 consecutive patients (25 female, median age 60.5 ± 16 years) who underwent resection of ANs via retrosigmoid osteoclastic craniotomy with intraoperative neuromonitoring (22 right-sided, 14 giant tumors, 10 cystic, 7 with hydrocephalus) by a single surgeon were included, with a median follow-up of 41.2 ± 32.2 months. A total of 18 patients underwent subtotal resection, 1 patient partial resection, and 24 patients gross total resection. A total of 27 patients underwent resection in the sitting position and the rest in the semi-sitting position. Of the 37 patients who had no facial nerve deficit prior to surgery, 19 were intact following surgery, 7 had House-Brackmann (HB) Grade II paresis, 3 HB III, 7 HB IV, and 1 HB V. Wound healing deficit with cerebrospinal fluid (CSF) leak occurred in 8 patients (18.6%). Operative time was 317.3 ± 99 min. One patient with recurrence and one further patient with partial resection underwent radiotherapy following surgery. A total of 16 patients (37.2%) underwent resection using fiducial-based navigation and microscope-based AR, all in the sitting position. Segmented objects of interest in AR were the sigmoid and transverse sinuses, tumor outline, cranial nerves (CN) VII, VIII, and V, petrous vein, cochlea, semicircular canals, and brain stem. Operative time and clinical outcome did not differ between the AR and non-AR groups. However, use of AR improved orientation in the operative field for craniotomy planning and microsurgical resection by identification of important neurovascular structures. Conclusions: This single-center experience of AN resection showed a high rate of gross total (GTR) and subtotal resection (STR) with low recurrence. Use of AR improves intraoperative orientation and facilitates craniotomy planning and AN resection through early improved identification of important anatomical relations to structures of the inner auditory canal, venous sinuses, petrous vein, brain stem, and the course of the cranial nerves.


Augmented Reality , Microsurgery , Neuroma, Acoustic , Humans , Female , Middle Aged , Male , Microsurgery/methods , Neuroma, Acoustic/surgery , Aged , Adult , Neurosurgical Procedures/methods , Microscopy/methods , Treatment Outcome , Imaging, Three-Dimensional/methods
2.
Sensors (Basel) ; 24(12)2024 Jun 17.
Article En | MEDLINE | ID: mdl-38931701

This paper presents a fully automated experimental setup tailored for evaluating the effectiveness of augmented and virtual reality technologies in healthcare settings for regulatory purposes, with a focus on the characterization of depth sensors. The setup is constructed as a modular benchtop platform that enables quantitative analysis of depth cameras essential for extended reality technologies in a controlled environment. We detail a design concept and considerations for an experimental configuration aimed at simulating realistic scenarios for head-mounted displays. The system includes an observation platform equipped with a three-degree-of-freedom motorized system and a test object stage. To accurately replicate real-world scenarios, we utilized an array of sensors, including commonly available range-sensing cameras and commercial augmented reality headsets, notably the Intel RealSense L515 LiDAR camera, integrated into the motion control system. The paper elaborates on the system architecture and the automated data collection process. We discuss several evaluation studies performed with this setup, examining factors such as spatial resolution, Z-accuracy, and pixel-to-pixel correlation. These studies provide valuable insights into the precision and reliability of these technologies in simulated healthcare environments.
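As a rough illustration of the Z-accuracy metric discussed in this abstract, the sketch below compares simulated depth readings over a flat target region against a known ground-truth distance; the numbers, file-free setup, and region-of-interest handling are assumptions, not details of the described platform.

```python
import numpy as np

# Simulated noisy depth readings (mm) over a flat target plate at 1 m;
# a real run would take this ROI from the depth camera's frame.
rng = np.random.default_rng(7)
gt_mm = 1000.0
depth_roi = gt_mm + rng.normal(0, 5, (50, 50))

z_error = np.median(depth_roi) - gt_mm   # signed Z-accuracy (mm)
noise = np.std(depth_roi)                # spatial noise over the ROI (mm)
print(f"Z-accuracy: {z_error:+.2f} mm, noise: {noise:.2f} mm")
```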


Augmented Reality , Humans , Virtual Reality
3.
Curr Opin Obstet Gynecol ; 36(4): 255-259, 2024 Aug 01.
Article En | MEDLINE | ID: mdl-38869434

PURPOSE OF REVIEW: Artificial intelligence (AI) is now integrated into our daily life. It has also been incorporated into medicine, with algorithms to diagnose, recommend treatment options, and estimate prognosis. RECENT FINDINGS: AI in surgery differs from the virtual AI used for clinical applications. Physical AI, in the form of computer vision and augmented reality, is used to improve surgeons' skills, performance, and patient outcomes. SUMMARY: Several applications of AI and augmented reality are utilized in gynecologic surgery. AI has potential uses in all phases of surgery: preoperatively, intraoperatively, and postoperatively. Its current benefits are improved accuracy, greater surgical precision, and reduced complications.


Artificial Intelligence , Gynecologic Surgical Procedures , Humans , Female , Gynecologic Surgical Procedures/methods , Augmented Reality , Surgery, Computer-Assisted/methods
4.
Int J Med Robot ; 20(3): e2649, 2024 Jun.
Article En | MEDLINE | ID: mdl-38847242

BACKGROUND: Endoscopic retrograde cholangiopancreatography is a standard surgical treatment for gallbladder and pancreatic diseases. However, the procedure is high-risk and requires sufficient surgical experience and skill. METHODS: (1) A simultaneous localisation and mapping technique reconstructs the surgical environment. (2) The preoperative 3D model is transformed into the intraoperative video environment to implement multi-modal fusion. (3) A framework for virtual-to-real projection based on hand-eye alignment projects the 3D model onto the imaging plane of the camera using position data from electromagnetic sensors. RESULTS: Our AR-assisted navigation system can accurately guide physicians, with registration error restricted to under 5 mm and a projection error of 5.76 ± 2.13, while the intubation procedure runs at 30 frames per second. CONCLUSIONS: Together with clinical validation and user studies, both the quantitative and qualitative results indicate that our navigation system has the potential to be highly useful in clinical practice.
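The virtual-to-real projection step in (3) amounts to a standard pinhole-camera projection once electromagnetic tracking and hand-eye calibration supply a model-to-camera pose. A minimal sketch under that assumption; function names and intrinsics are illustrative, not the authors' code.

```python
import numpy as np

def project_model(points_3d, R, t, K):
    """Project Nx3 model points into pixel coordinates with a pinhole camera."""
    cam = (R @ points_3d.T + t.reshape(3, 1)).T   # model -> camera frame
    cam = cam[cam[:, 2] > 0]                      # keep points in front of the camera
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]                 # perspective divide

# Example with an identity pose and simple intrinsics (all values assumed)
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
pts = np.array([[0.0, 0.0, 100.0], [10.0, -5.0, 120.0]])
print(project_model(pts, np.eye(3), np.zeros(3), K))
```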


Augmented Reality , Cholangiopancreatography, Endoscopic Retrograde , Phantoms, Imaging , Surgery, Computer-Assisted , Humans , Cholangiopancreatography, Endoscopic Retrograde/methods , Surgery, Computer-Assisted/methods , Surgery, Computer-Assisted/instrumentation , Imaging, Three-Dimensional/methods , Surgical Navigation Systems , Robotic Surgical Procedures/methods , Robotic Surgical Procedures/instrumentation , Reproducibility of Results
5.
J Acoust Soc Am ; 155(6): 3715-3729, 2024 Jun 01.
Article En | MEDLINE | ID: mdl-38847595

Emerging technologies of virtual reality (VR) and augmented reality (AR) are enhancing soundscape research, potentially producing new insights by enabling controlled conditions while preserving the context of a virtual gestalt within the soundscape concept. This study explored the ecological validity of virtual environments for subjective evaluations in soundscape research, focusing on the authenticity of virtual audio-visual environments for reproducibility. Different technologies for creating and reproducing virtual environments were compared, including field recording, simulated VR, AR, and audio-only presentation, in two audio-visual reproduction settings: a head-mounted display with head-tracked headphones and a VR lab with head-locked headphones. Via a series of soundwalk- and lab-based experiments, the results indicate that field recording technologies provided the most authentic audio-visual environments, followed by AR, simulated VR, and audio-only approaches. The authenticity level influenced subjective evaluations of virtual environments, e.g., arousal/eventfulness and pleasantness. The field recording and AR-based technologies closely matched the on-site soundwalk ratings in arousal, while the other approaches scored lower. All the approaches had significantly lower pleasantness ratings compared to on-site evaluations. The choice of audio-visual reproduction technology did not significantly impact the evaluations. Overall, the results suggest that virtual environments with high authenticity can be useful for future soundscape research and design.


Auditory Perception , Virtual Reality , Humans , Female , Male , Adult , Young Adult , Augmented Reality , Acoustic Stimulation , Sound , Reproducibility of Results
6.
Sci Rep ; 14(1): 13579, 2024 06 12.
Article En | MEDLINE | ID: mdl-38866827

The concept of an innovative human-machine interface and interaction modes based on virtual and augmented reality technologies for airport control towers has been developed with the aim of increasing the performance and situational awareness of air traffic control operators. By presenting digital information through see-through head-mounted displays superimposed over the out-of-the-tower view, the proposed interface should encourage controllers to operate in a head-up position and therefore reduce the number of switches between head-up and head-down positions, even in low-visibility conditions. This paper introduces the developed interface and describes the exercises conducted to validate the technical solutions, focusing on the simulation platform and the technologies exploited, and demonstrates how virtual and augmented reality, along with additional features such as an adaptive human-machine interface, multimodal interaction, and attention guidance, enable more natural and effective interaction in the control tower. The results of the human-in-the-loop real-time validation exercises show that the prototype concept is feasible from both an operational and a technical perspective; the solution supports air traffic controllers in working head-up more than head-down even in low-visibility operational scenarios, and lowers reaction times in critical or alerting situations, with a positive impact on user performance. While showcasing promising results, this study also identifies certain limitations and opportunities for refinement, aimed at further optimising the efficacy and usability of the proposed interface.


Airports , Augmented Reality , Man-Machine Systems , User-Computer Interface , Humans , Virtual Reality , Aviation
7.
J Vis Exp ; (207)2024 May 24.
Article En | MEDLINE | ID: mdl-38856206

This protocol helps assess the accuracy and workflow of an augmented reality (AR) hybrid navigation system using the Magic Leap head-mounted display (HMD) for minimally invasive pedicle screw placement. The cadaveric porcine specimens were placed on a surgical table and draped with sterile covers. The levels of interest were identified using fluoroscopy, and a dynamic reference frame was attached to the spinous process of a vertebra in the region of interest. Cone beam computerized tomography (CBCT) was performed, and a 3D rendering was automatically generated, which was used for the subsequent planning of the pedicle screw placements. Each surgeon was fitted with an HMD that was individually eye-calibrated and connected to the spinal navigation system. Navigated instruments, tracked by the navigation system and displayed in 2D and 3D in the HMD, were used for 33 pedicle cannulations, each with a diameter of 4.5 mm. Postprocedural CBCT scans were assessed by an independent reviewer to measure the technical (deviation from the planned path) and clinical (Gertzbein grade) accuracy of each cannulation. The navigation time for each cannulation was measured. The technical accuracy was 1.0 mm ± 0.5 mm at the entry point and 0.8 mm ± 0.1 mm at the target. The angular deviation was 1.5° ± 0.6°, and the mean insertion time per cannulation was 141 s ± 71 s. The clinical accuracy was 100% according to the Gertzbein grading scale (32 grade 0; 1 grade 1). When used for minimally invasive pedicle cannulations in a porcine model, submillimeter technical accuracy and 100% clinical accuracy could be achieved with this protocol.
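The technical-accuracy figures reported here (entry and target deviation, angular deviation) can be computed from planned and actual cannulation endpoints as in the sketch below; the point values are invented for illustration and are not data from the protocol.

```python
import numpy as np

def cannulation_errors(p_entry, p_target, a_entry, a_target):
    """Entry/target deviations (mm) and angular deviation (deg) of one cannulation."""
    entry_err = np.linalg.norm(a_entry - p_entry)
    target_err = np.linalg.norm(a_target - p_target)
    v_plan = (p_target - p_entry) / np.linalg.norm(p_target - p_entry)
    v_act = (a_target - a_entry) / np.linalg.norm(a_target - a_entry)
    angle = np.degrees(np.arccos(np.clip(v_plan @ v_act, -1.0, 1.0)))
    return entry_err, target_err, angle

# Hypothetical planned vs. actual endpoints in navigation coordinates (mm)
plan_in, plan_out = np.array([0.0, 0, 0]), np.array([0.0, 0, 40])
act_in, act_out = np.array([0.8, 0.5, 0]), np.array([0.5, 0.6, 40])
print(cannulation_errors(plan_in, plan_out, act_in, act_out))
```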


Augmented Reality , Pedicle Screws , Animals , Swine , Surgery, Computer-Assisted/methods , Cone-Beam Computed Tomography/methods , Models, Animal
8.
JBJS Case Connect ; 14(2)2024 Apr 01.
Article En | MEDLINE | ID: mdl-38913787

CASE: A 32-year-old woman with a history of hip fusion presented with significant lower back, hip, and knee pain as well as severely limited hip mobility and function. Single-stage fusion takedown and conversion to total hip arthroplasty (THA) was performed using augmented reality navigation. At 1 year, the patient was pain-free with improved function. This study is the first to report the technique and outcomes of surgical fusion conversion to THA using mixed reality navigation. CONCLUSION: Mixed reality navigation in complex conversion THA can be useful for identifying the patient's true acetabulum and for patient-specific acetabular component placement to maximize outcomes.


Arthroplasty, Replacement, Hip , Humans , Female , Adult , Augmented Reality , Surgery, Computer-Assisted/methods , Hip Joint/surgery , Hip Joint/diagnostic imaging
9.
Math Biosci Eng ; 21(5): 5947-5971, 2024 May 15.
Article En | MEDLINE | ID: mdl-38872565

The technology of robot-assisted prostate seed implantation has developed rapidly. However, problems remain, such as non-intuitive visualization and complicated robot control. To improve the intelligence and visualization of the procedure, a voice-control technology for a prostate seed implantation robot in an augmented reality environment is proposed. First, the MRI image of the prostate was denoised and segmented, and a three-dimensional model of the prostate and its surrounding tissues was reconstructed by surface rendering. Combined with a holographic application, the augmented reality system for prostate seed implantation was built. An improved singular value decomposition (SVD) three-dimensional registration algorithm based on the iterative closest point was proposed, and three-dimensional registration experiments verified that the algorithm effectively improves registration accuracy. A fusion algorithm based on spectral subtraction and a BP neural network was also proposed. The experimental results showed that the average delay of the fusion algorithm was 1.314 s and the overall response time of the integrated system was 1.5 s. The fusion algorithm effectively improves the reliability of the voice-control system, and the integrated system meets the responsiveness requirements of prostate seed implantation.
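For orientation, here is a minimal sketch of the SVD-based iterative-closest-point registration this abstract builds on, with a brute-force nearest-neighbour step; this is the textbook algorithm, not the authors' improved variant.

```python
import numpy as np

def kabsch(P, Q):
    """Best-fit rotation/translation mapping point set P onto Q via SVD."""
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(source, target, iters=20):
    """Classic ICP: alternate nearest-neighbour matching and SVD alignment."""
    R_total, t_total = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        matched = target[d.argmin(axis=1)]        # nearest target per source point
        R, t = kabsch(src, matched)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```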


Algorithms , Augmented Reality , Magnetic Resonance Imaging , Neural Networks, Computer , Prostate , Prostatic Neoplasms , Robotics , Humans , Male , Robotics/instrumentation , Magnetic Resonance Imaging/methods , Prostatic Neoplasms/diagnostic imaging , Prostate/diagnostic imaging , Imaging, Three-Dimensional , Voice , Robotic Surgical Procedures/instrumentation , Robotic Surgical Procedures/methods , Holography/methods , Holography/instrumentation , Brachytherapy/instrumentation , Reproducibility of Results
10.
Medicina (Kaunas) ; 60(6)2024 May 27.
Article En | MEDLINE | ID: mdl-38929491

Despite advances in surgical innovation, C1-C2 fixation remains challenging due to the risks of screw malposition and vertebral artery (VA) injury. Traditional image-based navigation, while useful, often demands that surgeons frequently shift their attention to external monitors, potentially causing distractions. In this article, we introduce a microscope-based augmented reality (AR) navigation system that projects both anatomical information and real-time navigation images directly onto the surgical field. In the present case report, we discuss a 37-year-old woman who suffered from os odontoideum with C1-C2 subluxation. Employing AR-assisted navigation, the patient underwent successful posterior instrumentation of C1-C2. The integrated AR system offers direct visualization, potentially minimizing surgical distractions. In our opinion, as AR technology advances, its adoption in surgical practice and education is anticipated to expand.


Augmented Reality , Humans , Female , Adult , Atlanto-Axial Joint/surgery , Atlanto-Axial Joint/injuries , Spinal Fusion/methods , Spinal Fusion/instrumentation , Odontoid Process/surgery , Odontoid Process/injuries , Odontoid Process/diagnostic imaging , Surgery, Computer-Assisted/methods
11.
Medicina (Kaunas) ; 60(6)2024 May 28.
Article En | MEDLINE | ID: mdl-38929504

Background and Objectives: The aim of this study is to present our experience in the surgical treatment of calcified thoracic herniated disc disease via a transthoracic approach in the lateral position, using intraoperative computed tomography (iCT) and augmented reality (AR). Materials and Methods: All patients who underwent surgery for a calcified thoracic herniated disc via a transthoracic transpleural approach at our department using iCT and microscope-based AR were included in the study. Results: Six consecutive patients (five female, median age 53.2 ± 6.4 years) with calcified herniated thoracic discs (two patients at the Th 10-11 level, two at Th 7-8, one at Th 9-10, one at Th 11-12) were included in this case series. Indications for surgery were evidence of a calcified thoracic disc on magnetic resonance imaging (MRI) and CT with spinal canal stenosis of >50% of the diameter, intractable pain and neurological deficits, and MRI signs of myelopathy. Five patients had paraparesis and ataxia; one patient had no deficit. All surgeries were performed in the lateral position via a transthoracic transpleural approach (five from the left side). CT for automatic registration was performed following placement of the reference array, with high registration accuracy. Microscope-based AR was used, with segmented structures of interest such as the vertebral bodies, disc space, herniated disc, and dural sac. Mean operative time was 277.5 ± 156 min. The use of AR improved orientation in the operative field, tailored the resection of the herniated disc, and aided identification of the course of the dural sac. A control iCT scan confirmed complete resection in five patients and incomplete resection of the herniated disc in one patient. Complications occurred in one patient: a postoperative hematoma and a wound-healing deficit. Mean follow-up was 22.9 ± 16.5 months. Five patients improved following surgery, and the patient who had no deficits remained unchanged. Conclusions: In patients with calcified thoracic disc disease with compression of the dural sac and myelopathy, resection via a transthoracic transpleural approach is the optimal surgical therapy. The use of iCT-based registration and microscope-based AR significantly improved orientation in the operative field and facilitated safe resection of these lesions.


Augmented Reality , Intervertebral Disc Displacement , Thoracic Vertebrae , Tomography, X-Ray Computed , Humans , Female , Middle Aged , Intervertebral Disc Displacement/surgery , Intervertebral Disc Displacement/diagnostic imaging , Male , Tomography, X-Ray Computed/methods , Thoracic Vertebrae/surgery , Thoracic Vertebrae/diagnostic imaging , Calcinosis/surgery , Calcinosis/diagnostic imaging , Adult , Microscopy/methods , Treatment Outcome , Magnetic Resonance Imaging/methods , Intervertebral Disc Degeneration
12.
Beijing Da Xue Xue Bao Yi Xue Ban ; 56(3): 541-545, 2024 Jun 18.
Article Zh | MEDLINE | ID: mdl-38864142

OBJECTIVE: To evaluate the outcome of augmented reality technology in learning oral and maxillofacial anatomy. METHODS: This study was conducted on undergraduate students at Peking University School of Stomatology who were learning oral and maxillofacial anatomy. Image data were selected according to the experiment content, and important blood vessels and bone structures, such as the upper and lower jaws and the neck arteries and veins, were reconstructed in 3D (three-dimensional) by digital software to generate experimental models; the reconstructed models were encrypted and stored in the cloud. The QR (quick response) code corresponding to each 3D model was scanned with a networked mobile device to obtain augmented reality images that assisted instructors in teaching and subjects in learning. Augmented reality technology was applied in both the theoretical explanation and the cadaveric dissection. Subjects' feedback was collected with a post-class questionnaire to evaluate the effectiveness of augmented reality technology-assisted learning. RESULTS: In the study, 83 undergraduate students were included as subjects. Augmented reality technology was successfully applied to the study of oral and maxillofacial anatomy. All subjects could scan the QR code with a connected mobile device to retrieve the 3D anatomy model from the cloud and zoom in on, zoom out of, and rotate the model on the device. Augmented reality technology could provide personalized 3D models based on learners' needs and abilities. The results of a Likert scale showed that augmented reality technology was highly recognized by the students (9.19 points) and received high scores for forming a three-dimensional sense and stimulating enthusiasm for learning (9.01 and 8.85 points, respectively). CONCLUSION: Augmented reality technology can realize three-dimensional visualization of the important structures of oral and maxillofacial anatomy and stimulate students' enthusiasm for learning. It can also help students build three-dimensional spatial imagination of the anatomy of the oral and maxillofacial region. The application of augmented reality technology achieves a favorable effect in learning oral and maxillofacial anatomy.
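The QR-code step described here can be reproduced with OpenCV's built-in detector; the image file name and the printed URL handling below are illustrative assumptions, not the study's implementation.

```python
import cv2

# Decode a QR code photographed from a worksheet; the payload would be the
# cloud URL of the corresponding 3D anatomy model (e.g., a glTF/OBJ file).
detector = cv2.QRCodeDetector()
img = cv2.imread("qr_on_worksheet.png")      # hypothetical input image
url, points, _ = detector.detectAndDecode(img)
if url:
    print("Fetch 3D model from:", url)
```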


Augmented Reality , Imaging, Three-Dimensional , Humans , Imaging, Three-Dimensional/methods , Anatomy/education , Mouth/anatomy & histology , Software
13.
Sensors (Basel) ; 24(11)2024 Jun 06.
Article En | MEDLINE | ID: mdl-38894475

A significant percentage of bridges in the United States are serving beyond their 50-year design life, and many of them are in poor condition, making them vulnerable to fatigue cracks that can result in catastrophic failure. However, current fatigue crack inspection practice based on human vision is time-consuming, labor-intensive, and prone to error. We present a novel human-centered bridge inspection methodology to enhance the efficiency and accuracy of fatigue crack detection by employing advanced technologies including computer vision and augmented reality (AR). In particular, a computer vision-based algorithm is developed to enable near-real-time fatigue crack detection by analyzing structural surface motion in a short video recorded by a moving camera of the AR headset. The approach monitors structural surfaces by tracking feature points and measuring variations in distances between feature point pairs to recognize the motion pattern associated with the crack opening and closing. Measuring distance changes between feature points, rather than their displacement changes as in earlier versions of the approach, eliminates the need for camera motion compensation and enables reliable and computationally efficient fatigue crack detection using the nonstationary AR headset. In addition, an AR environment is created and integrated with the computer vision algorithm. The crack detection results are transmitted to the AR headset worn by the bridge inspector, where they are converted into holograms and anchored on the bridge surface in the 3D real-world environment. The AR environment also provides virtual menus to support human-in-the-loop decision-making to determine optimal crack detection parameters. This human-centered approach with improved visualization and human-machine collaboration aids the inspector in making well-informed decisions in the field in a near-real-time fashion. The proposed crack detection method is comprehensively assessed using two laboratory test setups for both in-plane and out-of-plane fatigue cracks. Finally, using the integrated AR environment, a human-centered bridge inspection is conducted to demonstrate the efficacy and potential of the proposed methodology.
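A hedged sketch of the distance-based indicator this abstract describes: track feature points across frames and watch for cyclic variation in the separation of point pairs, which is insensitive to rigid camera motion. The file name, pair selection, and pixel threshold are assumptions, not the authors' parameters.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("bridge_clip.mp4")        # hypothetical AR-headset clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                              qualityLevel=0.01, minDistance=7)

pairs = [(0, 1), (2, 3), (4, 5)]   # monitored feature pairs (assumes >= 6 points)
history = {p: [] for p in pairs}

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    for (i, j) in pairs:
        if status[i] and status[j]:
            # Pair distance is unchanged by rigid camera motion but oscillates
            # if a crack between the two points opens and closes under load.
            history[(i, j)].append(np.linalg.norm(nxt[i, 0] - nxt[j, 0]))
    prev_gray, pts = gray, nxt

for p, d in history.items():
    if d and (max(d) - min(d)) > 2.0:            # threshold in pixels (assumed)
        print("possible crack between features", p)
```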


Algorithms , Augmented Reality , Humans , Image Processing, Computer-Assisted/methods
14.
BMC Med Educ ; 24(1): 701, 2024 Jun 27.
Article En | MEDLINE | ID: mdl-38937764

BACKGROUND: Clinical teaching during encounters with real patients lies at the heart of medical education. Mixed reality (MR) using a Microsoft HoloLens 2 (HL2) offers the potential to address several challenges, including enabling remote learning, decreasing infection-control risks, facilitating greater access to medical specialties, and enhancing learning by vertical integration of basic principles to clinical application. We aimed to assess the feasibility and usability of MR using the HL2 for teaching in a busy, tertiary referral university hospital. METHODS: This prospective observational study examined the use of the HL2 to facilitate a live two-way broadcast of a clinician-patient encounter to remotely situated third- and fourth-year medical students. System Usability Scale (SUS) scores were elicited from participating medical students, the clinician, and the technician. Feedback was also elicited from participating patients. A modified Evaluation of Technology-Enhanced Learning Materials: Learner Perceptions Questionnaire (mETELM) was completed by medical students and patients. RESULTS: Forty-seven medical students participated in this mixed-methods prospective observational study, undertaken in the Day of Surgery Assessment Unit. The mean SUS score for medical students was 71.4 (SD 15.4), for the clinician 75, and for the technician 70, indicating good usability. The mETELM questionnaire, using a 7-point Likert scale, showed MR was perceived to be more beneficial than a PowerPoint presentation (Median = 7, Range 6-7). Opinion amongst the student cohort was divided as to whether the MR tutorial was as beneficial for learning as a live patient encounter would have been (Median = 5, Range 3-6). Students were positive about the prospect of incorporating MR in future tutorials (Median = 7, Range 5-7). The patients' mETELM results indicate the HL2 did not affect communication with the clinician (Median = 7, Range 7-7). The MR tutorial was preferred to a format based on small-group teaching at the bedside (Median = 6, Range 4-7). CONCLUSIONS: Our study findings indicate that MR teaching using the HL2 demonstrates good usability characteristics for providing education to medical students, at least in a clinical setting under conditions similar to those of our study. It is also feasible to deliver to remotely located students, although certain practical constraints apply, including Wi-Fi and audio quality.
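The SUS scores reported here follow the standard scoring rule (odd items contribute r - 1, even items contribute 5 - r, and the sum is scaled by 2.5); a small sketch with invented responses, not data from the study:

```python
def sus_score(responses):
    """System Usability Scale score from ten 1-5 Likert responses (0-100)."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# A hypothetical respondent's answers:
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # -> 75.0
```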


Feasibility Studies , Students, Medical , Humans , Prospective Studies , Students, Medical/psychology , Female , Male , Self Report , Education, Medical, Undergraduate/methods , Adult , Young Adult , Augmented Reality , Education, Distance , Surveys and Questionnaires
15.
Arch Orthop Trauma Surg ; 144(6): 2811-2821, 2024 Jun.
Article En | MEDLINE | ID: mdl-38704436

BACKGROUND: The use of portable navigation systems (PNS) in total hip arthroplasty (THA) has become increasingly prevalent, with second-generation PNS (sPNS) demonstrating superior accuracy in the lateral decubitus position compared to first-generation PNS. However, few studies have compared different types of sPNS. This study retrospectively compares the accuracy and clinical outcomes of two different types of sPNS instruments in patients undergoing THA. METHODS: A total of 158 eligible patients who underwent THA at a single institution between 2019 and 2022 were enrolled in the study, including 89 who used an accelerometer-based PNS with handheld infrared stereo cameras in the Naviswiss group (group N) and 69 who used an augmented reality (AR)-based PNS in the AR-Hip group (group A). Accuracy error, navigation error, clinical outcomes, and preparation time were compared between the two groups. RESULTS: Accuracy errors for inclination were comparable between group N (3.5° ± 3.0°) and group A (3.5° ± 3.1°) (p = 0.92). Accuracy errors for anteversion were comparable between group N (4.1° ± 3.1°) and group A (4.5° ± 4.0°) (p = 0.57). The navigation errors for inclination (group N: 2.9° ± 2.7°, group A: 3.0° ± 3.2°) and anteversion (group N: 4.3° ± 3.5°, group A: 4.3° ± 4.1°) were comparable between the groups (p = 0.86 and 0.94, respectively). The preparation time was shorter in group A than in group N (p = 0.036). There were no significant differences in operative time (p = 0.255), intraoperative blood loss (p = 0.387), or complications (p = 0.248) between the two groups. CONCLUSION: An accelerometer-based PNS using handheld infrared stereo cameras and an AR-based PNS provide similar accuracy during THA in the lateral decubitus position, with a mean error of 3°-4° for both inclination and anteversion, though the AR-based PNS required a shorter preparation time.
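A between-group comparison of accuracy errors like the one reported here would typically use an unpaired test; the sketch below runs Welch's t-test on simulated error samples matching the reported group sizes, means, and SDs, purely for illustration.

```python
import numpy as np
from scipy import stats

# Simulated absolute inclination errors (degrees); NOT the study's raw data
incl_N = np.random.default_rng(0).normal(3.5, 3.0, 89).clip(0)
incl_A = np.random.default_rng(1).normal(3.5, 3.1, 69).clip(0)

t, p = stats.ttest_ind(incl_N, incl_A, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.2f}")
```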


Arthroplasty, Replacement, Hip , Augmented Reality , Surgery, Computer-Assisted , Surgical Navigation Systems , Humans , Arthroplasty, Replacement, Hip/instrumentation , Arthroplasty, Replacement, Hip/methods , Retrospective Studies , Female , Male , Aged , Middle Aged , Surgery, Computer-Assisted/methods , Surgery, Computer-Assisted/instrumentation , Infrared Rays
16.
Comput Methods Programs Biomed ; 251: 108201, 2024 Jun.
Article En | MEDLINE | ID: mdl-38703719

BACKGROUND AND OBJECTIVE: Surgical robotics tends toward cognitive control architectures that provide a certain degree of autonomy to improve patient safety and surgical outcomes, while decreasing the cognitive load surgeons must dedicate to low-level decisions. Cognition needs workspace perception, which is an essential step towards automatic decision-making and task-planning capabilities. Robust and accurate detection and tracking in minimally invasive surgery suffers from limited visibility, occlusions, anatomical deformations, and camera movements. METHOD: This paper develops a robust methodology to detect and track anatomical structures in real time for use in automatic control of robotic systems and augmented reality. The work focuses on experimental validation in a highly challenging surgery: fetoscopic repair of Open Spina Bifida. The proposed method is based on two sequential steps: first, selection of relevant (contour) points using a Convolutional Neural Network, and second, reconstruction of the anatomical shape by means of deformable geometric primitives. RESULTS: The performance of the methodology was validated in different scenarios. Synthetic-scenario tests, designed for extreme validation conditions, demonstrate the safety margin offered by the methodology with respect to nominal conditions during surgery. Real-scenario experiments demonstrated the validity of the method in terms of accuracy, robustness, and computational efficiency. CONCLUSIONS: This paper presents robust anatomical structure detection in the presence of abrupt camera movements, severe occlusions, and deformations. Even though the paper focuses on a case study, Open Spina Bifida, the methodology is applicable to all anatomies whose contours can be approximated by geometric primitives. The methodology is designed to provide effective inputs to cognitive robotic control and augmented reality systems that require accurate tracking of sensitive anatomies.
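The second step, fitting deformable geometric primitives to CNN-selected contour points, can be illustrated with an ellipse fit; the synthetic points below stand in for CNN output and are not from the paper.

```python
import cv2
import numpy as np

# Synthetic stand-in for CNN-selected contour points: noisy samples on an ellipse
rng = np.random.default_rng(42)
theta = rng.uniform(0, 2 * np.pi, 100)
contour_pts = np.stack([160 + 60 * np.cos(theta),
                        120 + 40 * np.sin(theta)], axis=1)
contour_pts = (contour_pts + rng.normal(0, 2, contour_pts.shape)).astype(np.float32)

# Geometric primitive: a best-fit ellipse reconstructs the anatomy outline
# even when parts of the contour are occluded or missing.
(cx, cy), (major, minor), angle = cv2.fitEllipse(contour_pts)
print(f"centre=({cx:.1f},{cy:.1f}) axes=({major:.1f},{minor:.1f}) angle={angle:.1f}")
```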


Robotic Surgical Procedures , Humans , Robotic Surgical Procedures/methods , Neural Networks, Computer , Algorithms , Spinal Dysraphism/surgery , Spinal Dysraphism/diagnostic imaging , Image Processing, Computer-Assisted/methods , Robotics , Augmented Reality
17.
BMC Musculoskelet Disord ; 25(1): 396, 2024 May 21.
Article En | MEDLINE | ID: mdl-38773483

PURPOSE: This systematic review aims to provide an overview of the current knowledge on the role of the metaverse, augmented reality, and virtual reality in reverse shoulder arthroplasty. METHODS: A systematic review was performed following the PRISMA guidelines. A comprehensive review was conducted of the applications of the metaverse, augmented reality, and virtual reality in in-vivo intraoperative navigation, in the training of orthopedic residents, and in the latest innovations proposed in ex-vivo studies. RESULTS: A total of 22 articles were included in the review. Data on navigated shoulder arthroplasty were extracted from 14 articles, covering 793 patients treated with intraoperatively navigated rTSA or aTSA. Three randomized controlled trials (RCTs) reporting outcomes for a total of 53 orthopedic surgical residents and doctors receiving VR-based training for rTSA were also included, as were three studies reporting the latest VR- and AR-based rTSA applications and two proof-of-concept studies. CONCLUSIONS: The metaverse, augmented reality, and virtual reality present immense potential for the future of orthopedic surgery. As these technologies advance, it is crucial to conduct additional research, foster development, and seamlessly integrate them into surgical education to fully harness their capabilities and transform the field. This evolution promises enhanced accuracy, expanded training opportunities, and improved surgical planning capabilities.


Arthroplasty, Replacement, Shoulder , Augmented Reality , Virtual Reality , Humans , Arthroplasty, Replacement, Shoulder/methods , Surgery, Computer-Assisted/education , Surgery, Computer-Assisted/methods , Shoulder Joint/surgery
18.
Sci Rep ; 14(1): 10598, 2024 05 08.
Article En | MEDLINE | ID: mdl-38719940

A popular and widely suggested measure for assessing unilateral hand motor skills in stroke patients is the box and block test (BBT). Our study aimed to create an augmented reality-enhanced version of the BBT (AR-BBT) and evaluate its correlation with the original BBT for stroke patients. Following G-power analysis, clinical examination, and inclusion-exclusion criteria, 31 stroke patients were included in this study. The AR-BBT was developed using the Open Source Computer Vision Library (OpenCV). MediaPipe's hand-tracking library uses palm and hand-landmark machine learning models to detect and track hands. A computer and a depth camera were employed in the clinical evaluation of the AR-BBT, following the principles of the traditional BBT. A strong correlation was achieved between the number of blocks moved in the BBT and in the AR-BBT on the hemiplegic side (Pearson correlation = 0.918), with a statistically significant positive correlation (p = 0.000008). The conventional BBT is currently the preferred assessment method. However, our approach offers an advantage: an AR-BBT solution could remotely monitor the assessment of a home-based rehabilitation program and provide additional kinematic information on hand dexterity under AR environment conditions. Furthermore, it employs minimal hardware.
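A minimal sketch of the hand-tracking core assumed by the AR-BBT description, using MediaPipe and OpenCV; the midline-crossing count is a simplified stand-in for the full block-transfer logic, and the camera index and confidence threshold are assumptions, not the study's settings.

```python
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)
cap = cv2.VideoCapture(0)            # assumed webcam index
crossings, last_side = 0, None

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        wrist = result.multi_hand_landmarks[0].landmark[0]  # normalized [0,1] coords
        side = "left" if wrist.x < 0.5 else "right"         # box partition at midline
        if last_side == "left" and side == "right":
            crossings += 1                                  # candidate block transfer
        last_side = side
    if cv2.waitKey(1) == 27:         # Esc to stop
        break
cap.release()
print("partition crossings:", crossings)
```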


Augmented Reality , Hand , Machine Learning , Stroke Rehabilitation , Stroke , Humans , Male , Female , Middle Aged , Stroke/physiopathology , Aged , Hand/physiopathology , Hand/physiology , Stroke Rehabilitation/methods , Motor Skills/physiology , Adult
19.
Sci Rep ; 14(1): 11693, 2024 05 22.
Article En | MEDLINE | ID: mdl-38778168

Cybersickness remains a pivotal factor that impacts user experience in Augmented Reality (AR). Research probing into the relationship between AR reading tasks and cybersickness, particularly focusing on text display patterns and user characteristics, has been scant. Moreover, the influence of cybersickness on searching ability and the broader spectrum of user experience has not been rigorously tested. Recent investigations have aimed to pinpoint the variables that contribute to cybersickness during AR reading sessions. In one such study, 40 participants underwent a series of controlled experiments with randomized text display patterns, including variations in text speed and text movement modes. Post-experiment, participants completed a questionnaire that helped quantify their experiences and the degree of cybersickness encountered. The data highlighted that satiety, text speed, and text movement mode are significant contributors to cybersickness. When participants experienced higher levels of cybersickness, font color stood out as a particularly influential factor, whereas gender differences seemed to affect the onset of cybersickness more noticeably at lower levels. This study also drew attention to the impact of cybersickness on search ability within AR environments. It was noted that as cybersickness intensity increased, search ability was markedly compromised. In sum, the research underscores the importance of text display patterns and user characteristics, such as past AR experience, in understanding cybersickness and its detrimental effects on user experience and search ability, particularly under conditions of intense cybersickness.


Augmented Reality , Humans , Female , Male , Adult , Young Adult , Surveys and Questionnaires , Reading , User-Computer Interface
20.
An Acad Bras Cienc ; 96(1): e20220822, 2024.
Article En | MEDLINE | ID: mdl-38808808

Multirotor Aerial Vehicles are a special class of Unmanned Aerial Vehicles with many practical applications. The growing demand for this class of aircraft requires tools that speed up their development. Simulated environments have gained increasing importance, as they facilitate testing and prototyping of solutions; virtual environments allow real-time interaction with simulated models that behave like real systems. More recently, the use of Augmented Reality has allowed an increasing experience of immersion and integration between the virtual world and a real scenario. This work proposes the use of Augmented Reality technology and a simulated model of a multirotor to create an interactive flight environment, aiming to improve the user experience in the analysis of simulated models. For this purpose, a smartphone is adopted as the hardware platform, a game engine serves as the basis for developing the Augmented Reality application, which runs a numerical simulation of the flight dynamics and control system of a multirotor, and a game controller is adopted for user interaction. The resulting system demonstrates that Augmented Reality is a viable technology for expanding the possibilities of evaluating simulated systems.
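The abstract does not specify the numerical model beyond "flight dynamics and control system"; as a minimal stand-in, the sketch below integrates PD-controlled altitude dynamics of a quadrotor with explicit Euler steps, using illustrative mass, gains, and time step.

```python
# Assumed toy model: vertical dynamics m*z'' = thrust - m*g with a PD controller
m, g, dt = 1.2, 9.81, 0.01        # kg, m/s^2, s (all values assumed)
kp, kd = 8.0, 4.0                 # PD gains
z, vz, z_ref = 0.0, 0.0, 5.0      # state and altitude setpoint (m)

for step in range(1000):          # 10 s of simulated flight
    thrust = m * (g + kp * (z_ref - z) - kd * vz)
    az = thrust / m - g
    vz += az * dt
    z += vz * dt
print(f"altitude after 10 s: {z:.2f} m")   # converges toward the 5 m setpoint
```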


Aircraft , Augmented Reality , Aircraft/instrumentation , Computer Simulation , Humans , User-Computer Interface , Virtual Reality
...