Results 1 - 20 of 145
1.
Orthod Craniofac Res ; 2024 May 07.
Article in English | MEDLINE | ID: mdl-38712682

ABSTRACT

OBJECTIVE: We propose a method utilizing mixed reality (MR) goggles (HoloLens 2, Microsoft) to facilitate impacted canine alignment, as planning the traction direction and force delivery could benefit from 3D data visualization in MR. METHODS: Cone-beam CT scans featuring isometric resolution and a low noise-to-signal ratio were semi-automatically segmented in Inobitec software. The exported 3D mesh (OBJ file) was then optimized for the HoloLens 2. Using the Unreal Engine environment, we developed an application for the HoloLens 2, implementing the HoloLens SDK and UX Tools. Adjustable pointers were added for planning attachment placement, traction direction, and point of force application. The visualization was presented to participants of a course on impacted teeth treatment, followed by a 10-question survey addressing potential advantages (5-point scale: 1 = totally agree, 5 = totally disagree). RESULTS: Out of 38 respondents, 44.7% were orthodontists, 34.2% dentists, 15.8% dental students, and 5.3% dental technicians. Most respondents (44.7%) were between 35 and 44 years old, and only 1 (2.6%) respondent was 55-64 years old. Median answers were 'totally agree' for six questions and 'agree' for four questions (25th percentile 1, 75th percentile 2 in both cases). No correlation was found between age, profession, and responses. CONCLUSION: Our method generated substantial interest among clinicians. The initial responses affirm the potential benefits, supporting the continued exploration of MR-based techniques for the treatment of impacted teeth. However, a recommendation for widespread use awaits validation through clinical trials.
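The abstract does not specify how the exported OBJ mesh was "optimized for the HoloLens 2"; one plausible preprocessing step is polygon reduction so the untethered headset can render the model smoothly. The sketch below illustrates that idea with Open3D quadric decimation; the file names and the 100,000-triangle budget are assumptions, not values from the study.

```python
# Hypothetical sketch: reducing a segmented CBCT mesh to a polygon budget that an
# untethered headset can render smoothly. File names and the 100k-triangle target
# are assumptions, not values from the study.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("canine_segmentation.obj")
mesh.remove_duplicated_vertices()
mesh.remove_degenerate_triangles()

# Quadric edge-collapse decimation preserves overall shape while cutting
# the triangle count to something a mobile GPU can handle.
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
simplified.compute_vertex_normals()

o3d.io.write_triangle_mesh("canine_segmentation_hololens.obj", simplified)
print(f"{len(mesh.triangles)} -> {len(simplified.triangles)} triangles")
```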

2.
Neurosurg Focus ; 56(1): E13, 2024 01.
Article in English | MEDLINE | ID: mdl-38163338

ABSTRACT

OBJECTIVE: The objective of this study was to analyze the potential and convenience of using mixed reality as a teaching tool for craniovertebral junction (CVJ) anomaly pathoanatomy. METHODS: CT and CT angiography images of 2 patients with CVJ anomalies were used to construct mixed reality models in the HoloMedicine application on the HoloLens 2 headset, resulting in four viewing stations. Twenty-two participants were randomly allocated into two groups, with each participant rotating through all stations for 90 seconds, each in a different order based on their group. At every station, objective questions evaluating the understanding of CVJ pathoanatomy were answered. At the end, subjective opinion on the user experience of mixed reality was provided using a 5-point Likert scale. The objective performance of the two viewing modes was compared, and a correlation between performance and participant experience was sought. Subjective feedback was compiled and correlated with experience. RESULTS: In both groups, there was a significant improvement in median (interquartile range [IQR]) objective performance with mixed reality compared with DICOM: 1) group A: case 1, median 6 (IQR 6-7) versus 5 (IQR 3-6), p = 0.009; case 2, median 6 (IQR 6-7) versus 5 (IQR 3-6), p = 0.02; 2) group B: case 1, median 6 (IQR 5-7) versus 4 (IQR 2-5), p = 0.04; case 2, median 6 (IQR 6-7) versus 5 (IQR 3-7), p = 0.03. There was significantly higher improvement in less experienced participants in both groups for both cases: 1) group A: case 1, r = -0.8665, p = 0.0005; case 2, r = -0.8002, p = 0.03; 2) group B: case 1, r = -0.6977, p = 0.01; case 2, r = -0.7417, p = 0.009. Subjectively, mixed reality was easy to use, with less disorientation due to the visible background, and it was believed to be a useful teaching tool. CONCLUSIONS: Mixed reality is an effective teaching tool for CVJ pathoanatomy, particularly for young neurosurgeons and trainees. The versatility of mixed reality and the intuitiveness of the user experience offer many potential applications, including training, intraoperative guidance, patient counseling, and individualized medicine; consequently, mixed reality has the potential to transform neurosurgery.
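As a purely illustrative companion to the statistics reported above (paired MR-versus-DICOM comparisons summarized as medians and IQRs, plus negative correlations between improvement and experience), the sketch below shows how such an analysis could be run with SciPy. The arrays are placeholders, not study data.

```python
# Illustrative sketch only: paired comparison of objective scores (MR vs. DICOM)
# and correlation of improvement with experience, mirroring the kind of statistics
# reported above. The arrays below are placeholders, not study data.
import numpy as np
from scipy.stats import wilcoxon, spearmanr

dicom_scores = np.array([5, 3, 6, 4, 5, 2, 6, 5, 4, 3, 5])
mr_scores    = np.array([6, 6, 7, 6, 6, 5, 7, 6, 6, 5, 6])
experience_years = np.array([1, 2, 1, 3, 5, 1, 8, 4, 2, 1, 6])

stat, p_paired = wilcoxon(mr_scores, dicom_scores)       # paired, non-parametric
improvement = mr_scores - dicom_scores
rho, p_corr = spearmanr(experience_years, improvement)   # rank correlation with experience

print(f"Wilcoxon p = {p_paired:.3f}")
print(f"Spearman rho = {rho:.2f}, p = {p_corr:.3f}")
```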


Subject(s)
Augmented Reality, Neurosurgery, Humans, Neurosurgical Procedures/methods, Neurosurgeons, Clinical Competence
3.
Neurosurg Focus ; 56(1): E11, 2024 01.
Article in English | MEDLINE | ID: mdl-38163351

ABSTRACT

OBJECTIVE: The traditional freehand placement of an external ventricular drain (EVD) relies on empirical craniometric landmarks to guide the craniostomy and subsequent passage of the EVD catheter. The diameter and trajectory of the craniostomy physically limit the possible trajectories that can be achieved during the passage of the catheter. In this study, the authors implemented a mixed reality-guided craniostomy procedure to evaluate the benefit of an optimally drilled craniostomy to the accurate placement of the catheter. METHODS: Optical marker-based tracking using an OptiTrack system was used to register the brain ventricular hologram and drilling guidance for craniostomy using a HoloLens 2 mixed reality headset. A patient-specific 3D-printed skull phantom embedded with intracranial camera sensors was developed to automatically calculate the EVD accuracy for evaluation. User trials consisted of one blind and one mixed reality-assisted craniostomy followed by a routine, unguided EVD catheter placement for each of two different drill bit sizes. RESULTS: A total of 49 participants were included in the study (mean age 23.4 years, 59.2% female). The mean distance of the catheter tip from the target improved from 18.6 ± 12.5 mm to 12.7 ± 11.3 mm (p = 0.0008) using mixed reality guidance for trials with a large drill bit and from 19.3 ± 12.7 mm to 10.1 ± 8.4 mm with a small drill bit (p < 0.0001). Accuracy using mixed reality was improved using a smaller diameter drill bit compared with a larger bit (p = 0.039). Overall, the majority of the participants were positive about the helpfulness of mixed reality guidance and the overall mixed reality experience. CONCLUSIONS: Appropriate indications and use cases for the application of mixed reality guidance to neurosurgical procedures remain an area of active inquiry. While prior studies have demonstrated the benefit of mixed reality-guided catheter placement using predrilled craniostomies, the authors demonstrate that real-time quantitative and visual feedback of a mixed reality-guided craniostomy procedure can independently improve procedural accuracy and represents an important tool for trainee education and eventual clinical implementation.
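The phantom's embedded camera sensors "automatically calculate the EVD accuracy"; the underlying metric is presumably the Euclidean distance from the catheter tip to the intended ventricular target. The sketch below illustrates that computation and a guided-versus-unguided comparison; all coordinates are hypothetical.

```python
# Minimal sketch of the accuracy metric implied above: the Euclidean distance from
# the catheter tip to the intended ventricular target, compared between unguided
# and MR-guided trials. All coordinates are hypothetical, not phantom data.
import numpy as np
from scipy.stats import ttest_rel

target = np.array([32.0, 18.5, 41.0])  # assumed target position in phantom frame (mm)

# One unguided and one guided attempt per (hypothetical) participant.
unguided_tips = np.array([[45.1, 25.3, 50.2], [40.7, 30.1, 44.8], [52.3, 20.0, 49.5]])
guided_tips   = np.array([[38.2, 21.0, 45.3], [35.5, 19.8, 43.1], [39.9, 22.4, 46.0]])

unguided_err = np.linalg.norm(unguided_tips - target, axis=1)
guided_err   = np.linalg.norm(guided_tips - target, axis=1)

print(f"unguided: {unguided_err.mean():.1f} ± {unguided_err.std(ddof=1):.1f} mm")
print(f"guided:   {guided_err.mean():.1f} ± {guided_err.std(ddof=1):.1f} mm")
print("paired t-test p =", ttest_rel(unguided_err, guided_err).pvalue)
```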


Subject(s)
Augmented Reality, Humans, Female, Young Adult, Adult, Male, Drainage/methods, Neurosurgical Procedures/methods, Cerebral Ventricles/diagnostic imaging, Cerebral Ventricles/surgery, Catheters
4.
BMC Med Educ ; 24(1): 498, 2024 May 04.
Article in English | MEDLINE | ID: mdl-38704522

ABSTRACT

BACKGROUND: Mixed reality offers potential educational advantages in the delivery of clinical teaching. Holographic artefacts can be rendered within a shared learning environment using devices such as the Microsoft HoloLens 2. In addition to facilitating remote access to clinical events, mixed reality may provide a means of sharing mental models, including the vertical and horizontal integration of curricular elements at the bedside. This study aimed to evaluate the feasibility of delivering clinical tutorials using the Microsoft HoloLens 2 and the learning efficacy achieved. METHODS: Following receipt of institutional ethical approval, tutorials on preoperative anaesthetic history taking and upper airway examination were facilitated by a tutor who wore the HoloLens device. The tutor interacted face to face with a patient, and two-way audio-visual interaction was facilitated using the HoloLens 2 and Microsoft Teams with groups of students who were located in a separate tutorial room. Holographic functions were employed by the tutor. The tutor completed the System Usability Scale; the tutor, technical facilitator, patients, and students provided quantitative and qualitative feedback; and three students participated in semi-structured feedback interviews. Students completed pre- and post-tutorial, and end-of-year examinations on the tutorial topics. RESULTS: Twelve patients and 78 students participated across 12 separate tutorials. Five students did not complete the examinations and were excluded from efficacy calculations. Student feedback contained 90 positive comments, including the technology's ability to broadcast the tutor's point-of-vision, and 62 negative comments, where students noted issues with the audio-visual quality and concerns that the tutorial was not as beneficial as traditional in-person clinical tutorials. The technology and tutorial structure were viewed favourably by the tutor, facilitator, and patients. Significant improvement was observed between students' pre- and post-tutorial MCQ scores (mean 59.2% vs 84.7%, p < 0.001). CONCLUSIONS: This study demonstrates the feasibility of using the HoloLens 2 to facilitate remote bedside tutorials which incorporate holographic learning artefacts. Students' examination performance supports substantial learning of the tutorial topics. The tutorial structure was agreeable to students, patients, and tutor. Our results support the feasibility of offering effective clinical teaching and learning opportunities using the HoloLens 2. However, the technical limitations and costs of the device are significant, and further research is required to assess the effectiveness of this tutorial format against in-person tutorials before wider rollout of this technology can be recommended.


Asunto(s)
Estudiantes de Medicina , Humanos , Masculino , Femenino , Instrucción por Computador/métodos , Educación de Pregrado en Medicina/métodos , Estudios de Factibilidad , Evaluación Educacional , Competencia Clínica , Adulto , Holografía , Anamnesis
5.
BMC Med Educ ; 24(1): 701, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38937764

ABSTRACT

BACKGROUND: Clinical teaching during encounters with real patients lies at the heart of medical education. Mixed reality (MR) using a Microsoft HoloLens 2 (HL2) offers the potential to address several challenges, including enabling remote learning, decreasing infection control risks, facilitating greater access to medical specialties, and enhancing learning through vertical integration of basic principles with clinical application. We aimed to assess the feasibility and usability of MR using the HL2 for teaching in a busy, tertiary referral university hospital. METHODS: This prospective observational study examined the use of the HL2 to facilitate a live two-way broadcast of a clinician-patient encounter to remotely situated third- and fourth-year medical students. System Usability Scale (SUS) scores were elicited from participating medical students, the clinician, and the technician. Feedback was also elicited from participating patients. A modified Evaluation of Technology-Enhanced Learning Materials: Learner Perceptions Questionnaire (mETELM) was completed by medical students and patients. RESULTS: This was a mixed-methods prospective, observational study, undertaken in the Day of Surgery Assessment Unit. Forty-seven medical students participated. The mean SUS score for medical students was 71.4 (SD 15.4); the clinician (SUS = 75) and technician (SUS = 70) scores also indicated good usability. The mETELM questionnaire, using a 7-point Likert scale, demonstrated that MR was perceived to be more beneficial than a PowerPoint presentation (Median = 7, Range 6-7). Opinion amongst the student cohort was divided as to whether the MR tutorial was as beneficial for learning as a live patient encounter would have been (Median = 5, Range 3-6). Students were positive about the prospect of incorporating MR in future tutorials (Median = 7, Range 5-7). The patients' mETELM results indicate the HL2 did not affect communication with the clinician (Median = 7, Range 7-7). The MR tutorial was preferred to a format based on small group teaching at the bedside (Median = 6, Range 4-7). CONCLUSIONS: Our study findings indicate that MR teaching using the HL2 demonstrates good usability characteristics for providing education to medical students, at least in a clinical setting and under conditions similar to those of our study. It is also feasible to deliver such teaching to remotely located students, although certain practical constraints apply, including Wi-Fi and audio quality.
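For context on the SUS values quoted above (e.g., 71.4 for students, 75 for the clinician), the standard System Usability Scale scoring maps ten 1-5 responses onto a 0-100 scale: odd items contribute (response - 1), even items contribute (5 - response), and the sum is multiplied by 2.5. A small sketch with hypothetical responses:

```python
# Standard SUS scoring: each odd item contributes (response - 1), each even item
# (5 - response); the sum is multiplied by 2.5 to give a 0-100 score. The example
# responses are hypothetical, not data from this study.
def sus_score(responses):
    """responses: ten Likert answers (1-5), in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS requires ten answers in the range 1-5")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 3]))  # -> 77.5
```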


Asunto(s)
Estudios de Factibilidad , Estudiantes de Medicina , Humanos , Estudios Prospectivos , Estudiantes de Medicina/psicología , Femenino , Masculino , Autoinforme , Educación de Pregrado en Medicina/métodos , Adulto , Adulto Joven , Realidad Aumentada , Educación a Distancia , Encuestas y Cuestionarios
6.
Sensors (Basel) ; 24(11)2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38894475

ABSTRACT

A significant percentage of bridges in the United States are serving beyond their 50-year design life, and many of them are in poor condition, making them vulnerable to fatigue cracks that can result in catastrophic failure. However, current fatigue crack inspection practice based on human vision is time-consuming, labor-intensive, and prone to error. We present a novel human-centered bridge inspection methodology to enhance the efficiency and accuracy of fatigue crack detection by employing advanced technologies including computer vision and augmented reality (AR). In particular, a computer vision-based algorithm is developed to enable near-real-time fatigue crack detection by analyzing structural surface motion in a short video recorded by a moving camera of the AR headset. The approach monitors structural surfaces by tracking feature points and measuring variations in distances between feature point pairs to recognize the motion pattern associated with the crack opening and closing. Measuring distance changes between feature points, as opposed to their displacement changes before this improvement, eliminates the need for camera motion compensation and enables reliable and computationally efficient fatigue crack detection using the nonstationary AR headset. In addition, an AR environment is created and integrated with the computer vision algorithm. The crack detection results are transmitted to the AR headset worn by the bridge inspector, where they are converted into holograms and anchored on the bridge surface in the 3D real-world environment. The AR environment also provides virtual menus to support human-in-the-loop decision-making to determine optimal crack detection parameters. This human-centered approach with improved visualization and human-machine collaboration aids the inspector in making well-informed decisions in the field in a near-real-time fashion. The proposed crack detection method is comprehensively assessed using two laboratory test setups for both in-plane and out-of-plane fatigue cracks. Finally, using the integrated AR environment, a human-centered bridge inspection is conducted to demonstrate the efficacy and potential of the proposed methodology.
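As a hedged sketch of the idea described above, rather than the authors' implementation, the snippet below tracks feature points across video frames with OpenCV and flags point pairs whose mutual distance fluctuates, which is the signature of a crack opening and closing; the video path and thresholds are placeholders.

```python
# Hedged sketch of the idea described above, not the authors' implementation:
# track feature points across video frames and flag point pairs whose mutual
# distance fluctuates, the signature of a breathing fatigue crack.
# "bridge_clip.mp4" and the thresholds are placeholders.
import cv2
import numpy as np
from itertools import combinations

cap = cv2.VideoCapture("bridge_clip.mp4")
ok, frame = cap.read()
prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=60, qualityLevel=0.01, minDistance=10)

tracks = [pts.reshape(-1, 2)]
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    pts, prev_gray = nxt, gray
    tracks.append(nxt.reshape(-1, 2))

tracks = np.stack(tracks)                    # shape: (frames, points, 2)
for i, j in combinations(range(tracks.shape[1]), 2):
    d = np.linalg.norm(tracks[:, i] - tracks[:, j], axis=1)  # pair distance per frame
    # A large peak-to-peak change in pair distance relative to its mean suggests the
    # pair straddles an opening/closing crack; rigid camera motion mostly cancels out.
    if (d.max() - d.min()) > 0.02 * d.mean() + 1.0:
        print(f"possible crack between points {i} and {j}")
```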


Subject(s)
Algorithms, Augmented Reality, Humans, Computer-Assisted Image Processing/methods
7.
Sensors (Basel) ; 24(14)2024 Jul 21.
Article in English | MEDLINE | ID: mdl-39066124

ABSTRACT

Recent advancements in communication technology have catalyzed the widespread adoption of realistic content, with augmented reality (AR) emerging as a pivotal tool for seamlessly integrating virtual elements into real-world environments. In construction, architecture, and urban design, the integration of mixed reality (MR) technology enables rapid interior spatial mapping, providing clients with immersive experiences to envision their desires. The rapid advancement of MR devices, or devices that integrate MR capabilities, offers users numerous opportunities for enhanced entertainment experiences. However, to support designers at a high level of expertise, it is crucial to ensure the accuracy and reliability of the data provided by these devices. This study explored the potential of utilizing spatial mapping within various methodologies for surveying architectural interiors. The objective was to identify optimized spatial mapping procedures and determine the most effective applications for their use. Experiments were conducted to evaluate the interior survey performance, using HoloLens 2, an iPhone 13 Pro for spatial mapping, and photogrammetry. The findings indicate that HoloLens 2 is most suited for the tasks examined in the scope of these experiments. Nonetheless, based on the acquired parameters, the author also proposes approaches to apply the other technologies in specific real-world scenarios.

8.
Sensors (Basel) ; 24(2)2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38257621

ABSTRACT

The steady increase in the aging population worldwide is expected to cause a shortage of doctors and therapists for older people. This demographic shift requires more efficient and automated systems for rehabilitation and physical ability evaluations. Rehabilitation using mixed reality (MR) technology has attracted much attention in recent years. MR displays virtual objects on a head-mounted see-through display that overlays the user's field of vision and allows users to manipulate them as if they existed in reality. However, previous studies applying MR to rehabilitation have been limited to tasks in which the virtual objects are static and do not interact dynamically with the surrounding environment. Therefore, in this study, we developed an application to evaluate cognitive and motor functions, with the aim of realizing a rehabilitation system that is dynamic and interacts with the surrounding environment using MR technology. The developed application enabled effective evaluation of the user's spatial cognitive ability, task skillfulness, motor function, and decision-making ability. The results indicate the usefulness and feasibility of MR technology to quantify motor function and spatial cognition for both static and dynamic tasks in rehabilitation.


Subject(s)
Augmented Reality, Physicians, Spatial Navigation, Humans, Aged, Aging, Cognition
9.
Sensors (Basel) ; 24(4)2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38400220

ABSTRACT

Due to their low cost and portability, entertainment devices have become a hot research topic for indoor mapping applications. However, the impact of user behavior on indoor mapping evaluation with entertainment devices is often overlooked in previous studies. This article aims to assess the indoor mapping performance of entertainment devices under different mapping strategies. We chose two entertainment devices, the HoloLens 2 and iPhone 14 Pro, for our evaluation work. Based on our previous mapping experience and user habits, we defined four simplified indoor mapping strategies: straight-forward mapping (SFM), left-right alternating mapping (LRAM), round-trip straight-forward mapping (RT-SFM), and round-trip left-right alternating mapping (RT-LRAM). First, we acquired triangle mesh data under each strategy with the HoloLens 2 and iPhone 14 Pro. Then, we compared the changes in data completeness and accuracy between the different devices and indoor mapping applications. Our findings show that, compared to the iPhone 14 Pro, the triangle meshes acquired by the HoloLens 2 show more stable accuracy under different strategies. Notably, the triangle mesh data acquired by the HoloLens 2 under the RT-LRAM strategy can effectively compensate for missing wall and floor surfaces, mainly caused by furniture occlusion and the low frame rate of the depth-sensing camera. However, the iPhone 14 Pro is more efficient in terms of mapping completeness and can acquire a complete triangle mesh more quickly than the HoloLens 2. In summary, choosing an entertainment device for indoor mapping requires consideration of the specific needs and scenes. If accuracy and stability are important, the HoloLens 2 is more suitable; if efficiency and completeness are important, the iPhone 14 Pro is better.
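The abstract does not detail how completeness and accuracy of the triangle meshes were quantified; one common approach is to compare the captured mesh against a reference scan. The sketch below illustrates that kind of evaluation with Open3D; the file names and the 5 cm completeness threshold are assumptions, not the authors' pipeline.

```python
# Hedged illustration of how mesh accuracy and completeness might be quantified
# against a reference scan; file names and thresholds are hypothetical.
import numpy as np
import open3d as o3d

captured = o3d.io.read_triangle_mesh("hololens2_room.obj")   # device output
reference = o3d.io.read_point_cloud("reference_scan.ply")    # ground-truth scan

# Accuracy: distance from points sampled on the captured mesh to the reference.
sampled = captured.sample_points_uniformly(number_of_points=100_000)
d_acc = np.asarray(sampled.compute_point_cloud_distance(reference))
print(f"accuracy: mean {d_acc.mean()*1000:.1f} mm, RMS {np.sqrt((d_acc**2).mean())*1000:.1f} mm")

# Completeness: share of reference points with captured geometry within 5 cm.
d_comp = np.asarray(reference.compute_point_cloud_distance(sampled))
print(f"completeness: {(d_comp < 0.05).mean()*100:.1f}% of reference covered")
```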

10.
Sensors (Basel) ; 24(15)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39123822

ABSTRACT

In the global context, advancements in technology and science have rendered virtual, augmented, and mixed-reality technologies capable of transforming clinical care and medical environments by offering enhanced features and improved healthcare services. This paper aims to present a mixed reality-based system to control a robotic wheelchair for people with limited mobility. The test group comprised 11 healthy subjects (six male, five female, mean age 35.2 ± 11.7 years). A novel platform that integrates a smart wheelchair and an eye-tracking-enabled head-mounted display was proposed to reduce the cognitive requirements needed for wheelchair movement and control. The approach's effectiveness was demonstrated by evaluating our system in realistic scenarios. The demonstration of the proposed AR head-mounted display user interface for controlling a smart wheelchair and the results provided in this paper could highlight the potential of HoloLens 2-based innovative solutions and bring focus to emerging research topics, such as remote control, cognitive rehabilitation, supporting the autonomy of patients with severe disabilities, and telemedicine.


Asunto(s)
Enfermedades Neurodegenerativas , Robótica , Interfaz Usuario-Computador , Silla de Ruedas , Humanos , Masculino , Femenino , Adulto , Robótica/instrumentación , Robótica/métodos , Enfermedades Neurodegenerativas/rehabilitación , Sistemas Hombre-Máquina , Persona de Mediana Edad , Diseño de Equipo
11.
Sensors (Basel) ; 24(9)2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38732904

ABSTRACT

In this paper, we present a novel approach referred to as audio-based virtual-landmark HoloSLAM. This innovative method leverages a single sound source and microphone arrays to estimate the voice-printed speaker's direction. The system allows an autonomous robot equipped with a single microphone array to navigate within indoor environments, interact with specific sound sources, and simultaneously determine its own location while mapping the environment. The proposed method requires neither multiple audio sources in the environment nor sensor fusion to extract pertinent information and make accurate sound source estimations. Furthermore, the approach incorporates Robotic Mixed Reality using Microsoft HoloLens to superimpose landmarks, effectively mitigating the audio landmark-related issues of conventional audio-based landmark SLAM, particularly in situations where audio landmarks cannot be discerned, are limited in number, or are completely missing. The paper also evaluates an active speaker detection method, demonstrating its ability to achieve high accuracy in scenarios where audio data are the sole input. Real-time experiments validate the effectiveness of this method, emphasizing its precision and comprehensive mapping capabilities. The results of these experiments showcase the accuracy and efficiency of the proposed system, surpassing the constraints associated with traditional audio-based SLAM techniques and ultimately leading to a more detailed and precise mapping of the robot's surroundings.
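The abstract does not describe the direction-estimation algorithm in detail; a standard building block for estimating a speaker's bearing from a microphone pair is GCC-PHAT time-difference-of-arrival estimation. The sketch below demonstrates that generic technique on synthetic signals and is not taken from the paper.

```python
# Generic sketch of direction estimation from a microphone pair using GCC-PHAT
# time-difference-of-arrival; a standard building block, not necessarily the
# method used in the paper. Signals below are synthetic.
import numpy as np

def gcc_phat_tdoa(sig, ref, fs):
    """Return the time delay (s) of sig relative to ref via GCC-PHAT."""
    n = 2 * max(len(sig), len(ref))
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    cross = SIG * np.conj(REF)
    cross /= np.abs(cross) + 1e-12                 # PHAT weighting keeps only phase
    cc = np.fft.irfft(cross, n=n)
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))
    return (np.argmax(np.abs(cc)) - n // 2) / fs

fs, mic_spacing, c = 16000, 0.1, 343.0             # Hz, metres, speed of sound (m/s)
rng = np.random.default_rng(0)
source = rng.standard_normal(fs // 10)             # 100 ms of broadband "speech-like" noise
delay = 3                                          # simulate ~0.19 ms inter-microphone delay
mic1 = source
mic2 = np.concatenate((np.zeros(delay), source[:-delay]))

tdoa = gcc_phat_tdoa(mic2, mic1, fs)
angle = np.degrees(np.arcsin(np.clip(tdoa * c / mic_spacing, -1, 1)))
print(f"estimated TDOA {tdoa*1e3:.2f} ms -> bearing ~{angle:.1f} deg")
```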

12.
Surg Innov ; 31(1): 48-57, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38019844

ABSTRACT

BACKGROUND: Computer-assisted surgical navigation systems are designed to improve outcomes by providing clinicians with procedural guidance information. The use of new technologies, such as mixed reality, offers the potential for more intuitive, efficient, and accurate procedural guidance. The goal of this study is to assess the positional accuracy and consistency of a clinical mixed reality system that utilizes commercially available wireless head-mounted displays (HMDs), custom software, and localization instruments. METHODS: Independent teams using the second-generation Microsoft HoloLens© hardware, Medivis SurgicalAR© software, and localization instruments tested the accuracy of the combined system at different institutions, times, and locations. The ASTM F2554-18 consensus standard for computer-assisted surgical systems, as recognized by the U.S. FDA, was utilized to measure the performance. A total of 288 tests were performed. RESULTS: The system demonstrated consistent results, with an average accuracy better than one millimeter (0.75 ± 0.37 mm SD). CONCLUSION: The independently acquired positional tracking accuracies exceed those of conventional in-market surgical navigation tracking systems and FDA standards. Importantly, the performance was achieved at two different institutions, using an international testing standard, and with a system that included a commercially available off-the-shelf wireless head-mounted display and software.


Asunto(s)
Realidad Aumentada , Cirugía Asistida por Computador , Estados Unidos , Cirugía Asistida por Computador/métodos , Sistemas de Navegación Quirúrgica , United States Food and Drug Administration , Programas Informáticos
13.
Audiol Neurootol ; 28(4): 308-316, 2023.
Article in English | MEDLINE | ID: mdl-37071980

ABSTRACT

INTRODUCTION: Dizziness is a common complaint affecting up to 23% of the world population. Diagnosis is of utmost importance and routinely involves several tests to be performed in specialized centers. The advent of a new generation of technical devices makes it possible to envision their use for valid, objective vestibular assessment. The Microsoft HoloLens 2 (HL2) mixed reality headset has the potential to be a valuable wearable technology that provides interactive digital stimuli and inertial measurement units (IMUs) to objectively quantify the movements of the user in response to various exercises. The aim of this study was to validate the integration of the HoloLens with traditional methods used to analyze vestibular function in order to obtain precise diagnostic values. METHODS: Twenty-six healthy adults completed the Dynamic Gait Index tests both with a traditional evaluation and while wearing the HL2 headset, allowing the collection of kinematic data of the participants' head and eyes. The subjects had to perform 8 different tasks, and the scores were independently assigned by two otolaryngology specialists. RESULTS: The maximum of the mean position of the walking axis of the subjects was found in the second task (-0.14 ± 0.23 m), while the maximum value of the standard deviation of the walking axis was found in the fifth task (-0.12 ± 0.27 m). Overall, positive results were obtained regarding the validity of using the HL2 to analyze kinematic features. CONCLUSION: The accurate quantification of gait, movement along the walking axis, and deviation from normality using the HL2 provides initial evidence for its adoption as a valuable tool in gait and mobility assessment.


Subject(s)
Augmented Reality, Virtual Reality, Adult, Humans, Gait/physiology, Walking/physiology, Vertigo
14.
Eur Spine J ; 32(10): 3425-3433, 2023 10.
Article in English | MEDLINE | ID: mdl-37552327

ABSTRACT

PURPOSE: Over the last years, interest and efforts to implement augmented reality (AR) in orthopedic surgery through head-mounted devices (HMD) have increased. However, the majority of experiments were preclinical and performed within a controlled laboratory environment. The operating room (OR) is a more challenging environment, with various confounding factors potentially affecting the performance of an AR-HMD. The aim of this study was to assess the performance of an AR-HMD in a real-life OR setting. METHODS: An established AR application using the HoloLens 2 HMD was tested in an OR and in a laboratory by two users. The accuracy of the hologram overlay, the time to complete the trial, the number of rejected registration attempts, the delay in live overlay of the hologram, and the number of completely failed runs were recorded. Further, different OR setting parameters (light condition, setting up partitions, movement of personnel, and anchor placement) were modified and compared. RESULTS: The time for full registration was longer in the OR, at 48 s (IQR 24 s), versus 33 s (IQR 10 s) in the laboratory setting (p < 0.001). The other investigated parameters did not differ significantly if an optimal OR setting was used. Within the OR, the strongest influence on the performance of the AR-HMD was the light condition, with direct light illumination on the situs being the least favorable. CONCLUSION: AR-HMDs are affected by different OR setups. Standardization measures for better AR-HMD performance include avoiding direct light illumination on the situs, setting up partitions, and minimizing the movement of personnel.


Subject(s)
Augmented Reality, Humans, Operating Rooms
15.
Eur Arch Otorhinolaryngol ; 280(4): 2043-2049, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36269364

ABSTRACT

PURPOSE: Augmented reality can improve surgical planning and performance in parotid surgery. For easier application, we implemented a voice-control manual for our augmented reality system. The aim of the study was to evaluate the feasibility of the voice control in real-life situations. METHODS: We used the HoloLens 1® (Microsoft Corporation) with special speech recognition software for parotid surgery. The evaluation took place in an audiometry cubicle and during real surgical procedures. Voice commands were used to display various 3D structures of the patient with the HoloLens 1®. Commands were tested in different variations (male/female speakers, speech volume of 65 dB SPL or louder, various structures). RESULTS: In silence, 100% of commands were recognized. If the volume of the operating room (OR) background noise exceeds 42 dB, the recognition rate decreases significantly, and it drops below 40% at > 60 dB SPL. At a constant speech volume of 65 dB SPL, male speakers had a significantly better recognition rate than female speakers (p = 0.046). Higher speech volumes can compensate for this effect. The recognition rate depends on the type of background noise. Mixed OR noise (52 dB(A)) reduced the detection rate significantly compared to single suction noise at 52 dB(A) (p ≤ 0.00001). The recognition rate was significantly better in the OR than in the audio cubicle (p = 0.00013 both genders, 0.0086 female, and 0.0036 male). CONCLUSIONS: The recognition rate of voice commands can be enhanced by increasing the speech volume and by reducing ambient noise to single sources. The detection rate depends on the loudness of the OR noise. Male voices are understood significantly better than female voices.


Subject(s)
Augmented Reality, Smart Glasses, Voice, Humans, Male, Female, Speech, Audiometry
16.
BMC Med Educ ; 23(1): 670, 2023 Sep 18.
Article in English | MEDLINE | ID: mdl-37723452

ABSTRACT

BACKGROUND: The purpose of this study was to explore the applicability and training effect of head-mounted mixed reality (MR) equipment combined with a three-dimensional (3D) printed model in neurosurgical ventricular and haematoma puncture training. METHODS: Digital Imaging and Communications in Medicine (DICOM) format image data of two patients with common neurosurgical diseases (hydrocephalus and basal ganglia haemorrhage) were imported into 3D Slicer software for 3D reconstruction, saved, and printed using 3D printing to produce a 1:1-sized head model with real-person characteristics. The required models (brain ventricle, haematoma, puncture path, etc.) were constructed and imported into the head-mounted MR device, HoloLens, and a risk-free, visual, and repeatable system was designed for the training of junior physicians. A total of 16 junior physicians who studied in this specialty from September 2020 to March 2022 were selected as the research participants, and the applicability of the equipment and model during training was evaluated with assessment score sheets and questionnaires after training. RESULTS: According to the results of the assessment and questionnaire, the doctors trained with this system were more familiar with the localization of the lateral ventricle anterior horn puncture and the common endoscopic surgery for basal ganglia haemorrhage, and more confident in their mastery of these two operations, than those trained with traditional methods. CONCLUSIONS: The use of head-mounted MR equipment combined with 3D-printed models can provide an ideal platform for the operative training of young doctors. Through holographic images created from the combination of virtual and real images, operators can be better immersed in the operation process and deepen their understanding of the operation and related anatomical structures. The 3D-printed model can be repeatedly reproduced so that doctors can master the technology, learn from mistakes, better achieve the goals of teaching and training, and improve training outcomes.
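As an illustrative alternative to the 3D Slicer workflow described above, the sketch below reconstructs a printable surface from a DICOM series in Python using a simple threshold and marching cubes; the directory path and the bone-like threshold of 300 HU are placeholders, and this is not the authors' pipeline.

```python
# Illustrative alternative to the Slicer workflow described above: reconstruct a
# printable surface from a DICOM series with a simple threshold + marching cubes.
# The directory path and the threshold value are placeholders.
import SimpleITK as sitk
import numpy as np
from skimage import measure
import trimesh

reader = sitk.ImageSeriesReader()
files = reader.GetGDCMSeriesFileNames("dicom/head_ct")
reader.SetFileNames(files)
image = reader.Execute()

volume = sitk.GetArrayFromImage(image)     # (slices, rows, cols)
spacing = image.GetSpacing()[::-1]         # reorder to match the array axes

# Extract an iso-surface at a CT threshold (~300 HU is a rough bone value).
verts, faces, _, _ = measure.marching_cubes(volume.astype(np.float32), level=300,
                                            spacing=spacing)
trimesh.Trimesh(vertices=verts, faces=faces).export("skull_model.stl")
```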


Subject(s)
Augmented Reality, Basal Ganglia Hemorrhage, Neurosurgery, Humans, Punctures, Three-Dimensional Printing, Hematoma
17.
Sensors (Basel) ; 23(4)2023 Feb 15.
Article in English | MEDLINE | ID: mdl-36850766

ABSTRACT

Medical ultrasound (US) is a commonly used modality for image-guided procedures. Recent research systems providing an in situ visualization of 2D US images via an augmented reality (AR) head-mounted display (HMD) were shown to be advantageous over conventional imaging through reduced task completion times and improved accuracy. In this work, we continue in the direction of recent developments by describing the first AR HMD application visualizing real-time volumetric (3D) US in situ for guiding vascular punctures. We evaluated the application on a technical level as well as in a mixed-methods user study with a qualitative prestudy and a quantitative main study simulating a vascular puncture. Participants completed the puncture task significantly faster when using the 3D US AR mode compared to 2D US AR, with a 28.4% decrease in time. However, no significant differences were observed regarding the success rate of vascular puncture (2D US AR: 50% vs. 3D US AR: 72%). On the technical side, the system offers a low latency of 49.90 ± 12.92 ms and a satisfactory frame rate of 60 Hz. Our work shows the feasibility of a system that visualizes real-time 3D US data via an AR HMD, and our experiments show, furthermore, that this may offer additional benefits in US-guided tasks (i.e., reduced task completion time) over 2D US images viewed in AR by offering a vivid volumetric visualization.


Subject(s)
Augmented Reality, Smart Glasses, Humans, Punctures, Ultrasonography
18.
Sensors (Basel) ; 23(6)2023 Mar 11.
Article in English | MEDLINE | ID: mdl-36991751

ABSTRACT

The adoption of extended reality solutions is growing rapidly in the healthcare world. Augmented reality (AR) and virtual reality (VR) interfaces can bring advantages in various medical-health sectors; it is thus not surprising that the medical MR market is among the fastest-growing ones. The present study reports on a comparison between two of the most popular MR head-mounted displays, Magic Leap 1 and Microsoft HoloLens 2, for the visualization of 3D medical imaging data. We evaluate the functionalities and performance of both devices through a user-study in which surgeons and residents assessed the visualization of 3D computer-generated anatomical models. The digital content is obtained through a dedicated medical imaging suite (Verima imaging suite) developed by the Italian start-up company (Witapp s.r.l.). According to our performance analysis in terms of frame rate, there are no significant differences between the two devices. The surgical staff expressed a clear preference for Magic Leap 1, particularly for the better visualization quality and the ease of interaction with the 3D virtual content. Nonetheless, even though the results of the questionnaire were slightly more positive for Magic Leap 1, the spatial understanding of the 3D anatomical model in terms of depth relations and spatial arrangement was positively evaluated for both devices.


Subject(s)
Augmented Reality, Computer-Assisted Surgery, Virtual Reality, Humans, Computer Simulation, Computer-Assisted Surgery/methods, Three-Dimensional Imaging
19.
Sensors (Basel) ; 23(9)2023 Apr 24.
Article in English | MEDLINE | ID: mdl-37177449

ABSTRACT

When producing an engaging augmented reality (AR) user experience, it is crucial to create AR content that mimics the behavior of real-life objects to the greatest extent possible. A critical aspect of achieving this is ensuring that the digital objects conform to line-of-sight rules and are either partially or completely occluded, just as real-world objects would be. The study explores the concept of utilizing a pre-existing 3D representation of the physical environment as an occlusion mask that governs the rendering of each pixel. Specifically, the research aligns a Level of Detail (LOD) 1 building model and a 3D mesh model with their real-world counterparts and evaluates the effectiveness of occlusion between the two models in an outdoor setting. Despite the mesh model containing more detailed information, the overall results do not show an improvement. In an indoor scenario, the researchers leverage the scanning capability of the HoloLens 2 to create a pre-scanned representation, which helps overcome the limited range and delay of the mesh reconstruction.
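At its core, the occlusion mask described above amounts to a per-pixel depth test: a virtual pixel is drawn only if it lies closer to the camera than the pre-existing model of the real scene at that pixel. A minimal sketch with synthetic depth buffers, not taken from the paper:

```python
# Minimal sketch of the per-pixel occlusion test implied above: a virtual pixel is
# drawn only if it is closer to the camera than the pre-scanned real geometry at
# that pixel. Depth maps here are synthetic stand-ins for rendered buffers.
import numpy as np

h, w = 480, 640
real_depth = np.full((h, w), 5.0)        # depth of the building/mesh model (metres)
real_depth[:, 320:] = 2.0                # right half: a wall 2 m away

virtual_depth = np.full((h, w), np.inf)  # inf where no virtual content is rendered
virtual_depth[200:280, 280:360] = 3.0    # a hologram placed 3 m away

visible = virtual_depth < real_depth     # occlusion mask: True where the hologram shows
print(f"{visible.sum()} of {np.isfinite(virtual_depth).sum()} virtual pixels visible")
```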

20.
Sensors (Basel) ; 23(21)2023 Oct 25.
Article in English | MEDLINE | ID: mdl-37960398

ABSTRACT

The integration of Deep Learning (DL) models with the HoloLens 2 Augmented Reality (AR) headset has enormous potential for real-time AR medical applications. Currently, most applications execute the models on an external server that communicates with the headset via Wi-Fi. This client-server architecture introduces undesirable delays and lacks reliability for real-time applications. However, due to the HoloLens 2's limited computation capabilities, running the DL model directly on the device and achieving real-time performance is not trivial. Therefore, this study has two primary objectives: (i) to systematically evaluate two popular frameworks for executing DL models on the HoloLens 2, Unity Barracuda and Windows Machine Learning (WinML), using the inference time as the primary evaluation metric; (ii) to provide benchmark values for state-of-the-art DL models that can be integrated into different medical applications (e.g., Yolo and Unet models). In this study, we executed DL models of various complexities and analyzed inference times ranging from a few milliseconds to seconds. Our results show that Unity Barracuda is significantly faster than WinML (p-value < 0.005). With our findings, we sought to provide practical guidance and reference values for future studies aiming to develop single, portable AR systems for real-time medical assistance.
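Unity Barracuda and WinML are C#/UWP frameworks that run on the headset itself, so the study's benchmarking code is not reproduced here; the sketch below only illustrates the general inference-timing methodology (warm-up runs followed by repeated timed runs) using ONNX Runtime on a desktop, with a hypothetical model path and input shape.

```python
# Generic sketch of inference-time benchmarking (warm-up, then repeated timed runs)
# using ONNX Runtime on a desktop. This is not the on-device Barracuda/WinML code;
# "model.onnx" and the input shape are placeholders.
import time
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("model.onnx")
inp = session.get_inputs()[0]
x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # assumed input shape

for _ in range(10):                                      # warm-up, excluded from timing
    session.run(None, {inp.name: x})

times = []
for _ in range(100):
    t0 = time.perf_counter()
    session.run(None, {inp.name: x})
    times.append((time.perf_counter() - t0) * 1000)

print(f"inference: {np.mean(times):.2f} ± {np.std(times):.2f} ms over {len(times)} runs")
```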


Subject(s)
Augmented Reality, Deep Learning, Humans, Reproducibility of Results, Machine Learning