Results 1 - 20 of 2,307
1.
Physiol Behav ; 283: 114623, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38959990

ABSTRACT

BACKGROUND: Exercise has positive effects on psychological well-being, with team sports often associated with superior mental health compared to individual sports. Augmented reality (AR) technology has the potential to convert solitary exercise into multi-person exercise. Given the role of oxytocin in mediating the psychological benefits of exercise and sports, this study aimed to investigate the impact of AR-based multi-person exercise on mood and salivary oxytocin levels. METHODS: Fourteen participants underwent three distinct regimens: non-exercise (Rest), standard solitary cycling exercise (Ex), and AR-based multi-person cycling exercise (Ex+AR). In both Ex and Ex+AR conditions, participants engaged in cycling at a self-regulated pace to maintain a Rating of Perceived Exertion of 10. In the Ex+AR condition, participants' avatars were projected onto a tablet screen, allowing them to cycle alongside ten other virtual avatars in an AR environment. Mood states and saliva samples were collected before and immediately after each 10-minute regimen. Subsequently, salivary oxytocin levels were measured. RESULTS: Notably, only the Ex+AR condition significantly improved mood states associated with depression-dejection and exhibited a non-significant trend toward suppressing anger-hostility in participants. Moreover, the Ex+AR condition led to a significant elevation in salivary oxytocin levels, while the Ex condition showed a non-significant trend toward an increase. However, changes in salivary oxytocin did not show a significant correlation with changes in mood states. CONCLUSIONS: These findings suggest that Ex+AR enhances mood states and promotes oxytocin release. AR-based multi-person exercise may offer greater psychological benefits compared to standard solitary exercise, although the relationship between oxytocin and mood changes remains inconclusive.

2.
Appl Ergon ; 120: 104340, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38964218

ABSTRACT

Augmented reality (AR) environments are emerging as prominent user interfaces and are gathering significant attention. However, the associated physical strain on users presents a considerable challenge. Against this background, this study explores the impact of movement distance (MD) and target-to-user distance (TTU) on the physical load during drag-and-drop (DND) tasks in an AR environment. To address this objective, a user experiment was conducted using a 5 × 5 within-subject design with MD (16, 32, 48, 64, and 80 cm) and TTU (40, 80, 120, 160, and 200 cm) as the variables. Physical load was assessed using normalized electromyography (NEMG, %MVC) indicators of the upper extremity muscles and the physical demand item of the NASA Task Load Index (NASA-TLX). The results revealed significant variations in physical load based on MD and TTU. Specifically, both the NEMG and subjective physical workload values increased with increasing MD. Moreover, NEMG increased with decreasing TTU, whereas the subjective physical workload scores increased with increasing TTU. Significant interaction effects of MD and TTU on NEMG were also observed. These findings suggest that considering MD and TTU when developing content for interacting with AR objects in AR environments could alleviate user load.
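
For readers unfamiliar with the %MVC metric used above, the sketch below shows one common way to normalize an EMG recording to a maximum voluntary contraction (MVC) reference. The sampling rate, window length, and signals are illustrative assumptions, not the study's actual processing pipeline.

import numpy as np

def rms_envelope(emg: np.ndarray, fs: float, window_s: float = 0.1) -> np.ndarray:
    """Square the EMG signal and compute a moving-RMS envelope (illustrative settings)."""
    win = max(1, int(window_s * fs))
    squared = emg.astype(float) ** 2
    kernel = np.ones(win) / win
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

def normalize_to_mvc(task_emg: np.ndarray, mvc_emg: np.ndarray, fs: float) -> np.ndarray:
    """Express task EMG amplitude as a percentage of the MVC reference (%MVC)."""
    task_env = rms_envelope(task_emg, fs)
    mvc_peak = rms_envelope(mvc_emg, fs).max()   # reference amplitude from the MVC trial
    return 100.0 * task_env / mvc_peak

# Illustrative use with synthetic signals (fs and data are placeholders, not study data).
fs = 1000.0
rng = np.random.default_rng(0)
mvc_trial = rng.normal(0, 1.0, int(5 * fs))      # maximal contraction recording
task_trial = rng.normal(0, 0.3, int(10 * fs))    # drag-and-drop task recording
pct_mvc = normalize_to_mvc(task_trial, mvc_trial, fs)
print(f"mean task load: {pct_mvc.mean():.1f} %MVC")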

3.
BMC Med Educ ; 24(1): 730, 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38970090

ABSTRACT

BACKGROUND: Virtual reality (VR) and augmented reality (AR) are emerging technologies that can be used for cardiopulmonary resuscitation (CPR) training. Compared to traditional face-to-face training, VR/AR-based training has the potential to reach a wider audience, but there is debate regarding its effectiveness in improving CPR quality. Therefore, we conducted a meta-analysis to assess the effectiveness of VR/AR training compared with face-to-face training. METHODS: We searched PubMed, Embase, Cochrane Library, Web of Science, CINAHL, China National Knowledge Infrastructure, and Wanfang databases from the inception of these databases up until December 1, 2023, for randomized controlled trials (RCTs) comparing VR- and AR-based CPR training to traditional face-to-face training. Cochrane's tool for assessing bias in RCTs was used to assess the methodological quality of the included studies. We pooled the data using a random-effects model with Review Manager 5.4 and assessed publication bias with Stata 11.0. RESULTS: Nine RCTs (involving 855 participants) were included, of which three had a low risk of bias. Meta-analyses showed no significant differences between VR/AR-based CPR training and face-to-face CPR training in terms of chest compression depth (mean difference [MD], -0.66 mm; 95% confidence interval [CI], -6.34 to 5.02 mm; P = 0.82), chest compression rate (MD, 3.60 compressions per minute; 95% CI, -1.21 to 8.41 compressions per minute; P = 0.14), overall CPR performance score (standardized mean difference, -0.05; 95% CI, -0.93 to 0.83; P = 0.91), or the proportion of participants meeting CPR depth criteria (risk ratio [RR], 0.79; 95% CI, 0.53 to 1.18; P = 0.26) and rate criteria (RR, 0.99; 95% CI, 0.72 to 1.35; P = 0.93). The Egger regression test showed no evidence of publication bias. CONCLUSIONS: Our study provided evidence that VR/AR-based training was as effective as traditional face-to-face CPR training. Nevertheless, there was substantial heterogeneity among the included studies, which reduces confidence in the findings. Future studies should establish standardized VR/AR-based CPR training protocols, evaluate the cost-effectiveness of this approach, and assess its impact on actual CPR performance and patient outcomes in real-life scenarios. TRIAL REGISTRATION: CRD42023482286.


Subject(s)
Augmented Reality, Cardiopulmonary Resuscitation, Virtual Reality, Cardiopulmonary Resuscitation/education, Humans, Randomized Controlled Trials as Topic
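
The pooled mean differences reported above come from a random-effects model; the sketch below shows DerSimonian-Laird pooling on made-up per-study effects and variances, purely to illustrate the technique rather than reproduce the nine included RCTs.

import numpy as np

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes."""
    effects = np.asarray(effects, float)
    v = np.asarray(variances, float)
    w_fixed = 1.0 / v
    pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)          # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                                # between-study variance
    w_rand = 1.0 / (v + tau2)
    pooled = np.sum(w_rand * effects) / np.sum(w_rand)
    se = np.sqrt(1.0 / np.sum(w_rand))
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se

# Hypothetical per-study mean differences in compression depth (mm) and their variances.
md, lo, hi = random_effects_pool([-2.1, 1.5, -0.4, 0.9], [1.2, 0.8, 2.0, 1.5])
print(f"pooled MD = {md:.2f} mm (95% CI {lo:.2f} to {hi:.2f})")
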
4.
J Neurol Surg B Skull Base ; 85(4): 363-369, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38966300

ABSTRACT

Objective The aim of this work was the development of an augmented reality system that includes the functionality of conventional surgical navigation systems. Methods Application software for the Microsoft HoloLens 2 augmented reality headset was developed. It detects the positions of the patient and of surgical instruments in real time and displays them within two-dimensional (2D) magnetic resonance imaging or computed tomography (CT) images. The surgical pointer instrument, including a pattern recognized by the HoloLens 2 sensors, was created with three-dimensional (3D) printing. The technical concept was demonstrated on a cadaver skull by identifying anatomical landmarks. Results With the help of the HoloLens 2 and its sensors, the real-time position of the surgical pointer instrument could be shown. The position of the 3D-printed pointer with its colored pattern could be recognized within 2D CT images on a cadaver skull, both when stationary and in motion. Feasibility was demonstrated for the clinical application of transsphenoidal pituitary surgery. Conclusion The HoloLens 2 has high potential for use as a surgical navigation system. In subsequent studies, a further accuracy evaluation will be performed to obtain valid data for comparison with conventional surgical navigation systems. In addition to transsphenoidal pituitary surgery, it could also be applied in other surgical disciplines.

6.
Article in English | MEDLINE | ID: mdl-38960934

ABSTRACT

PURPOSE: Patients undergoing total knee arthroplasty (TKA) often suffer from severe postoperative pain, which seriously hinders postoperative rehabilitation. Extended reality (XR), including virtual reality, augmented reality, and mixed reality, has been increasingly used to relieve pain after TKA. The purpose of this study was to evaluate the effectiveness of XR in relieving pain after TKA. METHODS: Electronic databases including PubMed, Embase, Web of Science, the Cochrane Central Register of Controlled Trials (CENTRAL), and clinicaltrials.gov were searched for studies from inception to July 20, 2023. The outcomes were pain score, anxiety score, and physiological parameters related to pain. Meta-analysis was performed using Review Manager 5.4. RESULTS: Overall, 11 randomized controlled trials (RCTs) with 887 patients were included. The pooled results showed that XR yielded lower pain scores (SMD = -0.31, 95% CI [-0.46 to -0.16], P < 0.0001) and anxiety scores (MD = -3.95, 95% CI [-7.76 to -0.13], P = 0.04) than conventional methods. The subgroup analysis revealed that XR yielded lower pain scores within 2 weeks postoperatively (SMD = -0.49, 95% CI [-0.76 to -0.22], P = 0.0004) and when XR was combined with conventional methods (SMD = -0.43, 95% CI [-0.65 to -0.20], P = 0.0002). CONCLUSION: This systematic review and meta-analysis found that applying XR could significantly reduce postoperative pain and anxiety after TKA. When XR was combined with conventional methods, postoperative pain could be effectively relieved, especially within 2 weeks after the operation. XR is an effective non-pharmacological analgesia scheme.
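
The SMD figures quoted above standardize each study's pain-score difference by a pooled standard deviation; the sketch below computes one such standardized mean difference (Hedges' g) from hypothetical group summaries, purely for illustration and not from the included trials.

import numpy as np

def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference (Hedges' g) between treatment and control arms."""
    pooled_sd = np.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / pooled_sd                 # Cohen's d
    j = 1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0)         # small-sample correction factor
    g = j * d
    var_g = j**2 * ((n_t + n_c) / (n_t * n_c) + d**2 / (2.0 * (n_t + n_c)))  # approximate variance
    return g, var_g

# Hypothetical pain scores: XR group vs. conventional care (illustrative numbers only).
g, var_g = hedges_g(mean_t=3.1, sd_t=1.4, n_t=40, mean_c=3.8, sd_c=1.6, n_c=42)
print(f"SMD (Hedges' g) = {g:.2f}, variance = {var_g:.3f}")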

7.
Neurospine ; 21(2): 432-439, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38955520

ABSTRACT

OBJECTIVE: Spine surgeons are often at risk of radiation exposure due to intraoperative fluoroscopy, leading to health concerns such as carcinogenesis. This risk has grown with the increasing use of percutaneous pedicle screws (PPS) in spinal surgery, a consequence of the widespread adoption of minimally invasive spine stabilization. This study aimed to elucidate the effectiveness of smart glasses (SG) in PPS insertion under fluoroscopy. METHODS: SG were used as an alternative screen for fluoroscopic images. Operators A (2 years of experience in spine surgery) and B (9 years of experience) inserted PPS into the bilateral L1-5 pedicles of a lumbar model bone under fluoroscopic guidance, repeating the procedure twice with and without SG (groups SG and N-SG, respectively). The insertion time, radiation dose, and radiation exposure time for each vertebral body were measured, and the deviation of the screw trajectories was evaluated. RESULTS: Groups SG and N-SG showed no significant difference in insertion time for the overall procedure or for each operator. However, group SG had a significantly shorter radiation exposure time than group N-SG for the overall procedure (109.1 ± 43.5 seconds vs. 150.9 ± 38.7 seconds; p = 0.003) and for operator A (100.0 ± 29.0 seconds vs. 157.9 ± 42.8 seconds; p = 0.003). The radiation dose was also significantly lower in group SG than in group N-SG for the overall procedure (1.3 ± 0.6 mGy vs. 1.7 ± 0.5 mGy; p = 0.023) and for operator A (1.2 ± 0.4 mGy vs. 1.8 ± 0.5 mGy; p = 0.013). The two groups showed no significant difference in screw deviation. CONCLUSION: The application of SG in fluoroscopic imaging for PPS insertion holds potential as a useful method for reducing radiation exposure.

8.
Sci Rep ; 14(1): 15458, 2024 07 04.
Article in English | MEDLINE | ID: mdl-38965266

ABSTRACT

In total hip arthroplasty (THA), determining the center of rotation (COR) and diameter of the hip joint (acetabulum and femoral head) is essential to restore patient biomechanics. This study investigates on-the-fly determination of hip COR and size using off-the-shelf augmented reality (AR) hardware. An AR head-mounted device (HMD) was configured with inside-out infrared tracking, enabling the determination of surface coordinates using a handheld stylus. Two investigators examined 10 prosthetic femoral heads and cups, and 10 human femurs. The HMD calculated the diameter and COR through sphere fitting. Results were compared to data obtained from either verified prosthetic geometry or post hoc CT analysis. Repeated single-observer measurements showed a mean diameter error of 0.63 mm ± 0.48 mm for the prosthetic heads and 0.54 mm ± 0.39 mm for the cups. Inter-observer comparison yielded mean diameter errors of 0.28 mm ± 0.71 mm and 1.82 mm ± 1.42 mm for the heads and cups, respectively. Cadaver testing found a mean COR error of 3.09 mm ± 1.18 mm and a diameter error of 1.10 mm ± 0.90 mm. Intra- and inter-observer differences averaged below 2 mm. AR-based surface mapping using an HMD proved accurate and reliable in determining the diameter of THA components and shows promise for identifying the COR and diameter of osteoarthritic femoral heads.


Subject(s)
Hip Arthroplasty, Augmented Reality, Femoral Head, Hip Prosthesis, Humans, Femoral Head/surgery, Femoral Head/diagnostic imaging, Hip Arthroplasty/instrumentation, Hip Arthroplasty/methods, X-Ray Computed Tomography, Rotation, Male, Hip Joint/surgery, Hip Joint/diagnostic imaging, Female
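
The HMD in the study above derives the COR and diameter by fitting a sphere to stylus-sampled surface points. The sketch below shows an algebraic least-squares sphere fit on synthetic data as one plausible implementation of such a fit; the study's own algorithm is not specified, so this is only an assumed approach.

import numpy as np

def fit_sphere(points: np.ndarray):
    """Algebraic least-squares sphere fit returning (center, radius).

    Uses ||p||^2 = 2*c.p + (r^2 - ||c||^2), which is linear in the unknowns."""
    p = np.asarray(points, float)
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = np.sum(p ** 2, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]
    radius = np.sqrt(sol[3] + center @ center)
    return center, radius

# Synthetic stylus samples on a 28 mm diameter femoral head (radius 14 mm) with 0.2 mm noise.
rng = np.random.default_rng(1)
true_c, true_r = np.array([10.0, -5.0, 30.0]), 14.0
dirs = rng.normal(size=(200, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
samples = true_c + true_r * dirs + rng.normal(0, 0.2, (200, 3))
center, radius = fit_sphere(samples)
print(f"diameter = {2 * radius:.2f} mm, COR error = {np.linalg.norm(center - true_c):.2f} mm")
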
9.
Pan Afr Med J ; 47: 157, 2024.
Article in English | MEDLINE | ID: mdl-38974699

ABSTRACT

The integration of virtual reality (VR) and augmented reality (AR) into telerehabilitation marks a major change in healthcare practice, particularly in neurological and orthopedic rehabilitation. This essay reflects on the potential of VR and AR to create immersive, interactive environments that facilitate recovery. Recent developments have illustrated their ability to enhance patient engagement and outcomes, especially in addressing complex motor and cognitive rehabilitation needs. Combining artificial intelligence (AI) with VR and AR will take rehabilitation to the next level by enabling adaptive, responsive treatment programs driven by real-time feedback and predictive analytics. Nevertheless, issues such as availability, cost, and the digital divide, among many others, present substantial obstacles to mass adoption. This essay provides a thorough review of the current state of virtual reality and augmented reality in rehabilitation and examines their potential gains, drawbacks, and future directions from multiple perspectives.


Subject(s)
Artificial Intelligence, Augmented Reality, Telerehabilitation, Virtual Reality, Humans, Neurological Rehabilitation/methods
10.
Comput Struct Biotechnol J ; 24: 451-463, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38975288

ABSTRACT

This report summarises the SMARTCLAP research project, which employs a user-centred design approach to develop a revolutionary smart product service system. The system offers personalised motivation to encourage children with cerebral palsy to participate more actively during their occupational therapy sessions, while providing paediatric occupational therapists with an optimal tool to monitor children's progress from one session to another. The product service system developed includes a smart wearable device called DigiClap, which is used to interact with a serious game in an augmented reality environment. The report highlights the research methodology used to advance the technology readiness level from 4 to 6, acknowledging the contribution of the consortium team and the funding source. As part of the technology's maturation process, DigiClap and the respective serious game were evaluated with target users to identify the system's impact on supporting children's overall participation and hand function, and to gather feedback from occupational therapists and caregivers on this novel technology. The outcomes of this study are discussed, highlighting limitations and lessons learned. The report also outlines future work and further funding needed to sustain the project and to reach other individuals with upper limb limitations. Finally, the potential of DigiClap and the overall achievements of the project are discussed.

11.
Appl Neuropsychol Adult ; : 1-4, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38976768

ABSTRACT

The integration of virtual, mixed, and augmented reality technologies in cognitive neuroscience and neuropsychology represents a transformative frontier. In this Commentary, we conducted a meta-analysis of studies that explored the impact of Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR) on cognitive neuroscience and neuropsychology. Our review highlights the versatile applications of VR, ranging from spatial cognition assessments to rehabilitation for traumatic brain injury. We found that MR and AR offer innovative avenues for cognitive training, particularly in memory-related disorders. The applications extend to addressing social cognition disorders and serving as therapeutic interventions for mental health issues. Collaborative efforts between neuroscientists and technology developers are crucial, with reinforcement learning and neuroimaging studies enhancing the potential for improved outcomes. Ethical considerations, including informed consent, privacy, and accessibility, demand careful attention. Across the studies analyzed, common themes included the potential of VR technologies in cognitive neuroscience and neuropsychology, the use of MR and AR in memory research, and the role of VR in neurorehabilitation and therapy.

12.
Curr Opin Psychol ; 58: 101842, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38986168

ABSTRACT

By blurring the boundaries between digital and physical realities, Augmented Reality (AR) is transforming consumers' perceptions of themselves and their environments. This review demonstrates AR's capacity to influence psychology and behavior in profound ways. We begin by providing a concise introduction to AR, considering its technical, practical, and theoretical properties. Next, we showcase a multi-disciplinary set of recent studies that explore AR's impact on psychological processes and behavioral outcomes. We conclude by offering a selection of potential future research directions designed to deepen our understanding of the psychological and behavioral implications of AR experiences.

13.
Clin Neurol Neurosurg ; 244: 108412, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38986364

ABSTRACT

BACKGROUND: Catheter shaping is vital in cerebral aneurysm coil embolization; however, understanding three-dimensional (3D) vascular structures on two-dimensional screens is challenging. Although 3D-printed vascular models are helpful, they demand time, effort, and sterility. This study explores whether mixed-reality (MR) devices displaying 3D computer graphics (3D-CG) can address these issues. METHODS: This study used magnetic resonance imaging (MRI) data from seven cases of cerebral aneurysm. Head-mounted display (HMD) and spatial reality display (SRD) MR devices were used, and applications for 3D-CG display at a 1:1 scale and a 3D-CG control panel were developed. Catheters shaped using a 3D printer, the HMD, and the SRD were inserted into hollow models to assess their accessibility and positioning. RESULTS: The concordance rate between the 3D printer and HMD groups in terms of accessibility to the aneurysm was 71.4%, while that between the 3D printer and SRD groups was 85.7%, and that between the HMD and SRD groups was 85.7%. The concordance rates for positioning between the 3D printer and HMD groups, the 3D printer and SRD groups, and the HMD and SRD groups were 85.7%, 85.7%, and 100%, respectively. CONCLUSIONS: MR devices facilitate catheter shaping in cerebral aneurysm coil embolization and offer a time-efficient, precise, and sterile alternative to traditional 3D printing methods.

14.
Heliyon ; 10(12): e32852, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38975124

ABSTRACT

With the increasing number of high-rise buildings, emergency evacuation has become an indispensable part of urban environment management. Because many disaster incidents occur in indoor environments, research has concentrated on ways to deal with the particular difficulties of indoor emergency evacuation. Although global navigation satellite systems (GNSSs) such as the Global Positioning System (GPS) are useful in outdoor spaces, they are of little use in enclosed places, where satellite signals cannot easily penetrate. Therefore, other approaches to pedestrian navigation must be considered to cope with the indoor positioning problem. Another problem in such environments is obtaining information about the building's indoor space. The majority of studies have used prepared maps of the building, which limits their methodology to that specific study area. In this study, however, we propose an end-to-end method that takes advantage of the building information model (BIM) of the building, making it applicable to every structure that has an equivalent BIM. Moreover, we use a combination of Wi-Fi fingerprinting and pedestrian dead reckoning (PDR), with relatively higher accuracy compared to other similar methods, to navigate the user to the exit point. To implement PDR, we used smartphone sensors to estimate the user's steps and heading. In addition, the navigational information was superimposed on the smartphone screen using augmented reality (AR) technology, thus communicating the direction information in a user-friendly manner. Finally, the AR mobile emergency evacuation application developed was assessed with a sample audience. After using the app, participants filled out a questionnaire designed in the System Usability Scale (SUS) format. The evaluation results showed that the app achieved acceptable usability.
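
The pedestrian dead reckoning component described above advances the position estimate step by step from smartphone-derived step events and heading. A minimal sketch of that update is shown below, with an assumed fixed step length and an initial fix taken from Wi-Fi fingerprinting; the numbers are placeholders, not values from the study.

import numpy as np

def pdr_update(position, heading_rad, step_length_m=0.7):
    """Advance a 2-D indoor position estimate by one detected step.

    heading_rad is the smartphone compass/gyro heading (0 = north, clockwise)."""
    dx = step_length_m * np.sin(heading_rad)   # east component
    dy = step_length_m * np.cos(heading_rad)   # north component
    return position + np.array([dx, dy])

# Hypothetical walk: start at a Wi-Fi-fingerprinted fix, then dead-reckon five steps.
position = np.array([12.0, 4.5])                # metres, from fingerprinting
headings = np.radians([90, 90, 45, 0, 0])        # turning from east toward north
for h in headings:
    position = pdr_update(position, h)
print(f"estimated position: {position.round(2)} m")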

15.
ACS Nano ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38958405

ABSTRACT

Facing the challenge of information security in the current era of information technology, optical encryption based on metasurfaces presents a promising solution. However, most metasurface-based encryption techniques rely on limited decoding keys and struggle to achieve multidimensional complex encryption, which hinders progress in optical storage capacity and puts encryption security at risk of disclosure. Here, we propose and experimentally demonstrate a multidimensional encryption system based on chip-integrated metasurfaces that successfully incorporates the simultaneous manipulation of three optical parameters: wavelength, direction, and polarization. Up to eight channels of augmented reality (AR) holograms are concealed by fused near- and far-field encryption; they can only be extracted by correctly providing the three-dimensional decoding keys and are then vividly displayed to the authorized user with low crosstalk, high definition, and no zero-order speckle noise. We envision that this miniature chip-integrated metasurface strategy for multidimensional encryption offers a feasible route toward enhancing the encryption capacity and information security of anticounterfeiting applications and optical cryptographic storage.

16.
Front Pediatr ; 12: 1386280, 2024.
Article in English | MEDLINE | ID: mdl-38863523

ABSTRACT

Introduction: Preoperative three-dimensional (3D) reconstruction using sectional imaging is increasingly used in challenging pediatric cases to aid surgical planning. Many case series have described various teams' experiences, discussing feasibility and realism while emphasizing the technological potential for children. Nonetheless, general knowledge on this topic remains limited compared to the broader research landscape. The aim of this review was to explore the current devices and new opportunities provided by preoperative computed tomography (CT) or magnetic resonance imaging (MRI). Methods: A systematic review was conducted to screen pediatric cases of abdominal and pelvic tumors with preoperative 3D reconstruction published between 2000 and 2023. Discussion: Surgical planning was facilitated through virtual reconstruction or 3D printing. Virtual reconstruction of complex tumors enables precise delineation of solid masses, formulation of dissection plans, and planning of dedicated vessel ligation, optimizing tissue preservation. Vascular mapping is particularly relevant for liver surgery, large neuroblastomas with image-defined risk factors (IDRFs), and tumors encasing major vessels, such as complex median retroperitoneal malignant masses. 3D printing can facilitate specific tissue preservation, now achievable with minimally invasive procedures like partial nephrectomy. The latest advancements enable neural plexus reconstruction to guide surgical nerve sparing, for example, modelling of the hypogastric nerve, which typically lies adjacent to large pelvic tumors. New approaches will soon incorporate nerve plexus images into anatomical segmentation reconstructions, facilitated by non-irradiating imaging modalities like MRI. Conclusion: Although not yet reported in pediatric surgical procedures, the next anticipated advancement is augmented reality, enhancing real-time intraoperative guidance: the surgeon will use a robotic console overlaying functional and anatomical data onto a magnified surgical field, improving robotic precision in confined spaces.

17.
J Eye Mov Res ; 17(3)2024.
Article in English | MEDLINE | ID: mdl-38863891

ABSTRACT

Mobile eye tracking captures egocentric vision and is well suited for naturalistic studies. However, its data are noisy, especially when acquired outdoors with multiple participants over several sessions. Area-of-interest analysis on moving targets is difficult because A) the camera and objects move nonlinearly and may disappear from or reappear in the scene; and B) off-the-shelf analysis tools are limited to linearly moving objects. As a result, researchers resort to time-consuming manual annotation, which limits the use of mobile eye tracking in naturalistic studies. We introduce a method based on a fine-tuned Vision Transformer (ViT) model for classifying frames with overlaid gaze markers. After fine-tuning a model for three epochs on a manually labelled training set comprising 1.98% (7,845 frames) of our entire dataset, our model reached 99.34% accuracy as evaluated on hold-out data. We used the method to quantify participants' dwell time on a tablet during the outdoor user test of a mobile augmented reality application for biodiversity education. We discuss the benefits and limitations of our approach and its potential to be applied in other contexts.
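
Once each video frame is labelled by the fine-tuned ViT as gaze-on-target or not, dwell time reduces to summing labelled runs. The sketch below shows that aggregation step with an assumed frame rate and a noise-rejection threshold; both are illustrative assumptions, not the paper's exact parameters.

import numpy as np

def dwell_time_seconds(frame_labels, fps=30.0, min_run_frames=3):
    """Sum dwell time on the target from per-frame binary labels.

    frame_labels: 1 if the (assumed) frame classifier marks the gaze as on the tablet,
    0 otherwise. Runs shorter than min_run_frames are discarded as noise."""
    labels = np.asarray(frame_labels, int)
    total = 0
    run = 0
    for v in labels:
        if v == 1:
            run += 1
        else:
            if run >= min_run_frames:
                total += run
            run = 0
    if run >= min_run_frames:
        total += run
    return total / fps

# Hypothetical classifier output for a 12-frame clip at 30 fps.
labels = [0, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
print(f"dwell time: {dwell_time_seconds(labels):.2f} s")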

18.
J Am Coll Radiol ; 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38866067

ABSTRACT

Medical Extended Reality (MXR), encompassing augmented reality (AR), virtual reality (VR), and mixed reality (MR), presents a novel paradigm in radiology training by offering immersive, interactive, and realistic learning experiences in healthcare. While traditional educational tools in the field of radiology remain essential, it is necessary to capitalize on the innovative and emerging educational applications of XR technologies. At the most basic level of learning anatomy, XR has been extensively utilized, with an emphasis on its superiority over conventional learning methods, especially in spatial understanding and recall. For imaging interpretation, XR has fostered the concept of virtual reading rooms by enabling collaborative learning environments and enhancing image analysis and understanding. Moreover, image-guided interventions in interventional radiology have witnessed an uptick in XR utilization, illustrating its effectiveness in procedural training and skill acquisition for medical students and residents in a safe, risk-free environment. However, several challenges and limitations remain for XR in radiology education, including technological, economic, and ergonomic barriers, as well as integration into existing curricula. This review explores the transformative potential of MXR in radiology education and training, along with insights into the future of XR in radiology education, forecasting advancements in immersive simulations, AI integration for personalized learning, and the potential of cloud-based XR platforms for remote and collaborative training. In sum, MXR's burgeoning role in reshaping radiology education offers a safer, more scalable, and more efficient training model that aligns with the dynamic healthcare landscape.

19.
Math Biosci Eng ; 21(5): 5947-5971, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38872565

ABSTRACT

The technology of robot-assisted prostate seed implantation has developed rapidly. However, some problems remain to be solved during the procedure, such as non-intuitive visualization and complicated robot control. To improve the intelligence and visualization of the operation process, a voice control technology for a prostate seed implantation robot in an augmented reality environment was proposed. Initially, the MRI image of the prostate was denoised and segmented. A three-dimensional model of the prostate and its surrounding tissues was reconstructed by surface rendering. Combined with a holographic application program, the augmented reality system for prostate seed implantation was built. An improved singular value decomposition three-dimensional registration algorithm based on iterative closest point was proposed, and three-dimensional registration experiments verified that the algorithm effectively improves registration accuracy. A fusion algorithm based on spectral subtraction and a BP neural network was also proposed. The experimental results showed that the average delay of the fusion algorithm was 1.314 s and the overall response time of the integrated system was 1.5 s. The fusion algorithm effectively improves the reliability of the voice control system, and the integrated system meets the responsiveness requirements of prostate seed implantation.


Subject(s)
Algorithms, Augmented Reality, Magnetic Resonance Imaging, Neural Networks (Computer), Prostate, Prostatic Neoplasms, Robotics, Humans, Male, Robotics/instrumentation, Magnetic Resonance Imaging/methods, Prostatic Neoplasms/diagnostic imaging, Prostate/diagnostic imaging, Three-Dimensional Imaging, Voice, Robotic Surgical Procedures/instrumentation, Robotic Surgical Procedures/methods, Holography/methods, Holography/instrumentation, Brachytherapy/instrumentation, Reproducibility of Results
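
The registration method in the study above couples iterative closest point with an SVD-based alignment step. The sketch below shows the closed-form SVD (Kabsch-style) rotation/translation estimate used inside each such iteration, demonstrated on synthetic corresponding points rather than the paper's prostate data; the improved algorithm itself is not reproduced here.

import numpy as np

def svd_rigid_registration(source: np.ndarray, target: np.ndarray):
    """Best-fit rotation R and translation t mapping source points onto target points.

    This is the closed-form SVD alignment step used inside each ICP iteration;
    point correspondences are assumed to be already established."""
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Synthetic check: recover a known rotation about z and a translation (units in mm).
rng = np.random.default_rng(2)
src = rng.uniform(-30, 30, (50, 3))                  # e.g., surface points from an MRI model
theta = np.radians(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
tgt = src @ R_true.T + np.array([5.0, -2.0, 8.0])
R, t = svd_rigid_registration(src, tgt)
print("rotation error:", np.abs(R - R_true).max(), "translation:", t.round(2))
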
20.
Article in English | MEDLINE | ID: mdl-38863654

ABSTRACT

Tracheal intubation is a crucial airway management procedure performed to sustain life during a variety of procedures. However, difficult airways can make intubation challenging, which is associated with increased mortality and morbidity. This is particularly important for children undergoing intubation in difficult situations. Improved airway management will decrease the incidence of repeated attempts, reduce hypoxic injuries in patients, and shorten hospital stays, resulting in better clinical outcomes and reduced costs. Currently, 3D-printed models based on CT scans and ultrasound-guided intubation are being used or tested for device fitting and procedure guidance to increase the success rate of intubation, but both have limitations. Maintaining a 3D printing facility can be logistically inconvenient, time-consuming, and expensive. Ultrasound-guided intubation can be hindered by operator dependence, limited two-dimensional visualization, and potential artifacts. In this study, we developed an augmented reality (AR) system that overlays intubation tools and the internal airways, providing real-time guidance during the procedure. A child manikin was used to develop and test the AR system. Three-dimensional CT images were acquired from the manikin. Different tissues were segmented to generate 3D models that were imported into Unity to build the holograms. Phantom experiments demonstrated the potential of the AR-guided system for tracheal intubation guidance.
