2.
Article in English | MEDLINE | ID: mdl-38411780

ABSTRACT

PURPOSE: Analysis of operative fields is expected to aid in estimating procedural workflow and evaluating surgeons' procedural skills by considering the temporal transitions during the progression of the surgery. This study proposes an automatic recognition system for the procedural workflow that employs machine learning techniques to identify and distinguish elements in the operative field, including body tissues such as fat, muscle, and dermis, along with surgical tools. METHODS: We annotated approximately 908 first-person-view images of breast surgery for segmentation. The annotated images were used to train a pixel-level classifier based on Mask R-CNN. To assess the impact on procedural workflow recognition, we annotated an additional 43,007 images. A network based on the Transformer architecture was then trained with surgical images incorporating masks for body tissues and surgical tools. RESULTS: The instance segmentation of each body tissue in the segmentation phase provided insights into the trend of area transitions for each tissue, while the spatial features of the surgical tools were effectively captured. Regarding the accuracy of procedural workflow recognition, accounting for body tissues led to an average improvement of 3% over the baseline, and the inclusion of surgical tools yielded a further increase of 4% over the baseline. CONCLUSION: In this study, we revealed the contribution of the temporal transitions of body tissues and the spatial features of surgical tools to recognizing the procedural workflow in first-person-view surgical videos. Body tissues, especially in open surgery, can be a crucial element. This study suggests that further improvements can be achieved by accurately identifying the surgical tools specific to each step of the procedural workflow.
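The area-transition analysis described in the RESULTS could be sketched as follows: given per-frame segmentation label maps (one class id per pixel), count the pixels belonging to each tissue class in every frame. The class ids and toy frames below are assumptions for illustration, not values from the study.

```python
from collections import Counter

# Hypothetical class-id mapping; the study's actual label scheme is not given.
TISSUES = {1: "fat", 2: "muscle", 3: "dermis"}

def tissue_areas(label_map):
    """Count pixels per tissue class in one flattened label map."""
    counts = Counter(label_map)
    return {name: counts.get(cid, 0) for cid, name in TISSUES.items()}

def area_transitions(frames):
    """Per-frame area counts; such rows could feed a workflow-recognition model."""
    return [tissue_areas(f) for f in frames]

frames = [
    [1, 1, 2, 0, 3, 1],   # early frame: mostly fat exposed
    [2, 2, 2, 1, 3, 3],   # later frame: muscle exposed
]
trend = area_transitions(frames)
```

In practice each frame would be a full-resolution mask produced by the segmentation network, and the resulting per-tissue area series would be one input feature to the temporal model.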

3.
J Clin Med ; 12(23)2023 Nov 29.
Article in English | MEDLINE | ID: mdl-38068460

ABSTRACT

Genioplasty is performed for the orthognathic surgical correction of dentofacial deformities. This article reports a safe and accurate method for genioplasty combining a novel three-dimensional (3D) device with mixed reality (MR)-assisted surgery using a registration marker and a head-mounted display. Four types of devices were designed based on the virtual operation: a surgical splint with a connector; an osteotomy device; a repositioning device; and a registration marker. Microsoft HoloLens 2 and Holoeyes MD were used to project holograms created from computed tomography (CT) data onto the surgical field to improve the accuracy of the computer-aided design and manufacturing (CAD/CAM) surgical guides. After an incision was made in the oral vestibule, the splint was fitted on the teeth and the osteotomy device was mounted at the junction site, placed directly on the exposed mandibular bone surface. Temporary screws were fixed into the screw holes. An ultrasonic cutting instrument was used for the osteotomy. After the bone was separated, a repositioning device was connected to the splint junction and bone segment, and repositioning was performed. At the time of repositioning, the registration marker was connected to the splint junction, and through the HoloLens 2 the mandible was confirmed three-dimensionally to have been repositioned into the position specified in the virtual surgery. The rate of overlay error within 2 mm between the preoperative virtual operation and one-month postoperative CT data was 100%. CAD/CAM combined with MR enabled accurate genioplasty.
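The overlay-error metric reported above (the rate of points within 2 mm between the virtual plan and the postoperative CT) can be sketched as a nearest-neighbor distance check. The point sets below are toy data; the study's actual surface registration pipeline is not reproduced.

```python
import math

def nearest_distance(p, points):
    """Distance from point p to its nearest neighbor in a point set."""
    return min(math.dist(p, q) for q in points)

def within_tolerance_rate(post_points, plan_points, tol_mm=2.0):
    """Fraction of postoperative points within tol_mm of the planned surface."""
    hits = sum(1 for p in post_points if nearest_distance(p, plan_points) <= tol_mm)
    return hits / len(post_points)

plan = [(0, 0, 0), (10, 0, 0), (0, 10, 0)]        # planned surface samples (mm)
post = [(0.5, 0, 0), (10, 1.0, 0), (0, 10, 1.5)]  # postoperative CT samples (mm)
rate = within_tolerance_rate(post, plan)
```

A brute-force nearest-neighbor search is fine at this toy scale; real mesh comparisons would use a spatial index (e.g. a k-d tree).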

4.
Iperception ; 14(6): 20416695231211699, 2023.
Article in English | MEDLINE | ID: mdl-37969571

ABSTRACT

Visuomotor synchrony in time and space induces a sense of embodiment towards virtual bodies experienced in first-person view using Virtual Reality (VR). Here, we investigated whether temporal visuomotor synchrony affects avatar embodiment even when the movements of the virtual arms are spatially altered from those of the user in a non-human-like manner. In a within-subjects design VR experiment, participants performed a reaching task controlling an avatar whose lower arms bent in inversed and biomechanically impossible directions from the elbow joints. They performed the reaching task using this "unnatural avatar" as well as a "natural avatar," whose arm movements and positions spatially matched the user. The reaching tasks were performed with and without a one second delay between the real and virtual movements. While the senses of body ownership and agency towards the unnatural avatar were significantly lower compared to those towards the natural avatar, temporal visuomotor synchrony did significantly increase the sense of embodiment towards the unnatural avatar as well as the natural avatar. These results suggest that temporal visuomotor synchrony is crucial for inducing embodiment even when the spatial match between the real and virtual limbs is disrupted with movements outside the pre-existing cognitive representations of the human body.
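The one-second asynchrony condition can be illustrated with a simple pose buffer that releases tracked poses to the avatar after a fixed latency. The frame rate and the integer "poses" are assumptions chosen to keep the demo short.

```python
from collections import deque

class DelayedAvatar:
    """Releases real tracked poses to the virtual arm after a fixed delay."""

    def __init__(self, delay_s=1.0, fps=90):
        self.buffer = deque()
        self.delay_frames = int(delay_s * fps)

    def update(self, real_pose):
        """Feed the current tracked pose; return the pose the avatar displays."""
        self.buffer.append(real_pose)
        if len(self.buffer) <= self.delay_frames:
            return self.buffer[0]       # hold the first pose until the buffer fills
        return self.buffer.popleft()    # emit the pose from delay_s ago

avatar = DelayedAvatar(delay_s=1.0, fps=3)  # 3 fps keeps the trace readable
shown = [avatar.update(t) for t in range(6)]
```

With synchrony (delay_s=0) the avatar would mirror the tracked pose on the same frame; the buffered variant reproduces the temporal mismatch condition.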

5.
Brain Nerve ; 75(10): 1129-1134, 2023 Oct.
Article in Japanese | MEDLINE | ID: mdl-37849363

ABSTRACT

Metaverse and XR (extended reality) technologies are rapidly gaining traction in the medical field, where they are used to assist surgeries. Leveraging high-speed communication networks and advanced XR devices, medical images and surgical procedures are displayed in a three-dimensional space within the metaverse and shared among physicians. Furthermore, there are initiatives in which novice doctors can relive the techniques of experienced doctors over time, in the hope of improving the overall quality and operational efficiency of the medical profession.


Subject(s)
Medicine , Neurosurgery , Physicians , Humans , Neurosurgical Procedures , Image Processing, Computer-Assisted
7.
Surg Endosc ; 37(7): 5414-5420, 2023 07.
Article in English | MEDLINE | ID: mdl-37017769

ABSTRACT

BACKGROUND: In Japan, the standard treatment for stage II/III advanced low rectal cancer is total mesorectal excision plus lateral lymph node dissection (LLND). There are also recent reports on the use of transanal LLND. However, the transanal anatomy is difficult to understand, and additional support tools are required to improve the surgical safety. The present study examined the utility of holograms with mixed reality as an intraoperative support tool for assessing the complex pelvic anatomy. METHODS: Polygon (stereolithography) files of patients' pelvic organs were created and exported from the SYNAPSE VINCENT imaging system and uploaded into the Holoeyes MD virtual reality software. Three-dimensional images were automatically converted into patient-specific holograms. Each hologram was then installed into a head mount display (HoloLens2), and the surgeons and assistants wore the HoloLens2 when they performed transanal LLND. Twelve digestive surgeons with prior practice in hologram manipulation evaluated the utility of the intraoperative hologram support by means of a questionnaire. RESULTS: Intraoperative hologram support improved the surgical understanding of the lateral lymph node region anatomy. In the questionnaire, 75% of the surgeons answered that the hologram accurately reflected the anatomy, and 92% of the surgeons answered that the anatomy was better understood by simulating the hologram intraoperatively than preoperatively. Moreover, 92% of the surgeons agreed that intraoperative holograms were a useful support tool for improving the surgical safety. CONCLUSIONS: Intraoperative hologram support improved the surgical understanding of the pelvic anatomy for transanal LLND. Intraoperative holograms may represent a next-generation surgical tool for transanal LLND.
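The polygon (stereolithography) export step above can be illustrated with a minimal binary STL reader. The layout follows the standard binary STL format (80-byte header, uint32 triangle count, then 50 bytes per triangle: normal plus three vertices as float32, plus a 2-byte attribute); the one-triangle blob built in memory is purely illustrative.

```python
import struct

def read_stl_triangles(data: bytes):
    """Parse a binary STL buffer into a list of 9-tuples of vertex coordinates."""
    count = struct.unpack_from("<I", data, 80)[0]
    tris = []
    offset = 84
    for _ in range(count):
        values = struct.unpack_from("<12f", data, offset)  # normal + 3 vertices
        tris.append(values[3:])                            # keep vertex coords only
        offset += 50
    return tris

# Build a one-triangle STL in memory to exercise the reader.
tri = (0.0, 0.0, 1.0,  0.0, 0.0, 0.0,  1.0, 0.0, 0.0,  0.0, 1.0, 0.0)
blob = b"\0" * 80 + struct.pack("<I", 1) + struct.pack("<12f", *tri) + b"\0\0"
tris = read_stl_triangles(blob)
```

Production pipelines such as the imaging-to-hologram conversion described here would of course use a dedicated mesh library rather than hand-rolled parsing; the sketch only shows what the exported polygon data contain.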


Subject(s)
Lymph Node Excision , Rectal Neoplasms , Humans , Treatment Outcome , Lymph Node Excision/methods , Lymph Nodes/pathology , Rectal Neoplasms/surgery , Rectal Neoplasms/pathology , Dissection
8.
Article in English | MEDLINE | ID: mdl-37027567

ABSTRACT

When humans generate stimuli voluntarily, they perceive the stimuli more weakly than those produced by others, which is called sensory attenuation (SA). SA has been investigated in various body parts, but it is unclear whether an extended body induces SA. This study investigated the SA of audio stimuli generated by an extended body. SA was assessed using a sound comparison task in a virtual environment. We prepared robotic arms as extended bodies, and the robotic arms were controlled by facial movements. To evaluate the SA of robotic arms, we conducted two experiments. Experiment 1 investigated the SA of the robotic arms under four conditions. The results showed that robotic arms manipulated by voluntary actions attenuated audio stimuli. Experiment 2 investigated the SA of the robotic arm and innate body under five conditions. The results indicated that the innate body and robotic arm induced SA, while there were differences in the sense of agency between the innate body and robotic arm. Analysis of the results indicated three findings regarding the SA of the extended body. First, controlling the robotic arm with voluntary actions in a virtual environment attenuates the audio stimuli. Second, there were differences in the sense of agency related to SA between extended and innate bodies. Third, the SA of the robotic arm was correlated with the sense of body ownership.

10.
IEEE Trans Vis Comput Graph ; 29(10): 4124-4139, 2023 10.
Article in English | MEDLINE | ID: mdl-35653450

ABSTRACT

As one of the facial expression recognition techniques for Head-Mounted Display (HMD) users, embedded photo-reflective sensors have been used. In this paper, we investigate how gaze and face directions affect facial expression recognition using embedded photo-reflective sensors. First, we collected a dataset of five facial expressions (Neutral, Happy, Angry, Sad, Surprised) while participants looked in diverse directions by moving 1) the eyes and 2) the head. Using this dataset, we analyzed the effect of gaze and face directions by constructing facial expression classifiers in five ways and evaluating the classification accuracy of each classifier. The results revealed that a single classifier trained on the data for all gaze points achieved the highest classification performance. We then investigated which facial parts were affected by gaze and face direction. The results showed that the gaze directions affected the upper facial parts, while the face directions affected the lower facial parts. In addition, by removing the bias of facial expression reproducibility, we investigated the pure effect of gaze and face directions under three conditions. The results showed that, in terms of gaze direction, building classifiers for each direction significantly improved the classification accuracy, whereas in terms of face direction there were only slight differences between the classifier conditions. Our experimental results imply that multiple classifiers corresponding to multiple gaze and face directions improve facial expression recognition accuracy, but collecting data on the vertical movements of the gaze and face is a practical solution for improving facial expression recognition accuracy.
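The pooled single-classifier versus per-direction comparison can be sketched with a deliberately simple model; the paper's actual classifier is not specified here, so this stand-in uses a nearest-centroid classifier over synthetic sensor vectors purely to show the training/classification shape of the experiment.

```python
import math
from collections import defaultdict

def train_centroids(samples):
    """samples: list of (sensor_vector, label) -> dict of label -> mean vector."""
    sums, counts = defaultdict(lambda: None), defaultdict(int)
    for vec, label in samples:
        if sums[label] is None:
            sums[label] = [0.0] * len(vec)
        sums[label] = [s + v for s, v in zip(sums[label], vec)]
        counts[label] += 1
    return {lb: [s / counts[lb] for s in sums[lb]] for lb in sums}

def classify(centroids, vec):
    """Assign the label whose centroid is nearest to the sensor vector."""
    return min(centroids, key=lambda lb: math.dist(centroids[lb], vec))

# Synthetic two-sensor readings; a pooled model would train on all gaze
# directions together, a per-direction model on one direction's subset.
train = [([0.9, 0.1], "Happy"), ([0.8, 0.2], "Happy"),
         ([0.1, 0.9], "Neutral"), ([0.2, 0.8], "Neutral")]
model = train_centroids(train)
pred = classify(model, [0.85, 0.15])
```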


Subject(s)
Emotions , Facial Recognition , Reproducibility of Results , Computer Graphics , Anger , Facial Expression , Fixation, Ocular
11.
Innovations (Phila) ; 17(5): 430-437, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36331023

ABSTRACT

OBJECTIVE: Virtual reality can be applied preoperatively by surgeons to gain precise insights into a patient's anatomy for planning minimally invasive coronary artery bypass grafting (CABG) with in situ arterial grafts. This study aimed to examine virtual reality simulation for minimally invasive CABG with in situ arterial grafts. METHODS: Preoperative stereolithographic files of 35 in situ arterial grafts were created from 320-slice computed tomography data on a workstation. The accurate length and direction of each graft were confirmed through virtual reality glasses. The simulation of graft designs was performed using an immersive virtual reality platform. RESULTS: The mean harvested lengths of in situ left internal thoracic artery (n = 17), right internal thoracic artery (n = 12), and gastroepiploic artery (n = 6) grafts predicted by virtual reality simulation were 21.4 ± 3.4 cm, 21.2 ± 3.6 cm, and 22.8 ± 4.8 cm, respectively. The required lengths of these grafts predicted by virtual reality simulation were 15.8 ± 2.3 cm, 16.4 ± 2.1 cm, and 14.5 ± 4.4 cm, respectively. Minimally invasive CABG using virtual reality simulation was completed in 17 patients, of whom 16 underwent aortic no-touch total arterial CABG. The surgical strategy was adjusted in 11.8% of cases owing to the 3-dimensional virtual reality-based anatomy evaluation. Early mortality and morbidity were 0%, and graft patency was 100%. The median time to return to full physical activity was 7.1 days. CONCLUSIONS: This study demonstrated the successful development and clinical application of the first dedicated virtual reality platform for planning aortic no-touch total arterial minimally invasive CABG. Virtual reality simulation can allow accurate preoperative understanding of the anatomy and appropriate planning of the graft design with acceptable postoperative outcomes.
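Graft lengths like those above are typically measured along a 3D centerline; that measurement reduces to summing segment lengths of a polyline. The coordinates below are invented for illustration, not CT-derived.

```python
import math

def polyline_length(points):
    """Total length of a 3D polyline given as a sequence of (x, y, z) points."""
    return sum(math.dist(a, b) for a, b in zip(points, points[1:]))

# Toy centerline in cm: two segments of length 5 and 12.
centerline = [(0, 0, 0), (3, 4, 0), (3, 4, 12)]
length_cm = polyline_length(centerline)
```

Comparing such a measured length against the required length predicted in the simulation is what tells the surgeon whether a given in situ graft can reach its target.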


Subject(s)
Mammary Arteries , Virtual Reality , Humans , Minimally Invasive Surgical Procedures/methods , Coronary Artery Bypass/methods , Mammary Arteries/transplantation , Coronary Vessels/diagnostic imaging , Coronary Vessels/surgery , Treatment Outcome
13.
Sci Rep ; 12(1): 11802, 2022 07 12.
Article in English | MEDLINE | ID: mdl-35821275

ABSTRACT

In illusory body ownership, humans feel as if a rubber hand or an avatar in a virtual environment is their own body through visual-tactile synchronization or visual-motor synchronization. Although the onset time and duration of illusory body ownership have been investigated, it is not clear how they change when a part of the body is missing from the full body. In this study, we investigated the effect of full-body completeness on illusion onset and duration by comparing the following conditions: a complete avatar, an avatar missing hands and feet, and an avatar with hands and feet only. Our results suggest that an avatar consisting of hands and feet only shortens the duration of the illusion, and that missing body parts, whether only the hands and feet or everything but the hands and feet, reduce the senses of body ownership and agency. However, the effects of avatar completeness on the onset time are unclear, and no conclusions can be made in either direction based on the current findings.


Subject(s)
Illusions , Foot , Hand , Humans , Ownership , Visual Perception
14.
Langenbecks Arch Surg ; 407(6): 2579-2584, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35840706

ABSTRACT

PURPOSE: Urethral injury is one of the most important complications in transanal total mesorectal excision (TaTME) in male patients with rectal cancer. The purpose of this study was to investigate holographic image-guided surgery in TaTME. METHODS: Polygon (stereolithography) files were created and exported from SYNAPSE VINCENT, and then uploaded into the Holoeyes MD system (Holoeyes Inc., Tokyo, Japan). After uploading the data, the three-dimensional image was automatically converted into a case-specific hologram. The hologram was then installed into the head mount display, HoloLens (Microsoft Corporation, Redmond, WA). The surgeons and assistants wore the HoloLens when they performed TaTME. RESULTS: In a Wi-Fi-enabled operating room, each surgeon, wearing a HoloLens, shared the same hologram and succeeded in adjusting the hologram by making simple hand gestures from their respective angles. The hologram contributed to better comprehension of the positional relationships between the urethra and the surrounding pelvic organs during surgery. All surgeons were able to properly determine the dissection line. CONCLUSIONS: This first experience suggests that intraoperative holograms contributed to reducing the risk of urethral injury and understanding transanal anatomy. Intraoperative holograms have the potential to become a new next-generation surgical support tool for use in spatial awareness and the sharing of information between surgeons.


Subject(s)
Laparoscopy , Proctectomy , Rectal Neoplasms , Surgery, Computer-Assisted , Transanal Endoscopic Surgery , Dissection/methods , Humans , Male , Postoperative Complications/surgery , Rectal Neoplasms/diagnostic imaging , Rectal Neoplasms/surgery , Rectum/surgery , Surgery, Computer-Assisted/methods , Transanal Endoscopic Surgery/methods
15.
Sci Rep ; 12(1): 9769, 2022 06 27.
Article in English | MEDLINE | ID: mdl-35760810

ABSTRACT

The supernumerary robotic limb system expands the motor function of human users by adding extra, artificially designed limbs. It is important that users embody the system as if it were a part of their own body and that cognitive transparency, in which the cognitive load is suppressed, be maintained. Embodiment studies have been conducted on the expansion of bodily functions through "substitution" and "extension"; however, there have been few studies on the "addition" of supernumerary body parts. In this study, we developed a supernumerary robotic limb system that operates in a virtual environment, and then evaluated whether the extra limbs can be regarded as part of one's own body, using a questionnaire, and whether the perception of peripersonal space changes, using a visuotactile crossmodal congruency task. We found that participants can embody the extra limbs after using the supernumerary robotic limb system. We also found a positive correlation between the perceptual change in the crossmodal congruency task and the subjective feeling that the number of one's arms had increased (supernumerary limb sensation). These results suggest that the addition of an extra body part may cause participants to feel that they have acquired a new body part that differs from their original body through a functional expansion.
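The visuotactile crossmodal congruency task mentioned above is typically scored as the crossmodal congruency effect (CCE): mean reaction time on incongruent trials minus mean reaction time on congruent trials. The reaction times below are illustrative, not data from the study.

```python
def mean(xs):
    return sum(xs) / len(xs)

def crossmodal_congruency_effect(trials):
    """trials: list of (rt_ms, congruent_bool); returns CCE in ms."""
    congruent = [rt for rt, c in trials if c]
    incongruent = [rt for rt, c in trials if not c]
    return mean(incongruent) - mean(congruent)

trials = [(420, True), (440, True), (510, False), (530, False)]
cce = crossmodal_congruency_effect(trials)  # larger CCE -> stronger remapping
```

Comparing the CCE before and after use of the robotic limbs is one way to quantify the change in peripersonal space the abstract describes.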


Subject(s)
Robotic Surgical Procedures , Robotics , Virtual Reality , Arm , Humans , Personal Space
16.
Medicina (Kaunas) ; 58(4)2022 Apr 02.
Article in English | MEDLINE | ID: mdl-35454347

ABSTRACT

The concept of minimally invasive spine therapy (MIST) has been proposed as a treatment strategy to reduce the need for overall patient care, including not only minimally invasive spine surgery (MISS) but also conservative treatment and rehabilitation. To maximize the effectiveness of patient care in spine surgery, the educational needs of medical students, residents, and patient rehabilitation can be enhanced by digital transformation (DX), including virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR), three-dimensional (3D) medical images and holograms; wearable sensors, high-performance video cameras, fifth-generation wireless system (5G) and wireless fidelity (Wi-Fi), artificial intelligence, and head-mounted displays (HMDs). Furthermore, to comply with the guidelines for social distancing due to the unexpected COVID-19 pandemic, the use of DX to maintain healthcare and education is becoming more innovative than ever before. In medical education, with the evolution of science and technology, it has become mandatory to provide a highly interactive educational environment and experience using DX technology for residents and medical students, known as digital natives. This study describes an approach to pre- and intraoperative medical education and postoperative rehabilitation using DX in the field of spine surgery that was implemented during the COVID-19 pandemic and will be utilized thereafter.


Subject(s)
Augmented Reality , COVID-19 , Education, Medical , Artificial Intelligence , Education, Medical/methods , Humans , Pandemics
17.
Ann Gastroenterol Surg ; 6(2): 190-196, 2022 Mar.
Article in English | MEDLINE | ID: mdl-35261944

ABSTRACT

With the development of three-dimensional (3D) simulation software, preoperative simulation technology is almost completely established. The remaining issue is how to recognize anatomy three-dimensionally. Extended reality is a newly developed technology with several merits for surgical application: no requirement for a sterilized display monitor, better spatial awareness, and the ability to share 3D images among all surgeons. Various technology or devices for intraoperative navigation have also been developed to support the safety and certainty of liver surgery. Consensus recommendations regarding indocyanine green fluorescence were determined in 2021. Extended reality has also been applied to intraoperative navigation, and artificial intelligence (AI) is one of the topics of real-time navigation. AI might overcome the problem of liver deformity with automatic registration. Including the issues described above, this article focuses on recent advances in simulation and navigation in liver surgery from 2020 to 2021.

18.
J Clin Med ; 11(2)2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35054164

ABSTRACT

In recent years, with the rapid advancement and consumerization of virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR) technology, the use of XR technology in spine medicine has become increasingly popular. The rising use of XR technology in spine medicine has been accelerated by the recent wave of digital transformation (i.e., case-specific three-dimensional medical images and holograms, wearable sensors, video cameras, fifth-generation (5G) wireless systems, artificial intelligence, and head-mounted displays), and further accelerated by the COVID-19 pandemic and the increase in minimally invasive spine surgery. The COVID-19 pandemic has had a negative impact on society, but positive impacts can also be expected, including the continued spread and adoption of telemedicine services (i.e., tele-education, tele-surgery, and tele-rehabilitation) that promote digital transformation. The purpose of this narrative review is to describe the accelerators of XR (VR, AR, MR) technology in spine medicine and then to provide a comprehensive review of the use of XR technology in spine medicine, including surgery, consultation, education, and rehabilitation, as well as to identify its limitations and future perspectives (status quo and quo vadis).

19.
Langenbecks Arch Surg ; 407(3): 1285-1289, 2022 May.
Article in English | MEDLINE | ID: mdl-34557939

ABSTRACT

PURPOSE: This study was performed to investigate the potential of intraoperative three-dimensional (3D) holographic cholangiography, which provides a computer graphics model of the biliary tract, with mixed reality techniques. METHODS: Two patients with intraductal papillary neoplasm of the bile duct were enrolled in the study. Intraoperative 3D cholangiography was performed in a hybrid operating room. Three-dimensional polygon data generated from the acquired cholangiography data were installed into a head-mounted display (HoloLens; Microsoft Corporation, Redmond, WA, USA). RESULTS: Upon completion of intraoperative 3D cholangiography, a hologram was immediately and successfully created in the operating room using the acquired cholangiography data, and several surgeons wearing the HoloLens succeeded in sharing the same hologram. Compared with usual two-dimensional cholangiography, this 3D holographic cholangiography technique contributed to a more accurate reappearance of the bile ducts, especially the B1 origination site, and allowed the hologram to be moved from the respective operators' angles by means of easy gesture handling without any monitors. CONCLUSION: Intraoperative 3D holographic cholangiography might be a new next-generation operation-support tool in terms of immediacy, accurate anatomical reappearance, and ease of handling.


Subject(s)
Bile Duct Neoplasms , Biliary Tract , Bile Ducts/surgery , Cholangiography , Humans
20.
Surgery ; 171(4): 1006-1013, 2022 04.
Article in English | MEDLINE | ID: mdl-34736791

ABSTRACT

BACKGROUND: Mixed-reality technology, a new digital holographic image technology, is used to present 3-dimensional (3D) images in the surgical space through a wearable mixed-reality device. This study aimed to assess the safety and efficacy of laparoscopic cholecystectomy using a holography-guided navigation system as an intraoperative support image. METHODS: In this prospective observational study, 27 patients with cholelithiasis or mild cholecystitis underwent laparoscopic cholecystectomy between April 2020 and November 2020. Nine patients underwent laparoscopic cholecystectomy with 3D models generated by a wearable mixed-reality device and 18 underwent laparoscopic cholecystectomy with conventional 2-dimensional (2D) images as surgical support images. Surgical outcomes such as operative time, blood loss, and perioperative complication rate were measured, and a four-item questionnaire was used for subjective assessment. All surgeries were performed by a mid-career and an experienced surgeon. RESULTS: Median operative times of laparoscopic cholecystectomy with 3D models and 2D images were 74.0 and 58.0 minutes, respectively. No intraoperative blood loss or perioperative complications occurred. Although the mid-career surgeon rated laparoscopic cholecystectomy with 3D models as "normal" or "easy" compared with 2D images in all cases, the experienced surgeon rated the 3D models as more difficult in 3 (33%) of 9 cases. CONCLUSION: This study provides evidence that laparoscopic cholecystectomy with 3D models is feasible. However, its efficacy may depend on the surgeon's experience, as indicated by the different ratings provided by the two surgeons.


Subject(s)
Cholecystectomy, Laparoscopic , Cholecystitis , Holography , Wearable Electronic Devices , Cholecystectomy, Laparoscopic/adverse effects , Cholecystectomy, Laparoscopic/methods , Cholecystitis/surgery , Computers , Humans