ABSTRACT
Minimally invasive surgery (MIS) remains technically demanding due to the difficulty of tracking hidden critical structures within the moving anatomy of the patient. In this study, we propose a soft tissue deformation tracking augmented reality (AR) navigation pipeline for laparoscopic surgery of the kidneys. The proposed navigation pipeline addresses two main sub-problems: initial registration and deformation tracking. Our method utilizes preoperative MR or CT data and binocular laparoscopes without any additional interventional hardware. The initial registration is resolved through a probabilistic rigid registration algorithm and elastic compensation based on dense point cloud reconstruction. For deformation tracking, the sparse feature point displacement vector field continuously provides temporal boundary conditions for the biomechanical model. To enhance the accuracy of the displacement vector field, a novel feature point selection strategy based on deep learning is proposed. Moreover, an ex-vivo experimental method for internal-structure error assessment is presented. The ex-vivo experiments indicate an external surface reprojection error of 4.07 ± 2.17 mm and a maximum mean absolute error for internal structures of 2.98 mm. In-vivo experiments indicate mean absolute errors of 3.28 ± 0.40 mm and 1.90 ± 0.24 mm, respectively. The combined qualitative and quantitative findings indicate the potential of our AR-assisted navigation system to improve the clinical application of laparoscopic kidney surgery.
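The abstract does not detail its probabilistic rigid registration algorithm. As a minimal illustrative sketch (not the authors' method), the closed-form least-squares rigid alignment of corresponding 3D points — the Kabsch/Umeyama solution that rigid registration pipelines build on — looks like this:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst.

    src, dst: (N, 3) arrays of corresponding points.
    """
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Example: recover a known rotation and translation from noiseless points.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
R, t = rigid_align(pts, pts @ R_true.T + t_true)
```

A probabilistic method such as coherent point drift wraps an alignment step like this in an EM loop with soft correspondences; the closed-form solve above is the core geometric operation.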
ABSTRACT
OBJECTIVE: Intraoperative liver deformation poses a considerable challenge during liver surgery, causing significant errors in image-guided surgical navigation systems. This study addresses a critical non-rigid registration problem in liver surgery: the alignment of intrahepatic vascular trees. The goal is to deform the complete vascular shape extracted from a preoperative Computed Tomography (CT) volume, aligning it with sparse vascular contour points obtained from intraoperative ultrasound (iUS) images. Challenges arise from the intricate nature of slender vascular branches, which causes existing methods to struggle with accuracy and vascular self-intersection. METHODS: We present a novel non-rigid sparse-dense registration pipeline structured in a coarse-to-fine fashion. In the initial coarse registration stage, we introduce a parametrized deformation graph and a Welsch function-based error metric to enhance the convergence and robustness of non-rigid registration. For the fine registration stage, we propose an automatic curvature-based algorithm to detect and eliminate overlapping regions. Subsequently, we generate the complete vascular shape using posterior computation of a Gaussian Process Shape Model. RESULTS: Experimental results on simulated data demonstrate the accuracy and robustness of our proposed method. Evaluation of the target registration error of tumors highlights the clinical significance of our method for tumor location computation. Comparative analysis against related methods reveals superior accuracy and competitive efficiency of our approach. Moreover, ex vivo swine liver experiments and clinical experiments were conducted to evaluate the method's performance. CONCLUSION: The experimental results emphasize the accurate and robust performance of our proposed method. SIGNIFICANCE: Our proposed non-rigid registration method holds significant application potential in clinical practice.
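For readers unfamiliar with the Welsch function named above, here is a minimal sketch of the error metric and the corresponding iteratively-reweighted-least-squares (IRLS) weight; the scale symbol `nu` is our notation, not necessarily the paper's:

```python
import numpy as np

def welsch(r, nu):
    """Welsch robust error: behaves like r**2 / 2 for small residuals r,
    but saturates at nu**2 / 2, so outliers contribute a bounded penalty."""
    return (nu**2 / 2.0) * (1.0 - np.exp(-(r / nu) ** 2))

def welsch_weight(r, nu):
    """IRLS weight w(r) = psi(r) / r for the Welsch function, where
    psi = d(welsch)/dr = r * exp(-(r/nu)**2). Inliers get weight ~1,
    gross outliers get weight ~0."""
    return np.exp(-(r / nu) ** 2)

# Small residuals are penalized almost quadratically; large ones saturate.
residuals = np.array([0.1, 1.0, 10.0])
penalties = welsch(residuals, nu=1.0)
weights = welsch_weight(residuals, nu=1.0)
```

This bounded-influence behavior is why Welsch-type metrics improve robustness against the spurious correspondences that sparse iUS contour points produce.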
Subjects
Algorithms; Liver; Surgery, Computer-Assisted; Tomography, X-Ray Computed; Liver/diagnostic imaging; Liver/surgery; Liver/blood supply; Surgery, Computer-Assisted/methods; Humans; Tomography, X-Ray Computed/methods; Animals; Liver Neoplasms/diagnostic imaging; Liver Neoplasms/surgery; Swine; Image Processing, Computer-Assisted/methods; Hepatectomy/methods; Ultrasonography/methods
ABSTRACT
BACKGROUND AND OBJECTIVE: With the urgent demand for rapid and precise localization of pulmonary nodules in procedures such as transthoracic puncture biopsy and thoracoscopic surgery, many surgical navigation and robotic systems have been applied in the clinical practice of thoracic surgery. However, currently available positioning methods have certain limitations, including high radiation exposure, large errors caused by respiratory motion, and complicated, time-consuming procedures. METHODS: To address these issues, a preoperative computed tomography (CT) image-guided robotic system for transthoracic puncture is proposed in this study. Firstly, an algorithm for puncture path planning based on constraints derived from clinical knowledge was developed. This algorithm computes Pareto optimal solutions for multiple clinical objectives concerning puncture angle, puncture length, and distance from hazardous areas. Secondly, to eliminate intraoperative radiation exposure, a fast registration method based on preoperative CT and gated respiration compensation is proposed. The registration process can be completed by directly selecting points on the skin near the sternum with a hand-held probe. Gating detection and joint optimization algorithms are then performed on the collected point cloud data to compensate for errors from respiratory motion. Thirdly, to enhance accuracy and intraoperative safety, the puncture guide is used as an end effector to restrict the movement of the optically tracked needle, so that risky actions involving patient contact are strictly limited. RESULTS: The proposed system was evaluated through phantom experiments on our custom-designed simulation test platform for patient respiratory motion to assess its accuracy and feasibility. The results demonstrated an average target point error (TPE) of 2.46 ± 0.68 mm and an angle error (AE) of 1.49 ± 0.45° for the robotic system.
CONCLUSIONS: Our proposed system ensures accuracy, surgical efficiency, and safety while reducing needle insertions and radiation exposure in transthoracic puncture procedures, offering substantial potential for clinical application.
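The Pareto optimal path planning described above can be illustrated with a minimal non-dominated filter. The candidate values below are hypothetical; the third objective negates the clearance from hazardous areas so that every objective is minimized:

```python
import numpy as np

def pareto_front(costs):
    """Return indices of non-dominated rows (all objectives minimized).

    costs: (N, M) array. Row j dominates row i if it is <= in every
    objective and strictly < in at least one.
    """
    costs = np.asarray(costs)
    keep = []
    for i in range(costs.shape[0]):
        dominated = False
        for j in range(costs.shape[0]):
            if j != i and np.all(costs[j] <= costs[i]) and np.any(costs[j] < costs[i]):
                dominated = True
                break
        if not dominated:
            keep.append(i)
    return keep

# Hypothetical candidate paths: [angle_deg, length_mm, -clearance_mm]
c = np.array([[10.0, 60.0, -15.0],
              [12.0, 55.0, -20.0],
              [30.0, 90.0,  -5.0]])   # worse in every objective: dominated
front = pareto_front(c)
```

The surgeon then chooses among the surviving trade-offs (here, candidates 0 and 1) rather than being handed a single scalarized "best" path.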
Subjects
Robotic Surgical Procedures; Surgery, Computer-Assisted; Humans; Robotic Surgical Procedures/methods; Biopsy, Needle; Surgery, Computer-Assisted/methods; Punctures; Algorithms
ABSTRACT
BACKGROUND: The key to successful dental implant surgery is to place the implants accurately along the pre-operatively planned paths. The application of surgical navigation systems can significantly improve the safety and accuracy of implantation. However, the surgeon's frequent shifts of view between the surgical site and the computer screen are disruptive. Mixed-reality technology is expected to solve this problem: by wearing a HoloLens device, the surgeon sees the virtual three-dimensional (3D) image aligned with the actual surgical site in the same field of view. METHODS: This study utilized mixed reality technology to enhance dental implant surgery navigation. Our first step was reconstructing a virtual 3D model from pre-operative cone-beam CT (CBCT) images. We then obtained the relative positions between objects using the navigation device and the HoloLens camera. Via virtual-actual registration algorithms, the transformation matrices between the HoloLens device and the navigation tracker were acquired through HoloLens-tracker registration, and the transformation matrices between the virtual model and the patient phantom through image-phantom registration. In addition, a surgical drill calibration algorithm was used to acquire the transformation matrices between the surgical drill and the patient phantom. These algorithms allow real-time tracking of the surgical drill's location and orientation relative to the patient phantom under the navigation device. With the aid of the HoloLens 2, the virtual 3D images and the actual patient phantom can be aligned accurately, providing surgeons with a clear visualization of the implant path. RESULTS: Phantom experiments were conducted using 30 patient phantoms, with a total of 102 dental implants inserted. Comparisons between the actual implant paths and the pre-operatively planned implant paths showed that our system achieved a coronal deviation of 1.507 ± 0.155 mm, an apical deviation of 1.542 ± 0.143 mm, and an angular deviation of 3.468 ± 0.339°. These deviations were not significantly different from those of navigation-guided dental implant placement and better than those of freehand placement. CONCLUSION: Our proposed system realizes the integration of the pre-operatively planned dental implant paths with the patient phantom, helping surgeons achieve adequate accuracy in traditional dental implant surgery. Furthermore, this system is expected to be applicable to animal and cadaveric experiments in further studies.
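The coronal, apical, and angular deviations reported above are standard implant-accuracy metrics. A minimal sketch of how they are computed from planned and actual entry/apex points (the coordinates below are illustrative, not from the study):

```python
import numpy as np

def implant_deviation(planned_entry, planned_apex, actual_entry, actual_apex):
    """Standard implant placement accuracy metrics.

    Returns (coronal deviation, apical deviation, angular deviation in
    degrees): point-to-point distances at entry and apex, plus the
    angle between the planned and actual implant axes.
    """
    coronal = np.linalg.norm(actual_entry - planned_entry)
    apical = np.linalg.norm(actual_apex - planned_apex)
    a1 = planned_apex - planned_entry
    a2 = actual_apex - actual_entry
    cosang = np.dot(a1, a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))
    angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return coronal, apical, angle

# Illustrative: an implant translated 0.5 mm sideways with no tilt.
p_e, p_a = np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 10.0])
a_e, a_a = np.array([0.5, 0.0, 0.0]), np.array([0.5, 0.0, 10.0])
coronal, apical, angle = implant_deviation(p_e, p_a, a_e, a_a)
```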
ABSTRACT
PURPOSE: The treatment of pelvic and acetabular fractures remains technically demanding, and traditional surgical navigation systems suffer from hand-eye mis-coordination. This paper describes a multi-view interactive virtual-physical registration method to enhance the surgeon's depth perception, and a mixed reality (MR)-based surgical navigation system for pelvic and acetabular fracture fixation. METHODS: First, the pelvic structure is reconstructed by segmentation of a preoperative CT scan, and an insertion path for the percutaneous LC-II screw is computed. A custom hand-held registration cube is used for virtual-physical registration. Three strategies are proposed to improve the surgeon's depth perception: vertex alignment, tremble compensation, and multi-view averaging. During navigation, distance and angular deviation visual cues are updated to help the surgeon with guide wire insertion. The methods have been integrated into an MR module of a surgical navigation system. RESULTS: Phantom experiments were conducted. Ablation results demonstrated the effectiveness of each strategy in the virtual-physical registration method. The proposed method achieved the best accuracy in comparison with related works. For percutaneous guide wire placement, our system achieved a mean bony entry point error of 2.76 ± 1.31 mm, a mean bony exit point error of 4.13 ± 1.74 mm, and a mean angular deviation of 3.04 ± 1.22°. CONCLUSIONS: The proposed method can improve virtual-physical fusion accuracy. The developed MR-based surgical navigation system has clinical application potential. Cadaver and clinical experiments will be conducted in the future.
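Of the three depth-perception strategies, multi-view averaging is the most directly sketchable. Assuming zero-mean jitter on repeated landmark observations (a simplification of the paper's setup, not its exact model), averaging across views reduces the jitter variance by the number of views:

```python
import numpy as np

def multi_view_average(observations):
    """Average repeated observations of the same landmarks.

    observations: (K, N, 3) array — N landmark positions captured from
    K views. For zero-mean jitter (e.g., hand tremble), the averaged
    estimate has its noise variance reduced by a factor of K.
    """
    return np.asarray(observations).mean(axis=0)

# Illustrative: 8 cube vertices observed from 20 views with jitter.
rng = np.random.default_rng(1)
true_pts = rng.uniform(size=(8, 3))
views = true_pts + 0.01 * rng.normal(size=(20, 8, 3))
avg = multi_view_average(views)
single_err = np.linalg.norm(views[0] - true_pts, axis=1).mean()
avg_err = np.linalg.norm(avg - true_pts, axis=1).mean()
```

The averaged landmarks would then feed the point-based virtual-physical registration, so the jitter suppression propagates into the fused scene.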
Subjects
Augmented Reality; Spinal Fractures; Surgery, Computer-Assisted; Humans; Surgery, Computer-Assisted/methods; Pelvis/surgery; Fracture Fixation, Internal/methods
ABSTRACT
Orthopedic surgery remains technically demanding due to complex anatomical structures and cumbersome surgical procedures. The introduction of image-guided orthopedic surgery (IGOS) has significantly decreased surgical risk and improved operative results. This review focuses on the application of recent advances in artificial intelligence (AI), deep learning (DL), augmented reality (AR), and robotics to image-guided spine surgery, joint arthroplasty, fracture reduction, and bone tumor resection. For the pre-operative stage, key technologies of AI- and DL-based medical image segmentation, 3D visualization, and surgical planning procedures are systematically reviewed. For the intra-operative stage, the development of novel image registration, surgical tool calibration, and real-time navigation is reviewed. Furthermore, the combination of surgical navigation systems with AR and robotic technology is discussed. Finally, the current issues and prospects of IGOS systems are discussed, with the goal of establishing a reference and providing guidance for surgeons, engineers, and researchers involved in research and development in this area.
Subjects
Orthopedic Procedures; Robotics; Surgery, Computer-Assisted; Artificial Intelligence; Surgery, Computer-Assisted/methods
ABSTRACT
PURPOSE: The free fibula flap is the gold standard for the treatment of mandibular defects. However, the existing preoperative planning protocol is cumbersome to execute, costly to learn, and poorly coordinated with robot-assisted cutting of the fibular osteotomy plane. METHODS: A surgical planning system for robot-assisted mandibular reconstruction with free fibula flap is proposed in this study. A fibular osteotomy planning algorithm is presented so that the virtual surgical plan of the fibular osteotomy segments can be obtained automatically from selected mandibular anatomical landmarks. The planned osteotomy planes are then converted into the motion path of the robotic arm, and the automatic fibula osteotomy is completed under optical navigation. RESULTS: Surgical planning was performed for 35 patients to verify the feasibility of our system's virtual surgical planning module, with an average time of 13 min. Phantom experiments were performed to evaluate the reliability and stability of the system. The average distance and angular deviations of the osteotomy planes are 1.04 ± 0.68 mm and 1.56 ± 1.10°, respectively. CONCLUSIONS: Our system achieves not only precise and convenient preoperative planning, but also a safe and reliable osteotomy trajectory. Clinical applications of our system for mandibular reconstruction surgery are expected soon.
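The reported distance and angular deviations of osteotomy planes follow from simple plane geometry. A minimal sketch (the landmark coordinates below are illustrative, not from the study):

```python
import numpy as np

def osteotomy_plane(p0, p1, p2):
    """Plane through three landmark points, as (origin, unit normal)."""
    n = np.cross(p1 - p0, p2 - p0)
    return p0, n / np.linalg.norm(n)

def point_plane_distance(q, origin, normal):
    """Signed distance of point q from the plane — the basis of the
    'distance deviation' between a planned and an executed cut."""
    return float(np.dot(q - origin, normal))

def plane_angle(n1, n2):
    """Angle in degrees between two unit plane normals; the absolute
    value of the dot product treats the planes as unoriented."""
    return float(np.degrees(np.arccos(np.clip(abs(np.dot(n1, n2)), 0.0, 1.0))))

# Illustrative: the z = 0 plane from three landmarks.
p0, p1, p2 = np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
origin, n = osteotomy_plane(p0, p1, p2)
```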
Subjects
Free Tissue Flaps; Mandibular Reconstruction; Robotic Surgical Procedures; Surgery, Computer-Assisted; Humans; Mandibular Reconstruction/methods; Free Tissue Flaps/surgery; Reproducibility of Results; Surgery, Computer-Assisted/methods; Mandible/diagnostic imaging; Mandible/surgery
ABSTRACT
PURPOSE: Precise determination of the target is an essential procedure in prostate interventions, such as prostate biopsy, lesion detection, and targeted therapy. However, prostate delineation can be difficult in some cases due to tissue ambiguity or the lack of a partial anatomical boundary. In this study, we propose a novel supervised registration-based algorithm for precise prostate segmentation, which combines a convolutional neural network (CNN) with a statistical shape model (SSM). METHODS: The proposed network mainly consists of two branches. One, called the SSM-Net branch, is exploited to predict the shape transform matrix, shape control parameters, and shape fine-tuning vector for the generation of the prostate boundary. From the inferred boundary, a normalized distance map is calculated as the output of SSM-Net. The other branch, named ResU-Net, predicts a probability label map from the input images at the same time. Integrating the outputs of these two branches, the optimal weighted sum of the distance map and the probability map is taken as the prostate segmentation. RESULTS: Two public data sets, PROMISE12 and NCI-ISBI 2013, were used to evaluate the performance of the proposed algorithm. The results demonstrate that the segmentation algorithm achieved the best performance with an SSM of 9500 nodes, obtaining a Dice coefficient of 0.907 and an average surface distance of 1.85 mm. Compared with other methods, our algorithm delineates the prostate region more accurately and efficiently. In addition, we verified the impact of model elasticity augmentation and the fine-tuning term on the network's segmentation capability. Both factors improved the delineation accuracy, with the Dice coefficient increased by 10% and 7%, respectively. CONCLUSIONS: Our segmentation method has the potential to be an effective and robust approach for prostate segmentation.
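The final fusion step — a weighted sum of the SSM distance map and the CNN probability map — can be sketched as follows. The fixed weight `w` and threshold are illustrative assumptions; the paper optimizes the weighting rather than fixing it:

```python
import numpy as np

def fuse_maps(dist_map, prob_map, w=0.5, thresh=0.5):
    """Weighted fusion of a normalized distance map (shape-model branch)
    and a probability map (pixel-wise CNN branch) into a binary mask.

    Both maps are assumed normalized to [0, 1], higher = more likely
    prostate; w balances the shape prior against the CNN evidence.
    """
    fused = w * dist_map + (1.0 - w) * prob_map
    return (fused >= thresh).astype(np.uint8)

# Illustrative 2x2 maps: left pixels agree on "prostate", right on "not".
d = np.array([[0.9, 0.2], [0.8, 0.1]])
p = np.array([[0.7, 0.4], [0.9, 0.2]])
mask = fuse_maps(d, p)
```

The shape branch regularizes the CNN output: spurious high-probability pixels far from the plausible boundary are pulled below threshold by the distance map.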
Subjects
Imaging, Three-Dimensional; Prostate; Algorithms; Humans; Image Processing, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Magnetic Resonance Imaging/methods; Male; Models, Statistical; Neural Networks, Computer; Prostate/diagnostic imaging
ABSTRACT
OBJECTIVE: Cervical pedicle screw (CPS) placement surgery remains technically demanding due to the complicated anatomy with neurovascular structures. State-of-the-art surgical navigation and robotic systems still suffer from problems of hand-eye coordination and soft tissue deformation. In this study, we aim to track intraoperative soft tissue deformation, construct a virtual-physical fusion surgical scene, and integrate both into a robotic system for CPS placement surgery. METHODS: Firstly, we propose a real-time deformation computation method based on a prior shape model and intraoperative partial information acquired from ultrasound images. Using the generated posterior shape, the structural representation of the deformed target tissue is updated continuously. Secondly, a hand tremble compensation method is proposed to improve the accuracy and robustness of the virtual-physical calibration procedure, and a mixed reality based surgical scene is constructed for CPS placement surgery. Thirdly, we integrate the soft tissue deformation method and the virtual-physical fusion method into our previously proposed surgical robotic system, and the surgical workflow for CPS placement surgery is introduced. RESULTS: We conducted phantom and animal experiments to evaluate the feasibility and accuracy of the proposed system. Our system yielded a mean surface distance error of 1.52 ± 0.43 mm for soft tissue deformation computation, and an average distance deviation of 1.04 ± 0.27 mm for CPS placement. CONCLUSION: The results demonstrate that our system has great clinical application potential. SIGNIFICANCE: Our proposed system promotes the efficiency and safety of CPS placement surgery.
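Inferring a posterior shape from a prior shape model plus partial intraoperative observations is, in the Gaussian case, standard conditioning of a joint Gaussian. A minimal sketch of that conditioning (not the authors' exact model; `noise` is an assumed observation-noise regularizer):

```python
import numpy as np

def gaussian_posterior(mu, cov, obs_idx, obs_vals, noise=1e-6):
    """Condition a joint Gaussian shape model on observed coordinates.

    mu: (D,) mean shape; cov: (D, D) covariance; obs_idx: indices of
    the observed coordinates; obs_vals: their intraoperative values.
    Returns the posterior mean of ALL coordinates — the full deformed
    shape inferred from partial observations.
    """
    o = np.asarray(obs_idx)
    h = np.setdiff1d(np.arange(len(mu)), o)     # hidden coordinates
    K_oo = cov[np.ix_(o, o)] + noise * np.eye(len(o))
    K_ho = cov[np.ix_(h, o)]
    post = np.array(mu, dtype=float)
    post[o] = obs_vals
    post[h] = mu[h] + K_ho @ np.linalg.solve(K_oo, obs_vals - mu[o])
    return post

# Illustrative 2-coordinate model: observing coordinate 0 drags the
# strongly correlated hidden coordinate 1 along with it.
mu = np.zeros(2)
cov = np.array([[1.0, 0.9], [0.9, 1.0]])
post = gaussian_posterior(mu, cov, [0], [1.0])
```

In a navigation loop, the observed indices would correspond to the surface points visible in the ultrasound images, refreshed every frame.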
Subjects
Augmented Reality; Pedicle Screws; Robotic Surgical Procedures; Spinal Fusion; Surgery, Computer-Assisted; Animals; Cervical Vertebrae/diagnostic imaging; Cervical Vertebrae/surgery; Spinal Fusion/methods; Surgery, Computer-Assisted/methods
ABSTRACT
BACKGROUND: Accurate distal locking of intramedullary (IM) nails is a clinical challenge for surgeons. Although many navigation systems have been developed, a real-time guidance method that is free of radiation exposure, convenient to use, and cost-effective has not yet been proposed. METHODS: This paper develops an electromagnetic navigation system named TianXuan-MDTS that provides surgeons with a proven surgical solution. A registration method with external landmarks for IM nails and a calibration algorithm for the guiders are proposed. A puncture experiment, model experiments measured with 3D Slicer, and cadaver experiments (2 cadaveric leg specimens and 6 drilling operations) were conducted to evaluate its performance and stability. RESULTS: The registration deviation (TRE) is 1.05 ± 0.13 mm. In the puncture experiment, a success rate of 96% was achieved in 45.94 s. TianXuan-MDTS was then evaluated on 3 tibia models: all 9 screw holes were successfully prepared (a rate of 100%) in 91.67 s, and the entry point, end point, and angular deviations were 1.60 ± 0.20 mm, 1.47 ± 0.18 mm, and 3.10 ± 0.84°, respectively. Postoperative fluoroscopy in the cadaver experiments showed that all drills were in the distal locking holes, with a success rate of 100% and an average time of 143.17 ± 18.27 s. CONCLUSIONS: The experimental results indicate that our system, with its novel registration and calibration methods, could serve as a feasible and promising tool to assist surgeons during distal locking.
Subjects
Fracture Fixation, Intramedullary; Surgery, Computer-Assisted; Bone Nails; Electromagnetic Phenomena; Fluoroscopy; Humans
ABSTRACT
BACKGROUND AND OBJECTIVE: The distal interlocking of intramedullary nails remains a technically demanding procedure. Existing augmented reality based solutions still suffer from hand-eye coordination problems, prolonged operation time, and inadequate resolution. In this study, an augmented reality based navigation system for distal interlocking of intramedullary nails is developed using Microsoft HoloLens 2, the state-of-the-art optical see-through head-mounted display. METHODS: A customized registration cube is designed to assist surgeons with better depth perception when performing registration procedures. During drilling, surgeons obtain accurate in-situ visualization of the intramedullary nail and the drilling path, and dynamic navigation is enabled. An intraoperative warning system is proposed to provide intuitive feedback on real-time deviations and electromagnetic disturbances. RESULTS: The preclinical phantom experiment showed that the reprojection errors along the X, Y, and Z axes were 1.55 ± 0.27 mm, 1.71 ± 0.40 mm, and 2.84 ± 0.78 mm, respectively. The end-to-end evaluation indicated a distance error of 1.61 ± 0.44 mm and a 3D angle error of 1.46 ± 0.46°. A cadaver experiment was also conducted to evaluate the feasibility of the system. CONCLUSION: Our system has potential advantages over 2D-screen based and pointing device based navigation systems in terms of accuracy and time consumption, and has broad application prospects.
Subjects
Augmented Reality; Fracture Fixation, Intramedullary; Surgery, Computer-Assisted; Internal Fixators; Phantoms, Imaging
ABSTRACT
OBJECTIVE: To realize three-dimensional visual output of surgical navigation information by studying the interconnection of mixed reality display devices and high-precision optical navigators. METHODS: A quaternion-based point alignment algorithm was applied to realize the positioning configuration of the mixed reality display device and the high-precision optical navigator, together with real-time patient tracking and calibration; based on open-source SDKs and development tools, a mixed reality surgical visual positioning and tracking system was developed. In this study, four patients were selected for mixed reality-assisted tumor resection and reconstruction and were re-examined 1 month after the operation. We reconstructed the postoperative CT, used 3DMeshMetric to form the error distribution map, and completed the error analysis and quality control. RESULTS: The interconnection of the mixed reality display equipment and the high-precision optical navigator was realized, a digital maxillofacial surgery system based on mixed reality technology was developed, and mixed reality-assisted tumor resection and reconstruction was successfully implemented in 4 cases. CONCLUSIONS: The maxillofacial digital surgery system based on mixed reality technology can superimpose and display three-dimensional navigation information in the surgeon's field of vision. Moreover, it solves the problems of visual conversion and spatial conversion in existing navigation systems. It improves the efficiency of digitally assisted surgery, effectively reduces the surgeon's dependence on spatial experience and imagination, and protects important anatomical structures during surgery. It has significant clinical application value and potential.
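Quaternion-based point alignment usually refers to Horn's closed-form absolute orientation method; a minimal sketch of its rotation estimation step (the translation then follows from the centroids, as in any point-based registration):

```python
import numpy as np

def horn_quaternion_rotation(src, dst):
    """Optimal rotation mapping src onto dst via Horn's method: the best
    unit quaternion is the top eigenvector of a symmetric 4x4 matrix
    built from the cross-covariance of the centered point sets."""
    P = src - src.mean(axis=0)
    Q = dst - dst.mean(axis=0)
    S = P.T @ Q                                  # 3x3 cross-covariance
    tr = np.trace(S)
    delta = np.array([S[1, 2] - S[2, 1], S[2, 0] - S[0, 2], S[0, 1] - S[1, 0]])
    N = np.empty((4, 4))
    N[0, 0] = tr
    N[0, 1:] = delta
    N[1:, 0] = delta
    N[1:, 1:] = S + S.T - tr * np.eye(3)
    w, v = np.linalg.eigh(N)
    qw, qx, qy, qz = v[:, -1]                    # eigvec of largest eigval
    # Convert the unit quaternion to a rotation matrix.
    return np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw), 2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw), 1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw), 2*(qy*qz + qx*qw), 1 - 2*(qx*qx + qy*qy)],
    ])

# Example: recover a known rotation despite an added translation,
# which the centering step removes.
theta = 0.5
c, s = np.cos(theta), np.sin(theta)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
rng = np.random.default_rng(2)
src = rng.normal(size=(30, 3))
dst = src @ R_true.T + np.array([5.0, 0.0, -3.0])
R_est = horn_quaternion_rotation(src, dst)
```

Compared with the SVD formulation, the quaternion eigenvector is never a reflection, which is one reason this variant is popular for tracker-to-display calibration.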
ABSTRACT
Background: Research indicates that the apprenticeship model, which is the gold standard for training surgical residents, is obsolete. For that reason, there is a continuing effort toward the development of high-fidelity surgical simulators to replace the apprenticeship model. Applying Virtual Reality (VR), Augmented Reality (AR), and Mixed Reality (MR) in surgical simulators increases the fidelity, level of immersion, and overall experience of these simulators. Areas covered: The objective of this review is to provide a comprehensive overview of the application of VR, AR, and MR in distinct surgical disciplines, including maxillofacial surgery and neurosurgery. The current developments in these areas, as well as potential future directions, are discussed. Expert opinion: The key components for incorporating VR into surgical simulators are visual and haptic rendering. These components ensure that the user is completely immersed in the virtual environment and can interact with it in the same way as in the physical world. The key components for applying AR and MR in surgical simulators are the tracking system and the visual rendering. The advantages of these surgical simulators are the ability to perform user evaluations and to increase the training frequency of surgical residents.