ABSTRACT
BACKGROUND: Augmented reality (AR) remains a largely theoretical concept in areas such as bowel, liver, gallbladder, and jaw surgery because of limited visualization accuracy for hidden organs and internal structures. This paper aims to improve the cutting accuracy, visualization accuracy, and processing time of the augmented video. METHODOLOGY: The proposed system consists of an enhanced block-matching algorithm (BMA) combined with a ghosting-map technique. RESULTS: Results showed that the proposed system reduced the visualization error to a range of 1.48 to 1.83 mm, against the existing system's visualization error of 1.67 to 2.0 mm. Similarly, the processing time also improved, at 59 to 72 ms/frame compared with the existing 50 to 58 ms/frame. CONCLUSION: This study demonstrated the improvement and addressed the problem of soft-tissue reconstruction and visualization in the AR video used in bowel and gallbladder surgeries.
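The abstract names an enhanced block-matching algorithm (BMA) with a ghosting-map technique but does not detail either; as context, the following is a minimal sketch of a baseline exhaustive-search BMA on grayscale frames using the sum of absolute differences (SAD). The block and search-window sizes are illustrative assumptions, and the paper's enhancement and ghosting map are not reproduced here.

```python
import numpy as np

def block_match_sad(ref_frame, cur_frame, block=16, search=8):
    """Exhaustive-search block matching on 2-D grayscale frames using the
    sum of absolute differences (SAD); returns a per-block motion-vector
    field of shape (h // block, w // block, 2)."""
    h, w = ref_frame.shape
    vectors = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            target = cur_frame[by:by + block, bx:bx + block].astype(np.int32)
            best, best_mv = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the frame
                    cand = ref_frame[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(cand - target).sum()
                    if sad < best:
                        best, best_mv = sad, (dy, dx)
            vectors[by // block, bx // block] = best_mv
    return vectors
```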
Subjects
Augmented Reality , Digestive System Surgical Procedures/instrumentation , Digestive System Surgical Procedures/methods , Algorithms , Gallbladder/surgery , Humans , Image Processing, Computer-Assisted , Imaging, Three-Dimensional/methods , Intestines/surgery , Laparoscopy/methods , Liver/surgery , Phantoms, Imaging , Reproducibility of Results , Surgery, Computer-Assisted/methods , Tomography, X-Ray Computed , User-Computer Interface , Video Recording
ABSTRACT
The purpose of this study is to replace the manual process of selecting landmarks on the mesh and anchor points on the video with an intensity-based automatic registration method, in order to achieve high registration accuracy and low processing time. The proposed system consists of an Enhanced Intensity-based Automatic Registration (EIbAR) method using a Modified Zero Normalized Cross Correlation (MZNCC) algorithm. The proposed system was implemented on videos of breast cancer tumors. Results showed that the proposed algorithm, compared to a reference, improved registration accuracy by an average of 2 mm. In addition, the proposed algorithm, compared to a reference, reduced the number of pixel-matching operations, thereby reducing the processing time of the video by an average of 22 ms/frame. The proposed system can thus provide acceptable accuracy and processing time during scene augmentation of videos, enabling seamless use of augmented reality by surgeons visualizing cancer tumors.
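The abstract does not specify how the Modified ZNCC differs from the standard formulation; as a reference point, the sketch below shows the standard zero-normalized cross-correlation similarity commonly used in intensity-based registration. The epsilon guard against flat (zero-variance) patches is an assumption, not part of the paper.

```python
import numpy as np

def zncc(patch_a, patch_b, eps=1e-9):
    """Standard zero-normalized cross-correlation between two equally
    sized image patches; returns a similarity score in [-1, 1]."""
    a = patch_a.astype(np.float64) - patch_a.mean()
    b = patch_b.astype(np.float64) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + eps  # guard flat patches
    return float((a * b).sum() / denom)
```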
Subjects
Breast Neoplasms/diagnostic imaging , Breast Neoplasms/surgery , Image Processing, Computer-Assisted/methods , Imaging, Three-Dimensional/methods , Surgery, Computer-Assisted/methods , Algorithms , Augmented Reality , Female , Humans , Pattern Recognition, Automated , Reproducibility of Results , Tomography, X-Ray Computed , Video Recording
ABSTRACT
BACKGROUND: Augmented reality (AR) is gaining attention in medicine because of the convenience and innovation it brings to operating rooms. Oral and maxillofacial surgery (OMS), which involves a sensitive and spatially constrained surgical field, requires high image-registration accuracy and low system processing time. However, current systems suffer from image-registration problems when matching images of two different postures. We thus aimed to increase the overlay accuracy and decrease the processing time. METHODOLOGY: The proposed system consists of an Iterative Closest Point (ICP) algorithm that combines a rotation invariant and a Manhattan error metric, to provide the best initial parameters and to decrease the computational cost by sorting high- and low-processing pixel images, respectively. RESULTS: The study on the maxillary and mandibular jaw bones demonstrates that the proposed work's overlay accuracy ranges from 0.22 to 0.30 mm and its processing rate from 10 to 14 frames per second, as opposed to the current 0.23 to 0.35 mm overlay accuracy and 8 to 12 frames per second. CONCLUSION: This research aimed to provide improved visualization and a faster AR system for OMS. The proposed system achieved improvements in overlay accuracy and processing time by implementing the rotation-invariant, Manhattan-error-metric ICP algorithm.
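The abstract does not give the exact formulation of the rotation-invariant, Manhattan-metric ICP; the following is a minimal sketch, assuming NumPy and SciPy, of a conventional point-to-point ICP loop that reports a Manhattan (L1) residual as its error metric. The paper's rotation-invariant initialization and pixel-sorting step are not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_manhattan(src, dst, iters=30, tol=1e-5):
    """Point-to-point ICP on (N, 3) point clouds: nearest-neighbour
    correspondences, rigid alignment via SVD, and a Manhattan (L1)
    residual used as the convergence / error metric."""
    src = src.copy()
    tree = cKDTree(dst)
    prev_err = np.inf
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(iters):
        _, idx = tree.query(src)               # closest points in dst
        matched = dst[idx]
        mu_s, mu_d = src.mean(0), matched.mean(0)
        H = (src - mu_s).T @ (matched - mu_d)  # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:               # avoid reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_d - R @ mu_s
        src = src @ R.T + t                    # apply incremental transform
        R_total, t_total = R @ R_total, R @ t_total + t
        err = np.abs(src - matched).sum(axis=1).mean()  # Manhattan residual
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return R_total, t_total, err
```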
Subjects
Augmented Reality , Surgery, Computer-Assisted , Surgery, Oral , Algorithms , Humans , Imaging, Three-Dimensional , Rotation
ABSTRACT
BACKGROUND AND AIM: Surgical telepresence has been implemented using mixed reality (MR), but MR remains largely theoretical and is used mainly for investigative research. The aim of this paper is to propose and implement a new solution that merges the augmented video (generated at the local site) with the virtual hand of the expert surgeon (remote site). The system is intended to improve visualization of the surgical area and overlay accuracy in the merged video, without discoloured patterns on the hand, smudging artefacts along the surgeon's hand boundary, or occluded areas of the surgical field. METHODOLOGY: The proposed system consists of an Enhanced Multi-Layer Mean Value Cloning (EMLMV) algorithm that improves overlay accuracy, visualization accuracy, and processing time. The algorithm includes trimap generation and alpha matting as a pre-processing stage of the merging process, which helps remove the smudging and discoloured artefacts around the remote surgeon's hand. RESULTS: Results show that the proposed system improved accuracy by reducing the overlay error of the merged image from 1.3 mm to 0.9 mm. Furthermore, it improves the visibility of the surgeon's hand in the final merged image from 98.4% to 99.1% of pixels. Similarly, the processing time of the proposed solution is reduced, taking 10 s to produce 50 frames, whereas the state-of-the-art solution takes 11 s for the same number of frames. CONCLUSION: The proposed system focuses on merging the augmented-reality video (local site) and the virtual-reality video (remote site) with accurate visualization. We consider discoloured areas, smudging artefacts, and occlusion to be the main aspects affecting the accuracy of the merged video in terms of overlay error and visualization error. The proposed system therefore produces a merged video free of artefacts around the expert surgeon's hand.
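The abstract describes trimap and alpha matting only as a pre-processing stage of EMLMV; the sketch below illustrates the general idea of trimap-driven alpha compositing of a remote-hand layer over the local augmented frame. The constant alpha in the unknown band is a placeholder assumption; a real matting estimate, and the mean-value cloning itself, are not shown.

```python
import numpy as np

def composite_with_trimap(fg, bg, trimap, alpha_unknown=0.5):
    """Alpha-blend a remote-hand layer (fg) over the local augmented
    frame (bg) using a trimap: 255 = definite hand, 0 = definite
    background, any other value = unknown boundary band.  A real matting
    step would estimate per-pixel alpha in the unknown band; here a
    constant placeholder value is used instead."""
    alpha = np.where(trimap == 255, 1.0,
            np.where(trimap == 0, 0.0, alpha_unknown))[..., None]
    blended = alpha * fg.astype(np.float64) + (1.0 - alpha) * bg.astype(np.float64)
    return blended.astype(np.uint8)
```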