Results 1 - 9 of 9
1.
Article in English | MEDLINE | ID: mdl-38777946

ABSTRACT

PURPOSE: Calibration of an optical see-through head-mounted display is critical for augmented reality-based surgical navigation. While conventional methods have advanced, calibration errors remain significant. Moreover, prior research has focused primarily on calibration accuracy and procedure, neglecting the impact on the overall surgical navigation system. Consequently, these enhancements do not necessarily translate to accurate augmented reality in the optical see-through head-mounted display because of systemic errors, including those in calibration. METHOD: This study introduces a simulated augmented reality-based calibration to address these issues. By replicating the augmented reality that appears in the optical see-through head-mounted display, the method achieves a calibration that compensates for augmented reality errors, thereby reducing them. The process involves two distinct calibration approaches, followed by adjustment of the transformation matrix to minimize displacement in the simulated augmented reality. RESULTS: The efficacy of this method was assessed through two accuracy evaluations: registration accuracy and augmented reality accuracy. Experimental results showed an average translational error of 2.14 mm and a rotational error of 1.06° across axes in both approaches. Additionally, augmented reality accuracy, measured by the ratio of overlay regions, increased to approximately 95%. These findings confirm the enhancement of both calibration and augmented reality accuracy with the proposed method. CONCLUSION: The study presents a calibration method using simulated augmented reality, which minimizes augmented reality errors. This approach, requiring minimal manual intervention, offers a more robust and precise calibration technique for augmented reality applications in surgical navigation.
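The final step, adjusting the transformation matrix to minimize displacement in the simulated augmented reality, can be sketched as a least-squares rigid alignment (the Kabsch algorithm) between simulated AR points and their reference positions. This is a generic illustration under that interpretation, not the authors' implementation, and all point data are synthetic.

```python
import numpy as np

def rigid_correction(ar_pts, ref_pts):
    """Least-squares rigid transform (Kabsch) aligning simulated AR
    points to their reference positions; returns R (3x3) and t (3,)."""
    ca, cr = ar_pts.mean(0), ref_pts.mean(0)
    H = (ar_pts - ca).T @ (ref_pts - cr)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cr - R @ ca
    return R, t

# Synthetic check: recover a known overlay displacement.
rng = np.random.default_rng(0)
ref = rng.normal(size=(20, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
ar = ref @ Rz.T + np.array([2.0, 0.5, -1.0])      # displaced AR overlay
R, t = rigid_correction(ar, ref)
residual = np.linalg.norm(ar @ R.T + t - ref)
```

With noise-free synthetic data, the correction drives the residual displacement to numerical precision; real calibration data would leave a nonzero residual.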

2.
Comput Methods Programs Biomed ; 238: 107618, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37247472

ABSTRACT

BACKGROUND AND OBJECTIVES: An augmented reality (AR)-based surgical guidance system is often used with high-magnification zoom lens systems such as a surgical microscope, particularly in neurology or otolaryngology. To superimpose the internal structures of relevant organs on the microscopy image, an accurate calibration process to obtain the camera intrinsic and hand-eye parameters of the microscope is essential. However, conventional calibration methods are unsuitable for surgical microscopes because of their narrow depth of focus at high magnifications. To realize AR-based surgical guidance with a high-magnification surgical microscope, we herein propose a new calibration method that is applicable to the highest magnification levels as well as low magnifications. METHODS: The key idea of the proposed method is to find the relationship between the focal length and the hand-eye parameters, which remains constant regardless of the magnification level. Based on this, even if the magnification changes arbitrarily during surgery, the intrinsic and hand-eye parameters are recalculated quickly and accurately from one or two pictures of the pattern. We also developed a dedicated calibration tool with a prism to take focused pattern images without interfering with the surgery. RESULTS: The proposed calibration method ensured an AR error of < 1 mm for all magnification levels. In addition, the variation of the focal length was within 1% regardless of the magnification level, whereas the corresponding variation with the conventional calibration method exceeded 20% at high magnification levels. CONCLUSIONS: The comparative study showed that the proposed method has outstanding accuracy and reproducibility for a high-magnification surgical microscope. The proposed calibration method is applicable to various endoscope or microscope systems with zoom lenses.
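The abstract does not give the exact form of the magnification-invariant relationship between focal length and the hand-eye parameters. As a toy stand-in, assume one hand-eye translation component t_z varies affinely with focal length f; two or more calibrations then fix the relation, after which t_z at any new magnification follows from f alone. All numbers below are illustrative, not from the paper.

```python
import numpy as np

# Hypothetical calibration data: focal length f (mm) and hand-eye
# z-translation t_z (mm) measured at a few magnification levels.
f  = np.array([ 50.0, 100.0, 200.0, 400.0])
tz = np.array([310.0, 335.0, 385.0, 485.0])

# Fit the assumed affine relation t_z ≈ a*f + b once, offline.
a, b = np.polyfit(f, tz, 1)

# During surgery, a new focal length estimate alone yields t_z.
tz_new = a * 300.0 + b        # predicted hand-eye t_z at f = 300 mm
```

The design point is that the fitted relation, not each individual hand-eye calibration, is reused when the magnification changes.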


Subjects
Microscopy; Calibration; Reproducibility of Results
3.
Comput Methods Programs Biomed ; 228: 107239, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36410266

ABSTRACT

BACKGROUND AND OBJECTIVE: Image-guided robotic surgery for fracture reduction is a medical procedure in which surgeons control a surgical robot to align the fractured bones by using a navigation system that shows the rotation and distance of bone movement. In such robotic surgeries, it is necessary to estimate the relationship between the robot and the patient (bone), a task known as robot-patient registration, to realize the navigation. Through this registration, a fracture state in the real world can be simulated in the virtual space of the navigation system. METHODS: This paper proposes an approach to realize robot-patient registration for an optical-tracker-free robotic fracture-reduction system. Instead of an optical tracker, which is a three-dimensional position localizer, X-ray images are used to realize the robot-patient registration, combining the relationships of both the robot and the patient with respect to the C-arm. The proposed method consists of two registration steps: an initial registration followed by a refined registration that adopts particle swarm optimization with the minimum cross-reprojection error based on bidirectional X-ray images. To address unrecognizable features due to interference between the robot and the bone, we also developed attachable robot features. The allocated robot features could be clearly extracted from the X-ray images, and precise registration could be realized through the particle swarm optimization. RESULTS: The proposed method was evaluated in phantom and ex-vivo experiments involving a caprine cadaver. For the phantom experiments, the average translational and rotational errors were 1.88 mm and 2.45°, respectively, and the corresponding errors in the ex-vivo experiments were 2.64 mm and 3.32°. The results demonstrated the effectiveness of the proposed robot-patient registration.
CONCLUSIONS: The proposed method enables estimation of the three-dimensional relationship between fractured bones in the real world by using only two-dimensional images, and the relationship is accurately simulated in virtual reality for the navigation. Therefore, a reduction procedure for successful treatment of bone fractures in image-guided robotic surgery can be expected with the aid of the proposed registration method.
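The refined registration adopts particle swarm optimization of a reprojection-style cost. The sketch below runs a plain PSO over a toy three-parameter pose (one in-plane rotation and a 2-D translation) with an orthographic stand-in for X-ray projection; the projection model, swarm settings, and data are simplified assumptions rather than the paper's C-arm geometry.

```python
import numpy as np

rng = np.random.default_rng(1)

def project(points, pose):
    """Toy 'X-ray' projection: rotate about z by pose[0], translate in-plane
    by pose[1:3], then drop the depth axis (orthographic stand-in)."""
    c, s = np.cos(pose[0]), np.sin(pose[0])
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    return (points @ R.T + np.r_[pose[1:3], 0.0])[:, :2]

features = rng.normal(size=(8, 3))            # attachable robot features
true_pose = np.array([0.4, 1.5, -0.8])
observed = project(features, true_pose)       # observed feature positions

def cost(pose):
    r = project(features, pose) - observed    # reprojection residuals
    return np.sum(r * r)

# Plain particle swarm optimization over the 3 pose parameters.
n, w, c1, c2 = 40, 0.7, 1.5, 1.5
x = rng.uniform(-2, 2, size=(n, 3))           # particle positions
v = np.zeros_like(x)
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
g = pbest[pcost.argmin()].copy()              # global best
for _ in range(200):
    r1, r2 = rng.random((n, 3)), rng.random((n, 3))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
    x = x + v
    c_now = np.array([cost(p) for p in x])
    better = c_now < pcost
    pbest[better], pcost[better] = x[better], c_now[better]
    g = pbest[pcost.argmin()].copy()
```

The swarm converges to a pose whose reprojection cost is near zero on this noise-free toy problem; the paper's cost additionally sums over two X-ray views (the cross-reprojection error).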


Subjects
Robotic Surgical Procedures; Robotics; Animals; Humans; Goats; Fracture Fixation
4.
Comput Assist Surg (Abingdon) ; 27(1): 50-62, 2022 12.
Article in English | MEDLINE | ID: mdl-36510708

ABSTRACT

To develop a patient-specific 3D reconstruction of a femur modeled using a statistical shape model (SSM) and X-ray images, it is assumed that the target shape is not outside the range of variations allowed by the SSM built from a training dataset. We propose the shape-partitioned statistical shape model (SPSSM) to cover significant variations in the target shape. This model can divide a shape into several segments of anatomical interest. For the SPSSM, we break up the eigenvector matrix into the corresponding representative matrices by preserving the relevant rows of the original matrix, rather than segmenting the shape and building an independent SSM for each segment. To quantify the reconstruction error of the proposed method, we generated two groups of deformation models of the femur that cannot be easily represented by the conventional SSM. One group of femurs had an anteversion angle deformation, and the other group had two different scales of the femoral head. Each experiment was performed using the leave-one-out method for twelve femurs. When the femoral head was rotated by 30°, the average reconstruction error of the conventional SSM was 5.34 mm, which was reduced to 3.82 mm for the proposed SPSSM. When the femoral head size was decreased by 20%, the average reconstruction error of the SSM was 4.70 mm, which was reduced to 3.56 mm for the SPSSM. When the femoral head size was increased by 20%, the average reconstruction error of the SSM was 4.28 mm, which was reduced to 3.10 mm for the SPSSM. The experimental results for the two groups of deformation models showed that the proposed SPSSM outperformed the conventional SSM.
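The row-partitioning idea can be sketched in a few lines: build an ordinary SSM by PCA over flattened training shapes, then keep only the rows of the eigenvector matrix belonging to one segment and fit that segment's coefficients independently. The training shapes, segment indices, and dimensions below are arbitrary stand-ins, not femur data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training set: 10 shapes, 30 points each, flattened to 90-vectors.
shapes = rng.normal(size=(10, 90))
mean = shapes.mean(axis=0)
# SSM modes via SVD of the centered data (rows = training shapes).
_, _, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
P = Vt.T                       # 90 x 10 eigenvector matrix (columns = modes)

# Shape-partitioned SSM: keep only the rows of P belonging to one segment
# (say the first 10 points, i.e. coordinate indices 0-29), instead of
# segmenting the shape and building a separate SSM for that segment.
seg_rows = np.arange(30)       # rows of the (hypothetical) femoral-head segment
P_seg = P[seg_rows, :]         # representative matrix for the segment

# Segment coefficients fitted independently of the rest of the shape.
target = rng.normal(size=90)
b_seg, *_ = np.linalg.lstsq(P_seg, (target - mean)[seg_rows], rcond=None)
recon_seg = mean[seg_rows] + P_seg @ b_seg
```

Because the segment's coefficients are fit against its own rows only, a deformation confined to that segment no longer has to be explained by whole-shape modes.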


Subjects
Femur; Imaging, Three-Dimensional; Humans; Imaging, Three-Dimensional/methods; X-Rays; Femur/diagnostic imaging; Femur/surgery; Models, Statistical
5.
J Digit Imaging ; 34(5): 1249-1263, 2021 10.
Article in English | MEDLINE | ID: mdl-34505959

ABSTRACT

The C-arm X-ray system is a common intraoperative imaging modality used to observe the state of a fractured bone in orthopedic surgery. Using the C-arm, the bone fragments are aligned during surgery, and their lengths and angles with respect to the entire bone are measured to verify the fracture reduction. Since the field of view of the C-arm is too narrow to visualize the entire bone, a panoramic X-ray image is created by stitching multiple images together. To achieve X-ray image stitching with feature detection, the extraction of accurate and densely matched features within the overlap region between images is imperative. However, since the features are highly affected by the properties and sizes of the overlap regions in consecutive X-ray images, the accuracy and density of matched features cannot be guaranteed. To solve this problem, heterogeneous stitching of X-ray images is proposed, in which the stitching is performed according to the overlap region, based on homographic evaluation. To acquire sufficiently matched features within the limited overlap region, integrated feature detection was used to estimate a homography. The homography was then evaluated to confirm its accuracy. When the estimated homography was incorrect, local regions around the matched features were derived from the integrated feature detection and substituted to re-estimate the homography. Successful X-ray image stitching of the C-arm was achieved by estimating the optimal homography for each image. Based on phantom and ex-vivo experiments, we confirmed that the proposed method produced panoramic X-ray images more robustly than conventional methods.
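Homography estimation from matched features and the subsequent homographic evaluation can be sketched with a plain DLT (direct linear transform) followed by a mean reprojection-error check. The acceptance threshold and all correspondences below are synthetic stand-ins, not outputs of the paper's integrated feature detector.

```python
import numpy as np

def estimate_homography(src, dst):
    """Direct linear transform: homography H with dst ~ H @ src
    (points given as N x 2 arrays, N >= 4)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)       # null vector = flattened homography
    return H / H[2, 2]

def reprojection_error(H, src, dst):
    p = np.c_[src, np.ones(len(src))] @ H.T
    return np.linalg.norm(p[:, :2] / p[:, 2:] - dst, axis=1).mean()

# Synthetic overlap-region correspondences under a known homography.
H_true = np.array([[1.1, 0.02, 5.0], [-0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.random.default_rng(3).uniform(0, 200, size=(12, 2))
h = np.c_[src, np.ones(12)] @ H_true.T
dst = h[:, :2] / h[:, 2:]

H = estimate_homography(src, dst)
err = reprojection_error(H, src, dst)
# Homographic evaluation: accept only if the error is small; otherwise
# re-detect features in local regions and re-estimate (not shown).
accepted = err < 1.0               # threshold in pixels, illustrative
```

The evaluation step is what makes the stitching "heterogeneous": each image pair gets whichever estimate passes the check.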


Subjects
Algorithms; Humans; Phantoms, Imaging; X-Rays
6.
IEEE Trans Biomed Eng ; 67(9): 2669-2682, 2020 09.
Article in English | MEDLINE | ID: mdl-31976878

ABSTRACT

OBJECTIVE: Augmented reality (AR) navigation using a position sensor in endoscopic surgeries relies on the quality of patient-image registration and hand-eye calibration. Conventional methods collect the necessary data to compute the two output transformation matrices separately. However, the AR display setting during surgery generally differs from that during preoperative processes. Although conventional methods can identify optimal solutions under initial conditions, AR display errors are unavoidable during surgery owing to the inherent computational complexity of AR processes, such as error accumulation over successive matrix multiplications, and tracking errors of the position sensor. METHODS: We propose the simultaneous optimization of patient-image registration and hand-eye calibration in an AR environment before surgery. The relationship between the endoscope and a virtual object to be overlaid is first calculated using an endoscopic image, which also functions as a reference during optimization. After including the tracking information from the position sensor, patient-image registration and hand-eye calibration are optimized in a least-squares sense. RESULTS: Experiments with synthetic data verify that the proposed method is less sensitive to computation and tracking errors. A phantom experiment with a position sensor was also conducted. The accuracy of the proposed method is significantly higher than that of the conventional method. CONCLUSION: The AR accuracy of the proposed method is compared with those of conventional methods, and the superiority of the proposed method is verified. SIGNIFICANCE: This study demonstrates that the proposed method exhibits substantial potential for improving AR navigation accuracy.
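A heavily reduced sketch of the simultaneous-optimization idea: if both unknowns are collapsed to translations (x for the hand-eye offset, y for the patient-image registration), each tracked pose R_i contributes the linear constraint obs_i − R_i p_i = x + R_i y, and both unknowns are solved in one least-squares system instead of two separate calibrations. The reduction to translations, the poses, and the points are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(4)

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Ground-truth unknowns: hand-eye offset x, patient-image registration y.
x_true = np.array([1.0, -2.0, 0.5])
y_true = np.array([0.3, 0.7, -1.2])
# Tracked poses must rotate about more than one axis so x and y separate.
poses = [np.eye(3), rot_z(0.7), rot_x(0.9), rot_z(1.8), rot_x(2.1) @ rot_z(0.5)]
pts = rng.normal(size=(5, 3))
obs = np.array([R @ (p + y_true) + x_true for R, p in zip(poses, pts)])

# Simultaneous least-squares: stack [I | R_i] [x; y] = obs_i - R_i p_i.
A = np.vstack([np.hstack([np.eye(3), R]) for R in poses])
b = np.concatenate([o - R @ p for o, R, p in zip(obs, poses, pts)])
sol, *_ = np.linalg.lstsq(A, b, rcond=None)
x_hat, y_hat = sol[:3], sol[3:]
```

Solving both in one system is what lets errors in one transform be compensated by the other, which is the point of optimizing them jointly against a common AR reference.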


Subjects
Augmented Reality; Surgery, Computer-Assisted; Calibration; Endoscopes; Humans; Imaging, Three-Dimensional; Phantoms, Imaging
7.
Int J Comput Assist Radiol Surg ; 13(10): 1671-1682, 2018 Oct.
Article in English | MEDLINE | ID: mdl-30014167

ABSTRACT

PURPOSE: For augmented reality surgical navigation based on C-arm imaging, accurate overlay of the augmented reality onto the X-ray image is imperative. However, overlay displacement is generated when a conventional pinhole model, which describes the geometry of a normal camera, is adopted for C-arm calibration. Thus, a modified model for C-arm calibration is proposed to reduce this displacement, which is essential for accurate surgical navigation. METHOD: Based on an analysis of the displacement pattern generated for three-dimensional objects, we assumed that the displacement originated from movement of the X-ray source position with depth. In the proposed method, the X-ray source movement was modeled as variable intrinsic parameters and represented in the pinhole model by replacing the point source with a planar source. RESULTS: The improvement, namely the reduced displacement, was verified by comparing the overlay accuracy for augmented reality surgical navigation between the conventional and proposed methods. The proposed method achieved a more accurate overlay on the X-ray image in spatial position as well as in depth of the object volume. CONCLUSION: We validated that the intrinsic parameters that describe the source position are depth-dependent for a three-dimensional object and showed that the displacement can be reduced and made independent of depth by using the proposed planar source model.
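One way to picture the "variable intrinsic parameters" claim: if the effective principal point drifts with depth, a single fixed pinhole model leaves a depth-dependent residual, while a model whose principal point is affine in depth fits exactly. The drift coefficients and point distribution below are illustrative, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy forward model: u-coordinates whose principal point drifts with
# depth z, a stand-in for an X-ray source that moves with depth.
f, c0, c1 = 1000.0, 5.0, 0.02
pts = np.c_[rng.uniform(-50, 50, 30), rng.uniform(-50, 50, 30),
            rng.uniform(400, 900, 30)]
u = f * pts[:, 0] / pts[:, 2] + (c0 + c1 * pts[:, 2])   # observed u-coords

# Fixed pinhole model: one constant principal point for all depths.
cx_fixed = np.mean(u - f * pts[:, 0] / pts[:, 2])
err_fixed = np.abs(u - (f * pts[:, 0] / pts[:, 2] + cx_fixed))

# Depth-dependent ("planar source") model: principal point affine in z.
A = np.c_[np.ones(30), pts[:, 2]]
coef, *_ = np.linalg.lstsq(A, u - f * pts[:, 0] / pts[:, 2], rcond=None)
err_depth = np.abs(u - (f * pts[:, 0] / pts[:, 2] + A @ coef))
```

The fixed model's residual grows with distance from the mean depth, while the depth-dependent model absorbs the drift entirely, mirroring the displacement behavior the abstract describes.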


Subjects
Imaging, Three-Dimensional/instrumentation; Imaging, Three-Dimensional/methods; Surgery, Computer-Assisted/methods; Tomography, X-Ray Computed/instrumentation; Tomography, X-Ray Computed/methods; Algorithms; Calibration; Humans; Models, Statistical; Phantoms, Imaging; Radiography/instrumentation; Radiography/methods; Reproducibility of Results
8.
Biomed Eng Online ; 17(1): 64, 2018 May 24.
Article in English | MEDLINE | ID: mdl-29793498

ABSTRACT

BACKGROUND: In longitudinal electroencephalography (EEG) studies, repeatable electrode positioning is essential for reliable EEG assessment. Conventional methods use anatomical landmarks as fiducial locations for electrode placement. Since the landmarks are identified manually, the EEG assessment is inevitably unreliable because of individual variations among subjects and examiners. To overcome this unreliability, an augmented reality (AR) visualization-based electrode guidance system was proposed. METHODS: The proposed electrode guidance system is based on AR visualization to replace manual electrode positioning. After scanning and registration of the subject's facial surface by an RGB-D camera, the AR view of the initial electrode positions, serving as reference positions, is overlaid on the current electrode positions in real time. Thus, it can guide the placement of subsequent electrodes with high repeatability. RESULTS: Experimental results with a phantom show that the repeatability of electrode positioning was improved compared with that of the conventional 10-20 positioning system. CONCLUSION: The proposed AR guidance system improves electrode positioning performance with a cost-effective setup that uses only an RGB-D camera. This system can be used as an alternative to the international 10-20 system.
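At its core, the real-time guidance step compares the currently tracked electrode positions against the reference positions from the initial session. A minimal sketch of that comparison, with hypothetical positions and an illustrative 2 mm tolerance:

```python
import numpy as np

# Reference (initial-session) electrode positions on the registered head
# surface, and positions currently tracked by the RGB-D camera (mm,
# illustrative values only).
reference = np.array([[0.0, 80.0, 60.0],
                      [35.0, 60.0, 70.0],
                      [-35.0, 60.0, 70.0]])
current = reference + np.array([[0.5, -0.3, 0.2],
                                [4.0, 1.0, -2.0],
                                [0.1, 0.0, 0.4]])

tol = 2.0                                     # placement tolerance in mm
d = np.linalg.norm(current - reference, axis=1)
ok = d < tol                                  # per-electrode guidance flag
```

An AR overlay would color each electrode by its flag, prompting the examiner to adjust only the out-of-tolerance ones.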


Subjects
Electroencephalography/instrumentation; Virtual Reality; Electrodes; Head; Humans
9.
IEEE Trans Image Process ; 22(5): 1859-72, 2013 May.
Article in English | MEDLINE | ID: mdl-23314777

ABSTRACT

This paper presents a novel way to reduce noise introduced or exacerbated by image enhancement methods, in particular, though not exclusively, algorithms based on the random spray sampling technique. Owing to the nature of sprays, output images of spray-based methods tend to exhibit noise with an unknown statistical distribution. To avoid inappropriate assumptions about the statistical characteristics of the noise, a different assumption is made: the non-enhanced image is considered to be either free of noise or affected by non-perceivable levels of noise. Taking advantage of the higher sensitivity of the human visual system to changes in brightness, the analysis can be limited to the luma channel of both the non-enhanced and the enhanced image. Also, given the importance of directional content in human vision, the analysis is performed through the dual-tree complex wavelet transform (DTCWT). Unlike the discrete wavelet transform, the DTCWT allows for the distinction of data directionality in the transform space. For each level of the transform, the standard deviation of the non-enhanced image coefficients is computed across the six orientations of the DTCWT and then normalized. The result is a map of the directional structures present in the non-enhanced image. This map is then used to shrink the coefficients of the enhanced image. The shrunk coefficients and the coefficients from the non-enhanced image are then mixed according to data directionality. Finally, a noise-reduced version of the enhanced image is computed via the inverse transforms. A thorough numerical analysis of the results has been performed to confirm the validity of the proposed approach.
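The shrinkage-map step can be sketched without a full dual-tree complex wavelet implementation by operating on stand-in coefficient arrays for the six orientations of one transform level. The mixing rule below is a simplification of the paper's orientation-dependent mixing, and all coefficients are synthetic.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-ins for one level of wavelet coefficients across six orientations
# (shape: 6 x H x W); a real implementation would take these from a
# dual-tree complex wavelet transform of the luma channel.
clean = rng.normal(size=(6, 16, 16))                  # non-enhanced image
noisy = clean + 0.5 * rng.normal(size=(6, 16, 16))    # enhanced image

# Map of directional structure: per-location standard deviation of the
# non-enhanced coefficients across orientations, normalized to [0, 1].
std = clean.std(axis=0)
m = (std - std.min()) / (std.max() - std.min())

# Shrink enhanced coefficients where the non-enhanced image shows little
# directional structure, then mix with the non-enhanced coefficients.
shrunk = noisy * m                    # m broadcasts over the 6 orientations
mixed = m * shrunk + (1.0 - m) * clean
```

Locations with strong directional structure (m near 1) keep the enhanced coefficients almost intact, while flat regions (m near 0) fall back to the trusted non-enhanced coefficients, which is the mechanism the abstract describes.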


Subjects
Image Processing, Computer-Assisted/methods; Wavelet Analysis; Algorithms; Animals; Humans