Results 1 - 2 of 2
1.
J Cancer Res Ther; 20(4): 1338-1343, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39206996

ABSTRACT

OBJECTIVES: This study aimed to evaluate the accuracy of percutaneous computed tomography (CT)-guided puncture based on machine vision and augmented reality in a phantom.

MATERIALS AND METHODS: The surgical space coordinate system was established, and accurate registration was ensured using a hierarchical optimization framework. Machine vision tracking and augmented reality display technologies were used for puncture navigation. CT was performed on a phantom, and puncture paths of three different lengths were planned from the surface of the phantom to a metal ball. Puncture accuracy was evaluated by measuring the target positioning error (TPE), lateral error (LE), angular error (AE), and first success rate (FSR) on the resulting CT images.

RESULTS: A highly qualified attending interventional physician performed a total of 30 punctures using puncture navigation. For the short distance (4.5-5.5 cm), the TPE, LE, AE, and FSR were 1.90 ± 0.62 mm, 1.23 ± 0.70 mm, 1.39 ± 0.86°, and 60%, respectively. For the medium distance (9.5-10.5 cm), they were 2.35 ± 0.95 mm, 2.00 ± 1.07 mm, 1.20 ± 0.62°, and 40%, respectively. For the long distance (14.5-15.5 cm), they were 2.81 ± 1.17 mm, 2.33 ± 1.34 mm, 0.99 ± 0.55°, and 30%, respectively.

CONCLUSION: The augmented reality and machine vision-based CT-guided puncture navigation system allows for precise punctures in a phantom. Further studies are needed to explore its clinical applicability.
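The abstract does not spell out how the three geometric metrics are computed, but they follow standard definitions. The sketch below illustrates them, assuming the planned entry point, planned target, and measured needle tip are 3-D coordinates (in mm) in the CT frame; the function name `puncture_errors` is illustrative, and measuring AE between the planned axis and the entry-to-tip direction is an assumption, not the paper's stated protocol.

```python
import numpy as np

def puncture_errors(entry, target, tip):
    """Illustrative computation of TPE, LE, and AE (not from the paper).

    entry  : planned skin-entry point, 3-D, mm, CT coordinates
    target : planned target point, 3-D, mm
    tip    : measured needle-tip position on post-puncture CT, 3-D, mm
    """
    entry, target, tip = map(np.asarray, (entry, target, tip))

    # Target positioning error: Euclidean distance from tip to planned target.
    tpe = np.linalg.norm(tip - target)

    # Lateral error: perpendicular distance from the tip to the planned path.
    axis = (target - entry) / np.linalg.norm(target - entry)
    offset = tip - entry
    le = np.linalg.norm(offset - np.dot(offset, axis) * axis)

    # Angular error (assumption: angle between the planned axis and the
    # actual entry-to-tip direction), in degrees.
    actual = offset / np.linalg.norm(offset)
    ae = np.degrees(np.arccos(np.clip(np.dot(axis, actual), -1.0, 1.0)))
    return tpe, le, ae

# Example: a 5 cm planned path whose tip lands 2 mm off-axis and 1 mm short.
tpe, le, ae = puncture_errors(entry=(0, 0, 0), target=(0, 0, 50), tip=(2, 0, 49))
print(f"TPE={tpe:.2f} mm, LE={le:.2f} mm, AE={ae:.2f} deg")
# -> TPE=2.24 mm, LE=2.00 mm, AE=2.34 deg
```

Under these definitions, TPE bounds the overall miss distance, LE isolates the off-axis component, and AE captures trajectory deviation independent of depth, which is consistent with AE shrinking as path length grows in the reported results.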


Subjects
Augmented Reality; Phantoms, Imaging; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Humans; Punctures/methods; Surgery, Computer-Assisted/methods
2.
IEEE Trans Image Process; 30: 5835-5847, 2021.
Article in English | MEDLINE | ID: mdl-34138709

ABSTRACT

The Coarse-To-Fine (CTF) matching scheme has been widely applied to reduce computational complexity and matching ambiguity in stereo matching and optical flow tasks: image pairs are converted into multi-scale representations and matching proceeds from coarse to fine levels. Despite its efficiency, the scheme has several weaknesses, such as a tendency to blur edges and to miss small structures like thin bars and holes. We find that pixels on small structures and edges are often assigned wrong disparity/flow values during the upsampling step of the CTF framework, which introduces errors at the fine levels and causes these weaknesses. We observe that these wrong values can be avoided by selecting the best-matched value within each pixel's neighborhood, which inspires us to propose a novel differentiable Neighbor-Search Upsampling (NSU) module. The NSU module first estimates matching scores and then selects the best-matched disparity/flow for each pixel from its neighbors. By exploiting information from the finer level while upsampling the disparity/flow, it effectively preserves fine structural details. The module is a drop-in replacement for the naive upsampling in the CTF matching framework and allows the neural network to be trained end-to-end. By integrating the proposed NSU module into a baseline CTF matching network, we design our Detail Preserving Coarse-To-Fine (DPCTF) matching network. Comprehensive experiments demonstrate that DPCTF improves performance on both stereo matching and optical flow tasks. Notably, it achieves new state-of-the-art results on both: it outperforms the competitive baseline Bi3D by 28.8% in EPE on the FlyingThings3D stereo dataset (from 0.73 to 0.52) and ranks first on the KITTI flow 2012 benchmark. The code is available at https://github.com/Deng-Y/DPCTF.
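The abstract only specifies the NSU module's high-level behavior (score candidates, pick the best-matched neighbor, stay differentiable). The PyTorch sketch below shows one plausible realization of that idea; the class name, the `score_head`, the 3 x 3 search window, and the softmax soft-selection are all illustrative assumptions, not the authors' exact design, and a 1-channel disparity map is used (flow would carry two channels analogously).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborSearchUpsampling(nn.Module):
    """Sketch of a differentiable neighbor-search upsampling step.

    For each fine-level pixel, candidate disparities are gathered from a
    k x k neighborhood of the naively upsampled coarse disparity; a small
    score head rates each candidate against fine-level features, and a
    softmax over the scores softly selects the best-matched value.
    """
    def __init__(self, feat_channels, k=3):
        super().__init__()
        self.k = k
        # Illustrative score head: fine features + one candidate -> one score.
        self.score_head = nn.Conv2d(feat_channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, coarse_disp, fine_feat):
        # coarse_disp: (B, 1, H/2, W/2); fine_feat: (B, C, H, W)
        B, _, H, W = fine_feat.shape
        # Naive upsampling; disparity scales with resolution (assumed 2x here).
        up = F.interpolate(coarse_disp, size=(H, W), mode="nearest") * 2.0
        # Gather the k*k neighboring candidates for every fine-level pixel.
        cands = F.unfold(up, kernel_size=self.k, padding=self.k // 2)
        cands = cands.view(B, self.k * self.k, H, W)
        # Score each candidate against the fine-level features.
        scores = torch.stack(
            [self.score_head(torch.cat([fine_feat, cands[:, i:i + 1]], dim=1)).squeeze(1)
             for i in range(self.k * self.k)], dim=1)
        # Soft selection keeps the module differentiable end-to-end.
        weights = scores.softmax(dim=1)
        return (weights * cands).sum(dim=1, keepdim=True)  # (B, 1, H, W)

# Usage with dummy tensors: upsample a 32x64 disparity map to 64x128.
nsu = NeighborSearchUpsampling(feat_channels=32)
disp = nsu(torch.rand(1, 1, 32, 64), torch.rand(1, 32, 64, 128))
print(disp.shape)  # torch.Size([1, 1, 64, 128])
```

A hard argmax over the candidates would match "selects the best-matched value" most literally, but it is not differentiable; the softmax-weighted sum used here is a common relaxation consistent with the abstract's requirement of end-to-end training.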
