Results 1 - 2 of 2
1.
J Craniofac Surg; 24(5): 1573-6, 2013 Sep.
Article in English | MEDLINE | ID: mdl-24036729

ABSTRACT

PURPOSE: The aim of this study was to demonstrate the feasibility of building a craniofacial virtual reality model by image fusion of 3-dimensional (3D) CT models and a 3dMD stereophotogrammetric facial surface.
METHODS: A CT scan and stereophotography were performed. The 3D CT models were reconstructed with Materialise Mimics software, and the stereophotogrammetric facial surface was reconstructed with 3dMD patient software. All 3D CT models were exported in Stereolithography (STL) file format, and the 3dMD model was exported in Virtual Reality Modeling Language (VRML) file format. Image registration and fusion were performed in Mimics. A genetic algorithm was used for precise image fusion alignment with minimum error. The 3D CT models and the 3dMD stereophotogrammetric facial surface were finally merged into a single file and displayed using Deep Exploration software. Errors between the CT soft-tissue model and the 3dMD facial surface were also analyzed.
RESULTS: The virtual model based on CT-3dMD image fusion clearly showed the photorealistic face and bone structures. Image registration errors in the virtual face were mainly located in the bilateral cheeks and eyeballs, where they exceeded 1.5 mm. However, the fusion of the whole CT and 3dMD point cloud sets was acceptable, with a minimum error of less than 1 mm.
CONCLUSIONS: The ease of use and high reliability of CT-3dMD image fusion make the 3D virtual head an accurate, realistic, and widely applicable tool of great benefit to virtual face modeling.
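The registration step described above, aligning the 3dMD facial surface to the CT soft-tissue model and reporting the residual error, can be sketched as a point-cloud optimization. This is a minimal illustration, not the authors' Mimics workflow: scipy's differential evolution (an evolutionary optimizer) stands in for the genetic algorithm mentioned in the abstract, the error metric is a mean nearest-neighbour distance, and the surfaces are assumed to be plain N x 3 NumPy arrays of mesh vertices.

```python
"""Sketch: rigid alignment of a face surface to a CT soft-tissue surface,
both treated as point clouds, with an evolutionary optimizer."""
import numpy as np
from scipy.optimize import differential_evolution
from scipy.spatial import cKDTree


def rigid_transform(points, rx, ry, rz, tx, ty, tz):
    """Rotate (XYZ Euler angles, radians) then translate (millimetres)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return points @ (Rz @ Ry @ Rx).T + np.array([tx, ty, tz])


def register(ct_surface, face_surface):
    """Search the 6-DOF rigid transform that minimizes the fusion error,
    i.e. the mean distance from each moved face point to its nearest
    CT soft-tissue point."""
    tree = cKDTree(ct_surface)

    def fusion_error(params):
        moved = rigid_transform(face_surface, *params)
        return tree.query(moved)[0].mean()

    bounds = [(-np.pi, np.pi)] * 3 + [(-50.0, 50.0)] * 3  # radians, mm
    result = differential_evolution(fusion_error, bounds,
                                    seed=0, maxiter=200, tol=1e-6)
    return result.x, result.fun


if __name__ == "__main__":
    # Hypothetical data; in practice the vertices would come from the
    # exported STL (CT) and VRML (3dMD) meshes.
    rng = np.random.default_rng(0)
    ct = rng.normal(size=(300, 3)) * 40.0
    face = rigid_transform(ct, 0.05, -0.03, 0.02, 4.0, -2.0, 1.0)
    params, error_mm = register(ct, face)
    print(f"residual fusion error: {error_mm:.3f} mm")
```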


Subjects
Computer Simulation; Craniofacial Abnormalities/diagnosis; Imaging, Three-Dimensional/methods; Multimodal Imaging/methods; Photogrammetry; Tomography, X-Ray Computed; User-Computer Interface; Feasibility Studies; Female; Humans; Image Interpretation, Computer-Assisted/methods; Reproducibility of Results; Sensitivity and Specificity; Software; Young Adult
2.
Int J Comput Assist Radiol Surg; 18(5): 909-919, 2023 May.
Article in English | MEDLINE | ID: mdl-36418763

ABSTRACT

BACKGROUND: Virtual reality (VR) technology is an ideal alternative for operative training and surgical teaching. However, virtual surgery is usually carried out using a mouse or data gloves, which limits the authenticity of the virtual operation. A virtual surgery system with gesture recognition and real-time image feedback was explored to achieve more authentic immersion.
METHOD: An efficient, real-time, high-fidelity gesture recognition approach was explored. Recognition of the hand contour, palm, and fingertips was first realized through hand data extraction. A Support Vector Machine (SVM) classifier was then used to classify and recognize common gestures after feature extraction. The collision detection algorithm used an Axis-Aligned Bounding Box (AABB) binary tree to build the hand and scalpel collision models. In addition, the nominal radius theorem (NRT) and the separating axis theorem (SAT) were applied to speed up collision detection. Based on the maxillofacial virtual surgical system we proposed previously, the feasibility of integrating these technologies into the prototype system was evaluated.
RESULTS: Ten kinds of single static gestures were designed to test the gesture recognition algorithm. Gesture recognition accuracy was more than 80%, and over 90% for some gestures. With the NRT and SAT method, the generation speed of the collision detection model met the software requirements. The response time of gesture recognition was less than 40 ms; that is, the hand gesture recognition system ran at more than 25 Hz. With hand gesture recognition integrated, typical virtual surgical procedures, including grabbing a scalpel, puncture site selection, virtual puncture, and incision, were carried out with real-time image feedback.
CONCLUSION: Building on the previous maxillofacial virtual surgical system, which consisted of VR, triangular-mesh collision detection, and maxillofacial biomechanical model construction, integrating hand gesture recognition is a feasible way to improve the interactivity and immersion of virtual surgical training.
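The feature-extraction plus SVM step described above can be sketched as a standard multiclass classification pipeline. This is a minimal illustration under assumptions: the eight-dimensional feature vectors (standing in for contour, palm, and fingertip features) and the ten gesture labels are synthetic placeholders, and scikit-learn's SVC replaces whatever SVM implementation the authors actually used.

```python
"""Sketch: classifying ten static hand gestures with an SVM."""
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical training set: one row per frame, columns are hand-shape
# features, labels are the ten static gesture classes (0..9).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))        # 8 contour/fingertip features per frame
y = rng.integers(0, 10, size=1000)    # 10 gesture classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# RBF-kernel SVM behind a feature scaler; the abstract reports >80% accuracy
# on real features (random placeholder features will not reproduce that).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```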


Subjects
Gestures; Surgery, Oral; Animals; Mice; Algorithms; Software; User-Computer Interface; Hand
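The collision-detection step in the abstract above rests on testing axis-aligned bounding boxes at each node of a binary tree; for two axis-aligned boxes the separating-axis theorem reduces to interval-overlap checks along the x, y, and z axes. Below is a minimal sketch of that primitive test only, with the min/max-corner box layout as an assumption rather than the paper's data structure.

```python
"""Sketch: separating-axis test for two axis-aligned bounding boxes,
the per-node check in an AABB binary tree (hand vs. scalpel models)."""
from dataclasses import dataclass

import numpy as np


@dataclass
class AABB:
    lo: np.ndarray  # minimum corner (x, y, z)
    hi: np.ndarray  # maximum corner (x, y, z)


def aabb_overlap(a: AABB, b: AABB) -> bool:
    """For axis-aligned boxes the only candidate separating axes are x, y, z,
    so the boxes collide iff their intervals overlap on all three axes."""
    return bool(np.all(a.lo <= b.hi) and np.all(b.lo <= a.hi))


if __name__ == "__main__":
    hand = AABB(np.array([0.0, 0.0, 0.0]), np.array([2.0, 2.0, 2.0]))
    tool = AABB(np.array([1.5, 1.5, 1.5]), np.array([3.0, 3.0, 3.0]))
    print(aabb_overlap(hand, tool))  # True: intervals overlap on every axis
```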