Results 1 - 2 of 2
1.
IEEE Trans Vis Comput Graph; 24(11): 2993-3004, 2018 Nov.
Article in English | MEDLINE | ID: mdl-30207957

ABSTRACT

We propose a new approach for 3D reconstruction of dynamic indoor and outdoor scenes in everyday environments, leveraging only cameras worn by a user. This approach allows 3D reconstruction of experiences at any location and virtual tours from anywhere. The key innovation of the proposed egocentric reconstruction system is to capture the wearer's body pose and facial expression from near-body views, e.g., cameras on the user's glasses, and to capture the surrounding environment using outward-facing views. The main challenge of egocentric reconstruction, however, is the poor coverage of the near-body views: the user's body and face are observed from vantage points that are convenient for wear but inconvenient for capture. To overcome this challenge, we propose a parametric-model-based approach to user motion estimation. This approach uses convolutional neural networks (CNNs) for near-view body pose estimation, and we introduce a CNN-based approach for facial expression estimation that combines audio and video. For each time-point during capture, the intermediate model-based reconstructions from these systems are used to retarget a high-fidelity pre-scanned model of the user. We demonstrate that the proposed self-sufficient, head-worn capture system can reconstruct the wearer's movements and their surrounding environment in both indoor and outdoor situations without any additional views. As a proof of concept, we show how the resulting 3D-plus-time reconstruction can be experienced immersively within a virtual reality system (e.g., the HTC Vive). We expect that the size of the proposed egocentric capture-and-reconstruction system will eventually be reduced to fit within future AR glasses, and that it will be widely useful for immersive 3D telepresence, virtual tours, and general use-anywhere 3D content creation.
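As a rough illustration of the per-frame pipeline this abstract describes (near-view pose CNN, audio-visual expression CNN, retargeting onto a pre-scanned user model), here is a minimal Python sketch. All names (FrameInput, PoseCNN, FaceCNN, retarget, reconstruct_frame) and parameter counts are hypothetical stand-ins, not the authors' actual API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FrameInput:
    near_body_images: List[object]   # inward-facing views from the glasses
    outward_images: List[object]     # outward-facing views of the environment
    audio_window: List[float]        # short audio clip around this time-point

class PoseCNN:
    """Hypothetical CNN regressing body-model pose parameters from near-body views."""
    def estimate(self, near_body_images):
        return [0.0] * 72            # e.g., joint angles of a parametric body model

class FaceCNN:
    """Hypothetical CNN combining audio and video into expression parameters."""
    def estimate(self, near_body_images, audio_window):
        return [0.0] * 50            # e.g., blendshape weights

def retarget(prescanned_model, pose_params, expression_params):
    """Drive the high-fidelity pre-scanned user model with the per-frame estimates."""
    return {"model": prescanned_model, "pose": pose_params, "expression": expression_params}

def reconstruct_frame(frame, prescanned_model, pose_net, face_net):
    pose = pose_net.estimate(frame.near_body_images)
    expr = face_net.estimate(frame.near_body_images, frame.audio_window)
    # Outward-facing views would feed a separate environment reconstruction
    # (e.g., multi-view stereo); omitted from this sketch.
    return retarget(prescanned_model, pose, expr)
```

Running reconstruct_frame once per captured time-point yields the sequence of retargeted user models that, combined with the environment reconstruction, forms the 3D-plus-time result the abstract describes.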


Subject(s)
Facial Expression; Imaging, Three-Dimensional/methods; Posture/physiology; User-Computer Interface; Video Recording/methods; Humans; Internet; Neural Networks, Computer
2.
Stud Health Technol Inform; 220: 55-62, 2016.
Article in English | MEDLINE | ID: mdl-27046554

ABSTRACT

This paper introduces a computer-based system designed to record a surgical procedure with multiple depth cameras and reconstruct, in three dimensions, the dynamic geometry of the actions and events that occur during the procedure. The resulting 3D-plus-time data takes the form of dynamic, textured geometry and can be examined immersively at a later time: equipped with a virtual reality headset such as the Oculus Rift DK2, a user can walk around the reconstruction of the procedure room while controlling playback of the recorded procedure with simple VCR-like controls (play, pause, rewind, fast-forward). The reconstruction can be annotated in space and time to give users more information about the scene. We expect such a system to be useful in applications such as the training of medical students and nurses.
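The VCR-like playback the abstract mentions can be pictured as a cursor moving over a time-ordered list of textured meshes. Below is a minimal Python sketch under that assumption; SurgeryPlayback, its methods, and the annotation keying scheme are invented for illustration and do not come from the paper.

```python
class SurgeryPlayback:
    """Hypothetical VCR-style player over a time-ordered list of textured meshes."""

    def __init__(self, frames, fps=30):
        self.frames = frames      # one textured-geometry snapshot per time step
        self.fps = fps            # display rate assumed by the viewer
        self.cursor = 0           # index of the frame currently shown
        self.step = 0             # frames advanced per tick: 0 = paused

    def play(self):         self.step = 1    # normal forward playback
    def pause(self):        self.step = 0
    def rewind(self):       self.step = -2   # backward at double speed
    def fast_forward(self): self.step = 2    # forward at double speed

    def tick(self):
        """Advance one display tick; return the frame to render, clamped to the recording."""
        self.cursor = max(0, min(len(self.frames) - 1, self.cursor + self.step))
        return self.frames[self.cursor]

# Space-time annotations could be kept alongside the recording, keyed by
# frame index and a 3D position in the room (keying scheme is an assumption):
annotations = {(120, (1.0, 0.5, 2.0)): "Incision begins here."}
```

A VR viewer would call tick() once per rendered frame and draw the returned mesh, so the headset's walk-around navigation stays decoupled from the recording's playback state.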


Subject(s)
Educational Measurement/methods; General Surgery/education; Imaging, Three-Dimensional/methods; Operating Rooms/methods; Photography/methods; Surgery, Computer-Assisted/methods; Computer-Assisted Instruction; Humans; Imaging, Three-Dimensional/instrumentation; Pattern Recognition, Automated/methods; Photography/instrumentation; Reproducibility of Results; Sensitivity and Specificity; Software; Surgery, Computer-Assisted/instrumentation; Systems Integration; Video Games; Whole Body Imaging/instrumentation; Whole Body Imaging/methods