Results 1 - 3 of 3

1.
Stud Health Technol Inform ; 173: 372-8, 2012.
Article in English | MEDLINE | ID: mdl-22357021

ABSTRACT

We introduce the notion of Shader Lamps Virtual Patients (SLVP) - the combination of projector-based Shader Lamps Avatars and interactive virtual humans. This paradigm uses Shader Lamps Avatars technology to give a 3D physical presence to conversational virtual humans, improving their social interactivity and enabling them to share the physical space with the user. The paradigm scales naturally to multiple viewers, allowing for scenarios where an instructor and multiple students are involved in the training. We have developed a physical-virtual patient with which medical students conduct ophthalmic exams in an interactive training experience. In this experience, the trainee practices multiple skills simultaneously, including using a surrogate optical instrument in front of a physical head, conversing with the patient about his fears, observing realistic head motion, and practicing patient safety. Here we present a prototype system and results from a preliminary formative evaluation.
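
The Java sketch below is illustrative only and is not code from the paper: it shows the geometric core of a shader-lamps setup, mapping a tracked 3-D point on the physical head to projector pixel coordinates through a calibrated view-projection matrix. The class name, matrix values, and resolution are hypothetical placeholders.

```java
// Illustrative sketch (not from the paper): a shader-lamps system renders the
// avatar from the projector's calibrated viewpoint so that each 3-D point on
// the physical head surface receives the pixel intended for it.
public final class ShaderLampsSketch {

    /** Maps a 3-D point (tracker/head coordinates) to projector pixel coordinates. */
    static double[] projectToProjector(double[] p, double[][] viewProj,
                                       int width, int height) {
        // Homogeneous transform: clip = viewProj * [x y z 1]^T
        double[] clip = new double[4];
        for (int r = 0; r < 4; r++) {
            clip[r] = viewProj[r][0] * p[0] + viewProj[r][1] * p[1]
                    + viewProj[r][2] * p[2] + viewProj[r][3];
        }
        // Perspective divide, then map normalized device coordinates to pixels.
        double x = clip[0] / clip[3], y = clip[1] / clip[3];
        return new double[] { (x * 0.5 + 0.5) * width,
                              (1.0 - (y * 0.5 + 0.5)) * height };
    }

    public static void main(String[] args) {
        // Hypothetical calibrated projector view-projection matrix (stub values).
        double[][] vp = {
            {1, 0, 0, 0},
            {0, 1, 0, 0},
            {0, 0, 1, 0},
            {0, 0, 1, 0}   // simple perspective: w = z
        };
        double[] pointOnHead = {0.1, 0.05, 1.0};
        double[] px = projectToProjector(pointOnHead, vp, 1920, 1080);
        System.out.printf("pixel: (%.1f, %.1f)%n", px[0], px[1]);
    }
}
```

In a full system, the avatar mesh would be re-rendered from this projector viewpoint every frame so the projected imagery stays registered to the moving physical head.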


Subjects
Computer Simulation, Patients, User-Computer Interface, Clinical Competence, Diagnostic Techniques, Ophthalmological, Humans, Imaging, Three-Dimensional
2.
BMC Bioinformatics ; 8: 389, 2007 Oct 15.
Article in English | MEDLINE | ID: mdl-17937818

ABSTRACT

BACKGROUND: Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. RESULTS: We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. CONCLUSION: MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.
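
As a rough illustration of the plugin-based extensibility the abstract describes (MindSeer's actual plugin API is not given here, so the interface below is hypothetical), a Java extension point of this kind could be wired up with the standard ServiceLoader mechanism:

```java
// Illustrative sketch only: a generic ServiceLoader-based extension point of
// the kind a visualization tool might use. Interface and method names are
// hypothetical, not MindSeer's real API.
import java.util.ServiceLoader;

interface VisualizationPlugin {
    String name();
    boolean canHandle(String modality);   // e.g. "MRI", "fMRI", "surface"
    void render(Object dataset);          // dataset type simplified for the sketch
}

public final class PluginHost {
    public static void main(String[] args) {
        // Discover implementations listed in META-INF/services on the classpath.
        ServiceLoader<VisualizationPlugin> plugins =
                ServiceLoader.load(VisualizationPlugin.class);
        for (VisualizationPlugin p : plugins) {
            System.out.println("Loaded plugin: " + p.name());
        }
    }
}
```

Discovering concrete plugins from classpath service entries keeps the core viewer unaware of individual modalities, which is one common way to achieve the kind of extensibility the abstract claims.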


Subjects
Brain Mapping/methods, Brain/anatomy & histology, Brain/physiology, Diagnostic Imaging/methods, Image Interpretation, Computer-Assisted/methods, Software, User-Computer Interface, Algorithms, Computer Graphics, Databases, Factual, Humans, Neurosciences/methods, Subtraction Technique, Systems Integration
3.
IEEE Trans Vis Comput Graph ; 22(4): 1367-76, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26780797

ABSTRACT

We describe an augmented reality, optical see-through display based on a DMD chip with an extremely fast (16 kHz) binary update rate. We combine the techniques of post-rendering 2-D offsets and just-in-time tracking updates with a novel modulation technique for turning binary pixels into perceived gray scale. These processing elements, implemented in an FPGA, are physically mounted along with the optical display elements in a head-tracked rig through which users view synthetic imagery superimposed on their real environment. The combination of mechanical tracking at near-zero latency with reconfigurable display processing has given us a measured average of 80 µs of end-to-end latency (from head motion to change in photons from the display) and also a versatile test platform for extremely-low-latency display systems. We have used it to examine the trade-offs between image quality and cost (i.e., power and logical complexity) and have found that quality can be maintained with a fairly simple display modulation scheme.
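
The paper's specific modulation scheme is not described in the abstract. As a hedged sketch of the general idea, the Java snippet below uses first-order pulse-density (sigma-delta) modulation to turn a target gray level into a stream of binary subframes whose on-fraction, integrated by the eye over its integration window, approximates the intended intensity.

```java
// Illustrative sketch, not the paper's modulation scheme: first-order
// pulse-density modulation per pixel for a fast binary (DMD-style) display.
public final class BinaryGrayscaleSketch {

    /** Emits n binary subframes whose on-density approximates gray in [0, 1]. */
    static boolean[] modulate(double gray, int n) {
        boolean[] frames = new boolean[n];
        double accumulator = 0.0;
        for (int i = 0; i < n; i++) {
            accumulator += gray;          // accumulate desired light
            if (accumulator >= 0.5) {     // emit an "on" subframe once enough has built up
                frames[i] = true;
                accumulator -= 1.0;       // subtract the light actually emitted
            }
        }
        return frames;
    }

    public static void main(String[] args) {
        boolean[] f = modulate(0.3, 16);  // roughly 30% duty cycle over 16 subframes
        int on = 0;
        for (boolean b : f) if (b) on++;
        System.out.println("on subframes: " + on + " / " + f.length);
    }
}
```

At a 16 kHz binary update rate, even a few hundred such subframes fit comfortably within the eye's integration time, which is what makes perceived grayscale from purely binary pixels feasible.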
