Results 1 - 9 of 9
1.
J Gerontol Nurs ; 46(4): 41-47, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32219456

ABSTRACT

The current study aimed to categorize fall risk appraisal and quantify discrepancies between perceived fall risk measured subjectively using the short Fall Efficacy Scale-International and physiological fall risk measured objectively using the portable BTrackS™ Assess Balance System. One hundred two community-dwelling older adults were evaluated in this cross-sectional study. Approximately 40% of participants had maladaptive fall risk appraisals, which were either irrational (high perceived risk despite low physiological fall risk) or incongruent (low perceived risk but high physiological fall risk). The remaining 60% of participants had adaptive fall risk appraisals, which were either rational (low perceived risk aligned with low physiological fall risk) or congruent (high perceived risk aligned with high physiological fall risk). Among participants with rational, congruent, irrational, and incongruent appraisals, 21.7%, 66.7%, 28%, and 18.8%, respectively, reported having a history of falls (p < 0.01). Using technology to identify discrepancies in perceived and physiological fall risks can potentially increase the success of fall risk screening and guide fall interventions to target perceived or physiological components of balance. [Journal of Gerontological Nursing, 46(4), 41-47.].
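The four appraisal categories above form a 2×2 matrix over the two dichotomized risk measures. As a minimal illustrative sketch (the function name and boolean encoding are ours, not taken from the study), the categorization could be expressed as:

```python
def classify_appraisal(perceived_high: bool, physiological_high: bool) -> str:
    """Map the two dichotomized risk measures (perceived fall risk from the
    short FES-I, physiological fall risk from balance testing) to the four
    appraisal categories described in the abstract."""
    if perceived_high and physiological_high:
        return "congruent"    # adaptive: high perceived, high physiological
    if not perceived_high and not physiological_high:
        return "rational"     # adaptive: low perceived, low physiological
    if perceived_high:
        return "irrational"   # maladaptive: high perceived, low physiological
    return "incongruent"      # maladaptive: low perceived, high physiological
```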


Subjects
Accidental Falls/prevention & control; Geriatric Assessment/methods; Aged; Aged, 80 and over; Cross-Sectional Studies; Female; Humans; Independent Living; Male; Postural Balance; Risk Assessment; Risk Factors; Technology
2.
IEEE Trans Vis Comput Graph ; 29(12): 4936-4950, 2023 Dec.
Article in English | MEDLINE | ID: mdl-35905060

ABSTRACT

In a future of pervasive augmented reality (AR), AR systems will need to be able to efficiently draw or guide the attention of the user to visual points of interest in their physical-virtual environment. Since AR imagery is overlaid on top of the user's view of their physical environment, these attention guidance techniques must not only compete with other virtual imagery, but also with distracting or attention-grabbing features in the user's physical environment. Because of the wide range of physical-virtual environments that pervasive AR users will find themselves in, it is difficult to design visual cues that "pop out" to the user without performing a visual analysis of the user's environment, and changing the appearance of the cue to stand out from its surroundings. In this article, we present an initial investigation into the potential uses of dichoptic visual cues for optical see-through AR displays, specifically cues that involve having a difference in hue, saturation, or value between the user's eyes. These types of cues have been shown to be preattentively processed by the user when presented on other stereoscopic displays, and may also be an effective method of drawing user attention on optical see-through AR displays. We present two user studies: one that evaluates the saliency of dichoptic visual cues on optical see-through displays, and one that evaluates their subjective qualities. Our results suggest that hue-based dichoptic cues or "Forbidden Colors" may be particularly effective for these purposes, achieving significantly lower error rates in a pop out task compared to value-based and saturation-based cues.

3.
IEEE Trans Vis Comput Graph ; 29(11): 4751-4760, 2023 11.
Article in English | MEDLINE | ID: mdl-37782611

ABSTRACT

Human speech perception is generally optimal in quiet environments; however, it becomes more difficult and error prone in the presence of noise, such as other humans speaking nearby or ambient noise. In such situations, human speech perception is improved by speech reading, i.e., watching the movements of a speaker's mouth and face, either consciously, as done by people with hearing loss, or subconsciously by other humans. While previous work focused largely on speech perception of two-dimensional videos of faces, there is a gap in the research on facial features as seen in head-mounted displays, including the impact of display resolution and the effectiveness of visually enhancing a virtual human face on speech perception in the presence of noise. In this paper, we present a comparative user study (N=21) in which we investigated an audio-only condition compared to two levels of head-mounted display resolution (1832×1920 or 916×960 pixels per eye) and two levels of virtual human appearance, either native or visually enhanced, the latter consisting of an up-scaled facial representation and simulated lipstick (lip coloring) added to increase contrast. To understand effects on speech perception in noise, we measured participants' speech reception thresholds (SRTs) for each audio-visual stimulus condition. These thresholds indicate the decibel level of the speech signal necessary for a listener to receive the speech correctly 50% of the time. First, we show that display resolution significantly affected participants' ability to perceive the speech signal in noise, which has practical implications for the field, especially in social virtual environments. Second, we show that our visual enhancement method was able to compensate for limited display resolution and was generally preferred by participants. Specifically, our participants indicated that they benefited from the head scaling more than from the added facial contrast of the simulated lipstick. We discuss relationships, implications, and guidelines for applications that aim to leverage such enhancements.
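The SRT defined above is the 50% point on a listener's psychometric function. As a minimal sketch of the concept (the data and function name are illustrative, not the study's measurement procedure), it can be estimated by linear interpolation between measured speech levels:

```python
def estimate_srt(levels_db, proportion_correct, target=0.5):
    """Interpolate the speech level (dB) at which the listener reaches the
    target proportion of correctly received speech (0.5 = the SRT)."""
    pairs = sorted(zip(levels_db, proportion_correct))
    for (l0, p0), (l1, p1) in zip(pairs, pairs[1:]):
        if p0 <= target <= p1:
            # linear interpolation between the two bracketing measurements
            return l0 + (target - p0) * (l1 - l0) / (p1 - p0)
    raise ValueError("target proportion is not bracketed by the data")
```

In practice, SRTs are usually measured with adaptive staircase procedures rather than fixed-level sampling; this sketch only illustrates what the 50% threshold means.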


Subjects
Speech Perception; Humans; Computer Graphics; Face; Speech; Hearing
4.
IEEE Trans Vis Comput Graph ; 27(8): 3534-3545, 2021 08.
Article in English | MEDLINE | ID: mdl-31869794

ABSTRACT

In this article, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in a mixed reality environment. In Experiment 1, participants played a tabletop game with a VH, in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions: the VH in the virtual condition moves a virtual token that can only be seen through augmented reality (AR) glasses, while the VH in the physical condition moves a physical token as the participants do; therefore, the VH's token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table, which in turn moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition and assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects, in which participants attributed the VH's ability to move physical objects to other elements in the real world. The VH's physical influence also improved participants' overall experience with the VH. In Experiment 2, we further examined how physical-virtual latency in movements affected the perceived plausibility of the VH's interaction with the real world. Our results indicate that a slight delay between the virtual hand's movement and the physical token's reaction increased the perceived realism and causality of the mixed reality interaction. We discuss potential explanations for these findings and implications for future shared mixed reality tabletop setups.


Subjects
Augmented Reality; Computer Graphics; Social Interaction; Video Games; Virtual Reality; Adolescent; Adult; Female; Humans; Male; Movement/physiology; Smart Glasses; Time Factors; Young Adult
5.
IEEE Trans Vis Comput Graph ; 26(5): 1934-1944, 2020 05.
Article in English | MEDLINE | ID: mdl-32070964

ABSTRACT

Human gaze awareness is important for social and collaborative interactions. Recent technological advances in augmented reality (AR) displays and sensors provide us with the means to extend collaborative spaces with real-time dynamic AR indicators of one's gaze, for example via three-dimensional cursors or rays emanating from a partner's head. However, such gaze cues are only as useful as the quality of the underlying gaze estimation and the accuracy of the display mechanism. Depending on the type of visualization and the characteristics of the errors, AR gaze cues could either enhance or interfere with collaborations. In this paper, we present two human-subject studies in which we investigate the influence of angular and depth errors, target distance, and the type of gaze visualization on participants' performance and subjective evaluation during a collaborative task with a virtual human partner, where participants identified targets within a dynamically walking crowd. First, our results show that there is a significant difference in performance between the two gaze visualizations, ray and cursor, in conditions with simulated angular and depth errors: the ray visualization provided significantly faster response times and fewer errors compared to the cursor visualization. Second, our results show that under optimal conditions, among four different gaze visualization methods, a ray without depth information provides the worst performance and is rated lowest, while a combination of a ray and cursor with depth information is rated highest. We discuss the subjective and objective performance thresholds and provide guidelines for practitioners in this field.


Subjects
Augmented Reality; Computer Graphics; Eye-Tracking Technology; Fixation, Ocular/physiology; Adolescent; Adult; Female; Humans; Male; Task Performance and Analysis; Young Adult
6.
Simul Healthc ; 15(2): 115-121, 2020 04.
Article in English | MEDLINE | ID: mdl-31895310

ABSTRACT

INTRODUCTION: We introduce a new type of patient simulator referred to as the Physical-Virtual Patient Simulator (PVPS). The PVPS combines the tangible characteristics of a human-shaped physical form with the flexibility and richness of a virtual patient. The PVPS can exhibit a range of multisensory cues, including visual cues (eg, capillary refill, facial expressions, appearance changes), auditory cues (eg, verbal responses, heart sounds), and tactile cues (eg, localized temperature, pulse). METHODS: We describe the implementation of the technology, technical testing with healthcare experts, and an institutional review board-approved pilot experiment involving 22 nurse practitioner students interacting with a simulated child in 2 scenarios: sepsis and child abuse. The nurse practitioners were asked qualitative questions about ease of use and the cues they noticed. RESULTS: Participants found it easy to interact with the PVPS and had mixed but encouraging responses regarding realism. In the sepsis scenario, participants reported the following cues leading to their diagnoses: temperature, voice, mottled skin, attitude and facial expressions, breathing and cough, vitals and oxygen saturation, and appearance of the mouth and tongue. For the child abuse scenario, they reported the skin appearance on the arms and abdomen, perceived attitude, facial expressions, and inconsistent stories. CONCLUSIONS: We are encouraged by the initial results and user feedback regarding the perceived realism of visual (eg, mottling), audio (eg, breathing sounds), and tactile (eg, temperature) cues displayed by the PVPS, and ease of interaction with the simulator.


Subjects
Nurse Practitioners/education; Simulation Training; Child; Child Abuse/diagnosis; Humans; Sepsis/diagnosis; User-Computer Interface
7.
Front Robot AI ; 5: 114, 2018.
Article in English | MEDLINE | ID: mdl-33500993

ABSTRACT

Social presence, or the feeling of being there with a "real" person, is a crucial component of interactions that take place in virtual reality. This paper reviews the concept, antecedents, and implications of social presence, with a focus on the literature regarding the predictors of social presence. The article begins by exploring the concept of social presence, distinguishing it from two other dimensions of presence: telepresence and self-presence. After establishing the definition of social presence, the article offers a systematic review of 233 separate findings identified from 152 studies that investigate the factors (i.e., immersive qualities, contextual differences, and individual psychological traits) that predict social presence. Finally, the paper discusses the implications of heightened social presence and when it does and does not enhance one's experience in a virtual environment.

8.
IEEE Trans Vis Comput Graph ; 24(11): 2947-2962, 2018 11.
Article in English | MEDLINE | ID: mdl-30188833

ABSTRACT

In 2008, Zhou et al. presented a survey paper summarizing the previous ten years of ISMAR publications, which provided invaluable insights into the research challenges and trends associated with that time period. Ten years later, we review the research that has been presented at ISMAR conferences since the survey of Zhou et al., at a time when both academia and the AR industry are enjoying dramatic technological changes. Here we consider the research results and trends of the last decade of ISMAR by carefully reviewing the ISMAR publications from the period of 2008-2017, in the context of the first ten years. While reviewing them, we analyzed the number of papers on different research topics and their impact by citations, which reveals a sharp increase in AR evaluation and rendering research. Based on this review, we offer some observations about potential future research areas and trends that could be helpful to AR researchers and industry members looking ahead.

9.
J Biomed Discov Collab ; 4: 4, 2009 Apr 19.
Article in English | MEDLINE | ID: mdl-19521951

ABSTRACT

Two-dimensional (2D) videoconferencing has been explored widely in the past 15-20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals' viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare.
