Results 1 - 20 of 59
1.
J Acoust Soc Am ; 141(4): EL427, 2017 04.
Article in English | MEDLINE | ID: mdl-28464642

ABSTRACT

Sound propagation encompasses various acoustic phenomena including reverberation. Current virtual acoustic methods, ranging from parametric filters to physically accurate solvers, can simulate reverberation with varying degrees of fidelity. The effects of reverberant sounds generated using different propagation algorithms on acoustic distance perception are investigated. In particular, two classes of methods for real-time sound propagation in dynamic scenes, based on parametric filters and ray tracing, are evaluated. The study shows that ray tracing yields greater distance accuracy than the approximate, filter-based method. This suggests that accurate reverberation in virtual reality results in better reproduction of acoustic distances.


Subjects
Acoustics, Sound Localization, Sound, Acoustic Stimulation, Adult, Cues (Psychology), Female, Humans, Judgment, Male, Middle Aged, Motion (Physics), Signal Processing, Computer-Assisted, Time Factors, Vibration, Young Adult
2.
J Acoust Soc Am ; 141(3): 2289, 2017 03.
Article in English | MEDLINE | ID: mdl-28372101

ABSTRACT

Outdoor sound propagation benefits from algorithms that can handle, in a computationally efficient manner, inhomogeneous media, complex boundary surfaces, and large spatial expanses. Recent work by Mo, Yeh, Lin, and Manocha [Appl. Acoust. 104, 142-151 (2016)] proposed a ray tracing method using analytic ray curves as tracing primitives, which improved the performance of propagation-path computation over rectilinear ray tracers. In this paper, an algorithm is developed that extends the performance improvement to field computation; it combines the analytic ray curve tracer with fast pressure computation based on the Gaussian beam model. The algorithm is validated against published results on benchmarks in atmospheric and ocean acoustics, and its application is demonstrated on a scene with terrain and buildings of realistic complexity under a variety of atmospheric conditions. This algorithm is able to compute characteristic sound fields for fully general media profiles and complex three-dimensional scenes at close-to-interactive speed.
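To make the geometric idea concrete, the following minimal Python sketch traces a 2-D ray through a medium whose sound speed varies linearly with height, c(z) = c0 + g*z, using short straight segments and the Snell invariant cos(theta)/c(z) = const. This is only an illustrative stand-in: the paper's method uses closed-form analytic ray curves (and Gaussian beams for pressure computation), and the step size, gradient, and launch parameters here are arbitrary assumptions.

```python
import numpy as np

def trace_ray(z0, theta0, c0=343.0, g=0.1, ds=1.0, n_steps=2000):
    """Trace a 2-D acoustic ray through c(z) = c0 + g*z with short straight steps.

    theta0 is the launch angle above the horizontal (radians). The Snell
    invariant cos(theta)/c(z) stays constant along the ray; when the ray
    becomes horizontal it turns back toward lower sound speed.
    Returns an (n_steps + 1, 2) array of (x, z) positions.
    """
    p = np.cos(theta0) / (c0 + g * z0)           # Snell invariant
    x, z = 0.0, z0
    theta = abs(theta0)
    sign = 1.0 if np.sin(theta0) >= 0 else -1.0  # current vertical direction
    path = [(x, z)]
    for _ in range(n_steps):
        x += ds * np.cos(theta)
        z += ds * sign * np.sin(theta)
        cos_t = p * (c0 + g * z)
        if cos_t >= 1.0:                         # turning point: ray bends back
            cos_t, sign = 1.0, -sign
        theta = np.arccos(np.clip(cos_t, -1.0, 1.0))
        path.append((x, z))
    return np.array(path)

# Example: a ray launched 5 degrees upward in an upward-refracting atmosphere.
ray = trace_ray(z0=2.0, theta0=np.deg2rad(5.0))
```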

3.
J Acoust Soc Am ; 135(6): 3231-42, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24907788

ABSTRACT

Acoustic pulse propagation in outdoor urban environments is a physically complex phenomenon due to the predominance of reflection, diffraction, and scattering. This is especially true in non-line-of-sight cases, where edge diffraction and high-order scattering are major components of acoustic energy transport. Past work by Albert and Liu [J. Acoust. Soc. Am. 127, 1335-1346 (2010)] has shown that many of these effects can be captured using a two-dimensional finite-difference time-domain method, which was compared to measured data recorded in an army training village. In this paper, a full three-dimensional analysis of acoustic pulse propagation is presented. This analysis is enabled by the adaptive rectangular decomposition method of Raghuvanshi, Narain, and Lin [IEEE Trans. Visual. Comput. Graphics 15, 789-801 (2009)], which models sound propagation in the same scene in three dimensions. The simulation was run at a much higher usable bandwidth (nearly 450 Hz) and took only a few minutes on a desktop computer. It is shown that a three-dimensional solution provides better agreement with measured data than two-dimensional modeling, especially in cases where propagation over rooftops is important. In general, the predicted acoustic responses match the measured results well for the source/sensor locations.
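As a point of reference for the numerical machinery discussed above, here is a toy 2-D finite-difference time-domain (FDTD) update for the scalar wave equation, the class of solver used in the earlier two-dimensional study. It is not the adaptive rectangular decomposition method, which instead solves the wave equation spectrally inside rectangular partitions; the grid spacing, time step, and sound speed below are illustrative values chosen to satisfy the CFL condition.

```python
import numpy as np

def fdtd_step(p, p_prev, c=343.0, dx=0.1, dt=1.5e-4):
    """One leapfrog update of the 2-D scalar wave equation on a periodic grid."""
    assert c * dt / dx <= 1.0 / np.sqrt(2.0), "CFL condition violated"
    lap = (np.roll(p, 1, 0) + np.roll(p, -1, 0) +
           np.roll(p, 1, 1) + np.roll(p, -1, 1) - 4.0 * p) / dx**2
    return 2.0 * p - p_prev + (c * dt) ** 2 * lap

# Propagate an initial Gaussian pressure pulse for 100 steps.
n = 256
yy, xx = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
p = np.exp(-((xx - n / 2) ** 2 + (yy - n / 2) ** 2) / 50.0)
p_prev = p.copy()          # zero initial velocity
for _ in range(100):
    p, p_prev = fdtd_step(p, p_prev), p
```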

4.
Sci Rep ; 14(1): 10606, 2024 05 08.
Article in English | MEDLINE | ID: mdl-38719904

ABSTRACT

Increasing use of social media has resulted in many detrimental effects among youth. With very little control over the multimodal content consumed on these platforms, and with false narratives conveyed by multimodal social media posts, such platforms often affect the mental well-being of their users. An important step toward reducing these negative effects is to understand creators' intent behind sharing content and to educate their social network about this intent. Toward this goal, we propose INTENT-O-METER, a perceived human intent prediction model for multimodal (image and text) social media posts. INTENT-O-METER draws on ideas from the psychology and cognitive modeling literature, in addition to visual and textual features, to improve perceived-intent prediction. It leverages the Theory of Reasoned Action (TRA), factoring in (i) the creator's attitude toward sharing a post and (ii) the social norm or perception toward the multimodal post, in determining the creator's intention. We also introduce INTENTGRAM, a dataset of 55K social media posts scraped from public Instagram profiles. We compare INTENT-O-METER with state-of-the-art intent prediction approaches on four perceived intent prediction datasets: Intentonomy, MDID, MET-Meme, and INTENTGRAM. We observe that leveraging TRA in addition to visual and textual features, as opposed to using only the latter, improves prediction accuracy by up to 7.5% in Top-1 accuracy and 8% in AUC on INTENTGRAM. Finally, we develop a web browser application mimicking a popular social media platform and show users social media content overlaid with these intent labels. In our analysis, around 70% of users confirmed that tagging posts with intent labels helped them become more aware of the content they consumed, and said they would be open to experimenting with filtering content based on these labels. However, more extensive user evaluation is required to understand how adding such perceived intent labels mitigates the negative effects of social media.
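For readers wondering how TRA factors might be combined with visual and textual features, the PyTorch sketch below shows one plausible late-fusion layout. It is a hypothetical illustration only: the actual INTENT-O-METER architecture, feature dimensions, and number of intent classes are not specified in this abstract, and the `attitude` and `social_norm` inputs are assumed to be scalar estimates produced upstream.

```python
import torch
import torch.nn as nn

class TRAFusionHead(nn.Module):
    """Toy late-fusion head: image + text embeddings + two TRA factors -> intent logits."""

    def __init__(self, img_dim=2048, txt_dim=768, n_intents=28):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(img_dim + txt_dim + 2, 512),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(512, n_intents),
        )

    def forward(self, img_feat, txt_feat, attitude, social_norm):
        # attitude, social_norm: shape (batch, 1) scalar TRA estimates
        x = torch.cat([img_feat, txt_feat, attitude, social_norm], dim=-1)
        return self.mlp(x)

# Example forward pass with random features.
head = TRAFusionHead()
logits = head(torch.randn(4, 2048), torch.randn(4, 768),
              torch.rand(4, 1), torch.rand(4, 1))
```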


Subjects
Intention, Social Media, Humans, Theory of Planned Behavior
5.
Article in English | MEDLINE | ID: mdl-38451772

ABSTRACT

In this paper, we present a novel multi-modal attention guidance method designed to address the challenges of turn-taking dynamics in meetings and enhance group conversations within virtual reality (VR) environments. Recognizing the difficulties posed by a confined field of view and the absence of detailed gesture tracking in VR, our proposed method aims to make it easier to notice new speakers attempting to join the conversation. This approach tailors attention guidance, providing a nuanced experience for highly engaged participants while offering subtler cues for those less engaged, thereby enriching the overall meeting dynamics. Through group interview studies, we gathered insights to guide our design, resulting in a prototype that employs light as a diegetic guidance mechanism, complemented by spatial audio. The combination creates an intuitive and immersive meeting environment, effectively directing users' attention to new speakers. An evaluation study, comparing our method to state-of-the-art attention guidance approaches, demonstrated significantly faster response times (p < 0.001), higher perceived conversation satisfaction (p < 0.001), and stronger preference (p < 0.001) for our method. Our findings contribute to the understanding of design implications for VR social attention guidance, opening avenues for future research and development.

6.
IEEE Trans Vis Comput Graph ; 30(5): 2570-2579, 2024 May.
Article in English | MEDLINE | ID: mdl-38437086

ABSTRACT

We provide the first perceptual quantification of users' sensitivity to radial optic flow artifacts and demonstrate a promising approach for masking this optic flow artifact via blink suppression. Near-eye HMDs allow users to feel immersed in virtual environments by providing visual cues, like motion parallax and stereoscopy, that mimic how we view the physical world. However, these systems exhibit a variety of perceptual artifacts that can limit their usability and the user's sense of presence in VR. One well-known artifact is the vergence-accommodation conflict (VAC). Varifocal displays can mitigate VAC, but bring with them other artifacts such as a change in virtual image size (radial optic flow) when the focal plane changes. We conducted a set of psychophysical studies to measure users' ability to perceive this radial flow artifact before, during, and after self-initiated blinks. Our results showed that visual sensitivity was reduced by a factor of 10 at the start of a blink and for ~70 ms after the blink was detected. Pre- and post-blink sensitivity was, on average, ~0.15% image size change during normal viewing and increased to ~1.5-2.0% during blinks. Our results imply that a rapid (under 70 ms) radial optic flow distortion can go unnoticed during a blink. Furthermore, our results provide empirical data that can be used to inform engineering requirements for both hardware design and software-based graphical correction algorithms for future varifocal near-eye displays. Our project website is available at https://gamma.umd.edu/ROF/.
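The thresholds reported above suggest a simple gating policy for varifocal displays: apply most of a required radial image-size change only within the short suppression window after a blink is detected. The sketch below is a hypothetical illustration built from the abstract's numbers (~0.15% visible change in normal viewing, ~1.5% tolerated during blinks, ~70 ms window); it is not the paper's correction algorithm.

```python
import time

BLINK_WINDOW_S = 0.070   # sensitivity stays suppressed ~70 ms after blink onset
VISIBLE_LIMIT = 0.0015   # ~0.15% size change is noticeable during normal viewing
BLINK_LIMIT = 0.015      # ~1.5% goes unnoticed during/just after a blink

def scale_change_this_frame(requested_change, last_blink_time, now=None):
    """Clamp the per-frame radial image-scale change to a perceptually safe limit."""
    now = time.monotonic() if now is None else now
    in_window = (now - last_blink_time) < BLINK_WINDOW_S
    limit = BLINK_LIMIT if in_window else VISIBLE_LIMIT
    return max(-limit, min(limit, requested_change))
```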


Subjects
Optic Flow, Computer Graphics, Ocular Accommodation, Algorithms, Software
7.
Article in English | MEDLINE | ID: mdl-37027697

ABSTRACT

We present PACE, a novel method for modifying motion-captured virtual agents to interact with and move throughout dense, cluttered 3D scenes. Our approach changes a given motion sequence of a virtual agent as needed to adjust to the obstacles and objects in the environment. We first take the individual frames of the motion sequence most important for modeling interactions with the scene and pair them with the relevant scene geometry, obstacles, and semantics such that interactions in the agent's motion match the affordances of the scene (e.g., standing on a floor or sitting in a chair). We then optimize the motion of the human by directly altering the high-DOF pose at each frame in the motion to better account for the unique geometric constraints of the scene. Our formulation uses novel loss functions that maintain a realistic flow and natural-looking motion. We compare our method with prior motion-generation techniques and highlight the benefits of our method with a perceptual study and physical plausibility metrics. Human raters preferred our method over the prior approaches: specifically, they preferred our method 57.1% of the time versus the state-of-the-art method using existing motions, and 81.0% of the time versus a state-of-the-art motion synthesis method. Additionally, our method scores significantly higher on established physical plausibility and interaction metrics; specifically, we outperform competing methods by over 1.2% on the non-collision metric and by over 18% on the contact metric. We have integrated our interactive system with the Microsoft HoloLens and demonstrate its benefits in real-world indoor scenes. Our project website is available at https://gamma.umd.edu/pace/.
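The per-frame pose optimization described above can be pictured as a small gradient-based refinement loop like the sketch below. The loss terms, their weights, and the interfaces (`penetration_fn`, `contact_fn`, `smoothness_fn`) are assumptions for illustration; PACE's actual loss functions and constraints are defined in the paper, not here.

```python
import torch

def refine_pose(pose_init, penetration_fn, contact_fn, smoothness_fn,
                steps=200, lr=1e-2, w_pen=1.0, w_con=0.5, w_smooth=0.1):
    """Refine high-DOF pose parameters against user-supplied differentiable losses.

    pose_init: tensor of pose parameters for one frame (or a window of frames).
    Each *_fn maps the pose tensor to a scalar loss (scene penetration, missing
    contact with support surfaces, deviation from the original motion).
    """
    pose = pose_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        loss = (w_pen * penetration_fn(pose)
                + w_con * contact_fn(pose)
                + w_smooth * smoothness_fn(pose))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pose.detach()
```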

8.
IEEE Trans Vis Comput Graph ; 29(12): 5422-5433, 2023 Dec.
Article in English | MEDLINE | ID: mdl-36219658

ABSTRACT

In developing virtual acoustic environments, it is important to understand the relationship between the computation cost and the perceptual significance of the resultant numerical error. In this article, we propose a quality criterion that evaluates the error significance of path-tracing-based sound propagation simulators. We present an analytical formula that estimates the error signal power spectrum. With this spectrum estimation, we can use a modified Zwicker's loudness model to calculate the relative loudness of the error signal masked by the ideal output. Our experimental results show that the proposed criterion can explain the human perception of simulation error in a variety of cases.
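A stripped-down version of the idea can be written directly from the abstract: compute the power spectrum of the error between the simulated and ideal outputs and report its level relative to the ideal (masking) signal per band. The sketch below uses plain FFT band powers rather than the modified Zwicker loudness model the paper employs, so it only approximates the criterion.

```python
import numpy as np

def band_error_to_masker_ratio(ideal, simulated, fs=48000, n_bands=24):
    """Per-band level of the simulation error relative to the ideal (masking) signal.

    ideal, simulated: equal-length 1-D arrays of reference and simulated signals.
    Returns dB values per band; more negative means the error sits further
    below the masker.
    """
    err = np.asarray(simulated) - np.asarray(ideal)
    freqs = np.fft.rfftfreq(len(ideal), 1.0 / fs)
    p_err = np.abs(np.fft.rfft(err)) ** 2
    p_ref = np.abs(np.fft.rfft(ideal)) ** 2
    edges = np.linspace(0.0, freqs[-1], n_bands + 1)
    ratios = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        ratios.append(10.0 * np.log10((p_err[band].sum() + 1e-12) /
                                      (p_ref[band].sum() + 1e-12)))
    return np.array(ratios)
```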

9.
Article in English | MEDLINE | ID: mdl-37037228

ABSTRACT

Although distance learning presents a number of interesting educational advantages as compared to in-person instruction, it is not without its downsides. We first assess the educational challenges presented by distance learning as a whole and identify 4 main challenges that distance learning currently presents as compared to in-person instruction: the lack of social interaction, reduced student engagement and focus, reduced comprehension and information retention, and the lack of flexible and customizable instructor resources. After assessing each of these challenges in-depth, we examine how AR/VR technologies might serve to address each challenge along with their current shortcomings, and finally outline the further research that is required to fully understand the potential of AR/VR technologies as they apply to distance learning.

10.
IEEE Trans Vis Comput Graph ; 28(8): 3035-3049, 2022 08.
Article in English | MEDLINE | ID: mdl-33315568

ABSTRACT

We present the design and results of an experiment investigating the occurrence of self-illusion and its contribution to realistic behavior consistent with a virtual role in virtual environments. Self-illusion is a generalized illusion about one's self in cognition, eliciting a sense of being associated with a role in a virtual world, despite sure knowledge that this role is not the actual self in the real world. We validate and measure self-illusion through an experiment in which each participant occupies a non-human perspective and plays a non-human role using this role's behavior patterns. Seventy-seven participants were enrolled in the user study according to an a priori power analysis. In the mixed-design experiment with different levels of manipulations, we asked the participants to play a cat (a non-human role) within an immersive virtual environment and captured their different kinds of responses, finding that participants with higher self-illusion can connect themselves to the virtual role more easily. Based on statistical analysis of questionnaires and behavior data, there is some evidence that self-illusion can be considered a novel psychological component of presence because it is dissociated from the sense of embodiment (SoE), plausibility illusion (Psi), and place illusion (PI). Moreover, self-illusion has the potential to be an effective evaluation metric for user experience in a virtual reality system for certain applications.


Subjects
Illusions, Virtual Reality, Cognition, Computer Graphics, Humans, User-Computer Interface
11.
Front Neurorobot ; 16: 843026, 2022.
Article in English | MEDLINE | ID: mdl-35645759

ABSTRACT

Recently, there have been many advances in autonomous driving, attracting considerable attention from academia and industry. However, existing studies mainly focus on cars; additional development is still required for self-driving truck algorithms and models. In this article, we introduce an intelligent self-driving truck system. Our system consists of three main components: (1) a realistic traffic simulation module for generating realistic traffic flow in testing scenarios, (2) a high-fidelity truck model designed and evaluated to mimic real truck responses in real-world deployment, and (3) an intelligent planning module with a learning-based decision-making algorithm and a multi-mode trajectory planner, taking into account the truck's constraints, road slope changes, and the surrounding traffic flow. We provide quantitative evaluations of each component individually to demonstrate the fidelity and performance of each part. We also deploy our proposed system on a real truck and conduct real-world experiments that show our system's capacity to mitigate the sim-to-real gap. Our code is available at https://github.com/InceptioResearch/IITS.

12.
IEEE Trans Vis Comput Graph ; 27(5): 2535-2544, 2021 May.
Article in English | MEDLINE | ID: mdl-33750709

ABSTRACT

We present a novel redirected walking controller based on alignment that allows the user to explore large and complex virtual environments, while minimizing the number of collisions with obstacles in the physical environment. Our alignment-based redirection controller, ARC, steers the user such that their proximity to obstacles in the physical environment matches the proximity to obstacles in the virtual environment as closely as possible. To quantify a controller's performance in complex environments, we introduce a new metric, Complexity Ratio (CR), to measure the relative environment complexity and characterize the difference in navigational complexity between the physical and virtual environments. Through extensive simulation-based experiments, we show that ARC significantly outperforms current state-of-the-art controllers in its ability to steer the user on a collision-free path. We also show through quantitative and qualitative measures of performance that our controller is robust in complex environments with many obstacles. Our method is applicable to arbitrary environments and operates without any user input or parameter tweaking, aside from the layout of the environments. We have implemented our algorithm on the Oculus Quest head-mounted display and evaluated its performance in environments with varying complexity. Our project website is available at https://gamma.umd.edu/arc/.
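As a rough illustration of proximity matching, the sketch below picks a translation gain so that the physical walking distance needed to reach the nearest physical obstacle matches the virtual distance to the nearest virtual obstacle, clamped to a commonly cited perceptual range. This is a simplified stand-in, not ARC's actual controller, and the gain bounds are assumptions.

```python
import numpy as np

def alignment_translation_gain(d_phys, d_virt, bounds=(0.86, 1.26)):
    """Translation gain so the user reaches the nearest physical and virtual
    obstacles after (roughly) the same amount of physical walking.

    d_phys, d_virt: distances (m) to the closest obstacle in the physical and
    virtual environments along the walking direction. With gain g, walking
    d_phys meters physically covers g * d_phys meters virtually, so choosing
    g = d_virt / d_phys aligns the two proximities.
    """
    if d_phys <= 1e-6:
        return bounds[0]   # already at a physical obstacle: slow virtual motion
    return float(np.clip(d_virt / d_phys, *bounds))
```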

13.
IEEE Trans Vis Comput Graph ; 27(11): 4267-4277, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34449367

ABSTRACT

We present a new approach for redirected walking in static and dynamic scenes that uses techniques from robot motion planning to compute the redirection gains that steer the user on collision-free paths in the physical space. Our first contribution is a mathematical framework for redirected walking using concepts from motion planning and configuration spaces. This framework highlights various geometric and perceptual constraints that tend to make collision-free redirected walking difficult. We use our framework to propose an efficient solution to the redirection problem that uses the notion of visibility polygons to compute the free spaces in the physical environment and the virtual environment. The visibility polygon provides a concise representation of the entire space that is visible, and therefore walkable, to the user from their position within an environment. Using this representation of walkable space, we apply redirected walking to steer the user to regions of the visibility polygon in the physical environment that closely match the region that the user occupies in the visibility polygon in the virtual environment. We show that our algorithm is able to steer the user along paths that result in significantly fewer resets than existing state-of-the-art algorithms in both static and dynamic scenes. Our project website is available at https://gamma.umd.edu/vis.poly/.

14.
IEEE Trans Vis Comput Graph ; 27(6): 2967-2979, 2021 06.
Article in English | MEDLINE | ID: mdl-31751243

ABSTRACT

We present a data-driven algorithm for generating gaits of virtual characters with varying dominance traits. Our formulation utilizes a user study to establish a data-driven dominance mapping between gaits and dominance labels. We use our dominance mapping to generate walking gaits for virtual characters that exhibit a variety of dominance traits while interacting with the user. Furthermore, we extract gait features based on known criteria in the visual perception and psychology literature that can be used to identify the dominance level of any walking gait. We validate our mapping and the perceived dominance traits with a second user study in an immersive virtual environment. Our gait dominance classification algorithm can classify the dominance traits of gaits with ~73 percent accuracy. We also present an application of our approach that simulates interpersonal relationships between virtual characters. To the best of our knowledge, ours is the first practical approach to classifying gait dominance and generating dominance traits in virtual characters.


Subjects
Computer Graphics, Gait Analysis, Psychological Models, Social Dominance, Virtual Reality, Adult, Algorithms, Female, Humans, Image Processing, Computer-Assisted, Learning, Male, Walking/physiology, Walking/psychology, Young Adult
15.
PLoS One ; 16(12): e0259713, 2021.
Article in English | MEDLINE | ID: mdl-34851982

ABSTRACT

Observing social/physical distancing norms between humans has become an indispensable precaution to slow down the transmission of COVID-19. We present a novel method to automatically detect pairs of humans in a crowded scenario who are not maintaining social distancing, i.e., about 2 meters of space between them, using an autonomous mobile robot and existing CCTV (closed-circuit television) cameras. The robot is equipped with commodity sensors, namely an RGB-D (red green blue-depth) camera and a 2-D lidar, to detect social distancing breaches within their sensing range and navigate toward the location of the breach. Moreover, it discreetly alerts the relevant people to move apart by using a mounted display. In addition, we equip the robot with a thermal camera that transmits thermal images to security/healthcare personnel who monitor for COVID-19 symptoms such as fever. In indoor scenarios, we integrate the mobile robot setup with a static wall-mounted CCTV camera to further increase the number of social distancing breaches detected and to accurately pursue walking groups of people. We highlight the performance benefits of our robot + CCTV approach in different static and dynamic indoor scenarios.
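Once people have been localized, detecting a breach reduces to a pairwise distance check on ground-plane positions, as in the short sketch below. The 2-meter threshold comes from the abstract, while the detection and projection of people to ground-plane coordinates (from the RGB-D camera, lidar, or CCTV view) is assumed to happen upstream.

```python
import numpy as np
from itertools import combinations

def find_breaches(positions, min_dist=2.0):
    """Return index pairs of detected people closer than the distancing norm.

    positions: (N, 2) array of ground-plane locations in meters.
    """
    positions = np.asarray(positions, dtype=float)
    breaches = []
    for i, j in combinations(range(len(positions)), 2):
        if np.linalg.norm(positions[i] - positions[j]) < min_dist:
            breaches.append((i, j))
    return breaches

# Example: three people, two of them standing 1.5 m apart.
print(find_breaches([[0.0, 0.0], [1.5, 0.0], [5.0, 5.0]]))   # [(0, 1)]
```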


Subjects
COVID-19/prevention & control, Environmental Monitoring/methods, Physical Distancing, Robotics, Algorithms, COVID-19/transmission, COVID-19/virology, Environmental Monitoring/instrumentation, Humans, Photography, SARS-CoV-2/isolation & purification
16.
IEEE Trans Vis Comput Graph ; 27(3): 1953-1966, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31613770

ABSTRACT

Interactive multi-agent simulation algorithms are used to compute the trajectories and behaviors of different entities in virtual reality scenarios. However, current methods involve considerable parameter tweaking to generate plausible behaviors. We introduce a novel approach (Heter-Sim) that combines physics-based simulation methods with data-driven techniques using an optimization-based formulation. Our approach is general and can simulate heterogeneous agents corresponding to human crowds, traffic, vehicles, or combinations of different agents with varying dynamics. We estimate motion states from real-world datasets that include information about position, velocity, and control direction. Our optimization algorithm considers several constraints, including velocity continuity, collision avoidance, attraction, and direction control, and enforces them through a novel energy function that controls the motions of heterogeneous agents. To accelerate the computations, we reduce the search space for both collision avoidance and optimal solution computation. Heter-Sim can simulate tens or hundreds of agents at interactive rates, and we compare its accuracy with real-world datasets and prior algorithms. We also perform user studies, including one conducted in VR, that evaluate the plausibility of the behaviors generated by our algorithm.
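A per-agent energy of the general shape described above can be sketched as follows: a velocity-continuity term, a data term pulling toward the motion state estimated from real-world datasets, and a collision term that penalizes predicted proximity to neighbors. The specific term forms, weights, and time step below are assumptions for illustration, not Heter-Sim's actual energy function.

```python
import numpy as np

def agent_energy(v_new, v_prev, v_data, rel_pos, rel_vel, dt=0.1,
                 w_cont=1.0, w_data=2.0, w_coll=0.5):
    """Toy energy for one agent's candidate velocity v_new (2-D numpy vectors).

    rel_pos[i], rel_vel[i]: neighbor i's position and velocity relative to the
    agent. The collision term penalizes small predicted separations after dt.
    """
    e_cont = np.sum((v_new - v_prev) ** 2)        # velocity continuity
    e_data = np.sum((v_new - v_data) ** 2)        # stay near the data-driven state
    e_coll = 0.0
    for p, v in zip(rel_pos, rel_vel):
        predicted_sep = p + (v - v_new) * dt      # relative position after dt
        e_coll += 1.0 / (np.dot(predicted_sep, predicted_sep) + 1e-3)
    return w_cont * e_cont + w_data * e_data + w_coll * e_coll
```

In practice one would minimize such an energy over candidate velocities for every agent at each simulation step.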

17.
Sci Robot ; 6(55)2021 06 30.
Article in English | MEDLINE | ID: mdl-34193561

ABSTRACT

Excavators are widely used for material handling applications in unstructured environments, including mining and construction. Operating excavators in a real-world environment can be challenging due to extreme conditions, such as rock sliding, ground collapse, or excessive dust, and can result in fatalities and injuries. Here, we present an autonomous excavator system (AES) for material loading tasks. Our system can handle different environments and uses an architecture that combines perception and planning. We fuse multimodal perception sensors, including LiDAR and cameras, along with advanced image enhancement, material and texture classification, and object detection algorithms. We also present hierarchical task and motion planning algorithms that combine learning-based techniques with optimization-based methods and are tightly integrated with the perception and controller modules. We have evaluated AES performance on compact and standard excavators in many complex indoor and outdoor scenarios corresponding to material loading into dump trucks, waste material handling, rock capturing, pile removal, and trenching tasks. We demonstrate that our architecture improves efficiency and autonomously handles different scenarios. AES has been deployed for real-world operations for long periods and can operate robustly in challenging scenarios. AES achieves 24 hours per intervention, i.e., the system can continuously operate for 24 hours without any human intervention. Moreover, the amount of material handled by AES per hour is close to that of an experienced human operator.

18.
IEEE Trans Vis Comput Graph ; 27(11): 4107-4118, 2021 11.
Article in English | MEDLINE | ID: mdl-34449365

ABSTRACT

We present a CPU-based real-time cloth animation method for dressing virtual humans of various shapes and poses. Our approach formulates the clothing deformation as a high-dimensional function of body shape parameters and pose parameters. In order to accelerate the computation, our formulation factorizes the clothing deformation into two independent components: the deformation introduced by body pose variation (Clothing Pose Model) and the deformation from body shape variation (Clothing Shape Model). Furthermore, we sample and cluster the poses spanning the entire pose space and use those clusters to efficiently calculate the anchoring points. We also introduce a sensitivity-based distance measurement to both find nearby anchoring points and evaluate their contributions to the final animation. Given a query shape and pose of the virtual agent, we synthesize the resulting clothing deformation by blending the Taylor expansion results of nearby anchoring points. Compared to previous methods, our approach is general and able to add the shape dimension to any clothing pose model. Furthermore, we can animate clothing represented with tens of thousands of vertices at 50+ FPS on a CPU. We also conduct a user evaluation and show that our method can improve a user's perception of dressed virtual agents in an immersive virtual environment (IVE) compared to a realtime linear blend skinning method.
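The synthesis step described above, blending Taylor expansions from nearby anchoring points, can be sketched as below. The data layout is assumed: each anchor is taken to store its reference pose, the clothing deformation at that pose, and a Jacobian of the deformation with respect to the pose parameters, and the paper's sensitivity-based distance is approximated here by a plain Euclidean distance in pose-parameter space.

```python
import numpy as np

def blend_anchor_deformations(query_pose, anchors, k=4):
    """Blend first-order Taylor expansions from the k nearest anchoring poses.

    query_pose: (P,) pose-parameter vector.
    anchors: list of dicts with keys "theta" (P,), "d0" (V, 3), "J" (V*3, P).
    Returns the blended clothing deformation as a (V, 3) array.
    """
    dists = np.array([np.linalg.norm(query_pose - a["theta"]) for a in anchors])
    idx = np.argsort(dists)[:k]
    weights = 1.0 / (dists[idx] + 1e-6)          # inverse-distance weights
    weights /= weights.sum()
    blended = np.zeros(anchors[idx[0]]["d0"].size)
    for w, i in zip(weights, idx):
        a = anchors[i]
        taylor = a["d0"].ravel() + a["J"] @ (query_pose - a["theta"])
        blended += w * taylor
    return blended.reshape(-1, 3)
```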


Subjects
Computer Graphics, Humans
19.
IEEE Trans Vis Comput Graph ; 26(5): 1991-2001, 2020 May.
Article in English | MEDLINE | ID: mdl-32070967

ABSTRACT

We present a new method to capture the acoustic characteristics of real-world rooms using commodity devices, and use the captured characteristics to generate similar sounding sources with virtual models. Given the captured audio and an approximate geometric model of a real-world room, we present a novel learning-based method to estimate its acoustic material properties. Our approach is based on deep neural networks that estimate the reverberation time and equalization of the room from recorded audio. These estimates are used to compute material properties related to room reverberation using a novel material optimization objective. We use the estimated acoustic material characteristics for audio rendering using interactive geometric sound propagation and highlight the performance on many real-world scenarios. We also perform a user study to evaluate the perceptual similarity between the recorded sounds and our rendered audio.
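One concrete way to turn an estimated reverberation time into absorption coefficients is to invert Sabine's formula, RT60 = 0.161 * V / sum(S_i * alpha_i), with a bounded least-squares fit, as sketched below. This is a heavily simplified single-band stand-in for the paper's material optimization objective, which works per frequency band alongside the equalization estimates from the neural network.

```python
import numpy as np
from scipy.optimize import minimize

def fit_absorption(rt60_target, surface_areas, volume, alpha0=None):
    """Fit per-surface absorption coefficients so Sabine's RT60 matches a target.

    rt60_target: desired reverberation time in seconds (e.g., from a DNN estimate).
    surface_areas: iterable of surface areas S_i in m^2; volume: room volume in m^3.
    """
    S = np.asarray(surface_areas, dtype=float)
    x0 = np.full(len(S), 0.2) if alpha0 is None else np.asarray(alpha0, dtype=float)

    def loss(alpha):
        rt60 = 0.161 * volume / np.dot(S, alpha)   # Sabine's formula
        return (rt60 - rt60_target) ** 2

    res = minimize(loss, x0, bounds=[(0.01, 0.99)] * len(S))
    return res.x

# Example: fit a 0.6 s RT60 for a 100 m^3 room with four wall groups.
print(fit_absorption(0.6, [30.0, 30.0, 25.0, 25.0], 100.0))
```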

20.
IEEE Trans Vis Comput Graph ; 26(5): 1902-1911, 2020 05.
Article in English | MEDLINE | ID: mdl-32070980

ABSTRACT

We conduct novel analyses of users' gaze behaviors in dynamic virtual scenes and, based on our analyses, we present a novel CNN-based model called DGaze for gaze prediction in HMD-based applications. We first collect 43 users' eye tracking data in 5 dynamic scenes under free-viewing conditions. Next, we perform statistical analysis of our data and observe that dynamic object positions, head rotation velocities, and salient regions are correlated with users' gaze positions. Based on our analysis, we present a CNN-based model (DGaze) that combines object position sequence, head velocity sequence, and saliency features to predict users' gaze positions. Our model can be applied to predict not only real-time gaze positions but also gaze positions in the near future, and it achieves better performance than the prior method. In terms of real-time prediction, DGaze achieves a 22.0% improvement over the prior method in dynamic scenes and an improvement of 9.5% in static scenes, using angular distance as the evaluation metric. We also propose a variant of our model called DGaze_ET that can predict future gaze positions with higher precision by combining accurate past gaze data gathered using an eye tracker. We further analyze our CNN architecture and verify the effectiveness of each component in our model. We apply DGaze to gaze-contingent rendering and a game, and also present the evaluation results from a user study.
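To make the input/output structure concrete, the PyTorch sketch below encodes a short history of dynamic-object positions and head velocities with a small 1-D CNN and concatenates a saliency feature vector before regressing a 2-D gaze position. All layer sizes, the sequence length, and the feature dimensions are assumptions; the published DGaze architecture and its DGaze_ET variant differ in detail.

```python
import torch
import torch.nn as nn

class GazePredictor(nn.Module):
    """Toy DGaze-style model: sequence encoder + saliency fusion -> 2-D gaze output."""

    def __init__(self, obj_dim=2, head_dim=3, sal_dim=24):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(obj_dim + head_dim, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Sequential(nn.Linear(32 + sal_dim, 64), nn.ReLU(),
                                  nn.Linear(64, 2))

    def forward(self, obj_seq, head_seq, saliency):
        # obj_seq: (B, T, 2), head_seq: (B, T, 3), saliency: (B, sal_dim)
        x = torch.cat([obj_seq, head_seq], dim=-1).transpose(1, 2)   # (B, C, T)
        feat = self.encoder(x).squeeze(-1)                           # (B, 32)
        return self.head(torch.cat([feat, saliency], dim=-1))        # (B, 2)

# Example forward pass over a 50-frame history for a batch of 2.
model = GazePredictor()
gaze = model(torch.randn(2, 50, 2), torch.randn(2, 50, 3), torch.randn(2, 24))
```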


Subjects
Computer Graphics, Eye-Tracking Technology, Fixation, Ocular/physiology, Neural Networks, Computer, Adolescent, Adult, Female, Humans, Male, Software, Task Performance and Analysis, Young Adult