Results 1 - 6 of 6
1.
J Behav Med; 42(5): 973-983, 2019 Oct.
Article in English | MEDLINE | ID: mdl-30790211

ABSTRACT

Hyperarousal is a critical component of insomnia, particularly at bedtime when individuals are trying to fall asleep. The current study evaluated the effect of a novel, acute behavioral experimental manipulation (combined immersive audio-visual relaxation and biofeedback) in reducing bedtime physiological hyperarousal in women with insomnia symptoms. After a clinical/adaptation polysomnographic (PSG) night, sixteen women with insomnia symptoms had two random-order PSG nights: immersive audio-visual respiratory biofeedback across the falling-asleep period (manipulation night), and no pre-sleep arousal manipulation (control night). While participants used immersive audio-visual respiratory biofeedback, overall heart rate variability was increased and heart rate (HR) was reduced (by ~ 5 bpm; p < 0.01) relative to the control night, reflecting downregulation of pre-sleep autonomic arousal. HR remained lower during sleep, and participants had fewer awakenings and sleep stage transitions on the manipulation night than on the control night (p < 0.05). The manipulation did not affect sleep onset latency or other PSG parameters. Overall, this novel behavioral approach targeting the falling-asleep process underscores pre-sleep hyperarousal as a potential target for improving sleep and nocturnal autonomic function during sleep in insomnia.


Subjects
Arousal/physiology , Biofeedback, Psychology/methods , Feedback, Sensory , Sleep Initiation and Maintenance Disorders/therapy , Adult , Female , Heart Rate/physiology , Humans , Middle Aged , Pilot Projects , Polysomnography , Sleep Initiation and Maintenance Disorders/physiopathology , Young Adult
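
As a concrete illustration of the outcome metrics named above (not the study's actual analysis pipeline), the sketch below derives mean heart rate and RMSSD, a common time-domain HRV index, from R-R (inter-beat) intervals such as one might extract from the PSG's ECG channel. All interval values are invented for illustration.

```python
# Hypothetical illustration only: summarizing pre-sleep autonomic state
# from R-R (inter-beat) intervals.
import numpy as np

def hr_and_rmssd(rr_ms):
    """Mean heart rate (bpm) and RMSSD (ms), a time-domain HRV index."""
    rr = np.asarray(rr_ms, dtype=float)
    mean_hr_bpm = 60000.0 / rr.mean()           # 60,000 ms per minute / mean R-R
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))  # root mean square of successive differences
    return mean_hr_bpm, rmssd

# Invented values: longer, more variable R-R intervals on the manipulation
# night show up as lower HR and higher RMSSD, the direction of the reported effect.
control = [850, 860, 845, 870, 855]
manipulation = [930, 960, 925, 975, 940]
print(hr_and_rmssd(control), hr_and_rmssd(manipulation))
```
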
2.
IEEE Trans Vis Comput Graph; 27(11): 4236-4244, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34449369

ABSTRACT

Proper occlusion-based rendering is essential for realism in indoor and outdoor Augmented Reality (AR) applications. This paper addresses the problem of fast and accurate dynamic occlusion reasoning by real objects in the scene for large-scale outdoor AR applications. Conceptually, proper occlusion reasoning requires an estimate of depth for every point in the augmented scene, which is technically hard to achieve in outdoor scenarios, especially in the presence of moving objects. We propose a method to detect real objects in the scene and automatically infer their depth without explicit detailed scene modeling and depth sensing (e.g., without sensors such as 3D LiDAR). Specifically, we employ instance segmentation of color image data to detect real dynamic objects in the scene and use either a top-down terrain elevation model or a deep-learning-based monocular depth estimation model to infer their metric distance from the camera for proper occlusion reasoning in real time. The realized solution is implemented in a low-latency real-time framework for video-see-through AR and is directly extendable to optical-see-through AR. We minimize latency in depth reasoning and occlusion rendering by performing semantic object tracking and prediction in video frames.
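
A minimal sketch of the per-object occlusion test described above, assuming instance masks and metric depth are already available from off-the-shelf models; the median heuristic and names here are illustrative, not the paper's exact procedure.

```python
# Minimal sketch, assuming per-frame inputs are already available:
#   seg_masks:  list of boolean (H, W) instance masks from a segmentation model
#   depth_map:  (H, W) metric depth from a monocular depth model or terrain lookup
#   virt_depth: (H, W) depth buffer of the rendered virtual content
import numpy as np

def occlusion_mask(seg_masks, depth_map, virt_depth):
    """Pixels where a real object should occlude (hide) the virtual content."""
    occluded = np.zeros(depth_map.shape, dtype=bool)
    for mask in seg_masks:
        obj_depth = np.median(depth_map[mask])       # robust per-object depth
        occluded |= mask & (obj_depth < virt_depth)  # nearer real object wins
    return occluded
```
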

3.
IEEE Trans Vis Comput Graph; 21(5): 611-23, 2015 May.
Article in English | MEDLINE | ID: mdl-26357208

ABSTRACT

In this paper we present an augmented reality binocular system that allows long-range, high-precision augmentation of live telescopic imagery with aerial and terrain-based synthetic objects, vehicles, people, and effects. The inserted objects must appear stable in the display and must not jitter or drift as the user pans around and examines the scene with the binoculars. The design of the system is based on two cameras, with wide-field-of-view and narrow-field-of-view lenses, enclosed in a binocular-shaped shell. The wide field of view provides context and enables us to recover the 3D location and orientation of the binoculars much more robustly, whereas the narrow field of view is used for the actual augmentation as well as to increase tracking precision. We present a navigation algorithm that combines the two cameras with an inertial measurement unit and global positioning system in an extended Kalman filter and provides jitter-free, robust, real-time pose estimation for precise augmentation. We have demonstrated successful use of our system as part of an information-sharing example as well as a live simulated training system for observer training, in which fixed- and rotary-wing aircraft, ground vehicles, and weapon effects are combined with real-world scenes.
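
For readers unfamiliar with the filtering machinery, the skeleton below shows a generic extended Kalman filter predict/update cycle of the kind such a navigation algorithm builds on. The motion and measurement models (f, F, h, H) and noise covariances (Q, R) are placeholders; the paper's specific state parameterization is not reproduced here.

```python
# Schematic EKF predict/update cycle; all models are caller-supplied placeholders.
import numpy as np

def ekf_predict(x, P, f, F, Q):
    x = f(x)                        # propagate state, e.g. by integrating IMU readings
    P = F @ P @ F.T + Q             # propagate covariance
    return x, P

def ekf_update(x, P, z, h, H, R):
    y = z - h(x)                    # innovation: GPS fix or visual-feature residual
    S = H @ P @ H.T + R             # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y                   # corrected state estimate
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```
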

4.
IEEE Trans Pattern Anal Mach Intell; 36(11): 2241-54, 2014 Nov.
Article in English | MEDLINE | ID: mdl-26353064

ABSTRACT

This paper presents a novel approach to recovering estimates of 3D structure and motion of a dynamic scene from a sequence of binocular stereo images. The approach is based on matching spatiotemporal orientation distributions between left and right temporal image streams, which encapsulate both local spatial and temporal structure for disparity estimation. By capturing spatial and temporal structure in this unified fashion, both sources of information combine to yield disparity estimates that are naturally temporally coherent, while helping to resolve matches that would be ambiguous if either source were considered alone. Further, by allowing subsets of the orientation measurements to support different disparity estimates, an approach to recovering multilayer disparity from spacetime stereo is realized. Similarly, the matched distributions allow for direct recovery of dense, robust estimates of 3D scene flow. The approach has been implemented with real-time performance on commodity GPUs using OpenCL. Empirical evaluation shows that the proposed approach yields qualitatively and quantitatively superior estimates in comparison to various alternative approaches, including the ability to provide accurate multilayer estimates in the presence of (semi)transparent and specular surfaces.
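
A toy illustration of distribution matching for disparity, assuming the spacetime orientation energies have already been computed and normalized (the 3D oriented filtering is where the paper's actual machinery lives); a brute-force winner-take-all matcher might look like this:

```python
# energies_L and energies_R are assumed (H, W, K) arrays of K normalized
# spacetime orientation energies per pixel for the left and right streams.
import numpy as np

def disparity_from_energies(energies_L, energies_R, max_disp):
    H, W, K = energies_L.shape
    cost = np.full((H, W, max_disp + 1), np.inf)
    for d in range(max_disp + 1):
        # Left pixel x is compared against right pixel x - d.
        diff = energies_L[:, d:, :] - energies_R[:, :W - d, :]
        cost[:, d:, d] = np.sum(diff ** 2, axis=-1)  # distance between distributions
    return np.argmin(cost, axis=-1)                  # per-pixel disparity estimate
```
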

5.
IEEE Trans Pattern Anal Mach Intell; 35(3): 527-40, 2013 Mar.
Article in English | MEDLINE | ID: mdl-26353139

ABSTRACT

This paper provides a unified framework for the interrelated topics of action spotting, the spatiotemporal detection and localization of human actions in video, and action recognition, the classification of a given video into one of several predefined categories. A novel compact local descriptor of video dynamics in the context of action spotting and recognition is introduced based on visual spacetime oriented energy measurements. This descriptor is computed efficiently and directly from raw image intensity data and thereby avoids the problems typically associated with flow-based features. Importantly, the descriptor allows for the comparison of the underlying dynamics of two spacetime video segments irrespective of spatial appearance, such as differences induced by clothing, and with robustness to clutter. An associated similarity measure is introduced that admits efficient exhaustive search for an action template, derived from a single exemplar video, across candidate video sequences. The general approach presented for action spotting and recognition is amenable to efficient implementation, which is deemed critical for many important applications. For action spotting, details of a real-time GPU-based instantiation of the proposed approach are provided. Empirical evaluation of both action spotting and action recognition on challenging datasets suggests the efficacy of the proposed approach, with state-of-the-art performance documented on standard datasets.


Subjects
Human Activities/classification , Image Processing, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Spatio-Temporal Analysis , Algorithms , Databases, Factual , Humans , Video Recording
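
To make the template-search idea concrete, here is a deliberately naive sketch of exhaustive spotting over a video descriptor volume, using a Bhattacharyya-style overlap between energy distributions as an assumed stand-in for the paper's similarity measure; the real system is a real-time GPU implementation, not this triple loop.

```python
# template and video are assumed descriptor volumes of nonnegative, per-pixel
# normalized oriented energies with shapes (t, h, w, K) and (T, H, W, K).
import numpy as np

def spot_action(template, video):
    t, h, w, K = template.shape
    T, H, W, _ = video.shape
    best_loc, best_score = None, -np.inf
    for z in range(T - t + 1):              # slide over time ...
        for y in range(H - h + 1):          # ... and space
            for x in range(W - w + 1):
                window = video[z:z+t, y:y+h, x:x+w]
                score = np.sum(np.sqrt(template * window))  # distribution overlap
                if score > best_score:
                    best_loc, best_score = (z, y, x), score
    return best_loc, best_score             # best spatiotemporal placement
```
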
6.
IEEE Trans Pattern Anal Mach Intell; 34(6): 1206-19, 2012 Jun.
Article in English | MEDLINE | ID: mdl-22516652

ABSTRACT

This paper is concerned with the recovery of temporally coherent estimates of 3D structure and motion of a dynamic scene from a sequence of binocular stereo images. A novel approach is presented based on matching spatiotemporal quadric elements (stequels) between views, as this primitive encapsulates both spatial and temporal image structure for 3D estimation. Match constraints are developed for bringing stequels into correspondence across binocular views. With correspondence established, temporally coherent disparity estimates are obtained without explicit motion recovery. Further, the matched stequels are shown to support direct recovery of scene flow estimates. Extensive algorithmic evaluation with ground truth data, incorporated in both local and global correspondence paradigms, shows the considerable benefit of using stequels as a matching primitive and their advantages in comparison to alternative methods of enforcing temporal coherence in disparity estimation. Additional experiments document the usefulness of stequel matching for 3D scene flow estimation.


Subjects
Algorithms , Imaging, Three-Dimensional/methods , Vision, Binocular , Depth Perception/physiology , Humans , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Spatio-Temporal Analysis , Tomography, Optical Coherence
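
As a rough intuition for the stequel primitive (the paper's actual quadric construction and match constraints are richer than this), one can picture a 3x3 spatiotemporal structure tensor aggregated over a local window, with correspondence between views scored by a simple matrix distance:

```python
# Illustrative sketch only: a stequel-like primitive as a local spatiotemporal
# structure tensor, built from (Ix, Iy, It) gradients of a (T, H, W) image stack.
import numpy as np

def stequel(volume, y, x, r=3):
    """Summarize local spacetime structure around pixel (y, x)."""
    It, Iy, Ix = np.gradient(volume.astype(float))  # gradients along t, y, x
    g = np.stack([Ix, Iy, It], axis=-1)             # (T, H, W, 3) gradient vectors
    patch = g[:, y-r:y+r+1, x-r:x+r+1].reshape(-1, 3)
    return patch.T @ patch                          # 3x3 quadric for the window

def match_cost(S_left, S_right):
    return np.linalg.norm(S_left - S_right)         # smaller = better correspondence
```
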