Results 1 - 7 of 7
1.
Opt Express; 25(16): 19085-19093, 2017 Aug 07.
Article in English | MEDLINE | ID: mdl-29041102

ABSTRACT

By analyzing the structure of a phase-shifted helical long-period fiber grating (HLPG) fabricated by the thermal-twisting method, we have shown that a phase-diffusion effect arises when the heated region is larger than the grating period itself: a phase preset at a particular period diffuses into several neighboring periods, causing large distortion of the transmission spectrum. We have proved analytically that this phase-diffusion effect can be quantified as the convolution of the preset phase function with a phase-diffusion function in the spatial domain. Based on these analytical results, we have proposed and successfully demonstrated a pre-compensation method that corrects for the phase-diffusion effect. As an example, we present a phase-shifted HLPG with a π phase shift precisely inserted at the middle of the grating.
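A minimal numerical sketch of the convolution model the abstract describes; the Gaussian kernel, the grating parameters, and the Wiener-style deconvolution used for pre-compensation are illustrative assumptions, not values or steps taken from the paper:

    import numpy as np

    period = 600e-6                             # assumed grating period (m)
    z = np.arange(2000) * period / 10           # spatial axis, 10 samples per period
    phi_preset = np.where(z > z.mean(), np.pi, 0.0)   # pi phase step at the middle

    # Assumed Gaussian diffusion kernel wider than one period (the heated region).
    sigma = 2.0 * period
    kernel = np.exp(-0.5 * ((z - z.mean()) / sigma) ** 2)
    kernel /= kernel.sum()

    # Phase diffusion: the inserted phase spreads over several neighboring
    # periods, which is what distorts the transmission spectrum.
    phi_eff = np.convolve(phi_preset, kernel, mode="same")

    # Pre-compensation: regularized (Wiener-style) deconvolution, so that the
    # pre-compensated profile, once diffused, reproduces the intended sharp step.
    H = np.fft.fft(np.fft.ifftshift(kernel))
    phi_pre = np.real(np.fft.ifft(np.fft.fft(phi_preset) * np.conj(H)
                                  / (np.abs(H) ** 2 + 1e-3)))

Writing phi_pre instead of phi_preset should approximately recover the sharp π step after thermal diffusion, which is the essence of the pre-compensation idea.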

2.
Opt Express; 25(7): 7402-7407, 2017 Apr 03.
Article in English | MEDLINE | ID: mdl-28380862

ABSTRACT

We demonstrate a simple and robust method to write a phase-shifted helical long-period fiber grating (HLPG), in which an equivalent phase shift is formed by changing the local period of the grating during fabrication. Furthermore, we propose and demonstrate a simple method to characterize the phase shift formed in an HLPG by directly analyzing the image of the fabricated grating under a stereo microscope with white-light illumination. Unlike previous methods, which work indirectly by measuring the transmission spectrum of the fabricated HLPG or by analyzing differential interference contrast (DIC) microscopic images, the proposed method can estimate both the grating period and the inserted phase shift in situ, which could considerably facilitate CO2-laser fabrication of HLPGs.
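As a rough illustration of this kind of in-situ image analysis (the abstract does not specify the procedure; the profile extraction and the Hilbert-transform step below are assumptions), one could estimate the period from the dominant spatial frequency of an intensity profile taken along the fiber axis, and look for an inserted phase shift as a step in the detrended instantaneous phase:

    import numpy as np
    from scipy.signal import hilbert

    def period_and_phase(profile, px_size):
        """profile: 1-D intensity along the fiber axis; px_size: meters per pixel."""
        x = profile - profile.mean()
        spec = np.abs(np.fft.rfft(x))
        k = spec[1:].argmax() + 1                # dominant spatial-frequency bin
        period = len(x) * px_size / k            # estimated grating period

        # The helical pattern is quasi-sinusoidal; an inserted phase shift
        # shows up as a step in the detrended instantaneous phase.
        phase = np.unwrap(np.angle(hilbert(x)))
        idx = np.arange(len(x))
        trend = np.polyval(np.polyfit(idx, phase, 1), idx)
        return period, phase - trend

A π shift inserted at the middle of the grating would then appear as a step of roughly π in the returned residual phase.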

3.
IEEE Trans Pattern Anal Mach Intell; 38(6): 1070-83, 2016 Jun.
Article in English | MEDLINE | ID: mdl-26372209

ABSTRACT

Head pose estimation (HPE) from low-resolution surveillance data has recently gained importance. However, monocular and multi-view HPE approaches still perform poorly under target motion, as facial appearance distorts with camera perspective and scale changes when a person moves around. To address this, we propose FEGA-MTL, a novel framework based on Multi-Task Learning (MTL) for classifying the head pose of a person who moves freely in an environment monitored by multiple large-field-of-view surveillance cameras. After partitioning the monitored scene into a dense uniform spatial grid, FEGA-MTL simultaneously clusters grid partitions into regions with similar facial appearance while learning region-specific head pose classifiers. In the learning phase, guided by two graphs that a priori model the similarity among (1) grid partitions based on camera geometry and (2) head pose classes, FEGA-MTL derives the optimal scene partitioning and the associated pose classifiers. At test time, once a person tracker determines the target's position, the corresponding region-specific classifier is invoked for HPE. The framework extends naturally to a weakly supervised setting in which the target's walking direction serves as a proxy for head orientation. Experiments confirm that FEGA-MTL significantly outperforms competing single-task and multi-task learning methods in multi-view settings.
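The test-time logic the abstract describes, position to grid cell to appearance region to region-specific classifier, can be sketched as follows; the class, its attribute names, and the sklearn-style classifier interface are hypothetical, with the cell-to-region map and per-region classifiers assumed to come out of FEGA-MTL's training phase:

    import numpy as np

    class RegionPoseEstimator:
        """Hypothetical test-time wrapper around a learned FEGA-MTL model."""

        def __init__(self, grid_shape, scene_size, cell_to_region, classifiers):
            self.rows, self.cols = grid_shape     # dense uniform spatial grid
            self.scene_w, self.scene_h = scene_size
            self.cell_to_region = cell_to_region  # learned cell -> region map
            self.classifiers = classifiers        # region id -> pose classifier

        def predict(self, xy, face_feature):
            # Quantize the tracker's ground-plane (x, y) into a grid cell.
            c = min(int(xy[0] / self.scene_w * self.cols), self.cols - 1)
            r = min(int(xy[1] / self.scene_h * self.rows), self.rows - 1)
            region = self.cell_to_region[r * self.cols + c]
            # Invoke the region-specific classifier (sklearn-style interface
            # assumed), so position-dependent appearance is accounted for.
            return self.classifiers[region].predict(face_feature[None, :])[0]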


Subjects
Algorithms, Head, Motion, Humans, Learning, Orientation
4.
IEEE Trans Pattern Anal Mach Intell; 38(8): 1707-20, 2016 Aug.
Article in English | MEDLINE | ID: mdl-26540677

ABSTRACT

Studying free-standing conversational groups (FCGs) in unstructured social settings (e.g., a cocktail party) is rewarding due to the wealth of information available at the group level (mining social networks) and the individual level (recognizing behavioral and personality traits). However, analyzing social scenes involving FCGs is also highly challenging, since crowding and extreme occlusions make it difficult to extract behavioral cues such as target locations, speaking activity, and head/body pose. To this end, we propose SALSA, a novel dataset facilitating multimodal and Synergetic sociAL Scene Analysis, and make two main contributions to research on automated social interaction analysis: (1) SALSA records social interactions among 18 participants in a natural indoor environment for over 60 minutes, in poster-presentation and cocktail-party contexts that present difficulties in the form of low-resolution images, lighting variations, numerous occlusions, reverberations, and interfering sound sources; (2) to alleviate these problems, we facilitate multimodal analysis by recording the social interplay with four static surveillance cameras and sociometric badges worn by each participant, comprising microphone, accelerometer, Bluetooth, and infrared sensors. In addition to the raw data, we provide annotations of each individual's personality as well as their position, head and body orientation, and F-formation membership over the entire event. Through extensive experiments with state-of-the-art approaches, we show (a) the limitations of current methods and (b) how the recorded cues synergetically aid automatic analysis of social interactions. SALSA is available at http://tev.fbk.eu/salsa.
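A hypothetical record layout (not SALSA's actual file format) that summarizes the modalities and annotations listed above, one record per participant per timestamp:

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class SalsaFrame:
        timestamp: float                    # seconds from event start
        participant_id: int                 # 1..18
        position: Tuple[float, float]       # annotated ground-plane (x, y)
        head_orientation: float             # annotated, radians
        body_orientation: float             # annotated, radians
        f_formation: Optional[int]          # F-formation id, or None
        audio_energy: float                 # badge microphone
        accel: Tuple[float, float, float]   # badge accelerometer
        bluetooth_ids: List[int]            # nearby badges (proximity)
        infrared_ids: List[int]             # badges faced (frontal interaction)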


Subjects
Algorithms, Datasets as Topic, Group Processes, Automated Pattern Recognition, Social Behavior, Adult, Cues, Female, Humans, Interpersonal Relations, Lighting, Male, Video Recording, Young Adult
5.
IEEE Trans Image Process; 23(12): 5599-611, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25361507

ABSTRACT

Robust action recognition under viewpoint changes has received considerable attention recently. To this end, self-similarity matrices (SSMs) have proven to be effective view-invariant action descriptors. To enhance the performance of SSM-based methods, we propose multitask linear discriminant analysis (LDA), a novel multitask learning framework for multiview action recognition that allows discriminative SSM features to be shared among different views (i.e., tasks). Inspired by the mathematical connection between multivariate linear regression and LDA, we model multitask multiclass LDA as a single optimization problem by choosing an appropriate class indicator matrix. In particular, we propose two variants of graph-guided multitask LDA: 1) one in which the graph weights specifying view dependencies are fixed a priori, and 2) one in which the graph weights are learned flexibly from the training data. We evaluate the proposed methods extensively on multiview RGB and RGB-D video datasets, and the experimental results confirm that they compare favorably with the state of the art.
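In its regression form, the fixed-graph variant (1) can be sketched as per-view projections W_v fit to a class indicator matrix and coupled by graph weights A[u, v]; the indicator construction and the plain gradient-descent solver below are illustrative assumptions, not the paper's algorithm:

    import numpy as np

    def graph_mtl_lda(Xs, Ys, A, lam=1.0, lr=1e-4, iters=500):
        """Xs[v]: (n_v, d) SSM features for view v; Ys[v]: (n_v, c) class
        indicator matrix; A: (V, V) fixed nonnegative view-dependency weights."""
        V, d, c = len(Xs), Xs[0].shape[1], Ys[0].shape[1]
        Ws = [np.zeros((d, c)) for _ in range(V)]
        for _ in range(iters):
            for v in range(V):
                # Least-squares fit of the indicator matrix (regression-LDA link).
                grad = Xs[v].T @ (Xs[v] @ Ws[v] - Ys[v])
                # Graph term pulls projections of dependent views together.
                for u in range(V):
                    grad += lam * A[u, v] * (Ws[v] - Ws[u])
                Ws[v] = Ws[v] - lr * grad
        return Ws

For a suitably scaled one-hot indicator matrix, the least-squares fit recovers the LDA projection up to an invertible transform, which is the regression-LDA connection the abstract invokes.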

6.
J Vis; 14(3): 31, 2014 Mar 26.
Article in English | MEDLINE | ID: mdl-24672021

ABSTRACT

A basic question in vision research is where people look in complex scenes and how this influences their performance in various tasks. Previous studies with static images have demonstrated a close link between where people look and what they remember. Here, we examined eye-movement patterns while participants watched neutral and emotional clips from Hollywood-style movies. Participants answered multiple-choice memory questions about visual and auditory scene details immediately after viewing 1-minute-long neutral or emotional movie clips. Fixations were more narrowly focused for emotional clips, and immediate memory for object details was worse than for matched neutral scenes, implying preferential attention to emotional events. Although we found the expected correlation between where people looked and what they remembered for neutral clips, this relationship broke down for emotional clips. Participants were subsequently shown key frames (static images) extracted from the clips, with the presentation duration of the target objects (TOs) corresponding to the multiple-choice questions matched across conditions, and the earlier questions were repeated; more fixations then landed on the TOs, and memory performance improved significantly, confirming that emotion modulates the relationship between gaze position and memory performance. Finally, in a long-term memory test, old/new recognition performance was significantly better for emotional than for neutral scenes. Overall, these results are consistent with the hypothesis that emotional content draws eye fixations and strengthens memory for the scene gist while weakening encoding of peripheral scene details.


Subjects
Emotions/physiology, Eye Movements/physiology, Long-Term Memory/physiology, Short-Term Memory/physiology, Motion Pictures, Adult, Attention, Female, Humans, Male, Young Adult
7.