2.
Sensors (Basel); 23(20), 2023 Oct 22.
Article in English | MEDLINE | ID: mdl-37896723

ABSTRACT

The objective of vehicle search is to locate and identify vehicles in uncropped, real-world images; it combines two tasks, vehicle detection and re-identification (Re-ID). As an emerging research topic, vehicle search plays a significant role in perception for vehicle-road cooperative autonomous driving and is a growing direction in the development of intelligent driving. However, no suitable dataset exists for this problem. We therefore use the Tsinghua University DAIR-V2X dataset to build DAIR-V2XSearch, the first cross-camera vehicle search dataset, which combines vehicle-side and roadside cameras in real-world scenes. Existing search networks are designed primarily for pedestrian search; because the task scenario differs, the network must be redesigned to handle the large viewpoint differences that vehicle search introduces. We propose a phased feature extraction network (PFE-Net) for the cross-camera vehicle search problem. First, the anchor-free YOLOX framework is chosen as the backbone, which both improves detection performance and removes the ambiguity of multiple anchor boxes mapping to a single vehicle ID in the Re-ID branch. Second, for the vehicle Re-ID branch, a camera grouping module is proposed to handle abrupt viewpoint changes and shooting disparities across cameras. Finally, a cross-level feature fusion module is designed to strengthen the model's ability to extract subtle vehicle features and to improve Re-ID precision. Experiments show that the proposed PFE-Net achieves the highest precision on the DAIR-V2XSearch dataset.
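
The abstract names the components of PFE-Net but gives no implementation details. As a rough illustration only, the sketch below shows one common way a cross-level feature fusion module for Re-ID can be built on FPN-style backbone outputs: each pyramid level is projected to a shared width, spatially aligned, summed, and pooled into a single embedding. All class names, channel widths, and shapes are hypothetical assumptions, not the authors' code.

```python
# Hypothetical sketch of a cross-level feature fusion module for Re-ID.
# Assumes three FPN-style pyramid maps, as a YOLOX backbone would emit;
# every layer size below is an illustrative guess, not the paper's spec.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossLevelFusion(nn.Module):
    """Fuse shallow (fine-grained) and deep (semantic) feature maps
    into one Re-ID embedding per vehicle crop."""

    def __init__(self, in_channels=(256, 512, 1024), embed_dim=256):
        super().__init__()
        # 1x1 convs project every pyramid level to a common width.
        self.proj = nn.ModuleList(
            nn.Conv2d(c, embed_dim, kernel_size=1) for c in in_channels
        )
        self.bn = nn.BatchNorm1d(embed_dim)

    def forward(self, feats):
        # feats: list of maps (B, C_i, H_i, W_i), shallow to deep.
        target_size = feats[0].shape[-2:]
        fused = 0
        for f, proj in zip(feats, self.proj):
            f = proj(f)
            # Upsample deeper maps so all levels align spatially.
            if f.shape[-2:] != target_size:
                f = F.interpolate(f, size=target_size, mode="bilinear",
                                  align_corners=False)
            fused = fused + f
        # Global pooling yields one embedding per detected vehicle.
        emb = F.adaptive_avg_pool2d(fused, 1).flatten(1)
        return self.bn(emb)

# Dummy pyramid features for a batch of two vehicle crops:
feats = [torch.randn(2, 256, 32, 32),
         torch.randn(2, 512, 16, 16),
         torch.randn(2, 1024, 8, 8)]
print(CrossLevelFusion()(feats).shape)  # torch.Size([2, 256])
```

Summing aligned projections (rather than concatenating) keeps the embedding width fixed regardless of how many levels are fused; either choice is plausible given only the abstract.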

4.
Front Psychol; 3: 336, 2012.
Article in English | MEDLINE | ID: mdl-23055990

ABSTRACT

In natural vision, both stimulus features and cognitive/affective factors influence an observer's attention. However, the relationship between stimulus-driven ("bottom-up") and cognitive/affective ("top-down") factors remains controversial: can affective salience counteract strong visual stimulus signals and shift attention allocation irrespective of bottom-up features? Do negative and positive scenes differ in how they influence attention deployment? Here we examined the impact of affective factors on eye movement behavior to understand how visual stimulus-driven salience and affective salience compete to determine gaze allocation during complex scene viewing. Building on our previous research, we compared predictions generated by a visual salience model with measures indexing participant-identified emotionally meaningful regions of each image. To test whether eye movement behavior differs across negative, positive, and neutral scenes, we also assessed how strongly affective salience captures attention as a function of emotional valence. Taken together, our results show that affective salience can override stimulus-driven salience and that overall emotional valence can determine attention allocation in complex scenes. These findings are consistent with the hypothesis that cognitive/affective factors play a dominant role in active gaze control.
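
The abstract does not spell out the analysis pipeline; a standard way to compare the two salience accounts is to score how well each map predicts observed fixation locations, for example with a ROC-style area-under-the-curve measure. The sketch below illustrates that idea only; the function, its parameters, and the toy data are hypothetical stand-ins, not the authors' method.

```python
# Hypothetical sketch: score how well two priority maps (stimulus-driven
# visual salience vs. affective salience) predict observed fixations,
# using a simple ROC area-under-the-curve measure. Toy data only.
import numpy as np

def fixation_auc(priority_map, fixations, n_thresholds=100):
    """ROC AUC of a priority map as a detector of fixated pixels."""
    values = priority_map.ravel()
    fix_vals = np.array([priority_map[r, c] for r, c in fixations])
    thresholds = np.linspace(values.min(), values.max(), n_thresholds)
    # At each threshold: hit rate over fixated pixels, false-alarm rate
    # over all pixels; sweeping the threshold traces the ROC curve.
    tpr = np.array([(fix_vals >= t).mean() for t in thresholds])[::-1]
    fpr = np.array([(values >= t).mean() for t in thresholds])[::-1]
    # Trapezoidal integration of the ROC curve.
    return float(np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2))

# Toy scene: fixations cluster inside an "emotionally meaningful" patch.
rng = np.random.default_rng(0)
visual = rng.random((60, 80))                  # uninformative salience map
affective = np.zeros((60, 80))
affective[20:40, 30:50] = 1.0                  # participant-marked region
fixations = [(rng.integers(20, 40), rng.integers(30, 50))
             for _ in range(30)]
print(f"visual AUC:    {fixation_auc(visual, fixations):.2f}")     # ~0.5
print(f"affective AUC: {fixation_auc(affective, fixations):.2f}")  # high
```

An AUC near 0.5 means a map predicts fixations no better than chance, so a higher AUC for the affective map than for the visual-salience map would correspond to the abstract's claim that affective salience can override stimulus-driven salience.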
