1.
Neural Netw; 179: 106576, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39121790

ABSTRACT

Visible-infrared person re-identification (VIPR) plays an important role in intelligent transportation systems. Modal discrepancies between visible and infrared images severely hamper the discrimination of person appearance; for example, the similarity between samples of the same identity in different modalities can be lower than the similarity between different identities in the same modality. Worse still, modal discrepancies and appearance discrepancies are coupled with each other. The prevailing practice is to disentangle modal and appearance discrepancies, but this usually requires complex decoupling networks. In this paper, rather than disentangling them, we propose to measure and optimize modal discrepancies directly. We explore a cross-modal group-relation (CMGR) that describes the relationship between the same group of people in the two modalities. The CMGR has great potential for modal invariance because it considers stable groups rather than individuals, making it a good measure of modal discrepancy. Furthermore, we design a group-relation correlation (GRC) loss function based on the Pearson correlation to optimize the CMGR; it can easily be integrated with the learning of VIPR's appearance features. Consequently, our CMGR model serves as a pivotal constraint that minimizes modal discrepancies, operating much like a loss function: it is applied only during the training phase and requires no execution at inference. Experimental results on two public datasets (RegDB and SYSU-MM01) demonstrate that our CMGR method is superior to state-of-the-art approaches. In particular, on the RegDB dataset, CMGR improves the rank-1 identification rate by more than 7% over the same model without it.
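
To make the group-relation idea concrete, below is a minimal PyTorch sketch of a Pearson-correlation-based group-relation loss. It computes a pairwise cosine-similarity matrix over a group of person features in each modality and penalizes low Pearson correlation between the two matrices. The function names (group_relation, grc_loss) and the cosine-similarity relation are illustrative assumptions; the paper's exact CMGR formulation may differ.

    import torch
    import torch.nn.functional as F

    def group_relation(features: torch.Tensor) -> torch.Tensor:
        # Pairwise cosine-similarity matrix: how the members of one
        # group of person features relate to each other within a modality.
        feats = F.normalize(features, dim=1)
        return feats @ feats.t()

    def grc_loss(vis_feats: torch.Tensor, ir_feats: torch.Tensor) -> torch.Tensor:
        # Group-relation correlation (GRC) loss sketch: 1 - Pearson
        # correlation between the visible and infrared relation matrices,
        # so minimizing the loss aligns the cross-modal group relations.
        r_vis = group_relation(vis_feats).flatten()
        r_ir = group_relation(ir_feats).flatten()
        r_vis = r_vis - r_vis.mean()
        r_ir = r_ir - r_ir.mean()
        pearson = (r_vis * r_ir).sum() / (r_vis.norm() * r_ir.norm() + 1e-8)
        return 1.0 - pearson

During training, vis_feats and ir_feats would hold features of the same group of identities extracted from the visible and infrared branches, and grc_loss would simply be added to the appearance losses; nothing of this runs at inference, consistent with the abstract.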

2.
Sensors (Basel); 23(3), 2023 Jan 27.
Article in English | MEDLINE | ID: mdl-36772466

ABSTRACT

Visible-infrared person re-identification (VIPR) has great potential for the intelligent transportation systems of smart cities, but it is challenging because of the huge modal discrepancy between visible and infrared images. Although visible and infrared data can be viewed as two domains, VIPR is not identical to domain adaptation, which would massively eliminate modal discrepancies: because VIPR has complete identity information in both the visible and infrared modalities, overemphasizing domain adaptation drains the discriminative appearance information in each domain. We therefore propose a novel margin-based modal adaptive learning (MMAL) method for VIPR. Within each domain, we apply triplet and label-smoothing cross-entropy loss functions to learn appearance-discriminative features. Between the two domains, we design a simple yet effective marginal maximum mean discrepancy (M3D) loss function that avoids excessive suppression of modal discrepancies, protecting the features' discriminative ability in each domain. As a result, our MMAL method learns modal-invariant yet appearance-discriminative features that improve VIPR. Experimental results show that MMAL achieves state-of-the-art VIPR performance; e.g., on the RegDB dataset in the visible-to-infrared retrieval mode, the rank-1 accuracy is 93.24% and the mean average precision is 83.77%.
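
To illustrate the margin idea, below is a minimal PyTorch sketch of a margin-clamped maximum mean discrepancy between visible and infrared features, assuming a Gaussian kernel. The kernel choice, the bandwidth sigma, and the margin value are assumptions for illustration, not the paper's exact M3D definition.

    import torch

    def gaussian_mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
        # Squared maximum mean discrepancy with a Gaussian kernel.
        def kernel(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            d2 = torch.cdist(a, b).pow(2)
            return torch.exp(-d2 / (2.0 * sigma ** 2))
        return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

    def m3d_loss(vis_feats: torch.Tensor, ir_feats: torch.Tensor,
                 margin: float = 0.1) -> torch.Tensor:
        # Marginal MMD sketch: penalize modal discrepancy only above a
        # margin, so alignment never pushes so far that it erodes the
        # appearance-discriminative information within each modality.
        return torch.clamp(gaussian_mmd(vis_feats, ir_feats) - margin, min=0.0)

The clamp implements the margin: once the measured discrepancy falls below the threshold, its gradient vanishes and the per-domain triplet and cross-entropy losses take over, matching the abstract's goal of avoiding excessive suppression of modal discrepancies.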
