Results 1 - 4 of 4
1.
Article in English | MEDLINE | ID: mdl-38696300

ABSTRACT

In the task of multiview multilabel (MVML) classification, each instance is represented by several heterogeneous features and associated with multiple semantic labels. Existing MVML methods mainly focus on leveraging a shared subspace to comprehensively explore consensus information across different views, yet it remains an open question whether such a shared subspace representation can effectively characterize all relevant labels when formulating a desired MVML model. In this article, we propose a novel label-driven view-specific fusion MVML method named L-VSM, which bypasses the search for a shared subspace representation and instead directly encodes the feature representation of each individual view to contribute to the final multilabel classifier induction. Specifically, we first design a label-driven feature graph construction strategy and organize all instances, under their various feature representations, into the corresponding feature graphs. Then, these view-specific feature graphs are integrated into a unified graph by linking the different feature representations of each instance. Afterward, we adopt a graph attention mechanism to aggregate and update all feature nodes on the unified graph and generate structural representations for each instance, where both intra-view correlations and inter-view alignments are jointly encoded to discover the underlying consensuses and complementarities across different views. Moreover, to exploit the widespread label correlations in multilabel learning (MLL), the transformer architecture is introduced to construct a dynamic semantic-aware label graph and accordingly generate structural semantic representations for each specific class. Finally, we derive an instance-label affinity score for each instance by averaging the affinity scores of its different feature representations under the multilabel soft margin loss. Extensive experiments on various MVML applications have verified that our proposed L-VSM achieves superior performance against state-of-the-art methods. The codes are available at https://gengyulyu.github.io/homepage/assets/codes/LVSM.zip.
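The final scoring step of this abstract admits a compact sketch. The following is a minimal illustration, not the authors' released code: the function names and toy shapes are assumptions, though the loss follows the standard multilabel soft margin formulation (mean binary cross-entropy over labels).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def multilabel_soft_margin_loss(affinity, targets):
    # affinity: (n, C) instance-label scores; targets: (n, C) in {0, 1}.
    # Mean binary cross-entropy over all instance-label pairs.
    p = sigmoid(affinity)
    return -(targets * np.log(p) + (1 - targets) * np.log(1 - p)).mean()

def fused_affinity(view_scores):
    # Average the per-view instance-label affinity matrices, mirroring
    # the final averaging step described in the abstract.
    return np.mean(np.stack(view_scores, axis=0), axis=0)
```

Confident scores with the correct sign drive the loss toward zero, while sign-flipped scores are penalized heavily.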

2.
Article in English | MEDLINE | ID: mdl-38656845

ABSTRACT

In the task of multiview multilabel (MVML) classification, each object is described by several heterogeneous view features and annotated with multiple relevant labels. Existing MVML methods usually assume that these heterogeneous features are strictly view-aligned, and they directly conduct cross-view information fusion to train a multilabel prediction model. However, in real-world scenarios, such a strict view-alignment requirement can hardly be satisfied due to the recurrent spatiotemporal asynchronism that arises when collecting MVML data, which causes inaccurate multiview fusion results and degrades the classification performance. To address this issue, we propose a generalized nonaligned MVML (GNAM) classification method, which achieves multiview information fusion while aligning cross-view features and accordingly learns the desired multilabel classifier. Specifically, we first introduce a multiorder matching alignment strategy to achieve cross-view feature alignment, where both first-order feature correspondence and second-order structure correspondence are jointly integrated to guarantee the compactness of the view-alignment results. Afterward, a commonality- and individuality-based multiview fusion structure is formulated on the aligned view features to excavate the consistencies and complementarities across different views, which enables all relevant multiview semantic labels, especially rare labels, to be characterized more comprehensively. Finally, we embed adaptive global label correlations into the multilabel classification model to further enhance its semantic expression integrity, and we develop an alternating optimization algorithm to train the whole model. Extensive experimental results have verified that GNAM is significantly superior to other state-of-the-art methods.
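A toy version of the multiorder matching idea can be sketched as follows. This is an assumption-laden illustration, not GNAM itself: it brute-forces permutations (feasible only for a handful of samples, whereas the actual method must optimize at scale), combining a first-order feature-distance cost with a second-order pairwise-structure cost weighted by a hypothetical trade-off `lam`.

```python
import itertools
import numpy as np

def align_views(X, Y, lam=0.5):
    # Toy multiorder alignment between two views X, Y of shape (n, d).
    n = len(X)
    # First-order cost: feature distance between candidate matches.
    c1 = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    # Second-order cost: mismatch of intra-view pairwise structure.
    Dx = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    Dy = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=2)
    best, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        p = list(perm)
        cost = c1[np.arange(n), p].sum() + lam * np.abs(Dx - Dy[np.ix_(p, p)]).sum()
        if cost < best_cost:
            best, best_cost = p, cost
    return best  # best[i] = index in Y matched to sample i of X
```

Aligning a shuffled copy of a view against the original recovers the shuffle, since both cost terms vanish at the true correspondence.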

3.
IEEE Trans Cybern ; 53(3): 1618-1628, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34499612

ABSTRACT

Partial multilabel learning (PML) aims to learn from training data where each instance is associated with a set of candidate labels, among which only a subset is correct. The common strategy for such a problem is disambiguation, that is, identifying the ground-truth labels from the given candidate labels. However, existing PML approaches typically focus on leveraging the instance relationship to disambiguate the given noisy label space, while the potentially useful information in the label space is not effectively explored. Meanwhile, the presence of noise and outliers in the training data also makes the disambiguation operation less reliable, which inevitably decreases the robustness of the learned model. In this article, we propose a prior-label-knowledge-regularized self-representation PML approach, called PAKS, where the self-representation scheme and prior label knowledge are jointly incorporated into a unified framework. Specifically, we introduce a self-representation model with a low-rank constraint, which aims to learn the subspace representations of distinct instances and explore the high-order underlying correlations among different instances. Meanwhile, we incorporate prior label knowledge into the above self-representation model, where the prior label knowledge is regarded as a complement to the features for obtaining an accurate self-representation matrix. The core of PAKS is to exploit the data membership preference, which is derived from the prior label knowledge, to purify the discovered membership of the data and accordingly obtain a more representative feature subspace for model induction. Extensive experiments on both synthetic and real-world datasets show that our proposed approach achieves superior or comparable performance to state-of-the-art approaches.
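The low-rank self-representation component has a well-known closed form in the unregularized case, which makes for a small sketch. This is not PAKS (it omits the prior-label-knowledge term and the actual optimization); it only illustrates the underlying idea X ≈ XZ with rank(Z) ≤ k via the top-k right singular vectors.

```python
import numpy as np

def low_rank_self_representation(X, k):
    # X: (d, n) data matrix whose columns are instances.
    # The minimizer of ||X - X Z||_F^2 subject to rank(Z) <= k is
    # Z = V_k V_k^T, where V_k holds the top-k right singular vectors of X.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Vk = Vt[:k].T
    return Vk @ Vk.T
```

When the data already have rank at most k, this Z reproduces X exactly; with noisy data it yields the best rank-k self-representation in the least-squares sense.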

4.
IEEE Trans Cybern ; 52(2): 899-911, 2022 Feb.
Article in English | MEDLINE | ID: mdl-32452795

ABSTRACT

Partial-label learning (PLL) aims to solve the problem where each training instance is associated with a set of candidate labels, only one of which is the correct label. Most PLL algorithms try to disambiguate the candidate label set, either by simply treating each candidate label equally or by iteratively identifying the true label. Nonetheless, existing algorithms usually treat all labels and instances equally, and the complexities of both labels and instances are not taken into consideration during the learning stage. Inspired by the successful application of the self-paced learning strategy in the machine-learning field, we integrate the self-paced regime into the PLL framework and propose a novel self-paced PLL (SP-PLL) algorithm, which controls the learning process by ranking the priorities of the training examples together with their candidate labels, from easy to hard, during each learning iteration. Extensive experiments and comparisons with other baseline methods demonstrate the effectiveness and robustness of the proposed method.
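The self-paced regime itself is standard and easy to sketch. The following is a generic illustration, not SP-PLL (it ignores candidate-label ranking entirely): a hard self-paced regularizer admits only examples whose current loss is below a pace parameter, which grows by a factor each round so that harder examples are gradually included; `lam0`, `mu`, and `rounds` are made-up knobs.

```python
import numpy as np

def self_paced_weights(losses, lam):
    # Hard self-paced regularizer: include a sample only if its loss < lam.
    return (losses < lam).astype(float)

def self_paced_schedule(losses, lam0=0.5, mu=1.5, rounds=3):
    # Track how many samples are admitted as the pace parameter grows.
    lam, selected = lam0, []
    for _ in range(rounds):
        w = self_paced_weights(losses, lam)
        selected.append(int(w.sum()))
        lam *= mu  # grow the pace: admit harder examples next round
    return selected
```

In a full training loop the model would be refit on the currently admitted samples between pace updates.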


Subject(s)
Algorithms , Machine Learning