Results 1 - 9 of 9
1.
Entropy (Basel) ; 22(6), 2020 Jun 05.
Article in English | MEDLINE | ID: mdl-33286397

ABSTRACT

Deep learning has achieved many successes in different fields but can encounter overfitting when labeled samples are scarce. To address the problem of learning with limited training data, meta-learning has been proposed: it captures common knowledge by leveraging a large number of similar few-shot tasks and learns how to adapt a base-learner to a new task for which only a few labeled samples are available. Current meta-learning approaches typically use shallow neural networks (SNNs) to avoid overfitting, thus wasting much information when adapting to a new task. Moreover, the Euclidean-space gradient descent in existing meta-learning approaches often leads to inaccurate updates of meta-learners, which poses a challenge to meta-learning models in extracting features from samples and updating network parameters. In this paper, we propose a novel meta-learning model called Multi-Stage Meta-Learning (MSML) to overcome this bottleneck in the adaptation process. The proposed method constrains the network to the Stiefel manifold so that the meta-learner can perform a more stable gradient descent within a limited number of steps, accelerating the adaptation process. Experiments on mini-ImageNet demonstrate that the proposed method achieves better accuracy under 5-way 1-shot and 5-way 5-shot conditions.
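The Stiefel-manifold-constrained gradient step the abstract describes can be sketched in a few lines. The QR retraction and fixed learning rate below are common illustrative choices, not the paper's exact procedure:

```python
import numpy as np

def stiefel_retract(W):
    """Map a matrix onto the Stiefel manifold (orthonormal columns) via QR
    decomposition; the sign correction keeps the retraction continuous."""
    Q, R = np.linalg.qr(W)
    return Q * np.sign(np.diag(R))

def riemannian_step(W, euclidean_grad, lr=0.1):
    """One gradient step constrained to the Stiefel manifold: project the
    Euclidean gradient onto the tangent space, step, then retract."""
    # Tangent-space projection: G - W * sym(W^T G)
    WtG = W.T @ euclidean_grad
    tangent = euclidean_grad - W @ (WtG + WtG.T) / 2
    return stiefel_retract(W - lr * tangent)

rng = np.random.default_rng(0)
W = stiefel_retract(rng.standard_normal((8, 3)))
G = rng.standard_normal((8, 3))
W_new = riemannian_step(W, G)
# Columns stay orthonormal after the update: W_new^T W_new = I
print(np.allclose(W_new.T @ W_new, np.eye(3), atol=1e-8))
```

Because every iterate keeps orthonormal columns, the update cannot drift off the manifold, which is the stability property the abstract appeals to.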

2.
J Biomed Inform ; 53: 381-9, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25549938

ABSTRACT

For cancer classification problems based on gene expression, the data usually contains only a few dozen samples but thousands to tens of thousands of genes, many of which may be irrelevant. A robust feature selection algorithm is required to remove irrelevant genes and choose the informative ones. Support vector data description (SVDD) has been applied to gene selection for many years. However, SVDD cannot address problems with multiple classes, since it only considers the target class. In addition, applying SVDD to gene selection is time-consuming. This paper proposes a novel fast feature selection method based on multiple SVDD and applies it to multi-class microarray data. A recursive feature elimination (RFE) scheme is introduced to iteratively remove irrelevant features, so the proposed method is called multiple SVDD-RFE (MSVDD-RFE). To make full use of all classes for a given task, MSVDD-RFE independently selects a relevant gene subset for each class. The final selected gene subset is the union of these relevant gene subsets. The effectiveness and accuracy of MSVDD-RFE are validated by experiments on five publicly available microarray datasets. Our proposed method is faster and more effective than competing methods.
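A minimal sketch of the per-class RFE-plus-union scheme, using scikit-learn's linear `OneClassSVM` as a stand-in for SVDD (the two are closely related one-class models); `svdd_rfe` and the toy data are illustrative, not the paper's implementation:

```python
import numpy as np
from sklearn.svm import OneClassSVM

def svdd_rfe(X_class, n_keep):
    """Recursive feature elimination for one class: fit a one-class model on
    the class's samples, drop the feature with the smallest weight magnitude,
    and repeat until n_keep features remain."""
    active = list(range(X_class.shape[1]))
    while len(active) > n_keep:
        model = OneClassSVM(kernel="linear", nu=0.1).fit(X_class[:, active])
        w = np.abs(model.coef_).ravel()
        active.pop(int(np.argmin(w)))   # eliminate the least informative gene
    return set(active)

# Toy data: 2 classes, 6 genes; genes 0-1 drive class 0, genes 2-3 drive class 1.
rng = np.random.default_rng(1)
X = rng.normal(0, 0.1, (40, 6))
y = np.repeat([0, 1], 20)
X[y == 0, 0:2] += 3.0
X[y == 1, 2:4] += 3.0

# Select a gene subset per class independently, then take the union.
selected = set()
for c in (0, 1):
    selected |= svdd_rfe(X[y == c], n_keep=2)
print(sorted(selected))
```

The union step is what lets a one-class selector cover a multi-class task: each class contributes the genes that best describe it alone.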


Subject(s)
Neoplasms/diagnosis , Pattern Recognition, Automated/methods , Support Vector Machine , Algorithms , Artificial Intelligence , Bayes Theorem , Colonic Neoplasms/diagnosis , Colonic Neoplasms/genetics , Diagnosis, Computer-Assisted/methods , Gene Expression , Gene Expression Profiling , Gene Expression Regulation, Leukemic , Gene Expression Regulation, Neoplastic , Humans , Leukemia/diagnosis , Leukemia/genetics , Models, Statistical , Neoplasms/genetics , Oligonucleotide Array Sequence Analysis , Software
3.
IEEE Trans Neural Netw Learn Syst ; 29(8): 3388-3403, 2018 08.
Article in English | MEDLINE | ID: mdl-28783644

ABSTRACT

We propose a robust inductive semi-supervised label prediction model over an embedded representation, termed adaptive embedded label propagation with weight learning (AELP-WL), for classification. AELP-WL offers several properties. First, our method seamlessly integrates robust adaptive embedded label propagation with adaptive weight learning into a unified framework. By jointly minimizing the reconstruction errors over embedded features and embedded soft labels, AELP-WL explicitly ensures that the learned weights are jointly optimal for representation and classification, which differs from most existing LP models that perform weight learning separately in an independent step before label prediction. Second, existing models usually precalculate the weights over the original samples, which may contain unfavorable features and noise that decrease performance. To this end, our model adds a constraint that decomposes the original data into a sparse component encoding embedded noise-removed sparse representations of the samples and a sparse error part fitting the noise, and then performs adaptive weight learning over the embedded sparse representations. Third, AELP-WL computes the projected soft labels by trading off the manifold smoothness and label fitness errors over the adaptive weights and the embedded representations, enhancing the label estimation power. By including a regressive label approximation error for simultaneous minimization to correlate sample features with the embedded soft labels, the out-of-sample issue is naturally solved. By jointly minimizing the reconstruction errors over features and embedded soft labels, the classification error, and the label approximation error, our model delivers state-of-the-art results.

4.
IEEE Trans Neural Netw Learn Syst ; 29(8): 3798-3814, 2018 08.
Article in English | MEDLINE | ID: mdl-28922127

ABSTRACT

In this paper, we propose an analysis-mechanism-based structured Analysis Discriminative Dictionary Learning (ADDL) framework. ADDL seamlessly integrates analysis dictionary learning, analysis representation, and analysis classifier training into a unified model. The applied analysis mechanism ensures that the learned dictionaries, representations, and linear classifiers over different classes are as independent and discriminating as possible. The dictionary is obtained by minimizing a reconstruction error and an analytical incoherence-promoting term that encourages the subdictionaries associated with different classes to be independent. To obtain the representation coefficients, ADDL imposes a sparse norm constraint on the coding coefficients instead of using the l0- or l1-norm, since the l0- or l1-norm constraints applied in most existing DL criteria make the training phase time-consuming. The code-extraction projection that bridges data with the sparse codes by extracting special features from the given samples is calculated by minimizing a sparse-code approximation term. We then compute a linear classifier based on the approximated sparse codes by an analysis mechanism, so that the classification and representation powers are considered simultaneously. The classification stage of our model is therefore very efficient, because it avoids the extra time-consuming sparse reconstruction with the trained dictionary for each new test sample that most existing DL algorithms require. Simulations on real image databases demonstrate that our ADDL model obtains superior performance over other state-of-the-art methods.
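The efficiency claim at the end, classification without per-sample sparse optimization, can be illustrated with placeholder matrices standing in for the trained ADDL parameters (`P` and `W` here are random stand-ins, not learned values):

```python
import numpy as np

# In an analysis-style pipeline, test-time inference is two matrix-vector
# products: codes come from a learned code-extraction projection P, labels
# from a learned linear classifier W. No sparse reconstruction problem
# (e.g. min_s ||x - D s|| + lambda*||s||_1, as in synthesis dictionary
# learning) is solved per test sample.
rng = np.random.default_rng(2)
d, k, n_classes = 20, 10, 3
P = rng.standard_normal((k, d))          # code-extraction projection (stand-in)
W = rng.standard_normal((n_classes, k))  # analysis classifier (stand-in)

x = rng.standard_normal(d)               # a new test sample
s = P @ x                                # approximate code: one matrix-vector product
label = int(np.argmax(W @ s))            # classification: one more product + argmax
print(label)
```

The contrast is with synthesis DL methods, where each test sample triggers an iterative sparse coding solve against the trained dictionary.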

5.
Neural Netw ; 94: 260-273, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28822323

ABSTRACT

In this paper, we propose a novel adaptive transductive label propagation approach based on joint discriminative clustering on manifolds for representing and classifying high-dimensional data. Our framework seamlessly combines unsupervised manifold learning, discriminative clustering, and adaptive classification into a unified model, and it incorporates adaptive graph weight construction into label propagation. Specifically, our method propagates label information using adaptive weights over low-dimensional manifold features, which differs from most existing studies that predict the labels and construct the weights in the original Euclidean space. For transductive classification, we first perform joint discriminative K-means clustering and manifold learning to capture the low-dimensional nonlinear manifolds. Then, we construct adaptive weights over the learnt manifold features, calculated through joint minimization of the reconstruction errors over features and soft labels so that the graph weights are jointly optimal for data representation and classification. Using the adaptive weights, we can easily estimate the unknown labels of samples. After that, our method returns the updated weights for further updating the manifold features. Extensive simulations on image classification and segmentation show that our proposed algorithm delivers state-of-the-art performance on several public datasets.
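For contrast with the adaptive-weight scheme above, a minimal transductive label propagation baseline (fixed Gaussian weights and a Zhou-style closed-form solve, rather than the paper's jointly learned weights) looks like this; `sigma` and `alpha` are illustrative choices:

```python
import numpy as np

def label_propagation(X, y_partial, alpha=0.9, sigma=1.0):
    """Transductive label propagation sketch: build a Gaussian affinity graph,
    symmetrically normalize it, and solve F = (I - alpha*S)^{-1} Y in closed
    form. Entries of y_partial equal to -1 are unlabeled."""
    n = X.shape[0]
    classes = sorted(set(y_partial) - {-1})
    # Gaussian affinity with zero self-loops
    D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-D2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0)
    d = W.sum(1)
    S = W / np.sqrt(np.outer(d, d))      # symmetric normalization D^{-1/2} W D^{-1/2}
    Y = np.zeros((n, len(classes)))
    for i, c in enumerate(y_partial):
        if c != -1:
            Y[i, classes.index(c)] = 1.0
    F = np.linalg.solve(np.eye(n) - alpha * S, Y)
    return np.array(classes)[F.argmax(1)]

# Two well-separated clusters with one labeled point each:
# the labels spread to each cluster's unlabeled members.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.2, (10, 2)), rng.normal(5, 0.2, (10, 2))])
y = -np.ones(20, dtype=int)
y[0], y[10] = 0, 1
pred = label_propagation(X, y)
print(pred)
```

The paper's variant replaces the fixed Gaussian `W` with weights optimized jointly with the manifold features and soft labels.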


Asunto(s)
Machine Learning , Neural Networks, Computer , Classification/methods , Cluster Analysis
6.
Neural Netw ; 96: 55-70, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28987977

ABSTRACT

We propose a robust Alternating Low-Rank Representation (ALRR) model formed by an alternating forward-backward representation process. For the forward representation, ALRR first recovers the low-rank principal components and random corruptions by an adaptive local robust PCA (RPCA). Then, ALRR performs a joint Lp-norm and L2,p-norm minimization (0 < p < 1).
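The first (forward) stage, splitting data into a low-rank part and a sparse corruption part, can be sketched with a plain RPCA solver built from the two proximal operators; the parameters and schedule below are common defaults for principal component pursuit, not ALRR's adaptive local variant:

```python
import numpy as np

def shrink(M, tau):
    """Soft-thresholding: proximal operator of the l1 norm."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0)

def svd_shrink(M, tau):
    """Singular value thresholding: proximal operator of the nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(shrink(s, tau)) @ Vt

def rpca(X, n_iter=200):
    """Minimal RPCA via an inexact augmented Lagrangian: decompose X into a
    low-rank part L and a sparse corruption part E with X ~ L + E."""
    m, n = X.shape
    lam = 1 / np.sqrt(max(m, n))
    mu = m * n / (4 * np.abs(X).sum())
    L, E, Y = np.zeros_like(X), np.zeros_like(X), np.zeros_like(X)
    for _ in range(n_iter):
        L = svd_shrink(X - E + Y / mu, 1 / mu)
        E = shrink(X - L + Y / mu, lam / mu)
        Y = Y + mu * (X - L - E)
        mu = min(mu * 1.05, 1e7)   # gradually tighten the constraint
    return L, E

# Rank-1 data plus a few large sparse corruptions.
rng = np.random.default_rng(4)
u, v = rng.standard_normal((30, 1)), rng.standard_normal((1, 30))
X_clean = u @ v
X = X_clean.copy()
X[rng.integers(0, 30, 20), rng.integers(0, 30, 20)] += 10
L, E = rpca(X)
print(np.linalg.norm(L - X_clean) / np.linalg.norm(X_clean))
```

ALRR then alternates this kind of decomposition with the Lp/L2,p representation step, rather than running a single global RPCA.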


Asunto(s)
Machine Learning , Pattern Recognition, Visual , Algorithms , Humans
7.
IEEE Trans Image Process ; 26(4): 1607-1622, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28103554

ABSTRACT

We propose two nuclear-norm and L2,1-norm regularized 2D neighborhood preserving projection (2DNPP) methods for extracting representative 2D image features. 2DNPP extracts neighborhood preserving features by minimizing a Frobenius-norm-based reconstruction error, which is very sensitive to noise and outliers in the given data. To make the distance metric more reliable and robust, and to encode the neighborhood reconstruction error more accurately, we minimize the nuclear-norm-based and L2,1-norm-based reconstruction errors, respectively, and measure them over each image. Technically, we propose two enhanced variants of 2DNPP: nuclear-norm-based 2DNPP and sparse reconstruction-based 2DNPP. Besides, to optimize the projection for more promising feature extraction, we also add nuclear-norm and sparse L2,1-norm constraints on it accordingly, where the L2,1-norm ensures that the projection is sparse in rows so that discriminative features are learnt in the latent subspace, and the nuclear norm ensures the low-rank property of features by projecting data into their respective subspaces. By fully considering the neighborhood preserving power, using more reliable and robust distance metrics, and imposing low-rank or sparse constraints on the projections at the same time, our methods outperform related state-of-the-art methods in a variety of simulation settings.
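The motivation for replacing the (squared) Frobenius reconstruction error can be seen numerically: a single outlier pixel inflates the squared Frobenius error far more than the nuclear-norm error, so outliers dominate a nuclear-norm objective much less:

```python
import numpy as np

def frob(E):
    """Frobenius norm of a residual matrix."""
    return np.linalg.norm(E, "fro")

def nuclear(E):
    """Nuclear norm: sum of singular values of the residual matrix."""
    return np.linalg.svd(E, compute_uv=False).sum()

rng = np.random.default_rng(6)
E = rng.normal(0, 0.01, (16, 16))   # small dense reconstruction residual
E_out = E.copy()
E_out[3, 7] += 5.0                  # one corrupted pixel

# Squared Frobenius error grows quadratically with the outlier magnitude;
# the nuclear norm (a rank-1 spike adds ~one singular value) grows linearly.
print((frob(E_out) / frob(E)) ** 2, nuclear(E_out) / nuclear(E))
```

The same robustness argument applies per image in the 2DNPP setting, where each reconstruction residual is itself a 2D matrix.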

8.
IEEE Trans Image Process ; 25(6): 2429-43, 2016 06.
Article in English | MEDLINE | ID: mdl-27046875

ABSTRACT

Recovering low-rank and sparse subspaces jointly for enhanced robust representation and classification is discussed. Technically, we first propose a transductive low-rank and sparse principal feature coding (LSPFC) formulation that decomposes given data into a component part encoding low-rank sparse principal features and a noise-fitting error part. To handle outside (out-of-sample) data well, we then present an inductive LSPFC (I-LSPFC). I-LSPFC incorporates embedded low-rank and sparse principal features by a projection into one problem for direct minimization, so that the projection can effectively map both inside and outside data into the underlying subspaces and learn more powerful and informative features for representation. To ensure that the features learned by I-LSPFC are optimal for classification, we further combine the classification error with the feature coding error to form a unified model, discriminative LSPFC (D-LSPFC), to boost performance. D-LSPFC seamlessly integrates feature coding and discriminative classification, so the representation and classification powers are both enhanced. The proposed approaches are more general, and several recent low-rank or sparse coding algorithms can be embedded into our problems as special cases. Visual and numerical results demonstrate the effectiveness of our methods for representation and classification.

9.
Comput Biol Med ; 64: 236-45, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26232671

ABSTRACT

The family of discriminant neighborhood embedding (DNE) methods comprises typical graph-based methods for dimensionality reduction and has been successfully applied to face recognition. This paper proposes a new variant of DNE, called similarity-balanced discriminant neighborhood embedding (SBDNE), and applies it to cancer classification using gene expression data. By introducing a novel similarity function, SBDNE treats pairs of data points from the same class and from different classes in different ways. The homogeneous and heterogeneous neighbors are selected according to the new similarity function instead of the Euclidean distance. SBDNE constructs two adjacency graphs, a between-class graph and a within-class graph, using the new similarity function. From these two graphs, it generates the local between-class scatter and the local within-class scatter, respectively. SBDNE can thus maximize the between-class scatter and simultaneously minimize the within-class scatter to find the optimal projection matrix. Experimental results on six microarray datasets show that SBDNE is a promising method for cancer classification.
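A sketch of the DNE-style projection that SBDNE builds on, using plain Euclidean k-NN where SBDNE would substitute its similarity function; the weights and toy data are illustrative:

```python
import numpy as np

def dne_projection(X, y, k=3, dim=2):
    """DNE-style projection sketch: heterogeneous k-NN pairs get weight +1
    (push apart), homogeneous k-NN pairs get weight -1 (pull together), and
    the projection comes from the top eigenvectors of X^T L X."""
    n = X.shape[0]
    D = ((X[:, None] - X[None]) ** 2).sum(-1)
    np.fill_diagonal(D, np.inf)          # exclude self from neighbor search
    F = np.zeros((n, n))
    for i in range(n):
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(D[i, same])[:k]]:
            F[i, j] = F[j, i] = -1.0     # homogeneous neighbors
        for j in diff[np.argsort(D[i, diff])[:k]]:
            F[i, j] = F[j, i] = +1.0     # heterogeneous neighbors
    L = np.diag(F.sum(1)) - F            # Laplacian of the signed weight graph
    M = X.T @ L @ X
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, ::-1][:, :dim]        # eigenvectors of the largest eigenvalues

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 1, (15, 5)), rng.normal(2, 1, (15, 5))])
y = np.repeat([0, 1], 15)
P = dne_projection(X, y)
print(P.shape)
```

SBDNE's change is localized to the neighbor-selection step: the Euclidean `D` above is replaced by its class-aware similarity when picking homogeneous and heterogeneous neighbors.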


Subject(s)
Computational Biology/methods , Gene Expression Profiling/methods , Neoplasms/classification , Neoplasms/genetics , Algorithms , Databases, Genetic , Humans , Neoplasms/metabolism , Oligonucleotide Array Sequence Analysis