Results 1 - 14 of 14
1.
BMC Bioinformatics ; 14: 198, 2013 Jun 19.
Article in English | MEDLINE | ID: mdl-23777239

ABSTRACT

BACKGROUND: Microarray technology is widely used in cancer diagnosis. Successfully identifying gene biomarkers will significantly help to classify different cancer types and improve prediction accuracy. The regularization approach is one of the effective methods for gene selection in microarray data, which generally contain a large number of genes and a small number of samples. In recent years, various approaches have been developed for gene selection from microarray data. Generally, they fall into three categories: filter, wrapper, and embedded methods. Regularization methods are an important embedded technique and perform continuous shrinkage and automatic gene selection simultaneously. Recently, there has been growing interest in applying regularization techniques to gene selection. The most popular regularization technique is the Lasso (L1), and many L1-type regularization terms have been proposed in recent years. Theoretically, Lq-type regularization with a lower value of q would lead to better solutions with more sparsity. Moreover, the L1/2 regularization can be taken as a representative of the Lq (0 < q < 1) regularizations.
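The L1 (Lasso) baseline that papers in this family benchmark against can be sketched in a few lines of proximal-gradient (ISTA) updates; the L1/2 scheme replaces the soft-thresholding step with a half-thresholding operator. This is an illustrative sketch, not the paper's implementation; function names and constants are assumptions.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the L1 penalty: shrink each entry toward zero by t.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1_logistic(X, y, lam=0.05, lr=0.2, iters=1000):
    # Proximal-gradient (ISTA) for L1-penalized logistic regression.
    # X: (n, p) design matrix (samples x genes), y: (n,) labels in {0, 1}.
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(iters):
        prob = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (prob - y) / n           # gradient of the logistic loss
        w = soft_threshold(w - lr * grad, lr * lam)
    return w
```

Genes with nonzero weights are the selected biomarkers; an Lq penalty with q < 1 would zero out more of them for the same fit.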

Subject(s)
Gene Expression Regulation , Logistic Models , Neoplasms/classification , Neoplasms/genetics , Algorithms , Genetic Markers , Humans , Neoplasms/metabolism , Oligonucleotide Array Sequence Analysis/methods
2.
ScientificWorldJournal ; 2013: 475702, 2013.
Article in English | MEDLINE | ID: mdl-24453861

ABSTRACT

A new adaptive L1/2 shooting regularization method for variable selection based on the Cox proportional hazards model is proposed. The adaptive L1/2 shooting algorithm is easily obtained by optimizing a reweighted iterative series of L1 penalties together with a shooting strategy for the L1/2 penalty. Simulation results on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method is more accurate for variable selection than the Lasso and adaptive Lasso methods. Results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
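The "shooting" algorithm referenced here is cyclic coordinate descent for the Lasso, where each coordinate has a closed-form soft-thresholded update; for the Cox model, each Newton step reduces to a weighted least-squares problem solved by the same updates. A minimal sketch for the plain linear model (names and defaults are illustrative, not from the paper):

```python
import numpy as np

def lasso_shooting(X, y, lam, sweeps=100):
    # Shooting (cyclic coordinate descent) for min 0.5*||y - Xw||^2 + lam*||w||_1.
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)   # per-coordinate curvature ||x_j||^2
    for _ in range(sweeps):
        for j in range(p):
            # Partial residual with coordinate j's contribution removed.
            r = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r
            # Closed-form univariate Lasso solution: soft threshold, then rescale.
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

The adaptive and L1/2 variants reweight `lam` per coordinate across outer iterations rather than changing this inner update.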


Subject(s)
Gene Expression Regulation , Kaplan-Meier Estimate , Models, Biological , Proportional Hazards Models , Animals
3.
Neural Netw ; 135: 91-104, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33373885

ABSTRACT

Recently, the focus of functional connectivity analysis of the human brain has shifted from merely revealing the inter-regional functional correlation over the entire scan duration to capturing the time-varying information of brain networks and characterizing time-resolved reoccurring patterns of connectivity. Much effort has been invested in developing approaches that can track changes in reoccurring patterns of functional connectivity over time. In this paper, we propose a sparse deep dictionary learning method to characterize the essential differences in reoccurring patterns of time-varying functional connectivity between different age groups. The proposed method combines the interpretability of sparse dictionary learning with the capability of extracting sparse nonlinear higher-level features in the latent space of a sparse deep autoencoder. In other words, it learns a sparse dictionary of the original data by considering the nonlinear representation of the data in the encoder layer of a sparse deep autoencoder. In this way, the nonlinear structure and higher-level features of the data can be captured by deep dictionary learning. The proposed method is applied to the analysis of the Philadelphia Neurodevelopmental Cohort. It shows that there exist essential differences in the reoccurrence patterns of functional connectivity between the child and young adult groups. Specifically, children have more diffuse functional connectivity patterns while young adults possess more focused ones, and brain function transitions from undifferentiated systems to specialized neural networks with growth.


Subject(s)
Algorithms , Brain/diagnostic imaging , Brain/growth & development , Deep Learning , Neural Networks, Computer , Adolescent , Child , Child, Preschool , Female , Humans , Infant , Magnetic Resonance Imaging/methods , Male , Young Adult
4.
Sci Rep ; 9(1): 13504, 2019 09 18.
Article in English | MEDLINE | ID: mdl-31534156

ABSTRACT

The widespread application of microarray technology has produced a vast quantity of publicly available gene expression datasets. However, analysis of gene expression data using biostatistics and machine learning approaches is a challenging task due to (1) high noise; (2) small sample size with high dimensionality; (3) batch effects; and (4) low reproducibility of significant biomarkers. These issues reveal the complexity of gene expression data and significantly obstruct the clinical application of microarray technology. Integrative analysis offers an opportunity to address these issues and provides a more comprehensive understanding of biological systems, but current methods have several limitations. This work leverages state-of-the-art machine learning developments for the integration of multiple gene expression datasets, classification, and identification of significant biomarkers. We design a novel integrative framework, MVIAm - Multi-View based Integrative Analysis of microarray data for identifying biomarkers. It applies multiple cross-platform normalization methods to aggregate multiple datasets into a multi-view dataset and uses a robust learning mechanism, Multi-View Self-Paced Learning (MVSPL), for gene selection in cancer classification problems. We demonstrate the capabilities of MVIAm using simulated data and studies of breast and lung cancer; it can be applied flexibly and is an effective tool for facing the four challenges of gene expression data analysis. Our proposed model makes microarray integrative analysis more systematic and expands its range of applications.


Subject(s)
Biomarkers, Tumor/genetics , Gene Expression Profiling/methods , Sequence Analysis, DNA/methods , Algorithms , Gene Expression/genetics , Gene Expression Regulation, Neoplastic/genetics , Humans , Machine Learning , Oligonucleotide Array Sequence Analysis/methods , Reproducibility of Results , Transcriptome/genetics
5.
IEEE Trans Neural Netw Learn Syst ; 29(5): 1716-1731, 2018 05.
Article in English | MEDLINE | ID: mdl-28368832

ABSTRACT

Iterative thresholding is a dominant strategy for sparse optimization problems. The main goal of iterative thresholding methods is to find a so-called K-sparse solution. However, setting the regularization parameters or estimating the true sparsity is nontrivial in iterative thresholding methods. To overcome this shortcoming, we propose a preference-based multiobjective evolutionary approach to solve sparse optimization problems in compressive sensing. Our basic strategy is to search the knee part of the weakly Pareto front with a preference for the true K-sparse solution. In the noiseless case, it is easy to locate the exact position of the K-sparse solution from the distribution of the solutions found by our proposed method. Therefore, our method has the ability to detect the true sparsity. Moreover, any iterative thresholding method can be used as a local optimizer in our proposed method, and no prior estimation of sparsity is required. The proposed method can also be extended to solve sparse optimization problems with noise. Extensive experiments have been conducted to study its performance on artificial signals and magnetic resonance imaging signals. Our experimental results show that the proposed method is very effective at detecting sparsity and can improve the reconstruction ability of existing iterative thresholding methods.
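The canonical member of this family is iterative hard thresholding (IHT): after each gradient step on the data-fit term, keep only the K largest-magnitude entries. A minimal sketch assuming a noiseless measurement model y = Ax (dimensions and the step-size rule are illustrative, not from the paper):

```python
import numpy as np

def iht(A, y, k, iters=500):
    # Iterative hard thresholding for y = A x with a k-sparse x.
    m, n = A.shape
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative gradient step size
    x = np.zeros(n)
    for _ in range(iters):
        x = x + step * (A.T @ (y - A @ x))   # gradient step on 0.5*||y - Ax||^2
        small = np.argsort(np.abs(x))[:-k]   # indices of all but the k largest entries
        x[small] = 0.0                       # hard-threshold back to k-sparse
    return x
```

The shortcoming the abstract targets is visible here: `k` (the true sparsity) must be supplied up front, which is exactly what the proposed evolutionary search avoids estimating in advance.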

6.
BMC Med Genomics ; 9: 11, 2016 Mar 01.
Article in English | MEDLINE | ID: mdl-26932592

ABSTRACT

BACKGROUND: One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk versus low-risk classification and survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that the small sample size and censored data remain a bottleneck for training robust and accurate Cox classification models. In addition, tumours with similar phenotypes and prognoses can actually be completely different diseases at the genotype and molecular level. Thus, the utility of the AFT model for survival time prediction is limited when such biological differences between the diseases have not been previously identified. METHODS: To overcome these two main dilemmas, we propose a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and survival time of patients. Moreover, we adopt the efficient L1/2 regularization approach within the semi-supervised learning method to select the relevant genes that are significantly associated with the disease. RESULTS: The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression datasets and artificial evaluation datasets.
CONCLUSIONS: The advantages of our proposed semi-supervised learning method include: 1) it significantly increases the available training samples from censored data; 2) high capability for identifying the survival-risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival time in the AFT model; 4) strong capability for relevant biomarker selection. Consequently, our proposed semi-supervised learning model is an appropriate tool for survival analysis in clinical cancer research.


Subject(s)
Neoplasms/mortality , Computer Simulation , Databases as Topic , Gene Expression Regulation, Neoplastic , Humans , Models, Theoretical , Neoplasms/genetics , Oligonucleotide Array Sequence Analysis , Proportional Hazards Models , Survival Analysis
7.
Neural Netw ; 18(7): 914-23, 2005 Sep.
Article in English | MEDLINE | ID: mdl-15936925

ABSTRACT

Simultaneous approximation of a function and its derivatives is required in many science and engineering applications. There have been many studies on the simultaneous approximation capability of feedforward neural networks (FNNs). Most of the studies are, however, only concerned with the density or feasibility of performing simultaneous approximation with FNNs, and give no quantitative estimate of the approximation accuracy. Moreover, all existing density or feasibility results are established in the uniform metric only, and provide no solution to the topology specification of the FNNs used. In this paper, by means of the Bernstein-Durrmeyer operator, a class of FNNs is constructed which realizes the simultaneous approximation of any smooth multivariate function and all of its existing partial derivatives. Using multivariate approximation tools, we present a quantitative upper bound on the accuracy of the simultaneous approximation by these FNNs in terms of the modulus of smoothness of the functions to be approximated. The results obtained reveal that the approximation speed of the constructed FNNs depends not only on the number of hidden units used, but also on the smoothness of the functions to be approximated.
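The simultaneous-approximation phenomenon is easy to see with the classical Bernstein operator (the Durrmeyer variant used in the paper replaces the point evaluations f(k/n) with integral averages): the same degree-n polynomial that approximates f has a derivative, expressible via finite differences of f, that approximates f'. A small numeric check under that simplification:

```python
from math import comb

def bernstein(f, n, x):
    # Degree-n Bernstein polynomial of f on [0, 1], evaluated at x.
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def bernstein_deriv(f, n, x):
    # Exact derivative of the Bernstein polynomial: scaled finite differences of f.
    return n * sum((f((k + 1) / n) - f(k / n)) * comb(n - 1, k)
                   * x**k * (1 - x)**(n - 1 - k) for k in range(n))
```

For f(t) = t² at x = 0.5 and n = 100, `bernstein` is within 0.25/n of f(x), and `bernstein_deriv` approximates f'(x) = 1 simultaneously, which is the property the constructed FNNs inherit.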


Subject(s)
Algorithms , Neural Networks, Computer , Artificial Intelligence , Mathematics
8.
Neural Netw ; 15(1): 95-103, 2002 Jan.
Article in English | MEDLINE | ID: mdl-11958493

ABSTRACT

The stability of neural networks is a prerequisite for successful applications of the networks as either associative memories or optimization solvers. Because integration and communication delays are ubiquitous, the stability of neural networks with delays has received extensive attention. However, the approach used in previous investigations is mainly based on Liapunov's direct method. Since the construction of a Liapunov function requires considerable skill, there is little compatibility among the existing results. In this paper, we develop a new approach to the stability analysis of Hopfield-type neural networks with time-varying delays by defining two novel quantities for nonlinear functions, analogous to the matrix norm and the matrix measure, respectively. With the new approach, we present sufficient conditions for stability which either generalize existing results or are new. The developed approach may also be applied to any general system with time delays, rather than only Hopfield-type neural networks.


Subject(s)
Neural Networks, Computer , Nonlinear Dynamics , Time Factors
9.
Neural Netw ; 17(1): 73-85, 2004 Jan.
Article in English | MEDLINE | ID: mdl-14690709

ABSTRACT

Neuron state modeling and local field modeling provide two fundamental approaches to neural network research; a neural network system can accordingly be described either by a static neural network model or by a local field neural network model. These two models are theoretically compared in terms of their trajectory transformation property, equilibrium correspondence property, nontrivial attractive manifold property, global convergence, and stability in many different senses. The comparison reveals an important stability invariance property of the two models, in the sense that the stability (in any sense) of the static model is equivalent to that of a subsystem deduced from the local field model when restricted to a specific manifold. This stability invariance property lays a sound theoretical foundation for the validity of a useful, cross-fertilization-type stability analysis methodology for various neural network models.


Subject(s)
Neural Networks, Computer , Neurons/physiology , Algorithms , Artificial Intelligence , Computer Simulation , Humans , Models, Neurological , Nerve Net/physiology , Nonlinear Dynamics
10.
IEEE Trans Cybern ; 43(6): 2054-65, 2013 Dec.
Article in English | MEDLINE | ID: mdl-23757515

ABSTRACT

Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks whose hidden nodes need not be neuron-like, and they perform well in both regression and classification applications. The problem of determining suitable network architectures is recognized as crucial in the successful application of ELMs. This paper first proposes a dynamic ELM (D-ELM) in which hidden nodes can be recruited or deleted dynamically according to their significance to network performance, so that not only the parameters but also the architecture can be adjusted simultaneously and self-adaptively. The paper then proves that such a D-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results obtained on various test problems demonstrate that the proposed D-ELM effectively reduces the network size while preserving good generalization performance.
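The base ELM fit that D-ELM grows and prunes is only a few lines: fixed random hidden-layer weights, then a linear least-squares solve for the output layer. A minimal sketch (the sizes, seed, and tanh activation are illustrative assumptions):

```python
import numpy as np

def elm_fit(X, y, hidden=50, seed=0):
    # Extreme learning machine: random hidden layer, least-squares output layer.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))   # input-to-hidden weights (never trained)
    b = rng.standard_normal(hidden)                 # hidden biases (never trained)
    H = np.tanh(X @ W + b)                          # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                    # output weights via pseudo-inverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

D-ELM would then score each of the `hidden` columns of H and recruit or delete units accordingly; in this sketch the architecture is fixed.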


Subject(s)
Algorithms , Artificial Intelligence , Decision Support Techniques , Models, Theoretical , Neural Networks, Computer , Pattern Recognition, Automated/methods , Computer Simulation
11.
IEEE Trans Neural Netw Learn Syst ; 23(2): 330-41, 2012 Feb.
Article in English | MEDLINE | ID: mdl-24808511

ABSTRACT

The online backpropagation (BP) training procedure has been extensively explored in scientific research and engineering applications. One of the main factors affecting the performance of online BP training is the learning rate. This paper proposes a new dynamic learning rate based on an estimate of the minimum error. The global convergence theory of the online BP training procedure with the proposed learning rate is then studied. It is proved that: 1) the error sequence converges to the global minimum error; and 2) the weight sequence converges to a fixed point at which the error function attains its global minimum. The obtained global convergence theory underlies the successful applications of the online BP training procedure. Illustrative examples are provided to support the theoretical analysis.


Subject(s)
Algorithms , Artificial Intelligence , Models, Statistical , Neural Networks, Computer , Pattern Recognition, Automated/methods , Computer Simulation , Feedback , Online Systems
12.
IEEE Trans Neural Netw Learn Syst ; 23(2): 365-71, 2012 Feb.
Article in English | MEDLINE | ID: mdl-24808516

ABSTRACT

Extreme learning machines (ELMs) have been proposed for generalized single-hidden-layer feedforward networks whose hidden nodes need not be neuron-like, and they perform well in both regression and classification applications. In this brief, we propose an ELM with adaptive growth of hidden nodes (AG-ELM), which provides a new approach to the automated design of networks. Unlike other incremental ELMs (I-ELMs), whose existing hidden nodes are frozen as new hidden nodes are added one by one, AG-ELM determines the number of hidden nodes adaptively, in the sense that existing networks may be replaced by newly generated networks that have fewer hidden nodes and better generalization performance. We then prove that such an AG-ELM using Lebesgue p-integrable hidden activation functions can approximate any Lebesgue p-integrable function on a compact input set. Simulation results demonstrate that this new approach can achieve a more compact network architecture than the I-ELM.


Subject(s)
Algorithms , Models, Statistical , Neural Networks, Computer , Nonlinear Dynamics , Pattern Recognition, Automated/methods , Artificial Intelligence , Computer Simulation , Feedback
13.
IEEE Trans Neural Netw ; 20(10): 1529-39, 2009 Oct.
Article in English | MEDLINE | ID: mdl-19695997

ABSTRACT

Backpropagation (BP) neural networks have been widely applied in scientific research and engineering. The success of these applications, however, relies upon the convergence of the training procedure involved in neural network learning. We settle the convergence analysis issue by proving two fundamental theorems on the convergence of the online BP training procedure. One theorem claims that under mild conditions, the gradient sequence of the error function converges to zero (weak convergence); the other concludes the convergence of the weight sequence defined by the procedure to a fixed value at which the error function attains its minimum (strong convergence). The weak convergence theorem sharpens and generalizes the existing convergence analyses, while the strong convergence theorem provides new results on the convergence of the online BP training procedure. The results obtained reveal that with any analytic sigmoid activation function, the online BP training procedure is always convergent, which underlies the successful application of BP neural networks.


Subject(s)
Algorithms , Models, Theoretical , Neural Networks, Computer , Computer Simulation
14.
Neural Netw ; 11(5): 877-884, 1998 Jul.
Article in English | MEDLINE | ID: mdl-12662790

ABSTRACT

In this paper, optimal encoding schemes for linear associative memories are derived for biased association under both white-noise and colored-noise conditions. Analysis and simulation results show that the biased encodings thus derived are optimal and superior to existing models in performance. Together with the Wee-Kohonen unbiased encoding, this study settles the optimality issue of linear associative memories and enhances their practicality.
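The unbiased encoding referenced here is the pseudo-inverse rule: store key and value patterns as columns of X and Y and take M = Y X⁺, which gives exact recall whenever the keys are linearly independent. The paper's biased encodings adapt this to biased patterns and colored noise; only the unbiased rule is sketched below (dimensions are illustrative):

```python
import numpy as np

def build_memory(keys, values):
    # Pseudo-inverse linear associative memory: M = Y X^+ minimizes ||Y - M X||_F.
    # keys: (d, p) matrix of key columns; values: (q, p) matrix of value columns.
    return values @ np.linalg.pinv(keys)
```

When the p key columns are linearly independent, X⁺X = I, so M @ keys reproduces `values` exactly; for dependent or noisy keys, M gives the least-squares recall instead.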
