Results 1 - 12 of 12
1.
Ann Bot ; 127(3): 281-295, 2021 02 09.
Article in English | MEDLINE | ID: mdl-32969464

ABSTRACT

BACKGROUND: With up to 200 published contributions, the GreenLab mathematical model of plant growth, developed since 2000 under Sino-French cooperation for agronomic applications, descends from the structural models developed in the AMAP unit, which characterize the development of plants and encompass it in a conceptual mathematical framework. The model also incorporates widely recognized crop-model concepts (thermal time, light use efficiency and light interception), adapting them to the level of the individual plant. SCOPE: Such a long-term research effort calls for an overview at some point. That is the objective of this review, which retraces the main history of the model's development and its current status, highlighting three aspects. (1) What are the key features of the GreenLab model? (2) How can the model guide the definition of relevant measurement strategies and experimental protocols? (3) What kinds of applications can such a model address? This last question is answered using case studies as illustrations, and through the Discussion. CONCLUSIONS: The results obtained over several decades illustrate a key feature of the GreenLab model: owing to its concise mathematical formulation based on the factorization of plant structure, it is accompanied by dedicated methods and experimental protocols for parameter estimation, in the deterministic or stochastic case, at the single-plant or population level. Besides providing a reliable statistical framework, this intense and long-term research effort has yielded new insights into the internal trophic regulations of many plant species and new guidelines for genetic improvement or the optimization of crop systems.


Subject(s)
Models, Theoretical , Plant Development , Computer Simulation , Plant Structures
2.
Neural Comput ; 29(5): 1151-1203, 2017 05.
Article in English | MEDLINE | ID: mdl-28181880

ABSTRACT

This review examines the relevance of parameter identifiability for statistical models used in machine learning. In addition to defining the main concepts, we address several identifiability issues closely related to machine learning, weighing the advantages and disadvantages of state-of-the-art research and highlighting recent progress. First, we review criteria from the literature for determining the parameter structure of models. This involves three related issues: parameter identifiability, parameter redundancy, and reparameterization. Second, we review the deep influence of identifiability on various aspects of machine learning from both theoretical and application viewpoints. In addition to illustrating the utility and influence of identifiability, we emphasize the interplay among identifiability theory, machine learning, mathematical statistics, information theory, optimization theory, information geometry, Riemannian geometry, symbolic computation, Bayesian inference, algebraic geometry, and others. Finally, we present a new perspective together with the associated challenges.

3.
AoB Plants ; 15(2): plac061, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36751366

ABSTRACT

The rapid increase of the global population and climate change pose major challenges to the sustainable production of food to meet consumer demand. Process-based models (PBMs) have long been used in agricultural crop production for predicting yield and understanding the environmental regulation of plant physiological processes and its consequences for crop growth and development. In recent years, with the increasing use of sensor and communication technologies for data acquisition in agriculture, machine learning (ML) has become a popular tool in yield prediction (especially on a large scale) and phenotyping. Both PBMs and ML are frequently used in studies on major challenges in crop production, and each has its own advantages and drawbacks. We propose to combine PBMs and ML, given their intrinsic complementarity, to develop knowledge- and data-driven modelling (KDDM) with high prediction accuracy as well as good interpretability. Parallel, serial and modular structures are the three main modes that can be adopted to develop KDDM for agricultural applications. The KDDM approach helps to simplify model parameterization by making use of sensor data and improves the accuracy of yield prediction. Furthermore, it has great potential to expand the boundary of current crop models, allowing upscaling towards the farm, regional or global level and downscaling to the gene-to-cell level. The KDDM approach is a promising way of combining simulation models in agriculture with the fast developments in data science while the mechanisms of many genetic and physiological processes are still under investigation, especially at the nexus of increasing food production, mitigating climate change and achieving sustainability.

4.
IEEE Trans Pattern Anal Mach Intell ; 44(1): 76-86, 2022 01.
Article in English | MEDLINE | ID: mdl-32750797

ABSTRACT

In this work, we introduce the average top-k (ATk) loss, the average of the k largest individual losses over the training data, as a new aggregate loss for supervised learning. We show that the ATk loss is a natural generalization of the two most widely used aggregate losses, the average loss and the maximum loss, yet it can better adapt to different data distributions thanks to the extra flexibility provided by the choice of k. Furthermore, it remains a convex function of the individual losses and can be combined with different types of individual loss without a significant increase in computation. We then interpret the ATk loss from the perspectives of modifying the individual loss and of robustness to the training data distribution. We further study the classification calibration of the ATk loss and the error bounds of the ATk-SVM model. We demonstrate the applicability of minimum average top-k learning to supervised learning problems, including binary and multi-class classification and regression, using experiments on both synthetic and real datasets.


Subject(s)
Algorithms , Supervised Machine Learning
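The ATk aggregate loss has a direct implementation. The sketch below (an illustration of the definition, not the authors' code) shows how k interpolates between the maximum loss (k = 1) and the average loss (k = n):

```python
def average_top_k_loss(individual_losses, k):
    """ATk aggregate loss: the average of the k largest individual losses.

    k = 1 recovers the maximum loss; k = len(individual_losses) recovers
    the average loss. The result is convex in the individual losses.
    """
    top_k = sorted(individual_losses, reverse=True)[:k]
    return sum(top_k) / k

losses = [0.1, 0.9, 0.4, 0.2]
print(average_top_k_loss(losses, 1))  # maximum loss: 0.9
print(average_top_k_loss(losses, 4))  # average loss
```

Intermediate k values weight the hardest examples more than the average loss does, without letting a single outlier dominate the way the maximum loss does.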
5.
IEEE Trans Neural Netw Learn Syst ; 32(7): 3206-3216, 2021 07.
Article in English | MEDLINE | ID: mdl-32759086

ABSTRACT

The ability to learn new concepts from incrementally arriving data over time is essential for the development of a lifelong learning system. However, deep neural networks often forget previously learned concepts when continually learning new ones, which is known as the catastrophic forgetting problem. Its main cause is that past concept data are no longer available while the neural weights are being changed to learn new concepts. In this article, we propose an incremental concept learning framework with two components, ICLNet and RecallNet. ICLNet, which consists of a trainable feature extractor and a dynamic concept memory matrix, learns new concepts incrementally; we propose a concept-contrastive loss to reduce the magnitude of neural weight changes and mitigate catastrophic forgetting. RecallNet consolidates old concept memories and recalls pseudo-samples while ICLNet learns new concepts; we propose a balanced online memory recall strategy to reduce the information loss of old concept memories. We evaluate the proposed approach on the MNIST, Fashion-MNIST, and SVHN data sets and compare it with other pseudorehearsal-based approaches. Extensive experiments demonstrate the effectiveness of our approach.


Subject(s)
Machine Learning , Mental Recall , Neural Networks, Computer , Algorithms , Concept Formation , Humans , Online Systems
6.
IEEE Trans Neural Netw Learn Syst ; 29(3): 510-522, 2018 03.
Article in English | MEDLINE | ID: mdl-28055924

ABSTRACT

The correntropy-induced loss (C-loss) function has the attractive property of being robust to outliers. In this paper, we study the C-loss kernel classifier with a Tikhonov regularization term, which is used to avoid overfitting. Applying the half-quadratic optimization algorithm, which converges much faster than gradient-based optimization, we find that the resulting C-loss kernel classifier is equivalent to an iteratively weighted least squares support vector machine (LS-SVM). This relationship helps explain the robustness of the iteratively weighted LS-SVM from the correntropy and density-estimation perspectives. On large-scale data sets with low-rank Gram matrices, we suggest using incomplete Cholesky decomposition to speed up training. Moreover, we use the representer theorem to improve the sparseness of the resulting C-loss kernel classifier. Experimental results confirm that our methods are more robust to outliers than common existing classifiers.
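For reference, the C-loss itself is a bounded, Welsch-type function of the residual. A minimal sketch follows, using one common normalization from the correntropy literature; the bandwidth sigma is a free parameter here, not a value from the paper:

```python
import math

def c_loss(residual, sigma=1.0):
    """Correntropy-induced loss: grows like a squared loss for small
    residuals but saturates for large ones, so outliers cannot
    dominate the empirical risk."""
    beta = 1.0 / (1.0 - math.exp(-0.5))  # scales the loss to 1 at residual == sigma
    return beta * (1.0 - math.exp(-residual ** 2 / (2.0 * sigma ** 2)))
```

In the half-quadratic view, minimizing this loss alternates with computing Gaussian sample weights, which is what makes the classifier equivalent to an iteratively weighted LS-SVM.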

7.
IEEE Trans Neural Netw Learn Syst ; 25(2): 249-64, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24807026

ABSTRACT

In this paper, both Bayesian and mutual-information classifiers are examined for binary classification with or without a reject option. General decision rules are derived for Bayesian classifiers that distinguish error types and reject types. A formal analysis reveals a parameter redundancy among the cost terms when abstaining classifications are enforced; this redundancy implies an intrinsic inconsistency in interpreting the cost terms. When no values are given for the cost terms, we demonstrate the weakness of Bayesian classifiers in class-imbalanced classification. In contrast, mutual-information classifiers provide an objective solution from the given data, showing a reasonable balance among error types and reject types. Numerical examples using the two types of classifiers, including extremely class-imbalanced cases, confirm these differences. Finally, we briefly summarize the application advantages and disadvantages of the Bayesian and mutual-information classifiers.
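The mutual-information criterion can be computed directly from a confusion matrix. The sketch below illustrates the general idea (not the paper's exact formulation): decisions, possibly including a reject column, are treated as the output of a channel whose input is the true class.

```python
import math

def confusion_mutual_information(counts):
    """Mutual information I(T; Y) in nats between the true class T and
    the decision Y, estimated from a confusion matrix counts[t][y].
    A reject option is simply an extra decision column."""
    n = sum(sum(row) for row in counts)
    p_t = [sum(row) / n for row in counts]        # marginal of true classes
    p_y = [sum(col) / n for col in zip(*counts)]  # marginal of decisions
    mi = 0.0
    for t, row in enumerate(counts):
        for y, c in enumerate(row):
            if c:
                p = c / n
                mi += p * math.log(p / (p_t[t] * p_y[y]))
    return mi

perfect = [[5, 0], [0, 5]]  # error-free binary classifier
print(confusion_mutual_information(perfect))  # log(2), about 0.693
```

Unlike a cost-weighted Bayesian risk, this score needs no user-supplied cost terms, which is the objectivity argument made in the abstract.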

8.
IEEE Trans Neural Netw Learn Syst ; 24(1): 35-46, 2013 Jan.
Article in English | MEDLINE | ID: mdl-24808205

ABSTRACT

This paper proposes a novel nonnegative sparse representation approach, called two-stage sparse representation (TSR), for robust face recognition on a large-scale database. Following a divide-and-conquer strategy, TSR decomposes robust face recognition into an outlier detection stage and a recognition stage. In the first stage, we propose a general multisubspace framework to learn a robust metric in which noise and outliers in image pixels are detected; candidate loss functions, including L1, L2,1 and correntropy, are studied. In the second stage, based on the learned metric and collaborative representation, we propose an efficient nonnegative sparse representation algorithm to find an approximate sparse-representation solution. According to the L1-ball theory in sparse representation, the approximate solution is unique and can be optimized efficiently. A filtering strategy is then developed to avoid computing the sparse representation over the whole large-scale dataset. Moreover, theoretical analysis gives a necessary condition for the nonnegative least squares technique to find a sparse solution. Extensive experiments on several public databases demonstrate that the proposed TSR approach generally achieves better classification accuracy than state-of-the-art sparse representation methods. More importantly, it achieves a significant reduction in computational cost compared with the sparse representation classifier, making TSR more suitable for robust face recognition on a large-scale dataset.


Subject(s)
Algorithms , Artificial Intelligence/standards , Face/anatomy & histology , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/standards
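The recognition stage rests on nonnegative least squares. As a generic stand-in for the paper's dedicated algorithm (which exploits collaborative representation), here is a projected-gradient NNLS sketch; the step size and iteration count are illustrative choices:

```python
def nnls_projected_gradient(A, b, steps=2000, lr=0.01):
    """Minimize ||Ax - b||^2 subject to x >= 0 by gradient descent with
    projection onto the nonnegative orthant (clamping at zero)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        residual = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
                    for i in range(m)]
        grad = [2.0 * sum(A[i][j] * residual[i] for i in range(m))
                for j in range(n)]
        x = [max(0.0, x[j] - lr * grad[j]) for j in range(n)]
    return x

# The nonnegativity constraint zeroes out the coefficient that a plain
# least squares fit would make negative:
print(nnls_projected_gradient([[1.0, 0.0], [0.0, 1.0]], [1.0, -1.0]))
# -> approximately [1.0, 0.0]
```

The clamping step is what tends to produce exact zeros, which is why nonnegativity alone can already act as a weak sparsity prior.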
9.
IEEE Trans Neural Netw ; 22(12): 2447-59, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21965200

ABSTRACT

This paper reports an extension of our previous investigations on adding transparency to neural networks. We focus on a class of linear priors (LPs), such as symmetry, ranking list, boundary and monotonicity, which represent either linear-equality or linear-inequality priors. A generalized constraint neural network with LPs (GCNN-LP) model is studied. Unlike other existing modeling approaches, the GCNN-LP model exhibits two advantages. First, any LP is embedded in an explicitly structural mode, which may offer a higher degree of transparency than a pure algorithmic mode. Second, a direct elimination and least squares approach is adopted to study the model, which in our experiments yields better accuracy and computational cost than Lagrange multiplier techniques. Specific attention is paid to both "hard" (strictly satisfied) and "soft" (weakly satisfied) constraints for regression problems. Numerical investigations are made on synthetic examples as well as on real-world datasets. Simulation results demonstrate the effectiveness of the proposed modeling approach in comparison with other existing approaches.


Subject(s)
Algorithms , Artificial Intelligence , Linear Models , Regression Analysis , Computer Simulation
10.
IEEE Trans Pattern Anal Mach Intell ; 33(8): 1561-76, 2011 Aug.
Article in English | MEDLINE | ID: mdl-21135440

ABSTRACT

In this paper, we present a sparse correntropy framework for computing robust sparse representations of face images for recognition. Compared with the state-of-the-art l1-norm-based sparse representation classifier (SRC), which assumes that the noise also has a sparse representation, our sparse algorithm is based on the maximum correntropy criterion, which is much less sensitive to outliers. To develop a more tractable and practical approach, we impose a nonnegativity constraint on the variables in the maximum correntropy criterion and develop a half-quadratic optimization technique that maximizes the objective function approximately in an alternating way, so that the complex optimization problem reduces, at each iteration, to learning a sparse representation through a weighted linear least squares problem with a nonnegativity constraint. Our extensive experiments demonstrate that the proposed method is more robust and efficient than related state-of-the-art methods in dealing with occlusion and corruption in face recognition. In particular, the proposed method improves both recognition accuracy and receiver operating characteristic (ROC) curves, while its computational cost is much lower than that of the SRC algorithms.


Subject(s)
Biometric Identification/methods , Face/anatomy & histology , Algorithms , Female , Humans , Male , ROC Curve
11.
IEEE Trans Image Process ; 20(6): 1485-94, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21216713

ABSTRACT

Principal component analysis (PCA) minimizes the mean square error (MSE) and is therefore sensitive to outliers. In this paper, we present a new rotational-invariant PCA based on the maximum correntropy criterion (MCC). A half-quadratic optimization algorithm is adopted to compute the correntropy objective: at each iteration, the complex optimization problem is reduced to a quadratic problem that can be solved efficiently by a standard optimization method. The proposed method has the following benefits: 1) it is robust to outliers through the MCC mechanism, which is more theoretically solid than a heuristic rule based on the MSE; 2) it requires no zero-mean assumption on the data and can estimate the data mean during optimization; and 3) its optimal solution consists of the principal eigenvectors of a robust covariance matrix corresponding to the largest eigenvalues. In addition, kernel techniques are introduced to deal with nonlinearly distributed data. Numerical results demonstrate that the proposed method can outperform robust rotational-invariant PCAs based on the L1 norm when outliers occur.


Subject(s)
Algorithms , Data Interpretation, Statistical , Image Enhancement/methods , Image Interpretation, Computer-Assisted/methods , Pattern Recognition, Automated/methods , Principal Component Analysis , Entropy , Models, Statistical , Reproducibility of Results , Sensitivity and Specificity
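Benefit 2), estimating the data mean robustly during the optimization, is the easiest part to illustrate. The sketch below is a simplified half-quadratic loop for the mean alone (the eigenvector computation is omitted); sigma and the iteration count are illustrative values:

```python
import math

def correntropy_weights(data, center, sigma):
    """Gaussian (correntropy) weights: points far from the current center
    get weights near zero, so outliers barely influence the update."""
    return [math.exp(-sum((x - c) ** 2 for x, c in zip(row, center))
                     / (2.0 * sigma ** 2)) for row in data]

def robust_mean(data, sigma=1.0, iters=20):
    """Half-quadratic estimate of the data mean under the MCC."""
    center = [sum(col) / len(data) for col in zip(*data)]  # ordinary mean
    for _ in range(iters):
        w = correntropy_weights(data, center, sigma)
        total = sum(w)
        center = [sum(wi * row[j] for wi, row in zip(w, data)) / total
                  for j in range(len(center))]
    return center

points = [[0.0, 0.0], [0.2, 0.0], [-0.2, 0.0], [0.0, 0.2], [10.0, 10.0]]
print(robust_mean(points))  # stays near the cluster, ignoring the outlier
```

Each outer iteration is one half-quadratic step: fix the weights, solve the resulting quadratic (here, a weighted mean), and repeat.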
12.
IEEE Trans Neural Netw ; 20(4): 715-21, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19258200

ABSTRACT

This brief presents a two-phase construction approach for pruning both the input and hidden units of multilayer perceptrons (MLPs) based on mutual information (MI). First, all features of the input vectors are ranked according to their relevance to the target outputs through a forward strategy. The salient input units of an MLP are then determined from the ranking and from their contributions to the network's performance, and the irrelevant input features are identified and eliminated. Second, redundant hidden units are removed from the trained MLP one after another according to a novel relevance measure. Compared with related work, the proposed strategy exhibits better performance. Moreover, experimental results show that the proposed method is comparable or even superior to the support vector machine (SVM) and support vector regression (SVR). Finally, the advantages of the MI-based method are investigated in comparison with a sensitivity analysis (SA)-based method.
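The first phase, ranking input features by their mutual information with the target, can be sketched for discrete variables as follows (a plug-in MI estimate; a minimal illustration, not the brief's full forward strategy):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X; Y) in nats for discrete sequences."""
    n = len(xs)
    p_x, p_y = Counter(xs), Counter(ys)
    p_xy = Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((p_x[x] / n) * (p_y[y] / n)))
               for (x, y), c in p_xy.items())

def rank_features(columns, target):
    """Rank feature columns by relevance (MI with the target), highest first."""
    scores = [(mutual_information(col, target), i)
              for i, col in enumerate(columns)]
    return [i for _, i in sorted(scores, reverse=True)]
```

A low-ranked feature that does not improve the network's performance would then be pruned; the second phase applies an analogous relevance measure to hidden units.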
