Results 1 - 13 of 13
1.
Neural Comput ; 29(8): 2203-2291, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28562221

ABSTRACT

Optimal control theory and machine learning techniques are combined to formulate, and solve in closed form, an optimal control formulation of online learning from supervised examples with regularization of the updates. The connections with the classical linear quadratic Gaussian (LQG) optimal control problem, of which the proposed learning paradigm is a nontrivial variation since it involves random matrices, are investigated. The obtained optimal solutions are compared with the Kalman filter estimate of the parameter vector to be learned. It is shown that the proposed algorithm is less sensitive to outliers than the Kalman estimate (thanks to the presence of the regularization term), thus providing smoother estimates over time. The basic formulation of the proposed online learning framework refers to a discrete-time setting with a finite learning horizon and a linear model. Various extensions are investigated, including the infinite learning horizon and, via the so-called kernel trick, the case of nonlinear models.
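The contrast drawn in this abstract between the Kalman estimate and a regularized online update can be sketched as follows. This is a minimal illustration, not the paper's closed-form optimal control solution: the model, step sizes, and noise levels below are our own choices.

```python
import numpy as np

def kalman_step(w, P, x, y, r):
    """One Kalman-filter update for the static linear model y = x @ w + noise,
    with measurement-noise variance r and parameter covariance P."""
    x = x.reshape(-1, 1)
    S = float(x.T @ P @ x) + r            # innovation variance
    K = (P @ x) / S                       # Kalman gain
    w = w + (K * (y - float(x.T @ w))).ravel()
    P = P - K @ x.T @ P
    return w, P

def regularized_step(w, x, y, lam, lr):
    """Gradient step on the squared loss with a penalty on the update size,
    which damps the reaction to outliers and smooths the estimate over time."""
    grad = (x @ w - y) * x
    return w - lr * grad / (1.0 + lam)

rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
w_kf, P = np.zeros(2), np.eye(2)
w_reg = np.zeros(2)
for _ in range(300):
    x = rng.normal(size=2)
    y = x @ w_true + 0.1 * rng.normal()
    w_kf, P = kalman_step(w_kf, P, x, y, r=0.01)
    w_reg = regularized_step(w_reg, x, y, lam=0.5, lr=0.1)
```

Both estimates converge to the true parameter vector; the regularized update reacts less sharply to any single noisy observation, at the cost of slower adaptation.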

2.
Neural Comput ; 27(2): 388-480, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25380338

ABSTRACT

The mathematical foundations of a new theory for the design of intelligent agents are presented. The proposed learning paradigm is centered around the concept of constraint, representing the interactions with the environment, and the parsimony principle. The classical regularization framework of kernel machines is naturally extended to the case in which the agents interact with a richer environment, where abstract granules of knowledge, compactly described by different linguistic formalisms, can be translated into the unified notion of constraint for defining the hypothesis set. Constrained variational calculus is exploited to derive general representation theorems that provide a description of the optimal body of the agent (i.e., the functional structure of the optimal solution to the learning problem), which is the basis for devising new learning algorithms. We show that regardless of the kind of constraints, the optimal body of the agent is a support constraint machine (SCM) based on representer theorems that extend classical results for kernel machines and provide new representations. In a sense, the expressiveness of constraints yields a semantic-based regularization theory, which strongly restricts the hypothesis set of classical regularization. Some guidelines to unify continuous and discrete computational mechanisms are given so as to accommodate in the same framework various kinds of stimuli, for example, supervised examples and logic predicates. The proposed view of learning from constraints incorporates classical learning from examples and extends naturally to the case in which the examples are subsets of the input space, which is related to learning propositional logic clauses.

3.
Sci Rep ; 14(1): 19676, 2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39181926

ABSTRACT

Despite the negative externalities on the environment and human health, today's economies still produce excessive carbon dioxide emissions. As a result, governments are trying to shift production and consumption to more sustainable models that reduce the environmental impact of carbon dioxide emissions. The European Union, in particular, has implemented an innovative policy to reduce carbon dioxide emissions by creating a market for emission rights, the emissions trading system. The objective of this paper is to perform a counterfactual analysis to measure the impact of the emissions trading system on the reduction of carbon dioxide emissions. For this purpose, a recently developed statistical machine learning method called matrix completion with fixed effects estimation is used and compared to traditional econometric techniques. We apply matrix completion with fixed effects estimation to the prediction of missing counterfactual entries of a carbon dioxide emissions matrix whose elements (indexed row-wise by country and column-wise by year) represent emissions without the emissions trading system for country-year pairs. The results obtained, confirmed by robust diagnostic tests, show a significant effect of the emissions trading system on the reduction of carbon dioxide emissions: the majority of European Union countries included in our analysis reduced their total carbon dioxide emissions (associated with selected industries) by about 15.4% during the emissions trading system treatment period 2005-2020, compared to the total carbon dioxide emissions (associated with the same industries) that would have been achieved in the absence of the emissions trading system policy. Finally, several managerial/practical implications of the study are discussed, together with its possible extensions.

4.
Neural Comput ; 25(4): 1029-106, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23339616

ABSTRACT

Kernel machines traditionally arise from an elegant formulation based on measuring the smoothness of the admissible solutions by the norm in the reproducing kernel Hilbert space (RKHS) generated by the chosen kernel. It was pointed out that they can be formulated in a related functional framework, in which the Green's function of suitable differential operators is thought of as a kernel. In this letter, we give our own picture of this intriguing connection by emphasizing some relevant distinctions between these different ways of measuring the smoothness of admissible solutions. In particular, we show that for some kernels, there is no associated differential operator. We especially emphasize the crucial relevance of boundary conditions, which is in fact the truly distinguishing feature of the approach based on differential operators. We provide a general solution to the problem of learning from data and boundary conditions and illustrate the significant role played by boundary conditions with examples. It turns out that the degree of freedom that arises in the traditional formulation of kernel machines is indeed a limitation, which is partly overcome when incorporating the boundary conditions. This likely holds true in many real-world applications in which there is prior knowledge about the expected behavior of classifiers and regressors on the boundary.
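A classical one-dimensional instance of this kernel/differential-operator correspondence (our illustrative example, not drawn from the letter): the exponential kernel $K(x,x') = \tfrac{1}{2} e^{-\lvert x-x'\rvert}$ is the Green's function of the operator $L = I - \frac{d^2}{dx^2}$ on $\mathbb{R}$, under decay-at-infinity boundary conditions:

```latex
\left( I - \frac{d^2}{dx^2} \right) \frac{1}{2}\, e^{-\lvert x - x' \rvert} \;=\; \delta(x - x'),
```

so the induced RKHS norm penalizes $\int_{\mathbb{R}} \bigl( f(x)^2 + f'(x)^2 \bigr)\, dx$. The Gaussian kernel, by contrast, corresponds to no finite-order differential operator, consistent with the observation above that some kernels have no associated differential operator.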

5.
Qual Quant ; : 1-34, 2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37359962

ABSTRACT

Social soft skills are crucial for workers to perform their tasks, yet it is hard to train people on them and to readapt their skill set when needed. In the present work, we analyze the possible effects of the COVID-19 pandemic on social soft skills in the context of Italian occupations related to 88 economic sectors and 14 age groups. We leverage detailed information coming from ICP (i.e., the Italian equivalent of O*NET), provided by the Italian National Institute for the Analysis of Public Policy, from the microdata for research on the continuous detection of labor force, provided by the Italian National Institute of Statistics (ISTAT), and from ISTAT data on the Italian population. Based on these data, we simulate the impact of COVID-19 on workplace characteristics and working styles that were more severely affected by the lockdown measures and the sanitary dispositions during the pandemic (e.g., physical proximity, face-to-face discussions, working remotely). We then apply matrix completion, a machine-learning technique often used in the context of recommender systems, to predict the average variation in the social soft skills importance levels required for each occupation when working conditions change, as some changes might be persistent in the near future. Professions, sectors, and age groups showing negative average variations are exposed to a deficit in their social soft-skills endowment, which might ultimately lead to lower productivity.

6.
Sci Rep ; 12(1): 9639, 2022 Jun 10.
Article in English | MEDLINE | ID: mdl-35689004

ABSTRACT

This work applies Matrix Completion (MC), a class of machine-learning methods commonly used in recommendation systems, to analyze economic complexity. In this paper, MC is applied to reconstruct the Revealed Comparative Advantage (RCA) matrix, whose elements express the relative advantage of countries in given classes of products, as evidenced by yearly trade flows. A high-accuracy binary classifier is derived from the MC application to discriminate between elements of the RCA matrix that are, respectively, higher/lower than one. We introduce a novel Matrix cOmpletion iNdex of Economic complexitY (MONEY) based on MC and related to the degree of predictability of the RCA entries of different countries (the lower the predictability, the higher the complexity). Unlike previously developed economic complexity indices, MONEY takes into account several singular vectors of the matrix reconstructed by MC, whereas other indices are based only on one or two eigenvectors of a suitable symmetric matrix derived from the RCA matrix. Finally, MC is compared with state-of-the-art economic complexity indices, showing that the MC-based classifier achieves better performance than previous methods based on the application of machine learning to economic complexity.
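The binary matrix the MC-based classifier targets can be sketched directly from trade flows. This is a minimal illustration with made-up numbers; the paper works on real yearly country-product export data.

```python
import numpy as np

def rca(T):
    """Revealed Comparative Advantage from a trade-flow matrix T
    (rows: countries, columns: product classes). Entry (c, p) compares
    product p's share in country c's exports with p's share in world trade."""
    country_share = T / T.sum(axis=1, keepdims=True)
    world_share = T.sum(axis=0) / T.sum()
    return country_share / world_share

# Toy flows: 3 countries x 3 product classes.
T = np.array([[10.0, 0.0, 5.0],
              [ 2.0, 8.0, 5.0],
              [ 1.0, 1.0, 8.0]])
R = rca(T)
M = (R > 1).astype(int)   # binary specialization matrix (RCA above/below one)
```

An entry of M equal to 1 marks a revealed comparative advantage; it is the predictability of these entries under matrix completion that the MONEY index summarizes.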


Subjects
Machine Learning
7.
Sci Rep ; 12(1): 20019, 2022 Nov 21.
Article in English | MEDLINE | ID: mdl-36414664

ABSTRACT

This paper formalizes smooth curve coloring (i.e., curve identification) in the presence of curve intersections as an optimization problem and investigates theoretical properties of its optimal solution. Moreover, it presents a novel automatic technique for solving such a problem. Formally, the proposed algorithm aims at minimizing the summation of the total variations, over a given interval, of the first derivatives of all the labeled curves, written as functions of a scalar parameter. The algorithm is based on a first-order finite difference approximation of the curves and a sequence of prediction/correction steps. At each step, the predicted points are attributed to the subsequently observed points of the curves by solving a Euclidean bipartite matching subproblem. A comparison with a more computationally expensive dynamic programming technique is presented. The proposed algorithm is applied successfully to elastic periodic metamaterials for the realization of high-performance mechanical metafilters. Its output is shown to be in excellent agreement with desirable smoothness and periodicity properties of the metafilter dispersion curves. Possible developments, including those based on machine-learning techniques, are pointed out.
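The prediction/correction step described above can be sketched as follows. This is a simplified illustration: scalar-valued curves, linear extrapolation as the first-order prediction, and brute-force enumeration in place of an efficient Euclidean bipartite matching solver.

```python
from itertools import permutations

def best_matching(pred, obs):
    """Assign predicted points to observed points, minimizing the total
    squared distance (brute force; fine for a handful of curves)."""
    best, best_cost = None, float("inf")
    for perm in permutations(range(len(pred))):
        cost = sum((pred[i] - obs[j]) ** 2 for i, j in enumerate(perm))
        if cost < best_cost:
            best, best_cost = perm, cost
    return best

def track_curves(samples):
    """samples[k] lists the unlabeled curve values observed at step k.
    Predict each curve's next value by first-order (linear) extrapolation,
    then attribute observations to curves via the matching."""
    curves = [[float(v)] for v in samples[0]]
    for k in range(1, len(samples)):
        pred = [2 * c[-1] - c[-2] if len(c) > 1 else c[-1] for c in curves]
        perm = best_matching(pred, samples[k])
        for i, j in enumerate(perm):
            curves[i].append(float(samples[k][j]))
    return curves

# Two straight lines crossing between the second and third samples.
curves = track_curves([[0, 3], [1, 2], [2, 1], [3, 0]])
# curves == [[0.0, 1.0, 2.0, 3.0], [3.0, 2.0, 1.0, 0.0]]
```

Nearest-value labeling without the prediction step would swap the two labels at the crossing; extrapolating first lets the matching carry each curve smoothly through the intersection.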

8.
Neural Comput ; 22(3): 793-829, 2010 Mar.
Article in English | MEDLINE | ID: mdl-19922296

ABSTRACT

Various regularization techniques are investigated in supervised learning from data. Theoretical features of the associated optimization problems are studied, and sparse suboptimal solutions are searched for. Rates of approximate optimization are estimated for sequences of suboptimal solutions formed by linear combinations of n-tuples of computational units, and statistical learning bounds are derived. As hypothesis sets, reproducing kernel Hilbert spaces and their subsets are considered.


Subjects
Artificial Intelligence, Algorithms, Linear Models, Time Factors
9.
Parkinsonism Relat Disord ; 47: 64-70, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29208345

ABSTRACT

BACKGROUND AND PURPOSE: In this study, we attempt to automatically classify individual patients with different parkinsonian disorders, making use of pattern recognition techniques to distinguish among several forms of parkinsonism (multi-class classification), based on a set of binary classifiers that discriminate each disorder from all the others. METHODS: We combine diffusion tensor imaging, proton spectroscopy, and morphometric-volumetric data to obtain quantitative MR markers, which are provided to support vector machines with the aim of recognizing the different parkinsonian disorders. Feature selection is used to find the most important features for classification. We also exploit a graph-based technique on the set of quantitative markers to extract additional features from the dataset and increase classification accuracy. RESULTS: When graph-based features are not used, the MR markers most frequently selected by the feature selection procedure reflect alterations in brain regions that are also usually considered to discriminate parkinsonisms in routine clinical practice. Graph-derived features typically increase the diagnostic accuracy and reduce the number of features required. CONCLUSIONS: The results obtained in this work demonstrate that support vector machines applied to multimodal brain MR imaging, together with graph-based features, represent a novel and highly accurate approach to discriminating parkinsonisms and a useful tool to assist diagnosis.
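The one-vs-rest scheme used above (one binary classifier per disorder, each discriminating that disorder from all the others) can be sketched with a linear SVM trained by the Pegasos subgradient method. This is a simplified stand-in: the study used MR-derived feature vectors and feature selection, omitted here; the data below are synthetic.

```python
import numpy as np

def pegasos(X, y, lam=0.01, epochs=50, seed=0):
    """Linear SVM trained with the Pegasos subgradient method; y in {-1, +1}.
    X should include a constant column if a bias term is wanted."""
    rng = np.random.default_rng(seed)
    w, t = np.zeros(X.shape[1]), 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            step = eta * y[i] * X[i] if y[i] * (X[i] @ w) < 1 else 0.0
            w = (1 - eta * lam) * w + step
    return w

def one_vs_rest(X, y):
    """One binary SVM per class, each separating that class from the rest;
    prediction picks the class whose classifier scores highest."""
    models = {c: pegasos(X, np.where(y == c, 1.0, -1.0)) for c in np.unique(y)}
    def predict(Z):
        classes = list(models)
        scores = np.stack([Z @ models[c] for c in classes], axis=1)
        return np.array([classes[k] for k in scores.argmax(axis=1)])
    return predict

# Three well-separated synthetic "patient groups" in a 2-D feature space.
rng = np.random.default_rng(1)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
X = np.vstack([c + 0.4 * rng.normal(size=(30, 2)) for c in centers])
X = np.hstack([X, np.ones((len(X), 1))])        # bias column
y = np.repeat([0, 1, 2], 30)
predict = one_vs_rest(X, y)
```

On separable groups the three binary margins agree, and the arg-max over classifier scores recovers the class labels.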


Subjects
Brain/diagnostic imaging, Magnetic Resonance Imaging, Parkinsonian Disorders/classification, Parkinsonian Disorders/diagnostic imaging, Support Vector Machine, Aged, Brain/metabolism, Female, Humans, Image Processing, Computer-Assisted, Male, Middle Aged, Proton Magnetic Resonance Spectroscopy, Supranuclear Palsy, Progressive/diagnostic imaging, Supranuclear Palsy, Progressive/metabolism
10.
IEEE Trans Neural Netw Learn Syst ; 26(9): 2019-32, 2015 Sep.
Article in English | MEDLINE | ID: mdl-25389245

ABSTRACT

A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function), play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the door to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.
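For reference, the classical kernel-machine representer theorem that the support-constraint representation generalizes (standard result, in our notation): with supervised examples only, playing the role of soft pointwise constraints,

```latex
f^{\star} = \arg\min_{f \in \mathcal{H}_K} \sum_{i=1}^{n} V\bigl(y_i, f(x_i)\bigr) + \lambda \,\lVert f \rVert_{\mathcal{H}_K}^{2}
\quad \Longrightarrow \quad
f^{\star}(x) = \sum_{i=1}^{n} \alpha_i \, K(x, x_i),
```

i.e., the optimal solution is a finite kernel expansion over the training points. In the paradigm above, the expansion runs instead over the support constraints, which generalize the support vectors of this classical expansion.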

11.
Comput Intell Neurosci ; 2015: 109029, 2015.
Article in English | MEDLINE | ID: mdl-25960736

ABSTRACT

Most Active Contour Models (ACMs) treat image segmentation as a functional optimization problem, dividing an image into several regions by optimizing a suitable functional. Among ACMs, variational level set methods have been used to build active contours with the aim of modeling arbitrarily complex shapes; moreover, they can also handle topological changes of the contours. Self-Organizing Maps (SOMs) have attracted the attention of many computer vision scientists, particularly for modeling an active contour by utilizing the prototypes (weights) of a SOM to control the evolution of the contour. SOM-based models have generally been proposed with the aim of exploiting the specific ability of SOMs to learn the edge-map information via their topology preservation property, and of overcoming some drawbacks of other ACMs, such as getting trapped in local minima of the image energy functional to be minimized. In this survey, we illustrate the main concepts of variational level set-based ACMs and SOM-based ACMs and their relationship, and we comprehensively review the development of their state-of-the-art models from a machine learning perspective, with a focus on their strengths and weaknesses.


Subjects
Models, Theoretical, Neural Networks, Computer, Pattern Recognition, Automated/methods, Psychological Framing, Humans
12.
Neural Netw ; 24(2): 171-82, 2011 Mar.
Article in English | MEDLINE | ID: mdl-21094023

ABSTRACT

Neural networks provide a more flexible approximation of functions than traditional linear regression. In the latter, one can only adjust the coefficients in linear combinations of fixed sets of functions, such as orthogonal polynomials or Hermite functions, while for neural networks, one may also adjust the parameters of the functions being combined. However, some useful properties of linear approximators (such as uniqueness, homogeneity, and continuity of best approximation operators) are not satisfied by neural networks. Moreover, optimization of parameters in neural networks is more difficult than in linear regression. Experimental results suggest that these drawbacks of neural networks are offset by substantially lower model complexity, enabling accurate approximation even in high-dimensional cases. We give some theoretical results comparing requirements on model complexity for two types of approximators: traditional linear ones and so-called variable-basis types, which include neural networks and radial and kernel models. We compare upper bounds on worst-case errors in variable-basis approximation with lower bounds on such errors for any linear approximator. Using methods from nonlinear approximation and integral representations tailored to computational units, we describe some cases where neural networks outperform any linear approximator.
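The dimension-independent variable-basis upper bounds alluded to here are of the Maurey-Jones-Barron type (standard statement, in our notation): if $f$ belongs to the closure of the convex hull of a dictionary $G$ in a Hilbert space, with $\sup_{g \in G} \lVert g \rVert \le s_G$, then the best approximation by $n$-term linear combinations of elements of $G$ satisfies

```latex
\inf_{f_n \in \operatorname{span}_n G} \lVert f - f_n \rVert \;\le\; \sqrt{\frac{s_G^{2} - \lVert f \rVert^{2}}{n}},
```

a rate of order $n^{-1/2}$ that does not depend on the input dimension, whereas worst-case rates of any fixed linear approximator on comparable function sets typically degrade as the dimension grows.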


Subjects
Dictionaries as Topic, Linear Models, Neural Networks, Computer, Computational Biology, Models, Neurological, Statistics, Nonparametric
13.
Neural Netw ; 24(8): 881-7, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21704495

ABSTRACT

Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called "dictionary") and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., speeds of decrease of approximation errors for a growing number n of basis functions. Proofs of upper bounds on approximation rates by dictionary-based models are inspected to show that, for individual functions, they do not imply estimates for dictionary-based models that do not also hold for some linear models. Instead, the possibility of getting faster approximation rates by dictionary-based models is demonstrated for worst-case errors in approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.


Subjects
Computer Simulation, Linear Models, Algorithms, Neural Networks, Computer, Reproducibility of Results