Results 1 - 4 of 4
1.
IEEE Trans Neural Netw Learn Syst; 33(7): 3079-3093, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33513112

ABSTRACT

In this article, we develop a framework for showing that neural networks can overcome the curse of dimensionality in different high-dimensional approximation problems. Our approach is based on the notion of a catalog network, which is a generalization of a standard neural network in which the nonlinear activation functions can vary from layer to layer as long as they are chosen from a predefined catalog of functions. As such, catalog networks constitute a rich family of continuous functions. We show that, under appropriate conditions on the catalog, catalog networks can be efficiently approximated with rectified linear unit-type networks, and we provide precise estimates on the number of parameters needed for a given approximation accuracy. As special cases of the general results, we obtain different classes of functions that can be approximated with rectified linear unit networks without the curse of dimensionality.


Subjects
Neural Networks, Computer
2.
Proc Math Phys Eng Sci; 476(2244): 20190630, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33408553

ABSTRACT

It has long been known that high-dimensional linear parabolic partial differential equations (PDEs) can be approximated by Monte Carlo methods with a computational effort that grows polynomially both in the dimension and in the reciprocal of the prescribed accuracy. In other words, linear PDEs do not suffer from the curse of dimensionality. For general semilinear PDEs with Lipschitz coefficients, however, it remained an open question whether they suffer from the curse of dimensionality. In this paper we partially solve this open problem. More precisely, we prove, in the case of semilinear heat equations with gradient-independent and globally Lipschitz continuous nonlinearities, that the computational effort of a variant of the recently introduced multilevel Picard approximations grows at most polynomially both in the dimension and in the reciprocal of the required accuracy.
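The polynomial-cost Monte Carlo claim for the linear case can be illustrated with a minimal sketch (not the paper's multilevel Picard scheme, which targets the semilinear case): by the Feynman-Kac formula, the solution of the linear heat equation u_t + Δu = 0 with terminal condition u(T, ·) = g is u(t, x) = E[g(x + √(2(T−t)) Z)] with Z standard normal, so each sample costs O(d) work regardless of dimension. The function name and test problem below are illustrative choices, not from the paper.

```python
import numpy as np

def heat_mc(g, x, t, T, n_samples, seed=None):
    """Monte Carlo estimate of u(t, x) for the linear heat equation
    u_t + Laplacian(u) = 0 with u(T, .) = g, via the Feynman-Kac
    representation u(t, x) = E[ g(x + sqrt(2 (T - t)) Z) ], Z ~ N(0, I_d)."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n_samples, x.shape[0]))
    return g(x + np.sqrt(2.0 * (T - t)) * z).mean()

# 100-dimensional test with g(x) = |x|^2, whose exact solution is
# u(t, x) = |x|^2 + 2 d (T - t); here u(0, 0) = 2 * 100 * 1 = 200.
d = 100
estimate = heat_mc(lambda y: (y ** 2).sum(axis=-1),
                   x=np.zeros(d), t=0.0, T=1.0,
                   n_samples=200_000, seed=0)
```

The cost is n_samples × O(d) operations and the mean-square error decays like 1/n_samples independently of d, which is the polynomial scaling in both the dimension and the reciprocal accuracy that the abstract refers to.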

3.
Proc Natl Acad Sci U S A; 115(34): 8505-8510, 2018 Aug 21.
Article in English | MEDLINE | ID: mdl-30082389

ABSTRACT

Developing algorithms for solving high-dimensional partial differential equations (PDEs) has been an exceedingly difficult task for a long time, due to the notoriously difficult problem known as the "curse of dimensionality." This paper introduces a deep learning-based approach that can handle general high-dimensional parabolic PDEs. To this end, the PDEs are reformulated using backward stochastic differential equations and the gradient of the unknown solution is approximated by neural networks, very much in the spirit of deep reinforcement learning with the gradient acting as the policy function. Numerical results on examples including the nonlinear Black-Scholes equation, the Hamilton-Jacobi-Bellman equation, and the Allen-Cahn equation suggest that the proposed algorithm is quite effective in high dimensions, in terms of both accuracy and cost. This opens up possibilities in economics, finance, operational research, and physics, by considering all participating agents, assets, resources, or particles together at the same time, instead of making ad hoc assumptions on their interrelationships.
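A heavily stripped-down sketch can show the structure of the BSDE reformulation (an assumption-laden toy, not the paper's method: a linear heat equation instead of a semilinear PDE, a single Brownian increment instead of a time grid, and the gradient-approximating neural network collapsed to one trainable constant vector z). The unknown value u(0, x0) becomes a trainable parameter y0, fitted by rolling the BSDE forward and matching the simulated terminal value against the terminal condition:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 1, 1.0
sigma = np.sqrt(2.0)                         # forward SDE: X_T = x0 + sigma * W_T
g = lambda x: (x ** 2).sum(axis=-1)          # terminal condition g(x) = |x|^2
x0 = np.zeros(d)

y0 = 0.0                                     # trainable guess for u(0, x0)
z = np.zeros(d)                              # trainable stand-in for the gradient network
lr, batch = 0.1, 4096

for step in range(2000):
    w_T = np.sqrt(T) * rng.standard_normal((batch, d))  # Brownian motion at time T
    x_T = x0 + sigma * w_T                              # forward trajectory
    y_T = y0 + w_T @ z                                  # BSDE rollout: dY = Z dW
    r = y_T - g(x_T)                                    # mismatch with terminal condition
    # gradient descent on the mean-squared terminal mismatch E[r^2]
    y0 -= lr * 2.0 * r.mean()
    z -= lr * 2.0 * (r[:, None] * w_T).mean(axis=0)

# For u_t + Laplacian(u) = 0 with this g, the exact value is
# u(0, x0) = |x0|^2 + 2 d T = 2, which y0 approximates after training.
```

In the actual deep BSDE method, the constant z is replaced by neural networks Z(t_n, X_{t_n}) evaluated along a time discretization, and the same terminal-mismatch loss is minimized by stochastic gradient descent over the network weights.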

4.
Proc Math Phys Eng Sci; 473(2207): 20170104, 2017 Nov.
Article in English | MEDLINE | ID: mdl-29225489

ABSTRACT

In a recent article (Jentzen et al. 2016 Commun. Math. Sci. 14, 1477-1500 (doi:10.4310/CMS.2016.v14.n6.a1)), it has been established that, for every arbitrarily slow convergence speed and every natural number d ∈ {4, 5, …}, there exist d-dimensional stochastic differential equations with infinitely often differentiable and globally bounded coefficients such that no approximation method based on finitely many observations of the driving Brownian motion can converge in absolute mean to the solution faster than the given speed of convergence. In this paper, we strengthen the above result by proving that this slow convergence phenomenon also arises in two (d = 2) and three (d = 3) space dimensions.
