Results 1 - 6 of 6
1.
Nature; 588(7839): 604-609, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33361790

ABSTRACT

Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess [1] and Go [2], where a perfect simulator is available. However, in real-world problems, the dynamics governing the environment are often complex and unknown. Here we present the MuZero algorithm, which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. The MuZero algorithm learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy and the value function. When evaluated on 57 different Atari games [3] (the canonical video-game environment for testing artificial-intelligence techniques, in which model-based planning approaches have historically struggled [4]), the MuZero algorithm achieved state-of-the-art performance. When evaluated on Go, chess and shogi (canonical environments for high-performance planning), the MuZero algorithm matched, without any knowledge of the game dynamics, the superhuman performance of the AlphaZero algorithm [5], which was supplied with the rules of the game.
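
In the paper, the learned model is factored into a representation function, a dynamics function and a prediction function. The sketch below illustrates that three-part interface with toy linear stand-ins for MuZero's deep networks, and a greedy unroll in place of the full tree search; it is a minimal illustration, not the published implementation.

```python
# Minimal sketch of MuZero's learned-model interface; the linear maps
# below are stand-ins for the paper's deep networks, and the planning
# loop is a greedy unroll rather than a full tree search.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, N_ACTIONS = 8, 4

W_h = rng.normal(size=(STATE_DIM, STATE_DIM))               # representation h
W_g = rng.normal(size=(STATE_DIM, STATE_DIM + N_ACTIONS))   # dynamics g
W_f = rng.normal(size=(N_ACTIONS + 1, STATE_DIM))           # prediction f

def represent(observation):
    """h: real observation -> abstract hidden state."""
    return np.tanh(W_h @ observation)

def dynamics(state, action):
    """g: (hidden state, action) -> (next hidden state, predicted reward)."""
    one_hot = np.eye(N_ACTIONS)[action]
    nxt = np.tanh(W_g @ np.concatenate([state, one_hot]))
    return nxt, float(nxt.mean())          # toy reward head

def predict(state):
    """f: hidden state -> (action-selection policy, value estimate)."""
    out = W_f @ state
    logits, value = out[:N_ACTIONS], float(out[-1])
    policy = np.exp(logits - logits.max())
    return policy / policy.sum(), value

# Plan entirely inside the learned model: apply it iteratively, never
# querying the real environment's (unknown) dynamics.
state = represent(rng.normal(size=STATE_DIM))
for step in range(3):
    policy, value = predict(state)
    action = int(policy.argmax())
    state, reward = dynamics(state, action)
    print(f"step={step} action={action} reward={reward:+.2f} value={value:+.2f}")
```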

2.
Nature; 550(7676): 354-359, 2017 Oct 18.
Article in English | MEDLINE | ID: mdl-29052630

ABSTRACT

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here we introduce an algorithm based solely on reinforcement learning, without human data, guidance or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo's own move selections and also the winner of AlphaGo's games. This neural network improves the strength of the tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.
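
The training target described above (predict the program's own move selections and the eventual winner of its games) corresponds to the paper's combined loss, l = (z - v)^2 - pi^T log p + c * ||theta||^2. A minimal numpy sketch with synthetic inputs:

```python
# Sketch of the AlphaGo Zero training loss: value MSE toward the game
# winner z, cross-entropy of the policy head toward the MCTS visit
# distribution pi, plus L2 regularization. All inputs here are synthetic.
import numpy as np

def alphago_zero_loss(value, z, policy_logits, pi, params, c=1e-4):
    p = np.exp(policy_logits - policy_logits.max())
    p /= p.sum()
    value_term = (z - value) ** 2                  # (z - v)^2
    policy_term = -float(pi @ np.log(p + 1e-12))   # -pi^T log p
    reg_term = c * float(params @ params)          # c * ||theta||^2
    return value_term + policy_term + reg_term

pi = np.array([0.7, 0.2, 0.1])        # MCTS visit-count distribution
logits = np.array([2.0, 0.5, -1.0])   # policy-head output
print(alphago_zero_loss(value=0.3, z=1.0, policy_logits=logits,
                        pi=pi, params=np.ones(5)))
```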


Subjects
Games, Recreational; Software; Unsupervised Machine Learning; Humans; Neural Networks, Computer; Reinforcement, Psychology; Supervised Machine Learning
3.
Nature; 529(7587): 484-9, 2016 Jan 28.
Article in English | MEDLINE | ID: mdl-26819042

ABSTRACT

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
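
The search described above mixes the two network types at the leaves: the paper evaluates a leaf position as V(s) = (1 - lambda) * v(s) + lambda * z, where v(s) is the value network's estimate and z is the outcome of a fast Monte Carlo rollout. A minimal sketch, with stubs standing in for the trained networks:

```python
# Sketch of AlphaGo's mixed leaf evaluation: blend the value network's
# estimate with the outcome of a fast Monte Carlo rollout. The two stubs
# below are placeholders for the trained networks, not the real models.
import random
random.seed(0)

def value_network(state):
    """Stand-in for the trained value network: position estimate in [-1, 1]."""
    return random.uniform(-1.0, 1.0)

def fast_rollout(state):
    """Stand-in for the fast rollout policy: play out to a win (+1) or loss (-1)."""
    return random.choice([-1.0, 1.0])

def leaf_value(state, lam=0.5):
    # The paper reports that an equal mixture (lambda = 0.5) worked best.
    return (1.0 - lam) * value_network(state) + lam * fast_rollout(state)

print(leaf_value(state="dummy-board"))
```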


Subjects
Games, Recreational; Neural Networks, Computer; Software; Supervised Machine Learning; Computers; Europe; Humans; Monte Carlo Method; Reinforcement, Psychology
4.
Int J Neural Syst; 19(4): 227-40, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19731397

ABSTRACT

This paper presents a new methodology for automatically learning an optimal neurostimulation strategy for the treatment of epilepsy. The technical challenge is to automatically modulate neurostimulation parameters, as a function of the observed EEG signal, so as to minimize the frequency and duration of seizures. The methodology leverages recent techniques from the machine learning literature, in particular the reinforcement learning paradigm, to formalize this optimization problem. We present an algorithm which is able to automatically learn an adaptive neurostimulation strategy directly from labeled training data acquired from animal brain tissues. Our results suggest that this methodology can be used to automatically find a stimulation strategy which effectively reduces the incidence of seizures, while also minimizing the amount of stimulation applied. This work highlights the crucial role that modern machine learning techniques can play in the optimization of treatment strategies for patients with chronic disorders such as epilepsy.
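
One way to make the formalization above concrete is a standard tabular Q-learning loop: states bin EEG features, actions set a stimulation level, and the reward penalizes both seizures and the amount of stimulation. The sketch below is a hedged illustration with an invented toy environment; the paper's actual algorithm, features and data differ.

```python
# Hedged sketch of the RL framing: hypothetical EEG-derived states,
# stimulation-level actions, and a reward trading off seizure suppression
# against stimulation applied. The "environment" is a random toy, not
# brain-slice data.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 4, 3          # e.g. EEG feature bins x stim levels
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def reward(seizing, stim_level, seizure_cost=1.0, stim_cost=0.1):
    # Penalize seizures heavily and stimulation mildly.
    return -(seizure_cost * seizing + stim_cost * stim_level)

Q = np.zeros((N_STATES, N_ACTIONS))
state = 0
for step in range(5000):
    # Epsilon-greedy action selection over stimulation levels.
    if rng.random() < EPS:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(Q[state].argmax())
    # Toy dynamics: stronger stimulation makes seizures less likely.
    seizing = int(rng.random() < 0.5 / (1 + action))
    next_state = int(rng.integers(N_STATES))
    r = reward(seizing, stim_level=action)
    Q[state, action] += ALPHA * (r + GAMMA * Q[next_state].max() - Q[state, action])
    state = next_state

print("Greedy stimulation level per state:", Q.argmax(axis=1))
```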


Subjects
Electric Stimulation Therapy/methods; Epilepsy/therapy; Learning/physiology; Reinforcement, Psychology; 4-Aminopyridine/pharmacology; Algorithms; Animals; Biophysics; Disease Models, Animal; Electroencephalography/methods; Entorhinal Cortex/physiopathology; Epilepsy/chemically induced; Epilepsy/pathology; In Vitro Techniques; Male; Man-Machine Systems; Potassium Channel Blockers/pharmacology; Rats; Rats, Sprague-Dawley
5.
Science; 362(6419): 1140-1144, 2018 Dec 07.
Article in English | MEDLINE | ID: mdl-30523106

ABSTRACT

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.
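
Because the only domain knowledge is the game rules, an AlphaZero-style learner can be written against a rules-only interface. The Protocol below is a hypothetical sketch of that boundary (the method names are invented, not the authors' API), with a trivial Nim variant showing that nothing game-specific leaks through:

```python
# Sketch of a rules-only game interface: everything an AlphaZero-style
# self-play loop needs, with no handcrafted evaluation or heuristics.
# Method names are hypothetical; the published system's API may differ.
import random
from typing import List, Protocol

class Game(Protocol):
    def legal_actions(self) -> List[int]: ...
    def apply(self, action: int) -> "Game": ...
    def is_terminal(self) -> bool: ...
    def outcome(self) -> float: ...      # +1 / 0 / -1 from player 1's view

def random_playout(game: Game) -> float:
    """Uniform random self-play to the end: a baseline any Game supports."""
    while not game.is_terminal():
        game = game.apply(random.choice(game.legal_actions()))
    return game.outcome()

class Nim:
    """Tiny concrete example: take 1-2 stones; taking the last stone wins."""
    def __init__(self, stones=7, player=1):
        self.stones, self.player = stones, player
    def legal_actions(self): return [1, 2] if self.stones >= 2 else [1]
    def apply(self, action): return Nim(self.stones - action, -self.player)
    def is_terminal(self): return self.stones == 0
    def outcome(self): return -self.player   # the mover who emptied the pile won

print(random_playout(Nim()))
```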


Subjects
Artificial Intelligence; Reinforcement, Psychology; Video Games; Algorithms; Humans; Software
6.
Exp Neurol; 241: 179-83, 2013 Mar.
Article in English | MEDLINE | ID: mdl-23313899

ABSTRACT

Deep brain stimulation (DBS) is a promising tool for treating drug-resistant epileptic patients. Currently, the most common approach is fixed-frequency stimulation (periodic pacing) by means of stimulating devices that operate under open-loop control. However, a drawback of this DBS strategy is the impossibility of tailoring a personalized treatment, which also limits the optimization of the stimulating apparatus. Here, we propose a novel DBS methodology based on a closed-loop control strategy, developed by exploiting statistical machine learning techniques, in which stimulation parameters are adapted to the current neural activity, thus allowing for seizure suppression that is fine-tuned on the individual scale (adaptive stimulation). By means of field potential recordings from adult rat hippocampus-entorhinal cortex (EC) slices treated with the convulsant drug 4-aminopyridine, we determined the effectiveness of this approach compared to low-frequency periodic pacing, and found that the closed-loop stimulation strategy: (i) has similar efficacy to low-frequency periodic pacing in suppressing ictal-like events but (ii) is more efficient than periodic pacing in that it requires fewer electrical pulses. We also provide evidence that the closed-loop stimulation strategy can alternatively be employed to tune the frequency of a periodic pacing strategy. Our findings indicate that the adaptive stimulation strategy may represent a novel, promising approach to DBS for individually tailored epilepsy treatment.
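
The open-loop/closed-loop contrast can be illustrated with a toy threshold detector: periodic pacing fires on a fixed schedule, while the adaptive policy fires only when elevated activity is detected, with a refractory period between pulses. The signal, threshold and burst structure below are synthetic, not the study's recordings or classifier.

```python
# Toy contrast between open-loop periodic pacing and a closed-loop policy
# that stimulates only at detected high-amplitude activity. Entirely
# synthetic; a stand-in for the study's statistical detector.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
signal = rng.normal(size=n)
signal[np.arange(n) % 200 < 20] += 3.0      # five synthetic "ictal" bursts

open_loop = np.zeros(n, dtype=bool)
open_loop[::100] = True                     # fixed-frequency periodic pacing

closed_loop = np.zeros(n, dtype=bool)       # stimulate on detected activity
refractory, last_pulse = 50, -50
for t in np.flatnonzero(signal > 2.5):      # crude amplitude detector
    if t - last_pulse >= refractory:
        closed_loop[t] = True
        last_pulse = t

print("open-loop pulses:  ", int(open_loop.sum()))
print("closed-loop pulses:", int(closed_loop.sum()))
```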


Subjects
Adaptation, Physiological/physiology; Evoked Potentials/physiology; Limbic System/physiology; Animals; Biophysics; Electric Stimulation/adverse effects; In Vitro Techniques; Neural Pathways/physiology; Rats; Rats, Sprague-Dawley