1.
Nature; 630(8016): 493-500, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38718835

ABSTRACT

The introduction of AlphaFold 2 [1] has spurred a revolution in modelling the structure of proteins and their interactions, enabling a huge range of applications in protein modelling and design [2-6]. Here we describe our AlphaFold 3 model with a substantially updated diffusion-based architecture that is capable of predicting the joint structure of complexes including proteins, nucleic acids, small molecules, ions and modified residues. The new AlphaFold model demonstrates substantially improved accuracy over many previous specialized tools: far greater accuracy for protein-ligand interactions compared with state-of-the-art docking tools, much higher accuracy for protein-nucleic acid interactions compared with nucleic-acid-specific predictors and substantially higher antibody-antigen prediction accuracy compared with AlphaFold-Multimer v.2.3 [7,8]. Together, these results show that high-accuracy modelling across biomolecular space is possible within a single unified deep-learning framework.


Subject(s)
Deep Learning, Ligands, Models, Molecular, Proteins, Software, Humans, Antibodies/chemistry, Antibodies/metabolism, Antigens/metabolism, Antigens/chemistry, Deep Learning/standards, Ions/chemistry, Ions/metabolism, Molecular Docking Simulation, Nucleic Acids/chemistry, Nucleic Acids/metabolism, Protein Binding, Protein Conformation, Proteins/chemistry, Proteins/metabolism, Reproducibility of Results, Software/standards
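The abstract above names a diffusion-based architecture but gives no detail. As a rough, generic illustration of what diffusion-style sampling of atomic coordinates can look like, here is a minimal Python sketch; the denoiser, noise schedule, and all constants are placeholders of my own choosing, not AlphaFold 3's actual network or training setup.

import numpy as np

def toy_denoiser(x_noisy, sigma):
    # Placeholder for a learned denoiser network: simply shrinks the noisy
    # coordinates toward the origin in proportion to the noise level.
    return x_noisy / (1.0 + sigma)

def sample_structure(n_atoms, sigmas, rng):
    # Generic annealed denoising loop: start from pure Gaussian noise and
    # refine the coordinates over a schedule of decreasing noise levels.
    x = rng.normal(scale=sigmas[0], size=(n_atoms, 3))
    for sigma in sigmas:
        x_pred = toy_denoiser(x, sigma)                            # denoised estimate
        x = x_pred + rng.normal(scale=0.1 * sigma, size=x.shape)   # small re-noising
    return x

rng = np.random.default_rng(0)
coords = sample_structure(10, np.geomspace(10.0, 0.1, 20), rng)
print(coords.shape)  # (10, 3): one xyz position per atom

In a real system the denoiser would be a trained network conditioned on the sequence and other inputs; here it is a single arithmetic step so the loop stays self-contained.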
2.
Nature; 575(7782): 350-354, 2019 Nov.
Article in English | MEDLINE | ID: mdl-31666705

ABSTRACT

Many real-world applications require artificial agents to compete and coordinate with other agents in complex environments. As a stepping stone to this goal, the domain of StarCraft has emerged as an important challenge for artificial intelligence research, owing to its iconic and enduring status among the most difficult professional esports and its relevance to the real world in terms of its raw complexity and multi-agent challenges. Over the course of a decade and numerous competitions [1-3], the strongest agents have simplified important aspects of the game, utilized superhuman capabilities, or employed hand-crafted sub-systems [4]. Despite these advantages, no previous agent has come close to matching the overall skill of top StarCraft players. We chose to address the challenge of StarCraft using general-purpose learning methods that are in principle applicable to other complex domains: a multi-agent reinforcement learning algorithm that uses data from both human and agent games within a diverse league of continually adapting strategies and counter-strategies, each represented by deep neural networks [5,6]. We evaluated our agent, AlphaStar, in the full game of StarCraft II, through a series of online games against human players. AlphaStar was rated at Grandmaster level for all three StarCraft races and above 99.8% of officially ranked human players.


Subject(s)
Reinforcement, Psychology, Video Games, Artificial Intelligence, Humans, Learning
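As a rough illustration of the league idea described in the abstract above (agents training against a pool of current and past counter-strategies), here is a toy Python sketch. The ToyAgent class, the Elo-style win model, and the update constants are invented for illustration only and bear no relation to AlphaStar's actual networks or training algorithm.

import random

class ToyAgent:
    # Stand-in for a neural-network policy: a single scalar "skill".
    def __init__(self, agent_id, skill=0.0):
        self.agent_id = agent_id
        self.skill = skill

    def beats(self, opponent, rng):
        # Elo-style win probability derived from the skill gap.
        p_win = 1.0 / (1.0 + 10 ** ((opponent.skill - self.skill) / 4.0))
        return rng.random() < p_win

def train_league(n_agents=4, n_rounds=200, seed=0):
    rng = random.Random(seed)
    league = [ToyAgent(i) for i in range(n_agents)]
    for _ in range(n_rounds):
        learner = rng.choice(league)    # agent currently being trained
        opponent = rng.choice(league)   # counter-strategy sampled from the league
        # Crude stand-in for a policy update: improve more after a win.
        learner.skill += 0.1 if learner.beats(opponent, rng) else 0.02
    return league

for agent in train_league():
    print(agent.agent_id, round(agent.skill, 2))

The point of the sketch is only the matchmaking structure: every learner keeps facing opponents drawn from the whole league, so strategies that exploit a single fixed opponent cannot dominate.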
3.
Science; 364(6443): 859-865, 2019 May 31.
Article in English | MEDLINE | ID: mdl-31147514

ABSTRACT

Reinforcement learning (RL) has shown great success in increasingly complex single-agent environments and two-player turn-based games. However, the real world contains multiple agents, each learning and acting independently to cooperate and compete with other agents. We used a tournament-style evaluation to demonstrate that an agent can achieve human-level performance in a three-dimensional multiplayer first-person video game, Quake III Arena in Capture the Flag mode, using only pixels and game points scored as input. We used a two-tier optimization process in which a population of independent RL agents are trained concurrently from thousands of parallel matches on randomly generated environments. Each agent learns its own internal reward signal and rich representation of the world. These results indicate the great potential of multiagent reinforcement learning for artificial intelligence research.


Subject(s)
Machine Learning, Reinforcement, Psychology, Video Games, Reward
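The abstract above mentions a two-tier optimization in which an inner loop trains each agent on its own internal reward while an outer loop tunes a concurrently trained population. The following toy Python sketch shows that generic population-based pattern; the reward weights, scoring function, and exploit-and-perturb step are invented placeholders, not the paper's implementation.

import random

def inner_update(reward_weights, rng):
    # Stand-in for the inner tier (RL on the agent's internal reward):
    # returns a match score that depends on the agent's reward weights.
    return sum(w * rng.random() for w in reward_weights)

def outer_loop(pop_size=6, generations=20, seed=0):
    rng = random.Random(seed)
    # Each population member carries its own internal-reward weights.
    population = [[rng.random() for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [inner_update(w, rng) for w in population]  # inner-tier training
        ranked = sorted(range(pop_size), key=lambda i: scores[i])
        worst, best = ranked[0], ranked[-1]
        # Outer tier: the weakest agent copies and perturbs the strongest
        # agent's internal-reward weights (exploit, then explore).
        population[worst] = [w + rng.gauss(0.0, 0.05) for w in population[best]]
    return population

print([round(w, 2) for w in outer_loop()[0]])

In the real setting the outer loop would score agents by game points from actual matches; here a random score keeps the example self-contained while preserving the two-level structure.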