Results 1 - 5 of 5
1.
Proc Natl Acad Sci U S A ; 117(17): 9208-9215, 2020 Apr 28.
Article in English | MEDLINE | ID: mdl-32291338

ABSTRACT

What can humans compute in their heads? We have in mind a variety of cryptographic protocols, games like Sudoku, crossword puzzles, speed chess, and so on. For example, can a person compute a function in his or her head so that an eavesdropper with a powerful computer, who sees the responses to random inputs, still cannot infer responses to new inputs? To address such questions, we propose a rigorous model of human computation and associated measures of complexity. We apply the model and measures first and foremost to the problem of 1) humanly computable password generation, and then consider the related problems of 2) humanly computable "one-way functions" and 3) humanly computable "pseudorandom generators." The theory of human computability developed here plays by different rules than standard computability; the polynomial vs. exponential time divide of modern complexity theory is irrelevant to human computation. In human computability, the step counts for both humans and computers must be more concrete. As an application and running example, password generation schemas are humanly computable algorithms based on private keys. Humanly computable and/or humanly usable mean, roughly speaking, that any human who needs, and is capable of using, passwords can, if sufficiently motivated, generate and memorize a secret key in less than 1 h (including all rehearsals) and can subsequently use the schema plus key to transform website names (challenges) into passwords (responses) in less than 1 min. Moreover, the schemas have precisely defined measures of security against all adversaries, human and/or machine.
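To make the challenge-response setup concrete, here is a toy Python sketch of a password schema: the secret key maps letters to digits, and a website name is transformed into a password by accumulating keyed digits modulo 10. This illustrates the interface only; the key format, the chaining rule, and the 8-digit output length are invented here, it is not one of the paper's schemas, and it makes no claim to their security guarantees.

```python
import random
import string

def generate_key(seed=None):
    """Secret key: a random map from letters to single digits.
    (Key format invented for this sketch; the paper's differs.)"""
    rng = random.Random(seed)
    return {c: rng.randrange(10) for c in string.ascii_lowercase}

def respond(key, challenge, length=8):
    """Transform a challenge (website name) into a digit password
    by accumulating the keyed digits of its letters modulo 10."""
    letters = [c for c in challenge.lower() if c in key] or ["a"]
    acc, digits = 0, []
    for i in range(length):
        acc = (acc + key[letters[i % len(letters)]]) % 10
        digits.append(str(acc))
    return "".join(digits)

key = generate_key(seed=1)
print(respond(key, "example.com"))  # same key + same site -> same password
```

Every step here (look up a digit, add modulo 10) is a plausible mental operation, which is what the human step-count measure is about; whether a given rule resists an eavesdropping adversary is exactly what the paper's security measures quantify.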


Subject(s)
Cognition/physiology; Algorithms; Humans; Models, Theoretical; Software
2.
Proc Natl Acad Sci U S A ; 117(25): 14464-14472, 2020 Jun 23.
Article in English | MEDLINE | ID: mdl-32518114

ABSTRACT

Assemblies are large populations of neurons believed to imprint memories, concepts, words, and other cognitive information. We identify a repertoire of operations on assemblies. These operations correspond to properties of assemblies observed in experiments, and can be shown, analytically and through simulations, to be realizable by generic, randomly connected populations of neurons with Hebbian plasticity and inhibition. Assemblies and their operations constitute a computational model of the brain that we call the Assembly Calculus, occupying a level of detail intermediate between spiking neurons and synapses on the one hand and the whole brain on the other. The resulting computational system can be shown, under certain assumptions, to be in principle capable of carrying out arbitrary computations. We hypothesize that something like it may underlie higher human cognitive functions such as reasoning, planning, and language. In particular, we propose a plausible brain architecture, based on assemblies, for implementing the syntactic processing of language in cortex; this architecture is consistent with recent experimental results.
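For intuition, here is a minimal NumPy sketch of one assembly operation, projection: a fixed stimulus repeatedly fires into a randomly connected area with k-winners-take-all inhibition and multiplicative Hebbian plasticity, and a stable set of k winners (the assembly) emerges. The parameter values and real-valued initial weights are illustrative simplifications, not the paper's exact model.

```python
import numpy as np

def project(n=1000, k=50, p=0.05, beta=0.1, rounds=20, seed=0):
    """Sketch of assembly 'projection': repeated firing plus Hebbian
    updates carve out a stable set of k winners in a random network."""
    rng = np.random.default_rng(seed)
    # Random connectivity: weights from the stimulus and within the area.
    stim_w = rng.random(n) * (rng.random(n) < p)
    rec_w = rng.random((n, n)) * (rng.random((n, n)) < p)  # rec_w[i, j]: j -> i
    winners = np.argsort(stim_w)[-k:]  # first firing: top-k on stimulus input alone
    for _ in range(rounds):
        inputs = stim_w + rec_w[:, winners].sum(axis=1)  # total synaptic input
        new_winners = np.argsort(inputs)[-k:]            # k-winners-take-all (inhibition)
        rec_w[np.ix_(new_winners, winners)] *= 1 + beta  # Hebbian: used synapses strengthen
        stim_w[new_winners] *= 1 + beta
        winners = new_winners
    return winners
```

Tracking the overlap of consecutive winner sets shows the dynamics settling onto an essentially fixed assembly after a few rounds, the convergence behavior the paper establishes analytically and in simulation.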


Subject(s)
Cerebral Cortex/physiology; Cognition/physiology; Models, Neurological; Neurons/physiology; Synapses/physiology; Cerebral Cortex/cytology; Computer Simulation; Humans; Language
3.
Bioinformatics ; 33(11): 1741-1743, 2017 Jun 01.
Article in English | MEDLINE | ID: mdl-28158334

ABSTRACT

SUMMARY: In constraint-based metabolic modelling, physical and biochemical constraints define a polyhedral convex set of feasible flux vectors. Uniform sampling of this set provides an unbiased characterization of the metabolic capabilities of a biochemical network. However, reliable uniform sampling of genome-scale biochemical networks is challenging due to their high dimensionality and inherent anisotropy. Here, we present an implementation of a new sampling algorithm, coordinate hit-and-run with rounding (CHRR). This algorithm is based on the provably efficient hit-and-run random walk and crucially uses a preprocessing step to round the anisotropic flux set. CHRR provably converges to a uniform stationary sampling distribution. We apply it to metabolic networks of increasing dimensionality. We show that it converges several times faster than a popular artificial centering hit-and-run algorithm, enabling reliable and tractable sampling of genome-scale biochemical networks.

AVAILABILITY AND IMPLEMENTATION: https://github.com/opencobra/cobratoolbox

CONTACT: ronan.mt.fleming@gmail.com or vempala@cc.gatech.edu

SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
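The core random-walk step of coordinate hit-and-run is simple enough to sketch. Below is an illustrative Python/NumPy version for a bounded polytope in inequality form {x : Ax <= b}: pick a random coordinate direction, compute the feasible segment along it, and jump to a uniform point on that segment. The rounding preprocessing that gives CHRR its name, and the null-space handling of the steady-state equality constraints in real flux sets, are omitted here; the reference implementation is the MATLAB one in the COBRA Toolbox linked above.

```python
import numpy as np

def chr_sample(A, b, x0, n_samples=1000, seed=0):
    """Coordinate hit-and-run over the bounded polytope {x : A x <= b},
    starting from an interior point x0. Walk step only; no rounding."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        i = rng.integers(len(x))           # random coordinate direction e_i
        slack = b - A @ x                  # feasibility: t * A[:, i] <= slack
        col = A[:, i]
        upper = np.min(slack[col > 0] / col[col > 0], initial=np.inf)
        lower = np.max(slack[col < 0] / col[col < 0], initial=-np.inf)
        x[i] += rng.uniform(lower, upper)  # uniform point on the feasible segment
        samples.append(x.copy())
    return np.array(samples)

# Example: uniform samples from the unit cube in 3 dimensions.
A = np.vstack([-np.eye(3), np.eye(3)])
b = np.concatenate([np.zeros(3), np.ones(3)])
pts = chr_sample(A, b, x0=np.full(3, 0.5))
```

Coordinate directions keep each step cheap (a single matrix-vector product in this sketch, even less with incremental slack updates); the rounding step is what keeps the mixing time reasonable on the strongly anisotropic flux sets the abstract describes.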


Subject(s)
Computational Biology/methods; Metabolic Networks and Pathways; Models, Biological; Software; Algorithms; Humans
4.
Neural Comput ; 27(10): 2132-2147, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26313600

ABSTRACT

Humans learn categories of complex objects quickly and from a few examples. Random projection has been suggested as a means to learn and categorize efficiently. We investigate how random projection affects categorization by humans and by very simple neural networks on the same stimuli and categorization tasks, and how this relates to the robustness of categories. We find that (1) drastic reduction in stimulus complexity via random projection does not degrade performance in categorization tasks by either humans or simple neural networks, (2) human accuracy and neural network accuracy are remarkably correlated, even at the level of individual stimuli, and (3) the performance of both is strongly predicted by a natural notion of category robustness.
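As a sketch of the computational side of this setup, the following Python example projects high-dimensional stimuli to a much lower dimension with a random Gaussian matrix and categorizes with a nearest-centroid rule, a deliberately simple stand-in for the study's "very simple neural networks". The synthetic two-category data and all parameter values are invented for illustration; the point is that accuracy survives a 1,000-to-50 dimension reduction when the categories are robust (well separated).

```python
import numpy as np

def random_project(X, d_low, seed=0):
    """Compress rows of X to d_low dimensions with a random Gaussian
    matrix (Johnson-Lindenstrauss-style projection)."""
    rng = np.random.default_rng(seed)
    R = rng.normal(size=(X.shape[1], d_low)) / np.sqrt(d_low)
    return X @ R

def nearest_centroid_accuracy(X, y):
    """Hold out every other sample; classify by nearest class centroid."""
    Xtr, ytr, Xte, yte = X[::2], y[::2], X[1::2], y[1::2]
    centroids = {c: Xtr[ytr == c].mean(axis=0) for c in np.unique(ytr)}
    preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
             for x in Xte]
    return np.mean(np.array(preds) == yte)

# Two synthetic, well-separated ("robust") categories in 1,000 dimensions.
rng = np.random.default_rng(1)
d, n = 1000, 200
X = np.vstack([rng.normal(0.0, 1.0, (n, d)), rng.normal(1.0, 1.0, (n, d))])
y = np.repeat([0, 1], n)

print(nearest_centroid_accuracy(X, y))                      # full dimension
print(nearest_centroid_accuracy(random_project(X, 50), y))  # after projection
```

Both accuracies should come out at or near 1.0 here because the class means are far apart relative to the noise; shrinking the separation degrades both in tandem, mirroring the finding that robustness predicts performance.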


Subject(s)
Nerve Net/physiology; Pattern Recognition, Visual/physiology; Photic Stimulation/methods; Visual Cortex/physiology; Adolescent; Female; Humans; Male; Random Allocation; Young Adult
5.
Nat Protoc ; 14(3): 639-702, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30787451

ABSTRACT

Constraint-based reconstruction and analysis (COBRA) provides a molecular mechanistic framework for integrative analysis of experimental molecular systems biology data and quantitative prediction of physicochemically and biochemically feasible phenotypic states. The COBRA Toolbox is a comprehensive desktop software suite of interoperable COBRA methods. It has found widespread application in biology, biomedicine, and biotechnology because its functions can be flexibly combined to implement tailored COBRA protocols for any biochemical network. This protocol is an update to the COBRA Toolbox v.1.0 and v.2.0. Version 3.0 includes new methods for quality-controlled reconstruction, modeling, topological analysis, strain and experimental design, and network visualization, as well as network integration of chemoinformatic, metabolomic, transcriptomic, proteomic, and thermochemical data. New multi-lingual code integration also enables an expansion in COBRA application scope via high-precision, high-performance, and nonlinear numerical optimization solvers for multi-scale, multi-cellular, and reaction kinetic modeling, respectively. This protocol provides an overview of all these new features and can be adapted to generate and analyze constraint-based models in a wide variety of scenarios. The COBRA Toolbox v.3.0 provides an unparalleled depth of COBRA methods.
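At its core, the constraint-based calculation the toolbox industrializes is a linear program: find a flux vector v maximizing an objective c^T v subject to the steady-state condition S v = 0 and capacity bounds. The COBRA Toolbox itself is MATLAB; as a language-neutral illustration only, here is flux balance analysis on an invented three-reaction toy network using SciPy (not a COBRA Toolbox API).

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake (-> A), conversion (A -> B), secretion (B ->).
# Rows are metabolites A and B; columns are the three reactions.
S = np.array([[1, -1,  0],   # A: produced by uptake, consumed by conversion
              [0,  1, -1]])  # B: produced by conversion, consumed by secretion
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capacity capped at 10

# FBA: maximize secretion flux v3 (minimize -v3) at steady state S v = 0.
res = linprog(c=[0, 0, -1], A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution: [10, 10, 10]
```

Steady state forces all three fluxes to be equal here, so the uptake cap of 10 propagates to the objective; genome-scale models work the same way with thousands of reactions, which is where the toolbox's quality control, solver integrations, and analysis methods come in.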


Subject(s)
Models, Biological; Software; Genome; Metabolic Networks and Pathways; Systems Biology