ABSTRACT
In 1943, McCulloch and Pitts introduced a discrete recurrent neural network as a model for computation in brains. The work inspired breakthroughs such as the first computer design and the theory of finite automata. We focus on learning in Hopfield networks, a special case with symmetric weights and fixed-point attractor dynamics. Specifically, we explore minimum energy flow (MEF) as a scalable convex objective for determining network parameters. We catalog various properties of MEF, such as biological plausibility, and then compare it to classical approaches in the theory of learning. Trained Hopfield networks can perform unsupervised clustering and define novel error-correcting coding schemes. They also efficiently find hidden structure (cliques) in graphs. We extend this known connection from graphs to hypergraphs and discover n-node networks with robust storage of 2^(Ω(n^(1-ϵ))) memories for any ϵ > 0. In the case of graphs, we also determine a critical ratio of training samples at which networks generalize completely.
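The symmetric weights and fixed-point attractor dynamics mentioned above can be illustrated with a minimal sketch. This is not the paper's MEF objective; it uses the classical Hebbian storage rule purely to show how a stored pattern acts as an attractor under sign-threshold updates:

```python
import numpy as np

def train_hebbian(patterns):
    """Build a symmetric weight matrix from rows of +/-1 patterns (zero diagonal)."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, state, max_steps=10):
    """Run synchronous sign updates until a fixed point (attractor) is reached."""
    for _ in range(max_steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one pattern, then recover it from a one-bit-corrupted copy.
p = np.array([[1, -1, 1, -1, 1, -1, 1, -1]], dtype=float)
W = train_hebbian(p)
noisy = p[0].copy()
noisy[0] = -noisy[0]  # flip one bit
recovered = recall(W, noisy)
```

Here `recovered` equals the stored pattern: the corrupted state falls into the pattern's basin of attraction and the dynamics converge to it as a fixed point.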
ABSTRACT
In late December 1973, the United States enacted what some would come to call "the pitbull of environmental laws." In the 50 years since, the formidable regulatory teeth of the Endangered Species Act (ESA) have been credited with considerable successes, obliging agencies to draw upon the best available science to protect species and habitats. Yet human pressures continue to push the planet toward extinctions on a massive scale. With that prospect looming, and with scientific understanding ever changing, Science invited experts to discuss how the ESA has evolved and what its future might hold. -Brad Wible
ABSTRACT
Neural datasets are increasing rapidly in both resolution and volume. In neuroanatomy, this trend has been accelerated by innovations in imaging technology. Because full datasets are impractical and unnecessary for many applications, it is important to identify abstractions that distill useful features of neural structure, organization, and anatomy. In this review article, we discuss several such abstractions and highlight recent algorithmic advances in working with these models. In particular, we discuss the use of generative models in neuroanatomy; such models may be considered 'meta-abstractions' that capture distributions over other abstractions.