Results 1 - 20 of 60,976
1.
Nature ; 625(7995): 548-556, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38123685

ABSTRACT

Considerable scholarly attention has been paid to understanding belief in online misinformation1,2, with a particular focus on social networks. However, the dominant role of search engines in the information environment remains underexplored, even though the use of online search to evaluate the veracity of information is a central component of media literacy interventions3-5. Although conventional wisdom suggests that searching online when evaluating misinformation would reduce belief in it, there is little empirical evidence to evaluate this claim. Here, across five experiments, we present consistent evidence that online search to evaluate the truthfulness of false news articles actually increases the probability of believing them. To shed light on this relationship, we combine survey data with digital trace data collected using a custom browser extension. We find that the search effect is concentrated among individuals for whom search engines return lower-quality information. Our results indicate that those who search online to evaluate misinformation risk falling into data voids, or informational spaces in which there is corroborating evidence from low-quality sources. We also find consistent evidence that searching online to evaluate news increases belief in true news from low-quality sources, but inconsistent evidence that it increases belief in true news from mainstream sources. Our findings highlight the need for media literacy programmes to ground their recommendations in empirically tested strategies and for search engines to invest in solutions to the challenges identified here.


Subject(s)
Disinformation , Probability , Search Engine , Trust , Humans , Online Social Networking , Public Opinion , Search Engine/statistics & numerical data , Social Media/statistics & numerical data
2.
Nature ; 629(8012): 624-629, 2024 May.
Article in English | MEDLINE | ID: mdl-38632401

ABSTRACT

The cost of drug discovery and development is driven primarily by failure1, with only about 10% of clinical programmes eventually receiving approval2-4. We previously estimated that human genetic evidence doubles the success rate from clinical development to approval5. In this study we leverage the growth in genetic evidence over the past decade to better understand the characteristics that distinguish clinical success and failure. We estimate the probability of success for drug mechanisms with genetic support is 2.6 times greater than those without. This relative success varies among therapy areas and development phases, and improves with increasing confidence in the causal gene, but is largely unaffected by genetic effect size, minor allele frequency or year of discovery. These results indicate we are far from reaching peak genetic insights to aid the discovery of targets for more effective drugs.


Subject(s)
Clinical Trials as Topic , Drug Approval , Drug Discovery , Treatment Outcome , Humans , Alleles , Clinical Trials as Topic/economics , Clinical Trials as Topic/statistics & numerical data , Drug Approval/economics , Drug Discovery/economics , Drug Discovery/methods , Drug Discovery/statistics & numerical data , Drug Discovery/trends , Gene Frequency , Genetic Predisposition to Disease , Molecular Targeted Therapy , Probability , Time Factors , Treatment Failure
3.
Nature ; 622(7981): 87-92, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37794266

ABSTRACT

Disaster losses are increasing and evidence is mounting that climate change is driving up the probability of extreme natural shocks1-3. Yet it has also proved politically expedient to invoke climate change as an exogenous force that supposedly places disasters beyond the influence of local and national authorities4,5. However, locally determined patterns of urbanization and spatial development are key factors in the exposure and vulnerability of people to climatic shocks6. Using high-resolution annual data, this study shows that, since 1985, human settlements around the world, from villages to megacities, have expanded continuously and rapidly into present-day flood zones. In many regions, growth in the most hazardous flood zones is outpacing growth in non-exposed zones by a large margin, particularly in East Asia, where high-hazard settlements have expanded 60% faster than flood-safe settlements. These results provide systematic evidence of a divergence in the exposure of countries to flood hazards. Instead of adapting their exposure, many countries continue to actively amplify their exposure to increasingly frequent climatic shocks.


Subject(s)
Cities , Floods , Human Migration , Urbanization , East Asia , Cities/statistics & numerical data , Climate Change/statistics & numerical data , Floods/statistics & numerical data , Human Migration/statistics & numerical data , Human Migration/trends , Probability , Urbanization/trends
4.
Cell ; 155(5): 1166-77, 2013 Nov 21.
Article in English | MEDLINE | ID: mdl-24267895

ABSTRACT

The Drosophila Dscam1 gene encodes a vast number of cell recognition molecules through alternative splicing. These exhibit isoform-specific homophilic binding and regulate self-avoidance, the tendency of neurites from the same cell to repel one another. Genetic experiments indicate that different cells must express different isoforms. How this is achieved is unknown, as expression of alternative exons in vivo has not been shown. Here, we modified the endogenous Dscam1 locus to generate splicing reporters for all variants of exon 4. We demonstrate that splicing does not occur in a cell-type-specific fashion, that cells sharing the same anatomical location in different individuals express different exon 4 variants, and that the splicing pattern in a given neuron can change over time. We conclude that splicing is probabilistic. This is compatible with a widespread role in neural circuit assembly through self-avoidance and is incompatible with models in which specific isoforms of Dscam1 mediate homophilic recognition between processes of different cells.


Subject(s)
Alternative Splicing , Cell Adhesion Molecules/genetics , Drosophila Proteins/genetics , Drosophila melanogaster/cytology , Drosophila melanogaster/genetics , Neurons/metabolism , Protein Isoforms/genetics , Animals , Drosophila melanogaster/metabolism , Exons , Neurons/classification , Probability
5.
Annu Rev Cell Dev Biol ; 30: 23-37, 2014.
Article in English | MEDLINE | ID: mdl-25000992

ABSTRACT

The physicist Ernest Rutherford said, "If your experiment needs statistics, you ought to have done a better experiment." Although this aphorism remains true for much of today's research in cell biology, a basic understanding of statistics can be useful to cell biologists to help in monitoring the conduct of their experiments, in interpreting the results, in presenting them in publications, and when critically evaluating research by others. However, training in statistics is often focused on the sophisticated needs of clinical researchers, psychologists, and epidemiologists, whose conclusions depend wholly on statistics, rather than the practical needs of cell biologists, whose experiments often provide evidence that is not statistical in nature. This review describes some of the basic statistical principles that may be of use to experimental biologists, but it does not cover the sophisticated statistics needed for papers that contain evidence of no other kind.
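
The kind of basic summary statistics this review discusses can be sketched in a few lines; the replicate values and the normal-approximation 95% interval below are illustrative assumptions, not material from the review:

```python
import math
import statistics

def summarize(sample):
    """Mean, sample standard deviation, standard error of the mean,
    and an approximate 95% confidence interval (normal approximation)."""
    n = len(sample)
    mean = statistics.mean(sample)
    sd = statistics.stdev(sample)          # n - 1 in the denominator
    sem = sd / math.sqrt(n)
    return {"n": n, "mean": mean, "sd": sd, "sem": sem,
            "ci95": (mean - 1.96 * sem, mean + 1.96 * sem)}

# Hypothetical replicate measurements from one experiment
control = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2]
s = summarize(control)
print(f"mean = {s['mean']:.2f} +/- {s['sem']:.2f} (SEM), n = {s['n']}")
```

For small n, a t-distribution critical value would be more appropriate than 1.96; the point is only that reporting n, a dispersion measure and an interval is easy to make routine.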


Subject(s)
Cell Biology , Statistics as Topic , Causality , Data Interpretation, Statistical , Probability , Reproducibility of Results , Research Design , Statistical Distributions
6.
Trends Biochem Sci ; 48(5): 428-436, 2023 05.
Article in English | MEDLINE | ID: mdl-36759237

ABSTRACT

The probability of a given receptor tyrosine kinase (RTK) triggering a defined cellular outcome is low because of the promiscuous nature of signalling, the randomness of molecular diffusion through the cell, and the ongoing nonfunctional submembrane signalling activity or noise. Signal transduction is therefore a 'numbers game', where enough cell surface receptors and effector proteins must initially be engaged to guarantee formation of a functional signalling complex against a background of redundant events. The presence of intracellular liquid-liquid phase separation (LLPS) at the plasma membrane provides a mechanism through which the probabilistic nature of signalling can be weighted in favour of the required, discrete cellular outcome and mutual exclusivity in signal initiation.


Subject(s)
Receptor Protein-Tyrosine Kinases , Signal Transduction , Receptor Protein-Tyrosine Kinases/metabolism , Signal Transduction/physiology , Probability , Drug Delivery Systems
7.
Genome Res ; 34(8): 1165-1173, 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39152037

ABSTRACT

The main way of analyzing genetic sequences is by finding sequence regions that are related to each other. There are many methods to do that, usually based on this idea: Find an alignment of two sequence regions, which would be unlikely to exist between unrelated sequences. Unfortunately, it is hard to tell if an alignment is likely to exist by chance. Also, the precise alignment of related regions is uncertain. One alignment does not hold all evidence that they are related. We should consider alternative alignments too. This is rarely done, because we lack a simple and fast method that fits easily into practical sequence-search software. Described here is the simplest-conceivable change to standard sequence alignment, which sums probabilities of alternative alignments and makes it easier to tell if a similarity is likely to occur by chance. This approach is better than standard alignment at finding distant relationships, at least in a few tests. It can be used in practical sequence-search software, with minimal increase in implementation difficulty or run time. It generalizes to different kinds of alignment, for example, DNA-versus-protein with frameshifts. Thus, it can widely contribute to finding subtle relationships between sequences.
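
The core idea, replacing the maximum over alignments with a sum over all alternative alignments, can be sketched as the standard max-to-log-sum-exp swap in global alignment. The scoring values here are illustrative placeholders, not parameters from any real sequence-search software:

```python
import math

MATCH, MISMATCH, GAP = 2.0, -1.0, -1.5   # illustrative log-odds-style scores

def pair_score(a, b):
    return MATCH if a == b else MISMATCH

def best_alignment_score(x, y):
    """Classic global alignment: the score of the single best alignment."""
    m, n = len(x), len(y)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + GAP
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + GAP
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = max(D[i - 1][j - 1] + pair_score(x[i - 1], y[j - 1]),
                          D[i - 1][j] + GAP,
                          D[i][j - 1] + GAP)
    return D[m][n]

def summed_alignment_score(x, y):
    """Same recursion with max replaced by log-sum-exp, so the result
    reflects the summed weight of ALL alternative alignments."""
    def lse(vals):
        top = max(vals)
        return top + math.log(sum(math.exp(v - top) for v in vals))
    m, n = len(x), len(y)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + GAP
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + GAP
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            D[i][j] = lse([D[i - 1][j - 1] + pair_score(x[i - 1], y[j - 1]),
                           D[i - 1][j] + GAP,
                           D[i][j - 1] + GAP])
    return D[m][n]

print(best_alignment_score("GATTACA", "GACTATA"))
print(summed_alignment_score("GATTACA", "GACTATA"))  # never below the max
```

Because log-sum-exp is never smaller than the maximum of its arguments, the summed score dominates the single-alignment score, which is the property the abstract exploits for judging whether a similarity is likely to occur by chance.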


Subject(s)
Sequence Alignment , Sequence Alignment/methods , Software , Algorithms , Probability , Humans , Sequence Analysis, DNA/methods , Computational Biology/methods , Base Sequence
8.
Nature ; 596(7873): 548-552, 2021 08.
Article in English | MEDLINE | ID: mdl-34349266

ABSTRACT

Globally, there has been a recent surge in 'citizens' assemblies'1, which are a form of civic participation in which a panel of randomly selected constituents contributes to questions of policy. The random process for selecting this panel should satisfy two properties. First, it must produce a panel that is representative of the population. Second, in the spirit of democratic equality, individuals would ideally be selected to serve on this panel with equal probability2,3. However, in practice these desiderata are in tension owing to differential participation rates across subpopulations4,5. Here we apply ideas from fair division to develop selection algorithms that satisfy the two desiderata simultaneously to the greatest possible extent: our selection algorithms choose representative panels while selecting individuals with probabilities as close to equal as mathematically possible, for many metrics of 'closeness to equality'. Our implementation of one such algorithm has already been used to select more than 40 citizens' assemblies around the world. As we demonstrate using data from ten citizens' assemblies, adopting our algorithm over a benchmark representing the previous state of the art leads to substantially fairer selection probabilities. By contributing a fairer, more principled and deployable algorithm, our work puts the practice of sortition on firmer foundations. Moreover, our work establishes citizens' assemblies as a domain in which insights from the field of fair division can lead to high-impact applications.
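
For a single quota attribute the tension dissolves: sampling uniformly within each group satisfies the quota while giving every member of a group the same selection probability. A minimal sketch under that simplifying assumption (the pool, group names and quotas are hypothetical; the paper's fair-division algorithms address the much harder case of many overlapping quotas, where no such simple scheme works):

```python
import random
from collections import Counter

def stratified_panel(pool, quotas, rng):
    """Draw a quota-satisfying panel by sampling uniformly within each
    group, so members of the same group have equal selection probability."""
    panel = []
    for group, k in quotas.items():
        members = [p for p, g in pool.items() if g == group]
        panel += rng.sample(members, k)
    return panel

# Hypothetical pool: 60 people in group 'a', 40 in group 'b';
# a representative 10-person panel needs 6 from 'a' and 4 from 'b'.
pool = {f"a{i}": "a" for i in range(60)} | {f"b{i}": "b" for i in range(40)}
quotas = {"a": 6, "b": 4}

rng = random.Random(0)
trials = 20_000
counts = Counter()
for _ in range(trials):
    counts.update(stratified_panel(pool, quotas, rng))

# Each 'a' member should appear in about 6/60 = 10% of panels,
# and each 'b' member in about 4/40 = 10%.
print(counts["a0"] / trials, counts["b0"] / trials)
```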


Subject(s)
Administrative Personnel/organization & administration , Algorithms , Democracy , Policy Making , Probability , Datasets as Topic , Female , Humans , Male , Random Allocation
9.
Nature ; 592(7855): 564-570, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33883735

ABSTRACT

The social cost of methane (SC-CH4) measures the economic loss of welfare caused by emitting one tonne of methane into the atmosphere. This valuation may in turn be used in cost-benefit analyses or to inform climate policies1-3. However, current SC-CH4 estimates have not included key scientific findings and observational constraints. Here we estimate the SC-CH4 by incorporating the recent upward revision of 25 per cent to calculations of the radiative forcing of methane4, combined with calibrated reduced-form global climate models and an ensemble of integrated assessment models (IAMs). Our multi-model mean estimate for the SC-CH4 is US$933 per tonne of CH4 (5-95 per cent range, US$471-1,570 per tonne of CH4) under a high-emissions scenario (Representative Concentration Pathway (RCP) 8.5), a 22 per cent decrease compared to estimates based on the climate uncertainty framework used by the US federal government5. Our ninety-fifth percentile estimate is 51 per cent lower than the corresponding figure from the US framework. Under a low-emissions scenario (RCP 2.6), our multi-model mean decreases to US$710 per tonne of CH4. Tightened equilibrium climate sensitivity estimates paired with the effect of previously neglected relationships between uncertain parameters of the climate model lower these estimates. We also show that our SC-CH4 estimates are sensitive to model combinations; for example, within one IAM, different methane cycle sub-models can induce variations of approximately 20 per cent in the estimated SC-CH4. But switching IAMs can more than double the estimated SC-CH4. Extending our results to account for societal concerns about equity produces SC-CH4 estimates that differ by more than an order of magnitude between low- and high-income regions. Our central equity-weighted estimate for the USA increases to US$8,290 per tonne of CH4 whereas our estimate for sub-Saharan Africa decreases to US$134 per tonne of CH4.


Subject(s)
Climate Change/economics , Methane/economics , Social Justice , Social Welfare/economics , Uncertainty , Africa South of the Sahara , Calibration , Climate Models , Environmental Justice , Humans , Nonlinear Dynamics , Probability , Social Justice/economics , Temperature , United States
10.
Nature ; 595(7866): 250-254, 2021 07.
Article in English | MEDLINE | ID: mdl-34234337

ABSTRACT

Food supply shocks are increasing worldwide1,2, particularly the type of shock wherein food production or distribution loss in one location propagates through the food supply chain to other locations3,4. Analogous to biodiversity buffering ecosystems against external shocks5,6, ecological theory suggests that food supply chain diversity is crucial for managing the risk of food shock to human populations7,8. Here we show that boosting a city's food supply chain diversity increases the resistance of a city to food shocks of mild to moderate severity by up to 15 per cent. We develop an intensity-duration-frequency model linking food shock risk to supply chain diversity. The empirical-statistical model is based on annual food inflow observations from all metropolitan areas in the USA during the years 2012 to 2015, years when most of the country experienced moderate to severe droughts. The model explains a city's resistance to food shocks of a given frequency, intensity and duration as a monotonically declining function of the city's food inflow supply chain's Shannon diversity. This model is simple, operationally useful and addresses any kind of hazard. Using this method, cities can improve their resistance to food supply shocks with policies that increase the food supply chain's diversity.
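
The diversity measure the model relies on, the Shannon diversity of a city's food inflows, is straightforward to compute; the inflow figures below are hypothetical:

```python
import math

def shannon_diversity(inflows):
    """Shannon diversity H = -sum(p_i * ln p_i), where p_i is the share
    of a city's food inflow arriving from supplier (or sector) i."""
    total = sum(inflows.values())
    shares = [v / total for v in inflows.values() if v > 0]
    return -sum(p * math.log(p) for p in shares)

# Hypothetical annual inflows (tonnes) for two cities
concentrated = {"s1": 900, "s2": 50, "s3": 50}
diversified = {"s1": 250, "s2": 250, "s3": 250, "s4": 250}

print(shannon_diversity(concentrated))   # low H: exposed to one supplier's shock
print(shannon_diversity(diversified))    # ln(4), the maximum for 4 suppliers
```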


Subject(s)
Food Supply/methods , Food/statistics & numerical data , Risk Management , Cities/statistics & numerical data , Humans , Models, Statistical , Probability , Reproducibility of Results , United States
11.
Nature ; 596(7872): 428-432, 2021 08.
Article in English | MEDLINE | ID: mdl-34321661

ABSTRACT

Despite the existence of good catalogues of cancer genes1,2, identifying the specific mutations of those genes that drive tumorigenesis across tumour types is still a largely unsolved problem. As a result, most mutations identified in cancer genes across tumours are of unknown significance to tumorigenesis3. We propose that the mutations observed in thousands of tumours (natural experiments testing their oncogenic potential, replicated across individuals and tissues) can be exploited to solve this problem. From these mutations, features that describe the mechanism of tumorigenesis of each cancer gene and tissue may be computed and used to build machine learning models that encapsulate these mechanisms. Here we demonstrate the feasibility of this solution by building and validating 185 gene-tissue-specific machine learning models that outperform experimental saturation mutagenesis in the identification of driver and passenger mutations. The models and their assessment of each mutation are designed to be interpretable, thus avoiding a black-box prediction device. Using these models, we outline the blueprints of potential driver mutations in cancer genes, and demonstrate the role of mutation probability in shaping the landscape of observed driver mutations. These blueprints will support the interpretation of newly sequenced tumours in patients and the study of the mechanisms of tumorigenesis of cancer genes across tissues.


Subject(s)
Computer Simulation , Machine Learning , Mutagenesis , Mutation , Neoplasms/genetics , Oncogenes/genetics , Cell Transformation, Neoplastic/genetics , Humans , Models, Genetic , Organ Specificity/genetics , Precision Medicine , Probability , Reproducibility of Results
12.
Nature ; 597(7875): 230-234, 2021 09.
Article in English | MEDLINE | ID: mdl-34497394

ABSTRACT

Parties to the 2015 Paris Agreement pledged to limit global warming to well below 2 °C and to pursue efforts to limit the temperature increase to 1.5 °C relative to pre-industrial times1. However, fossil fuels continue to dominate the global energy system and a sharp decline in their use must be realized to keep the temperature increase below 1.5 °C (refs. 2-7). Here we use a global energy systems model8 to assess the amount of fossil fuels that would need to be left in the ground, regionally and globally, to allow for a 50 per cent probability of limiting warming to 1.5 °C. By 2050, we find that nearly 60 per cent of oil and fossil methane gas, and 90 per cent of coal must remain unextracted to keep within a 1.5 °C carbon budget. This is a large increase in the unextractable estimates for a 2 °C carbon budget9, particularly for oil, for which an additional 25 per cent of reserves must remain unextracted. Furthermore, we estimate that oil and gas production must decline globally by 3 per cent each year until 2050. This implies that most regions must reach peak production now or during the next decade, rendering many operational and planned fossil fuel projects unviable. We probably present an underestimate of the production changes required, because a greater than 50 per cent probability of limiting warming to 1.5 °C requires more carbon to stay in the ground and because of uncertainties around the timely deployment of negative emission technologies at scale.


Subject(s)
Conservation of Energy Resources/legislation & jurisprudence , Fossil Fuels/analysis , Fossil Fuels/supply & distribution , Global Warming/prevention & control , International Cooperation/legislation & jurisprudence , Models, Theoretical , Temperature , Fuel Oils/analysis , Fuel Oils/supply & distribution , Geographic Mapping , Global Warming/legislation & jurisprudence , Methane/analysis , Methane/supply & distribution , Paris , Probability , Time Factors , Uncertainty
13.
Proc Natl Acad Sci U S A ; 121(5): e2314215121, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38261621

ABSTRACT

The competition-colonization (CC) trade-off is a well-studied coexistence mechanism for metacommunities. In this setting, it is believed that the coexistence of all species requires their traits to satisfy restrictive conditions limiting their similarity. To investigate whether diverse metacommunities can assemble in a CC trade-off model, we study their assembly from a probabilistic perspective. From a pool of species with parameters (corresponding to traits) sampled at random, we compute the probability that any number of species coexist and characterize the set of species that emerges through assembly. Remarkably, almost exactly half of the species in a large pool typically coexist, with no saturation as the size of the pool grows, and with little dependence on the underlying distribution of traits. Through a mix of analytical results and simulations, we show that this unlimited niche packing emerges as assembly actively moves communities toward overdispersed configurations in niche space. Our findings also apply to a realistic assembly scenario where species invade one at a time from a fixed regional pool. When diversity arises de novo in the metacommunity, richness still grows without bound, but more slowly. Together, our results suggest that the CC trade-off can support the robust emergence of diverse communities, even when coexistence of the full species pool is exceedingly unlikely.


Subject(s)
Bandages , Phenotype , Probability
14.
Proc Natl Acad Sci U S A ; 121(5): e2313708120, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38277438

ABSTRACT

We present an approach to computing the probability of epidemic "burnout," i.e., the probability that a newly emergent pathogen will go extinct after a major epidemic. Our analysis is based on the standard stochastic formulation of the Susceptible-Infectious-Removed (SIR) epidemic model including host demography (births and deaths) and corresponds to the standard SIR ordinary differential equations (ODEs) in the infinite population limit. Exploiting a boundary layer approximation to the ODEs and a birth-death process approximation to the stochastic dynamics within the boundary layer, we derive convenient, fully analytical approximations for the burnout probability. We demonstrate, by comparing with computationally demanding individual-based stochastic simulations and with semi-analytical approximations derived previously, that our fully analytical approximations are highly accurate for biologically plausible parameters. We show that the probability of burnout always decreases with increased mean infectious period. However, for typical biological parameters, there is a relevant local minimum in the probability of persistence as a function of the basic reproduction number R0. For the shortest infectious periods, persistence is least likely if [Formula: see text]; for longer infectious periods, the minimum point decreases to [Formula: see text]. For typical acute immunizing infections in human populations of realistic size, our analysis of the SIR model shows that burnout is almost certain in a well-mixed population, implying that susceptible recruitment through births is insufficient on its own to explain disease persistence.
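
The flavour of the underlying stochastic model can be reproduced with a small event-driven (Gillespie-style) simulation of SIR with demography. This brute-force Monte Carlo estimate is what the paper's analytical approximations replace; all parameter values below are illustrative, and for brevity only susceptibles are subject to background mortality:

```python
import random

def sir_burnout_prob(beta=1.5, gamma=1.0, mu=0.02, N=1000, I0=10,
                     runs=50, seed=0):
    """Fraction of stochastic SIR-with-demography runs in which the
    infection dies out (I reaches 0) within the event cap."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(runs):
        S, I = N - I0, I0
        for _ in range(200_000):            # safety cap on events
            infect = beta * S * I / N       # S + I -> 2I
            recover = gamma * I             # I removed
            birth = mu * N                  # new susceptible
            death = mu * S                  # susceptible dies
            r = rng.random() * (infect + recover + birth + death)
            if r < infect:
                S -= 1; I += 1
            elif r < infect + recover:
                I -= 1
            elif r < infect + recover + birth:
                S += 1
            else:
                S -= 1
            if I == 0:
                extinct += 1
                break
    return extinct / runs

print(sir_burnout_prob())
```

Note this fixed-rate Monte Carlo loop samples which event fires next but ignores inter-event times, which is sufficient for extinction probabilities though not for timing questions.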


Subject(s)
Communicable Diseases , Epidemics , Humans , Stochastic Processes , Epidemiological Models , Models, Biological , Communicable Diseases/epidemiology , Probability , Disease Susceptibility , Burnout, Psychological
15.
Proc Natl Acad Sci U S A ; 121(18): e2306901121, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38669186

ABSTRACT

RNA velocity estimation is a potentially powerful tool to reveal the directionality of transcriptional changes in single-cell RNA-sequencing data, but it lacks accuracy, absent advanced metabolic labeling techniques. We developed an approach, TopicVelo, that disentangles simultaneous, yet distinct, dynamics by using a probabilistic topic model, a highly interpretable form of latent space factorization, to infer cells and genes associated with individual processes, thereby capturing cellular pluripotency or multifaceted functionality. Focusing on process-associated cells and genes enables accurate estimation of process-specific velocities via a master equation for a transcriptional burst model accounting for intrinsic stochasticity. The method obtains a global transition matrix by leveraging cell topic weights to integrate process-specific signals. In challenging systems, this method accurately recovers complex transitions and terminal states, while our use of first-passage time analysis provides insights into transient transitions. These results expand the limits of RNA velocity, empowering future studies of cell fate and functional responses.


Subject(s)
Cell Differentiation , Latent Class Analysis , Single-Cell Gene Expression Analysis , Transcription, Genetic , Animals , Humans , Mice , Cell Differentiation/genetics , Datasets as Topic , Evolutionary Biology , Hematopoiesis/genetics , Immunity, Innate/genetics , Inflammation/genetics , Lymphocytes/cytology , Lymphocytes/immunology , Probability , Reproducibility of Results , Single-Cell Gene Expression Analysis/methods , Skin/immunology , Skin/pathology , Stochastic Processes , Time Factors
16.
Nature ; 584(7821): 393-397, 2020 08.
Article in English | MEDLINE | ID: mdl-32814886

ABSTRACT

The rate of global-mean sea-level rise since 1900 has varied over time, but the contributing factors are still poorly understood1. Previous assessments found that the summed contributions of ice-mass loss, terrestrial water storage and thermal expansion of the ocean could not be reconciled with observed changes in global-mean sea level, implying that changes in sea level or some contributions to those changes were poorly constrained2,3. Recent improvements to observational data, our understanding of the main contributing processes to sea-level change and methods for estimating the individual contributions, mean another attempt at reconciliation is warranted. Here we present a probabilistic framework to reconstruct sea level since 1900 using independent observations and their inherent uncertainties. The sum of the contributions to sea-level change from thermal expansion of the ocean, ice-mass loss and changes in terrestrial water storage is consistent with the trends and multidecadal variability in observed sea level on both global and basin scales, which we reconstruct from tide-gauge records. Ice-mass loss, predominantly from glaciers, has caused twice as much sea-level rise since 1900 as has thermal expansion. Mass loss from glaciers and the Greenland Ice Sheet explains the high rates of global sea-level rise during the 1940s, while a sharp increase in water impoundment by artificial reservoirs is the main cause of the lower-than-average rates during the 1970s. The acceleration in sea-level rise since the 1970s is caused by the combination of thermal expansion of the ocean and increased ice-mass loss from Greenland. Our results reconcile the magnitude of observed global-mean sea-level rise since 1900 with estimates based on the underlying processes, implying that no additional processes are required to explain the observed changes in sea level since 1900.


Subject(s)
Hot Temperature , Ice Cover/chemistry , Seawater/analysis , Seawater/chemistry , Environmental Monitoring , Global Warming/statistics & numerical data , Greenland , History, 20th Century , History, 21st Century , Probability , Uncertainty
17.
Nature ; 577(7792): 671-675, 2020 01.
Article in English | MEDLINE | ID: mdl-31942076

ABSTRACT

Since its introduction, the reward prediction error theory of dopamine has explained a wealth of empirical phenomena, providing a unifying framework for understanding the representation of reward and value in the brain1-3. According to the now canonical theory, reward predictions are represented as a single scalar quantity, which supports learning about the expectation, or mean, of stochastic outcomes. Here we propose an account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning4-6. We hypothesized that the brain represents possible future rewards not as a single mean, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel. This idea implies a set of empirical predictions, which we tested using single-unit recordings from mouse ventral tegmental area. Our findings provide strong evidence for a neural realization of distributional reinforcement learning.
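
The core distributional idea can be sketched with quantile-regression-style updates, in which asymmetric learning rates make different units converge to different quantiles of the reward distribution. This is a generic toy model inspired by distributional reinforcement learning, not the paper's analysis of neural data; the reward distribution and all parameters are hypothetical:

```python
import random

def learn_reward_quantiles(sample_reward, n_units=5, steps=20_000,
                           lr=0.01, seed=0):
    """Each unit i keeps an estimate q[i] updated asymmetrically
    (positive errors scaled by tau_i, negative errors by 1 - tau_i),
    so unit i converges toward the tau_i-quantile of the reward
    distribution instead of every unit learning the same mean."""
    rng = random.Random(seed)
    taus = [(i + 0.5) / n_units for i in range(n_units)]  # 0.1, 0.3, ... 0.9
    q = [0.0] * n_units
    for _ in range(steps):
        r = sample_reward(rng)
        for i, tau in enumerate(taus):
            q[i] += lr * (tau if r > q[i] else tau - 1.0)
    return q

# Hypothetical bimodal reward: 0 or 10 with equal probability
reward = lambda rng: 0.0 if rng.random() < 0.5 else 10.0
print(learn_reward_quantiles(reward))  # low-tau units near 0, high-tau near 10
```

A single mean-tracking unit would settle near 5, a value the reward never takes; the population of asymmetric units instead encodes the full bimodal distribution, which is the empirical signature the paper looks for in dopamine neurons.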


Subject(s)
Dopamine/metabolism , Learning/physiology , Models, Neurological , Reinforcement, Psychology , Reward , Animals , Artificial Intelligence , Dopaminergic Neurons/metabolism , GABAergic Neurons/metabolism , Mice , Optimism , Pessimism , Probability , Statistical Distributions , Ventral Tegmental Area/cytology , Ventral Tegmental Area/physiology
18.
Nature ; 587(7834): 432-436, 2020 11.
Article in English | MEDLINE | ID: mdl-33029013

ABSTRACT

Perceptual sensitivity varies from moment to moment. One potential source of this variability is spontaneous fluctuations in cortical activity that can travel as waves1. Spontaneous travelling waves have been reported during anaesthesia2-7, but it is not known whether they have a role during waking perception. Here, using newly developed analytic techniques to characterize the moment-to-moment dynamics of noisy multielectrode data, we identify spontaneous waves of activity in the extrastriate visual cortex of awake, behaving marmosets (Callithrix jacchus). In monkeys trained to detect faint visual targets, the timing and position of spontaneous travelling waves before target onset predicted the magnitude of target-evoked activity and the likelihood of target detection. By contrast, spatially disorganized fluctuations of neural activity were much less predictive. These results reveal an important role for spontaneous travelling waves in sensory processing through the modulation of neural and perceptual sensitivity.


Subject(s)
Brain Waves , Visual Cortex/physiology , Visual Perception/physiology , Wakefulness/physiology , Action Potentials , Animals , Behavior, Animal , Callithrix/physiology , Electrodes , Evoked Potentials, Visual , Female , Male , Photic Stimulation , Probability , Retina/physiology
19.
Nature ; 583(7815): 242-248, 2020 07.
Article in English | MEDLINE | ID: mdl-32641817

ABSTRACT

Enhanced silicate rock weathering (ERW), deployable with croplands, has potential use for atmospheric carbon dioxide (CO2) removal (CDR), which is now necessary to mitigate anthropogenic climate change1. ERW also has possible co-benefits for improved food and soil security, and reduced ocean acidification2-4. Here we use an integrated performance modelling approach to make an initial techno-economic assessment for 2050, quantifying how CDR potential and costs vary among nations in relation to business-as-usual energy policies and policies consistent with limiting future warming to 2 degrees Celsius5. China, India, the USA and Brazil have great potential to help achieve average global CDR goals of 0.5 to 2 gigatonnes of CO2 per year with extraction costs of approximately US$80-180 per tonne of CO2. These goals and costs are robust, regardless of future energy policies. Deployment within existing croplands offers opportunities to align agriculture and climate policy. However, success will depend upon overcoming political and social inertia to develop regulatory and incentive frameworks. We discuss the challenges and opportunities of ERW deployment, including the potential for excess industrial silicate materials (basalt mine overburden, concrete, and iron and steel slag) to obviate the need for new mining, as well as uncertainties in soil weathering rates and land-ocean transfer of weathered products.


Subject(s)
Agriculture , Carbon Dioxide/isolation & purification , Crops, Agricultural , Geological Sediments/chemistry , Global Warming/prevention & control , Goals , Silicates/chemistry , Atmosphere/chemistry , Brazil , China , Environmental Policy/economics , Environmental Policy/legislation & jurisprudence , Global Warming/economics , India , Iron/isolation & purification , Mining , Politics , Probability , Silicates/isolation & purification , Steel/isolation & purification , Temperature , Time Factors , United States
20.
Proc Natl Acad Sci U S A ; 120(1): e2215667120, 2023 01 03.
Article in English | MEDLINE | ID: mdl-36580594

ABSTRACT

In semiarid regions, vegetated ecosystems can display abrupt and unexpected changes, i.e., transitions to different states, due to drifting or time-varying parameters, with severe consequences for the ecosystem and the communities depending on it. Despite intensive research, the early identification of an approaching critical point from observations is still an open challenge. Many data analysis techniques have been proposed, but their performance depends on the system and on the characteristics of the observed data (the resolution, the level of noise, the existence of unobserved variables, etc.). Here, we propose an entropy-based approach to identify an upcoming transition in spatiotemporal data. We apply this approach to observational vegetation data and simulations from two models of vegetation dynamics to infer the arrival of an abrupt shift to an arid state. We show that the permutation entropy (PE) computed from the probabilities of two-dimensional ordinal patterns may provide an early warning indicator of an approaching tipping point, as it may display a maximum (or minimum) before decreasing (or increasing) as the transition approaches. Like other spatial early warning indicators, the spatial permutation entropy does not need a time series of the system dynamics, and it is suited for spatially extended systems evolving on long time scales, like vegetation plots. We quantify its performance and show that, depending on the system and data, the performance can be better, similar or worse than the spatial correlation. Hence, we propose the spatial PE as an additional indicator to try to anticipate regime shifts in vegetated ecosystems.
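
A minimal sketch of the proposed indicator, permutation entropy computed from two-dimensional ordinal patterns (the window size and the test fields are illustrative choices, not the paper's vegetation data):

```python
import math
import random

def spatial_permutation_entropy(grid, dx=2, dy=2):
    """Slide a dx-by-dy window over the grid, map each window to the
    permutation that sorts its values (its ordinal pattern), and return
    the Shannon entropy of the pattern frequencies, normalized to
    [0, 1] by the maximum possible entropy log((dx*dy)!)."""
    counts = {}
    rows, cols = len(grid), len(grid[0])
    for i in range(rows - dx + 1):
        for j in range(cols - dy + 1):
            window = [grid[i + a][j + b] for a in range(dx) for b in range(dy)]
            pattern = tuple(sorted(range(dx * dy), key=lambda k: window[k]))
            counts[pattern] = counts.get(pattern, 0) + 1
    total = sum(counts.values())
    h = -sum((c / total) * math.log(c / total) for c in counts.values())
    return h / math.log(math.factorial(dx * dy))

# A smooth gradient (one dominant pattern) vs. an uncorrelated noisy field
gradient = [[i + j for j in range(10)] for i in range(10)]
rng = random.Random(1)
noisy = [[rng.random() for _ in range(10)] for _ in range(10)]

print(spatial_permutation_entropy(gradient))  # 0.0: a single ordinal pattern
print(spatial_permutation_entropy(noisy))     # close to 1
```

Tracking this scalar over snapshots of a spatial field is the kind of early-warning usage the abstract describes; only one snapshot is needed per time point, not a long time series.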


Subject(s)
Ecosystem , Entropy , Probability , Time Factors