Results 1 - 20 of 2,911
1.
Proc Natl Acad Sci U S A ; 121(14): e2314231121, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38527197

ABSTRACT

Despite experimental and observational studies demonstrating that biodiversity enhances primary productivity, the best metric for predicting productivity at broad geographic extents (functional trait diversity, phylogenetic diversity, or species richness) remains unknown. Using >1.8 million tree measurements from across eastern US forests, we quantified relationships among functional trait diversity, phylogenetic diversity, species richness, and productivity. Surprisingly, functional trait and phylogenetic diversity explained little variation in productivity that could not be explained by tree species richness. This result was consistent across the entire eastern United States, within ecoprovinces, and within data subsets that controlled for biomass or stand age. Metrics of functional trait and phylogenetic diversity that were independent of species richness were negatively correlated with productivity. This last result suggests that processes that determine species sorting and packing are likely important for the relationships between productivity and biodiversity. This result also demonstrates the potential confusion that can arise when interdependencies among different diversity metrics are ignored. Our findings show the value of species richness as a predictive tool and highlight gaps in knowledge about linkages between functional diversity and ecosystem functioning.
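One common way to obtain diversity metrics "independent of species richness", as described above, is to residualize a metric on richness; the sketch below (simulated plot data, not the study's >1.8 million measurements) illustrates the idea:

```python
import numpy as np

rng = np.random.default_rng(0)
n_plots = 200
# Simulated species richness per forest plot.
richness = rng.integers(2, 30, size=n_plots).astype(float)
# Phylogenetic diversity tends to rise with richness; add noise.
pd_metric = 1.5 * richness + rng.normal(0.0, 2.0, size=n_plots)

# Residualize phylogenetic diversity on richness: least-squares
# residuals are, by construction, uncorrelated with richness and
# can serve as a richness-independent diversity metric.
slope, intercept = np.polyfit(richness, pd_metric, 1)
pd_independent = pd_metric - (slope * richness + intercept)

corr = np.corrcoef(richness, pd_independent)[0, 1]  # ~0 by construction
```

The residualized metric can then be regressed against productivity without double-counting the richness signal.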


Subject(s)
Biodiversity, Forests, Biomass, Ecosystem, Phylogeny, United States
2.
Proc Natl Acad Sci U S A ; 121(8): e2312527121, 2024 Feb 20.
Article in English | MEDLINE | ID: mdl-38363864

ABSTRACT

Graph representation learning is a fundamental technique for machine learning (ML) on complex networks. Given an input network, these methods represent the vertices by low-dimensional real-valued vectors. These vectors can be used for a multitude of downstream ML tasks. We study one of the most important such tasks: link prediction. Much of the recent literature on graph representation learning has shown remarkable success in link prediction. On closer investigation, we observe that the performance is measured by the AUC (area under the curve), which suffers from biases. Since the ground truth in link prediction is sparse, we design a vertex-centric measure of performance, called the VCMPR@k plots. Under this measure, we show that link predictors using graph representations score poorly. Despite having extremely high AUC scores, the predictors miss much of the ground truth. We identify a mathematical connection between this performance, the sparsity of the ground truth, and the low-dimensional geometry of the node embeddings. Under a formal theoretical framework, we prove that low-dimensional vectors cannot capture sparse ground truth using dot product similarities (the standard practice in the literature). Our results call into question existing results on link prediction and pose a significant scientific challenge for graph representation learning. The VCMPR plots identify specific scientific challenges for link prediction using low-dimensional node embeddings.
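A vertex-centric precision measure of the kind described can be sketched as follows; the paper's exact VCMPR@k definition may differ, so treat this as an illustrative approximation using the standard dot-product scores:

```python
import numpy as np

def vertex_precision_at_k(emb, true_edges, k=10):
    """For each vertex, rank all other vertices by dot-product score
    and compute precision@k against its true neighbors; return the
    mean over vertices (illustrative sketch, not the paper's code)."""
    n = emb.shape[0]
    neighbors = {u: set() for u in range(n)}
    for u, v in true_edges:
        neighbors[u].add(v)
        neighbors[v].add(u)
    scores = emb @ emb.T                 # dot-product similarity
    np.fill_diagonal(scores, -np.inf)    # never predict a self-loop
    precs = []
    for u in range(n):
        topk = set(np.argsort(scores[u])[::-1][:k].tolist())
        precs.append(len(neighbors[u] & topk) / k)
    return float(np.mean(precs))
```

Averaging per-vertex precision makes sparse-graph failure visible: a predictor can have near-perfect AUC while ranking few true neighbors into each vertex's top-k.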

3.
Proc Natl Acad Sci U S A ; 120(21): e2301287120, 2023 May 23.
Article in English | MEDLINE | ID: mdl-37186865

ABSTRACT

We investigate signal propagation in a quantum field simulator of the Klein-Gordon model realized by two strongly coupled parallel one-dimensional quasi-condensates. By measuring local phononic fields after a quench, we observe the propagation of correlations along sharp light-cone fronts. If the local atomic density is inhomogeneous, these propagation fronts are curved. For sharp edges, the propagation fronts are reflected at the system's boundaries. By extracting the space-dependent variation of the front velocity from the data, we find agreement with theoretical predictions based on curved geodesics of an inhomogeneous metric. This work extends the range of quantum simulations of nonequilibrium field dynamics in general space-time metrics.
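The curved fronts follow from the density dependence of the phonon speed. As a standard Bogoliubov-theory sketch (the symbols g, m, and n(z) are generic textbook quantities, not values taken from this study):

```latex
% Local speed of sound for phonons in a quasi-condensate:
%   g - effective interaction strength, m - atomic mass, n(z) - local density
c(z) = \sqrt{\frac{g\, n(z)}{m}}
% A correlation front launched at z_0 follows the corresponding null geodesic,
% so an inhomogeneous n(z) bends the light-cone front:
t(z) = \int_{z_0}^{z} \frac{\mathrm{d}z'}{c(z')}
```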

4.
Proc Natl Acad Sci U S A ; 120(24): e2218828120, 2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37276416

ABSTRACT

The foundations of today's societies are provided by manufactured capital accumulation driven by investment decisions through time. Reconceiving how manufactured assets are harnessed in the production-consumption system is at the heart of the paradigm shifts necessary for long-term sustainability. Our research integrates 50 years of economic and environmental data to provide the global legacy environmental footprint (LEF) and unveil the historical material extractions, greenhouse gas emissions, and health impacts accrued in today's manufactured capital. We show that between 1995 and 2019, global LEF growth outpaced GDP and population growth, and that the current high level of national capital stocks has relied heavily on global metal supply chains. The LEF shows a large and growing gap between developed economies (DEs) and less-developed economies (LDEs), while economic returns from global asset supply chains disproportionately flow to DEs, resulting in a double burden for LDEs. Our results show that ensuring best practice in asset production while prioritizing well-being outcomes is essential for addressing global inequalities and protecting the environment. Achieving this requires a paradigm shift in sustainability science and policy, as well as in green finance decision-making, to move beyond a focus on the resource use and emissions of assets' daily operations and instead take into account the long-term environmental footprints of capital accumulation.

5.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36502371

ABSTRACT

Deoxyribonucleic acid (DNA) N6-methyladenine plays a vital role in various biological processes, and accurate identification of its sites can provide a more comprehensive understanding of its biological effects. Several methods exist for 6mA site prediction. With the continuous development of technology, traditional techniques, with their high costs and low efficiency, are gradually being replaced by computational methods. Widely used computational methods can be divided into two categories: traditional machine learning and deep learning methods. We first list some existing experimental methods for predicting 6mA sites, then analyze the general process from sequence input to results in computational methods and review existing model architectures. Finally, we summarize and compare the results to help subsequent researchers choose the most suitable method for their work.
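As an illustration of the "sequence input" step common to these computational methods, a typical first move is one-hot encoding of the DNA window around a candidate adenine (a generic sketch, not any specific published pipeline):

```python
import numpy as np

def one_hot_dna(seq):
    """One-hot encode a DNA sequence as an (L, 4) matrix over A,C,G,T.
    Unknown bases (e.g. N) are left as all-zero rows. This is a common
    input representation for both classical ML and deep 6mA predictors."""
    mapping = {"A": 0, "C": 1, "G": 2, "T": 3}
    out = np.zeros((len(seq), 4), dtype=np.float32)
    for i, base in enumerate(seq.upper()):
        if base in mapping:
            out[i, mapping[base]] = 1.0
    return out
```

A fixed-length window centered on the candidate adenine, encoded this way, can be fed to a convolutional network or flattened for a classical classifier.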


Asunto(s)
Metilación de ADN , Aprendizaje Automático , Proyectos de Investigación , ADN/genética
6.
Brief Bioinform ; 24(3)2023 05 19.
Article in English | MEDLINE | ID: mdl-37099694

ABSTRACT

Studies have found that the human microbiome is associated with and predictive of human health and diseases. Many statistical methods developed for microbiome data focus on different distance metrics that can capture various information in microbiomes. Prediction models have also been developed for microbiome data, including deep learning methods with convolutional neural networks that consider both taxa abundance profiles and taxonomic relationships among microbial taxa from a phylogenetic tree. Studies have also suggested that a health outcome can be associated with multiple forms of microbiome profiles. In addition to the abundance of some taxa being associated with a health outcome, the presence/absence of some taxa is also associated with and predictive of the same outcome. Moreover, associated taxa may be close to each other on a phylogenetic tree or spread apart. No existing prediction models use multiple forms of microbiome-outcome associations. To address this, we propose a multi-kernel machine regression (MKMR) method that can capture various types of microbiome signals when making predictions. MKMR utilizes multiple forms of microbiome signals through multiple kernels transformed from multiple distance metrics for microbiomes and learns an optimal conic combination of these kernels, with kernel weights helping us understand the contributions of individual microbiome signal types. Simulation studies suggest a much-improved prediction performance over competing methods with a mixture of microbiome signals. Real-data applications predicting multiple health outcomes using throat and gut microbiome data also suggest that MKMR predicts better than competing methods.
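The conic (nonnegative-weighted) kernel combination at the core of MKMR can be sketched as follows; the weight-learning step is omitted and the ridge solver is generic, so this illustrates the idea rather than the authors' implementation:

```python
import numpy as np

def combined_kernel(kernels, weights):
    """Conic combination of precomputed kernel matrices: nonnegative
    weights keep the result a valid (positive semidefinite) kernel."""
    w = np.asarray(weights, dtype=float)
    assert (w >= 0).all(), "conic combination requires nonnegative weights"
    return sum(wi * K for wi, K in zip(w, kernels))

def kernel_ridge_fit_predict(K_train, y, K_test, lam=1.0):
    """Kernel ridge regression with a precomputed kernel:
    alpha = (K + lam*I)^-1 y, predictions = K_test @ alpha."""
    alpha = np.linalg.solve(K_train + lam * np.eye(len(y)), np.asarray(y, float))
    return K_test @ alpha
```

In MKMR each kernel is derived from a different microbiome distance metric (abundance-based, presence/absence-based, phylogeny-aware), and the learned weights indicate which signal types drive the prediction.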


Asunto(s)
Microbiota , Humanos , Filogenia , Simulación por Computador , Redes Neurales de la Computación , Evaluación de Resultado en la Atención de Salud
7.
J Proteome Res ; 23(1): 418-429, 2024 01 05.
Article in English | MEDLINE | ID: mdl-38038272

ABSTRACT

The inherent diversity of approaches in proteomics research has led to a wide range of software solutions for data analysis. These software solutions encompass multiple tools, each employing different algorithms for various tasks such as peptide-spectrum matching, protein inference, quantification, statistical analysis, and visualization. To enable an unbiased comparison of commonly used bottom-up label-free proteomics workflows, we introduce WOMBAT-P, a versatile platform designed for automated benchmarking and comparison. WOMBAT-P simplifies the processing of public data by utilizing the sample and data relationship format for proteomics (SDRF-Proteomics) as input. This feature streamlines the analysis of annotated local or public ProteomeXchange data sets, promoting efficient comparisons among diverse outputs. Through an evaluation using experimental ground truth data and a realistic biological data set, we uncover significant disparities and a limited overlap in the quantified proteins. WOMBAT-P not only enables rapid execution and seamless comparison of workflows but also provides valuable insights into the capabilities of different software solutions. These benchmarking metrics are a valuable resource for researchers in selecting the most suitable workflow for their specific data sets. The modular architecture of WOMBAT-P promotes extensibility and customization. The software is available at https://github.com/wombat-p/WOMBAT-Pipelines.


Asunto(s)
Benchmarking , Proteómica , Flujo de Trabajo , Programas Informáticos , Proteínas , Análisis de Datos
8.
J Proteome Res ; 23(2): 532-549, 2024 02 02.
Article in English | MEDLINE | ID: mdl-38232391

ABSTRACT

Since 2010, the Human Proteome Project (HPP), the flagship initiative of the Human Proteome Organization (HUPO), has pursued two goals: (1) to credibly identify the protein parts list and (2) to make proteomics an integral part of multiomics studies of human health and disease. The HPP relies on international collaboration, data sharing, standardized reanalysis of MS data sets by PeptideAtlas and MassIVE-KB using HPP Guidelines for quality assurance, integration and curation of MS and non-MS protein data by neXtProt, plus extensive use of antibody profiling carried out by the Human Protein Atlas. According to the neXtProt release 2023-04-18, protein expression has now been credibly detected (PE1) for 18,397 of the 19,778 neXtProt predicted proteins coded in the human genome (93%). Of these PE1 proteins, 17,453 were detected with mass spectrometry (MS) in accordance with HPP Guidelines and 944 by a variety of non-MS methods. The number of neXtProt PE2, PE3, and PE4 missing proteins now stands at 1381. Achieving the unambiguous identification of 93% of predicted proteins encoded across all chromosomes represents remarkable experimental progress on the Human Proteome parts list. Meanwhile, there are several categories of predicted proteins that have proved resistant to detection regardless of the protein-based methods used. Additionally, there are some PE1-4 proteins that probably should be reclassified to PE5, specifically 21 LINC entries and ∼30 HERV entries; these are being addressed in the present year. Applying proteomics in a wide array of biological and clinical studies ensures integration with other omics platforms, as reported by the Biology and Disease-driven HPP teams and the antibody and pathology resource pillars. Current progress has positioned the HPP to transition to its Grand Challenge Project, focused on determining the primary function(s) of every protein itself and in networks and pathways within the context of human health and disease.
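The headline counts are internally consistent, as a quick arithmetic check shows:

```python
# Figures quoted from the neXtProt release 2023-04-18.
pe1_total = 18397      # PE1 (credibly detected) proteins
predicted = 19778      # predicted proteins coded in the human genome
ms_detected = 17453    # PE1 proteins detected by MS per HPP Guidelines
non_ms = 944           # PE1 proteins detected by non-MS methods

coverage = pe1_total / predicted     # fraction credibly detected
missing = predicted - pe1_total      # PE2-PE4 "missing" proteins

# MS and non-MS detections partition the PE1 set.
assert ms_detected + non_ms == pe1_total
```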


Asunto(s)
Anticuerpos , Proteoma , Humanos , Proteoma/genética , Proteoma/análisis , Bases de Datos de Proteínas , Espectrometría de Masas/métodos , Proteómica/métodos
9.
Clin Infect Dis ; 79(3): 588-595, 2024 Sep 26.
Article in English | MEDLINE | ID: mdl-38658348

ABSTRACT

BACKGROUND: Antibiotic overuse at hospital discharge is common, but there is no metric to evaluate hospital performance at this transition of care. We built a risk-adjusted metric for comparing hospitals on their overall post-discharge antibiotic use. METHODS: This was a retrospective study across all acute-care admissions within the Veterans Health Administration during 2018-2021. For patients discharged to home, we collected data on antibiotics and relevant covariates. We built a zero-inflated negative binomial mixed model with two random intercepts for each hospital to predict post-discharge antibiotic exposure and length of therapy (LOT). Data were split into training and testing sets to evaluate model performance using absolute error. Hospital performance was determined by the predicted random intercepts. RESULTS: A total of 1,804,300 patient-admissions across 129 hospitals were included. Antibiotics were prescribed to 41.5% of patients while hospitalized and to 19.5% at discharge. Median LOT among those prescribed post-discharge antibiotics was 7 (IQR, 4-10) days. The predictive model detected post-discharge antibiotic use with fidelity, including accurate identification of any exposure (area under the precision-recall curve = 0.97) and reliable prediction of post-discharge LOT (mean absolute error = 1.48). Based on this model, 39 (30.2%) hospitals prescribed antibiotics less often than expected at discharge and used shorter LOT than expected. Twenty-eight (21.7%) hospitals prescribed antibiotics more often at discharge and used longer LOT. CONCLUSIONS: A model using electronically available data was able to predict antibiotic use prescribed at hospital discharge and showed that some hospitals were more successful in reducing antibiotic overuse at this transition of care. This metric may help hospitals identify opportunities for improved antibiotic stewardship at discharge.
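The zero-inflated negative binomial distribution at the heart of such a model mixes a point mass at zero (patients never exposed post-discharge) with a negative binomial count of therapy days. A minimal sketch of its pmf (generic parameterization, not the study's fitted model):

```python
import numpy as np
from scipy.stats import nbinom

def zinb_pmf(y, pi, mu, alpha):
    """Zero-inflated negative binomial pmf.
    pi: probability of a structural (extra) zero,
    mu: negative binomial mean, alpha: dispersion.
    NB parameterized via n = 1/alpha, p = n/(n + mu), so
    variance = mu + alpha * mu**2."""
    y = np.asarray(y)
    n = 1.0 / alpha
    p = n / (n + mu)
    base = nbinom.pmf(y, n, p)
    # Zeros arise from the inflation component OR the count component;
    # positive counts only from the count component.
    return np.where(y == 0, pi + (1 - pi) * base, (1 - pi) * base)
```

In the study's setting, pi captures whether any post-discharge antibiotic is prescribed and the count component captures how long the therapy runs.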


Asunto(s)
Antibacterianos , Hospitales , Alta del Paciente , Humanos , Antibacterianos/uso terapéutico , Alta del Paciente/estadística & datos numéricos , Estudios Retrospectivos , Femenino , Masculino , Hospitales/estadística & datos numéricos , Anciano , Persona de Mediana Edad , Estados Unidos , Programas de Optimización del Uso de los Antimicrobianos , Ajuste de Riesgo/métodos , Pautas de la Práctica en Medicina/estadística & datos numéricos , United States Department of Veterans Affairs , Prescripción Inadecuada/estadística & datos numéricos
10.
Neuroimage ; 290: 120567, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38471597

ABSTRACT

Non-invasive and effective differentiation, along with determining the degree of deviation compared to a healthy cohort, is important for various brain disorders, including multiple sclerosis (MS). The effectiveness of diffusion tensor metrics (DTMs) from 3T DTI for recording MS-related deviations was evaluated using a time-acceptable MRI protocol with comprehensive detection of systematic errors related to spatial heterogeneity of magnetic field gradients. In a clinical study, DTMs were acquired in segmented regions of interest (ROIs) for 50 randomly selected healthy controls (HC) and 50 multiple sclerosis patients. Identical phantom imaging was performed for each clinical measurement to estimate and remove the influence of systematic errors using the b-matrix spatial distribution in DTI (BSD-DTI) technique. In the absence of statistically significant age-related differences among healthy volunteers and patients with multiple sclerosis, significant differences between the groups were demonstrated using DTMs. Moreover, a statistically significant impact of spatial systematic errors occurs for all ROIs and DTMs in the phantom and for approximately 90% in the HC and MS groups; for a single patient measurement, it appears for all examined ROIs and DTMs. The obtained DTMs effectively discriminate healthy volunteers from multiple sclerosis patients with a low mean score on the Expanded Disability Status Scale. The magnitude of the group differences is typically significant, with an effect size of approximately 0.5, and similar in both the standard approach and after elimination of systematic errors. Differences were also observed between metrics obtained using these two approaches. Despite small alterations in mean DTM values for groups and ROIs (1-3%), these differences were characterized by a large effect (effect size ∼0.8 or more). These findings indicate the importance of determining the spatial distribution of systematic errors specific to each MR scanner and DTI acquisition protocol in order to assess their impact on DTMs in the examined ROIs. This is crucial for establishing accurate DTM values both for individual patients and as mean reference values for a healthy population, allowing an initial reliable diagnosis based on DTI metrics.
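Effect sizes of the kind reported above (≈0.5 for group differences, ∼0.8 for the correction effect) are conventionally Cohen's d; a minimal sketch of the pooled-standard-deviation computation (illustrative, not the study's code):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d for two independent samples, using the pooled
    standard deviation: (mean_a - mean_b) / s_pooled."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)
```

By the usual rule of thumb, |d| ≈ 0.5 is a medium effect and |d| ≥ 0.8 a large one, matching how the abstract characterizes its findings.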


Asunto(s)
Encefalopatías , Esclerosis Múltiple , Humanos , Imagen de Difusión Tensora/métodos , Esclerosis Múltiple/diagnóstico por imagen , Imagen por Resonancia Magnética/métodos