Results 1 - 20 of 147
1.
JMIR Med Inform ; 12: e49542, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39140273

ABSTRACT

Background: Patient-monitoring software generates a large amount of data that can be reused for clinical audits and scientific research. The Observational Health Data Sciences and Informatics (OHDSI) consortium developed the Observational Medical Outcomes Partnership (OMOP) Common Data Model (CDM) to standardize electronic health record data and promote large-scale observational and longitudinal research. Objective: This study aimed to transform primary care data into the OMOP CDM format. Methods: We extracted primary care data from electronic health records at a multidisciplinary health center in Wattrelos, France. We performed structural mapping between the design of our local primary care database and the OMOP CDM tables and fields. Concepts from local French vocabularies were mapped to OHDSI standard vocabularies. To validate the implementation of primary care data in the OMOP CDM format, we applied a set of queries. A practical application was achieved through the development of a dashboard. Results: Data from 18,395 patients were implemented into the OMOP CDM, corresponding to 592,226 consultations over a period of 20 years. A total of 18 OMOP CDM tables were populated. Seventeen local vocabularies were identified as being related to primary care, corresponding to patient characteristics (sex, location, year of birth, and race), units of measurement, biometric measures, laboratory test results, medical histories, and drug prescriptions. During semantic mapping, 10,221 primary care concepts were mapped to standard OHDSI concepts. Five queries validated the transformation by comparing results obtained from the OMOP CDM with those obtained in the source software. Lastly, a prototype dashboard was developed to visualize the activity of the health center, the laboratory test results, and the drug prescription data. Conclusions: Primary care data from a French health care facility have been implemented in the OMOP CDM format. Data concerning demographics, units, measurements, and primary care consultation steps were already available in OHDSI vocabularies. Laboratory test results and drug prescription data were mapped to available vocabularies and structured in the final model. A dashboard application provided health care professionals with feedback on their practice.
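To make the structural and semantic mapping step concrete, here is a minimal R sketch of how a local patient table might be transformed into the OMOP CDM person table. The local field names and values are hypothetical; the OMOP column names and the 8507/8532 gender concept IDs follow the OHDSI standard but should be verified against the current vocabulary release.

```r
library(dplyr)

# Hypothetical local export from the source EHR (field names illustrative)
local_patients <- tibble(
  patient_ref = c("P001", "P002"),
  sexe        = c("M", "F"),      # local French coding for sex
  annee_naiss = c(1968, 1985)     # year of birth
)

# Map into the OMOP CDM `person` table; 8507 and 8532 are the OHDSI
# standard concept IDs for male and female sex
person <- local_patients |>
  transmute(
    person_id           = row_number(),
    gender_concept_id   = case_when(sexe == "M" ~ 8507L,
                                    sexe == "F" ~ 8532L,
                                    TRUE        ~ 0L),  # 0 = no matching concept
    year_of_birth       = annee_naiss,
    person_source_value = patient_ref
  )
```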

2.
AoB Plants ; 16(4): plae035, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39040093

ABSTRACT

The analysis of photosynthetic traits has become an integral part of plant (eco-)physiology. Many of these characteristics are not directly measured, but calculated from combinations of several, more direct, measurements. The calculations of such derived variables are based on underlying physical models and may use additional constants or assumed values. Commercially available gas-exchange instruments typically report such derived variables, but the available implementations use different definitions and assumptions. Moreover, no software is currently available to allow a fully scripted and reproducible workflow that includes importing data, pre-processing and recalculating derived quantities. The R package gasanalyzer aims to address these issues by providing methods to import data from different instruments, by translating photosynthetic variables to a standardized nomenclature, and by optionally recalculating derived quantities using standardized equations. In addition, the package facilitates performing sensitivity analyses on variables or assumptions used in the calculations to allow researchers to better assess the robustness of the results. The use of the package and how to perform sensitivity analyses are demonstrated using three different examples.
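A minimal sketch of the scripted workflow described above, in R. The gasanalyzer function names below reflect one reading of the package documentation and are assumptions to be checked against the installed version; the file name is illustrative.

```r
library(gasanalyzer)

# Import LI-6800 data, then recompute derived variables with
# standardized equations rather than the instrument defaults.
# Function names are assumptions; verify against the package docs.
licor  <- read_6800_xlsx("measurements.xlsx")
eqs    <- create_equations("default")
redone <- recalculate(licor, eqs)
```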

3.
Gels ; 10(7)2024 Jul 20.
Article in English | MEDLINE | ID: mdl-39057505

ABSTRACT

Hyaluronic acid, in the form of a gel or viscoelastic colloidal solution, is currently used for the viscosupplementation of joints affected by osteoarthritis, but its effectiveness is debated in relation to newer alternatives. Based on meta-analytical arguments, the present article reinforces the view that there are still no decisive arguments for its complete replacement, but rather for its use adapted to the peculiarities of the disease manifestation and of the patients. A "broad" comparison is first made with almost all alternatives studied in the last decade, and then a meta-regression study is performed to compare and predict the effect size induced by viscosupplementation therapy and by its main challenger of clinical interest, platelet-rich plasma treatment. If computerized, the developed models can serve as tools for clinicians in deciding whether viscosupplementation is appropriate, in a manner adapted to the pain felt by the patients, to their age, or to other clinical circumstances. The models were generated using algorithms implemented in the R language and assembled in different R packages. All primary data and necessary R scripts are provided in accordance with the philosophy of reproducible research. Finally, we present documented support for the view that HA-based products, although currently regarded with circumspection, remain clinically useful.
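A minimal meta-regression sketch in R with the metafor package, assuming per-study standardized mean differences (yi) and sampling variances (vi) have already been computed; the effect sizes, moderators, and values below are illustrative, not the study's data.

```r
library(metafor)

# Illustrative per-study effect sizes (e.g., standardized mean
# differences for pain relief) and sampling variances
dat <- data.frame(
  yi  = c(-0.42, -0.61, -0.35, -0.80, -0.28, -0.55),
  vi  = c(0.031, 0.044, 0.028, 0.052, 0.036, 0.047),
  trt = c("HA", "PRP", "HA", "PRP", "HA", "PRP"),  # comparator arm
  age = c(61, 58, 66, 55, 63, 59)                  # mean patient age
)

# Random-effects meta-regression: treatment type and patient age as
# moderators of the pooled effect
res <- rma(yi, vi, mods = ~ trt + age, data = dat)
summary(res)
```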

4.
Appl Plant Sci ; 12(3): e11573, 2024.
Article in English | MEDLINE | ID: mdl-38912123

ABSTRACT

Premise: Species distribution models (SDMs) are widely utilized to guide conservation decisions. The complexity of available data and SDM methodologies necessitates considerations of how data are chosen and processed for modeling to enhance model accuracy and support biological interpretations and ecological applications. Methods: We built SDMs for the invasive aquatic plant European frog-bit using aggregated and field data that span multiple scales, data sources, and data types. We tested how model results were affected by five modeler decision points: the exclusion of (1) missing and (2) correlated data and the (3) scale (large-scale aggregated data or systematic field data), (4) source (specimens or observations), and (5) type (presence-background or presence-absence) of occurrence data. Results: Decisions about the exclusion of missing and correlated data, as well as the scale and type of occurrence data, significantly affected metrics of model performance. The source and type of occurrence data led to differences in the importance of specific explanatory variables as drivers of species distribution and predicted probability of suitable habitat. Discussion: Our findings relative to European frog-bit illustrate how specific data selection and processing decisions can influence the outcomes and interpretation of SDMs. Data-centric protocols that incorporate data exploration into model building can help ensure models are reproducible and can be accurately interpreted in light of biological questions.
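As an illustration of one of the modeler decision points above (the exclusion of correlated data), here is a minimal base-R sketch. The predictor names, simulated values, and the |r| > 0.7 threshold are illustrative assumptions, not the study's actual choices.

```r
# Drop one predictor from every highly correlated pair before
# fitting an SDM; data are simulated for illustration
set.seed(1)
env <- data.frame(
  temp   = rnorm(100, 15, 3),
  precip = rnorm(100, 900, 120)
)
env$temp_max <- env$temp * 1.1 + rnorm(100, 0, 0.3)  # near-duplicate of temp

cm   <- abs(cor(env))                  # absolute pairwise correlations
high <- upper.tri(cm) & cm > 0.7       # flag pairs above the threshold
drop <- colnames(env)[apply(high, 2, any)]
env_reduced <- env[, setdiff(colnames(env), drop)]   # temp_max removed
```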

5.
Comput Biol Med ; 173: 108320, 2024 May.
Article in English | MEDLINE | ID: mdl-38531250

ABSTRACT

Brain age is an estimate of chronological age obtained from T1-weighted magnetic resonance images (T1w MRI), representing a straightforward diagnostic biomarker of brain aging and associated diseases. While the current best accuracy of brain age predictions on T1w MRIs of healthy subjects ranges from two to three years, comparing results across studies is challenging due to differences in the datasets, T1w preprocessing pipelines, and evaluation protocols used. This paper investigates the impact of T1w image preprocessing on the performance of four deep learning brain age models from recent literature. Four preprocessing pipelines, which differed in terms of registration transform, grayscale correction, and software implementation, were evaluated. The results showed that the choice of software or preprocessing steps could significantly affect the prediction error, with a maximum increase of 0.75 years in mean absolute error (MAE) for the same model and dataset. While grayscale correction had no significant impact on MAE, using affine rather than rigid registration to a brain atlas statistically significantly improved MAE. Models trained on 3D images with isotropic 1 mm³ resolution exhibited less sensitivity to the T1w preprocessing variations compared to 2D models or those trained on downsampled 3D images. Our findings indicate that extensive T1w preprocessing improves MAE, especially when predicting on a new dataset. This runs counter to prevailing research literature, which suggests that models trained on minimally preprocessed T1w scans are better suited for age predictions on MRIs from unseen scanners. We demonstrate that, irrespective of the model or T1w preprocessing used during training, applying some form of offset correction is essential to enable the model's performance to generalize effectively on datasets from unseen sites, regardless of whether they have undergone the same or different T1w preprocessing as the training set.
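A minimal R sketch of the offset correction the authors argue is essential, under the common assumption that the site-specific bias can be estimated as the mean signed error on a small calibration subset of the new dataset; all data here are simulated.

```r
# Simulated brain-age predictions on a new site: the model
# systematically over-predicts by ~3.5 years
set.seed(7)
age_true <- runif(60, 20, 80)
age_pred <- age_true + 3.5 + rnorm(60, 0, 2.5)

calib  <- 1:20                                     # held-out calibration scans
offset <- mean(age_pred[calib] - age_true[calib])  # estimated site offset
age_corrected <- age_pred[-calib] - offset         # apply correction to the rest

mae_before <- mean(abs(age_pred[-calib] - age_true[-calib]))
mae_after  <- mean(abs(age_corrected    - age_true[-calib]))
```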


Subject(s)
Brain; Magnetic Resonance Imaging; Humans; Child, Preschool; Brain/diagnostic imaging; Magnetic Resonance Imaging/methods; Imaging, Three-Dimensional; Aging; Software; Image Processing, Computer-Assisted/methods
6.
Stat Biosci ; 16(1): 250-264, 2024.
Article in English | MEDLINE | ID: mdl-38495080

ABSTRACT

Teaching statistics through engaging applications to contemporary large-scale datasets is essential to attracting students to the field. To this end, we developed a hands-on, week-long workshop for senior high-school or junior undergraduate students, without prior knowledge of statistical genetics but with some basic knowledge of data science, to conduct their own genome-wide association study (GWAS). The GWAS was performed on open-source gene expression data, using publicly available human genetics data. Assisted by a detailed instruction manual, students were able to obtain ∼1.4 million p-values from a real scientific study within several days. This early motivation kept students engaged in learning the theories that support their results, including regression, data visualization, results interpretation, and large-scale multiple hypothesis testing. To further motivate learning by emphasizing the personal connection to this type of data analysis, students were encouraged to give short presentations about how GWAS has provided insights into the genetic basis of diseases present in their friends or families. The appended open-source, step-by-step instruction manual includes descriptions of the datasets used, the software needed, and results from the workshop. Additionally, scripts used in the workshop are archived on GitHub and Zenodo to further enhance reproducible research and training.
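A minimal R sketch of the per-SNP association test behind the large number of p-values students computed in the workshop; the genotypes and expression values are simulated, and in practice a dedicated GWAS tool would replace the plain regression loop.

```r
# Simulate 0/1/2 allele counts for 1000 SNPs in 200 individuals,
# with SNP 1 genuinely associated with the expression trait
set.seed(42)
n_snp <- 1000; n_ind <- 200
geno <- matrix(rbinom(n_snp * n_ind, 2, 0.3), nrow = n_ind)
expr <- rnorm(n_ind) + 0.5 * geno[, 1]

# One linear regression per SNP; extract the slope's p-value
pvals <- apply(geno, 2, function(g)
  summary(lm(expr ~ g))$coefficients[2, 4])

# Bonferroni-style threshold for large-scale multiple testing
which(pvals < 0.05 / n_snp)
```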

7.
Mod Pathol ; 37(4): 100439, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38286221

ABSTRACT

This work puts forth and demonstrates the utility of a reporting framework for collecting and evaluating annotations of medical images used for training and testing artificial intelligence (AI) models in assisting detection and diagnosis. AI has unique reporting requirements, as shown by the AI extensions to the Consolidated Standards of Reporting Trials (CONSORT) and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklists and the proposed AI extensions to the Standards for Reporting Diagnostic Accuracy (STARD) and Transparent Reporting of a Multivariable Prediction model for Individual Prognosis or Diagnosis (TRIPOD) checklists. AI for detection and/or diagnostic image analysis requires complete, reproducible, and transparent reporting of the annotations and metadata used in training and testing data sets. In an earlier work by other researchers, an annotation workflow and quality checklist for computational pathology annotations were proposed. In this manuscript, we operationalize this workflow into an evaluable quality checklist that applies to any reader-interpreted medical images, and we demonstrate its use for an annotation effort in digital pathology. We refer to this quality framework as the Collection and Evaluation of Annotations for Reproducible Reporting of Artificial Intelligence (CLEARR-AI).


Subject(s)
Artificial Intelligence; Checklist; Humans; Prognosis; Image Processing, Computer-Assisted; Research Design
8.
Magn Reson Med ; 91(4): 1464-1477, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38044680

ABSTRACT

PURPOSE: The reproducibility of scientific reports is crucial to advancing human knowledge. This paper summarizes our experience in replicating a balanced SSFP half-radial dual-echo imaging technique (bSTAR) using open-source frameworks, as a response to the 2023 ISMRM "repeat it with me" Challenge. METHODS: We replicated the bSTAR technique for thoracic imaging at 0.55T. The bSTAR pulse sequence is implemented in Pulseq, a vendor-neutral open-source rapid sequence prototyping environment. Image reconstruction is performed with the open-source Berkeley Advanced Reconstruction Toolbox (BART). The replication of bSTAR, termed open-source bSTAR, is tested by replicating several figures from the published literature. Original bSTAR, using the pulse sequence and image reconstruction developed by the original authors, and open-source bSTAR, with the pulse sequence and image reconstruction developed in this work, were performed in healthy volunteers. RESULTS: Both echo images obtained from open-source bSTAR contain no visible artifacts and show spatial resolution and image quality identical to those in the published literature. A direct head-to-head comparison between open-source bSTAR and original bSTAR in a healthy volunteer indicates that open-source bSTAR provides SNR, spatial resolution, level of artifacts, and conspicuity of pulmonary vessels comparable to original bSTAR. CONCLUSION: We have successfully replicated bSTAR lung imaging at 0.55T using two open-source frameworks. Full replication of a research method relying solely on the information in a published paper is unfortunately rare, but our success gives greater confidence that a research methodology can indeed be replicated as described.


Subject(s)
Artifacts; Magnetic Resonance Imaging; Humans; Reproducibility of Results; Magnetic Resonance Imaging/methods
9.
Elife ; 12: 2023 Nov 23.
Article in English | MEDLINE | ID: mdl-37994903

ABSTRACT

Reproducible research and open science practices have the potential to accelerate scientific progress by allowing others to reuse research outputs, and by promoting rigorous research that is more likely to yield trustworthy results. However, these practices are uncommon in many fields, so there is a clear need for training that helps and encourages researchers to integrate reproducible research and open science practices into their daily work. Here, we outline eleven strategies for making training in these practices the norm at research institutions. The strategies, which emerged from a virtual brainstorming event organized in collaboration with the German Reproducibility Network, are concentrated in three areas: (i) adapting research assessment criteria and program requirements; (ii) training; (iii) building communities. We provide a brief overview of each strategy, offer tips for implementation, and provide links to resources. We also highlight the importance of allocating resources and monitoring impact. Our goal is to encourage researchers - in their roles as scientists, supervisors, mentors, instructors, and members of curriculum, hiring or evaluation committees - to think creatively about the many ways they can promote reproducible research and open science practices in their institutions.


Subject(s)
Mentors; Physicians; Humans; Reproducibility of Results; Personnel Selection; Research Personnel
10.
J Appl Biomech ; 39(6): 421-431, 2023 Dec 01.
Article in English | MEDLINE | ID: mdl-37793655

ABSTRACT

A muscle's architecture, defined as the geometric arrangement of its fibers with respect to its mechanical line of action, impacts its abilities to produce force and shorten or lengthen under load. Ultrasound and other noninvasive imaging methods have contributed significantly to our understanding of these structure-function relationships. The goal of this work was to develop a MATLAB toolbox for tracking and mathematically representing muscle architecture at the fascicle scale, based on brightness-mode ultrasound imaging data. The MuscleUS_Toolbox allows user-performed segmentation of a region of interest and automated modeling of local fascicle orientation; calculation of streamlines between aponeuroses of origin and insertion; and quantification of fascicle length, pennation angle, and curvature. A method is described for optimizing the fascicle orientation modeling process, and the capabilities of the toolbox for quantifying and visualizing fascicle architecture are illustrated in the human tibialis anterior muscle. The toolbox is freely available.


Subject(s)
Muscle, Skeletal; Humans; Muscle, Skeletal/diagnostic imaging; Muscle, Skeletal/physiology; Ultrasonography
11.
Wellcome Open Res ; 8: 286, 2023.
Article in English | MEDLINE | ID: mdl-37829674

ABSTRACT

Crosslinking and immunoprecipitation (CLIP) technologies have become a central component of the molecular biologist's toolkit for studying protein-RNA interactions and thus uncovering core principles of RNA biology. There has been a proliferation of CLIP-based experimental protocols, as well as computational tools, especially for peak-calling. Consequently, there is an urgent need for a well-documented bioinformatic pipeline that enshrines the principles of robustness, reproducibility, scalability, portability and flexibility while embracing the diversity of experimental and computational CLIP tools. To address this, we present nf-core/clipseq - a robust Nextflow pipeline for quality control and analysis of CLIP sequencing data. It is part of the international nf-core community effort to develop and curate a best-practice, gold-standard set of pipelines for data analysis. The standards enabled by Nextflow and nf-core, including workflow management, version control, continuous integration and containerisation, ensure that these key needs are met. Furthermore, multiple tools are implemented (e.g., for peak-calling), alongside visualisation of quality control metrics, to empower users to make their own informed decisions based on their data. nf-core/clipseq remains under active development, with plans to incorporate newly released tools to ensure that the pipeline remains up-to-date and relevant for the community. Engagement with users and developers is encouraged through the nf-core GitHub repository and Slack channel to promote collaboration. It is available at https://nf-co.re/clipseq.

12.
PeerJ ; 11: e16318, 2023.
Article in English | MEDLINE | ID: mdl-37876906

ABSTRACT

Transcription factor binding to a gene regulatory region induces or represses its expression. Binding and expression target analysis (BETA) integrates binding and gene expression data to predict this function. First, the regulatory potential of the factor is modeled based on the distance of its binding sites from the transcription start site using a decay function. Then, the differential expression statistics from an experiment in which the factor was perturbed represent the binding effect. The rank product of the two values is used to order genes by importance. This algorithm was originally implemented in Python. We reimplemented it in R to take advantage of existing data structures and other tools for downstream analyses. Here, we attempted to replicate the findings in the original BETA paper. We applied the new implementation to the same datasets using default and varying inputs and cutoffs. We successfully replicated the original results. Moreover, we showed that the method was appropriately influenced by varying the input and was robust to choices of cutoffs in statistical testing.
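A minimal R sketch of the two BETA ingredients described above: a distance-decay regulatory potential and the rank product with differential expression. The decay constants follow one reading of the original BETA publication and should be treated as assumptions; all values are simulated.

```r
# Regulatory potential of one gene: sum of decayed contributions of
# binding sites, with distance expressed in 100-kb units
set.seed(3)
tss     <- 5e5                            # transcription start site
peaks   <- sort(runif(8, 0, 1e6))         # binding site coordinates
delta   <- abs(peaks - tss) / 1e5
reg_pot <- sum(exp(-(0.5 + 4 * delta)))

# For many genes, combine the rank of regulatory potential with the
# rank of the differential-expression p-value into a rank product
rp_rank <- rank(-c(gene1 = 2.1, gene2 = 0.3, gene3 = 1.4))   # high potential = rank 1
de_rank <- rank(c(gene1 = 1e-6, gene2 = 0.2, gene3 = 1e-3))  # small p = rank 1
rank_product <- rp_rank * de_rank          # smaller = stronger target candidate
```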


Subject(s)
Chromatin Immunoprecipitation Sequencing; Transcriptome; Transcription Factors/genetics; Chromatin Immunoprecipitation/methods; Algorithms
13.
J Proteome Res ; 22(9): 2775-2784, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37530557

ABSTRACT

Missing values are a notable challenge when analyzing mass spectrometry-based proteomics data. While the field is still actively debating best practices, the challenge has grown with the emergence of mass spectrometry-based single-cell proteomics and the dramatic increase in missing values. A popular approach to dealing with missing values is imputation. Imputation has several drawbacks for which alternatives exist, but it is currently still a practical solution widely adopted in single-cell proteomics data analysis. This perspective discusses the advantages and drawbacks of imputation. We also highlight five main challenges linked to missing value management in single-cell proteomics. Future developments should aim to solve these challenges, whether through imputation or data modeling. The perspective concludes with recommendations for reporting missing values, for reporting methods that deal with missing values, and for proper encoding of missing values.
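A minimal base-R sketch contrasting two imputation choices that embody different assumptions about why a value is missing; the intensities are illustrative, and a real analysis would use dedicated proteomics tooling.

```r
x <- c(12.1, NA, 11.4, NA, 13.0, 12.6)   # log-intensities, NA = missing

# (1) Left-censored view: missingness caused by low abundance,
#     so impute with a value below the observed minimum
x_mindet <- ifelse(is.na(x), min(x, na.rm = TRUE) - 1, x)

# (2) Missing-at-random view: impute with the observed mean
x_mean <- ifelse(is.na(x), mean(x, na.rm = TRUE), x)

# The two choices can shift downstream statistics in opposite
# directions, which is why reporting the assumed mechanism matters
c(mean(x_mindet), mean(x_mean))
```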


Subject(s)
Proteomics; Single-Cell Analysis; Proteomics/methods; Mass Spectrometry/methods; Algorithms
14.
bioRxiv ; 2023 Oct 30.
Article in English | MEDLINE | ID: mdl-37214863

ABSTRACT

Brain age is an estimate of chronological age obtained from T1-weighted magnetic resonance images (T1w MRI) and represents a simple diagnostic biomarker of brain ageing and associated diseases. While the current best accuracy of brain age predictions on T1w MRIs of healthy subjects ranges from two to three years, comparing results from different studies is challenging due to differences in the datasets, T1w preprocessing pipelines, and performance metrics used. This paper investigates the impact of T1w image preprocessing on the performance of four deep learning brain age models presented in recent literature. Four preprocessing pipelines were evaluated, differing in terms of registration, grayscale correction, and software implementation. The results showed that the choice of software or preprocessing steps can significantly affect the prediction error, with a maximum increase of 0.7 years in mean absolute error (MAE) for the same model and dataset. While grayscale correction had no significant impact on MAE, affine registration of T1w images to a brain atlas was shown to improve MAE statistically significantly compared with rigid registration. Models trained on 3D images with isotropic 1 mm³ resolution exhibited less sensitivity to the T1w preprocessing variations compared to 2D models or those trained on downsampled 3D images. Some models proved invariant to the preprocessing pipeline, but only after offset correction. Our findings generally indicate that extensive T1w preprocessing improves the MAE, especially when predicting on a new dataset. This runs counter to prevailing research literature, which suggests that models trained on minimally preprocessed T1w scans are better suited for age predictions on MRIs from unseen scanners. Regardless of the model or T1w preprocessing used, we show that some form of offset correction should be applied to enable the model's performance to generalize to a new dataset, whether it has undergone the same or different T1w preprocessing as the training set.

15.
Methods Mol Biol ; 2649: 339-357, 2023.
Article in English | MEDLINE | ID: mdl-37258872

ABSTRACT

Handling and manipulating tabular datasets is a critical step in every metagenomics analysis pipeline. The R statistical programming language offers a variety of versatile tools for working with tabular data that allow for the development of computationally efficient and reproducible workflows. Here we outline the basics of the R programming language and showcase a number of tools for data manipulation and basic analysis of metagenomics datasets.
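A minimal sketch of the kind of tabular manipulation the chapter covers, converting a taxa-by-sample count table into per-sample relative abundances with dplyr and tidyr; the taxa, sample names, and counts are illustrative.

```r
library(dplyr)
library(tidyr)

# A small taxa-by-sample count table, as might come out of a
# metagenomics profiler
counts <- tibble(
  taxon = c("Bacteroides", "Prevotella", "Escherichia"),
  s1    = c(120, 30, 5),
  s2    = c(80, 90, 12)
)

# Reshape to long format and normalize counts within each sample
rel_abund <- counts |>
  pivot_longer(-taxon, names_to = "sample", values_to = "n") |>
  group_by(sample) |>
  mutate(rel = n / sum(n)) |>
  ungroup()
```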


Subject(s)
Metagenomics; Software; Programming Languages; Workflow
16.
Front Genet ; 14: 1106631, 2023.
Article in English | MEDLINE | ID: mdl-37065493

ABSTRACT

The Human Genome Project galvanized the scientific community around an ambitious goal. Upon completion, the project delivered several discoveries, and a new era of research commenced. More importantly, novel technologies and analysis methods materialized during the project period. The resulting cost reductions allowed many more labs to generate high-throughput datasets. The project also served as a model for other extensive collaborations that generated large datasets. These datasets were made public and continue to accumulate in repositories. As a result, the scientific community should consider how these data can be utilized effectively for the purposes of research and the public good. A dataset can be re-analyzed, curated, or integrated with other forms of data to enhance its utility. In this brief perspective, we highlight three important areas for achieving this goal. We also emphasize the critical requirements for these strategies to be successful. We draw on our own experience and that of others in using publicly available datasets to support, develop, and extend our research interests. Finally, we identify the beneficiaries and discuss some risks involved in data reuse.

17.
Genome Biol ; 24(1): 77, 2023 04 17.
Article in English | MEDLINE | ID: mdl-37069586

ABSTRACT

We present RCRUNCH, an end-to-end solution to CLIP data analysis for identification of binding sites and sequence specificity of RNA-binding proteins. RCRUNCH can analyze not only reads that map uniquely to the genome but also those that map to multiple genome locations or across splice boundaries and can consider various types of background in the estimation of read enrichment. By applying RCRUNCH to the eCLIP data from the ENCODE project, we have constructed a comprehensive and homogeneous resource of in-vivo-bound RBP sequence motifs. RCRUNCH automates the reproducible analysis of CLIP data, enabling studies of post-transcriptional control of gene expression.


Subject(s)
RNA-Binding Proteins; RNA; RNA/metabolism; Sequence Analysis, RNA; Binding Sites/genetics; Protein Binding; RNA-Binding Proteins/genetics; RNA-Binding Proteins/metabolism
18.
Front Psychol ; 14: 940961, 2023.
Article in English | MEDLINE | ID: mdl-36936015

ABSTRACT

The Myers-Briggs Type Indicator (MBTI) is a popular tool used by psychologists coaching managers in organizational contexts. Despite its popularity, few studies provide empirical evidence on the role of the MBTI as a predictor of managers' leadership-related behaviors. This article is based on research addressing how well the MBTI predicts leadership behavior. It does so by comparing goodness-of-fit indexes of two confirmatory factor analysis models and two structural models of the personality-leadership relationship, following standards of reproducible research. We sampled 529 participants, graduate and undergraduate students enrolled in business administration programs at Colombian universities. Results show conclusive evidence for the psychometric soundness of both the MBTI and the leadership practices measures, even though the relationship between the MBTI and the leadership practices inventory proved to be weak.
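A minimal R sketch of the model comparison described above using the lavaan package; the item names, factor structure, and simulated data are illustrative stand-ins, not the study's actual instruments.

```r
library(lavaan)

# Simulated stand-in for the 529-respondent dataset
set.seed(11)
n  <- 529
f1 <- rnorm(n); f2 <- 0.3 * f1 + rnorm(n)
survey <- data.frame(
  e1 = f1 + rnorm(n), e2 = f1 + rnorm(n), e3 = f1 + rnorm(n),
  l1 = f2 + rnorm(n), l2 = f2 + rnorm(n), l3 = f2 + rnorm(n)
)

# Measurement (CFA) model and a structural model adding the
# personality -> leadership regression
cfa_model <- '
  personality =~ e1 + e2 + e3
  leadership  =~ l1 + l2 + l3
'
sem_model <- paste(cfa_model, "leadership ~ personality")

fit_cfa <- cfa(cfa_model, data = survey)
fit_sem <- sem(sem_model, data = survey)
fitMeasures(fit_sem, c("cfi", "tli", "rmsea", "srmr"))
```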

19.
BMC Med Inform Decis Mak ; 23(1): 8, 2023 01 16.
Article in English | MEDLINE | ID: mdl-36647111

ABSTRACT

BACKGROUND: The CVD-COVID-UK consortium was formed to understand the relationship between COVID-19 and cardiovascular diseases through analyses of harmonised electronic health records (EHRs) across the four UK nations. Beyond COVID-19, data harmonisation and common approaches enable analysis within and across independent Trusted Research Environments. Here we describe the reproducible harmonisation method developed using large-scale EHRs in Wales to accommodate the fast and efficient implementation of cross-nation analysis in England and Wales as part of the CVD-COVID-UK programme. We characterise current challenges and share lessons learnt. METHODS: Serving the scope and scalability of multiple study protocols, we used linked, anonymised individual-level EHR, demographic and administrative data held within the SAIL Databank for the population of Wales. The harmonisation method was implemented as a four-layer reproducible process, starting from raw data in the first layer. Each of layers two to four is then framed by, but not limited to, the challenges we characterise and the lessons learnt. We achieved curated data in the second layer, extracted phenotyped data in the third layer, and captured any project-specific requirements in the fourth layer. RESULTS: Using the implemented four-layer harmonisation method, we retrieved approximately 100 health-related variables for the 3.2 million individuals in Wales, harmonised with corresponding variables for more than 56 million individuals in England. We processed 13 data sources into the first layer of our harmonisation method: five of these are updated daily or weekly, and the rest at various frequencies, providing sufficient data flow for frequent capture of up-to-date demographic, administrative and clinical information. CONCLUSIONS: We implemented an efficient, transparent, scalable, and reproducible harmonisation method that enables multi-nation collaborative research. With a current focus on COVID-19 and its relationship with cardiovascular outcomes, the harmonised data has supported a wide range of research activities across the UK.


Subject(s)
COVID-19; Electronic Health Records; Humans; COVID-19/epidemiology; Wales/epidemiology; England
20.
Curr Protoc ; 3(1): e658, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36633424

ABSTRACT

Sound data analysis is essential to retrieve meaningful biological information from single-cell proteomics experiments. This analysis is carried out by computational methods that are assembled into workflows, and their implementations influence the conclusions that can be drawn from the data. In this work, we explore and compare the computational workflows that have been used over the last four years and identify a profound lack of consensus on how to analyze single-cell proteomics data. We highlight the need for benchmarking of computational workflows and standardization of computational tools and data, as well as carefully designed experiments. Finally, we cover the current standardization efforts that aim to fill the gap, list the remaining missing pieces, and conclude with lessons learned from the replication of published single-cell proteomics analyses. © 2023 Wiley Periodicals LLC.


Subject(s)
Proteomics; Software; Proteomics/methods; Workflow; Data Analysis; Reference Standards