Results 1 - 20 of 1,417
1.
BMC Bioinformatics ; 25(1): 213, 2024 Jun 13.
Article in English | MEDLINE | ID: mdl-38872097

ABSTRACT

BACKGROUND: Automated hypothesis generation (HG) focuses on uncovering hidden connections within the extensive information that is publicly available. This domain has become increasingly popular, thanks to modern machine learning algorithms. However, the automated evaluation of HG systems is still an open problem, especially on a larger scale. RESULTS: This paper presents Dyport, a novel benchmarking framework for evaluating biomedical hypothesis generation systems. Utilizing curated datasets, our approach tests these systems under realistic conditions, enhancing the relevance of our evaluations. We integrate knowledge from the curated databases into a dynamic graph, accompanied by a method to quantify discovery importance. This not only assesses the accuracy of hypotheses but also their potential impact in biomedical research, which significantly extends traditional link prediction benchmarks. The applicability of our benchmarking process is demonstrated on several link prediction systems applied to biomedical semantic knowledge graphs. Being flexible, our benchmarking system is designed for broad application in hypothesis generation quality verification, aiming to expand the scope of scientific discovery within the biomedical research community. CONCLUSIONS: Dyport is an open-source benchmarking framework for evaluating biomedical hypothesis generation systems, which takes into account knowledge dynamics, semantics and impact. All code and datasets are available at: https://github.com/IlyaTyagin/Dyport.


Subject(s)
Benchmarking , Benchmarking/methods , Algorithms , Biomedical Research/methods , Software , Machine Learning , Databases, Factual , Computational Biology/methods , Semantics
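The abstract above frames hypothesis generation evaluation as link prediction over a dynamic knowledge graph built from curated sources. The following is a minimal, self-contained sketch of that general idea only; it is not Dyport's actual pipeline, and the toy concept graph, the Jaccard-coefficient baseline, and the recall@k cut-off are all assumptions made purely for illustration.

```python
# Illustrative time-sliced evaluation of a link-prediction baseline on a small
# concept co-occurrence graph; nodes, edges and the baseline are hypothetical.
import networkx as nx

# Connections observed up to a training cut-off, and connections that only appear later.
train_edges = [("aspirin", "inflammation"), ("inflammation", "il6"),
               ("il6", "arthritis"), ("aspirin", "fever")]
future_edges = {("aspirin", "il6"), ("arthritis", "fever")}  # pairs pre-sorted alphabetically

G = nx.Graph()
G.add_edges_from(train_edges)

# Score every non-edge with a simple Jaccard-coefficient baseline.
candidates = [(u, v) for u in G for v in G
              if u < v and not G.has_edge(u, v)]
scores = sorted(nx.jaccard_coefficient(G, candidates),
                key=lambda t: t[2], reverse=True)

# Recall@k: how many held-out future connections appear among the top-k predictions.
k = 3
top_k = {(u, v) for u, v, _ in scores[:k]}
hits = top_k & future_edges
print(f"recall@{k} = {len(hits) / len(future_edges):.2f}")
```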
2.
Circ Cardiovasc Qual Outcomes ; : e010637, 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38887950

ABSTRACT

BACKGROUND: Cardiogenic shock is a morbid complication of heart disease that claims the lives of more than 1 in 3 patients presenting with this syndrome. Supporting a unique collaboration across clinical specialties, federal regulators, payors, and industry, the American Heart Association volunteers and staff have launched a quality improvement registry to better understand the clinical manifestations of shock phenotypes and to benchmark the management patterns and outcomes of patients presenting with cardiogenic shock to hospitals across the United States. METHODS: Participating hospitals will enroll consecutive hospitalized patients with cardiogenic shock, regardless of etiology or severity. Data are collected through individual reviews of medical records of sequential adult patients with cardiogenic shock. The electronic case record form was collaboratively designed with a core minimum data structure and aligned with Shock Academic Research Consortium definitions. This registry will allow participating health systems to evaluate patient-level data including diagnostic approaches, therapeutics, use of advanced monitoring and circulatory support, processes of care, complications, and in-hospital survival. Participating sites can leverage these data for onsite monitoring of outcomes and benchmarking versus other institutions. The registry was concomitantly designed to provide a high-quality longitudinal research infrastructure for pragmatic randomized trials as well as translational, clinical, and implementation research. An aggregate deidentified data set will be made available to the research community on the American Heart Association's Precision Medicine Platform. On March 31, 2022, the American Heart Association Cardiogenic Shock Registry received its first clinical records. At the time of this submission, 100 centers are participating. CONCLUSIONS: The American Heart Association Cardiogenic Shock Registry will serve as a resource using a consistent data structure and definitions for the medical and research community to accelerate scientific advancement through shared learning and research, resulting in improved quality of care and outcomes for patients with shock.

3.
Front Radiol ; 4: 1386906, 2024.
Article in English | MEDLINE | ID: mdl-38836218

ABSTRACT

Introduction: This study is a retrospective evaluation of the performance of deep learning models that were developed for the detection of COVID-19 from chest x-rays, undertaken with the goal of assessing the suitability of such systems as clinical decision support tools. Methods: Models were trained on the National COVID-19 Chest Imaging Database (NCCID), a UK-wide multi-centre dataset from 26 different NHS hospitals and evaluated on independent multi-national clinical datasets. The evaluation considers clinical and technical contributors to model error and potential model bias. Model predictions are examined for spurious feature correlations using techniques for explainable prediction. Results: Models performed adequately on NHS populations, with performance comparable to radiologists, but generalised poorly to international populations. Models performed better in males than females, and performance varied across age groups. Alarmingly, models routinely failed when applied to complex clinical cases with confounding pathologies and when applied to radiologist defined "mild" cases. Discussion: This comprehensive benchmarking study examines the pitfalls in current practices that have led to impractical model development. Key findings highlight the need for clinician involvement at all stages of model development, from data curation and label definition, to model evaluation, to ensure that all clinical factors and disease features are appropriately considered during model design. This is imperative to ensure automated approaches developed for disease detection are fit-for-purpose in a clinical setting.

4.
Res Sq ; 2024 May 21.
Article in English | MEDLINE | ID: mdl-38826386

ABSTRACT

Detecting very minor (< 1%) subpopulations using next-generation sequencing is a critical need for multiple applications including detection of drug resistant pathogens and somatic variant detection in oncology. To enable these applications, wet lab enhancements and bioinformatic error correction methods have been developed for 'sequencing by synthesis' technology to reduce its inherent sequencing error rate. A recently available sequencing approach termed 'sequencing by binding' claims to have higher base calling accuracy data "out of the box." This paper evaluates the utility of using 'sequencing by binding' for the detection of ultra-rare subpopulations down to 0.001%.

5.
Sci Rep ; 14(1): 13406, 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38862672

ABSTRACT

This article investigates an inventive methodology for precisely and efficiently controlling photovoltaic emulating (PVE) prototypes, which are employed in the assessment of solar systems. A modification to the Shift controller (SC), which is regarded as a leading PVE controller, is proposed. In addition to efficiency and accuracy, the novel controller places a high emphasis on improving transient performance. The novel piecewise linear-logarithmic adaptation utilized by the Modified-Shift controller (M-SC) enables the controller to adapt linearly to the load burden within a specified operating range. At reduced load resistances, the transient speed of the PVE can be increased through the implementation of this scheme. An exceedingly short settling time of the PVE is ensured by a logarithmic modification of the control action beyond the critical point. In order to analyze the M-SC in the context of PVE control, numerical investigations implemented in MATLAB/Simulink (Version: Simulink 10.4, URL: https://in.mathworks.com/products/simulink.html) were utilized. To assess the effectiveness of the suggested PVE, three benchmarking profiles are presented: eight scenarios involving irradiance/PVE load, continuously varying irradiance/temperature, and rapidly changing loads. Performance is assessed using metrics such as settling time, efficiency, Integral of Absolute Error (IAE), and percentage error (e_PVE). According to the findings, the proposed M-SC attains an approximately twofold increase in speed over the conventional SC. This is substantiated by an efficiency increase of 2.2%, an expeditiousness enhancement of 5.65%, and an IAE rise of 5.65%. Based on the results of this research, the new M-SC provides the PVE with continuously enhanced dynamic operation, making it highly suitable for evaluating solar systems in ever-changing environments.
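The abstract does not specify the actual M-SC control law, so the sketch below only illustrates the general shape of a piecewise linear-logarithmic adaptation around a critical load point; the functional form, the critical resistance, and all constants are invented purely for illustration and are not the authors' controller.

```python
# Purely illustrative piecewise linear-logarithmic adaptation of a control
# parameter as a function of load resistance; all values below are assumptions.
import math

def adapted_gain(r_load, r_crit=10.0, k_lin=0.05, k_log=0.5, base=1.0):
    """Linear adaptation below the critical load, logarithmic above it."""
    if r_load <= r_crit:
        return base + k_lin * r_load                                   # linear region
    return base + k_lin * r_crit + k_log * math.log(r_load / r_crit)   # logarithmic region

for r in (2, 5, 10, 20, 50, 100):
    print(f"R_load = {r:>5.1f} ohm -> gain = {adapted_gain(r):.3f}")
```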

6.
Data Brief ; 54: 110543, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38868385

ABSTRACT

Conifer shoots exhibit intricate geometries at an exceptionally detailed spatial scale. Describing the complete structure of a conifer shoot, which contributes to its radiation scattering pattern, has been difficult, and the corresponding components of previous radiative transfer models for conifer stands were rather coarse. This paper presents a dataset aimed at models and applications requiring detailed 3D representations of needle shoots. The data collection was conducted in the Järvselja RAdiation transfer Model Intercomparison (RAMI) pine stand in Estonia. The dataset includes 3-dimensional surface information on 10 shoots of two conifer species present in the stand (5 shoots per species) - Scots pine (Pinus sylvestris L.) and Norway spruce (Picea abies (L.) H. Karst.). The samples were collected on 26th July 2022, and a blue-light 3D photogrammetry scanning technique was subsequently used to obtain their high-resolution 3D point cloud representations. For each of these samples, the dataset comprises a photo of the sampled shoot and its 3-dimensional surface reconstruction. Scanned shoots may replace previous, artificially generated models and contribute to more realistic 3D forest representations and, consequently, more accurate estimates of related parameters and processes by radiative transfer models.

7.
Am J Epidemiol ; 2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38896054

ABSTRACT

Cardiovascular disease (CVD) is a leading cause of death globally. Angiotensin-converting enzyme inhibitors (ACEi) and angiotensin receptor blockers (ARB), compared in the ONTARGET trial, each prevent CVD. However, trial results may not be generalisable and their effectiveness in underrepresented groups is unclear. Using trial emulation methods within routine-care data to validate findings, we explored the generalisability of the ONTARGET results. For people prescribed an ACEi/ARB in the UK Clinical Practice Research Datalink GOLD from 1/1/2001 to 31/7/2019, we applied trial criteria and propensity-score methods to create an ONTARGET trial-eligible cohort. Comparing ARB to ACEi, we estimated hazard ratios for the primary composite trial outcome (cardiovascular death, myocardial infarction, stroke, or hospitalisation for heart failure), and secondary outcomes. As the pre-specified criteria confirming trial emulation were met, we then explored treatment heterogeneity among three trial-underrepresented subgroups: females, those aged ≥75 years and those with chronic kidney disease (CKD). In the trial-eligible population (n=137,155), results for the primary outcome demonstrated similar effects of ARB and ACEi (HR 0.97 [95% CI: 0.93, 1.01]), meeting the pre-specified validation criteria. When extending this outcome to trial-underrepresented groups, similar treatment effects were observed by sex, age and CKD. This suggests that ONTARGET trial findings are generalisable to trial-underrepresented subgroups.
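The study above relies on propensity-score methods to emulate the trial within routine-care data. As a rough illustration of that step only, the sketch below fits a propensity model and derives inverse-probability-of-treatment weights on simulated data; the covariates, the treatment-assignment model, and the weighting choice are hypothetical and do not reflect the actual CPRD GOLD analysis.

```python
# Minimal sketch of propensity-score weighting for an ARB-vs-ACEi comparison;
# the toy data and covariates below are invented for illustration.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "age": rng.normal(65, 10, n),
    "female": rng.integers(0, 2, n),
    "egfr": rng.normal(70, 15, n),
})
# Treatment assignment (1 = ARB, 0 = ACEi) loosely depends on covariates.
logit = -0.5 + 0.01 * (df["age"] - 65) + 0.3 * df["female"]
df["arb"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Propensity score: probability of receiving ARB given baseline covariates.
covariates = df[["age", "female", "egfr"]]
ps = LogisticRegression(max_iter=1000).fit(covariates, df["arb"]).predict_proba(covariates)[:, 1]

# Inverse-probability-of-treatment weights balance the two groups before outcome modelling.
df["iptw"] = np.where(df["arb"] == 1, 1 / ps, 1 / (1 - ps))
print(df.groupby("arb")["iptw"].describe()[["mean", "max"]])
```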

8.
Genome Biol ; 25(1): 159, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38886757

ABSTRACT

BACKGROUND: The advent of single-cell RNA-sequencing (scRNA-seq) has driven significant computational methods development for all steps in the scRNA-seq data analysis pipeline, including filtering, normalization, and clustering. The large number of methods and their resulting parameter combinations has created a combinatorial set of possible pipelines to analyze scRNA-seq data, which leads to the obvious question: which is best? Several benchmarking studies compare methods but frequently find variable performance depending on dataset and pipeline characteristics. Alternatively, the large number of scRNA-seq datasets along with advances in supervised machine learning raise a tantalizing possibility: could the optimal pipeline be predicted for a given dataset? RESULTS: Here, we begin to answer this question by applying 288 scRNA-seq analysis pipelines to 86 datasets and quantifying pipeline success via a range of measures evaluating cluster purity and biological plausibility. We build supervised machine learning models to predict pipeline success given a range of dataset and pipeline characteristics. We find that prediction performance is significantly better than random and that in many cases pipelines predicted to perform well provide clustering outputs similar to expert-annotated cell type labels. We identify characteristics of datasets that correlate with strong prediction performance that could guide when such prediction models may be useful. CONCLUSIONS: Supervised machine learning models have utility for recommending analysis pipelines and therefore the potential to alleviate the burden of choosing from the near-infinite number of possibilities. Different aspects of datasets influence the predictive performance of such models which will further guide users.


Subject(s)
Benchmarking , RNA-Seq , Single-Cell Analysis , Single-Cell Analysis/methods , RNA-Seq/methods , Humans , Supervised Machine Learning , Sequence Analysis, RNA/methods , Cluster Analysis , Computational Biology/methods , Machine Learning , Animals , Single-Cell Gene Expression Analysis
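The core idea above, predicting how well an analysis pipeline will perform on a given dataset from dataset and pipeline characteristics, can be shown with a small supervised-learning sketch. The feature names, the synthetic "success" score standing in for a cluster-purity measure, and the random-forest choice below are assumptions for illustration, not the authors' actual models or data.

```python
# Minimal sketch of predicting pipeline success from dataset/pipeline features;
# all features, labels and relationships below are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
X = pd.DataFrame({
    "n_cells": rng.integers(500, 20000, n),
    "median_genes_per_cell": rng.integers(800, 4000, n),
    "pct_mito": rng.uniform(1, 20, n),
    "norm_method": rng.integers(0, 3, n),        # encoded normalization choice
    "n_clusters_requested": rng.integers(5, 30, n),
})
# "Success" stands in for a cluster-purity score such as ARI against expert labels.
y = (0.4 + 0.00001 * X["n_cells"] - 0.01 * X["pct_mito"]
     + 0.05 * (X["norm_method"] == 1) + rng.normal(0, 0.05, n)).clip(0, 1)

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print("cross-validated R^2:", scores.round(2))
```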
9.
J Health Organ Manag ; ahead-of-print(ahead-of-print)2024 Jun 18.
Article in English | MEDLINE | ID: mdl-38880981

ABSTRACT

PURPOSE: This study investigates how a hospital can increase the flow of patients through its emergency department by using benchmarking and process improvement techniques borrowed from the manufacturing sector. DESIGN/METHODOLOGY/APPROACH: An in-depth case study of an Australasian public hospital utilises rigorous, multi-method data collection procedures with systems thinking to benchmark an emergency department (ED) value stream and identify the performance inhibitors. FINDINGS: High levels of value stream uncertainty result from inefficient processes and weak controls. Reduced patient flow arises from senior management's commitment to simplistic government targets, clinical staff that lack basic operations management skills, and fragmented information systems. High junior/senior staff ratios aggravate the lack of inter-functional integration and poor use of time and material resources, increasing the risk of a critical patient incident. RESEARCH LIMITATIONS/IMPLICATIONS: This research is limited to a single case; hence, further research should assess value stream maturity and associated performance enablers and inhibitors in other emergency departments experiencing patient flow delays. PRACTICAL IMPLICATIONS: This study illustrates how hospital managers can use systems thinking and a context-free performance benchmarking measure to identify needed interventions and transferable best practices for achieving seamless patient flow. ORIGINALITY/VALUE: This study is the first to operationalise the theoretical concept of the seamless healthcare system to acute care as defined by Parnaby and Towill (2008). It is also the first to use the uncertainty circle model in an Australasian public healthcare setting to objectively benchmark an emergency department's value stream maturity.


Subject(s)
Benchmarking , Efficiency, Organizational , Emergency Service, Hospital , Organizational Case Studies , Humans , Hospitals, Public , Australasia
10.
JMIR AI ; 3: e55957, 2024 Jun 07.
Article in English | MEDLINE | ID: mdl-38875592

ABSTRACT

Clinical decision-making is a crucial aspect of health care, involving the balanced integration of scientific evidence, clinical judgment, ethical considerations, and patient involvement. This process is dynamic and multifaceted, relying on clinicians' knowledge, experience, and intuitive understanding to achieve optimal patient outcomes through informed, evidence-based choices. The advent of generative artificial intelligence (AI) presents a revolutionary opportunity in clinical decision-making. AI's advanced data analysis and pattern recognition capabilities can significantly enhance the diagnosis and treatment of diseases, processing vast medical data to identify patterns, tailor treatments, predict disease progression, and aid in proactive patient management. However, the incorporation of AI into clinical decision-making raises concerns regarding the reliability and accuracy of AI-generated insights. To address these concerns, 11 "verification paradigms" are proposed in this paper, with each paradigm being a unique method to verify the evidence-based nature of AI in clinical decision-making. This paper also frames the concept of "clinically explainable, fair, and responsible, clinician-, expert-, and patient-in-the-loop AI." This model focuses on ensuring AI's comprehensibility, collaborative nature, and ethical grounding, advocating for AI to serve as an augmentative tool, with its decision-making processes being transparent and understandable to clinicians and patients. The integration of AI should enhance, not replace, the clinician's judgment and should involve continuous learning and adaptation based on real-world outcomes and ethical and legal compliance. In conclusion, while generative AI holds immense promise in enhancing clinical decision-making, it is essential to ensure that it produces evidence-based, reliable, and impactful knowledge. Using the outlined paradigms and approaches can help the medical and patient communities harness AI's potential while maintaining high patient care standards.

12.
EJIFCC ; 35(1): 34-43, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38706734

ABSTRACT

A business intelligence (BI) tool in a laboratory workflow offers various benefits, including data consolidation, real-time monitoring, process optimization, cost analysis, performance benchmarking (quality indicators), predictive analytics, compliance reporting, and decision support. These tools improve operational efficiency, quality control, inventory management, cost analysis, and clinical decision-making. This write-up describes the workflow and implementation of BI in a private hospital laboratory. By identifying challenges and overcoming them, laboratories can utilize the power of BI and analytics solutions to accelerate healthcare performance, lower costs, and improve care quality. We used navify (Viewics) as the BI platform, which relies on an infinity data warehouse for analytics and dashboards. We applied it to the pre-analytical, analytical and post-analytical phases in the laboratory. We conclude that digitalization is crucial for innovation and competitiveness, enhancing productivity, efficiency, and flexibility in future laboratories.

13.
Front Psychiatry ; 15: 1080235, 2024.
Article in English | MEDLINE | ID: mdl-38707617

ABSTRACT

Objective: In 2016, the SUicide PRevention Action NETwork (SUPRANET) was launched. The SUPRANET intervention aims at better implementing the suicide prevention guideline. An implementation study was developed to evaluate the impact of SUPRANET over time on three outcomes: 1) suicides, 2) registration of suicide attempts, and 3) professionals' knowledge and adherence to the guideline. Methods: This study included 13 institutions, and used an uncontrolled longitudinal prospective design, collecting biannual data on a 2-level structure (institutional and team level). Suicides and suicide attempts were extracted from data systems. Professionals' knowledge and adherence were measured using a self-report questionnaire. A three-step interrupted time series analysis (ITSA) was performed for the first two outcomes. Step 1 assessed whether institutions executed the SUPRANET intervention as intended. Step 2 examined if institutions complied with the four guideline recommendations. Based on steps 1 and 2, institutions were classified as below or above average and after that, included as moderators in step 3 to examine the effect of SUPRANET over time compared to the baseline. The third outcome was analyzed with a longitudinal multilevel regression analysis, and tested for moderation. Results: After institutions were labeled based on their efforts and investments made (below average vs above average), we found no statistically significant difference in suicides (standardized mortality ratio) between the two groups relative to the baseline. Institutions labeled as above average did register significantly more suicide attempts directly after the start of the intervention (78.8 per 100,000 patients, p<0.001, 95%CI=(51.3 per 100,000, 106.4 per 100,000)), and as the study progressed, they continued to report a significantly greater improvement in the number of registered attempts compared with institutions assigned as below average (8.7 per 100,000 patients per half year, p=0.004, 95%CI=(3.3 per 100,000, 14.1 per 100,000)). Professionals working at institutions that invested more in the SUPRANET activities adhered significantly better to the guideline over time (b=1.39, 95%CI=(0.12,2.65), p=0.032). Conclusion: Institutions labeled as above average registered significantly more suicide attempts and also better adhered to the guideline compared with institutions that had performed less well. Although no convincing intervention effect on suicides was found within the study period, we do think that this network is potentially able to reduce suicides. Continuous investments and fully implementing as many guideline recommendations as possible are essential to achieve the biggest drop in suicides.
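The interrupted time series analysis described above can be illustrated with a minimal segmented-regression sketch on simulated data. The single-level model and the simulated series below are far simpler than the three-step, two-level, moderated ITSA used in the study; they are shown only to make the level-change/slope-change structure of such an analysis concrete.

```python
# Minimal segmented-regression sketch of an interrupted time series analysis;
# the series is simulated and does not represent the SUPRANET data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
t = np.arange(20)                      # biannual measurement points
intervention = (t >= 10).astype(int)   # intervention starts at point 10
time_since = np.where(t >= 10, t - 10, 0)

# Registered attempts per 100,000: baseline trend, level jump, post-intervention slope change.
y = 50 + 1.0 * t + 15 * intervention + 2.0 * time_since + rng.normal(0, 3, 20)
df = pd.DataFrame({"y": y, "t": t, "intervention": intervention,
                   "time_since": time_since})

model = smf.ols("y ~ t + intervention + time_since", data=df).fit()
print(model.params.round(2))           # baseline slope, level change, slope change
```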

16.
BMC Genom Data ; 25(1): 45, 2024 May 07.
Article in English | MEDLINE | ID: mdl-38714942

ABSTRACT

OBJECTIVES: Cellular deconvolution is a valuable computational process that can infer the cellular composition of heterogeneous tissue samples from bulk RNA-sequencing data. Benchmark testing is a crucial step in the development and evaluation of new cellular deconvolution algorithms, and also plays a key role in the process of building and optimizing deconvolution pipelines for specific experimental applications. However, few in vivo benchmarking datasets exist, particularly for whole blood, which is the single most profiled human tissue. Here, we describe a unique dataset containing whole blood gene expression profiles and matched circulating leukocyte counts from a large cohort of human donors with utility for benchmarking cellular deconvolution pipelines. DATA DESCRIPTION: To produce this dataset, venous whole blood was sampled from 138 total donors recruited at an academic medical center. Genome-wide expression profiling was subsequently performed via next-generation RNA sequencing, and white blood cell differentials were collected in parallel using flow cytometry. The resultant final dataset contains donor-level expression data for over 45,000 protein coding and non-protein coding genes, as well as matched neutrophil, lymphocyte, monocyte, and eosinophil counts.


Subject(s)
Benchmarking , Humans , Leukocyte Count , Gene Expression Profiling/methods , Transcriptome , Sequence Analysis, RNA/methods , Leukocytes/metabolism , High-Throughput Nucleotide Sequencing , Algorithms
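As a concrete illustration of what such a benchmarking dataset enables, the sketch below deconvolves a synthetic bulk expression profile with non-negative least squares and compares the estimated cell-type fractions against "measured" leukocyte fractions. The signature matrix, bulk profile, and ground-truth fractions are all invented, and NNLS merely stands in for whichever deconvolution algorithm is being benchmarked.

```python
# Minimal sketch of benchmarking a deconvolution step against flow cytometry counts;
# all numbers below are made up for illustration.
import numpy as np
from scipy.optimize import nnls

# Signature matrix: genes x cell types (neutrophil, lymphocyte, monocyte, eosinophil).
S = np.array([[9.0, 0.5, 1.0, 0.2],
              [0.3, 8.0, 0.6, 0.1],
              [0.5, 0.4, 7.0, 0.3],
              [0.2, 0.1, 0.4, 6.0],
              [1.0, 1.2, 0.9, 0.8]])
true_frac = np.array([0.60, 0.25, 0.10, 0.05])   # "flow cytometry" ground truth
bulk = S @ true_frac                              # synthetic bulk expression profile

# Non-negative least squares deconvolution, then renormalise to proportions.
coef, _ = nnls(S, bulk)
est_frac = coef / coef.sum()

# Benchmark metric: correlation between estimated and measured fractions.
r = np.corrcoef(est_frac, true_frac)[0, 1]
print("estimated fractions:", est_frac.round(3), " Pearson r =", round(r, 3))
```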
17.
Brief Bioinform ; 25(3)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38706320

ABSTRACT

The advent of rapid whole-genome sequencing has created new opportunities for computational prediction of antimicrobial resistance (AMR) phenotypes from genomic data. Both rule-based and machine learning (ML) approaches have been explored for this task, but systematic benchmarking is still needed. Here, we evaluated four state-of-the-art ML methods (Kover, PhenotypeSeeker, Seq2Geno2Pheno and Aytan-Aktug), an ML baseline and the rule-based ResFinder by training and testing each of them across 78 species-antibiotic datasets, using a rigorous benchmarking workflow that integrates three evaluation approaches, each paired with three distinct sample splitting methods. Our analysis revealed considerable variation in performance across techniques and datasets. Whereas ML methods generally excelled for closely related strains, ResFinder excelled at handling divergent genomes. Overall, Kover most frequently ranked top among the ML approaches, followed by PhenotypeSeeker and Seq2Geno2Pheno. AMR phenotypes for antibiotic classes such as macrolides and sulfonamides were predicted with the highest accuracies. The quality of predictions varied substantially across species-antibiotic combinations, particularly for beta-lactams; across species, resistance phenotyping of the beta-lactam compounds aztreonam, amoxicillin/clavulanic acid, cefoxitin, ceftazidime and piperacillin/tazobactam, alongside the tetracyclines, demonstrated more variable performance than the other benchmarked antibiotics. By organism, Campylobacter jejuni and Enterococcus faecium phenotypes were more robustly predicted than those of Escherichia coli, Staphylococcus aureus, Salmonella enterica, Neisseria gonorrhoeae, Klebsiella pneumoniae, Pseudomonas aeruginosa, Acinetobacter baumannii, Streptococcus pneumoniae and Mycobacterium tuberculosis. In addition, our study provides software recommendations for each species-antibiotic combination. It furthermore highlights the need for optimization for robust clinical applications, particularly for strains that diverge substantially from those used for training.


Subject(s)
Anti-Bacterial Agents , Phenotype , Anti-Bacterial Agents/pharmacology , Machine Learning , Drug Resistance, Bacterial/genetics , Computational Biology/methods , Genome, Bacterial , Genome, Microbial , Humans , Bacteria/genetics , Bacteria/drug effects
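One element of the benchmarking workflow above, pairing an evaluation with different sample-splitting methods, can be sketched as follows. The presence/absence features, resistance labels, and lineage groupings are synthetic placeholders, and the random forest is only a stand-in for the benchmarked tools; none of this reproduces the actual workflow.

```python
# Minimal sketch of comparing a random split with a group-aware (lineage-based) split;
# features, labels and groups below are synthetic, not genomic k-mer data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, GroupKFold, cross_val_score

rng = np.random.default_rng(3)
n_isolates, n_features = 400, 50
X = rng.integers(0, 2, size=(n_isolates, n_features))   # gene presence/absence features
y = (X[:, 0] | X[:, 1]).astype(int)                     # resistant if either marker is present
lineage = rng.integers(0, 20, n_isolates)               # clonal group per isolate

clf = RandomForestClassifier(n_estimators=100, random_state=0)

random_cv = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
lineage_cv = cross_val_score(clf, X, y, groups=lineage, cv=GroupKFold(5))

print("random split accuracy: ", random_cv.mean().round(3))
print("lineage-aware accuracy:", lineage_cv.mean().round(3))
```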
18.
J Adv Nurs ; 2024 May 27.
Article in English | MEDLINE | ID: mdl-38803125

ABSTRACT

AIM: To examine if and how selected German hospitals use nurse-sensitive clinical indicators and perspectives on national/international benchmarking. DESIGN: Qualitative study. METHODS: In 2020, 18 expert interviews were conducted with key informants from five purposively selected hospitals, being the first in Germany implementing Magnet® or Pathway®. Interviews were analyzed using content analysis with deductive-inductive coding. The study followed the COREQ guideline. RESULTS: Three major themes emerged: first, limited pre-existence of and necessity for nurse-sensitive data. Although most interviewees reported data collection for hospital-acquired pressure ulcers and falls with injuries, implementation varied and interviewees highlighted the necessity to develop additional nurse-sensitive indicators for the German context. Second, the theme creating an enabling data environment comprised building clinicians' acceptance, establishing a data culture, and reducing workload by using electronic health records. Third, challenges and opportunities in establishing benchmarking were identified but most interviewees called for a national or European benchmarking system. CONCLUSION: The need for further development of nurse-sensitive clinical indicators and its implementation in practice was highlighted. Several actions were suggested at hospital level to establish an enabling data environment in clinical care, including a nationwide or European benchmarking system. IMPLICATIONS FOR THE PROFESSION AND PATIENT CARE: Involving nurses in data collection, comparison and benchmarking of nurse-sensitive indicators and their use in practice can improve quality of patient care. IMPACT: Nurse-sensitive indicators were rarely collected, and a need for action was identified. The study results show research needs on nurse-sensitive indicators for Germany and Europe. Measures were identified to create an enabling data environment in hospitals. An initiative was started in Germany to establish a nurse-sensitive benchmarking capacity. PATIENT OR PUBLIC CONTRIBUTION: Clinical practitioners and nurse/clinical managers were interviewed.

19.
Article in English | MEDLINE | ID: mdl-38696030

ABSTRACT

We present a freely available data set of surgical case mixes and surgery process duration distributions based on processed data from the German Operating Room Benchmarking initiative. This initiative collects surgical process data from over 320 German, Austrian, and Swiss hospitals. The data exhibits high levels of quantity, quality, standardization, and multi-dimensionality, making it especially valuable for operating room planning in Operations Research. We consider detailed steps of the perioperative process and group the data with respect to the hospital's level of care, the surgery specialty, and the type of surgery patient. We compare case mixes for different subgroups and conclude that they differ significantly, demonstrating that it is necessary to test operating room planning methods in different settings, e.g., using data sets like ours. Further, we discuss limitations and future research directions. Finally, we encourage the extension and foundation of new operating room benchmarking initiatives and their usage for operating room planning.

20.
Int J Mol Sci ; 25(10)2024 May 09.
Article in English | MEDLINE | ID: mdl-38791181

ABSTRACT

The aim of this study was to compare filter-aided sample preparation (FASP) and protein aggregation capture (PAC) starting from a three-species protein mix (Human, Soybean and Pisum sativum) and two different starting amounts (1 and 10 µg). Peptide mixtures were analyzed by data-independent acquisition (DIA) and raw files were processed with three commonly used software tools: Spectronaut, MaxDIA and DIA-NN. Overall, the highest number of proteins (mean value of 5491) was identified by PAC (10 µg), while the lowest number (4855) was identified by FASP (1 µg). The latter experiment displayed the worst performance in terms of both specificity (0.73) and precision (0.24). The other tested conditions showed better diagnostic accuracy, with specificity values of 0.95-0.99 and precision values between 0.61 and 0.86. In order to provide guidance on the data analysis pipeline, the diagnostic accuracy of the three software tools was investigated: (i) the highest sensitivity was obtained with Spectronaut (median of 0.67), highlighting the ability of Spectronaut to quantify low-abundance proteins; (ii) the best precision value was obtained by MaxDIA (median of 0.84), but with a reduced number of identifications compared to Spectronaut and DIA-NN data; and (iii) the specificity values were similar (between 0.93 and 0.99). The data are available on ProteomeXchange with the identifier PXD044349.


Subject(s)
Proteomics , Software , Proteomics/methods , Humans , Glycine max/metabolism , Glycine max/chemistry , Pisum sativum/chemistry , Pisum sativum/metabolism , Plant Proteins/analysis , Proteome/analysis
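The sensitivity/specificity/precision comparison above reduces to a confusion-matrix calculation over proteins expected to be present or absent in the three-species mix. The sketch below shows that calculation on invented protein sets; it is not the authors' scoring script, and the set sizes and identifiers are placeholders.

```python
# Minimal sketch of the sensitivity/specificity/precision calculation used to
# compare identification results; the protein sets below are invented examples.
expected_present = {"P1", "P2", "P3", "P4", "P5"}   # proteins that should be detected
expected_absent = {"Q1", "Q2", "Q3", "Q4", "Q5"}    # proteins that should not be detected
detected = {"P1", "P2", "P3", "Q1"}                 # one software tool's identifications

tp = len(detected & expected_present)
fp = len(detected & expected_absent)
fn = len(expected_present - detected)
tn = len(expected_absent - detected)

sensitivity = tp / (tp + fn)          # recall of truly present proteins
specificity = tn / (tn + fp)          # correct rejection of absent proteins
precision = tp / (tp + fp)            # fraction of identifications that are correct

print(f"sensitivity={sensitivity:.2f} specificity={specificity:.2f} precision={precision:.2f}")
```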