Results 1 - 20 of 1,268
1.
Genome Biol ; 25(1): 169, 2024 07 01.
Article in English | MEDLINE | ID: mdl-38956606

ABSTRACT

BACKGROUND: Computational cell type deconvolution enables the estimation of cell type abundance from bulk tissues and is important for understanding the tissue microenvironment, especially in tumor tissues. With the rapid development of deconvolution methods, many benchmarking studies have been published aiming to comprehensively evaluate these methods. Benchmarking studies rely on cell-type-resolved single-cell RNA-seq data to create simulated pseudobulk datasets by adding individual cell types in controlled proportions. RESULTS: In our work, we show that the standard application of this approach, which uses randomly selected single cells regardless of the intrinsic differences between them, generates synthetic bulk expression values that lack appropriate biological variance. We demonstrate why and how the current bulk simulation pipeline with random cells is unrealistic and propose a heterogeneous simulation strategy as a solution. The heterogeneously simulated bulk samples match the variance observed in real bulk datasets and therefore provide concrete benefits for benchmarking in several ways. We demonstrate that conceptual classes of deconvolution methods differ dramatically in their robustness to heterogeneity, with reference-free methods performing particularly poorly. For regression-based methods, the heterogeneous simulation provides an explicit framework to disentangle the contributions of reference construction and regression methods to performance. Finally, we perform an extensive benchmark of diverse methods across eight different datasets and find BayesPrism and a hybrid MuSiC/CIBERSORTx approach to be the top performers. CONCLUSIONS: Our heterogeneous bulk simulation method and the entire benchmarking framework are implemented in a user-friendly package (https://github.com/humengying0907/deconvBenchmarking and https://doi.org/10.5281/zenodo.8206516), enabling further developments in deconvolution methods.


Subject(s)
Benchmarking , Single-Cell Analysis , Single-Cell Analysis/methods , Humans , Computer Simulation , RNA-Seq/methods , Computational Biology/methods
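The regression-based deconvolution idea in the abstract above (estimating cell-type proportions from bulk expression against single-cell reference profiles, with pseudobulks built from sampled cells) can be sketched on toy data. Everything below is hypothetical and uses only numpy and scipy's NNLS solver, not any of the cited packages (MuSiC, CIBERSORTx, BayesPrism):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Toy single-cell data: 3 cell types x 50 genes, with per-cell Poisson noise.
n_genes, cells_per_type = 50, 30
type_means = rng.gamma(2.0, 2.0, size=(3, n_genes))   # reference expression profiles
cells = {t: rng.poisson(type_means[t], size=(cells_per_type, n_genes))
         for t in range(3)}

# Standard pseudobulk: sum randomly chosen cells in known proportions.
true_props = np.array([0.5, 0.3, 0.2])
counts = (true_props * 60).astype(int)                # cells drawn per type
bulk = sum(cells[t][rng.choice(cells_per_type, c)].sum(axis=0)
           for t, c in enumerate(counts))

# Regression-based deconvolution: non-negative least squares against
# the reference profiles, then renormalize to proportions.
ref = type_means.T                                    # genes x cell types
coef, _ = nnls(ref, bulk.astype(float))
est_props = coef / coef.sum()
print(np.round(est_props, 2))
```

On this noiseless-reference toy example, the estimated proportions land close to the true (0.5, 0.3, 0.2); the abstract's point is that drawing random cells like this understates the biological variance of real bulk samples.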
2.
Methods Mol Biol ; 2780: 45-68, 2024.
Article in English | MEDLINE | ID: mdl-38987463

ABSTRACT

Proteins are the fundamental organic macromolecules in living systems and play a key role in a variety of biological functions, including immunological detection, intracellular trafficking, and signal transduction. Protein docking has advanced greatly in recent decades and has become a crucial complement to experimental methods. Protein-protein docking is a helpful method for simulating protein complexes whose structures have not yet been solved experimentally. This chapter focuses on the major search tactics and docking programs used in protein-protein docking algorithms, including direct search, exhaustive global search, local shape feature matching, randomized search, and a broad category of post-docking approaches. As backbone flexibility prediction and interactions in high-resolution protein-protein docking remain important issues in the overall optimization context, we present several methods and solutions for handling backbone flexibility. In addition, various methods used for flexible backbone docking, including ATTRACT, FlexDock, FLIPDock, HADDOCK, RosettaDock, and FiberDock, are discussed along with their scoring functions, algorithms, advantages, and limitations. Moreover, we discuss what progress can be expected in search technology, covering not only the creation of new search algorithms but also the enhancement of existing ones. As conformational flexibility is one of the most crucial factors affecting docking success, more work should be put into evaluating conformational flexibility upon binding for a particular case, in addition to developing new algorithms to replace the rigid-body docking and scoring approach.


Subject(s)
Algorithms , Molecular Docking Simulation , Protein Binding , Proteins , Molecular Docking Simulation/methods , Proteins/chemistry , Proteins/metabolism , Software , Protein Conformation , Computational Biology/methods , Databases, Protein , Protein Interaction Mapping/methods
3.
J Proteomics ; : 105246, 2024 07 02.
Article in English | MEDLINE | ID: mdl-38964537

ABSTRACT

The 2023 European Bioinformatics Community for Mass Spectrometry (EuBIC-MS) Developers Meeting was held from January 15th to January 20th, 2023, at Congressi Stefano Franscini at Monte Verità in Ticino, Switzerland. The participants were scientists and developers working in computational mass spectrometry (MS), metabolomics, and proteomics. The 5-day program was split between introductory keynote lectures and parallel hackathon sessions focusing on "Artificial Intelligence in proteomics" to stimulate future directions in the MS-driven omics areas. During the latter, the participants developed bioinformatics tools and resources addressing outstanding needs in the community. The hackathons allowed less experienced participants to learn from more advanced computational MS experts and actively contribute to highly relevant research projects. We successfully produced several new tools applicable to the proteomics community by improving data analysis and facilitating future research.

4.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38985929

ABSTRACT

Recent advances in sequencing, mass spectrometry, and cytometry technologies have enabled researchers to collect multiple 'omics data types from a single sample. These large datasets have led to a growing consensus that a holistic approach is needed to identify new candidate biomarkers and unveil mechanisms underlying disease etiology, a key to precision medicine. While many reviews and benchmarks have been conducted on unsupervised approaches, their supervised counterparts have received less attention in the literature and no gold standard has emerged yet. In this work, we present a thorough comparison of a selection of six methods, representative of the main families of intermediate integrative approaches (matrix factorization, multiple kernel methods, ensemble learning, and graph-based methods). As a non-integrative control, random forest was applied to concatenated and separate data types. Methods were evaluated for classification performance on both simulated and real-world datasets, the latter carefully selected to cover different medical applications (infectious diseases, oncology, and vaccines) and data modalities. A total of 15 simulation scenarios were designed from the real-world datasets to explore a large and realistic parameter space (e.g. sample size, dimensionality, class imbalance, effect size). On real data, the method comparison showed that integrative approaches performed as well as or better than their non-integrative counterparts. On simulated data, by contrast, DIABLO and the four random forest alternatives outperformed the others across the majority of scenarios. The strengths and limitations of these methods are discussed in detail, along with guidelines for future applications.


Subject(s)
Computational Biology , Humans , Computational Biology/methods , Algorithms , Genomics/methods , Genomics/statistics & numerical data , Multiomics
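Of the integration families compared above, multiple kernel methods are the easiest to illustrate: each omics block is turned into a sample-by-sample kernel and the kernels are combined before any classifier sees the data. A minimal unweighted sketch on hypothetical toy data (not any of the benchmarked methods themselves), with early integration by concatenation shown as the non-integrative contrast:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-omics data for 20 samples: two "views" of different dimensionality.
n = 20
expr = rng.normal(size=(n, 100))    # e.g. transcriptomics
prot = rng.normal(size=(n, 30))     # e.g. proteomics

def linear_kernel(x):
    """Centered, trace-normalized linear kernel so views are comparable."""
    x = x - x.mean(axis=0)
    k = x @ x.T
    return k / np.trace(k)

# Intermediate integration: combine per-view kernels before any classifier.
k_combined = (linear_kernel(expr) + linear_kernel(prot)) / 2

# Early integration (a non-integrative control): concatenate features.
concat = np.hstack([expr, prot])

print(k_combined.shape, concat.shape)
```

Trace normalization keeps the high-dimensional view from dominating the combined kernel, which is one reason intermediate integration can beat naive concatenation when dimensionalities differ sharply.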
5.
Toxicol Lett ; 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38969027

ABSTRACT

2-Methyl-4-nitroaniline (MNA), an intermediate in the synthesis of azo dyes, is widely distributed in various environmental media and organisms. Although there is speculation regarding MNA's potential hepatotoxicity, the underlying mechanisms and a definitive diagnostic process remain largely unexplored. In the present study, we first predicted the toxicity and possible toxic effect pathways of MNA using ProTox-II, and found that MNA binds to the PPARγ receptor (binding energy -6.118 kcal/mol) with a potential PPARγ agonist effect. Subsequently, an in vivo evaluation was conducted in Wistar rats to assess the impact of MNA after a 90-day exposure period by measuring serum biochemical indexes, hematological indexes, urinary indexes, inflammatory factors, liver histopathology, and liver tissue PPARγ mRNA expression. The results showed that MNA causes liver function abnormalities, liver histopathological changes, and an inflammatory response, along with a pronounced increase in PPARγ mRNA levels. This study suggests that the hepatotoxic mechanism of MNA may involve upregulation of PPARγ expression, increased liver dysfunction, and inflammatory responses. Based on these results, a benchmark dose lower limit (BMDL) of 1.503 mg/kg for male Wistar rats was also established, providing a vital reference for determining the safety threshold of MNA. Our data highlight the hepatotoxic mechanism of MNA and contribute to a better understanding of its potential etiological diagnosis.
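The BMDL reported above comes from benchmark dose modeling. As a toy illustration of the core step (fitting aside), a fitted exponential dose-response model can be inverted in closed form to obtain the benchmark dose; all parameter values below are hypothetical and unrelated to the study's data:

```python
import math

def bmd_exponential(a, b, bmr=0.10):
    """Benchmark dose for an exponential model f(d) = a * exp(b * d):
    the dose where the response exceeds control (f(0) = a) by the
    benchmark response fraction `bmr`.  Solving a*exp(b*d) = a*(1+bmr)
    gives d = ln(1 + bmr) / b; the baseline `a` cancels out."""
    return math.log(1.0 + bmr) / b

# Hypothetical fitted slope.  A BMDL would come from a confidence bound
# on the fit; here a steeper illustrative slope stands in for that bound.
bmd = bmd_exponential(a=1.0, b=0.05)
bmdl = bmd_exponential(a=1.0, b=0.08)
print(round(bmd, 3), round(bmdl, 3))
```

Real BMD software fits several candidate models and propagates fit uncertainty; the point here is only that the BMD is the model-inverted dose at a chosen benchmark response, and the BMDL is its lower confidence bound.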

6.
Sci Total Environ ; : 174515, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38971244

ABSTRACT

During the SARS-CoV-2 pandemic, genome-based wastewater surveillance sequencing has been a powerful tool for public health to monitor circulating and emerging viral variants. As a medium, wastewater is very complex because of its mixed matrix nature, which makes the deconvolution of wastewater samples more difficult. Here we introduce a gold standard dataset constructed from synthetic viral control mixtures of known composition, spiked into a wastewater RNA matrix and sequenced on the Oxford Nanopore Technologies platform. We compare the performance of eight of the most commonly used deconvolution tools in identifying SARS-CoV-2 variants present in these mixtures. The software evaluated was primarily chosen for its relevance to the CDC wastewater surveillance reporting protocol, which until recently employed a pipeline that incorporates results from four deconvolution methods: Freyja, kallisto, Kraken 2/Bracken, and LCS. We also tested Lollipop, a deconvolution method used by the Swiss SARS-CoV-2 Sequencing Consortium, and three additional methods not used in the C-WAP pipeline: lineagespot, Alcov, and VaQuERo. We found that the commonly used software Freyja outperformed the other CDC pipeline tools in correct identification of lineages present in the control mixtures, and that the VaQuERo method was similarly accurate, with minor differences in the ability of the two methods to avoid false negatives and suppress false positives. Our results also provide insight into the effect of the tiling primer scheme and wastewater RNA extract matrix on viral sequencing and data deconvolution outcomes.
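The deconvolution task these tools solve can be caricatured as a small linear system: observed mutation frequencies in the wastewater sample are modeled as a mixture of lineage signature columns. A crude least-squares sketch with hypothetical signatures (real tools such as Freyja use constrained solvers and curated lineage barcode sets):

```python
import numpy as np

# Rows: 6 marker sites; columns of S: hypothetical lineage signatures
# (1 = lineage carries the mutation at that site).
S = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [0, 0, 1],
              [1, 0, 1]], dtype=float)
true_props = np.array([0.6, 0.3, 0.1])

# Observed variant-allele frequencies: mixture of signatures plus noise.
observed = S @ true_props + np.random.default_rng(2).normal(0, 0.01, 6)

# Ordinary least squares, then clip negatives and renormalize --
# a crude stand-in for the constrained optimization real tools perform.
est, *_ = np.linalg.lstsq(S, observed, rcond=None)
est = np.clip(est, 0, None)
est /= est.sum()
print(np.round(est, 2))
```

With well-separated signatures the recovered proportions track the truth; the benchmark above probes exactly what happens when signatures overlap, abundances are low, and sequencing noise is realistic.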

7.
JMIR Med Inform ; 12: e57674, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38952020

ABSTRACT

Background: Large language models (LLMs) have achieved great progress in natural language processing tasks and demonstrated the potential for use in clinical applications. Despite their capabilities, LLMs in the medical domain are prone to generating hallucinations (not fully reliable responses). Hallucinations in LLMs' responses create substantial risks, potentially threatening patients' physical safety. Thus, to detect and prevent this safety risk, it is essential to evaluate LLMs in the medical domain and build a systematic evaluation framework. Objective: We developed a comprehensive evaluation system, MedGPTEval, composed of criteria, medical data sets in Chinese, and publicly available benchmarks. Methods: First, a set of evaluation criteria was designed based on a comprehensive literature review. Second, the candidate criteria were optimized using a Delphi method with 5 experts in medicine and engineering. Third, 3 clinical experts designed medical data sets to interact with LLMs. Finally, benchmarking experiments were conducted on the data sets. The responses generated by chatbots based on LLMs were recorded for blind evaluation by 5 licensed medical experts. The resulting evaluation criteria cover medical professional capabilities, social comprehensive capabilities, contextual capabilities, and computational robustness, with 16 detailed indicators. The medical data sets include 27 medical dialogues and 7 case reports in Chinese. Three chatbots were evaluated: ChatGPT by OpenAI; ERNIE Bot by Baidu, Inc; and Doctor PuJiang (Dr PJ) by Shanghai Artificial Intelligence Laboratory. Results: Dr PJ outperformed ChatGPT and ERNIE Bot in the multiple-turn medical dialogue and case report scenarios. Dr PJ also outperformed ChatGPT in the semantic consistency rate and complete error rate categories, indicating better robustness. However, Dr PJ had slightly lower scores in medical professional capabilities than ChatGPT in the multiple-turn dialogue scenario. Conclusions: MedGPTEval provides comprehensive criteria to evaluate medical-domain chatbots built on LLMs, open-source data sets, and benchmarks assessing 3 LLMs. Experimental results demonstrate that Dr PJ outperforms ChatGPT and ERNIE Bot in social and professional contexts. Such an assessment system can be easily adopted by researchers in this community to augment an open-source data set.

8.
Food Chem Toxicol ; 191: 114846, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38960084

ABSTRACT

2,4-Dinitroaniline, a widely used dye intermediate, is a typical pollutant whose potential health risks and toxicity are still largely unknown. To explore its subchronic oral toxicity, Wistar rats (equal numbers of males and females) were used as test animals in a 90-day oral dosing experiment, divided into a control group, a low-dose group (0.055 mg/kg), a medium-dose group (0.22 mg/kg), a medium-high-dose group (0.89 mg/kg), and a high-dose group (3.56 mg/kg). Body weight data, clinical appearance, and drug reactions of each rat were recorded over the 90 days of dosing; morning urine samples were collected four times to test eight urinary indicators; blood samples were collected to test nineteen hematological and sixteen biochemical indicators; and tissue samples were collected for pathological analysis. The no-observed-adverse-effect level (NOAEL) was determined, with the benchmark dose method used to support this determination and provide a statistical estimate of the corresponding dose. The results indicated that the subchronic toxicity of 2,4-dinitroaniline showed certain sex differences, with the eyes, liver, and kidneys being the main potential target organs. The subchronic oral NOAEL was determined to be 0.22 mg/kg body weight (0.22 mg/kg for males and 0.89 mg/kg for females), and a preliminary safe exposure limit for humans was calculated as 0.136 mg/kg. These results greatly enrich the safety evaluation data for 2,4-dinitroaniline, contributing to a robust scientific foundation for the development of informed safety regulations and public health precautions.

9.
Ecotoxicol Environ Saf ; 281: 116582, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38905934

ABSTRACT

Molecular docking, pivotal in predicting small-molecule ligand binding modes, struggles with accurately identifying binding conformations and affinities. This is particularly true for neonicotinoids, insecticides whose impacts on ecosystems require precise molecular interaction modeling. This study scrutinizes the effectiveness of prominent docking software (LeDock, ADFR, AutoDock Vina, CDOCKER) in simulating interactions of environmental chemicals, especially neonicotinoid-like molecules, with nicotinic acetylcholine receptors (nAChRs) and acetylcholine binding proteins (AChBPs). We aimed to assess the accuracy and reliability of these tools in reproducing crystallographic data, focusing on semi-flexible and flexible docking approaches. Our analysis identified LeDock as the most accurate in semi-flexible docking, while AutoDock Vina with the Vinardo scoring function proved most reliable. However, no software consistently excelled in both accuracy and reliability. Additionally, our evaluation revealed that none of the tools could establish a clear correlation between docking scores and experimental dissociation constants (Kd) for neonicotinoid-like compounds. In contrast, a strong correlation was found for drug-like compounds, bringing to light a bias in the considered software toward pharmaceuticals and thus limiting their applicability to environmental chemicals. The comparison between semi-flexible and flexible docking revealed that the increased computational complexity of the latter did not result in enhanced accuracy. In fact, the higher computational cost of flexible docking, combined with its lack of enhanced predictive accuracy, rendered this approach useless for this class of compounds. Conclusively, our findings emphasize the need for continued development of docking methodologies, particularly for environmental chemicals. This study not only illuminates current software capabilities but also underscores the urgency of advancements in computational molecular docking as a tool relevant to the environmental sciences.
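The score-versus-Kd correlation analysis described above is typically done with a rank correlation, since docking scores are at best monotonically related to affinity. A minimal Spearman sketch on hypothetical score/Kd pairs (no tie handling, pure stdlib):

```python
def spearman(xs, ys):
    """Spearman rank correlation via Pearson correlation of ranks
    (tie handling is omitted in this minimal sketch)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical docking scores (more negative = stronger predicted binding)
# versus measured log Kd values for five ligands.
scores = [-9.1, -8.4, -7.9, -7.2, -6.5]
log_kd = [-8.0, -7.5, -7.7, -6.9, -6.0]
print(round(spearman(scores, log_kd), 3))  # → 0.9
```

A rank correlation near 1 on drug-like sets but near 0 on neonicotinoid-like sets is exactly the kind of pharmaceutical bias the study reports.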

10.
Regul Toxicol Pharmacol ; 151: 105653, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38825064

ABSTRACT

Despite two decades of research on silver nanoparticle (AgNP) toxicity, a safe threshold for exposure has not yet been established, despite being critically needed for risk assessment and regulatory decision-making. Traditionally, a point-of-departure (PoD) value is derived from the dose responses of apical endpoints in animal studies using either the no-observed-adverse-effect level (NOAEL) approach or benchmark dose (BMD) modeling. To develop new approach methodologies (NAMs) to inform human risk assessment of AgNPs, we conducted concentration-response modeling of the transcriptomic changes in hepatocytes derived from human induced pluripotent stem cells (iPSCs) after exposure to a wide concentration range (0.01-25 µg/ml) of AgNPs for 24 h. A plausible transcriptomic PoD of 0.21 µg/ml was derived for a pathway related to the mode of action (MOA) of AgNPs, and a more conservative PoD of 0.10 µg/ml for a gene ontology (GO) term not apparently associated with the MOA of AgNPs. A reference dose (RfD) could be calculated from either of the PoDs as a safe threshold for AgNP exposure. The current study illustrates the usefulness of in vitro transcriptomic concentration-response studies using human cells as a NAM for toxicity testing of chemicals that lack adequate toxicity data to inform human risk assessment.

11.
Front Genet ; 15: 1389095, 2024.
Article in English | MEDLINE | ID: mdl-38846964

ABSTRACT

Toxicological risk assessment increasingly utilizes transcriptomics to derive points of departure (POD) and modes of action (MOA) for chemicals. One essential biological process that allows a single gene to generate several different RNA isoforms is alternative splicing. To comprehensively assess the role of splicing dysregulation in toxicological evaluation and elucidate its potential as a complementary endpoint, we performed RNA-seq on A549 cells treated with five oxidative stress modulators across a wide dose range. Differential gene expression (DGE) analysis showed limited pathway enrichment except at high concentrations. However, alternative splicing analysis revealed variable intron retention events affecting diverse pathways for all chemicals in the absence of significant expression changes. For instance, diazinon elicited negligible gene expression changes but a progressive increase in the number of intron retention events, suggesting that splicing alterations precede expression responses. Benchmark dose modeling of intron retention data highlighted relevant pathways overlooked by expression analysis. Systematic integration of splicing datasets should be a useful addition to the toxicogenomic toolkit, and combining both modalities paints a more complete picture of transcriptomic dose-responses. Overall, evaluating intron retention dynamics afforded by toxicogenomics may provide biomarkers that can enhance chemical risk assessment and regulatory decision-making. This work highlights splicing-aware toxicogenomics as a possible additional tool for examining cellular responses.

12.
Toxicol Sci ; 2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38876971

ABSTRACT

Perfluorononanoic acid (PFNA) is a commercially relevant, long-chain (8 fully fluorinated carbons) perfluorinated carboxylic acid (PFCA). PFNA has limited terrestrial ecotoxicity data and is detected in humans, animals, and the environment. This study is the fourth in a series with the objective of investigating the toxicity of a suite of per- and polyfluoroalkyl substances (PFAS) detected on military installations in a mammal indigenous to North America. Peromyscus leucopus (white-footed mice, ∼25/sex/dose) were exposed via oral gavage to 0, 0.03, 0.14, 1, or 3 mg PFNA/kg-day for 112 consecutive days (4 weeks of pre-mating exposure followed by an additional 12 weeks of exposure after the onset of mating). Parental generation animals were assessed for potential reproductive and developmental effects, organ weight changes, thyroid modulation, and immunotoxicity. Pup weight and survival were assessed at postnatal days 0, 1, 4, 7, and 10. Change in liver weight yielded the most sensitive dose response according to benchmark dose analysis and serves as the most protective point of departure (BMDL = 0.37 mg/kg-d PFNA). Other effects of PFNA exposure included reduced formation of plaque-forming cells, indicative of functional immune deficits (BMDL = 2.31 mg/kg-d); decreased serum thyroxine (BMDL = 0.93 mg/kg-d) without changes in some other hormones; and increased stillbirths (BMDL = 0.61 mg/kg-d PFNA). Pup weight and survival were not affected by PFNA exposure. Combined with data from previous studies, data from Peromyscus provide a One Health perspective on the health effects of PFAS.

13.
J Clin Virol ; 173: 105695, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38823290

ABSTRACT

Metagenomics is gradually being implemented for diagnosing infectious diseases. However, in-depth protocol comparisons for viral detection have been limited to individual sets of experimental workflows and laboratories. In this study, we present a benchmark of metagenomics protocols used in clinical diagnostic laboratories, initiated by the European Society for Clinical Virology (ESCV) Network on NGS (ENNGS). A mock viral reference panel was designed to mimic low-biomass clinical specimens. The panel was used to assess the performance of twelve metagenomic wet lab protocols currently in use in the diagnostic laboratories of participating ENNGS member institutions. Both Illumina and Nanopore, shotgun and targeted capture probe protocols were included. The performance metrics sensitivity, specificity, and quantitative potential were assessed using a central bioinformatics pipeline. Overall, viral pathogens with loads down to 10⁴ copies/ml (corresponding to CT values of 31 in our PCR assays) were detected by all the evaluated metagenomic wet lab protocols. In contrast, less abundant mixed viruses with CT values of 35 and higher were detected only by a minority of the protocols. Considering the reference panel as the gold standard, optimal thresholds to define a positive result were determined per protocol, based on the horizontal genome coverage. Implementing these thresholds, the sensitivity and specificity of the protocols ranged from 67 to 100 % and 87 to 100 %, respectively. A variety of metagenomic protocols are currently in use in clinical diagnostic laboratories. Detection of low-abundance viral pathogens and mixed infections remains a challenge, implying the need for standardization of metagenomic analysis for use in clinical settings.


Subject(s)
Benchmarking , Metagenomics , Sensitivity and Specificity , Viruses , Metagenomics/methods , Metagenomics/standards , Humans , Viruses/genetics , Viruses/classification , Viruses/isolation & purification , High-Throughput Nucleotide Sequencing/methods , High-Throughput Nucleotide Sequencing/standards , Virus Diseases/diagnosis , Virus Diseases/virology , Computational Biology/methods
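The per-protocol thresholding described above reduces to a small confusion-matrix computation: each panel virus is called positive when its horizontal genome coverage meets the protocol's threshold, and sensitivity/specificity follow against the known panel composition. A sketch with hypothetical panel and coverage values:

```python
def confusion(panel, calls, threshold):
    """Sensitivity/specificity of viral detection given per-virus
    horizontal genome coverage and a positivity threshold."""
    tp = fp = tn = fn = 0
    for virus, present in panel.items():
        positive = calls.get(virus, 0.0) >= threshold
        if present and positive:
            tp += 1
        elif present:
            fn += 1
        elif positive:
            fp += 1
        else:
            tn += 1
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

# Hypothetical mock panel (True = spiked in) and observed coverage fractions.
panel = {"HAdV": True, "HSV1": True, "EV": True, "MeV": False, "RSV": False}
calls = {"HAdV": 0.92, "HSV1": 0.05, "EV": 0.41, "MeV": 0.12, "RSV": 0.0}
print(confusion(panel, calls, threshold=0.10))
```

Sweeping the threshold per protocol and picking the operating point against the gold-standard panel is what yields the 67-100 % sensitivity and 87-100 % specificity ranges reported above.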
14.
Microbiology (Reading) ; 170(6)2024 Jun.
Article in English | MEDLINE | ID: mdl-38916949

ABSTRACT

Metagenome community analyses, driven by continued developments in sequencing technology, are rapidly providing insights into many aspects of microbiology and are becoming a cornerstone tool. Illumina, Oxford Nanopore Technologies (ONT), and Pacific Biosciences (PacBio) are the leading technologies, each with its own advantages and drawbacks. Illumina provides accurate reads at a low cost, but their length is too short to close bacterial genomes. Long reads overcome this limitation, but these technologies produce reads with lower accuracy (ONT) or lower throughput (PacBio high-fidelity reads). In a critical first analysis step, reads are assembled to reconstruct genomes or individual genes within the community. However, to date, the performance of existing assemblers has never been challenged with a complex mock metagenome. Here, we evaluate the performance of current assemblers that use short, long, or both read types on a complex mock metagenome consisting of 227 bacterial strains with varying degrees of relatedness. We show that many of the current assemblers are not suited to handle such a complex metagenome. In addition, hybrid assemblies do not fulfil their potential. We conclude that ONT reads assembled with CANU and Illumina reads assembled with SPAdes offer the best value for reconstructing genomes and individual genes of complex metagenomes, respectively.


Subject(s)
Bacteria , Benchmarking , High-Throughput Nucleotide Sequencing , Metagenome , Metagenomics , Sequence Analysis, DNA , High-Throughput Nucleotide Sequencing/methods , Metagenomics/methods , Bacteria/genetics , Bacteria/classification , Bacteria/isolation & purification , Sequence Analysis, DNA/methods , Genome, Bacterial/genetics , Microbiota/genetics
15.
Int J Mol Sci ; 25(12)2024 Jun 14.
Article in English | MEDLINE | ID: mdl-38928289

ABSTRACT

Graph Neural Networks have proven to be very valuable models for solving a wide variety of problems on molecular graphs, as well as in many other research fields involving graph-structured data. Molecules are heterogeneous graphs composed of atoms of different species. Composite graph neural networks process heterogeneous graphs with multiple state-updating networks, each dedicated to a particular node type. This approach allows information to be extracted from a graph more efficiently than with standard graph neural networks, which distinguish node types through a one-hot encoded type vector. We carried out extensive experimentation on eight molecular graph datasets and a large number of both classification and regression tasks. The results clearly show that composite graph neural networks are far more efficient in this setting than standard graph neural networks.


Subject(s)
Neural Networks, Computer , Algorithms
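The contrast drawn above, per-type update networks versus a shared network fed one-hot type vectors, can be shown in a single toy message-passing step. This is an illustrative numpy sketch with random weights and a hypothetical 4-atom graph, not the authors' architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy molecular graph: 4 atoms (types: 0=C, 1=O), adjacency, 2-dim features.
types = np.array([0, 0, 1, 1])
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]], dtype=float)
h = rng.normal(size=(4, 2))

# Composite update: a separate weight matrix per node type.
w_per_type = {0: rng.normal(size=(2, 2)), 1: rng.normal(size=(2, 2))}
msg = adj @ h                                   # sum of neighbor states
h_composite = np.stack([np.tanh(msg[i] @ w_per_type[t])
                        for i, t in enumerate(types)])

# Standard alternative: one shared weight, types appended as one-hot inputs.
onehot = np.eye(2)[types]
w_shared = rng.normal(size=(4, 2))              # (features + one-hot) -> 2
h_standard = np.tanh(np.hstack([msg, onehot]) @ w_shared)

print(h_composite.shape, h_standard.shape)
```

Both variants produce the same output shape; the difference is where type information enters the computation (dedicated parameters versus extra input features).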
16.
J Hazard Mater ; 474: 134721, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-38843629

ABSTRACT

The new challenges in toxicology demand novel and innovative in vitro approaches for deriving points of departure (PODs) and determining the mode of action (MOA) of chemicals. Therefore, the aim of this original study was to couple in vitro studies with untargeted metabolomics to model the concentration-response of extra- and intracellular metabolome data on human HepaRG cells treated for 48 h with three pyrrolizidine alkaloids (PAs): heliotrine, retrorsine and lasiocarpine. Modeling revealed that the three PAs induced various monotonic and, importantly, biphasic curves of metabolite content. Based on unannotated metabolites, the endometabolome was more sensitive than the exometabolome in terms of metabolomic effects, and benchmark concentrations (BMCs) confirmed that lasiocarpine was the most hepatotoxic PA. Regarding its MOA, impairment of lipid metabolism was highlighted at a very low BMC (first quartile, 0.003 µM). Moreover, results confirmed that lasiocarpine targets bile acids, as well as amino acid and steroid metabolisms. Analysis of the endometabolome, based on coupling concentration-response and PODs, gave encouraging results for ranking toxins according to their hepatotoxic effects. Therefore, this novel approach is a promising tool for next-generation risk assessment, readily applicable to a broad range of compounds and toxic endpoints.


Subject(s)
Metabolome , Pyrrolizidine Alkaloids , Pyrrolizidine Alkaloids/toxicity , Pyrrolizidine Alkaloids/metabolism , Humans , Metabolome/drug effects , Cell Line , Metabolomics , Lipid Metabolism/drug effects
17.
Environ Geochem Health ; 46(7): 253, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38884835

ABSTRACT

Urinary cadmium (U-Cd) values are indicators for determining chronic cadmium toxicity, and previous studies have calculated U-Cd thresholds using renal injury biomarkers. However, most of these studies have been conducted in adult populations, and there is a lack of research on U-Cd thresholds in preschool children. We aimed to apply benchmark dose (BMD) analysis to estimate the U-Cd threshold level associated with renal impairment in preschool children in a cadmium-polluted area. 518 preschool children aged 3-5 years were selected by systematic sampling (275 boys, 243 girls). Urinary cadmium and three biomarkers of early renal injury (urinary N-acetyl-β-D-glucosaminidase, UNAG; urinary β2-microglobulin, Uβ2-MG; urinary retinol-binding protein, URBP) were determined. Bayesian model averaging was used to estimate the BMD and its lower confidence limit (BMDL) for U-Cd. The median U-Cd levels in both boys and girls exceeded the recommended national standard threshold (5 µg/g cr), and U-Cd levels were higher in girls than in boys. UNAG was the most sensitive biomarker of renal effects in preschool children. The overall BMDL5 (BMDL at a benchmark response value of 5 %) was 2.76 µg/g cr. In the sex-stratified analysis, the BMDL5 values were 1.92 µg/g cr for boys and 4.12 µg/g cr for girls. This study shows that the U-Cd threshold (BMDL5) is lower than the national standard (5 µg/g cr), and the boys' BMDL5 was lower than the limit set by the European Parliament and Council in 2019 (2 µg/g cr), providing a reference point for setting U-Cd thresholds for preschool children.


Subject(s)
Bayes Theorem , Biomarkers , Cadmium , Humans , Child, Preschool , Male , Female , Cadmium/urine , Biomarkers/urine , Environmental Pollutants/urine , Acetylglucosaminidase/urine , Benchmarking , Environmental Exposure , beta 2-Microglobulin/urine , Retinol-Binding Proteins/urine , Environmental Monitoring/methods
18.
Mol Biol Evol ; 41(6)2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38860506

ABSTRACT

Phylogenetic inference based on protein sequence alignment is a widely used procedure. Numerous phylogenetic algorithms have been developed, most of which have many parameters and options. Choosing a program, options, and parameters can be a nontrivial task. No benchmark for comparing phylogenetic programs on real protein sequences was publicly available. We have developed PhyloBench, a benchmark for evaluating the quality of phylogenetic inference, and used it to test a number of popular phylogenetic programs. PhyloBench is based on natural, not simulated, protein sequences of orthologous evolutionary domains. The measure of accuracy of an inferred tree is its distance to the corresponding species tree. A number of tree-to-tree distance measures were tested. The most reliable results were obtained using the Robinson-Foulds distance. Our results confirmed recent findings that distance methods are more accurate than maximum likelihood (ML) and maximum parsimony. We tested the Bayesian program MrBayes on natural protein sequences and found that, on our datasets, it performs better than ML but worse than distance methods. Of the methods we tested, the Balanced Minimum Evolution method implemented in FastME yielded the best results on our material. Alignments and reference species trees are available at https://mouse.belozersky.msu.ru/tools/phylobench/ together with a web interface that allows for a semi-automatic comparison of a user's method with a number of popular programs.


Subject(s)
Algorithms , Phylogeny , Software , Benchmarking , Sequence Alignment/methods , Bayes Theorem , Evolution, Molecular , Computational Biology/methods
19.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38833322

ABSTRACT

Recent advances in tumor molecular subtyping have revolutionized precision oncology, offering novel avenues for patient-specific treatment strategies. However, a comprehensive and independent comparison of these subtyping methodologies has been lacking. This study introduces 'Themis' (Tumor HEterogeneity analysis on Molecular subtypIng System), an evaluation platform that encapsulates several representative tumor molecular subtyping methods, including Stemness, Anoikis, Metabolism, and pathway-based classifications, using 38 test datasets curated from The Cancer Genome Atlas (TCGA) and other major studies. Our quantitative analysis uncovers the relative strengths, limitations, and applicability of each method in different clinical contexts. Crucially, Themis serves as a vital tool for identifying the most appropriate subtyping method for a specific clinical scenario, and it guides the fine-tuning of existing subtyping methods to achieve more accurate phenotype-associated results. To demonstrate its practical utility, we apply Themis to a breast cancer dataset, showcasing its efficacy in selecting the most suitable subtyping methods for personalized medicine in various clinical scenarios. This study bridges a crucial gap in cancer research and lays a foundation for future advancements in individualized cancer therapy and patient management.
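One simple quantitative criterion an evaluation platform of this kind can apply is the agreement between a method's predicted subtypes and reference phenotype labels, for example via the adjusted Rand index. A self-contained sketch; this is a generic illustration of the metric with invented subtype calls, not Themis's actual scoring code:

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Chance-corrected agreement between two partitions (1.0 = identical)."""
    n = len(labels_a)
    cells = Counter(zip(labels_a, labels_b))                  # contingency table
    sum_ij = sum(comb(c, 2) for c in cells.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)

# Hypothetical subtype calls from two methods on six tumors
method_1 = ["basal", "basal", "luminal", "luminal", "her2", "her2"]
method_2 = ["s1", "s1", "s2", "s2", "s2", "s3"]
print(round(adjusted_rand_index(method_1, method_2), 3))
```

An index near 1 indicates the two subtyping schemes partition the samples almost identically; values near 0 indicate chance-level agreement.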


Subject(s)
Precision Medicine , Humans , Precision Medicine/methods , Neoplasms/genetics , Neoplasms/classification , Neoplasms/therapy , Biomarkers, Tumor/genetics , Computational Biology/methods , Medical Oncology/methods , Breast Neoplasms/genetics , Breast Neoplasms/classification , Breast Neoplasms/therapy , Female
20.
Sci Rep ; 14(1): 13053, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38844488

ABSTRACT

Impulse waves are generated by rapid subaerial mass movements, including landslides, avalanches, and glacier break-offs, which pose a potential risk to public facilities and residents along the shores of natural lakes or engineered reservoirs. The prediction and assessment of impulse waves are therefore of considerable importance in practical engineering. Tsunami Squares is a meshless numerical method, based on a hybrid Eulerian-Lagrangian algorithm, that focuses on the simulation of landslide-generated impulse waves. An updated numerical scheme, referred to as Tsunami Squares Leapfrog, was developed; it contains a new smoothing function that passes space and time convergence tests, as well as leapfrog time integration providing second-order accuracy. The updated scheme shows improved performance, with a lower wave decay rate per unit propagation distance than the original implementation of Tsunami Squares. A systematic benchmark test of the updated scheme was conducted by simulating the run-up, reflection, and overland flow of solitary waves along a slope for various initial wave amplitudes, water depths, and slope angles. For run-up, the updated scheme performs well when the initial relative wave amplitude is smaller than 0.4; otherwise, the model tends to underestimate the run-up height for mild slopes, while an overestimation is observed for steeper slopes. With respect to overland flow, the prediction error of the maximum flow height can be limited to ±50% within a 90% confidence interval, whereas the error of the front propagation velocity can only be controlled to within ±100% at the same confidence level. Furthermore, a sensitivity analysis of the dynamic friction coefficient of water was performed, and a range from 0.01 to 0.1 is suggested for reference.
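The second-order accuracy that a leapfrog update provides can be illustrated on a toy system. Below is a minimal kick-drift-kick leapfrog integrator in Python, applied to a harmonic oscillator; this is a generic sketch of the time-integration idea, not code from the actual Tsunami Squares implementation:

```python
import math

def leapfrog(x, v, acc, dt, steps):
    """Kick-drift-kick leapfrog: second-order accurate and symplectic,
    so energy errors stay bounded instead of drifting over time."""
    for _ in range(steps):
        v += 0.5 * dt * acc(x)   # half kick
        x += dt * v              # full drift
        v += 0.5 * dt * acc(x)   # half kick
    return x, v

# Harmonic oscillator a(x) = -x, exact solution x(t) = cos(t)
dt, steps = 0.01, 628            # integrate to t ~ 2*pi (one period)
x, v = leapfrog(1.0, 0.0, lambda x: -x, dt, steps)

energy = 0.5 * (x * x + v * v)   # exact value: 0.5
print(x, energy)
```

After one period the position matches cos(t) to within O(dt^2) and the energy remains very close to its exact value, the bounded-error behavior that distinguishes symplectic schemes from, e.g., explicit Euler.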
