Results 1 - 20 of 61,168
1.
Clin Chim Acta ; 564: 119928, 2025 Jan 01.
Article in English | MEDLINE | ID: mdl-39163897

ABSTRACT

BACKGROUND AND AIMS: Rheumatoid arthritis (RA) manifests through various symptoms and systemic manifestations. Diagnosis involves serological markers like rheumatoid factor (RF) and anti-citrullinated protein antibodies (ACPA). Past studies have shown the added value of likelihood ratios (LRs) in result interpretation. LRs can be combined with pretest probability to estimate posttest probability for RA. There is, however, a lack of information on pretest probability. This study aimed to estimate pretest probabilities for RA. MATERIALS AND METHODS: This retrospective study included 133 consecutive RA patients and 651 consecutive disease controls presenting at a rheumatology outpatient clinic. Disease characteristics, risk factors associated with RA, and laboratory parameters were documented for calculating pretest probabilities and LRs. RESULTS: Joint involvement, erosions, morning stiffness, and positive CRP and ESR tests significantly correlated with RA. Based on these factors, probabilities for RA were estimated. In addition, LRs for RA were established for RF and ACPA and combinations thereof. LRs increased with antibody levels and were highest for double high positivity. Posttest probabilities were estimated based on pretest probability and LR. CONCLUSION: By utilizing pretest probabilities for RA and LRs for RF and ACPA, posttest probabilities were estimated. Such an approach enhances diagnostic accuracy, offering laboratory professionals and clinicians insight into the value of serological testing during the diagnostic process.
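The pretest-to-posttest conversion described here follows Bayes' rule in odds form. A minimal sketch (the pretest probability and LR values below are illustrative, not taken from the study):

```python
def posttest_probability(pretest_prob: float, likelihood_ratio: float) -> float:
    """Combine a pretest probability with a likelihood ratio via Bayes' rule in odds form."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1.0 + posttest_odds)

# Illustrative: a 20% pretest probability combined with an LR of 10
# (e.g. for a strongly positive antibody result) gives roughly a 71% posttest probability.
print(round(posttest_probability(0.20, 10.0), 3))
```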


Subject(s)
Anti-Citrullinated Protein Antibodies , Arthritis, Rheumatoid , Rheumatoid Factor , Humans , Arthritis, Rheumatoid/diagnosis , Arthritis, Rheumatoid/blood , Arthritis, Rheumatoid/immunology , Rheumatoid Factor/blood , Female , Middle Aged , Retrospective Studies , Anti-Citrullinated Protein Antibodies/blood , Male , Likelihood Functions , Probability , Adult , Autoantibodies/blood , Aged
2.
Clin Chim Acta ; 564: 119941, 2025 Jan 01.
Article in English | MEDLINE | ID: mdl-39181294

ABSTRACT

BACKGROUND: In Alzheimer's disease (AD) diagnosis, a cerebrospinal fluid (CSF) biomarker panel is commonly interpreted with binary cutoff values. However, these values are not generic and do not reflect the disease continuum. We explored the use of interval-specific likelihood ratios (LRs) and probability-based models for AD using a CSF biomarker panel. METHODS: CSF biomarker (Aβ1-42, tTau and pTau181) data for both a clinical discovery cohort of 241 patients (measured with INNOTEST) and a clinical validation cohort of 129 patients (measured with EUROIMMUN), both including AD and non-AD dementia/cognitive complaints, were retrospectively retrieved in a single-center study. Interval-specific LRs for AD were calculated and validated for univariate and combined (Aβ1-42/tTau and pTau181) biomarkers, and a continuous bivariate probability-based model for AD, plotting Aβ1-42/tTau versus pTau181, was constructed and validated. RESULTS: LR for AD increased as individual CSF biomarker values deviated from normal. Interval-specific LRs of a combined biomarker model showed that once one biomarker became abnormal, LRs increased even further when another biomarker largely deviated from normal, as replicated in the validation cohort. A bivariate probability-based model predicted AD with a validated accuracy of 88% on a continuous scale. CONCLUSIONS: Interval-specific LRs in a combined biomarker model and prediction of AD using a continuous bivariate biomarker probability-based model offer a more meaningful interpretation of CSF AD biomarkers on a (semi-)continuous scale with respect to the post-test probability of AD across different assays and cohorts.


Subject(s)
Alzheimer Disease , Amyloid beta-Peptides , Biomarkers , Probability , Alzheimer Disease/cerebrospinal fluid , Alzheimer Disease/diagnosis , Humans , Biomarkers/cerebrospinal fluid , Female , Male , Aged , Amyloid beta-Peptides/cerebrospinal fluid , Likelihood Functions , Middle Aged , tau Proteins/cerebrospinal fluid , Retrospective Studies , Peptide Fragments/cerebrospinal fluid , Cohort Studies
3.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39248121

ABSTRACT

Recent years have witnessed a rise in the popularity of information integration without sharing of raw data. By leveraging and incorporating summary information from external sources, internal studies can achieve enhanced estimation efficiency and prediction accuracy. However, a noteworthy challenge in utilizing summary-level information is accommodating the inherent heterogeneity across diverse data sources. In this study, we delve into the issue of prior probability shift between two cohorts, wherein the difference between the two data distributions depends on the outcome. We introduce a novel semi-parametric constrained optimization-based approach to integrate information within this framework, which has not been extensively explored in existing literature. Our proposed method tackles the prior probability shift by introducing the outcome-dependent selection function and effectively addresses the estimation uncertainty associated with summary information from the external source. Our approach facilitates valid inference even in the absence of a known variance-covariance estimate from the external source. Through extensive simulation studies, we observe the superiority of our method over existing ones, showcasing minimal estimation bias and reduced variance for both binary and continuous outcomes. We further demonstrate the utility of our method through its application in investigating risk factors related to essential hypertension, where reduced estimation variability is observed after integrating summary information from an external data source.
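Under prior probability (label) shift, the outcome-conditional distributions agree across cohorts while the outcome prevalences differ, so external information can be reweighted by the prevalence ratio. A simplified sketch of that reweighting idea (not the authors' constrained-optimization estimator; prevalences below are invented):

```python
def label_shift_weights(p_outcome_external: dict, p_outcome_internal: dict) -> dict:
    """Per-outcome importance weights w(y) = P_ext(y) / P_int(y) under prior probability shift.
    Applying w(y) to external-cohort records aligns their outcome distribution
    with the internal cohort's."""
    return {y: p_outcome_external[y] / p_outcome_internal[y]
            for y in p_outcome_internal}

# Hypothetical prevalences: the external cohort is enriched for cases (y = 1)
print(label_shift_weights({0: 0.5, 1: 0.5}, {0: 0.8, 1: 0.2}))
```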


Subject(s)
Computer Simulation , Essential Hypertension , Probability , Humans , Models, Statistical , Risk Factors , Hypertension , Data Interpretation, Statistical , Biometry/methods
4.
Medicine (Baltimore) ; 103(22): e38238, 2024 May 31.
Article in English | MEDLINE | ID: mdl-39259105

ABSTRACT

Analyses using population-based health administrative data can return erroneous results if case identification is inaccurate ("misclassification bias"). An acetabular fracture (AF) prediction model using administrative data decreased misclassification bias compared to identifying AFs using diagnostic codes. This study measured the accuracy of this AF prediction model in another hospital. We calculated AF probability in all hospitalizations in the validation hospital between 2015 and 2020. A random sample of 1000 patients stratified by expected AF probability was selected, and their imaging studies were reviewed to determine true AF status. The AF prediction model was very discriminative (c-statistic 0.90, 95% CI: 0.87-0.92) and very well calibrated (integrated calibration index 0.056, 95% CI: 0.039-0.074). AF probability can be accurately determined using routinely collected health administrative data. This observation supports using the AF prediction model to minimize misclassification bias when studying AF using health administrative data.
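The c-statistic reported here is the probability that a randomly chosen true case receives a higher predicted probability than a randomly chosen non-case. A minimal sketch of that pairwise computation (the probabilities and outcomes below are invented for illustration):

```python
def c_statistic(predicted_probs, outcomes):
    """Concordance statistic (c-statistic / AUC): the fraction of (case, non-case)
    pairs in which the case received the higher predicted probability,
    counting ties as half-concordant."""
    cases = [p for p, y in zip(predicted_probs, outcomes) if y == 1]
    noncases = [p for p, y in zip(predicted_probs, outcomes) if y == 0]
    concordant = sum(1.0 if p > q else 0.5 if p == q else 0.0
                     for p in cases for q in noncases)
    return concordant / (len(cases) * len(noncases))

print(c_statistic([0.9, 0.6, 0.4, 0.1], [1, 0, 1, 0]))  # 0.75
```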


Subject(s)
Acetabulum , Fractures, Bone , Humans , Acetabulum/injuries , Female , Male , Fractures, Bone/epidemiology , Fractures, Bone/classification , Middle Aged , Adult , Probability , Aged , Models, Statistical , Hospitalization/statistics & numerical data
5.
Radiology ; 312(3): e233435, 2024 09.
Article in English | MEDLINE | ID: mdl-39225600

ABSTRACT

Background It is increasingly recognized that interstitial lung abnormalities (ILAs) detected at CT have potential clinical implications, but automated identification of ILAs has not yet been fully established. Purpose To develop and test automated ILA probability prediction models using machine learning techniques on CT images. Materials and Methods This secondary analysis of a retrospective study included CT scans from patients in the Boston Lung Cancer Study collected between February 2004 and June 2017. Visual assessment of ILAs by two radiologists and a pulmonologist served as the ground truth. Automated ILA probability prediction models were developed that used a stepwise approach involving section inference and case inference models. The section inference model produced an ILA probability for each CT section, and the case inference model integrated these probabilities to generate the case-level ILA probability. For indeterminate sections and cases, both two- and three-label methods were evaluated. For the case inference model, we tested three machine learning classifiers (support vector machine [SVM], random forest [RF], and convolutional neural network [CNN]). Receiver operating characteristic analysis was performed to calculate the area under the receiver operating characteristic curve (AUC). Results A total of 1382 CT scans (mean patient age, 67 years ± 11 [SD]; 759 women) were included. Of the 1382 CT scans, 104 (8%) were assessed as having ILA, 492 (36%) as indeterminate for ILA, and 786 (57%) as without ILA according to ground-truth labeling. The cohort was divided into a training set (n = 96; ILA, n = 48), a validation set (n = 24; ILA, n = 12), and a test set (n = 1262; ILA, n = 44). Among the models evaluated (two- and three-label section inference models; two- and three-label SVM, RF, and CNN case inference models), the model using the three-label method in the section inference model and the two-label method and RF in the case inference model achieved the highest AUC, at 0.87. Conclusion The model demonstrated substantial performance in estimating ILA probability, indicating its potential utility in clinical settings. © RSNA, 2024 Supplemental material is available for this article. See also the editorial by Zagurovskaya in this issue.
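The two-stage design aggregates per-section probabilities into case-level evidence before classification. A hypothetical aggregation step of the kind a case inference model might consume (this feature set is invented for illustration and is not the paper's actual method):

```python
def case_level_features(section_probs: list) -> dict:
    """Summarize per-CT-section ILA probabilities into case-level classifier inputs
    (a hypothetical scheme: mean, max, and fraction of high-probability sections)."""
    n = len(section_probs)
    return {
        "mean_prob": sum(section_probs) / n,
        "max_prob": max(section_probs),
        "frac_above_half": sum(p > 0.5 for p in section_probs) / n,
    }

# Hypothetical scan with 4 sections, two of which look abnormal
print(case_level_features([0.1, 0.2, 0.9, 0.8]))
```

A downstream classifier (e.g. an RF, as in the paper) would then map such case-level summaries to the final ILA probability.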


Subject(s)
Lung Diseases, Interstitial , Lung Neoplasms , Machine Learning , Radiographic Image Interpretation, Computer-Assisted , Tomography, X-Ray Computed , Humans , Tomography, X-Ray Computed/methods , Lung Diseases, Interstitial/diagnostic imaging , Retrospective Studies , Female , Male , Lung Neoplasms/diagnostic imaging , Aged , Middle Aged , Radiographic Image Interpretation, Computer-Assisted/methods , Boston , Lung/diagnostic imaging , Probability
6.
Cognition ; 252: 105915, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39151396

ABSTRACT

A severity effect has previously been documented, whereby numerical translations of verbal probability expressions are higher for severe outcomes than for non-severe outcomes. Recent work has additionally shown the same effect in the opposite direction (translating numerical probabilities into words). Here, we aimed to test whether these effects lead to an escalation of subjective probabilities across a communication chain. In four 'communication chain' studies, participants at each communication stage either translated a verbal probability expression into a number, or a number into a verbal expression (where the probability to be translated was yoked to a previous participant). Across these four studies, we found a general Probability Escalation Effect, whereby subjective probabilities increased with subsequent communications for severe, non-severe and positive events. Having ruled out some alternative explanations, we propose that the most likely explanation is in terms of communications directing attention towards an event's occurrence. Probability estimates of focal outcomes increase across communication stages.


Subject(s)
Communication , Probability , Humans , Male , Female , Adult , Young Adult
7.
Astrobiology ; 24(8): 813-823, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39159441

ABSTRACT

The emergence of life from nonlife, or abiogenesis, remains a fundamental question in scientific inquiry. In this article, we investigate the probability of the origin of life (per conducive site) by leveraging insights from Earth's environments. If life originated endogenously on Earth, its existence is indeed endowed with informative value, although the interpretation of the attendant significance hinges critically upon prior assumptions. By adopting a Bayesian framework, for an agnostic prior, we establish a direct connection between the number of potential locations for abiogenesis on Earth and the probability of life's emergence per site. Our findings suggest that constraints on the availability of suitable environments for the origin(s) of life on Earth may offer valuable insights into the probability of abiogenesis and the frequency of life in the universe.
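The trade-off between per-site probability and number of conducive sites can be illustrated with the simplest independent-site model (the paper's full Bayesian treatment involves explicit priors; this sketch, with invented numbers, only shows why the two quantities trade off):

```python
import math

def prob_life_emerges(p_per_site: float, n_sites: float) -> float:
    """Probability of at least one abiogenesis event across n independent conducive sites:
    1 - (1 - p)^n, computed stably for tiny p via log1p/expm1."""
    return -math.expm1(n_sites * math.log1p(-p_per_site))

# A tiny per-site probability can still make an origin likely if sites are abundant:
# p = 1e-9 per site across 1e9 sites gives about a 63% chance of at least one origin.
print(round(prob_life_emerges(1e-9, 1e9), 4))
```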


Subject(s)
Bayes Theorem , Origin of Life , Probability , Earth, Planet , Exobiology/methods
8.
NPJ Syst Biol Appl ; 10(1): 87, 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39134558

ABSTRACT

Network controllability unifies traditional control theory with the structural network information rooted in many large-scale biological systems of interest, from intracellular networks in molecular biology to brain neuronal networks. In controllability approaches, the set of minimum driver nodes is not unique, and critical nodes are the most important control elements because they appear in all possible solution sets. On the other hand, a common but largely unexplored feature in network control approaches is the probabilistic failure of edges or the uncertainty in the determination of interactions between molecules. This is particularly true when directed probabilistic interactions are considered. Until now, no efficient algorithm existed to determine critical nodes in probabilistic directed networks. Here we present a probabilistic control model based on a minimum dominating set framework that integrates the probabilistic nature of directed edges between molecules and determines the critical control nodes that drive the entire network functionality. The proposed algorithm, combined with the developed mathematical tools, offers practical efficiency in determining critical control nodes in large probabilistic networks. The method is then applied to the human intracellular signal transduction network, revealing that critical control nodes are associated with important biological features and perturbed sets of genes in human diseases, including SARS-CoV-2 target proteins and rare disorders. We believe that the proposed methodology can be useful for investigating multiple biological systems in which directed edges are probabilistic in nature, whether in natural systems or when interactions are determined with large uncertainties in silico.
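For intuition, a dominating set in a directed network is a set of nodes that, through themselves and their outgoing edges, covers every node. A plain greedy baseline for the deterministic case is sketched below (the paper's contribution extends this to probabilistic edges and to identifying critical nodes; the toy graph is invented):

```python
def greedy_dominating_set(out_neighbors: dict) -> list:
    """Greedy approximation of a minimum dominating set in a directed graph.
    out_neighbors maps node -> set of nodes it points to; a node dominates
    itself and every node it points to."""
    undominated = set(out_neighbors)
    chosen = []
    while undominated:
        # pick the node covering the most still-undominated nodes
        best = max(out_neighbors,
                   key=lambda v: len(({v} | out_neighbors[v]) & undominated))
        chosen.append(best)
        undominated -= {best} | out_neighbors[best]
    return chosen

# Hypothetical signalling motif: one hub regulating three targets
print(greedy_dominating_set({"hub": {"a", "b", "c"},
                             "a": set(), "b": set(), "c": set()}))  # ['hub']
```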


Subject(s)
Algorithms , COVID-19 , SARS-CoV-2 , Signal Transduction , Humans , Signal Transduction/physiology , Signal Transduction/genetics , Computational Biology/methods , Proteins/metabolism , Proteins/genetics , Probability , Models, Biological , Models, Statistical , Systems Biology/methods
9.
Can J Exp Psychol ; 78(3): 174-189, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39101884

ABSTRACT

We examined the human ability to encode and utilize local and global uncertainty information during a navigational task. Participants were tasked with navigating a virtual maze in which wall locations were obscured. Local cues and a global direction provided guidance. The validities of the global and local cues were separately and jointly varied across the two experiments. The results demonstrated that participants effectively utilized both global and local cues for navigation with a stronger reliance on local cues and a heightened precision in estimating their reliability. Our findings suggest that the representation of uncertainty for proximate events can be dissociated from that of distal events. Furthermore, humans effectively integrate both forms of information when making decisions during navigation tasks. This research advances our understanding of uncertainty processing and its implications for decision making in complex environments. (PsycInfo Database Record (c) 2024 APA, all rights reserved).


Subject(s)
Cues , Spatial Navigation , Humans , Spatial Navigation/physiology , Adult , Male , Young Adult , Uncertainty , Female , Maze Learning/physiology , Space Perception/physiology , Probability , Decision Making/physiology
10.
J Psychiatr Res ; 177: 420-428, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39098285

ABSTRACT

BACKGROUND: Accurately predicting the probability of aggressive behavior is crucial for guiding early intervention in patients with mood disorders. METHODS: Cox stepwise regression was conducted to identify potential influencing factors. Nomogram prediction models were constructed to predict the probabilities of aggressive behavior in patients with mood disorders, and their performance was assessed using the consistency index (C-index) and calibration plots. RESULTS: Research findings on 321 patients with mood disorders indicated that being older (HR = 0.92, 95% CI: 0.86-0.98), single (HR = 0.11, 95% CI: 0.02-0.68), having children (one child, HR = 0.07, 95%CI: 0.01-0.87; more than one child, HR = 0.33, 95%CI: 0.04-2.48), living in a dormitory (HR = 0.25, 95%CI: 0.08-0.77), non-student (employee, HR = 0.24, 95% CI: 0.07-0.88; non-employee, HR = 0.09, 95% CI: 0.02-0.35), and higher scores in subjective support (HR = 0.90, 95% CI: 0.82-0.99) were protective factors. Conversely, minorities (HR = 5.26, 95% CI: 1.23-22.48), living alone (HR = 4.37, 95% CI: 1.60-11.94), having a suicide history (HR = 2.51, 95% CI: 1.06-5.95), and having higher scores in EPQ-E (HR = 1.04, 95% CI: 1.00-1.08) and EPQ-P (HR = 1.03, 95% CI: 1.00-1.07) were identified as independent risk factors for aggressive behavior in patients with mood disorders. The nomogram prediction model demonstrated high discrimination and goodness-of-fit. CONCLUSIONS: A novel nomogram prediction model for the probability of aggressive behavior in patients with mood disorders was developed, effective in identifying at-risk populations and offering valuable insights for early intervention and proactive measures.
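A nomogram of this kind maps a patient's covariates to a linear predictor and then to an event probability via the baseline survival function. A schematic sketch under the proportional-hazards assumption (the baseline survival value below is illustrative; only the HRs are taken from the abstract):

```python
import math

def event_probability(baseline_survival: float, hazard_ratios: list) -> float:
    """Predicted event probability by time t under a Cox model:
    1 - S0(t) ** exp(linear predictor), where for binary covariates that are
    present, the linear predictor is the sum of the log hazard ratios."""
    linear_predictor = sum(math.log(hr) for hr in hazard_ratios)
    return 1.0 - baseline_survival ** math.exp(linear_predictor)

# Illustrative: assumed baseline event-free probability of 0.95, for a patient
# living alone (HR 4.37) with a suicide history (HR 2.51)
print(round(event_probability(0.95, [4.37, 2.51]), 3))
```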


Subject(s)
Aggression , Mood Disorders , Nomograms , Humans , Male , Mood Disorders/epidemiology , Female , Adult , Middle Aged , Cohort Studies , Young Adult , Probability , Proportional Hazards Models
11.
BMC Med Res Methodol ; 24(1): 171, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39107695

ABSTRACT

BACKGROUND: Dimension reduction methods do not always reduce their underlying indicators to a single composite score. Furthermore, such methods are usually based on optimality criteria that require discarding some information. We suggest, under some conditions, using the joint probability density function (joint pdf or JPD) of the p-dimensional random variable (the p indicators) as an index or a composite score. It is proved that this index is more informative than any alternative composite score. In two examples, we compare the JPD index with some alternatives constructed from traditional methods. METHODS: We develop a probabilistic unsupervised dimension reduction method based on the probability density of multivariate data. We show that the conditional distribution of the variables given the JPD is uniform, implying that the JPD is the most informative scalar summary under the most common notions of information. We also show that, under some widely plausible conditions, the JPD can be used as an index. To use the JPD as an index, in addition to having a plausible interpretation, all the random variables should have approximately the same direction (unidirectionality) as the density values (codirectionality). We applied these ideas to two data sets: first, the 7 Brief Pain Inventory Interference scale (BPI-I) items obtained from 8,889 US Veterans with chronic pain and, second, a novel measure based on administrative data for 912 US Veterans. To estimate the JPD in both examples, among the available JPD estimation methods, we used its conditional specifications, identified a well-fitted parametric model for each factored conditional (regression) specification, and, by maximizing the corresponding likelihoods, estimated their parameters. Due to the non-uniqueness of conditional specification, the average of all estimated conditional specifications was used as the final estimate.
Since a prevalent use of indices is ranking, we used measures of monotone dependence [e.g., Spearman's rank correlation (rho)] to assess the strength of unidirectionality and codirectionality. Finally, we cross-validated the JPD score against variance-covariance-based scores (factor scores in unidimensional models) and the "person's parameter" estimates of (Generalized) Partial Credit and Graded Response IRT models. We used Pearson Divergence as a measure of information and Shannon's entropy to compare uncertainties (informativeness) in these alternative scores. RESULTS: An unsupervised dimension reduction was developed based on the joint probability density (JPD) of the multi-dimensional data. The JPD, under regularity conditions, may be used as an index. For the well-established Brief Pain Inventory Interference scale (BPI-I; the 7-item short form) and for a new mental health severity index (MoPSI) with 6 indicators, we estimated the JPD scoring. We compared, assuming unidimensionality, factor scores and person scores of the Partial Credit model, the Generalized Partial Credit model, and the Graded Response model with JPD scoring. As expected, all scores' rankings in both examples were monotonically dependent with various strengths. Shannon entropy was the smallest for JPD scores. Pearson Divergence of the estimated densities of different indices against the uniform distribution was maximal for JPD scoring. CONCLUSIONS: An unsupervised probabilistic dimension reduction is possible. When appropriate, the joint probability density function can be used as the most informative index. Model specification and estimation and steps to implement the scoring were demonstrated. As expected, when the required assumptions in factor analysis and IRT models are satisfied, JPD scoring agrees with these established scores. However, when these assumptions are violated, JPD scores preserve all the information in the indicators with minimal assumptions.


Subject(s)
Probability , Humans , Pain/diagnosis , Severity of Illness Index , Pain Measurement/methods , Pain Measurement/statistics & numerical data , Mental Disorders/diagnosis , Models, Statistical , Algorithms
12.
BMC Genomics ; 25(1): 819, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39215209

ABSTRACT

BACKGROUND: Genes exist in a population in a variety of forms (alleles), as a consequence of multiple mutation events that have arisen over the course of time. In this work we consider a locus that is subject to either multiplicative or additive selection, and has n alleles, where n can take the values 2, 3, 4, … . We focus on determining the probability of fixation of each of the n alleles. For n = 2 alleles, analytical results that are 'exact' under the diffusion approximation can be found for the fixation probability. However, generally there are no equally exact results for n ≥ 3 alleles. In the absence of such exact results, we proceed by finding results for the fixation probability, under the diffusion approximation, as a power series in scaled strengths of selection such as R_{i,j} = 2N_e(s_i - s_j), where N_e is the effective population size, while s_i and s_j are the selection coefficients associated with alleles i and j, respectively. RESULTS: We determined the fixation probability when all terms up to second order in the R_{i,j} are kept. The truncation of the power series requires that the R_{i,j} cannot be indefinitely large. For magnitudes of the R_{i,j} up to a value of approximately 1, numerical evidence suggests that the results work well. Additionally, results given for the particular case of n = 3 alleles illustrate a general feature that holds for n ≥ 3 alleles: the fixation probability of a particular allele depends on that allele's initial frequency, but generally it also depends on the initial frequencies of the other alleles at the locus, as well as their selective effects. CONCLUSIONS: We have analytically exposed the leading way the probability of fixation at a locus with multiple alleles is affected by selection. This result may offer important insights into CDCV (common disease, common variant) traits that have extreme phenotypic variance due to numerous, low-penetrance susceptibility alleles.
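For the n = 2 case mentioned in the abstract, the diffusion approximation has a well-known closed form (Kimura's fixation probability), which the paper's power series generalizes to n ≥ 3 alleles. A sketch using the paper's scaling R = 2·N_e·(s_1 - s_2):

```python
import math

def fixation_probability(p0: float, R: float) -> float:
    """Kimura's diffusion-approximation fixation probability at a 2-allele locus:
    u(p0) = (1 - exp(-2*R*p0)) / (1 - exp(-2*R)),
    with p0 the initial frequency of allele 1 and R = 2*Ne*(s1 - s2)."""
    if abs(R) < 1e-12:
        return p0  # neutral limit: fixation probability equals initial frequency
    return math.expm1(-2.0 * R * p0) / math.expm1(-2.0 * R)

# A mildly favored allele (R = 1) starting at frequency 0.5 fixes ~73% of the time.
print(round(fixation_probability(0.5, 1.0), 4))
```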


Subject(s)
Alleles , Models, Genetic , Probability , Selection, Genetic , Gene Frequency , Genetic Loci , Humans
13.
PLoS One ; 19(8): e0307883, 2024.
Article in English | MEDLINE | ID: mdl-39208318

ABSTRACT

This study aimed to propose a novel method for dynamic risk assessment using a Bayesian network (BN) based on fuzzy data to decrease uncertainty compared to traditional methods by integrating Interval Type-2 Fuzzy Sets (IT2FS) and Z-numbers. A bow-tie diagram was constructed by employing the System Hazard Identification, Prediction, and Prevention (SHIPP) approach, the Top Event Fault Tree, and the Barriers Failure Fault Tree. The experts then provided their opinions and confidence levels on the prior probabilities of the basic events, which were quantified utilizing the IT2FS and combined using the Z-number to reduce the uncertainty of the prior probability. The posterior probability of the critical basic events (CBEs) was obtained using the beta distribution based on recorded data on their requirements and failure rates over five years. This information was then fed into the BN. Updating the BN allowed calculation of the posterior probability of barrier failure and consequences. Spherical tanks were used as a case study to demonstrate and confirm the significant benefits of the methodology. The results indicated that the overall posterior probability of consequences following barrier failure displayed an upward trend over the 5-year period. This rise in the IT2FS-Z calculation outcomes exhibited a shallower slope than in the IT2FS mode, attributable to the impact of experts' confidence levels in the IT2FS-Z mode. These differences became more evident at a variance of 10^-4 compared to 10^-5. This study offers industry managers a more comprehensive and reliable understanding of achieving the most effective accident prevention performance.


Subject(s)
Bayes Theorem , Humans , Fuzzy Logic , Risk Assessment/methods , Probability , Accidents/statistics & numerical data
14.
J Acoust Soc Am ; 156(2): 1367-1379, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-39189786

ABSTRACT

Predictions of gradient degree of lenition of voiceless and voiced stops in a corpus of Argentine Spanish are evaluated using three acoustic measures (minimum and maximum intensity velocity and duration) and two recurrent neural network (Phonet) measures (posterior probabilities of sonorant and continuant phonological features). While mixed and inconsistent predictions were obtained across the acoustic metrics, sonorant and continuant probability values were consistently in the direction predicted by known factors of a stop's lenition with respect to its voicing, place of articulation, and surrounding contexts. The results suggest the effectiveness of Phonet as an additional or alternative method of lenition measurement. Furthermore, this study has enhanced the accessibility of Phonet by releasing the trained Spanish Phonet model used in this study and a pipeline with step-by-step instructions for training and inferencing new models.


Subject(s)
Neural Networks, Computer , Phonetics , Speech Acoustics , Humans , Speech Production Measurement/methods , Time Factors , Probability , Acoustics
16.
Math Biosci Eng ; 21(7): 6521-6538, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-39176406

ABSTRACT

We modeled the impact of local vaccine mandates on the spread of vaccine-preventable infectious diseases, which in the absence of vaccines will mainly affect children. Examples of such diseases are measles, rubella, mumps, and pertussis. To model the spread of the pathogen, we used a stochastic SIR (susceptible, infectious, recovered) model with two levels of mixing in a closed population, often referred to as the household model. In this model, individuals make local contacts within a specific small subgroup of the population (e.g., within a household or a school class), while they also make global contacts with random people in the population at a much lower rate than the rate of local contacts. We considered what would happen if schools were given freedom to impose vaccine mandates on all of their pupils, except for the pupils that were exempt from vaccination because of medical reasons. We investigated first how such a mandate affected the probability of an outbreak of a disease. Furthermore, we focused on the probability that a pupil that was medically exempt from vaccination, would get infected during an outbreak. We showed that if the population vaccine coverage was close to the herd-immunity level, then both probabilities may increase if local vaccine mandates were implemented. This was caused by unvaccinated pupils possibly being moved to schools without mandates.


Subject(s)
Communicable Diseases , Disease Outbreaks , Schools , Vaccination , Humans , Disease Outbreaks/prevention & control , Child , Communicable Diseases/epidemiology , Communicable Diseases/transmission , Vaccine-Preventable Diseases/prevention & control , Vaccine-Preventable Diseases/epidemiology , Stochastic Processes , Immunity, Herd , Vaccines/administration & dosage , Measles/prevention & control , Measles/epidemiology , Probability , Computer Simulation , Mumps/prevention & control , Mumps/epidemiology , Mandatory Programs , Communicable Disease Control/methods , Communicable Disease Control/legislation & jurisprudence , Rubella/prevention & control , Rubella/epidemiology , Mandatory Vaccination
17.
Math Biosci Eng ; 21(6): 6407-6424, 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-39176432

ABSTRACT

This research focused on the mathematical modeling of the demographic dynamics of semelparous biological species through branching processes. We continued the research line started in previous papers, providing new methodological contributions of biological and ecological interest. We determined the probability distribution associated with the number of generations elapsed before the possible extinction of the population in its natural habitat. We mathematically modeled the phenomenon of populating or repopulating habitats with semelparous species. We also proposed estimates for the offspring parameters governing the reproductive strategies of the species, using maximum likelihood and Bayesian estimation methodologies. The statistical results are illustrated through a simulated example contextualized with Labord's chameleon (Furcifer labordi).
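In a branching process, the extinction question reduces to finding the smallest fixed point q = G(q) of the offspring probability generating function G. A generic sketch of that computation (the offspring distribution below is invented, not estimated from chameleon data):

```python
def extinction_probability(offspring_pmf, tol=1e-12, max_iter=100000):
    """Smallest root of q = G(q), where G is the offspring probability generating
    function; fixed-point iteration from q = 0 converges to it monotonically.
    offspring_pmf[k] = P(an individual leaves k offspring)."""
    q = 0.0
    for _ in range(max_iter):
        q_next = sum(p * q ** k for k, p in enumerate(offspring_pmf))
        if abs(q_next - q) < tol:
            return q_next
        q = q_next
    return q

# P(0 offspring)=0.25, P(1)=0.25, P(2)=0.5: mean 1.25 > 1, so extinction
# is not certain; solving q = 0.25 + 0.25q + 0.5q^2 gives q = 0.5.
print(round(extinction_probability([0.25, 0.25, 0.5]), 6))  # 0.5
```

When the mean offspring number is at most 1 (subcritical or critical), the same iteration returns an extinction probability of 1, as branching-process theory predicts.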


Subject(s)
Bayes Theorem , Computer Simulation , Ecosystem , Population Dynamics , Reproduction , Animals , Reproduction/physiology , Female , Male , Likelihood Functions , Lizards/physiology , Models, Biological , Algorithms , Probability
18.
Bull Math Biol ; 86(9): 114, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39101994

ABSTRACT

Bayesian phylogenetic inference is powerful but computationally intensive. Researchers may find themselves with two phylogenetic posteriors on overlapping data sets and may wish to approximate a combined result without having to re-run potentially expensive Markov chains on the combined data set. This raises the question: given overlapping subsets of a set of taxa (e.g. species or virus samples), and given posterior distributions on phylogenetic tree topologies for each of these taxon sets, how can we optimize a probability distribution on phylogenetic tree topologies for the entire taxon set? In this paper we develop a variational approach to this problem and demonstrate its effectiveness. Specifically, we develop an algorithm to find a suitable support of the variational tree topology distribution on the entire taxon set, as well as a gradient-descent algorithm to minimize the divergence from the restrictions of the variational distribution to each of the given per-subset probability distributions, in an effort to approximate the posterior distribution on the entire taxon set.


Subject(s)
Algorithms , Bayes Theorem , Markov Chains , Mathematical Concepts , Models, Genetic , Phylogeny , Computer Simulation , Probability
19.
J Chem Inf Model ; 64(16): 6350-6360, 2024 Aug 26.
Article in English | MEDLINE | ID: mdl-39088689

ABSTRACT

Protein engineering through directed evolution and (semi)rational approaches is routinely applied to optimize protein properties for a broad range of applications in industry and academia. The multitude of possible variants, combined with limited screening throughput, hampers efficient protein engineering. Data-driven strategies have emerged as a powerful tool to model the protein fitness landscape, which can then be explored in silico, significantly accelerating protein engineering campaigns. However, such methods require an amount of training data that often cannot be provided to generate a reliable model of the fitness landscape. Here, we introduce MERGE, a method that combines direct coupling analysis (DCA) and machine learning (ML). MERGE enables data-driven protein engineering when only limited data are available for training, typically 50 to 500 labeled sequences. Our method demonstrates remarkable performance in predicting a protein's fitness value and rank from its sequence across diverse proteins and properties. Notably, MERGE outperforms state-of-the-art methods when only small data sets are available for modeling, requires fewer computational resources, and proves particularly promising for protein engineers who have access to limited amounts of data.
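MERGE itself couples DCA-derived sequence statistics with ML; as a deliberately simpler stand-in, the following sketches the low-data regime the abstract targets (50 labeled sequences) using ridge regression on one-hot sequence features over a synthetic additive fitness landscape. The landscape, sequence length, alphabet size, and noise level are all invented for illustration and are not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)
L, A, n_train, n_test = 10, 4, 50, 50   # 50 labeled sequences: low-data regime

def one_hot(seqs):
    """Encode integer sequences of length L over an A-letter alphabet."""
    X = np.zeros((len(seqs), L * A))
    for i, s in enumerate(seqs):
        X[i, np.arange(L) * A + s] = 1.0
    return X

seqs = rng.integers(0, A, size=(n_train + n_test, L))
w_true = rng.normal(size=L * A)          # hypothetical additive landscape
X = one_hot(seqs)
y = X @ w_true + rng.normal(scale=0.1, size=len(seqs))

# ridge regression fitted on only the 50 labeled training sequences
lam = 1.0
Xtr, ytr = X[:n_train], y[:n_train]
w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(L * A), Xtr.T @ ytr)
pred = X[n_train:] @ w

def spearman(a, b):
    """Rank correlation: Pearson correlation of the rank vectors."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(pred, y[n_train:])        # rank prediction quality on held-out data
```

Rank correlation on held-out variants is the quantity of interest here, since an engineer mainly needs the model to order candidate variants correctly.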


Subject(s)
Machine Learning , Protein Engineering , Proteins , Protein Engineering/methods , Proteins/chemistry , Proteins/metabolism , Probability , Models, Molecular
20.
Indian J Pathol Microbiol ; 67(3): 607-610, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39078980

ABSTRACT

INTRODUCTION: Risk management includes identifying various risks, assessing the probability of occurrence, and evaluating the severity of their consequences. As clinical laboratories are integrally involved in patient care, risks in the laboratories could present grave consequences in some instances. This study aimed to utilize simple techniques for risk management in a clinical laboratory. MATERIALS AND METHODS: All potential risks in the pathology laboratory of a tertiary-level hospital were identified and classified into natural calamity, environmental, manpower-related, pre-analytical, analytical, post-analytical, and laboratory hazard-related risks through a brainstorming session. The probability of occurrence of each risk was estimated from departmental and hospital records. The possible impact of each risk (score 1-10) was categorized as catastrophic, critical, serious, minor, negligible, or insignificant. The unweighted risk score was calculated by multiplying the probability of occurrence and the impact score. RESULTS: Inadequate sample-to-anticoagulant ratio had the highest probability of occurrence (22.85%), followed by quantity insufficient for analysis (7.30%) and laboratory information system (LIS) breakdown (6.58%). The highest unweighted risk score in our study was for inadequate sample-to-anticoagulant ratio (score 91.40), followed by improperly labeled samples (score 35.61), manpower competency issues (score 32.88), sample insufficient for analysis (score 29.20), and LIS breakdown (score 26.30). CONCLUSION: We found that among all the categories, risks involving the pre-analytical phase had the highest risk scores. The other important risks included manpower competency issues, requiring continued on-the-job training of staff as a risk reduction strategy. Brainstorming and probability analysis could be easily used for risk management in a clinical laboratory.
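The scoring rule in the abstract (unweighted risk score = probability of occurrence times impact) can be reproduced directly. The probabilities below are the ones reported; the impact scores are back-solved from the reported products and should be read as illustrative guesses, not figures stated in the paper.

```python
# unweighted risk score = probability of occurrence (%) * impact score (1-10)
# probabilities are from the abstract; impact scores are inferred guesses
risks = {
    "inadequate sample-to-anticoagulant ratio": (22.85, 4),
    "quantity insufficient for analysis":       (7.30, 4),
    "LIS breakdown":                            (6.58, 4),
}
scores = {name: round(prob * impact, 2) for name, (prob, impact) in risks.items()}
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
# the pre-analytical risk tops the ranking, matching the study's conclusion
```

Small rounding differences against the published scores (e.g. 26.30 for LIS breakdown) are expected, since the impact scores here are reconstructed rather than reported.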


Subject(s)
Probability , Risk Management , Humans , Risk Management/methods , Laboratories, Clinical , Pathology, Clinical , Tertiary Care Centers , Risk Reduction Behavior