ABSTRACT
Capturing rare yet pivotal events poses a significant challenge for molecular simulations. Path sampling provides a unique approach to tackling this issue without altering the potential energy landscape or the dynamics, enabling recovery of both thermodynamic and kinetic information. However, despite its exponential acceleration relative to standard molecular dynamics, generating numerous trajectories can still take a long time. By harnessing our recent algorithmic innovations, particularly subtrajectory moves with high acceptance coupled with asynchronous replica exchange featuring infinite swaps, we establish a highly parallelizable and rapidly converging path sampling protocol compatible with diverse high-performance computing architectures. We demonstrate our approach on the liquid-vapor phase transition in superheated water, the unfolding of the chignolin protein, and water dissociation. The latter, performed at the ab initio level, achieves comparable statistical accuracy within days, in contrast to a previous study that required over a year.
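To make the path sampling idea concrete, here is a minimal Python sketch of a one-way shooting move for an overdamped double-well system. It is a toy illustration of Monte Carlo sampling in path space, not the paper's high-acceptance subtrajectory or asynchronous infinite-swap machinery, and every parameter value is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
beta, dt, n_steps = 3.0, 1e-3, 4000

def force(x):          # F = -dV/dx for the double well V(x) = (x^2 - 1)^2
    return -4.0 * x * (x**2 - 1.0)

def propagate(x0, n):  # overdamped Langevin dynamics (Euler-Maruyama)
    xs = np.empty(n + 1)
    xs[0] = x0
    noise = np.sqrt(2.0 * dt / beta)
    for i in range(n):
        xs[i + 1] = xs[i] + force(xs[i]) * dt + noise * rng.normal()
    return xs

in_A = lambda x: x < -0.8
in_B = lambda x: x > 0.8
reactive = lambda p: in_A(p[0]) and in_B(p[-1])

# crude initial reactive path: a straight-line interpolation (not dynamical,
# but sufficient to seed the Markov chain over path space)
path = np.linspace(-1.0, 1.0, n_steps + 1)

n_acc = 0
for sweep in range(200):
    j = rng.integers(1, n_steps)          # random shooting point
    trial = np.concatenate([path[: j + 1],
                            propagate(path[j], n_steps - j)[1:]])
    if reactive(trial):                   # symmetric proposal: accept if still A -> B
        path, n_acc = trial, n_acc + 1
print(f"accepted {n_acc}/200 forward-shooting moves")
```

For stochastic dynamics, a forward segment regenerated with fresh noise is accepted whenever the trial path still connects the two stable states, which is exactly what the indicator check implements.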
ABSTRACT
This paper presents estimates of the prevalence of dementia in the United States from 2000 to 2016 by age, sex, race and ethnicity, education, and a measure of lifetime earnings, using data on 21,442 individuals aged 65 years and older and 97,629 person-year observations from a nationally representative survey, the Health and Retirement Study (HRS). The survey includes a range of cognitive tests, and a subsample underwent clinical assessment for dementia. We developed a longitudinal, latent-variable model of cognitive status, which we estimated using the Markov Chain Monte Carlo method. This model provides more accurate estimates of dementia prevalence in population subgroups than do previously used methods on the HRS. The age-adjusted prevalence of dementia decreased from 12.2% in 2000 (95% CI, 11.7 to 12.7%) to 8.5% in 2016 (7.9 to 9.1%) in the 65+ population, a statistically significant decline of 3.7 percentage points or 30.1%. Females are more likely to live with dementia, but the sex difference has narrowed. In the male subsample, we found a reduction in inequalities across education, earnings, and racial and ethnic groups; among females, those inequalities also declined, but less strongly. We observed a substantial increase in the level of education between 2000 and 2016 in the sample. This compositional change can explain, in a statistical sense, about 40% of the reduction in dementia prevalence among men and 20% among women, whereas compositional changes in the older population by age, race and ethnicity, and cardiovascular risk factors mattered less.
Subject(s)
Dementia, Ethnicity, United States/epidemiology, Humans, Male, Female, Prevalence, Educational Status, Retirement, Dementia/epidemiology
ABSTRACT
Epilepsy is a disorder characterized by paroxysmal transitions between multistable states. Dynamical systems have been useful for modeling the paroxysmal nature of seizures. At the same time, recent analyses of intracranial electroencephalography (EEG) recordings have revealed that an electrographic measure of epileptogenicity, interictal epileptiform activity, exhibits cycling patterns ranging from ultradian to multidien rhythmicity, with seizures phase-locked to specific phases of these latent cycles. However, many mechanistic questions about seizure cycles remain unanswered. Here, we provide a principled approach that recasts the modeling of seizure chronotypes within a statistical dynamical systems framework by developing a Bayesian switching linear dynamical system (SLDS) with variable selection to estimate latent seizure cycles. We propose a Markov chain Monte Carlo algorithm that employs particle Gibbs with ancestral sampling to estimate latent cycles in epilepsy, and we apply unsupervised learning to spectral features of latent cycles to uncover clusters in cycling tendency. We analyze the largest database of patient-reported seizures in the world to comprehensively characterize multidien cycling patterns among 1,012 people with epilepsy, spanning from infancy to older adulthood. Our work advances knowledge of cycling in epilepsy by investigating how multidien seizure cycles vary in people with epilepsy, while demonstrating an application of an SLDS to frame seizure cycling within a nonlinear dynamical systems framework. It also lays the groundwork for future studies to pursue data-driven hypothesis generation regarding the mechanistic drivers of seizure cycles.
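As a toy illustration of the final analysis step (unsupervised learning on spectral features of cycles), the following Python sketch computes periodograms of simulated daily seizure counts and clusters patients by their normalized spectra. The SLDS and particle Gibbs machinery of the paper is not reproduced here, and all settings are hypothetical.

```python
import numpy as np
from scipy.signal import periodogram
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_days = 365

def simulate_counts(period):
    # Poisson daily seizure counts modulated by a multidien cycle
    lam = np.exp(0.2 + 0.8 * np.sin(2 * np.pi * np.arange(n_days) / period))
    return rng.poisson(lam)

# 40 synthetic "patients": half with ~7-day cycles, half with ~21-day cycles
counts = [simulate_counts(p) for p in ([7.0] * 20 + [21.0] * 20)]

feats = []
for y in counts:
    f, pxx = periodogram(y - y.mean())
    feats.append(pxx[1:50] / pxx[1:50].sum())   # normalized low-frequency spectrum
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(np.array(feats))
print(labels)   # should separate the 7-day from the 21-day group
```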
Subject(s)
Electroencephalography, Epilepsy, Humans, Aged, Bayes Theorem, Seizures, Nonlinear Dynamics
ABSTRACT
We measured neutralizing antibodies (nAbs) against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in a cohort of 235 convalescent patients (representing 384 analytic samples), who were followed for up to 588 days after the first report of onset in Taiwan. A proposed Bayesian approach was used to estimate nAb dynamics in patients postvaccination. This model revealed that the titer reached its peak (1819.70 IU/mL) by 161 days postvaccination and decreased to 154.18 IU/mL by day 360; that is, nAb titers declined within 6 months after vaccination. Protection against variant B.1.1.529 (i.e., Omicron) may only occur during the peak period.
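A rough sense of how a peak-then-decline titer trajectory can be recovered from scattered samples is given by the least-squares sketch below. The paper's model is Bayesian, so this curve-fit stand-in, with hypothetical kinetic constants, only illustrates the shape of the dynamics.

```python
import numpy as np
from scipy.optimize import curve_fit

def titer(t, A, k_rise, k_decay):
    # rise-then-decay kinetics; peaks at t* = ln(k_rise/k_decay)/(k_rise - k_decay)
    return A * (np.exp(-k_decay * t) - np.exp(-k_rise * t))

rng = np.random.default_rng(2)
t_obs = rng.uniform(14, 550, 120)                          # days post-vaccination
true = titer(t_obs, 3000.0, 0.02, 0.005)
y_obs = true * np.exp(0.3 * rng.normal(size=t_obs.size))   # lognormal noise

p, _ = curve_fit(titer, t_obs, y_obs, p0=(2000.0, 0.05, 0.01), maxfev=10000)
A, k1, k2 = p
t_peak = np.log(k1 / k2) / (k1 - k2)                       # analytic peak time
print(f"peak ~{titer(t_peak, *p):.0f} IU/mL at day {t_peak:.0f}")
```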
Subject(s)
COVID-19, Humans, COVID-19/prevention & control, SARS-CoV-2, Bayes Theorem, Vaccination, Neutralizing Antibodies, Antiviral Antibodies
ABSTRACT
We consider the problem of inferring a contact network from nodal states observed during an epidemiological process. In a black-box Bayesian optimisation framework, this problem reduces to a discrete likelihood optimisation over the set of possible networks. The cardinality of this set grows combinatorially with the number of network nodes, which makes this optimisation computationally challenging. For each network, its likelihood is the probability of the observed data appearing during the evolution of the epidemiological process on that network. This probability can be very small, particularly if the network differs significantly from the ground truth network, from which the observed data actually arise. Commonly used stochastic simulation algorithms struggle to capture rare events and hence to estimate small probabilities and likelihoods. In this paper, we replace stochastic simulation with a direct solution of the chemical master equation for the probabilities of all network states. Since this equation also suffers from the curse of dimensionality, we apply tensor train approximations to overcome it and enable fast and accurate computations. Numerical simulations demonstrate efficient black-box Bayesian inference of the network.
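The following Python sketch illustrates the exact-likelihood idea on a network small enough to enumerate: it builds the chemical master equation generator for SIS dynamics on three nodes and scores every candidate edge set by the probability of an observed snapshot. The tensor train machinery that makes this tractable for large networks is not shown, and all rates are hypothetical.

```python
import numpy as np
from itertools import combinations
from scipy.linalg import expm

n, beta, gamma, t_obs = 3, 1.0, 0.5, 2.0

def generator(edges):
    """Exact CME generator for SIS dynamics on a labelled graph of n nodes;
    state s is a bitmask in which bit i = 1 means node i is infected."""
    nbrs = {i: set() for i in range(n)}
    for i, j in edges:
        nbrs[i].add(j)
        nbrs[j].add(i)
    Q = np.zeros((2**n, 2**n))
    for s in range(2**n):
        for i in range(n):
            if s >> i & 1:                        # infected node recovers
                Q[s, s & ~(1 << i)] += gamma
            else:                                 # susceptible node gets infected
                k = sum(s >> j & 1 for j in nbrs[i])
                Q[s, s | (1 << i)] += beta * k
        Q[s, s] = -Q[s].sum()                     # rows of a generator sum to zero
    return Q

p0 = np.zeros(2**n)
p0[0b001] = 1.0                                   # node 0 initially infected
observed = 0b111                                  # snapshot: all nodes infected at t_obs

# score every candidate edge set by the exact likelihood of the snapshot
all_edges = list(combinations(range(n), 2))
for r in range(len(all_edges) + 1):
    for edges in combinations(all_edges, r):
        lik = (p0 @ expm(generator(edges) * t_obs))[observed]
        print(edges, f"{lik:.4f}")
```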
Subject(s)
Algorithms, Bayes Theorem, Humans, Computer Simulation
ABSTRACT
This study investigates the impact of spatio-temporal correlation on models of food security and nutrition in Africa using four spatio-temporal models: the Spatio-Temporal Poisson Linear Trend Model (SPLTM), the Poisson Temporal Model (TMS), the Spatio-Temporal Poisson Anova Model (SPAM), and the Spatio-Temporal Poisson Separable Model (STSM). Evaluating model goodness of fit using the Watanabe-Akaike Information Criterion (WAIC) and assessing bias through root mean square error and mean absolute error values revealed a consistent monotonic pattern. SPLTM consistently tends to overestimate food security, while TMS exhibits a mixed bias profile, shifting between overestimation and underestimation under varying correlation settings. SPAM proves the most reliable, showing minimal bias and the lowest WAIC across diverse scenarios, while STSM consistently underestimates food security, particularly in regions with low to moderate spatio-temporal correlation. Because SPAM consistently outperforms the other models, it is a strong choice for modeling food security and nutrition dynamics in Africa. This research highlights the impact of spatial and temporal correlations on food security and nutrition patterns and provides guidance for model selection and refinement. Researchers are encouraged to carefully evaluate the bias and goodness-of-fit characteristics of candidate models, ensuring their alignment with the specific attributes of their data and research goals. This knowledge allows researchers to select models that offer reliability and consistency, enhancing the applicability of their findings.
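For readers unfamiliar with WAIC, the sketch below shows the standard computation from a matrix of pointwise posterior log-likelihoods. It is a generic illustration on a conjugate Poisson toy model, not the paper's spatio-temporal models.

```python
import numpy as np
from scipy.special import logsumexp
from scipy.stats import poisson

def waic(loglik):
    """loglik: (S posterior draws, N observations) pointwise log-likelihoods.
    WAIC = -2 * (lppd - p_waic), on the deviance scale."""
    S = loglik.shape[0]
    lppd = np.sum(logsumexp(loglik, axis=0) - np.log(S))
    p_waic = np.sum(np.var(loglik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# toy check: Poisson data, conjugate Gamma(1, 1) posterior draws of the rate
rng = np.random.default_rng(3)
y = rng.poisson(4.0, size=50)
lam_draws = rng.gamma(y.sum() + 1, 1.0 / (len(y) + 1), size=1000)
ll = poisson.logpmf(y[None, :], lam_draws[:, None])   # shape (S, N)
print(f"WAIC = {waic(ll):.1f}")
```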
Subject(s)
Food Security, Africa, Food Security/methods, Spatio-Temporal Analysis, Humans, Computer Simulation, Poisson Distribution
ABSTRACT
BACKGROUND: Genomes are inherently inhomogeneous, with features such as base composition, recombination, gene density, and gene expression varying along chromosomes. Evolutionary, biological, and biomedical analyses aim to quantify this variation, account for it during inference procedures, and ultimately determine the causal processes behind it. Since sequential observations along chromosomes are not independent, it is unsurprising that autocorrelation patterns have been observed, e.g., in human base composition. In this article, we develop a class of Hidden Markov Models (HMMs) called oHMMed (ordered HMM with emission densities; an R package of the same name is available on CRAN). These models identify the number of comparably homogeneous regions within autocorrelated observed sequences. The regions are modelled as discrete hidden states; the observed data points are realisations of continuous probability distributions with state-specific means that enable an ordering of these distributions. The observed sequence is labelled according to the hidden states, permitting only neighbouring states that are also neighbours within the ordering of their associated distributions. The parameters that characterise these state-specific distributions are inferred. RESULTS: We apply our oHMMed algorithms to the proportion of G and C bases (modelled as a mixture of normal distributions) and the number of genes (modelled as a mixture of Poisson-gamma distributions) in windows along the human, mouse, and fruit fly genomes. This results in a partitioning of the genomes into regions with statistically distinguishable averages of these features, and in a characterisation of their continuous patterns of variation. With regard to the genomic G and C proportion, this latter result distinguishes oHMMed from segmentation algorithms based on isochore or compositional domain theory. We further use oHMMed to conduct a detailed analysis of variation in chromatin accessibility (ATAC-seq) and the epigenetic markers H3K27ac and H3K27me3 (modelled as mixtures of Poisson-gamma distributions) along human chromosome 1, together with their correlations. CONCLUSIONS: Our algorithms provide an approach, free of biological assumptions, to characterising genomic landscapes shaped by continuous, autocorrelated patterns of variation. Despite this, the resulting genome segmentation enables extraction of compositionally distinct regions for further downstream analyses.
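A minimal sketch of the decoding side of such a model is given below: a three-state Gaussian HMM with ordered emission means and nearest-neighbour transitions, segmented with the Viterbi algorithm. oHMMed itself infers the parameters by MCMC; here they are fixed by hand and all values are hypothetical.

```python
import numpy as np
from scipy.stats import norm

# 3 hidden states with *ordered* emission means (low/mid/high GC) and a
# transition matrix allowing only moves between neighbouring states,
# mirroring oHMMed's ordering constraint
means, sd = np.array([0.35, 0.45, 0.55]), 0.03
A = np.array([[0.98, 0.02, 0.00],
              [0.01, 0.98, 0.01],
              [0.00, 0.02, 0.98]])
pi = np.array([1/3, 1/3, 1/3])

def viterbi(obs):
    logA = np.log(A + 1e-300)
    T, K = len(obs), len(means)
    logB = norm.logpdf(obs[:, None], means[None, :], sd)   # (T, K) emissions
    delta = np.log(pi) + logB[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + logA       # scores[i, j]: best path ending i -> j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # backtrace
        path.append(int(back[t][path[-1]]))
    return path[::-1]

rng = np.random.default_rng(4)
true = [0] * 100 + [1] * 80 + [2] * 120
gc = rng.normal(means[true], sd)             # simulated GC proportions per window
print(np.mean(np.array(viterbi(gc)) == true))   # fraction of windows recovered
```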
Subject(s)
Genome, Genomics, Animals, Humans, Mice, Markov Chains, Base Composition, Probability, Algorithms
ABSTRACT
The clustering of proteins is of interest in cancer cell biology. This article proposes a hierarchical Bayesian model for protein (variable) clustering that hinges on correlation structure. Starting from a multivariate normal likelihood, we enforce the clustering through prior modeling using an angle-based unconstrained reparameterization of correlations, and we assume a truncated Poisson distribution (to penalize a large number of clusters) as the prior on the number of clusters. The posterior distributions of the parameters are not available in explicit form, so a reversible jump Markov chain Monte Carlo based technique is used to simulate the parameters from the posteriors. The end products of the proposed method are the estimated cluster configuration of the proteins (variables) and the number of clusters. The Bayesian method is flexible enough to cluster the proteins as well as estimate the number of clusters. The performance of the proposed method has been substantiated with extensive simulation studies and a protein expression dataset with a hereditary disposition in breast cancer, in which the proteins come from different pathways.
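The angle-based reparameterization mentioned above can be sketched directly: any set of angles in (0, π) maps through a lower-triangular "hypersphere" factor to a valid correlation matrix, which is what lets a sampler move in an unconstrained space. A minimal version, assuming the standard hypersphere (Cholesky) construction is the one intended:

```python
import numpy as np

def corr_from_angles(theta, p):
    """Map p*(p-1)/2 angles in (0, pi) to a valid p x p correlation matrix
    via the hypersphere parameterization R = B B^T (rows of B have unit norm)."""
    B = np.zeros((p, p))
    B[0, 0] = 1.0
    it = iter(theta)
    for i in range(1, p):
        prod = 1.0
        for j in range(i):
            a = next(it)
            B[i, j] = np.cos(a) * prod
            prod *= np.sin(a)
        B[i, i] = prod
    return B @ B.T

p = 4
rng = np.random.default_rng(5)
theta = rng.uniform(0.1, np.pi - 0.1, p * (p - 1) // 2)
R = corr_from_angles(theta, p)
print(np.diag(R))                         # all ones by construction
print(np.linalg.eigvalsh(R).min() > 0)    # positive definite
```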
Subject(s)
Breast Neoplasms, Humans, Female, Bayes Theorem, Breast Neoplasms/genetics, Genetic Models, Cluster Analysis, Markov Chains, Monte Carlo Method
ABSTRACT
Molecular evolutionary rate variation is a key aspect of the evolution of many organisms that can be modeled using molecular clock models. For example, fixed local clocks revealed the role of episodic evolution in the emergence of SARS-CoV-2 variants of concern. Like all statistical models, however, the reliability of such inferences is contingent on an assessment of statistical evidence. We present a novel Bayesian phylogenetic approach for detecting episodic evolution. It consists of computing Bayes factors, as the ratio of posterior and prior odds of evolutionary rate increases, effectively quantifying support for the effect size. We conducted an extensive simulation study to illustrate the power of this method and benchmarked it against formal model comparison of a range of molecular clock models using (log) marginal likelihood estimation, and against inference under a random local clock model. Quantifying support for the effect size has higher sensitivity than formal model testing and is straightforward to compute, because it only needs samples from the posterior and prior distributions. However, formal model testing has the advantage of accommodating a wide range of molecular clock models. We also assessed an automated approach, known as the random local clock, in which branches under episodic evolution may be detected without their a priori definition. In an empirical analysis of a data set of SARS-CoV-2 genomes, we find "very strong" evidence for episodic evolution. Our results provide guidelines and practical methods for Bayesian detection of episodic evolution, as well as avenues for further research into this phenomenon.
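The Bayes factor construction is simple enough to sketch with generic Monte Carlo samples: it is the posterior odds of a rate increase divided by the prior odds of the same event. A toy version follows; the two distributions below are stand-ins, not phylogenetic output.

```python
import numpy as np

def bf_rate_increase(post_log_ratio, prior_log_ratio, threshold=0.0):
    """Bayes factor as posterior odds / prior odds that a branch's
    log rate ratio exceeds `threshold`, from samples of both distributions."""
    p_post = np.mean(post_log_ratio > threshold)
    p_prior = np.mean(prior_log_ratio > threshold)
    post_odds = p_post / (1.0 - p_post)
    prior_odds = p_prior / (1.0 - p_prior)
    return post_odds / prior_odds

rng = np.random.default_rng(6)
prior = rng.normal(0.0, 1.0, 100_000)   # symmetric prior: prior odds ~ 1
post = rng.normal(1.2, 0.4, 100_000)    # posterior shifted toward an increase
print(f"BF = {bf_rate_increase(post, prior):.1f}")
```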
ABSTRACT
BACKGROUND: Patients with chronic hepatitis C (CHC) can be cured with the new highly effective interferon-free combination treatments (direct-acting antivirals, DAAs) that were approved in 2014. However, CHC is a largely silent disease, and many individuals are unaware of their infections until the late stages of the disease. The impact of wider access to effective treatments and improved awareness of the disease on the number of infections and the number of patients who remain undiagnosed is not known in Canada. Such evidence can guide the development of strategies and interventions to reduce the burden of CHC and meet the World Health Organization's (WHO) 2030 elimination targets. The purpose of this study is to use a back-calculation framework informed by provincial population-level health administrative data to estimate the prevalence of CHC and the proportion of cases that remain undiagnosed in the three most populated provinces in Canada: British Columbia (BC), Ontario and Quebec. METHODS: We conducted a population-based retrospective analysis of health administrative data for the three provinces to generate the annual incidence of newly diagnosed CHC cases, decompensated cirrhosis (DC), hepatocellular carcinoma (HCC) and HCV treatment initiations. For each province, the data were stratified into three birth cohorts: individuals born prior to 1945, individuals born between 1945 and 1965 and individuals born after 1965. We used a back-calculation modelling approach to estimate the prevalence and the undiagnosed proportion of CHC. The historical prevalence of CHC was inferred through a calibration process based on a Bayesian Markov chain Monte Carlo (MCMC) algorithm. The algorithm constructs the historical prevalence of CHC for each cohort by comparing the model-generated annual incidence of CHC-related health events against the observed diagnosed cases generated in the retrospective analysis. RESULTS: The results show a decreasing trend in both CHC prevalence and the undiagnosed proportion in BC, Ontario and Quebec. In 2018, CHC prevalence was estimated to be 1.23% (95% CI: 0.96%-1.62%), 0.91% (95% CI: 0.82%-1.04%) and 0.57% (95% CI: 0.51%-0.64%) in BC, Ontario and Quebec, respectively. The CHC undiagnosed proportion was assessed to be 35.44% (95% CI: 27.07%-45.83%), 34.28% (95% CI: 26.74%-41.62%) and 46.32% (95% CI: 37.85%-52.80%) in BC, Ontario and Quebec, respectively, in 2018. Also, since the introduction of the new DAA treatments in 2014, CHC prevalence decreased from 1.39% to 1.23%, 0.97% to 0.91% and 0.65% to 0.57% in BC, Ontario and Quebec, respectively. Similarly, the CHC undiagnosed proportion decreased from 38.78% to 35.44%, 38.70% to 34.28% and 47.54% to 46.32% in BC, Ontario and Quebec, respectively, from 2014 to 2018. CONCLUSIONS: We estimated that CHC prevalence and the undiagnosed proportion have declined in all three provinces since the new DAA treatments were approved in 2014. Yet, our findings show that a significant proportion of HCV cases remain undiagnosed across all provinces, highlighting the need to increase investment in screening. Our findings provide essential evidence to guide decisions about current and future HCV strategies and help achieve the WHO goal of eliminating hepatitis C in Canada by 2030.
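To convey the back-calculation idea in miniature, the sketch below deconvolves a series of diagnoses into historical infection incidence given an assumed infection-to-diagnosis delay distribution. The paper instead calibrates a full epidemic model by MCMC, so this non-negative least squares stand-in and all of its numbers are purely illustrative.

```python
import numpy as np
from scipy.optimize import nnls

# back-calculation sketch: diagnoses(t) = sum_s infections(s) * f(t - s),
# where f is the infection-to-diagnosis delay distribution
years = 30
f = np.diff(1 - np.exp(-0.08 * np.arange(years + 1)))   # hypothetical delay pmf

rng = np.random.default_rng(13)
true_inf = 1000 * np.exp(-0.5 * ((np.arange(years) - 12) / 6.0) ** 2)
A = np.array([[f[t - s] if 0 <= t - s < years else 0.0
               for s in range(years)] for t in range(years)])
diagnoses = rng.poisson(A @ true_inf)

est_inf, _ = nnls(A, diagnoses.astype(float))           # non-negative deconvolution
print(np.round(est_inf[:10]))
```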
Subject(s)
Antiviral Agents, Hepatocellular Carcinoma, Chronic Hepatitis C, Humans, Chronic Hepatitis C/epidemiology, Chronic Hepatitis C/drug therapy, Chronic Hepatitis C/diagnosis, Antiviral Agents/therapeutic use, Prevalence, Male, Female, Retrospective Studies, Middle Aged, Hepatocellular Carcinoma/epidemiology, Aged, Adult, Quebec/epidemiology, Ontario/epidemiology, Liver Neoplasms/epidemiology, British Columbia/epidemiology, Liver Cirrhosis/epidemiology, Incidence
ABSTRACT
Prior distributions, which represent one's belief in the distributions of unknown parameters before observing the data, impact Bayesian inference in a critical and fundamental way. With the ability to incorporate external information from expert opinions or historical datasets, the priors, if specified appropriately, can improve the statistical efficiency of Bayesian inference. In survival analysis, based on the concept of unit information (UI) under parametric models, we propose the unit information Dirichlet process (UIDP) as a new class of nonparametric priors for the underlying distribution of time-to-event data. By deriving the Fisher information in terms of the differential of the cumulative hazard function, the UIDP prior is formulated to match its prior UI with the weighted average of UI in historical datasets and thus can utilize both parametric and nonparametric information provided by historical datasets. With a Markov chain Monte Carlo algorithm, simulations and real data analysis demonstrate that the UIDP prior can adaptively borrow historical information and improve statistical efficiency in survival analysis.
Subject(s)
Bayes Theorem, Computer Simulation, Markov Chains, Statistical Models, Monte Carlo Method, Survival Analysis, Humans, Algorithms, Biometry/methods, Statistical Data Interpretation
ABSTRACT
The scope of this paper is a multivariate setting involving categorical variables. Following an external manipulation of one variable, the goal is to evaluate the causal effect on an outcome of interest. A typical scenario involves a system of variables representing lifestyle, physical and mental features, symptoms, and risk factors, with the outcome being the presence or absence of a disease. These variables are interconnected in complex ways, allowing the effect of an intervention to propagate through multiple paths. A distinctive feature of our approach is the estimation of causal effects while accounting for uncertainty in both the dependence structure, which we represent through a directed acyclic graph (DAG), and the DAG-model parameters. Specifically, we propose a Markov chain Monte Carlo algorithm that targets the joint posterior over DAGs and parameters, based on an efficient reversible-jump proposal scheme. We validate our method through extensive simulation studies and demonstrate that it outperforms current state-of-the-art procedures in terms of estimation accuracy. Finally, we apply our methodology to analyze a dataset on depression and anxiety in undergraduate students.
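The way an intervention's effect propagates through multiple paths of a DAG can be illustrated with a tiny simulation: mutilate the graph by fixing the intervened variable and compare outcome means. This is a continuous linear toy with a fixed DAG, unlike the paper's categorical model with posterior uncertainty over graphs; all names and coefficients are invented.

```python
import numpy as np

rng = np.random.default_rng(14)

# a small linear DAG: lifestyle -> symptom -> disease, plus lifestyle -> disease.
# do(lifestyle = x) is simulated by severing lifestyle's own randomness.
def simulate(n, do_lifestyle=None):
    lifestyle = rng.normal(size=n) if do_lifestyle is None else np.full(n, do_lifestyle)
    symptom = 0.8 * lifestyle + rng.normal(size=n)
    disease = 0.5 * symptom + 0.3 * lifestyle + rng.normal(size=n)
    return disease

# total causal effect along both paths: 0.5 * 0.8 + 0.3 = 0.7 per unit of lifestyle
effect = (simulate(200_000, do_lifestyle=1.0).mean()
          - simulate(200_000, do_lifestyle=0.0).mean())
print(f"estimated causal effect of do(lifestyle): {effect:.3f}")   # ~0.7
```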
Subject(s)
Algorithms, Causality, Computer Simulation, Depression, Markov Chains, Statistical Models, Monte Carlo Method, Humans, Anxiety, Biometry/methods
ABSTRACT
Inferring the cancer-type specificities of ultra-rare, genome-wide somatic mutations is an open problem. Traditional statistical methods cannot handle such data due to their ultra-high dimensionality and extreme sparsity. To harness the information in rare mutations, we recently proposed a formal multilevel multilogistic "hidden genome" model. Through its hierarchical layers, the model condenses information in ultra-rare mutations through meta-features embodying mutation contexts to characterize cancer types. Consistent, scalable point estimation of the model can incorporate tens of millions of variants across thousands of tumors and permits impressive prediction and attribution. However, principled statistical inference is infeasible due to the volume, correlation, and noninterpretability of mutation contexts. In this paper, we propose a novel framework that leverages topic models from computational linguistics to achieve dimension reduction of mutation contexts, producing interpretable, decorrelated meta-feature topics. We propose an efficient MCMC algorithm for implementation that permits rigorous full Bayesian inference at a scale that is orders of magnitude beyond the capability of existing out-of-the-box inferential high-dimensional multi-class regression methods and software. Applying our model to the Pan Cancer Analysis of Whole Genomes dataset reveals interesting biological insights, including somatic mutational topics associated with UV exposure in skin cancer, aging in colorectal cancer, and a strong influence of epigenome organization in liver cancer. Under cross-validation, our model demonstrates highly competitive predictive performance against black-box methods such as random forests and deep learning.
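A generic stand-in for the topic-model step is easy to sketch with off-the-shelf latent Dirichlet allocation on a tumor-by-context count matrix. The paper's hierarchical model and MCMC sampler are not reproduced here, and the dimensions below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(7)
n_tumors, n_contexts, n_topics = 200, 96, 5    # e.g., 96 trinucleotide contexts

# synthetic tumor-by-context count matrix drawn from a mixture of topics
true_topics = rng.dirichlet(np.ones(n_contexts) * 0.1, size=n_topics)
weights = rng.dirichlet(np.ones(n_topics), size=n_tumors)
X = np.vstack([rng.multinomial(300, w @ true_topics) for w in weights])

lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
theta = lda.fit_transform(X)                # tumor-level topic loadings
print(theta.shape, lda.components_.shape)   # (200, 5) and topic-context matrix (5, 96)
```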
Subject(s)
Algorithms, Bayes Theorem, Mutation, Neoplasms, Humans, Neoplasms/genetics, Statistical Models, Skin Neoplasms/genetics
ABSTRACT
Time-to-event data are often recorded on a discrete scale with multiple, competing risks as potential causes of the event. In this context, applying continuous survival analysis methods with a single risk leads to biased estimation. We therefore propose the multivariate Bernoulli detector for competing risks with discrete times, involving a multivariate change point model on the cause-specific baseline hazards. Through the prior on the number of change points and their locations, we impose dependence between change points across risks, as well as allowing data-driven learning of their number. Then, conditionally on these change points, a multivariate Bernoulli prior is used to infer which risks are involved. The focus of posterior inference is on cause-specific hazard rates and dependence across risks. Such dependence is often present due to subject-specific changes across time that affect all risks. Full posterior inference is performed through a tailored local-global Markov chain Monte Carlo (MCMC) algorithm, which exploits a data augmentation trick and MCMC updates from nonconjugate Bayesian nonparametric methods. We illustrate our model in simulations and on ICU data, comparing its performance with existing approaches.
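For orientation, the empirical quantity that such a model smooths and links across risks can be computed directly. A minimal sketch of nonparametric discrete-time cause-specific hazards, on synthetic data rather than the ICU application:

```python
import numpy as np

def cause_specific_hazards(times, causes, n_causes, horizon):
    """Nonparametric discrete-time cause-specific hazards:
    h_k(t) = (# events of cause k at time t) / (# at risk at time t).
    cause 0 encodes right-censoring."""
    h = np.zeros((horizon, n_causes))
    for t in range(1, horizon + 1):
        at_risk = np.sum(times >= t)
        if at_risk == 0:
            break
        for k in range(1, n_causes + 1):
            h[t - 1, k - 1] = np.sum((times == t) & (causes == k)) / at_risk
    return h

rng = np.random.default_rng(8)
times = rng.integers(1, 30, 500)
causes = rng.integers(0, 3, 500)    # 0 = censored, 1..2 = competing causes
print(cause_specific_hazards(times, causes, 2, 30)[:5])
```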
Subject(s)
Algorithms, Bayes Theorem, Computer Simulation, Markov Chains, Monte Carlo Method, Humans, Survival Analysis, Statistical Models, Multivariate Analysis, Biometry/methods
ABSTRACT
BACKGROUND: Clinical trials are increasingly using Bayesian methods for their design and analysis. Inference in Bayesian trials typically uses simulation-based approaches such as Markov Chain Monte Carlo methods. Markov Chain Monte Carlo has high computational cost and can be complex to implement. The Integrated Nested Laplace Approximations algorithm provides approximate Bayesian inference without the need for computationally complex simulations, making it more efficient than Markov Chain Monte Carlo. The practical properties of Integrated Nested Laplace Approximations compared to Markov Chain Monte Carlo have not been considered for clinical trials. Using data from a published clinical trial, we aim to investigate whether Integrated Nested Laplace Approximations is a feasible and accurate alternative to Markov Chain Monte Carlo and provide practical guidance for trialists interested in Bayesian trial design. METHODS: Data from an international Bayesian multi-platform adaptive trial that compared therapeutic-dose anticoagulation with heparin to usual care in non-critically ill patients hospitalized for COVID-19 were used to fit Bayesian hierarchical generalized mixed models. Integrated Nested Laplace Approximations was compared to two Markov Chain Monte Carlo algorithms, implemented in the software JAGS and Stan, using packages available in the statistical software R. Seven outcomes were analysed: organ support-free days (an ordinal outcome), five binary outcomes related to survival and length of hospital stay, and a time-to-event outcome. The posterior distributions for the treatment and sex effects and the variances for the hierarchical effects of age, site and time period were obtained. We summarized these posteriors by calculating the means, standard deviations and the 95% equal-tailed credible intervals and presenting the results graphically. The computation time for each algorithm was recorded. RESULTS: The average overlap of the 95% credible interval for the treatment and sex effects estimated using Integrated Nested Laplace Approximations was 96% and 97.6% compared with Stan, respectively. The graphical posterior densities for these effects overlapped for all three algorithms. The posterior means for the variance of the hierarchical effects of age, site and time estimated using Integrated Nested Laplace Approximations are within the 95% credible intervals estimated using Markov Chain Monte Carlo, but the average overlap of the credible intervals is lower: 77%, 85.6% and 91.3%, respectively, for Integrated Nested Laplace Approximations compared to Stan. Integrated Nested Laplace Approximations and Stan were easily implemented in clear, well-established packages in R, while JAGS required the direct specification of the model. Integrated Nested Laplace Approximations was between 85 and 269 times faster than Stan and 26 and 1852 times faster than JAGS. CONCLUSION: Integrated Nested Laplace Approximations could reduce the computational complexity of Bayesian analysis in clinical trials as it is easy to implement in R, substantially faster than Markov Chain Monte Carlo methods implemented in JAGS and Stan, and provides near-identical approximations to the posterior distributions for the treatment effect.
Integrated Nested Laplace Approximations was less accurate when estimating the posterior distribution for the variance of hierarchical effects, particularly for the proportional odds model, and future work should determine if the Integrated Nested Laplace Approximations algorithm can be adjusted to improve this estimation.
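The headline comparison metric, the average overlap of 95% credible intervals, can be sketched generically. The abstract does not state the exact normalization, so the intersection-over-union definition below is one plausible assumption, and the posterior draws are stand-ins.

```python
import numpy as np

def interval_overlap(a, b):
    """Overlap of two credible intervals as |intersection| / |union|
    (one plausible definition; the paper may normalize differently)."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    inter = max(0.0, hi - lo)
    union = max(a[1], b[1]) - min(a[0], b[0])
    return inter / union

def ci95(samples):
    return np.percentile(samples, [2.5, 97.5])

rng = np.random.default_rng(9)
inla_like = rng.normal(0.50, 0.10, 10_000)   # stand-ins for INLA marginals
mcmc_like = rng.normal(0.52, 0.11, 10_000)   # stand-ins for MCMC draws
print(f"overlap = {interval_overlap(ci95(inla_like), ci95(mcmc_like)):.2f}")
```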
Subject(s)
Algorithms, Bayes Theorem, Markov Chains, Monte Carlo Method, Humans, Male, Anticoagulants/therapeutic use, COVID-19, Female, Heparin/therapeutic use, SARS-CoV-2, Research Design
ABSTRACT
Indirect mechanisms of cancer immunotherapies result in delayed treatment effects that vary among patients. Consequently, the use of the log-rank test in trial design and analysis can lead to significant power loss and pose additional challenges for interim decisions in adaptive designs. In this paper, we describe patients' survival using a piecewise proportional hazards model with a random lag time and propose an adaptive promising zone design for cancer immunotherapy with heterogeneous delayed effects. We provide solutions for calculating conditional power and adjusting the critical value for the log-rank test with interim data. We divide the sample space into three zones (unfavourable, promising, and favourable) based on re-estimates of the survival parameters, the log-rank test statistic at the interim analysis, and the initial and maximum sample sizes. If the interim results fall into the promising zone, the sample size is increased; otherwise, it remains unchanged. We show through simulations that our proposed approach has greater overall power than the fixed sample design and power similar to that of the matched group sequential trial. Furthermore, we confirm that the critical value adjustment effectively controls type I error rate inflation. Finally, we provide recommendations on the implementation of our proposed method in cancer immunotherapy trials.
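A standard way to compute conditional power for the log-rank test at an interim look uses the Brownian-motion approximation. The sketch below follows that textbook route; it ignores the paper's random lag-time refinement, and the hazard ratio and event count are hypothetical.

```python
import numpy as np
from scipy.stats import norm

def conditional_power(z1, t1, theta, z_alpha=1.96):
    """Conditional power under the Brownian-motion approximation of the
    log-rank statistic: z1 = interim Z value, t1 = information fraction,
    theta = assumed drift (expected Z at full information)."""
    num = z_alpha - z1 * np.sqrt(t1) - theta * (1.0 - t1)
    return 1.0 - norm.cdf(num / np.sqrt(1.0 - t1))

def drift_from_hr(hr, d_total):
    # Schoenfeld approximation: expected final log-rank Z with 1:1 allocation
    return -np.log(hr) * np.sqrt(d_total / 4.0)

theta = drift_from_hr(0.75, d_total=400)   # assumed effect, delay ignored
for z1 in (0.5, 1.0, 1.5):
    cp = conditional_power(z1, t1=0.5, theta=theta)
    print(f"interim Z = {z1:.1f} -> conditional power {cp:.2f}")
```

In a promising zone design, interim results with moderate conditional power like these would trigger a sample size increase, while clearly favourable or unfavourable results would leave the design unchanged.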
ABSTRACT
Multi-regional clinical trials (MRCTs) have become increasingly common because they support simultaneous global drug development. After an MRCT, a consistency assessment needs to be conducted to evaluate regional efficacy. The weighted Z-test approach is a common consistency assessment approach, but its weighting parameter W lacks a clear practical interpretation; the discounting factor approach improves on the weighted Z-test approach by converting the estimation of W into the estimation of a discounting factor D. However, the discounting factor approach is a frequentist approach in which D is fixed at a certain value; the variation of D is not considered, which may lead to unreasonable results. In this paper, we propose a Bayesian approach based on D to evaluate the treatment effect for the target region in an MRCT, in which the variation of D is taken into account. Specifically, we first treated D as random rather than fixed and specified a beta distribution for it. Based on simulation results, we further adjusted the Bayesian approach. The application of the proposed approach is illustrated by Markov Chain Monte Carlo simulation.
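A minimal Monte Carlo sketch of treating the discounting factor D as Beta-distributed rather than fixed might look as follows. The effect draws and Beta hyperparameters are stand-ins, not the paper's adjusted procedure.

```python
import numpy as np

rng = np.random.default_rng(10)

# posterior draws for the overall and target-region treatment effects
# (stand-ins; in practice these come from the trial's Bayesian model)
delta_all = rng.normal(0.30, 0.05, 100_000)
delta_reg = rng.normal(0.24, 0.10, 100_000)

# discounting factor D treated as random with a Beta prior rather than fixed
a, b = 4.0, 4.0                  # Beta(4, 4): centred at 0.5, hypothetical choice
D = rng.beta(a, b, 100_000)

# probability that the regional effect retains at least D of the overall effect
consistency = np.mean(delta_reg > D * delta_all)
print(f"P(regional effect > D x overall effect) = {consistency:.3f}")
```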
ABSTRACT
Reproductive performance is a key determinant of cow longevity in a pasture-based, seasonal dairy system. Unfortunately, direct fertility phenotypes such as intercalving interval or pregnancy rate tend to have low heritabilities and occur relatively late in an animal's life. In contrast, age at puberty (AGEP) is a moderately heritable, early-in-life trait that may be estimated using an animal's age at first measured elevation in blood plasma progesterone (AGEP4) concentrations. Understanding the genetic architecture of AGEP4 in addition to genetic relationships between AGEP4 and fertility traits in lactating cows is important, as is its relationship with body size in the growing animal. Thus, the objectives of this research were 3-fold. First, to estimate the genetic and phenotypic (co)variances between AGEP4 and subsequent fertility during first and second lactations. Second, to quantify the associations between AGEP4 and height, length, and BW measured when animals were approximately 11 mo old (standard deviation = 0.5). Third, to identify genomic regions that are likely to be associated with variation in AGEP4. We measured AGEP4, height, length, and BW in approximately 5,000 Holstein-Friesian or Holstein-Friesian × Jersey crossbred yearling heifers across 54 pasture-based herds managed in seasonal calving farm systems. We also obtained calving rate (CR42, success or failure to calve within the first 42 d of the seasonal calving period), breeding rate (PB21, success or failure to be presented for breeding within the first 21 d of the seasonal breeding period) and pregnancy rate (PR42, success or failure to become pregnant within the first 42 d of the seasonal breeding period) phenotypes from their first and second lactations. The animals were genotyped using the Weatherby's Versa 50K SNP array (Illumina, San Diego, CA). The estimated heritabilities of AGEP4, height, length, and BW were 0.34 (90% credibility interval [CRI]: 0.30, 0.37), 0.28 (90% CRI: 0.25, 0.31), 0.21 (90% CRI: 0.18, 0.23), and 0.33 (90% CRI: 0.30, 0.36), respectively. In contrast, the heritabilities of CR42, PB21 and PR42 were all <0.05 in both first and second lactations. The genetic correlations between AGEP4 and these fertility traits were generally moderate, ranging from 0.11 to 0.60, whereas genetic correlations between AGEP4 and yearling body-conformation traits ranged from 0.02 to 0.28. Our GWAS highlighted a genomic window on chromosome 5 that was strongly associated with variation in AGEP4. We also identified 4 regions, located on chromosomes 14, 6, 1, and 11 (in order of decreasing importance), that exhibited suggestive associations with AGEP4. Our results show that AGEP4 is a reasonable predictor of estimated breeding values for fertility traits in lactating cows. Although the GWAS provided insights into genetic mechanisms underpinning AGEP4, further work is required to test genomic predictions of fertility that use this information.
Subject(s)
Fertility, Genome-Wide Association Study, Lactation, Animals, Cattle/genetics, Fertility/genetics, Female, Lactation/genetics, Phenotype, Sexual Maturation/genetics, Pregnancy, Genotype
ABSTRACT
Benzophenone (BP) and BP derivatives (BPDs) are widely used as ultraviolet (UV) stabilizers in food packaging materials and as photoinitiators in UV-curable inks for printing on food-contact materials. However, our knowledge regarding the sources and risks of dietary exposure to BP and BPDs in cereals remains limited, which prompted us to conduct this study. We measured the levels of BP and nine BPDs (BP-1, BP-2, BP-3, BP-8, 2-hydroxybenzophenone, 4-hydroxybenzophenone, 4-methylbenzophenone (4-MBP), methyl-2-benzoylbenzoate, and 4-benzoylbiphenyl) in three types of cereals (rice flour, oatmeal, and cornflakes; 180 samples in total). A Bayesian Markov-chain Monte Carlo (MC) simulation approach was used to derive the posterior distributions of BP and BPD residues. This approach addresses the uncertainty in the probabilistic distribution of sampled data below the detection limit. Through MC simulation, we calculated the daily exposure levels of dietary BP and BPDs and the corresponding health risks. The results revealed the ubiquitous presence of BP, BP-3, and 4-MBP in cereals. Older adults (aged >65 years) had the highest (97.5th percentile) lifetime carcinogenic risk from BP exposure through cereals (9.41 × 10⁻⁷), whereas children aged 0-3 years had the highest (97.5th percentile) hazard indices for BPD exposure through cereals (2.5 × 10⁻²). Nevertheless, across age groups, the lifetime carcinogenic risks of BP exposure through cereals were acceptable, and the hazard indices for BPD exposure through cereals were <1. Therefore, BPD exposure through cereals may not be a health concern for individuals in Taiwan.
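The treatment of non-detects can be sketched with a simple truncated-distribution imputation inside a Monte Carlo exposure loop. This is a crude stand-in for the paper's Bayesian MCMC posterior, with a hypothetical detection limit and toy exposure factors.

```python
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(11)
lod = 5.0                                   # hypothetical detection limit, ng/g

# observed residues: values below the LOD are only known to be censored
raw = lognorm.rvs(s=0.8, scale=4.0, size=180, random_state=0)
detected = raw[raw >= lod]
n_censored = np.sum(raw < lod)

# simple MC treatment of non-detects: draw them from the fitted distribution
# truncated to [0, LOD) rather than substituting LOD/2
s, loc, scale = lognorm.fit(detected, floc=0)
u = rng.uniform(0, lognorm.cdf(lod, s, loc, scale), size=n_censored)
imputed = lognorm.ppf(u, s, loc, scale)

residues = np.concatenate([detected, imputed])          # ng/g
# toy exposure: residue x daily cereal intake (g/day) / body weight (kg)
intake = residues * rng.lognormal(np.log(2.0), 0.5, residues.size) / 60.0
print(f"97.5th percentile exposure: {np.percentile(intake, 97.5):.2f} (toy units)")
```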
ABSTRACT
Catastrophe bonds (cat bonds for short) are an alternative risk-transfer instrument used to transfer peril-specific financial risk from governments, financial institutions, or (re)insurers to the capital market. Current approaches for cat bond pricing are calibrated on seismic mainshocks and thus do not account for potential effects induced by earthquake sequences. This simplifying assumption implies that damage arises from mainshocks only, while aftershocks yield no damage. Post-earthquake field surveys reveal that this assumption is inaccurate. For example, in the 2011 Christchurch Earthquake sequence and the 2016-2017 Central Italy Earthquake sequence, aftershocks were responsible for higher economic losses than those caused by mainshocks. This article proposes a time-dependent aggregate loss model that takes into account seismicity clustering and damage accumulation effects in the computation of damage. The model is calibrated on the seismic events recorded during the recent 2016-2017 Central Italy Earthquake sequence. Furthermore, the effect of earthquake sequences on cat bond pricing is explored by applying the proposed model to five Italian municipalities. The investigation shows that neglecting time-dependency may lead to differences of up to 45% in the cat bond price compared to standard approaches.
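A schematic version of the pricing experiment is easy to simulate: compare the expected discounted payoff of a zero-coupon cat bond when aggregate losses come from mainshocks alone versus mainshocks plus clustered aftershocks. All frequencies, severities, and the attachment point below are invented for illustration and do not reflect the calibrated Italian model.

```python
import numpy as np

rng = np.random.default_rng(12)
n_sims, horizon, r = 20_000, 3.0, 0.02          # bond term in years, risk-free rate

def aggregate_loss(clustered):
    n_main = rng.poisson(0.10 * horizon)        # mainshocks over the bond term
    loss = rng.lognormal(1.0, 1.0, n_main).sum()
    if clustered:                               # aftershocks add correlated damage
        n_after = rng.poisson(3.0 * n_main)
        loss += rng.lognormal(-0.5, 1.0, n_after).sum()
    return loss

def cat_bond_price(clustered, attach=10.0, face=1.0):
    # zero-coupon bond: principal wiped out if aggregate loss exceeds attachment
    losses = np.array([aggregate_loss(clustered) for _ in range(n_sims)])
    payoff = np.where(losses > attach, 0.0, face)
    return np.exp(-r * horizon) * payoff.mean()

print(f"no clustering  : {cat_bond_price(False):.4f}")
print(f"with clustering: {cat_bond_price(True):.4f}")
```

The gap between the two prices plays the role of the time-dependency effect the article quantifies; damage accumulation is not modelled here.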