Results 1 - 20 of 24,007
1.
AAPS J; 26(3): 53, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38722435

ABSTRACT

The standard errors (SE) of the maximum likelihood estimates (MLE) of the population parameter vector in nonlinear mixed effect models (NLMEM) are usually estimated using the inverse of the Fisher information matrix (FIM). However, at a finite distance, i.e., far from the asymptotic regime, the FIM can underestimate the SE of NLMEM parameters. Alternatively, the standard deviation of the posterior distribution, obtained in Stan via the Hamiltonian Monte Carlo algorithm, has been shown to be a proxy for the SE, since, under some regularity conditions on the prior, the limiting distributions of the MLE and of the maximum a posteriori estimator in a Bayesian framework are equivalent. In this work, we develop a similar method using the Metropolis-Hastings (MH) algorithm in parallel to the stochastic approximation expectation maximisation (SAEM) algorithm, implemented in the saemix R package. We assess this method on different simulation scenarios and data from a real case study, comparing it to other SE computation methods. The simulation study shows that our method improves the results obtained with frequentist methods at finite distance. However, it performed poorly in a scenario with the high variability and correlations observed in the real case study, stressing the need for calibration.
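The principle that the spread of MH samples can stand in for the SE can be sketched on a toy model, a normal mean with a flat prior, where the posterior SD and the classical SE coincide exactly. This is an illustration of the idea only, not the saemix implementation; all names and settings are our own choices.

```python
import math
import random
import statistics

random.seed(1)

# Toy model: y_i ~ N(mu, 1) with a flat prior on mu. The posterior is
# N(mean(y), 1/n), so the posterior SD should reproduce the classical
# SE 1/sqrt(n), the square root of the inverse Fisher information.
n = 200
y = [random.gauss(2.0, 1.0) for _ in range(n)]
ybar = statistics.fmean(y)

def log_post(mu):
    # log-posterior up to an additive constant
    return -0.5 * sum((yi - mu) ** 2 for yi in y)

# Random-walk Metropolis-Hastings
mu_cur, lp_cur = ybar, log_post(ybar)
draws = []
for it in range(20000):
    mu_prop = mu_cur + random.gauss(0.0, 0.2)
    lp_prop = log_post(mu_prop)
    if math.log(random.random()) < lp_prop - lp_cur:
        mu_cur, lp_cur = mu_prop, lp_prop
    if it >= 2000:  # drop burn-in
        draws.append(mu_cur)

sd_mh = statistics.stdev(draws)      # MH-based proxy for the SE
se_fim = 1.0 / math.sqrt(n)          # FIM-based asymptotic SE
print(sd_mh, se_fim)
```

In this conjugate toy case the two quantities agree; the paper's point is that at finite distance in NLMEM the sampling-based quantity can be the more trustworthy of the two.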


Subjects
Algorithms, Computer Simulation, Monte Carlo Method, Nonlinear Dynamics, Uncertainty, Likelihood Functions, Bayes Theorem, Humans, Statistical Models
2.
PLoS One; 19(5): e0303276, 2024.
Article in English | MEDLINE | ID: mdl-38768166

ABSTRACT

Binary classification methods encompass various algorithms to categorize data points into two distinct classes. Binary prediction, in contrast, estimates the likelihood of a binary event occurring. We introduce a novel graphical and quantitative approach, the U-smile method, for assessing prediction improvement stratified by binary outcome class. The U-smile method utilizes a smile-like plot and novel coefficients to measure the relative and absolute change in prediction compared with the reference method. The likelihood-ratio test was used to assess the significance of the change in prediction. Logistic regression models using the Heart Disease dataset and generated random variables were employed to validate the U-smile method. The receiver operating characteristic (ROC) curve was used to compare the results of the U-smile method. The likelihood-ratio test demonstrated that the proposed coefficients consistently generated smile-shaped U-smile plots for the most informative predictors. The U-smile plot proved more effective than the ROC curve in comparing the effects of adding new predictors to the reference method. It effectively highlighted differences in model performance for both non-events and events. Visual analysis of the U-smile plots provided an immediate impression of the usefulness of different predictors at a glance. The U-smile method can guide the selection of the most valuable predictors. It can also be helpful in applications beyond prediction.
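The likelihood-ratio test used to assess prediction improvement is standard; a minimal sketch for one added predictor (1 degree of freedom), using the closed-form chi-square tail for df = 1. The log-likelihood values are made-up illustration numbers, not results from the Heart Disease dataset.

```python
import math

def lrt_1df(loglik_null, loglik_alt):
    """Likelihood-ratio test for one added parameter: the statistic
    2 * (ll_alt - ll_null) is referred to a chi-square(1) distribution,
    whose upper tail has the closed form erfc(sqrt(stat / 2))."""
    stat = 2.0 * (loglik_alt - loglik_null)
    p = math.erfc(math.sqrt(max(stat, 0.0) / 2.0))
    return stat, p

# Example: adding a predictor raises the log-likelihood from -120.0 to -116.5.
stat, p = lrt_1df(-120.0, -116.5)
print(stat, p)
```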


Subjects
ROC Curve, Humans, Logistic Models, Algorithms, Likelihood Functions, Heart Diseases
3.
BMC Med Res Methodol; 24(1): 111, 2024 May 10.
Article in English | MEDLINE | ID: mdl-38730436

ABSTRACT

BACKGROUND: A Generalized Linear Mixed Model (GLMM) is recommended for meta-analyzing diagnostic test accuracy studies (DTAs) based on aggregate or individual participant data. Since a GLMM does not have a closed-form likelihood function or parameter solutions, computational methods are conventionally used to approximate the likelihoods and obtain parameter estimates. The most commonly used computational methods are Iteratively Reweighted Least Squares (IRLS), the Laplace approximation (LA), and Adaptive Gauss-Hermite quadrature (AGHQ). Despite their wide use, it has not been clear how these computational methods compare and perform in the context of an aggregate data meta-analysis (ADMA) of DTAs. METHODS: We compared and evaluated the performance of three commonly used computational methods for GLMMs (the IRLS, the LA, and the AGHQ) via a comprehensive simulation study and real-life data examples, in the context of an ADMA of DTAs. By varying several parameters in our simulations, we assessed the performance of the three methods in terms of bias, root mean squared error, confidence interval (CI) width, coverage of the 95% CI, convergence rate, and computational speed. RESULTS: For most scenarios, especially when the meta-analytic data were not sparse (i.e., there were no or negligible studies with perfect diagnosis), the three computational methods were comparable for the estimation of sensitivity and specificity. However, the LA had the largest bias and root mean squared error for pooled sensitivity and specificity when the meta-analytic data were sparse. Moreover, the AGHQ took longer to converge than the other two methods, although it had the best convergence rate. CONCLUSIONS: We recommend that practitioners and researchers carefully choose an appropriate computational algorithm when fitting a GLMM to an ADMA of DTAs. We do not recommend the LA for sparse meta-analytic data sets. However, either the AGHQ or the IRLS can be used regardless of the characteristics of the meta-analytic data.
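The Laplace approximation at the centre of this comparison can be illustrated on a single cluster's marginal likelihood in a Bernoulli-logit random-intercept model, checked against brute-force quadrature. The model, data, and function names here are our own toy choices, not the paper's simulation settings or the IRLS/AGHQ implementations it evaluates.

```python
import math

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def log_integrand(b, y, eta, tau):
    # log p(y | b) + log N(b; 0, tau^2) for one Bernoulli-logit cluster
    ll = 0.0
    for yi in y:
        p = expit(eta + b)
        ll += yi * math.log(p) + (1 - yi) * math.log(1 - p)
    return ll - 0.5 * b * b / tau ** 2 - 0.5 * math.log(2 * math.pi * tau ** 2)

def laplace_marginal(y, eta, tau):
    # Newton iterations to find the mode b_hat of the log-integrand,
    # then the Laplace formula exp(h(b_hat)) * sqrt(2*pi / -h''(b_hat)).
    n, s = len(y), sum(y)
    b = 0.0
    for _ in range(50):
        p = expit(eta + b)
        grad = s - n * p - b / tau ** 2
        hess = -n * p * (1 - p) - 1.0 / tau ** 2
        b -= grad / hess
    p = expit(eta + b)
    hess = -n * p * (1 - p) - 1.0 / tau ** 2
    return math.exp(log_integrand(b, y, eta, tau)) * math.sqrt(2 * math.pi / -hess)

def quad_marginal(y, eta, tau, lo=-10.0, hi=10.0, m=20000):
    # brute-force trapezoid rule as the reference value of the integral
    step = (hi - lo) / m
    total = 0.0
    for i in range(m + 1):
        w = 0.5 if i in (0, m) else 1.0
        total += w * math.exp(log_integrand(lo + i * step, y, eta, tau))
    return total * step

y = [1, 1, 0, 1, 0, 1, 1, 1]
la, qd = laplace_marginal(y, 0.3, 1.0), quad_marginal(y, 0.3, 1.0)
print(la, qd)
```

With moderate cluster sizes the two values are close; the paper's sparse-data scenarios are exactly where this Gaussian approximation degrades.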


Subjects
Computer Simulation, Routine Diagnostic Tests, Meta-Analysis as Topic, Humans, Routine Diagnostic Tests/methods, Routine Diagnostic Tests/standards, Routine Diagnostic Tests/statistics & numerical data, Linear Models, Algorithms, Likelihood Functions, Sensitivity and Specificity
4.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38768225

ABSTRACT

Conventional supervised learning usually operates under the premise that data are collected from the same underlying population. However, challenges may arise when integrating new data from different populations, resulting in a phenomenon known as dataset shift. This paper focuses on prior probability shift, where the distribution of the outcome varies across datasets but the conditional distribution of features given the outcome remains the same. To tackle the challenges posed by such a shift, we propose an estimation algorithm that can efficiently combine information from multiple sources. Unlike existing methods that are restricted to discrete outcomes, the proposed approach accommodates both discrete and continuous outcomes. It also handles high-dimensional covariate vectors through variable selection using an adaptive least absolute shrinkage and selection operator penalty, producing efficient estimates that possess the oracle property. Moreover, a novel semiparametric likelihood ratio test is proposed to check the validity of prior probability shift assumptions by embedding the null conditional density function into Neyman's smooth alternatives (Neyman, 1937) and testing study-specific parameters. We demonstrate the effectiveness of our proposed method through extensive simulations and a real data example. The proposed methods serve as a useful addition to the repertoire of tools for dealing with dataset shift.
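The adaptive lasso's oracle-property intuition is easiest to see in the orthonormal-design special case, where it has a closed form: big coefficients are barely shrunk while small ones are set exactly to zero. The paper's estimator handles general designs and likelihoods; this sketch, with values of our own choosing, only illustrates the penalty's behaviour.

```python
def adaptive_lasso_orthonormal(ols, lam, gamma=1.0):
    """Adaptive lasso under an orthonormal design: soft-threshold each
    OLS coefficient b_j with a data-driven weight 1/|b_j|^gamma, so the
    effective threshold is small for strong signals and large for weak
    ones; this selective shrinkage underlies the oracle property."""
    out = []
    for b in ols:
        if b == 0.0:
            out.append(0.0)
            continue
        thresh = lam / abs(b) ** gamma
        mag = max(abs(b) - thresh, 0.0)
        out.append(mag if b > 0 else -mag)
    return out

# A strong signal survives nearly intact; weak ones are zeroed out.
coefs = adaptive_lasso_orthonormal([3.0, -0.2, 0.05], lam=0.1)
print(coefs)
```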


Subjects
Algorithms, Computer Simulation, Statistical Models, Probability, Humans, Likelihood Functions, Biometry/methods, Statistical Data Interpretation, Supervised Machine Learning
5.
Bull Math Biol; 86(6): 70, 2024 May 08.
Article in English | MEDLINE | ID: mdl-38717656

ABSTRACT

Practical limitations of quality and quantity of data can limit the precision of parameter identification in mathematical models. Model-based experimental design approaches have been developed to minimise parameter uncertainty, but the majority of these approaches have relied on first-order approximations of model sensitivity at a local point in parameter space. Practical identifiability approaches such as profile-likelihood have shown potential for quantifying parameter uncertainty beyond linear approximations. This research presents a genetic algorithm approach to optimise sample timing across various parameterisations of a demonstrative PK-PD model with the goal of aiding experimental design. The optimisation relies on a chosen metric of parameter uncertainty that is based on the profile-likelihood method. Additionally, the approach considers cases where multiple parameter scenarios may require simultaneous optimisation. The genetic algorithm approach was able to locate near-optimal sampling protocols for a wide range of sample number (n = 3-20), and it reduced the parameter variance metric by 33-37% on average. The profile-likelihood metric also correlated well with an existing Monte Carlo-based metric (with a worst-case r > 0.89), while reducing computational cost by an order of magnitude. The combination of the new profile-likelihood metric and the genetic algorithm demonstrate the feasibility of considering the nonlinear nature of models in optimal experimental design at a reasonable computational cost. The outputs of such a process could allow for experimenters to either improve parameter certainty given a fixed number of samples, or reduce sample quantity while retaining the same level of parameter certainty.
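The profile-likelihood notion of parameter uncertainty can be shown on a one-parameter exponential model, where the 95% interval is the set of rates whose log-likelihood lies within 3.841/2 of the maximum. This is a generic sketch under our own toy data, not the paper's PK-PD model or its genetic-algorithm metric.

```python
import math
import random

random.seed(7)
data = [random.expovariate(0.5) for _ in range(100)]  # exponential lifetimes
n, s = len(data), sum(data)

def loglik(lam):
    # exponential log-likelihood: n*log(lam) - lam * sum(t_i)
    return n * math.log(lam) - lam * s

lam_hat = n / s                  # MLE of the rate
ll_max = loglik(lam_hat)
DROP = 3.841 / 2.0               # chi-square(1) 95% cutoff, halved

def boundary(lo, hi):
    # Bisect for the rate at which the log-likelihood has dropped by DROP;
    # the drop is monotone on each side of lam_hat.
    f_lo = (ll_max - loglik(lo)) - DROP
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if (((ll_max - loglik(mid)) - DROP) > 0) == (f_lo > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

lower = boundary(1e-6, lam_hat)
upper = boundary(lam_hat, 10.0 * lam_hat)
print(lam_hat, (lower, upper))
```

Unlike a first-order (Wald) interval, this construction follows the actual curvature of the likelihood, which is the property the paper's uncertainty metric exploits.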


Subjects
Algorithms, Computer Simulation, Mathematical Concepts, Biological Models, Monte Carlo Method, Likelihood Functions, Humans, Drug Dose-Response Relationship, Research Design/statistics & numerical data, Genetic Models, Uncertainty
6.
PLoS One; 19(5): e0298638, 2024.
Article in English | MEDLINE | ID: mdl-38753595

ABSTRACT

Improved adaptive type-II progressive censoring schemes (IAT-II PCS) are increasingly being used to estimate parameters and reliability characteristics of lifetime distributions, leading to more accurate and reliable estimates. The logistic exponential distribution (LED), a flexible distribution with five hazard rate forms, is employed in several fields, including lifetime, financial, and environmental data. This research aims to enhance the accuracy and reliability estimation capabilities for the logistic exponential distribution under IAT-II PCS. By developing novel statistical inference methods, we can better understand the behavior of failure times, allow for more accurate decision-making, and improve the overall reliability of the model. In this research, we consider both classical and Bayesian techniques. The classical technique involves constructing maximum likelihood estimators of the model parameters and their asymptotic covariance matrix, followed by estimating the distribution's reliability using survival and hazard functions. The delta approach is used to create estimated confidence intervals for the model parameters. In the Bayesian technique, prior information about the LED parameters is used to estimate the posterior distribution of the parameters, which is derived using Bayes' theorem. The model's reliability is determined by computing the posterior predictive distribution of the survival or hazard functions. Extensive simulation studies and real-data applications assess the effectiveness of the proposed methods and evaluate their performance against existing methods.
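The delta-method machinery for reliability confidence intervals can be sketched on a plain exponential lifetime model: one parameter rather than the LED's two, and complete rather than progressively censored data, so this is only the shape of the argument with made-up lifetimes.

```python
import math

def reliability_ci(data, t, z=1.96):
    """Delta-method CI for the reliability S(t) = exp(-lam * t) under a
    one-parameter exponential lifetime model. Var(lam_hat) = lam^2 / n is
    the inverse observed Fisher information; the delta method scales it
    by the squared derivative (dS/dlam)^2 = (t * S)^2."""
    n = len(data)
    lam = n / sum(data)              # MLE of the rate
    s = math.exp(-lam * t)
    se = t * s * math.sqrt(lam * lam / n)
    return s, max(s - z * se, 0.0), min(s + z * se, 1.0)

lifetimes = [0.8, 1.5, 0.3, 2.2, 1.1, 0.6, 1.9, 0.7]
s_hat, lo, hi = reliability_ci(lifetimes, t=1.0)
print(s_hat, (lo, hi))
```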


Subjects
Bayes Theorem, Humans, Likelihood Functions, Statistical Models, Reproducibility of Results, Computer Simulation, Logistic Models
7.
Genet Sel Evol; 56(1): 35, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38698347

ABSTRACT

BACKGROUND: The theory of "metafounders" proposes a unified framework for relationships across base populations within breeds (e.g. unknown parent groups), and base populations across breeds (crosses), together with a sensible compatibility with genomic relationships. Considering metafounders might be advantageous in pedigree best linear unbiased prediction (BLUP) or single-step genomic BLUP. Existing methods to estimate relationships across metafounders Γ are not well adapted to highly unbalanced data, genotyped individuals far from base populations, or many unknown parent groups (within breed per year of birth). METHODS: We derive likelihood methods to estimate Γ. For a single metafounder, summary statistics of pedigree and genomic relationships allow deriving a cubic equation whose real root is the maximum likelihood (ML) estimate of Γ. This equation is tested with Lacaune sheep data. For several metafounders, we split the first derivative of the complete likelihood into a term related to Γ and a second term related to Mendelian sampling variances. Approximating the first derivative by its first term results in a pseudo-EM algorithm that iteratively updates the estimate of Γ by the corresponding block of the H-matrix. The method extends to complex situations with groups defined by year of birth, modelling the increase of Γ using estimates of the rate of increase of inbreeding (ΔF), resulting in an expanded Γ and in a pseudo-EM+ΔF algorithm. We compare these methods with the generalized least squares (GLS) method using simulated data: complex crosses of two breeds in equal or unsymmetrical proportions; and two breeds with 10 groups per year of birth within breed. We simulate genotyping in all generations or only in the last ones. RESULTS: For a single metafounder, the ML estimate for the Lacaune data corresponded to the maximum. For simulated data, when genotypes were spread across all generations, both the GLS and pseudo-EM(+ΔF) methods were accurate. With genotypes only available in the most recent generations, the GLS method was biased, whereas the pseudo-EM(+ΔF) approach yielded more accurate and unbiased estimates. CONCLUSIONS: We derived ML, pseudo-EM and pseudo-EM+ΔF methods to estimate Γ in many realistic settings. Estimates are accurate in real and simulated data and have a low computational cost.


Assuntos
Cruzamento , Modelos Genéticos , Linhagem , Animais , Funções Verossimilhança , Cruzamento/métodos , Algoritmos , Ovinos/genética , Genômica/métodos , Simulação por Computador , Masculino , Feminino , Genótipo
8.
J Parasitol; 110(3): 186-194, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38700436

ABSTRACT

Leech specimens of the genus Pontobdella (Hirudinida: Piscicolidae) were found off the coast of the state of Oaxaca (Pacific) as well as in Veracruz and Tabasco (Gulf of Mexico), Mexico. Based on the specimens collected in Oaxaca, a redescription of Pontobdella californiana is provided, with emphasis on the differences in the reproductive organs from the original description of the species. In addition, leech cocoons assigned to P. californiana were found attached to items hauled by gillnets and studied using scanning electron microscopy and molecular approaches. Samples of Pontobdella macrothela were found in both Pacific and Atlantic oceans, representing new geographic records. The phylogenetic position of P. californiana is investigated for the first time, and with the addition of Mexican samples of both species, the phylogenetic relationships within Pontobdella are reinvestigated. Parsimony and maximum-likelihood phylogenetic analyses were based on mitochondrial (cytochrome oxidase subunit I [COI] and 12S rRNA) and nuclear (18S rRNA and 28S rRNA) DNA sequences. Based on our results, we confirm the monophyly of Pontobdella and the pantropical distribution of P. macrothela with a new record in the Tropical Eastern Pacific.


Subjects
Leeches, Scanning Electron Microscopy, Phylogeny, Animals, Leeches/classification, Leeches/genetics, Leeches/anatomy & histology, Mexico, Scanning Electron Microscopy/veterinary, Pacific Ocean, Atlantic Ocean, Ribosomal DNA/chemistry, 28S Ribosomal RNA/genetics, Fish Diseases/parasitology, Gulf of Mexico/epidemiology, Electron Transport Complex IV/genetics, Ectoparasitic Infestations/parasitology, Ectoparasitic Infestations/veterinary, 18S Ribosomal RNA/genetics, Molecular Sequence Data, Sequence Alignment/veterinary, Likelihood Functions, Fishes/parasitology
9.
Nan Fang Yi Ke Da Xue Xue Bao; 44(4): 689-696, 2024 Apr 20.
Article in Chinese | MEDLINE | ID: mdl-38708502

ABSTRACT

OBJECTIVE: To construct a nonparametric proportional hazards (PH) model for mixed informative interval-censored failure time data for predicting the risks in heart transplantation surgeries. METHODS: Based on the complexity of mixed informative interval-censored failure time data, we considered the interdependent relationship between the failure time process and the observation time process, constructed a nonparametric PH model to describe the nonlinear relationship between the risk factors and heart transplant surgery risks, and proposed a two-step sieve estimation maximum likelihood algorithm. An estimation equation was established to estimate frailty variables using the observation process model. I-splines and B-splines were used to approximate the unknown baseline hazard function and the nonparametric function, respectively, to obtain the working likelihood function in the sieve space. The partial derivative of the model parameters was used to obtain the scoring equation. The maximum likelihood estimation of the parameters was obtained by solving the scoring equation, and a function curve of the impact of risk factors on the risk of heart transplantation surgery was drawn. RESULTS: Simulation experiments suggested that the estimates obtained by the proposed method were consistent and asymptotically efficient under various settings, with good fitting effects. Analysis of heart transplant surgery data showed that the donor's age had a positive linear relationship with the surgical risk. The impact of the recipient's age at disease onset increased at first and then stabilized, but increased again at older ages. The donor-recipient age difference had a positive linear relationship with the surgical risk of heart transplantation. CONCLUSION: The nonparametric PH model established in this study can be used for predicting the risks in heart transplantation surgery and exploring the functional relationship between the surgery risks and the risk factors.


Subjects
Heart Transplantation, Proportional Hazards Models, Humans, Risk Factors, Algorithms, Likelihood Functions
10.
Theor Appl Genet; 137(6): 134, 2024 May 16.
Article in English | MEDLINE | ID: mdl-38753078

ABSTRACT

The standard approach to variance component estimation in linear mixed models for alpha designs is the residual maximum likelihood (REML) method. One drawback of the REML method in the context of incomplete block designs is that the block variance may be estimated as zero, which can compromise the recovery of inter-block information and hence reduce the accuracy of treatment effects estimation. Due to the development of statistical and computational methods, there is increasing interest in adopting hierarchical approaches to analysis. In order to increase the precision of the analysis of individual trials laid out as alpha designs, we propose creating an objectively informed prior distribution for the variance components for replicates, blocks and plots, based on the results of previous (historical) trials. We propose different modelling approaches for the prior distributions and evaluate the effectiveness of the hierarchical approach compared to the REML method, which is classically used for analysing individual trials in two-stage approaches for multi-environment trials.


Subjects
Genetic Models, Likelihood Functions, Linear Models, Computer Simulation, Statistical Models
11.
Nat Commun; 15(1): 4240, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762491

ABSTRACT

Despite a wealth of studies documenting prey responses to perceived predation risk, researchers have only recently begun to consider how prey integrate information from multiple cues in their assessment of risk. We conduct a systematic review and meta-analysis of studies that experimentally manipulated perceived predation risk in birds and evaluate support for three alternative models of cue integration: redundancy/equivalence, enhancement, and antagonism. One key insight from our analysis is that the theories currently applied to study cue integration in animals are incomplete: they specify the effects of increasing information level on the mean, but not the variance, of responses. In contrast, we show that providing multiple complementary cues of predation risk simultaneously does not affect the mean response. Instead, as information richness increases, populations appear to assess risk more accurately, resulting in lower among-population variance in response to manipulations of perceived predation risk. We show that this may arise via a statistical process called maximum-likelihood estimation (MLE) integration. Our meta-analysis illustrates how explicit consideration of variance in responses can yield important biological insights.
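MLE (inverse-variance) integration of independent Gaussian cues can be written in a few lines; note that the combined variance falls below either single-cue variance, mirroring the reduced among-population variance reported above. The cue values here are illustrative only.

```python
def mle_integrate(estimates, variances):
    """Maximum-likelihood (inverse-variance) integration of independent
    Gaussian cues: the combined estimate is the precision-weighted mean
    and the combined variance 1 / sum(1/v) is never larger than the
    smallest single-cue variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    combined = sum(w * e for w, e in zip(weights, estimates)) / total
    return combined, 1.0 / total

# Two cues of predation risk with different reliabilities:
est, var = mle_integrate([1.0, 2.0], [0.5, 1.0])
print(est, var)
```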


Subjects
Birds, Predatory Behavior, Animals, Predatory Behavior/physiology, Birds/physiology, Risk Assessment, Cues (Psychology), Food Chain, Likelihood Functions
12.
Sci Rep; 14(1): 11373, 2024 May 18.
Article in English | MEDLINE | ID: mdl-38762564

ABSTRACT

There are some discrepancies about the superiority of off-pump coronary artery bypass grafting (CABG) surgery over the conventional cardiopulmonary bypass (on-pump) approach. The aim of this study was to estimate the risk ratio of mortality for off-pump coronary bypass compared with on-pump surgery using a causal model known as collaborative targeted maximum likelihood estimation (C-TMLE). Data from the Tehran Heart Cohort study from 2007 to 2020 were used. C-TMLE, targeted maximum likelihood estimation (TMLE), and propensity score (PS) adjustment methods were used to estimate the causal risk ratio, adjusting for a minimum sufficient set of confounders, and the results were compared. Among 24,883 participants (73.6% male), 5566 patients died during an average of 8.2 years of follow-up. The risk ratio estimates (95% confidence intervals) by the unadjusted log-binomial regression model, PS adjustment, TMLE, and C-TMLE methods were 0.86 (0.78-0.95), 0.88 (0.80-0.97), 0.88 (0.80-0.97), and 0.87 (0.85-0.89), respectively. This study provides evidence for a protective effect of off-pump surgery on mortality risk for up to 8 years in diabetic and non-diabetic patients.
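For reference, the crude (unadjusted) risk ratio and its Wald confidence interval from a 2x2 table look like this. The counts below are hypothetical, not the Tehran Heart Cohort data, and the paper's TMLE/C-TMLE estimates additionally adjust for confounders, which no 2x2 summary can do.

```python
import math

def risk_ratio_ci(a, n1, b, n0, z=1.96):
    """Crude risk ratio with a Wald CI built on the log scale:
    a deaths among n1 exposed, b deaths among n0 unexposed."""
    rr = (a / n1) / (b / n0)
    se_log = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n0)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

# Hypothetical counts, not the Tehran Heart Cohort data:
rr, lo, hi = risk_ratio_ci(200, 1000, 230, 1000)
print(rr, (lo, hi))
```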


Subjects
Off-Pump Coronary Artery Bypass, Humans, Male, Off-Pump Coronary Artery Bypass/adverse effects, Off-Pump Coronary Artery Bypass/mortality, Female, Middle Aged, Aged, Likelihood Functions, Coronary Artery Bypass/adverse effects, Coronary Artery Bypass/mortality, Iran/epidemiology, Coronary Artery Disease/surgery, Coronary Artery Disease/mortality, Treatment Outcome, Propensity Score, Cardiopulmonary Bypass/adverse effects
13.
Biometrics; 80(2), 2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38563532

ABSTRACT

Deep learning has continuously attained huge success in diverse fields, while its application to survival data analysis remains limited and deserves further exploration. For the analysis of current status data, a deep partially linear Cox model is proposed to circumvent the curse of dimensionality. Modeling flexibility is attained by using deep neural networks (DNNs) to accommodate nonlinear covariate effects and monotone splines to approximate the baseline cumulative hazard function. We establish the convergence rate of the proposed maximum likelihood estimators. Moreover, we derive that the finite-dimensional estimator for treatment covariate effects is $\sqrt{n}$-consistent, asymptotically normal, and attains semiparametric efficiency. Finally, we demonstrate the performance of our procedures through extensive simulation studies and application to real-world data on news popularity.


Subjects
Proportional Hazards Models, Likelihood Functions, Survival Analysis, Computer Simulation, Linear Models
14.
Genome Med; 16(1): 50, 2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38566210

ABSTRACT

BACKGROUND: Mitochondria play essential roles in tumorigenesis; however, little is known about the contribution of mitochondrial DNA (mtDNA) to esophageal squamous cell carcinoma (ESCC). Whole-genome sequencing (WGS) is by far the most efficient technology to fully characterize the molecular features of mtDNA; however, due to the high redundancy and heterogeneity of mtDNA in regular WGS data, methods for mtDNA analysis are far from satisfactory. METHODS: Here, we developed a likelihood-based method dMTLV to identify low-heteroplasmic mtDNA variants. In addition, we described fNUMT, which can simultaneously detect non-reference nuclear sequences of mitochondrial origin (non-ref NUMTs) and their derived artifacts. Using these new methods, we explored the contribution of mtDNA to ESCC utilizing the multi-omics data of 663 paired tumor-normal samples. RESULTS: dMTLV outperformed the existing methods in sensitivity without sacrificing specificity. The verification using Nanopore long-read sequencing data showed that fNUMT has superior specificity and more accurate breakpoint identification than the current methods. Leveraging the new method, we identified a significant association between the ESCC overall survival and the ratio of mtDNA copy number of paired tumor-normal samples, which could be potentially explained by the differential expression of genes enriched in pathways related to metabolism, DNA damage repair, and cell cycle checkpoint. Additionally, we observed that the expression of CBWD1 was downregulated by the non-ref NUMTs inserted into its intron region, which might provide precursor conditions for the tumor cells to adapt to a hypoxic environment. Moreover, we identified a strong positive relationship between the number of mtDNA truncating mutations and the contribution of signatures linked to tumorigenesis and treatment response. CONCLUSIONS: Our new frameworks promote the characterization of mtDNA features, which enables the elucidation of the landscapes and roles of mtDNA in ESCC essential for extending the current understanding of ESCC etiology. dMTLV and fNUMT are freely available from https://github.com/sunnyzxh/dMTLV and https://github.com/sunnyzxh/fNUMT, respectively.
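The likelihood comparison at the heart of low-heteroplasmy variant calling can be sketched as a binomial likelihood ratio against a sequencing-error-only null. This is a generic illustration of the idea, not the actual dMTLV model; the error rate and counts are invented for the example.

```python
import math

def heteroplasmy_llr(alt, depth, err=0.002):
    """Binomial log-likelihood ratio for a candidate low-heteroplasmy
    mtDNA variant: H0 says the alt reads are sequencing error (rate err),
    H1 fits the allele fraction freely. Returns 2 * log-LR, comparable
    to a chi-square(1) threshold."""
    p = alt / depth
    if p <= err:
        return 0.0  # no evidence beyond the error model
    llr = alt * math.log(p / err)
    if alt < depth:
        llr += (depth - alt) * math.log((1 - p) / (1 - err))
    return 2.0 * llr

# 25 alt reads in 5000x depth (0.5% heteroplasmy) vs 10 in 5000x:
print(heteroplasmy_llr(25, 5000), heteroplasmy_llr(10, 5000))
```

At 5000x depth, a 0.5% allele fraction clears the usual chi-square cutoff while a fraction at the error rate does not, which is why deep coverage is what makes low-heteroplasmy calling feasible.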


Subjects
Esophageal Neoplasms, Esophageal Squamous Cell Carcinoma, Humans, Esophageal Squamous Cell Carcinoma/genetics, Mitochondrial DNA/genetics, Mitochondrial DNA/analysis, Mitochondrial DNA/metabolism, Esophageal Neoplasms/genetics, Esophageal Neoplasms/metabolism, Esophageal Neoplasms/pathology, Likelihood Functions, Mitochondria/genetics, Carcinogenesis
15.
Bioinformatics; 40(5), 2024 May 02.
Article in English | MEDLINE | ID: mdl-38688661

ABSTRACT

MOTIVATION: Genome partitioning of quantitative genetic variation is useful for dissecting the genetic architecture of complex traits. However, existing methods, such as Haseman-Elston regression and linkage disequilibrium score regression, often face limitations when handling extensive farm animal datasets, as demonstrated in this study. RESULTS: To overcome this challenge, we present MPH, a novel software tool designed for efficient genome partitioning analyses using restricted maximum likelihood. The computational efficiency of MPH primarily stems from two key factors: the utilization of stochastic trace estimators and the comprehensive implementation of parallel computation. Evaluations with simulated and real datasets demonstrate that MPH achieves comparable accuracy and significantly enhances convergence, speed, and memory efficiency compared to widely used tools like GCTA and LDAK. These advancements facilitate large-scale, comprehensive analyses of complex genetic architectures in farm animals. AVAILABILITY AND IMPLEMENTATION: The MPH software is available at https://jiang18.github.io/mph/.
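A stochastic trace estimator of the kind MPH relies on can be illustrated with Hutchinson's method: for probe vectors z with independent mean-zero, unit-variance entries, E[z'Az] = tr(A), so traces of huge matrices can be estimated from matrix-vector products alone. The tiny matrix here is a stand-in for the large genomic relationship matrices involved, and the code is our own sketch, not MPH's implementation.

```python
import random

random.seed(0)

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def hutchinson_trace(A, n_probe=2000):
    """Hutchinson stochastic trace estimator with Rademacher (+/-1)
    probes: averages z'Az over random z, which is unbiased for tr(A)."""
    n = len(A)
    total = 0.0
    for _ in range(n_probe):
        z = [random.choice((-1.0, 1.0)) for _ in range(n)]
        Az = matvec(A, z)
        total += sum(zi * azi for zi, azi in zip(z, Az))
    return total / n_probe

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 0.5],
     [0.0, 0.5, 2.0]]
tr_est = hutchinson_trace(A)
print(tr_est)  # exact trace is 9.0
```

In REML the matrices whose traces are needed are never formed explicitly; only products with probe vectors are computed, which is where the memory and speed gains come from.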


Subjects
Genetic Variation, Software, Animals, Genome, Quantitative Trait Loci, Likelihood Functions, Linkage Disequilibrium, Genomics/methods
16.
Virol J; 21(1): 84, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-38600521

ABSTRACT

BACKGROUND: MERS-CoV is a coronavirus known to cause severe disease in humans, taxonomically classified under the subgenus Merbecovirus. Recent findings showed that the close relatives of MERS-CoV infecting vespertilionid bats (family Vespertilionidae), named NeoCoV and PDF-2180, use their hosts' ACE2 as their entry receptor, unlike the DPP4 receptor usage of MERS-CoV. Previous research suggests that this difference in receptor usage between these related viruses is a result of recombination. However, the precise location of the recombination breakpoints and the details of the recombination event leading to the change of receptor usage remain unclear. METHODS: We used maximum likelihood-based phylogenetics and genetic similarity comparisons to characterise the evolutionary history of all complete Merbecovirus genome sequences. Recombination events were detected by multiple computational methods implemented in the Recombination Detection Program. To verify the influence of recombination, we inferred the phylogenetic relation of the merbecovirus genomes excluding recombinant segments and that of the viruses' receptor-binding domains and examined the level of congruency between the phylogenies. Finally, the geographic distribution of the genomes was inspected to identify the possible location where the recombination event occurred. RESULTS: Similarity plot analysis and the recombination-partitioned phylogenetic inference showed that MERS-CoV is highly similar to NeoCoV (and PDF-2180) across its whole genome except for the spike-encoding region. This was confirmed to be due to recombination by confidently detecting a recombination event between the proximal ancestor of MERS-CoV and a currently unsampled merbecovirus clade. Notably, the upstream recombination breakpoint was detected in the N-terminal domain and the downstream breakpoint at the S2 subunit of spike, indicating that the acquired recombined fragment includes the receptor-binding domain. A tanglegram comparison further confirmed that the receptor binding domain-encoding region of MERS-CoV was acquired via recombination. Geographic mapping analysis on sampling sites suggests the possibility that the recombination event occurred in Africa. CONCLUSION: Together, our results suggest that recombination can lead to receptor switching of merbecoviruses during circulation in bats. These results are useful for future epidemiological assessments and surveillance to understand the spillover risk of bat coronaviruses to the human population.
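A similarity plot reduces to windowed percent identity along an alignment: a sudden drop against one reference (and a rise against another) flags a candidate recombinant block. Below is a toy sketch with an artificial divergent block, not real merbecovirus sequence data.

```python
def sliding_identity(query, ref, window=60, step=20):
    """Percent identity between two aligned sequences in sliding windows,
    the computation behind similarity plots used to spot recombination
    breakpoints (regions where the closest relative suddenly changes)."""
    assert len(query) == len(ref)
    out = []
    for start in range(0, len(query) - window + 1, step):
        q, r = query[start:start + window], ref[start:start + window]
        ident = sum(a == b for a, b in zip(q, r)) / window
        out.append((start, ident))
    return out

# Toy aligned genomes: identical except for a divergent middle block,
# mimicking a recombinant spike-like region.
ref = "A" * 200
query = "A" * 80 + "G" * 40 + "A" * 80
res = sliding_identity(query, ref)
for start, ident in res:
    print(start, round(ident, 2))
```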


Subjects
Chiroptera, Coronavirus Infections, Middle East Respiratory Syndrome Coronavirus, Animals, Humans, Middle East Respiratory Syndrome Coronavirus/genetics, Phylogeny, Likelihood Functions, Coronavirus Infections/veterinary, Coronavirus Infections/epidemiology, Genetic Recombination, Coronavirus Spike Glycoprotein/genetics, Coronavirus Spike Glycoprotein/metabolism
17.
PLoS Comput Biol; 20(4): e1012032, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38683863

ABSTRACT

Public health decisions must be made about when and how to implement interventions to control an infectious disease epidemic. These decisions should be informed by data on the epidemic as well as current understanding about the transmission dynamics. Such decisions can be posed as statistical questions about scientifically motivated dynamic models. Thus, we encounter the methodological task of building credible, data-informed decisions based on stochastic, partially observed, nonlinear dynamic models. This necessitates addressing the tradeoff between biological fidelity and model simplicity, and the reality of misspecification for models at all levels of complexity. We assess current methodological approaches to these issues via a case study of the 2010-2019 cholera epidemic in Haiti. We consider three dynamic models developed by expert teams to advise on vaccination policies. We evaluate previous methods used for fitting these models, and we demonstrate modified data analysis strategies leading to improved statistical fit. Specifically, we present approaches for diagnosing model misspecification and the consequent development of improved models. Additionally, we demonstrate the utility of recent advances in likelihood maximization for high-dimensional nonlinear dynamic models, enabling likelihood-based inference for spatiotemporal incidence data using this class of models. Our workflow is reproducible and extendable, facilitating future investigations of this disease system.
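Likelihood-based inference for stochastic, partially observed dynamic models typically rests on particle filter estimates of the log-likelihood. A minimal bootstrap filter on a toy linear-Gaussian state-space model is sketched below; the model, parameters, and particle count are our own stand-ins, not the Haiti cholera models or the plug-and-play software the paper uses.

```python
import math
import random

random.seed(42)

# Simulate a latent AR(1) state with noisy observations:
# x_t = 0.9 * x_{t-1} + N(0, 0.5^2),  y_t = x_t + N(0, 1)
T, phi_true, sx, sy = 60, 0.9, 0.5, 1.0
x, ys = 0.0, []
for _ in range(T):
    x = phi_true * x + random.gauss(0, sx)
    ys.append(x + random.gauss(0, sy))

def norm_logpdf(v, mu, sd):
    return -0.5 * math.log(2 * math.pi * sd * sd) - 0.5 * ((v - mu) / sd) ** 2

def pf_loglik(phi, n_part=500):
    """Bootstrap particle filter estimate of the log-likelihood: propagate
    particles through the state model, weight by the observation density,
    accumulate the per-step marginal likelihood, then resample."""
    parts = [0.0] * n_part
    ll = 0.0
    for y in ys:
        parts = [phi * p + random.gauss(0, sx) for p in parts]   # propagate
        logw = [norm_logpdf(y, p, sy) for p in parts]            # weight
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        ll += m + math.log(sum(w) / n_part)                      # step likelihood
        parts = random.choices(parts, weights=w, k=n_part)       # resample
    return ll

ll_good, ll_bad = pf_loglik(0.9), pf_loglik(0.0)
print(ll_good, ll_bad)
```

Maximizing such noisy likelihood estimates over parameters is what "likelihood maximization for high-dimensional nonlinear dynamic models" refers to; here the true dynamics (phi = 0.9) score visibly better than a misspecified static model (phi = 0).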


Subjects
Cholera, Haiti/epidemiology, Cholera/epidemiology, Cholera/transmission, Cholera/prevention & control, Humans, Computational Biology/methods, Epidemics/statistics & numerical data, Epidemics/prevention & control, Epidemiologic Models, Health Policy, Likelihood Functions, Stochastic Processes, Models, Statistical
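The likelihood-based inference this abstract describes for stochastic, partially observed dynamic models is commonly carried out with sequential Monte Carlo. As an illustrative sketch only — a toy stochastic SIR model with a Poisson reporting model, not any of the three Haiti models — a bootstrap particle filter producing a Monte Carlo estimate of the log-likelihood might look like:

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def sir_step(S, I, beta, gamma, N):
    """One day of a stochastic SIR model; returns new (S, I) and incidence."""
    p_inf = 1.0 - math.exp(-beta * I / N)   # per-susceptible infection prob
    p_rec = 1.0 - math.exp(-gamma)          # per-infective recovery prob
    new_inf = rng.binomial(S, p_inf)
    new_rec = rng.binomial(I, p_rec)
    return S - new_inf, I + new_inf - new_rec, new_inf

def pf_loglik(cases, beta, gamma, N=1000, I0=10, rho=0.5, J=300):
    """Bootstrap particle filter estimate of log p(cases | beta, gamma)."""
    S = np.full(J, N - I0)
    I = np.full(J, I0)
    loglik = 0.0
    for y in cases:
        inc = np.empty(J)
        for j in range(J):                  # propagate each particle
            S[j], I[j], inc[j] = sir_step(S[j], I[j], beta, gamma, N)
        lam = rho * inc + 1e-9              # Poisson reporting with rate rho
        logw = y * np.log(lam) - lam - math.lgamma(y + 1)
        m = logw.max()                      # log-sum-exp for stability
        w = np.exp(logw - m)
        loglik += m + math.log(w.mean())    # log of mean particle weight
        idx = rng.choice(J, size=J, p=w / w.sum())  # multinomial resampling
        S, I = S[idx].copy(), I[idx].copy()
    return loglik

# Simulate one epidemic, then compare likelihoods at two parameter values
true_beta, gamma = 0.6, 0.2
S, I = 990, 10
cases = []
for _ in range(30):
    S, I, inc = sir_step(S, I, true_beta, gamma, 1000)
    cases.append(rng.binomial(inc, 0.5))

ll_true = pf_loglik(cases, beta=0.6, gamma=0.2)
ll_bad = pf_loglik(cases, beta=1.5, gamma=0.2)
print(ll_true, ll_bad)
```

The filter's likelihood estimate is what iterated-filtering methods then maximize over parameters; this sketch simply shows that the estimate discriminates the generating parameters from a badly wrong alternative.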
18.
Mol Phylogenet Evol ; 196: 108087, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38677353

ABSTRACT

Polyploidy, or whole-genome duplication, is expected to confound the inference of species trees with phylogenetic methods for two reasons. First, the presence of retained duplicated genes requires the reconciliation of the inferred gene trees to a proposed species tree. Second, even if the analyses are restricted to shared single copy genes, the occurrence of reciprocal gene loss, where the surviving genes in different species are paralogs from the polyploidy rather than orthologs, will mean that such genes will not have evolved under the corresponding species tree and may not produce gene trees that allow inference of that species tree. Here we analyze three different ancient polyploidy events, using synteny-based inferences of orthology and paralogy to infer gene trees from nearly 17,000 sets of homologous genes. We find that the simple use of single copy genes from polyploid organisms provides reasonably robust phylogenetic signals, despite the presence of reciprocal gene losses. Such gene trees are also most often in accord with the inferred species relationships inferred from maximum likelihood models of gene loss after polyploidy: a completely distinct phylogenetic signal present in these genomes. As seen in other studies, however, we find that methods for inferring phylogenetic confidence yield high support values even in cases where the underlying data suggest meaningful conflict in the phylogenetic signals.


Subjects
Models, Genetic, Phylogeny, Polyploidy, Evolution, Molecular, Synteny, Likelihood Functions
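The reciprocal gene loss problem this abstract describes can be illustrated with a toy simulation — not the authors' model: it assumes each lineage independently retains one of the two post-polyploidy paralogs with equal probability. Under those assumptions, about half of the shared single-copy genes end up being paralogs rather than orthologs:

```python
import random

random.seed(42)

def surviving_copy():
    """Which of the two post-WGD paralogs (A or B) a lineage happens to keep."""
    # Assumes unbiased, independent loss; biased retention would need p != 0.5
    return random.choice("AB")

# For each single-copy gene, check whether the two species kept different
# paralogs (reciprocal gene loss): the surviving genes are then paralogs
# whose divergence predates the speciation, confounding species-tree inference.
n = 100_000
frac = sum(surviving_copy() != surviving_copy() for _ in range(n)) / n
print(frac)  # roughly 0.5 under this simple model
```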
19.
Biomed Phys Eng Express ; 10(4)2024 May 14.
Article in English | MEDLINE | ID: mdl-38608316

ABSTRACT

Objectives: The aim of this study was to evaluate Cu-64 PET phantom image quality using Bayesian Penalized Likelihood (BPL) and Ordered Subset Expectation Maximum with point-spread function modeling (OSEM-PSF) reconstruction algorithms. In the BPL, the regularization parameter β was varied to identify the optimum value for image quality. In the OSEM-PSF, the effect of acquisition time was evaluated to assess the feasibility of a shortened scan duration. Methods: A NEMA IEC PET body phantom was filled with known activities of water-soluble Cu-64. The phantom was imaged on a PET/CT scanner, and the data were reconstructed using the BPL and OSEM-PSF algorithms. For the BPL reconstruction, various β values (150, 250, 350, 450, and 550) were evaluated. For the OSEM-PSF algorithm, reconstructions were performed using list-mode data intervals ranging from 7.5 to 240 s. Image quality was assessed by evaluating the signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and background variability (BV). Results: The SNR and CNR were higher in images reconstructed with BPL than with OSEM-PSF. Both the SNR and CNR increased with increasing β, peaking at β = 550. The CNR for all β values, sphere sizes, and tumor-to-background ratios (TBRs) satisfied the Rose criterion for image detectability (CNR > 5). BPL-reconstructed images with β = 550 demonstrated the greatest improvement in image quality. For OSEM-PSF-reconstructed images with a list-mode data duration ≥ 120 s, the noise level and CNR were not significantly different from those at the baseline 240 s duration. Conclusions: BPL reconstruction improved Cu-64 PET phantom image quality by increasing the SNR and CNR relative to OSEM-PSF reconstruction. Additionally, this study demonstrated that the scan time can be reduced from 240 to 120 s with OSEM-PSF reconstruction while maintaining similar image quality. This study provides baseline data that may guide future studies aiming to improve clinical Cu-64 imaging.


Subjects
Algorithms, Bayes Theorem, Copper Radioisotopes, Image Processing, Computer-Assisted, Phantoms, Imaging, Positron Emission Tomography Computed Tomography, Signal-To-Noise Ratio, Positron Emission Tomography Computed Tomography/methods, Image Processing, Computer-Assisted/methods, Likelihood Functions, Humans
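The image-quality metrics named in this abstract (SNR, CNR, BV) have several conventions in the literature; one common set of definitions, with hypothetical ROI values standing in for real phantom measurements, can be sketched as:

```python
import statistics

def image_quality_metrics(sphere_roi, background_rois):
    """NEMA-style phantom metrics from ROI means (one convention; definitions vary)."""
    bkg_means = [statistics.mean(r) for r in background_rois]
    bkg_mean = statistics.mean(bkg_means)
    bkg_sd = statistics.stdev(bkg_means)
    sph_mean = statistics.mean(sphere_roi)
    snr = sph_mean / bkg_sd                       # signal-to-noise ratio
    cnr = (sph_mean - bkg_mean) / bkg_sd          # contrast-to-noise ratio
    bv = 100.0 * bkg_sd / bkg_mean                # background variability, %
    return snr, cnr, bv

# Hypothetical ROI means (kBq/mL): one hot sphere and 12 background ROIs
sphere = [41.8, 43.1, 42.5]
background = [[10.1, 9.9], [10.4, 10.2], [9.7, 9.8], [10.0, 10.3],
              [9.6, 10.1], [10.2, 10.0], [9.9, 10.4], [10.1, 9.8],
              [10.3, 10.0], [9.8, 10.2], [10.0, 9.9], [10.2, 10.1]]

snr, cnr, bv = image_quality_metrics(sphere, background)
print(f"SNR={snr:.1f}  CNR={cnr:.1f}  BV={bv:.1f}%")
print("Rose criterion met:", cnr > 5)
```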
20.
Stat Med ; 43(12): 2452-2471, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38599784

ABSTRACT

Many longitudinal studies are designed to monitor participants for major events related to the progression of diseases. Data arising from such longitudinal studies are usually subject to interval censoring since the events are only known to occur between two monitoring visits. In this work, we propose a new method to handle interval-censored multistate data within a proportional hazards model framework where the hazard rate of events is modeled by a nonparametric function of time and the covariates affect the hazard rate proportionally. The main idea of this method is to simplify the likelihood functions of a discrete-time multistate model through an approximation and the application of data augmentation techniques, where the assumed presence of censored information facilitates a simpler parameterization. Then the expectation-maximization algorithm is used to estimate the parameters in the model. The performance of the proposed method is evaluated by numerical studies. Finally, the method is employed to analyze a dataset on tracking the advancement of coronary allograft vasculopathy following heart transplantation.


Subjects
Algorithms, Heart Transplantation, Proportional Hazards Models, Humans, Likelihood Functions, Heart Transplantation/statistics & numerical data, Longitudinal Studies, Computer Simulation, Models, Statistical, Data Interpretation, Statistical
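The expectation-maximization strategy this abstract sketches can be illustrated on a much simpler model than the authors' multistate setup: exponential event times observed only up to the visit interval containing them. The E-step imputes the expected event time within each censoring interval under the current rate, and the M-step refits the rate from the imputed complete data (the monitoring schedule and all names below are hypothetical):

```python
import math
import random

def cond_mean_exp(lam, L, R):
    """E[T | L < T <= R] for T ~ Exponential(rate=lam)."""
    eL, eR = math.exp(-lam * L), math.exp(-lam * R)
    num = (L + 1 / lam) * eL - (R + 1 / lam) * eR
    return num / (eL - eR)

def em_interval_censored(intervals, lam=1.0, iters=200):
    """EM for the exponential rate from interval-censored observations."""
    for _ in range(iters):
        # E-step: expected event time within each censoring interval
        expT = [cond_mean_exp(lam, L, R) for L, R in intervals]
        # M-step: complete-data MLE of the rate given the imputed times
        lam = len(expT) / sum(expT)
    return lam

# Simulate events at rate 2.0, observed only between scheduled visits
random.seed(0)
visits = [0.0, 0.3, 0.6, 1.0, 1.5, 2.5, 5.0, 50.0]
intervals = []
for _ in range(5000):
    t = random.expovariate(2.0)
    for L, R in zip(visits, visits[1:]):
        if L < t <= R:
            intervals.append((L, R))
            break

lam_hat = em_interval_censored(intervals)
print(round(lam_hat, 2))  # close to the true rate 2.0
```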