Results 1 - 20 of 24,173
1.
Clin Chim Acta ; 564: 119928, 2025 Jan 01.
Article in English | MEDLINE | ID: mdl-39163897

ABSTRACT

BACKGROUND AND AIMS: Rheumatoid arthritis (RA) presents with various symptoms and systemic manifestations. Diagnosis involves serological markers such as rheumatoid factor (RF) and anti-citrullinated protein antibodies (ACPA). Past studies have shown the added value of likelihood ratios (LRs) in result interpretation. LRs can be combined with pretest probability to estimate the posttest probability for RA, but information on pretest probability is lacking. This study aimed to estimate pretest probabilities for RA. MATERIALS AND METHODS: This retrospective study included 133 consecutive RA patients and 651 consecutive disease controls presenting at a rheumatology outpatient clinic. Disease characteristics, risk factors associated with RA and laboratory parameters were documented for calculating pretest probabilities and LRs. RESULTS: Joint involvement, erosions, morning stiffness, and positive CRP and ESR tests correlated significantly with RA. Based on these factors, pretest probabilities for RA were estimated. In addition, LRs for RA were established for RF, ACPA and combinations thereof. LRs increased with antibody levels and were highest for double high positivity. Posttest probabilities were estimated from the pretest probability and the LR. CONCLUSION: By combining pretest probabilities for RA with LRs for RF and ACPA, posttest probabilities were estimated. Such an approach enhances diagnostic accuracy, offering laboratory professionals and clinicians insight into the value of serological testing during the diagnostic process.
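
The posttest probability referred to above follows from Bayes' theorem in odds form: posttest odds = pretest odds x LR. A minimal Python sketch, using purely illustrative numbers rather than values from the study:

# Minimal sketch (illustrative values, not from the study): converting a pretest
# probability for RA into a posttest probability with a likelihood ratio (LR).
def posttest_probability(pretest_prob, lr):
    """Combine pretest probability and LR via Bayes' theorem in odds form."""
    pretest_odds = pretest_prob / (1.0 - pretest_prob)
    posttest_odds = pretest_odds * lr          # posttest odds = pretest odds x LR
    return posttest_odds / (1.0 + posttest_odds)

# Example: a hypothetical pretest probability of 0.30 and an LR of 9 for
# double-high RF/ACPA positivity (numbers chosen for illustration only).
print(posttest_probability(0.30, 9))  # ~0.79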


Subjects
Anti-Citrullinated Protein Antibodies, Rheumatoid Arthritis, Rheumatoid Factor, Humans, Rheumatoid Arthritis/diagnosis, Rheumatoid Arthritis/blood, Rheumatoid Arthritis/immunology, Rheumatoid Factor/blood, Female, Middle Aged, Retrospective Studies, Anti-Citrullinated Protein Antibodies/blood, Male, Likelihood Functions, Probability, Adult, Autoantibodies/blood, Aged
2.
Clin Chim Acta ; 564: 119941, 2025 Jan 01.
Article in English | MEDLINE | ID: mdl-39181294

ABSTRACT

BACKGROUND: In Alzheimer's disease (AD) diagnosis, a cerebrospinal fluid (CSF) biomarker panel is commonly interpreted with binary cutoff values. However, these values are not generic and do not reflect the disease continuum. We explored the use of interval-specific likelihood ratios (LRs) and probability-based models for AD using a CSF biomarker panel. METHODS: CSF biomarker (Aβ1-42, tTau and pTau181) data for a clinical discovery cohort of 241 patients (measured with INNOTEST) and a clinical validation cohort of 129 patients (measured with EUROIMMUN), both including AD and non-AD dementia/cognitive complaints, were retrospectively retrieved in a single-center study. Interval-specific LRs for AD were calculated and validated for univariate and combined (Aβ1-42/tTau and pTau181) biomarkers, and a continuous bivariate probability-based model for AD, plotting Aβ1-42/tTau versus pTau181, was constructed and validated. RESULTS: The LR for AD increased as individual CSF biomarker values deviated from normal. Interval-specific LRs of a combined biomarker model showed that once one biomarker became abnormal, LRs increased even further when another biomarker deviated greatly from normal, as replicated in the validation cohort. A bivariate probability-based model predicted AD with a validated accuracy of 88% on a continuous scale. CONCLUSIONS: Interval-specific LRs in a combined biomarker model, and prediction of AD using a continuous bivariate biomarker probability-based model, offer a more meaningful interpretation of CSF AD biomarkers on a (semi-)continuous scale with respect to the post-test probability of AD across different assays and cohorts.
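
An interval-specific LR is simply the proportion of AD patients whose biomarker value falls in a given interval divided by the corresponding proportion among non-AD patients. A minimal sketch with synthetic data and arbitrary interval boundaries (not the cohorts or cut-offs used in the study):

import numpy as np

# Minimal sketch (synthetic data): interval-specific likelihood ratios for one CSF
# biomarker, LR = P(value in interval | AD) / P(value in interval | non-AD).
rng = np.random.default_rng(0)
ad = rng.normal(450, 120, 3000)       # hypothetical Abeta1-42 values, AD group
non_ad = rng.normal(800, 180, 3000)   # hypothetical values, non-AD group

edges = np.array([0, 400, 550, 700, 850, np.inf])   # illustrative intervals
count_ad, _ = np.histogram(ad, bins=edges)
count_non, _ = np.histogram(non_ad, bins=edges)
lr = (count_ad / count_ad.sum()) / (count_non / count_non.sum())

for lo, hi, r in zip(edges[:-1], edges[1:], lr):
    print(f"[{lo:.0f}, {hi:.0f}): LR = {r:.2f}")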


Subjects
Alzheimer Disease, Amyloid beta-Peptides, Biomarkers, Probability, Alzheimer Disease/cerebrospinal fluid, Alzheimer Disease/diagnosis, Humans, Biomarkers/cerebrospinal fluid, Female, Male, Aged, Amyloid beta-Peptides/cerebrospinal fluid, Likelihood Functions, Middle Aged, tau Proteins/cerebrospinal fluid, Retrospective Studies, Peptide Fragments/cerebrospinal fluid, Cohort Studies
3.
Sci Justice ; 64(5): 485-497, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39277331

ABSTRACT

Verifying the speaker of a speech fragment can be crucial in attributing a crime to a suspect. The question can be addressed given disputed and reference speech material, adopting the recommended and scientifically accepted likelihood ratio framework for reporting evidential strength in court. In forensic practice, auditory and acoustic analyses are usually performed to carry out such a verification task, considering a diversity of features such as language competence, pronunciation, or other linguistic features. Automated speaker comparison systems can also be used alongside those manual analyses. State-of-the-art automatic speaker comparison systems are based on deep neural networks that take acoustic features as input. Additional information, though, may be obtained from linguistic analysis. In this paper, we aim to determine whether, when and how modern acoustic-based systems can be complemented by an authorship technique based on frequent words, within the likelihood ratio framework. We consider three different approaches to derive a combined likelihood ratio: using a support vector machine algorithm, fitting bivariate normal distributions, and passing the score of the acoustic system as additional input to the frequent-word analysis. We apply our method to the forensically relevant dataset FRIDA and the FISHER corpus, and we explore under which conditions fusion is valuable. We evaluate our results in terms of log likelihood ratio cost (Cllr) and equal error rate (EER). We show that fusion can be beneficial, especially in the case of intercepted phone calls with noise in the background.
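
The Cllr metric mentioned above penalizes misleading LRs on a logarithmic scale and can be computed from any set of validation comparisons with known ground truth. A minimal sketch with made-up LR values (not results from the paper):

import numpy as np

# Minimal sketch: log-likelihood-ratio cost (Cllr) for LRs from same-speaker and
# different-speaker comparisons (synthetic numbers for illustration only).
def cllr(lr_same, lr_diff):
    lr_same = np.asarray(lr_same, dtype=float)
    lr_diff = np.asarray(lr_diff, dtype=float)
    penalty_same = np.mean(np.log2(1.0 + 1.0 / lr_same))  # small LRs for true pairs cost more
    penalty_diff = np.mean(np.log2(1.0 + lr_diff))        # large LRs for false pairs cost more
    return 0.5 * (penalty_same + penalty_diff)

print(cllr([20.0, 5.0, 0.8], [0.1, 0.02, 2.0]))  # values well below 1 indicate informative LRs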


Subjects
Forensic Sciences, Humans, Forensic Sciences/methods, Likelihood Functions, Linguistics, Support Vector Machine, Speech Acoustics, Algorithms, Speech
4.
PLoS Comput Biol ; 20(9): e1012393, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39231165

ABSTRACT

Accumulation processes, where many potentially coupled features are acquired over time, occur throughout the sciences from evolutionary biology to disease progression, and particularly in the study of cancer progression. Existing methods for learning the dynamics of such systems typically assume limited (often pairwise) relationships between feature subsets, cross-sectional or untimed observations, small feature sets, or discrete orderings of events. Here we introduce HyperTraPS-CT (Hypercubic Transition Path Sampling in Continuous Time) to compute posterior distributions on continuous-time dynamics of many, arbitrarily coupled, traits in unrestricted state spaces, accounting for uncertainty in observations and their timings. We demonstrate the capacity of HyperTraPS-CT to deal with cross-sectional, longitudinal, and phylogenetic data, which may have no, uncertain, or precisely specified sampling times. HyperTraPS-CT allows positive and negative interactions between arbitrary subsets of features (not limited to pairwise interactions), supporting Bayesian and maximum-likelihood inference approaches to identify these interactions, consequent pathways, and predictions of future and unobserved features. We also introduce a range of visualisations for the inferred outputs of these processes and demonstrate model selection and regularisation for feature interactions. We apply this approach to case studies on the accumulation of mutations in cancer progression and the acquisition of anti-microbial resistance genes in tuberculosis, demonstrating its flexibility and capacity to produce predictions aligned with applied priorities.


Subjects
Bayes Theorem, Computational Biology, Humans, Computational Biology/methods, Phylogeny, Algorithms, Mycobacterium tuberculosis/genetics, Likelihood Functions
5.
Biol Open ; 13(9)2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39284732

ABSTRACT

Students of biological allometry have used the logarithmic transformation for over a century to linearize bivariate distributions that are curvilinear on the arithmetic scale. When the distribution is linear, the equation for a straight line fitted to the distribution can be back-transformed to form a two-parameter power function for describing the original observations. However, many of the data in contemporary studies of allometry fail to meet the requirement for log-linearity, thereby precluding the use of the aforementioned protocol. Even when data are linear in logarithmic form, the two-parameter power equation estimated by back-transformation may yield a misleading or erroneous perception of pattern in the original distribution. A better approach to bivariate allometry would be to forego transformation altogether and to fit multiple models to untransformed observations by nonlinear regression, thereby creating a pool of candidate models with different functional forms and different assumptions regarding random error. The best model in the pool of candidate models could then be identified by a selection procedure based on maximum likelihood. Two examples are presented to illustrate the power and versatility of newer methods for studying allometric variation. It is always better to examine the original data when it is possible to do so.
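
The workflow advocated here (fitting several candidate models to the untransformed data by nonlinear regression and comparing them with a likelihood-based criterion) can be sketched in a few lines; synthetic data and a Gaussian-error AIC are used purely for illustration:

import numpy as np
from scipy.optimize import curve_fit

# Minimal sketch (synthetic data): fit competing allometric models to untransformed
# observations by nonlinear least squares and compare them with AIC computed from the
# Gaussian log-likelihood, instead of back-transforming a log-log regression.
rng = np.random.default_rng(1)
x = np.linspace(1, 50, 80)
y = 2.0 * x**0.75 + rng.normal(0, 1.5, x.size)        # data generated from a power law

models = {
    "power":        lambda x, a, b: a * x**b,
    "power+offset": lambda x, a, b, c: a * x**b + c,
}

n = x.size
for name, f in models.items():
    params, _ = curve_fit(f, x, y, p0=np.ones(f.__code__.co_argcount - 1))
    rss = np.sum((y - f(x, *params))**2)
    k = len(params) + 1                               # +1 for the error variance
    aic = n * np.log(rss / n) + 2 * k                 # AIC up to an additive constant
    print(f"{name}: params={np.round(params, 3)}, AIC={aic:.1f}")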


Subjects
Biological Models, Algorithms, Likelihood Functions, Animals, Humans
6.
Hum Brain Mapp ; 45(13): e70023, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39268584

ABSTRACT

The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). The left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, the left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activation of the left PT and FOC was noted in all conditions, with the posterior FOC area overlapping across conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.


Subjects
Speech Perception, Speech, Humans, Speech Perception/physiology, Speech/physiology, Brain Mapping, Likelihood Functions, Motor Cortex/physiology, Cerebral Cortex/physiology, Cerebral Cortex/diagnostic imaging
7.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39253987

ABSTRACT

Meta-analysis is a powerful tool to synthesize findings from multiple studies. The normal-normal random-effects model is widely used to account for between-study heterogeneity. However, meta-analyses of sparse data, which may arise when the event rate is low for binary or count outcomes, pose a challenge to the normal-normal random-effects model in terms of the accuracy and stability of inference, since the normal approximation in the within-study model may be poor. To reduce bias arising from data sparsity, the generalized linear mixed model can be used by replacing the approximate normal within-study model with an exact model. Publication bias is one of the most serious threats in meta-analysis. Several quantitative sensitivity analysis methods for evaluating the potential impacts of selective publication are available for the normal-normal random-effects model. We propose a sensitivity analysis method by extending the likelihood-based sensitivity analysis with the $t$-statistic selection function of Copas to several generalized linear mixed-effects models. Through applications of our proposed method to several real-world meta-analyses and simulation studies, the proposed method was shown to outperform the likelihood-based sensitivity analysis based on the normal-normal model. The proposed method gives useful guidance for addressing publication bias in the meta-analysis of sparse data.


Subjects
Computer Simulation, Meta-Analysis as Topic, Publication Bias, Publication Bias/statistics & numerical data, Humans, Likelihood Functions, Linear Models, Statistical Data Interpretation, Statistical Models, Sensitivity and Specificity, Biometry/methods
8.
PLoS One ; 19(9): e0304055, 2024.
Article in English | MEDLINE | ID: mdl-39231125

ABSTRACT

Software reliability growth models (SRGMs) are widely accepted and employed for reliability assessment. Software reliability analysis comprises two components: model construction and parameter estimation. This study concentrates on the second component, parameter estimation. Over the past few decades, parameter estimation has typically been performed by either maximum likelihood estimation (MLE) or least squares estimation (LSE), while stochastic optimization methods have received increasing attention. Traditional optimization criteria have various limitations; metaheuristic optimization algorithms, which search the parameter space while avoiding local optima, are used to overcome these obstacles. This study analyzes the applicability of several meta-heuristic algorithms to SRGM parameter estimation and compares them according to various criteria. Four meta-heuristic algorithms are used: the Grey-Wolf Optimizer (GWO), the Regenerative Genetic Algorithm (RGA), the Sine-Cosine Algorithm (SCA), and the Gravitational Search Algorithm (GSA). The parameter estimation power of these four algorithms was compared on four popular SRGMs and three actual-failure datasets. The parameter values estimated by the meta-heuristic algorithms are close to those obtained with the LSE method. The results show that RGA and GWO perform better on a variety of real-world failure data and have excellent parameter estimation potential. Based on the convergence and R2 distribution criteria, this study suggests that RGA and GWO are more appropriate for SRGM parameter estimation; RGA located the optimal solution more accurately and faster than GWO and the other optimization techniques.
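
As an illustration of the estimation problem itself (not of the specific GWO/RGA/SCA/GSA implementations compared in the paper), the sketch below fits the classic Goel-Okumoto SRGM, m(t) = a(1 - e^(-bt)), to a synthetic failure-count series with a generic stochastic optimizer standing in for the metaheuristics:

import numpy as np
from scipy.optimize import differential_evolution

# Minimal sketch: least-squares parameter estimation for the Goel-Okumoto SRGM,
# m(t) = a * (1 - exp(-b t)), using SciPy's differential evolution as a stand-in
# for metaheuristics such as GWO, RGA, SCA or GSA. Data are synthetic.
t = np.arange(1, 11, dtype=float)                      # test weeks
failures = np.array([12, 21, 28, 34, 38, 41, 44, 46, 47, 48], dtype=float)

def sse(params):
    a, b = params
    m = a * (1.0 - np.exp(-b * t))
    return np.sum((failures - m) ** 2)

result = differential_evolution(sse, bounds=[(1.0, 200.0), (1e-3, 2.0)], seed=0)
a_hat, b_hat = result.x
print(f"a = {a_hat:.1f}, b = {b_hat:.3f}, SSE = {result.fun:.2f}")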


Subjects
Algorithms, Software, Reproducibility of Results, Likelihood Functions, Humans, Heuristics
9.
PLoS One ; 19(9): e0307380, 2024.
Article in English | MEDLINE | ID: mdl-39241029

ABSTRACT

Despite their importance as members of the Glires group, lagomorph diversification processes have seldom been studied using molecular data. Notably, only a few phylogenetic studies have included most of the examined lagomorph lineages. Previous studies that included a larger sample of taxa and markers used nonconservative tests to support the branches of their proposed phylogeny. The objective of this study was to test the monophyly of families and genera of lagomorphs and to evaluate the group's diversification process. To that end, this work expanded the sampling of markers and taxa and implemented the bootstrap, a more rigorous statistical test to measure branch support; hence, a more robust phylogeny was recovered. Our supermatrix included five mitochondrial genes and 14 nuclear genes for eighty-eight taxa, including three rodent outgroups. Our maximum likelihood tree showed that all tested genera and both families, Leporidae and Ochotonidae, were recovered as monophyletic. In the genus Ochotona, the subgenera Conothoa and Pika, but not Ochotona, were recovered as monophyletic. Six calibration points based on fossils were used to construct a time tree. A calibration test was performed (via jackknife) by removing one calibration at a time and estimating divergence times for each set. The diversification of the main groups of lagomorphs indicated that the origin of the order's crown group dates to the beginning of the Palaeogene. Our diversification time estimates for Lagomorpha were compared with those for rodent lineages in Muroidea, part of the largest mammalian order. According to our time-resolved phylogenetic tree, the leporids underwent a major radiation by evolving a completely new morphospace (larger bodies and an efficient locomotor system) that enabled them to cover wide foraging areas and outrun predators more easily than rodents and pikas.


Subjects
Lagomorpha, Phylogeny, Animals, Lagomorpha/genetics, Lagomorpha/classification, Fossils, Molecular Evolution, Likelihood Functions, Time Factors
10.
PLoS One ; 19(9): e0307391, 2024.
Article in English | MEDLINE | ID: mdl-39269964

ABSTRACT

This paper introduces the modified Kies Topp-Leone (MKTL) distribution for modeling data on the (0, 1) or [0, 1] interval. The density and hazard rate functions exhibit a variety of desirable shapes, making the MKTL distribution suitable for modeling data with different characteristics on the unit interval. Twelve different estimation methods are utilized to estimate the distribution parameters, and Monte Carlo simulation experiments are executed to assess the performance of the methods. The simulation results suggest that the maximum likelihood method is the superior method. The usefulness of the new distribution is illustrated with three data sets, and its performance is compared with that of competing models. The findings affirm the superiority of the MKTL distribution over the other candidate models. A quantile regression model developed using the new distribution also offers a competitive fit relative to other existing regression models.
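
Maximum likelihood estimation for a unit-interval distribution reduces to minimizing the negative log-likelihood over the parameter space. The sketch below uses the standard one-parameter Topp-Leone density, f(x; t) = 2t(1 - x)[x(2 - x)]^(t-1), purely as a stand-in; the MKTL density defined in the paper would be substituted in the same way:

import numpy as np
from scipy.optimize import minimize_scalar

# Minimal sketch: MLE by direct minimization of the negative log-likelihood for a
# unit-interval distribution (standard Topp-Leone used here for illustration only).
rng = np.random.default_rng(2)
theta_true = 2.5
u = rng.uniform(size=500)
x = 1.0 - np.sqrt(1.0 - u ** (1.0 / theta_true))   # inverse-CDF sampling: F(x) = [x(2-x)]^t

def neg_loglik(theta):
    if theta <= 0:
        return np.inf
    return -np.sum(np.log(2 * theta * (1 - x)) + (theta - 1) * np.log(x * (2 - x)))

fit = minimize_scalar(neg_loglik, bounds=(1e-3, 50), method="bounded")
print(f"theta_hat = {fit.x:.3f} (true value {theta_true})")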


Subjects
Monte Carlo Method, Regression Analysis, Likelihood Functions, Statistical Models, Humans, Computer Simulation, Algorithms
11.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39282732

ABSTRACT

We develop a methodology for valid inference after variable selection in logistic regression when the responses are partially observed, that is, when one observes a set of error-prone testing outcomes instead of the true values of the responses. Aiming at selecting important covariates while accounting for missing information in the response data, we apply the expectation-maximization algorithm to compute maximum likelihood estimators subject to LASSO penalization. Subsequent to variable selection, we make inferences on the selected covariate effects by extending post-selection inference methodology based on the polyhedral lemma. Empirical evidence from our extensive simulation study suggests that our post-selection inference results are more reliable than those from naive inference methods that use the same data to perform variable selection and inference without adjusting for variable selection.
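
A minimal sketch of the core EM idea described above, with synthetic data, assumed-known test sensitivity and specificity, and scikit-learn's L1-penalized logistic regression standing in for the paper's estimator (the post-selection inference step is not shown):

import numpy as np
from sklearn.linear_model import LogisticRegression

# Minimal sketch: EM for LASSO-penalized logistic regression when only an error-prone
# test result T is observed instead of the true response Y.
rng = np.random.default_rng(3)
n, p, sens, spec = 500, 10, 0.9, 0.95
X = rng.normal(size=(n, p))
y_true = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))
t = np.where(y_true == 1, rng.binomial(1, sens, n), rng.binomial(1, 1 - spec, n))

model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
w = np.where(t == 1, 0.9, 0.1)                        # crude initial P(Y=1 | data)
for _ in range(20):                                   # EM iterations
    # M-step: fractional responses handled by stacking each row as Y=1 and Y=0
    Xs = np.vstack([X, X])
    ys = np.concatenate([np.ones(n), np.zeros(n)])
    ws = np.concatenate([w, 1 - w])
    model.fit(Xs, ys, sample_weight=ws)
    # E-step: posterior P(Y=1 | T, x) from the fitted model and the error rates
    prior = model.predict_proba(X)[:, 1]
    lik1 = np.where(t == 1, sens, 1 - sens)           # P(T | Y=1)
    lik0 = np.where(t == 1, 1 - spec, spec)           # P(T | Y=0)
    w = lik1 * prior / (lik1 * prior + lik0 * (1 - prior))

print(np.round(model.coef_.ravel(), 2))               # nonzero coefficients ~ selected covariates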


Subjects
Algorithms, Computer Simulation, Likelihood Functions, Humans, Logistic Models, Statistical Data Interpretation, Biometry/methods, Statistical Models
12.
Biometrics ; 80(3)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-39282733

ABSTRACT

Benchmark dose analysis aims to estimate the level of exposure to a toxin associated with a clinically significant adverse outcome and quantifies uncertainty using the lower limit of a confidence interval for this level. We develop a novel framework for benchmark dose analysis based on monotone additive dose-response models. We first introduce a flexible approach for fitting monotone additive models via penalized B-splines and Laplace-approximate marginal likelihood. A reflective Newton method is then developed that employs de Boor's algorithm for computing splines and their derivatives for efficient estimation of the benchmark dose. Finally, we develop a novel approach for calculating benchmark dose lower limits based on an approximate pivot for the nonlinear equation solved by the estimated benchmark dose. The favorable properties of this approach compared to the Delta method and a parametric bootstrap are discussed. We apply the new methods to make inferences about the level of prenatal alcohol exposure associated with clinically significant cognitive defects in children using data from six NIH-funded longitudinal cohort studies. Software to reproduce the results in this paper is available online and makes use of the novel semibmd R package, which implements the methods in this paper.
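
The benchmark dose itself is the exposure at which the fitted curve first exceeds the background response by the benchmark response (BMR). A minimal sketch with synthetic data, using a simple monotone Hill-type curve in place of the paper's penalized monotone spline model and omitting the lower-limit (BMDL) calculation:

import numpy as np
from scipy.optimize import curve_fit, brentq

# Minimal sketch (synthetic data): estimating a benchmark dose (BMD) as the exposure
# at which the fitted dose-response curve exceeds background by a fixed BMR.
rng = np.random.default_rng(4)
dose = np.linspace(0, 10, 60)
resp = 1.0 + 2.0 * dose / (3.0 + dose) + rng.normal(0, 0.1, dose.size)

def f(d, b0, emax, ed50):
    return b0 + emax * d / (ed50 + d)        # monotone Hill-type curve, background b0

params, _ = curve_fit(f, dose, resp, p0=[1.0, 1.0, 1.0])
bmr = 0.5                                    # clinically significant change in response
bmd = brentq(lambda d: f(d, *params) - (params[0] + bmr), 1e-6, dose.max())
print(f"BMD estimate: {bmd:.2f}")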


Subjects
Drug Dose-Response Relationship, Statistical Models, Humans, Benchmarking, Female, Algorithms, Pregnancy, Prenatal Exposure Delayed Effects/chemically induced, Computer Simulation, Child, Statistical Data Interpretation, Likelihood Functions
13.
Neuroimage ; 298: 120798, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39153521

ABSTRACT

Functional magnetic resonance imaging research employing regional homogeneity (ReHo) analysis has uncovered aberrant local brain connectivity in individuals with mild cognitive impairment (MCI) and Alzheimer's disease (AD) in comparison with healthy controls. However, the precise localization, extent, and possible overlap of these aberrations are still not fully understood. To bridge this gap, we applied a novel meta-analytic and Bayesian method (minimum Bayes Factor Activation Likelihood Estimation, mBF-ALE) for a systematic exploration of local functional connectivity alterations in MCI and AD brains. We extracted ReHo data via a standardized MEDLINE database search, which included 35 peer-reviewed experiments, 1,256 individuals with AD or MCI, 1,118 healthy controls, and 205 x-y-z coordinates of ReHo variation. We then separated the data into two distinct datasets: one for MCI and the other for AD. Two mBF-ALE analyses were conducted, thresholded at "very strong evidence" (mBF ≥ 150), with a minimum cluster size of 200 mm³. We also assessed the spatial consistency and sensitivity of our Bayesian results using the canonical version of the ALE algorithm. For MCI, we observed two clusters of ReHo decrease and one of ReHo increase. Decreased local connectivity was notable in the left precuneus (Brodmann area - BA 7) and left inferior temporal gyrus (BA 20), while increased connectivity was evident in the right parahippocampal gyrus (BA 36). The canonical ALE confirmed these locations, except for the inferior temporal gyrus. In AD, one cluster each of ReHo decrease and increase were found, with decreased connectivity in the right posterior cingulate cortex (BA 30 extending to BA 23) and increased connectivity in the left posterior cingulate cortex (BA 31). These locations were confirmed by the canonical ALE. The identification of these distinct functional connectivity patterns sheds new light on the complex pathophysiology of MCI and AD, offering promising directions for future neuroimaging-based interventions. Additionally, the use of a Bayesian framework for statistical thresholding enhances the robustness of neuroimaging meta-analyses, broadening its applicability to small datasets.


Subjects
Alzheimer Disease, Bayes Theorem, Cognitive Dysfunction, Magnetic Resonance Imaging, Humans, Alzheimer Disease/diagnostic imaging, Alzheimer Disease/physiopathology, Cognitive Dysfunction/diagnostic imaging, Cognitive Dysfunction/physiopathology, Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Brain/physiopathology, Likelihood Functions, Connectome/methods, Nerve Net/diagnostic imaging, Nerve Net/physiopathology
14.
Mol Biol Evol ; 41(9)2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39158305

ABSTRACT

Profile mixture models capture distinct biochemical constraints on the amino acid substitution process at different sites in proteins. These models feature a mixture of time-reversible models with a common matrix of exchangeabilities and distinct sets of equilibrium amino acid frequencies known as profiles. Combining the exchangeability matrix with each profile generates the matrix of instantaneous rates of amino acid exchange for that profile. Currently, empirically estimated exchangeability matrices (e.g. the LG matrix) are widely used for phylogenetic inference under profile mixture models. However, these were estimated using a single profile and are unlikely to be optimal for profile mixture models. Here, we describe the GTRpmix model that allows maximum likelihood estimation of a common exchangeability matrix under any profile mixture model. We show that exchangeability matrices estimated under profile mixture models differ from the LG matrix, dramatically improving model fit and topological estimation accuracy for empirical test cases. Because the GTRpmix model is computationally expensive, we provide two exchangeability matrices estimated from large concatenated phylogenomic supermatrices to be used for phylogenetic analyses. One, called Eukaryotic Linked Mixture (ELM), is designed for phylogenetic analysis of proteins encoded by nuclear genomes of eukaryotes, and the other, Eukaryotic and Archaeal Linked mixture (EAL), for reconstructing relationships between eukaryotes and Archaea. These matrices, combined with profile mixture models, fit data better and have improved topology estimation relative to the LG matrix combined with the same mixture models. Starting with version 2.3.1, IQ-TREE2 allows users to estimate linked exchangeabilities (i.e. amino acid exchange rates) under profile mixture models.
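
For each mixture class, the instantaneous rate matrix is obtained by combining the shared exchangeabilities with that class's profile in the standard time-reversible way: Q[i, j] = R[i, j] * pi[j] for i != j, with the diagonal set so rows sum to zero. A toy 4-state sketch with illustrative numbers (not the ELM/EAL estimates):

import numpy as np

# Minimal sketch: build the rate matrix for one mixture class from a symmetric
# exchangeability matrix R and that class's equilibrium frequency profile pi.
R = np.array([[0.0, 1.0, 2.0, 1.0],
              [1.0, 0.0, 1.0, 3.0],
              [2.0, 1.0, 0.0, 1.0],
              [1.0, 3.0, 1.0, 0.0]])        # illustrative exchangeabilities
pi = np.array([0.4, 0.3, 0.2, 0.1])         # illustrative profile frequencies

Q = R * pi                                  # Q[i, j] = R[i, j] * pi[j]
np.fill_diagonal(Q, 0.0)
np.fill_diagonal(Q, -Q.sum(axis=1))         # rows sum to zero
Q /= -np.sum(pi * np.diag(Q))               # one expected substitution per unit time
print(np.round(Q, 3))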


Subjects
Genetic Models, Phylogeny, Archaea/genetics, Likelihood Functions, Amino Acid Substitution, Molecular Evolution, Eukaryota/genetics
15.
Mol Phylogenet Evol ; 200: 108181, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39209046

ABSTRACT

Phylogenetic tree reconstruction with molecular data is important in many fields of life science research. The gold standard in this discipline is phylogenetic tree reconstruction based on the Maximum Likelihood method. In this study, we present neural networks to predict the best model of sequence evolution and the correct topology for four-sequence alignments of nucleotide or amino acid sequence data. We trained neural networks with different architectures using simulated alignments for a wide range of evolutionary models, model parameters and branch lengths. By comparing the accuracy of model and topology prediction of the trained neural networks with the Maximum Likelihood and Neighbour Joining methods, we show that for quartet trees, the neural network classifier outperforms the Neighbour Joining method and is in most cases as good as the Maximum Likelihood method at inferring the best model of sequence evolution and the best tree topology. These results are consistent for nucleotide and amino acid sequence data. We also show that our method is superior for model selection to previously published methods based on convolutional neural networks. Furthermore, we found that the neural network classifiers are much faster than the IQ-TREE implementation of the Maximum Likelihood method. Our results show that neural networks could become a true competitor to the Maximum Likelihood method in phylogenetic reconstruction.


Subjects
Machine Learning, Neural Networks (Computer), Phylogeny, Sequence Alignment, Likelihood Functions, Genetic Models, Molecular Evolution
16.
Math Med Biol ; 41(3): 250-276, 2024 Sep 16.
Article in English | MEDLINE | ID: mdl-39135528

ABSTRACT

Glioblastoma multiforme is a highly aggressive form of brain cancer, with a median survival time of 15 months for diagnosed patients. Treatment of this cancer is typically a combination of radiation, chemotherapy and surgical removal of the tumour. However, the highly invasive and diffuse nature of glioblastoma makes surgical intrusions difficult, and the diffusive properties of glioblastoma are poorly understood. In this paper, we introduce a stochastic interacting particle system as a model of in vitro glioblastoma migration, along with a maximum likelihood algorithm designed for inference from microscopy imaging data. The inference method is evaluated on in silico simulations of cancer cell migration and then applied to a real data set. We find that the inference method performs with a high degree of accuracy on the in silico data and achieves promising results on the in vitro data set.


Subjects
Brain Neoplasms, Cell Movement, Computer Simulation, Glioblastoma, Biological Models, Glioblastoma/pathology, Glioblastoma/diagnostic imaging, Glioblastoma/drug therapy, Humans, Brain Neoplasms/pathology, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/drug therapy, Algorithms, Stochastic Processes, Mathematical Concepts, Likelihood Functions, Tumor Cell Line
17.
Forensic Sci Int Genet ; 73: 103099, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39089059

ABSTRACT

The validity of a probabilistic genotyping (PG) system is typically demonstrated by following international guidelines for the developmental and internal validation of PG software. These guidelines mainly focus on discriminatory power; very few studies have reported metrics that depend on the calibration of likelihood ratio (LR) systems. In this study, discriminatory power and various calibration metrics, such as Empirical Cross-Entropy (ECE) plots, pool adjacent violators (PAV) plots, log likelihood ratio cost (Cllr and Cllrcal), fiducial calibration discrepancy plots, and Turing's expectation, were examined using the publicly available PROVEDIt dataset. The aim was to gain deeper insight into the performance of a variety of PG software in the 'lower' LR ranges (∼LR 1-10,000), with a focus on DNAStatistX and EuroForMix, which use maximum likelihood estimation (MLE). This may be a driving force for end users to reconsider current LR thresholds for reporting. In previous studies, overstated 'low' LRs were observed for these PG software. However, applying (arbitrarily) high LR thresholds for reporting wastes relevant evidential value. This study demonstrates, based on calibration performance, that previously reported LR thresholds can be lowered or even discarded. Considering LRs >1, there was no evidence of miscalibration above LR ∼1000 when using Fst 0.01. Below this LR value, miscalibration was observed. Calibration performance generally improved with the use of Fst 0.03, but the extent of this was dependent on the dataset: results ranged from miscalibration up to LR ∼100 to no evidence of miscalibration, similar to PG software that use different methods to model peak height (HMC and STRmix). This study demonstrates that practitioners using MLE-based models should be careful when low LR ranges are reported, though applying arbitrarily high LR thresholds is discouraged. This study also highlights various calibration metrics that are useful in understanding the performance of a PG system.
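
The PAV plots mentioned above are built from the pool-adjacent-violators algorithm, which is equivalent to isotonic regression of ground-truth labels on the reported log LRs. A minimal sketch with made-up values, using scikit-learn's isotonic regression:

import numpy as np
from sklearn.isotonic import IsotonicRegression

# Minimal sketch: PAV step behind a PAV plot. Isotonic regression of labels on the
# reported log10 LRs yields the best monotone mapping to posterior probabilities
# (at equal priors), which are converted back into PAV-calibrated LRs for comparison.
log10_lr = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0, 3.0, 4.0])    # reported LRs (toy values)
labels = np.array([0, 0, 1, 0, 1, 1, 1, 1])                          # 1 = true contributor

pav = IsotonicRegression(y_min=1e-6, y_max=1 - 1e-6, out_of_bounds="clip")
posterior = pav.fit_transform(log10_lr, labels)
calibrated_lr = posterior / (1.0 - posterior)                         # odds at equal priors
print(np.round(np.log10(calibrated_lr), 2))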


Subjects
DNA Fingerprinting, Software, Humans, Likelihood Functions, Calibration, Genotype, DNA/genetics, DNA/analysis, Microsatellite Repeats
18.
Forensic Sci Int Genet ; 73: 103110, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39098056

ABSTRACT

Since 1995, national forensic DNA databases have used a maximum number of contributors and a minimum number of loci to reduce the risk of providing false leads. DNA profiles of biological traces that do not meet these criteria cannot be loaded into these databases. In 2023, about 10 % of the more than 15,000 trace DNA profiles analyzed in western Switzerland were not compared at the national level, even though they were considered interpretable, mainly because they contained DNA from more than two persons. In this situation, police services can request local comparisons with DNA profiles of known persons and/or with other traces, but this occurs in only a small proportion of cases, so that DNA mixtures are rarely used to help detect potential series. The development of probabilistic genotyping software and its associated tools has made it possible to perform this type of comparison efficiently, based on likelihood ratios (LR) rather than on the number of shared alleles. To highlight potential common contributors for investigation and intelligence purposes, the present study used the mixture-to-mixture tool of the software STRmix v2.7 to compare 235 DNA profiles that cannot be searched against the Swiss DNA database. These DNA profiles originated from traces collected by six different police services in 2021 and 2022. Traces were selected by the police based on information indicating that they were from potential series. Associations between profiles were compared with expected investigative associations to define the value of this approach. Among the 27,495 pairwise comparisons of DNA profiles, 88 pairs (0.3 %) showed at least one potential common contributor when using an LR threshold of 1000. Of these 88 pairs, 60 (68.2 %) were qualified by the police services as "expected", 22 (25.0 %) as "possible", and six (6.8 %) as "unexpected". Although it is important to consider the limits of this approach (e.g., adventitious or missed associations, cost/benefit evaluation, integration of DNA mixture comparison into the process), these findings indicate that non-CODIS-loadable DNA mixtures could provide police agencies with information concerning potential series at both the local and national level.


Subjects
DNA Fingerprinting, DNA, Nucleic Acid Databases, Microsatellite Repeats, Software, Humans, Switzerland, DNA/genetics, Likelihood Functions, Genotype, Police
19.
PLoS Comput Biol ; 20(8): e1012337, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39102450

ABSTRACT

A phylogenetic tree represents hypothesized evolutionary history for a set of taxa. Besides the branching patterns (i.e., tree topology), phylogenies contain information about the evolutionary distances (i.e., branch lengths) between all taxa in the tree, which include extant taxa (external nodes) and their last common ancestors (internal nodes). During phylogenetic tree inference, branch lengths are typically co-estimated along with other phylogenetic parameters during tree topology space exploration. There are well-known regions of the branch length parameter space where accurate estimation of phylogenetic trees is especially difficult. Several recent studies have demonstrated that machine learning approaches have the potential to help solve phylogenetic problems with greater accuracy and computational efficiency. In this study, as a proof of concept, we sought to explore whether machine learning models can predict branch lengths. To that end, we designed several deep learning frameworks to estimate branch lengths on fixed tree topologies from multiple sequence alignments or their representations. Our results show that deep learning methods can exhibit superior performance in some difficult regions of branch length parameter space. For example, in contrast to maximum likelihood inference, which is typically used for estimating branch lengths, deep learning methods are more efficient and accurate. In general, we find that our neural networks achieve similar accuracy to a Bayesian approach and are the best-performing methods when inferring long branches that are associated with distantly related taxa. Together, our findings represent a next step toward accurate, fast, and reliable phylogenetic inference with machine learning approaches.


Subjects
Computational Biology, Deep Learning, Neural Networks (Computer), Phylogeny, Computational Biology/methods, Algorithms, Machine Learning, Genetic Models, Sequence Alignment/methods, Molecular Evolution, Likelihood Functions
20.
Biom J ; 66(6): e202300185, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39101657

ABSTRACT

There has been growing research interest in developing methodology to evaluate health care providers' performance with respect to a patient outcome. Random- and fixed-effects models are traditionally used for such a purpose. We propose a new method that uses a fusion penalty to cluster health care providers based on quasi-likelihood. Without any a priori knowledge of grouping information, our method provides a desirable data-driven approach for automatically clustering health care providers into different groups based on their performance. Further, the quasi-likelihood is more flexible and robust than the regular likelihood in that no distributional assumption is needed. An efficient alternating direction method of multipliers algorithm is developed to implement the proposed method. We show that the proposed method enjoys the oracle properties; namely, it performs as well as if the true group structure were known in advance. The consistency and asymptotic normality of the estimators are established. Simulation studies and an analysis of the national kidney transplant registry data demonstrate the utility and validity of our method.


Subjects
Biometry, Health Personnel, Cluster Analysis, Likelihood Functions, Humans, Health Personnel/statistics & numerical data, Biometry/methods, Kidney Transplantation, Algorithms