Results 1 - 5 of 5
1.
J Forensic Sci; 69(1): 40-51, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37753814

ABSTRACT

There is interest in comparing the output, principally the likelihood ratio, from the two probabilistic genotyping software packages EuroForMix (EFM) and STRmix™. Many of these comparison studies are descriptive and make little or no effort to diagnose the cause of the differences. There are fundamental differences between EFM and STRmix™ that are causative of the largest set of likelihood ratio differences. This set of differences is for false donors, where there are many instances of LRs just above or below 1 for EFM that give much lower LRs in STRmix™. This is caused by the separate estimation of parameters such as allele height variance and mixture proportion using MLE under Hp and Ha for EFM. This can result in very different estimations of these parameters under Hp and Ha. It results in a departure from calibration for EFM in the region of LRs just above and below 1.
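The mechanism described above is generic enough to show with a toy calculation. The following sketch (a minimal Python illustration, not the EFM or STRmix™ models) uses a single Gaussian nuisance parameter standing in for quantities such as allele height variance, and compares an LR whose nuisance parameter is maximised separately under Hp and Ha with one computed at a single shared value. The means mu_p and mu_a, the sample size, and the plain normal likelihood are all illustrative assumptions.

```python
import numpy as np

# Toy illustration (not the EFM or STRmix models) of how maximising a
# nuisance parameter separately under Hp and Ha can pull a false donor's
# LR upward compared with using a single shared estimate.

rng = np.random.default_rng(1)
n = 20
y = rng.normal(0.0, 1.0, size=n)          # data generated under Ha (no donor)

def log_lik(y, mu, var):
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (y - mu) ** 2 / var)

# Hypothetical means standing in for "donor present" vs "donor absent"
mu_p, mu_a = 1.0, 0.0

# Separate maximum-likelihood estimates of the variance under each hypothesis
var_p = np.mean((y - mu_p) ** 2)
var_a = np.mean((y - mu_a) ** 2)

log_lr_separate = log_lik(y, mu_p, var_p) - log_lik(y, mu_a, var_a)
log_lr_shared   = log_lik(y, mu_p, var_a) - log_lik(y, mu_a, var_a)

print(f"log10 LR, separate nuisance MLEs: {log_lr_separate / np.log(10):.2f}")
print(f"log10 LR, shared nuisance value : {log_lr_shared / np.log(10):.2f}")
# By construction the separately-maximised log LR is never smaller than the
# shared-parameter one, i.e. the false donor's LR is pulled toward 1.
```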

2.
Forensic Sci Int Genet; 65: 102890, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37257308

ABSTRACT

We investigate a class of DNA mixture deconvolution algorithms based on variational inference, and we show that this can significantly reduce computational runtimes with little or no effect on the accuracy and precision of the result. In particular, we consider Stein Variational Gradient Descent (SVGD) and Variational Inference (VI) with an evidence lower-bound objective. Both provide alternatives to the commonly used Markov-Chain Monte-Carlo methods for estimating the model posterior in Bayesian probabilistic genotyping. We demonstrate that both SVGD and VI significantly reduce computational costs over the current state of the art. Importantly, VI does so without sacrificing precision or accuracy, presenting an overall improvement over previously published methods.


Subject(s)
Algorithms, Humans, Bayes Theorem, Markov Chains, Monte Carlo Method
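As a pointer to what the SVGD alternative looks like in practice, here is a minimal sketch of the generic SVGD update rule on a toy one-dimensional standard-normal target. It is not the genotyping model from the paper; the target, RBF kernel, bandwidth heuristic, step size, and particle count are all illustrative assumptions.

```python
import numpy as np

# Minimal Stein Variational Gradient Descent (SVGD) sketch on a 1-D
# standard-normal target; only the generic update rule is shown.

def grad_log_p(x):
    # score function of a N(0, 1) target
    return -x

def svgd_step(x, eps=0.05):
    diff = x[:, None] - x[None, :]                 # pairwise differences
    h = np.median(np.abs(diff)) + 1e-6             # simple bandwidth heuristic
    k = np.exp(-diff ** 2 / (2 * h ** 2))          # RBF kernel matrix
    # attractive term (kernel-smoothed score) + repulsive term (kernel gradient)
    phi = (k @ grad_log_p(x) + (diff * k).sum(axis=1) / h ** 2) / len(x)
    return x + eps * phi

rng = np.random.default_rng(0)
particles = rng.uniform(-6.0, 6.0, size=100)
for _ in range(1000):
    particles = svgd_step(particles)

print(particles.mean(), particles.std())           # roughly 0 and 1
```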
3.
Forensic Sci Int Genet; 64: 102840, 2023 May.
Article in English | MEDLINE | ID: mdl-36764220

ABSTRACT

We provide an internal validation study of a recently published precise DNA mixture algorithm based on Hamiltonian Monte Carlo sampling (Susik et al., 2022). We provide results for all 428 mixtures analysed by Riman et al. (2021) and compare the results with two state-of-the-art software products: STRmix™ v2.6 and Euroformix v3.4.0. The comparison shows that the Hamiltonian Monte Carlo method provides reliable values of likelihood ratios (LRs) close to the other methods. We further propose a novel large-scale precision benchmark and quantify the precision of the Hamiltonian Monte Carlo method, indicating its improvements over existing solutions. Finally, we analyse the influence of the factors discussed by Buckleton et al. (2022).


Subject(s)
Algorithms, Benchmarking, Humans, Genotype, Monte Carlo Method, Software
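A large-scale precision benchmark of this kind amounts to re-running a stochastic LR computation many times per case and summarising the spread of log10(LR). A minimal sketch of that bookkeeping follows; noisy_log10_lr is a hypothetical stand-in for one stochastic inference run, not a call into any of the programs above, and the noise level is an arbitrary assumption.

```python
import numpy as np

# Sketch of a run-to-run precision summary: repeat a stochastic LR
# computation R times for one case and report the spread of log10(LR).

rng = np.random.default_rng(0)

def noisy_log10_lr():
    # placeholder: a "true" log10 LR of 3 plus run-to-run sampling noise
    return 3.0 + rng.normal(0.0, 0.15)

R = 50                                        # repeated runs per mixture
runs = np.array([noisy_log10_lr() for _ in range(R)])

print(f"mean log10 LR          : {runs.mean():.3f}")
print(f"std dev across runs    : {runs.std(ddof=1):.3f}")
print(f"95% run-to-run interval: "
      f"[{np.percentile(runs, 2.5):.3f}, {np.percentile(runs, 97.5):.3f}]")
```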
4.
Forensic Sci Int Genet; 60: 102744, 2022 Sep.
Article in English | MEDLINE | ID: mdl-35853341

ABSTRACT

MOTIVATION: Analysing mixed DNA profiles is a common task in forensic genetics. Due to the complexity of the data, such analysis is often performed using Markov chain Monte Carlo (MCMC)-based genotyping algorithms. These trade off precision against execution time. When default settings (including default chain lengths) are used, changes as large as 10-fold in the inferred log-likelihood ratios (LRs) are observed when the software is run twice on the same case. So far, this uncertainty has been attributed to the stochasticity of MCMC algorithms. Since LRs translate directly to the strength of the evidence in a criminal trial, forensic laboratories desire LRs with small run-to-run variability. RESULTS: We present the use of a Hamiltonian Monte Carlo (HMC) algorithm that reduces run-to-run variability in forensic DNA mixture deconvolution by around an order of magnitude without increased runtime. We achieve this by enforcing strict convergence criteria. We show that the choice of convergence metric strongly influences precision. We validate our method by reproducing previously published results for benchmark DNA mixtures (MIX05, MIX13, and ProvedIt). We also present a complete software implementation of our algorithm that is able to leverage GPU acceleration for the inference process. For the benchmark mixtures, on consumer-grade hardware, the runtime is less than 7 min for 3 contributors, less than 35 min for 4 contributors, and less than an hour for 5 contributors with one known contributor.


Subject(s)
Algorithms, Software, Bayes Theorem, DNA/genetics, Humans, Markov Chains, Monte Carlo Method
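Convergence criteria of the kind mentioned in the abstract are typically built on diagnostics computed across independent chains. As one concrete example, here is a minimal sketch of the Gelman-Rubin statistic (R-hat) for a scalar trace; the paper's own convergence metric and thresholds are not reproduced here, and the simulated chains are purely illustrative.

```python
import numpy as np

# Gelman-Rubin statistic (R-hat) for one scalar quantity tracked across
# several independent chains (e.g. a log-likelihood trace).

def gelman_rubin(chains):
    """chains: array of shape (n_chains, n_samples)."""
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    chain_vars = chains.var(axis=1, ddof=1)
    W = chain_vars.mean()                          # within-chain variance
    B = n * chain_means.var(ddof=1)                # between-chain variance
    var_hat = (n - 1) / n * W + B / n              # pooled variance estimate
    return np.sqrt(var_hat / W)                    # ~1.0 at convergence

rng = np.random.default_rng(0)
chains = rng.normal(0.0, 1.0, size=(4, 1000))      # 4 well-mixed toy chains
print(gelman_rubin(chains))                        # close to 1.0
```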
5.
Nat Commun; 9(1): 5160, 2018 Dec 04.
Article in English | MEDLINE | ID: mdl-30514837

ABSTRACT

Modern microscopes create a data deluge with gigabytes of data generated each second, and terabytes per day. Storing and processing this data is a severe bottleneck, not fully alleviated by data compression. We argue that this is because images are processed as grids of pixels. To address this, we propose a content-adaptive representation of fluorescence microscopy images, the Adaptive Particle Representation (APR). The APR replaces pixels with particles positioned according to image content. The APR overcomes storage bottlenecks, as data compression does, but additionally overcomes memory and processing bottlenecks. Using noisy 3D images, we show that the APR adaptively represents the content of an image while maintaining image quality and that it enables orders-of-magnitude benefits across a range of image processing tasks. The APR provides a simple and efficient content-aware representation of fluorescence microscopy images.
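The core idea, sampling a signal densely where it varies and sparsely where it is flat, can be shown with a small toy example. The sketch below is not the APR algorithm itself (which derives particle cells from a resolution function over the image); it only illustrates gradient-driven adaptive sampling of a synthetic 1-D signal, and the signal, particle budget, and density floor are arbitrary assumptions.

```python
import numpy as np

# Toy content-adaptive sampling: place sample points where the signal
# changes quickly, few where it is flat, then reconstruct on the grid.

x = np.linspace(0.0, 1.0, 10_000)
signal = np.exp(-((x - 0.3) ** 2) / 0.001) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.01)

# Sampling density proportional to local gradient magnitude (plus a floor)
grad = np.abs(np.gradient(signal, x))
density = grad + 0.05 * grad.max()
cdf = np.cumsum(density)
cdf /= cdf[-1]

n_particles = 300                                  # far fewer than 10,000 pixels
particle_x = np.interp(np.linspace(0, 1, n_particles), cdf, x)
particle_val = np.interp(particle_x, x, signal)

# Reconstruct on the pixel grid and check the error of the sparse representation
recon = np.interp(x, particle_x, particle_val)
print(f"max abs reconstruction error: {np.abs(recon - signal).max():.4f}")
```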
