Results 1-16 of 16
1.
Water Res ; 259: 121877, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38870891

ABSTRACT

When assessing risk posed by waterborne pathogens in drinking water, it is common to use Monte Carlo simulations in Quantitative Microbial Risk Assessment (QMRA). This method accounts for the variables that affect risk and their different values in a given system. A common underlying assumption in such analyses is that all random variables are independent (i.e., one is not associated in any way with another). Although the independence assumption simplifies the analysis, it is not always correct. For example, treatment efficiency can depend on microbial concentrations if changes in microbial concentrations either affect treatment themselves or are associated with water quality changes that affect treatment (e.g., during/after climate shocks like extreme precipitation events or wildfires). Notably, the effects of erroneous assumptions of independence in QMRA have not been widely discussed. Due to the implications of drinking water safety decisions on public health protection, it is critical that risk models accurately reflect the context being studied to meaningfully support decision-making. This work illustrates how dependence between pathogen concentration and either treatment efficiency or water consumption can impact risk estimates using hypothetical scenarios of relevance to drinking water QMRA. It is shown that the mean and variance of risk estimates can change substantially with different degrees of correlation. Data from a water supply system in Calgary, Canada are also used to illustrate the effect of dependence on risk. Recognizing the difficulty of obtaining data to empirically assess dependence, a framework to guide evaluation of the effect of dependence is presented to enhance support for decision making. This work emphasizes the importance of acknowledging and discussing assumptions implicit to models.


Subjects
Decision Making, Drinking Water, Monte Carlo Method, Drinking Water/microbiology, Risk Assessment, Water Microbiology, Water Supply, Theoretical Models, Water Purification
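
A minimal sketch of the dependence effect this paper describes, assuming invented lognormal concentration and log-reduction distributions, fixed 1 L/day consumption, an exponential dose-response with r = 0.05, and a hypothetical Gaussian-copula correlation; none of these values come from the paper:

```python
# Monte Carlo QMRA sketch: mean risk with correlated vs. independent inputs.
# All distribution parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
rho = -0.6  # hypothetical: poorer treatment when concentrations are high

# Correlated standard normals via Cholesky factorization (Gaussian copula)
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.standard_normal((n, 2)) @ np.linalg.cholesky(cov).T

log10_conc = 0.0 + 1.0 * z[:, 0]      # log10 pathogens/L (assumed)
log_reduction = 4.0 + 0.5 * z[:, 1]   # treatment log10-reduction (assumed)
risk = 1 - np.exp(-0.05 * 10 ** (log10_conc - log_reduction))  # 1 L/day dose

# Independent case with the same marginal distributions, for comparison
z_ind = rng.standard_normal((n, 2))
risk_ind = 1 - np.exp(-0.05 * 10 ** (0.0 + 1.0 * z_ind[:, 0]
                                     - (4.0 + 0.5 * z_ind[:, 1])))

print(f"mean daily risk, correlated:  {risk.mean():.2e}")
print(f"mean daily risk, independent: {risk_ind.mean():.2e}")
```

With the negative correlation (treatment degrading when concentrations spike), the simulated mean risk typically comes out several-fold higher than under independence, consistent with the paper's point that the mean and variance of risk estimates shift with the degree of correlation.
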
2.
Anal Chem ; 96(16): 6245-6254, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38593420

ABSTRACT

Wastewater treatment plants (WWTPs) serve a pivotal role in transferring microplastics (MPs) from wastewater to sludge streams, thereby exerting a significant influence on their release into the environment and establishing wastewater and biosolids as vectors for MP transport and delivery. Hence, an accurate understanding of the fate and transport of MPs in WWTPs is vital. Enumeration is commonly used to estimate concentrations of MPs in performance evaluations of treatment processes, and risk assessment also typically involves MP enumeration. However, achieving high accuracy in concentration estimates is challenging due to inherent uncertainty in the analytical workflow to collect and process samples and count MPs. Here, sources of random error in MP enumeration in wastewater and other matrices were investigated using a modeling approach that addresses the sources of error associated with each step of the analysis. In particular, losses are reflected in data analysis rather than merely being measured as a validation step for MP extraction methods. A model for addressing uncertainty in the enumeration of microorganisms in water was adapted to include key assumptions relevant to the enumeration of MPs in wastewater. Critically, analytical recovery, the capacity to successfully enumerate particles considering losses and counting error, may be variable among MPs due to differences in size, shape, and type (differential analytical recovery) in addition to random variability between samples (nonconstant analytical recovery). Accordingly, differential analytical recovery among the categories of MPs was added to the existing model. This model was illustratively applied to estimate MP concentrations from simulated data and quantify uncertainty in the resulting estimates. Increasing the number of replicates, counting categories of MPs separately, and accounting for both differential and nonconstant analytical recovery improved the accuracy of MP enumeration. This work contributes to developing guidelines for analytical procedures quantifying MPs in diverse types of samples and provides a framework for enhanced interpretation of enumeration data, thereby facilitating the collection of more accurate and reliable MP data in environmental studies.
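
The hierarchical structure described here (random capture of particles in a sample, losses during processing, category-specific recovery) can be sketched as follows. The Poisson/beta/binomial assumptions mirror common quantitative-microbiology error models, and all concentrations and recovery parameters are invented for illustration:

```python
# Simulate MP counts with differential (per-category) and nonconstant
# (per-replicate) analytical recovery; parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
true_conc = {"fiber": 12.0, "fragment": 30.0}          # MPs per litre (assumed)
recovery_beta = {"fiber": (2, 3), "fragment": (8, 2)}  # differential recovery
volume_L, replicates = 1.0, 5

for cat, c in true_conc.items():
    a, b = recovery_beta[cat]
    p = rng.beta(a, b, replicates)                     # nonconstant recovery
    n_sampled = rng.poisson(c * volume_L, replicates)  # particles in each sample
    counts = rng.binomial(n_sampled, p)                # particles surviving to be counted
    naive = counts.mean() / volume_L
    corrected = counts.mean() / (volume_L * a / (a + b))  # mean-recovery correction
    print(f"{cat}: counts={counts}, naive={naive:.1f}/L, corrected={corrected:.1f}/L")
```
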

3.
Front Microbiol ; 14: 1048661, 2023.
Article in English | MEDLINE | ID: mdl-36937263

ABSTRACT

The real-time polymerase chain reaction (PCR), commonly known as quantitative PCR (qPCR), is increasingly common in environmental microbiology applications. During the COVID-19 pandemic, qPCR combined with reverse transcription (RT-qPCR) has been used to detect and quantify SARS-CoV-2 in clinical diagnoses and wastewater monitoring of local trends. Estimation of concentrations using qPCR often features a log-linear standard curve model calibrating quantification cycle (Cq) values obtained from underlying fluorescence measurements to standard concentrations. This process works well at high concentrations within a linear dynamic range but has diminishing reliability at low concentrations because it cannot explain "non-standard" data such as Cq values reflecting increasing variability at low concentrations or non-detects that do not yield Cq values at all. Here, fundamental probabilistic modeling concepts from classical quantitative microbiology were integrated into standard curve modeling approaches by reflecting well-understood mechanisms for random error in microbial data. This work showed that data diverging from the log-linear regression model at low concentrations as well as non-detects can be seamlessly integrated into enhanced standard curve analysis. The newly developed model provides improved representation of standard curve data at low concentrations while converging asymptotically upon conventional log-linear regression at high concentrations and adding no fitting parameters. Such modeling facilitates exploration of the effects of various random error mechanisms in experiments generating standard curve data, enables quantification of uncertainty in standard curve parameters, and is an important step toward quantifying uncertainty in qPCR-based concentration estimates. Improving understanding of the random error in qPCR data and standard curve modeling is especially important when low concentrations are of particular interest and inappropriate analysis can unduly affect interpretation, conclusions regarding lab performance, reported concentration estimates, and associated decision-making.
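
One well-understood mechanism alluded to above can be sketched directly: the number of template copies captured in a reaction is Poisson-distributed, non-detects arise when zero copies are present, and Cq values scatter increasingly at low concentrations. This is a simplified illustration with invented slope and intercept values, not the paper's full model:

```python
# Poisson-based view of standard curve data at low concentrations.
import numpy as np

rng = np.random.default_rng(3)

def simulate_cq(copies_per_rxn, slope=-3.4, intercept=38.0):
    n = rng.poisson(copies_per_rxn)  # template copies actually in the reaction
    if n == 0:
        return None                  # non-detect: no Cq value is produced
    # Log-linear curve applied to the realized copy number rather than the
    # nominal concentration; reproduces growing Cq variability at low copies.
    return intercept + slope * np.log10(n)

for conc in [1000, 100, 10, 1, 0.5]:
    cqs = [simulate_cq(conc) for _ in range(8)]
    print(conc, ["ND" if c is None else round(c, 1) for c in cqs])
```
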

4.
Front Microbiol ; 13: 728146, 2022.
Article in English | MEDLINE | ID: mdl-35300475

ABSTRACT

Diversity analysis of amplicon sequencing data has mainly been limited to plug-in estimates calculated using normalized data to obtain a single value of an alpha diversity metric or a single point on a beta diversity ordination plot for each sample. As recognized for count data generated using classical microbiological methods, amplicon sequence read counts obtained from a sample are random data linked to source properties (e.g., proportional composition) by a probabilistic process. Thus, diversity analysis has focused on diversity exhibited in (normalized) samples rather than probabilistic inference about source diversity. This study applies fundamentals of statistical analysis for quantitative microbiology (e.g., microscopy, plating, and most probable number methods) to sample collection and processing procedures of amplicon sequencing methods to facilitate inference reflecting the probabilistic nature of such data and evaluation of uncertainty in diversity metrics. Following description of types of random error, mechanisms such as clustering of microorganisms in the source, differential analytical recovery during sample processing, and amplification are found to invalidate a multinomial relative abundance model. The zeros often abounding in amplicon sequencing data and their implications are addressed, and Bayesian analysis is applied to estimate the source Shannon index given unnormalized data (both simulated and experimental). Inference about source diversity is found to require knowledge of the exact number of unique variants in the source, which is practically unknowable due to library size limitations and the inability to differentiate zeros corresponding to variants that are actually absent in the source from zeros corresponding to variants that were merely not detected. Given these problems with estimation of diversity in the source even when the basic multinomial model is valid, diversity analysis at the level of samples with normalized library sizes is discussed.
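
Under the basic multinomial relative abundance model (which, as noted above, clustering, differential recovery, and amplification can invalidate), Bayesian estimation of a source Shannon index from raw read counts reduces to a Dirichlet posterior. A sketch with invented counts and a uniform prior:

```python
# Posterior of the source Shannon index under a multinomial model.
import numpy as np

rng = np.random.default_rng(11)
counts = np.array([500, 300, 150, 40, 10])  # reads per variant (illustrative)
alpha_prior = np.ones(len(counts))          # uniform Dirichlet prior

# Multinomial likelihood + Dirichlet prior -> Dirichlet posterior
draws = rng.dirichlet(alpha_prior + counts, size=10_000)
shannon = -(draws * np.log(draws)).sum(axis=1)
lo, hi = np.percentile(shannon, [2.5, 97.5])
print(f"posterior Shannon index: {shannon.mean():.3f} (95% CrI {lo:.3f}-{hi:.3f})")
```

Note that this sketch assumes the number of unique variants in the source equals the number observed, which is exactly the practically unknowable quantity the paper identifies.
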

5.
Sci Rep ; 11(1): 22302, 2021 Nov 16.
Article in English | MEDLINE | ID: mdl-34785722

ABSTRACT

Amplicon sequencing has revolutionized our ability to study DNA collected from environmental samples by providing a rapid and sensitive technique for microbial community analysis that eliminates the challenges associated with lab cultivation and taxonomic identification through microscopy. In water resources management, it can be especially useful to evaluate ecosystem shifts in response to natural and anthropogenic landscape disturbances to signal potential water quality concerns, such as the detection of toxic cyanobacteria or pathogenic bacteria. Amplicon sequencing data consist of discrete counts of sequence reads, the sum of which is the library size. Groups of samples typically have different library sizes that are not representative of biological variation; library size normalization is required to meaningfully compare diversity between them. Rarefaction is a widely used normalization technique that involves the random subsampling of sequences from the initial sample library to a selected normalized library size. This process is often dismissed as statistically invalid because subsampling effectively discards a portion of the observed sequences, yet it remains prevalent in practice and the suitability of rarefying, relative to many other normalization approaches, for diversity analysis has been argued. Here, repeated rarefying is proposed as a tool to normalize library sizes for diversity analyses. This enables (i) proportionate representation of all observed sequences and (ii) characterization of the random variation introduced to diversity analyses by rarefying to a smaller library size shared by all samples. While many deterministic data transformations are not tailored to produce equal library sizes, repeatedly rarefying reflects the probabilistic process by which amplicon sequencing data are obtained as a representation of the amplified source microbial community. Specifically, it evaluates which data might have been obtained if a particular sample's library size had been smaller and allows graphical representation of the effects of this library size normalization process upon diversity analysis results.
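
Repeated rarefying is straightforward to sketch: subsample each library to a shared size many times and carry the spread of the resulting metric into downstream analysis rather than rarefying once. The library composition, depth, and repetition count below are illustrative:

```python
# Repeatedly rarefy one library and summarize the induced variation in
# the Shannon index (illustrative data).
import numpy as np

rng = np.random.default_rng(5)
library = np.repeat(np.arange(6), [900, 400, 200, 80, 15, 5])  # reads by variant
depth, n_rep = 500, 100

shannons = []
for _ in range(n_rep):
    sub = rng.choice(library, size=depth, replace=False)  # one rarefaction
    p = np.bincount(sub) / depth
    p = p[p > 0]
    shannons.append(-(p * np.log(p)).sum())

print(f"Shannon at depth {depth}: mean {np.mean(shannons):.3f}, "
      f"sd {np.std(shannons):.3f} across {n_rep} rarefactions")
```
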

6.
Viruses ; 12(9), 2020 Aug 28.
Article in English | MEDLINE | ID: mdl-32872283

ABSTRACT

Human noroviruses (HuNoVs) are the leading causative agents of epidemic and sporadic acute gastroenteritis affecting people of all ages worldwide, yet very few dose-response studies have been carried out to determine their median infectious dose. In this study, we evaluated the median infectious dose (ID50) and diarrhea dose (DD50) of the GII.4/2003 variant of HuNoV (Cin-2) in the gnotobiotic pig model of HuNoV infection and disease. Using various mathematical approaches (Reed-Muench, Dragstedt-Behrens, Spearman-Karber, exponential and approximate beta-Poisson dose-response models, and area under the curve methods), we estimated the ID50 to be 2400-3400 RNA copies and the DD50 to be 21,000-38,000 RNA copies. Contemporary dose-response models offer greater flexibility and accuracy in estimating ID50. In contrast to classical methods of endpoint estimation, dose-response modelling allows seamless analysis of data that may include inconsistent dilution factors between doses, unequal numbers of subjects per dose group, or small numbers of subjects. Although this investigation is consistent with state-of-the-art ID50 determinations and offers an advancement in clinical data analysis, it is important to underscore that such analyses remain confounded by pathogen aggregation. Regardless, determining the ID50 of the challenge virus strain is crucial for identifying the true infectiousness of HuNoVs and for the accurate evaluation of protective efficacies in pre-clinical studies of therapeutics, vaccines and other prophylactics using this reliable animal model.


Subjects
Caliciviridae Infections/virology, Norovirus/physiology, Virology/methods, Animals, Animal Disease Models, Female, Gastroenteritis/virology, Germ-Free Life, Humans, Male, Norovirus/genetics, Norovirus/pathogenicity, Swine, Virulence
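
As one of the dose-response approaches named in this entry, the exponential model gives ID50 = ln(2)/r once r is estimated by maximum likelihood. The sketch below uses invented dose groups and outcomes, not the gnotobiotic pig data:

```python
# Exponential dose-response fit and ID50 (hypothetical data).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

doses = np.array([1e3, 1e4, 1e5])  # administered RNA copies (hypothetical)
infected = np.array([1, 5, 8])     # infected per dose group (hypothetical)
group_n = np.array([8, 8, 8])

def neg_log_lik(log_r):
    p = np.clip(1 - np.exp(-np.exp(log_r) * doses), 1e-12, 1 - 1e-12)
    return -binom.logpmf(infected, group_n, p).sum()

fit = minimize_scalar(neg_log_lik, bounds=(-15, 0), method="bounded")
r = np.exp(fit.x)
print(f"fitted r = {r:.2e}, ID50 = ln(2)/r = {np.log(2)/r:.0f} RNA copies")
```

Unlike Reed-Muench or Spearman-Karber, nothing here requires equal group sizes or consistent dilution factors, which is the flexibility the abstract highlights.
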
8.
Water Res ; 176: 115702, 2020 Jun 01.
Article in English | MEDLINE | ID: mdl-32247998

ABSTRACT

The degree to which a technology used for drinking water treatment physically removes or inactivates pathogenic microorganisms is commonly expressed as a log-reduction (or log-removal) and is of central importance to the provision of microbiologically safe drinking water. Many evaluations of water treatment process performance generate or compile multiple values of microorganism log-reduction, and it is common to report the average of these log-reduction values as a summary statistic. This work provides a cautionary note against misinterpretation and misuse of averaged log-reduction values by mathematically proving that the average of a set of log-reduction values characteristically overstates the average performance of which the set of log-reduction values is believed to be representative. This has two important consequences for drinking water and food safety as well as other applications of log-reduction: 1) a technology with higher average log-reduction does not necessarily have higher average performance, and 2) risk analyses using averaged log-reduction values as point estimates of treatment efficiency will underestimate average risk-sometimes by well over an order of magnitude. When analyzing a set of log-reduction values, a summary statistic called the effective log-reduction (which averages reduction or passage rates and expresses this as a log-reduction) provides a better representation of average performance of a treatment technology.


Subjects
Drinking Water, Water Purification
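
The effective log-reduction described in this entry averages passage fractions rather than log-reductions. A worked illustration with invented values:

```python
# Arithmetic mean of log-reductions vs. effective log-reduction.
import numpy as np

lr = np.array([2.0, 3.0, 6.0])                  # observed log10-reductions
mean_lr = lr.mean()                             # 3.67: overstates performance
effective_lr = -np.log10(np.mean(10.0 ** -lr))  # ~2.44: averages passage rates
print(f"mean LR: {mean_lr:.2f}, effective LR: {effective_lr:.2f}")
```

A risk estimate computed with the 3.67 average would understate the mean passage (and hence the mean risk) by a factor of about 17 relative to the effective value of 2.44, matching the abstract's warning that the underestimation can exceed an order of magnitude.
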
9.
Risk Anal ; 40(2): 352-369, 2020 Feb.
Article in English | MEDLINE | ID: mdl-31441953

ABSTRACT

In the quest to model various phenomena, the foundational importance of parameter identifiability to sound statistical modeling may be less well appreciated than goodness of fit. Identifiability concerns the quality of objective information in data to facilitate estimation of a parameter, while nonidentifiability means there are parameters in a model about which the data provide little or no information. In purely empirical models where parsimonious good fit is the chief concern, nonidentifiability (or parameter redundancy) implies overparameterization of the model. In contrast, nonidentifiability implies underinformativeness of available data in mechanistically derived models where parameters are interpreted as having strong practical meaning. This study explores illustrative examples of structural nonidentifiability and its implications using mechanistically derived models (for repeated presence/absence analyses and dose-response of Escherichia coli O157:H7 and norovirus) drawn from quantitative microbial risk assessment. Following algebraic proof of nonidentifiability in these examples, profile likelihood analysis and Bayesian Markov Chain Monte Carlo with uniform priors are illustrated as tools to help detect model parameters that are not strongly identifiable. It is shown that identifiability should be considered during experimental design and ethics approval to ensure generated data can yield strong objective information about all mechanistic parameters of interest. When Bayesian methods are applied to a nonidentifiable model, the subjective prior effectively fabricates information about any parameters about which the data carry no objective information. Finally, structural nonidentifiability can lead to spurious models that fit data well but can yield severely flawed inferences and predictions when they are interpreted or used inappropriately.
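
A toy version of structural nonidentifiability: in a single-hit model where two mechanistic parameters enter only as a product, the profile likelihood over one of them is perfectly flat. The data and parameterization are invented for illustration:

```python
# Flat profile likelihood as a nonidentifiability diagnostic (toy model
# P = 1 - exp(-p * mu * dose), where only the product p*mu is identifiable).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

doses = np.array([10.0, 100.0, 1000.0])
infected = np.array([2, 6, 9])
n = np.array([10, 10, 10])

def nll(p, mu):
    prob = np.clip(1 - np.exp(-p * mu * doses), 1e-12, 1 - 1e-12)
    return -binom.logpmf(infected, n, prob).sum()

for p in [1e-3, 1e-2, 1e-1]:
    prof = minimize_scalar(lambda log_mu: nll(p, np.exp(log_mu)),
                           bounds=(-15.0, 10.0), method="bounded")
    print(f"p = {p:g}: profiled minimum NLL = {prof.fun:.4f}")  # identical values
```

Every value of p achieves the same optimal fit, so the data carry no objective information about p alone; this is exactly the situation in which a uniform Bayesian prior would fabricate apparent information about p.
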

10.
Front Microbiol ; 9: 2304, 2018.
Article in English | MEDLINE | ID: mdl-30344512

ABSTRACT

Accurate estimation of microbial concentrations is necessary to inform many important environmental science and public health decisions and regulations. Critically, widespread misconceptions about laboratory-reported microbial non-detects have led to their erroneous description and handling as "censored" values. This ultimately compromises their interpretation and undermines efforts to describe and model microbial concentrations accurately. Herein, these misconceptions are dispelled by (1) discussing the critical differences between discrete microbial observations and continuous data acquired using analytical chemistry methodologies and (2) demonstrating the bias introduced by statistical approaches tailored for chemistry data and misapplied to discrete microbial data. Notably, these approaches especially preclude the accurate representation of low concentrations and those estimated using microbial methods with low or variable analytical recovery, which can be expected to result in non-detects. Techniques that account for the probabilistic relationship between observed data and underlying microbial concentrations have been widely demonstrated, and their necessity for handling non-detects (in a way which is consistent with the handling of positive observations) is underscored herein. Habitual reporting of raw microbial observations and sample sizes is proposed to facilitate accurate estimation and analysis of microbial concentrations.
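
The probabilistic handling of non-detects called for here can be sketched with a Poisson likelihood in which zero counts are ordinary observations, contrasted with chemistry-style substitution. Counts and volumes are invented:

```python
# Non-detects as Poisson zeros vs. substitution of a proxy value.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import poisson

counts = np.array([0, 0, 1, 3])            # organisms counted (two non-detects)
volumes = np.array([1.0, 1.0, 1.0, 1.0])   # litres analyzed per sample

def nll(log_c):
    return -poisson.logpmf(counts, np.exp(log_c) * volumes).sum()

fit = minimize_scalar(nll, bounds=(-10, 10), method="bounded")
print(f"Poisson MLE: {np.exp(fit.x):.2f} /L")  # = total count / total volume = 1.00

substituted = np.where(counts == 0, 0.5, counts) / volumes  # half-"LOD" substitution
print(f"substitution-based mean: {substituted.mean():.2f} /L")  # biased upward
```
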

13.
Risk Anal ; 35(7): 1364-83, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25522208

ABSTRACT

Two forms of single-hit infection dose-response models have previously been developed to assess available data from human feeding trials and estimate the norovirus dose-response relationship. The mechanistic interpretations of these models include strong assumptions that warrant reconsideration: the first study includes an implicit assumption that there is no immunity to Norwalk virus among the specific study population, while the recent second study includes assumptions that such immunity could exist and that the nonimmune have no defensive barriers to prevent infection from exposure to just one virus. Both models addressed unmeasured virus aggregation in administered doses. In this work, the available data are reanalyzed using a generalization of the first model to explore these previous assumptions. It was hypothesized that concurrent estimation of an unmeasured degree of virus aggregation and important dose-response parameters could lead to structural nonidentifiability of the model (i.e., that a diverse range of alternative mechanistic interpretations yield the same optimal fit), and this is demonstrated using the profile likelihood approach and by algebraic proof. It is also demonstrated that omission of an immunity parameter can artificially inflate the estimated degree of aggregation and falsely suggest high susceptibility among the nonimmune. The currently available data support the assumption of immunity within the specific study population, but provide only weak information about the degree of aggregation and susceptibility among the nonimmune. The probability of infection at low and moderate doses may be much lower than previously asserted, but more data from strategically designed dose-response experiments are needed to provide adequate information.


Subjects
Norovirus/pathogenicity, Humans
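
A toy illustration (invented data, not the feeding-trial reanalysis) of how omitting an immunity parameter distorts single-hit inference: responses are generated with an immune fraction, then refit with a model that forces the response to approach 1 at high doses:

```python
# Fitting a no-immunity single-hit model to data generated with immunity.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import binom

rng = np.random.default_rng(4)
doses = np.array([1e2, 1e4, 1e6, 1e8])
n = 20
f_immune, r_true = 0.3, 1e-4                 # illustrative true values
p_true = (1 - f_immune) * (1 - np.exp(-r_true * doses))
infected = rng.binomial(n, p_true)

def nll(log_r):  # misspecified model omitting immunity: P = 1 - exp(-r * dose)
    p = np.clip(1 - np.exp(-np.exp(log_r) * doses), 1e-12, 1 - 1e-12)
    return -binom.logpmf(infected, n, p).sum()

fit = minimize_scalar(nll, bounds=(-25, 0), method="bounded")
print(f"true r = {r_true:g}; fitted r without immunity = {np.exp(fit.x):.2e}")
```

The misspecified model cannot reproduce the plateau below 100% response, so its parameter estimate absorbs the distortion, analogous to the artificially inflated aggregation estimates the reanalysis identifies.
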
14.
Risk Anal ; 33(9): 1677-93, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23311599

ABSTRACT

Dose-response models are the essential link between exposure assessment and computed risk values in quantitative microbial risk assessment, yet the uncertainty that is inherent to computed risks because the dose-response model parameters are estimated using limited epidemiological data is rarely quantified. Second-order risk characterization approaches incorporating uncertainty in dose-response model parameters can provide more complete information to decisionmakers by separating variability and uncertainty to quantify the uncertainty in computed risks. Therefore, the objective of this work is to develop procedures to sample from posterior distributions describing uncertainty in the parameters of exponential and beta-Poisson dose-response models using Bayes's theorem and Markov Chain Monte Carlo (in OpenBUGS). The theoretical origins of the beta-Poisson dose-response model are used to identify a decomposed version of the model that enables Bayesian analysis without the need to evaluate Kummer confluent hypergeometric functions. Herein, it is also established that the beta distribution in the beta-Poisson dose-response model cannot address variation among individual pathogens, criteria to validate use of the conventional approximation to the beta-Poisson model are proposed, and simple algorithms to evaluate actual beta-Poisson probabilities of infection are investigated. The developed MCMC procedures are applied to analysis of a case study data set, and it is demonstrated that an important region of the posterior distribution of the beta-Poisson dose-response model parameters is attributable to the absence of low-dose data. This region includes beta-Poisson models for which the conventional approximation is especially invalid and in which many beta distributions have an extreme shape with questionable plausibility.


Subjects
Campylobacter Infections/epidemiology, Campylobacter Infections/prevention & control, Risk Assessment/methods, Algorithms, Animals, Bayes Theorem, Campylobacter jejuni/metabolism, Dose-Response Relationship (Drug), Food Contamination, Food Microbiology, Humans, Infectious Disease Medicine/methods, Likelihood Functions, Markov Chains, Statistical Models, Monte Carlo Method, Poisson Distribution, Probability, Reproducibility of Results, Uncertainty
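
The point about validating the conventional beta-Poisson approximation can be checked numerically: compare P = 1 - (1 + d/β)^(-α) with the actual beta-Poisson probability obtained by averaging the single-hit exponential model over Beta(α, β)-distributed per-organism infection probabilities. The parameter values below are illustrative:

```python
# Conventional beta-Poisson approximation vs. actual beta-Poisson probability.
import numpy as np
from scipy.stats import beta as beta_dist

alpha, beta_param, dose = 0.25, 10.0, 50.0
approx = 1 - (1 + dose / beta_param) ** (-alpha)

rng = np.random.default_rng(2)
r = beta_dist.rvs(alpha, beta_param, size=200_000, random_state=rng)
actual = np.mean(1 - np.exp(-r * dose))  # Monte Carlo over single-hit probability
print(f"approximation: {approx:.4f}, actual: {actual:.4f}")
```

The two values diverge for some parameter combinations, which is why explicit criteria for using the approximation, as proposed in this paper, are needed.
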
15.
Environ Sci Technol ; 44(5): 1720-7, 2010 Mar 01.
Article in English | MEDLINE | ID: mdl-20121082

ABSTRACT

Many of the methods routinely used to quantify microscopic discrete particles and microorganisms are based on enumeration, yet these methods are often known to yield highly variable results. This variability arises from sampling error and variations in analytical recovery (i.e., losses during sample processing and errors in counting), and leads to considerable uncertainty in particle concentration or log10-reduction estimates. Conventional statistical analysis techniques based on the t-distribution are often inappropriate, however, because the data must be corrected for mean analytical recovery and may not be normally distributed with equal variance. Furthermore, these statistical approaches do not include subjective knowledge about the stochastic processes involved in enumeration. Here we develop two probabilistic models to account for the random errors in enumeration data, with emphasis on sampling error assumptions, nonconstant analytical recovery, and discussion of counting errors. These models are implemented using Bayes' theorem to yield posterior distributions (by numerical integration or Gibbs sampling) that completely quantify the uncertainty in particle concentration or log10-reduction given the experimental data and parameters that describe variability in analytical recovery. The presented approach can easily be implemented to correctly and rigorously analyze single or replicate (bio)particle enumeration data.


Subjects
Water Supply/standards, Biometry, Computer Simulation, Cryptosporidium/isolation & purification, Environmental Monitoring/methods, Environmental Monitoring/standards, Humans, Markov Chains, Statistical Models, Monte Carlo Method, Poisson Distribution, Population Density, Reproducibility of Results, Research Design, Risk Assessment, Uncertainty
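
The posterior computation described here (concentration given a count, with variability in analytical recovery) can be sketched by numerical integration on a grid. The recovery parameters, count, and volume are invented, and the paper's actual implementations use numerical integration or Gibbs sampling:

```python
# Posterior for a concentration given one count, marginalizing
# beta-distributed analytical recovery (illustrative values).
import numpy as np
from scipy.stats import poisson, beta

count, volume = 7, 10.0  # e.g., oocysts counted in 10 L (hypothetical)
a, b = 4.0, 6.0          # recovery ~ Beta(4, 6), mean 0.40 (assumed known)

c_grid = np.linspace(0.01, 10.0, 1000)     # concentration per litre
p_grid = np.linspace(1e-4, 1 - 1e-4, 400)  # analytical recovery values
dc, dp = c_grid[1] - c_grid[0], p_grid[1] - p_grid[0]

# Likelihood marginalized over recovery: E_p[ Poisson(count | c * V * p) ]
like = (poisson.pmf(count, np.outer(c_grid, p_grid) * volume)
        * beta.pdf(p_grid, a, b)).sum(axis=1) * dp
post = like / (like.sum() * dc)            # uniform prior on concentration
print(f"posterior mean concentration: {(c_grid * post).sum() * dc:.2f} /L")
```
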
16.
Environ Sci Technol ; 44(5): 1705-12, 2010 Mar 01.
Article in English | MEDLINE | ID: mdl-20108894

ABSTRACT

Enumeration-based methods that are often used to quantify microorganisms and microscopic discrete particles in aqueous systems may include losses during sample processing or errors in counting. Analytical recovery (the capacity of the analyst to successfully count each microorganism or particle of interest in a sample using a specific enumeration method) is frequently assessed by enumerating samples that are seeded with known quantities of the microorganisms or particles. Probabilistic models were developed to account for the impacts of seeding and analytical error on recovery data, and probability intervals, obtained by Monte Carlo simulation, were used to evaluate recovery experiment design (i.e., seeding method, number of seeded particles, and number of samples). The method of moments, maximum likelihood estimation, and credible intervals were used to statistically analyze recovery experiment results. Low or uncertain numbers of seeded particles were found to result in variability in recovery data that was not due to analytical recovery, and should be avoided if possible. This additional variability was found to reduce the reproducibility of experimental results and necessitated the use of statistical analysis techniques, such as maximum likelihood estimation using probabilistic models that account for the impacts of sampling and analytical error in recovery data.


Subjects
Statistical Models, Monte Carlo Method, Algorithms, Cryptosporidium/growth & development, Likelihood Functions, Poisson Distribution, Population Density, Probability, Sample Size, Uncertainty
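
The design finding about uncertain seed numbers can be reproduced in a few lines: when the seeded dose is itself random (e.g., prepared by dilution) but assumed equal to its nominal value, observed recovery fractions spread more than analytical recovery alone would produce. All values are illustrative:

```python
# Extra variability in recovery data from uncertain seed doses.
import numpy as np

rng = np.random.default_rng(9)
true_recovery, n_samples, nominal = 0.4, 1000, 100

known_seed = np.full(n_samples, nominal)       # enumerated, known seed doses
random_seed = rng.poisson(nominal, n_samples)  # dilution-based, uncertain doses

frac_known = rng.binomial(known_seed, true_recovery) / nominal
frac_random = rng.binomial(random_seed, true_recovery) / nominal  # dose assumed nominal
print(f"sd of observed recovery, known seeds:     {frac_known.std():.3f}")
print(f"sd of observed recovery, uncertain seeds: {frac_random.std():.3f}")
```
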