Results 1 - 20 of 26
1.
Regul Toxicol Pharmacol ; 146: 105525, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37972849

ABSTRACT

In October 2022, the World Health Organization (WHO) convened an expert panel in Lisbon, Portugal, to re-evaluate the 2005 WHO TEFs for chlorinated dioxin-like compounds. In contrast to earlier panels that employed expert judgement and consensus-based assignment of TEF values, the present effort employed an update to the 2006 REP database, a consensus-based weighting scheme, and Bayesian dose-response modeling and meta-analysis to derive "Best-Estimate" TEFs. The updated database contains almost double the number of datasets of the earlier version and includes metadata that informs the weighting scheme. The Bayesian analysis of this dataset results in an unbiased quantitative assessment of the congener-specific potencies with uncertainty estimates. The "Best-Estimate" TEF derived from the model was used to assign 2022 WHO-TEFs for almost all congeners, and these values were not rounded to half-logs as was done previously. The exception was the mono-ortho PCBs, for which the panel agreed to retain their 2005 WHO-TEFs because of the limited and heterogeneous data available for these compounds. Applying these new TEFs to a limited set of dioxin-like chemical concentrations measured in human milk and seafood indicates that the total toxic equivalents will tend to be lower than when using the 2005 TEFs.
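
In practice, TEFs are applied by converting each congener's measured concentration into toxic equivalents and summing. A minimal sketch of that calculation in Python, using hypothetical concentrations and placeholder TEF values (not the official 2022 WHO-TEFs):

```python
# Toy example: total TEQ = sum over congeners of (concentration_i x TEF_i).
concentrations_pg_g = {   # hypothetical congener concentrations, pg/g lipid
    "2,3,7,8-TCDD": 0.8,
    "1,2,3,7,8-PeCDD": 1.5,
    "PCB-126": 12.0,
}

tef = {                   # placeholder TEF values, for illustration only
    "2,3,7,8-TCDD": 1.0,
    "1,2,3,7,8-PeCDD": 1.0,
    "PCB-126": 0.1,
}

teq = sum(concentrations_pg_g[c] * tef[c] for c in concentrations_pg_g)
print(f"Total TEQ: {teq:.2f} pg TEQ/g lipid")
```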


Subject(s)
Dioxins, Polychlorinated Biphenyls, Polychlorinated Dibenzodioxins, Animals, Humans, Bayes Theorem, Dibenzofurans/toxicity, Polychlorinated Dibenzofurans/toxicity, Dioxins/toxicity, Mammals, Polychlorinated Biphenyls/toxicity, Polychlorinated Dibenzodioxins/toxicity, World Health Organization
2.
Regul Toxicol Pharmacol ; 141: 105389, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37061082

ABSTRACT

Toxicology analyses are built around dose-response modeling, and increasingly these methodologies use Bayesian estimation techniques. Bayesian estimation is unique in that it includes prior distributional information in the analysis, which may meaningfully affect the dose-response estimate. As such analyses are often used for human health risk assessment, the practitioner must understand the impact of adding prior information to a dose-response study. One proposal in the literature is the use of the flat uniform prior distribution, which places uniform prior probability over the dose-response model's parameters for a chosen range of values. Though the motivation for such a prior is laudable, in that it is closest in spirit to maximum likelihood estimation and its pursuit of unbiased dose-response estimates, one can show that such priors add information and may introduce unexpected biases into the analysis. This manuscript shows, through numerous empirical examples, why prior distributions that are non-informative across all endpoints of interest do not exist for dose-response models; that is, a prior chosen to leave one inferential quantity uninformed will still inform other quantities of interest.
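
The phenomenon is easy to demonstrate numerically. The sketch below, which assumes a Weibull dose-response model and arbitrary uniform prior ranges, draws parameters from flat priors and shows that the induced prior on the benchmark dose (BMD) is strongly skewed rather than non-informative:

```python
# "Flat" priors on Weibull dose-response parameters induce a decidedly
# non-uniform prior on the BMD for a 10% extra risk.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
a = rng.uniform(0.2, 4.0, n)    # shape parameter, flat prior over an assumed range
b = rng.uniform(0.01, 2.0, n)   # scale-type parameter, flat prior over an assumed range

bmr = 0.10                      # benchmark response: 10% extra risk
bmd = (-np.log(1.0 - bmr) / b) ** (1.0 / a)   # BMD implied by each parameter draw

q05, q95 = np.percentile(bmd, [5, 95])
print(f"Induced BMD prior: median={np.median(bmd):.3f}, 5th pct={q05:.3f}, 95th pct={q95:.3f}")
```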


Subject(s)
Bayes Theorem, Humans, Bias, Risk Assessment
3.
Regul Toxicol Pharmacol ; 143: 105464, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37516304

ABSTRACT

In 2005, the World Health Organization (WHO) re-evaluated the Toxic Equivalency Factors (TEFs) developed for dioxin-like compounds believed to act through the Ah receptor, based on an updated database of relative estimated potency (REP) values (the REP2004 database). This re-evaluation identified the need for a consistent approach to dose-response modeling. Further, the WHO panel discussed the significant heterogeneity of the experimental datasets and of the dataset quality underlying the REPs in the database. There is therefore a critical need for a quantitative, quality-weighted approach to characterize the TEF for each congener. To address this, a multi-tiered approach was developed that combines Bayesian dose-response fitting and meta-regression with a machine learning model that predicts REPs' quality categorizations, in order to estimate the most likely relationship between each congener and its reference and derive model-predicted TEF uncertainty distributions. As a proof of concept, this 'Best-Estimate TEF workflow' was applied to the REP2004 database to derive TEF point estimates and characterizations of uncertainty for all congeners. The model-predicted TEFs were similar to the 2005 WHO TEFs, with the data-poor congeners having larger levels of uncertainty. This transparent and reproducible computational workflow incorporates WHO expert panel recommendations and represents a substantial improvement in the TEF methodology.
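
A greatly simplified stand-in for the weighting step is sketched below: log-scale REP estimates from several hypothetical studies are combined with weights that blend inverse-variance and data-quality scores. The actual workflow uses full Bayesian dose-response fitting and meta-regression; this only illustrates the quality-weighting idea.

```python
import numpy as np

log_rep = np.log10([0.08, 0.15, 0.05, 0.12])   # hypothetical REPs from four studies
se      = np.array([0.30, 0.10, 0.25, 0.15])   # assumed standard errors (log10 scale)
quality = np.array([0.5, 1.0, 0.25, 1.0])      # assumed quality scores in [0, 1]

w = quality / se**2          # blend quality with inverse-variance weighting
w /= w.sum()

best_log_rep = np.sum(w * log_rep)
print(f"Quality-weighted 'best-estimate' REP: {10**best_log_rep:.3f}")
```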


Subject(s)
Dioxins, Polychlorinated Biphenyls, Dioxins/toxicity, Bayes Theorem, Risk Assessment, Uncertainty, Receptors, Aryl Hydrocarbon
4.
J R Stat Soc Series B Stat Methodol ; 84(4): 1198-1228, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36570797

ABSTRACT

Gaussian processes (GPs) are common components in Bayesian non-parametric models, with a rich methodological literature and strong theoretical grounding. The use of exact GPs in Bayesian models is limited to problems with at most several thousand observations because of their prohibitive computational demands. We develop a posterior sampling algorithm using H-matrix approximations that scales at O(n log² n). We show that this approximation's Kullback-Leibler divergence to the true posterior can be made arbitrarily small. Though multidimensional GPs could be used with our algorithm, d-dimensional surfaces are modeled as tensor products of univariate GPs to minimize the cost of matrix construction and maximize computational efficiency. We illustrate the performance of this fast increased fidelity approximate GP, FIFA-GP, using both simulated and non-synthetic data sets.
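
The tensor-product structure, separate from the H-matrix machinery, is what keeps matrix construction cheap: on a grid, a separable covariance factors into small univariate pieces. A minimal numpy sketch of that idea (illustrative kernel and lengthscales, not the FIFA-GP algorithm itself):

```python
import numpy as np

def rbf(x, ls):
    # squared-exponential covariance for a one-dimensional set of inputs
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

x1 = np.linspace(0, 1, 40)
x2 = np.linspace(0, 1, 50)
K1, K2 = rbf(x1, 0.2), rbf(x2, 0.3)    # 40x40 and 50x50 univariate covariances

# The full grid covariance is their Kronecker product; it is formed here only to
# show the sizes involved -- in practice only the small factors need be stored.
K = np.kron(K1, K2)
print(K1.shape, K2.shape, K.shape)
```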

5.
Environmetrics ; 33(5), 2022 Aug.
Article in English | MEDLINE | ID: mdl-36589902

ABSTRACT

When estimating a benchmark dose (BMD) from chemical toxicity experiments, model averaging is recommended by the National Institute for Occupational Safety and Health, the World Health Organization, and the European Food Safety Authority. Though numerous studies exist for model-averaged BMD estimation using dichotomous responses, fewer studies investigate it for BMD estimation using continuous responses. In this setting, model averaging a BMD poses additional problems because the assumed distribution is essential to many BMD definitions, and distributional uncertainty is underestimated when one error distribution is chosen a priori. As model averaging combines full models, there is no reason one cannot include multiple error distributions. Consequently, we define a continuous model averaging approach over distributional models and show that it is superior to single-distribution model averaging. To demonstrate this, we apply the method to simulated and experimental response data.
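
A minimal sketch of the averaging step follows, assuming the same mean model has already been fit once with normal and once with lognormal errors (the log-likelihoods, parameter counts, and BMD estimates below are made up): approximate posterior model weights are formed from BIC and used to average the BMDs.

```python
import numpy as np

fits = {
    # model: (max log-likelihood, number of parameters, BMD estimate) -- hypothetical values
    "normal":    (-112.4, 4, 3.1),
    "lognormal": (-108.9, 4, 2.4),
}
n_obs = 50

bic = {m: -2 * ll + k * np.log(n_obs) for m, (ll, k, _) in fits.items()}
w = {m: np.exp(-0.5 * (bic[m] - min(bic.values()))) for m in fits}
total = sum(w.values())
w = {m: v / total for m, v in w.items()}          # normalized model weights

bmd_ma = sum(w[m] * fits[m][2] for m in fits)     # model-averaged BMD
print({m: round(v, 3) for m, v in w.items()}, f"model-averaged BMD = {bmd_ma:.2f}")
```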

6.
Risk Anal ; 41(1): 56-66, 2021 01.
Article in English | MEDLINE | ID: mdl-33063372

ABSTRACT

To better understand the risk of exposure to food allergens, food challenge studies are designed to slowly increase the dose of an allergen delivered to allergic individuals until an objective reaction occurs. These dose-to-failure studies are used to determine acceptable intake levels and are analyzed using parametric failure time models. Though these models can provide estimates of the survival curve and risk, their parametric form may misrepresent the survival function for doses of interest, and different models that describe the data similarly may produce different dose-to-failure estimates. Motivated by predictive inference, we developed a Bayesian approach to combine survival estimates based on posterior predictive stacking, where the weights are formed to maximize posterior predictive accuracy. The approach defines a model space that is much larger than that of traditional parametric failure time modeling approaches. In our case, we use the approach to include random effects accounting for frailty components. The methodology is investigated in simulation and is used to estimate allergic population eliciting doses for multiple food allergens.
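
The stacking step can be sketched compactly: given each candidate model's held-out predictive density for every subject, choose simplex weights that maximize the summed log of the weighted predictive density. The sketch below uses random placeholder densities and a softmax parameterization to stay on the simplex.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
# rows = held-out subjects, columns = candidate failure-time models (placeholder values)
pred_dens = rng.uniform(0.05, 1.0, size=(60, 3))

def neg_stacking_objective(z):
    w = np.exp(z) / np.exp(z).sum()                # softmax keeps weights on the simplex
    return -np.sum(np.log(pred_dens @ w))

res = minimize(neg_stacking_objective, np.zeros(3), method="Nelder-Mead")
w = np.exp(res.x) / np.exp(res.x).sum()
print("stacking weights:", np.round(w, 3))
```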


Subject(s)
Bayes Theorem, Food Hypersensitivity/diagnosis, Risk Assessment/methods, Allergens/administration & dosage, Computer Simulation, Humans, Models, Statistical
7.
Risk Anal ; 40(9): 1706-1722, 2020 09.
Article in English | MEDLINE | ID: mdl-32602232

ABSTRACT

Model averaging for dichotomous dose-response estimation is preferred over estimating the benchmark dose (BMD) from a single model, but challenges in implementing these methods for general analyses remain before model averaging is feasible to use in many risk assessment applications, and there is little work on Bayesian methods that include informative prior information for both the models and the parameters of the constituent models. This article introduces a novel approach that addresses many of these challenges while providing a fully Bayesian framework. Furthermore, in contrast to methods that use Markov chain Monte Carlo, we approximate the posterior density using maximum a posteriori estimation. The approximation allows for an accurate and reproducible estimate while maintaining the speed of maximum likelihood, which is crucial in many applications such as processing massive high-throughput data sets. We assess this method by applying it to empirical laboratory dose-response data and measuring the coverage of confidence limits for the BMD. We compare the coverage of this method to that of other approaches using the same set of models. Through the simulation study, the method is shown to be markedly superior to the traditional approach of selecting a single preferred model (e.g., from the U.S. EPA BMD software) for the analysis of dichotomous data and is comparable or superior to the other approaches.
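
A minimal sketch of the MAP idea for a single constituent model follows, assuming a log-logistic dose-response, weak normal priors, and made-up data: the posterior mode is found by optimization, and a Laplace approximation around that mode gives the kind of marginal-likelihood estimate used to weight models without MCMC.

```python
import numpy as np
from scipy.optimize import minimize

dose = np.array([0.0, 1.0, 3.0, 10.0, 30.0])
n    = np.array([50, 50, 50, 50, 50])
y    = np.array([1, 3, 8, 22, 40])               # hypothetical responders

def prob(theta, d):
    # background g on logit scale, location a, log-slope theta[2]
    g, a, b = 1 / (1 + np.exp(-theta[0])), theta[1], np.exp(theta[2])
    p = g + (1 - g) / (1 + np.exp(-a - b * np.log(np.maximum(d, 1e-8))))
    return np.clip(p, 1e-8, 1 - 1e-8)

def neg_log_post(theta):
    p = prob(theta, dose)
    loglik = np.sum(y * np.log(p) + (n - y) * np.log(1 - p))
    logprior = -0.5 * np.sum(theta**2 / 4.0)       # weak Normal(0, 2) priors, an assumption
    return -(loglik + logprior)

fit = minimize(neg_log_post, np.array([-2.0, 0.0, 0.0]), method="BFGS")
H = fit.hess_inv                                    # approximate posterior covariance at the mode
log_evidence = -fit.fun + 0.5 * np.log(np.linalg.det(H)) + 1.5 * np.log(2 * np.pi)
print("MAP:", np.round(fit.x, 3), " Laplace log-evidence:", round(log_evidence, 2))
```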


Subject(s)
Bayes Theorem, Risk Assessment, Uncertainty, Dose-Response Relationship, Drug, Isocyanates/administration & dosage, Nitrosamines/administration & dosage
8.
Environmetrics ; 31(7), 2020 Nov.
Article in English | MEDLINE | ID: mdl-36052215

ABSTRACT

Protection and safety authorities recommend the use of model averaging to determine the benchmark dose as a scientifically more advanced method than the no-observed-adverse-effect-level approach for obtaining a reference point and deriving health-based guidance values. Model averaging, however, depends strongly on the set of candidate dose-response models, and this set should be rich enough to ensure that a well-fitting model is included. The set of candidate models currently applied for continuous endpoints is typically limited to two models, the exponential and the Hill model, and differs completely from the richer set of candidate models currently used for binary endpoints. The objective of this article is to propose a general and wide framework of dose-response models that can be applied to both continuous and binary endpoints and that covers the current models for both types of endpoint. In combination with the bootstrap, this framework offers a unified approach to benchmark dose estimation. The methodology is illustrated using two data sets, one with a continuous and one with a binary endpoint.
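
The bootstrap component can be illustrated in isolation. The sketch below assumes a one-parameter quantal-linear model with zero background, made-up dichotomous data, and a 10% benchmark response; it refits the model to parametric resamples and takes a lower percentile of the bootstrapped BMDs as a BMDL.

```python
import numpy as np
from scipy.optimize import minimize_scalar

dose = np.array([0.0, 5.0, 20.0, 80.0])
n    = np.array([40, 40, 40, 40])
y    = np.array([0, 4, 12, 30])                   # hypothetical responders
bmr  = 0.10

def fit_beta(yy):
    # maximum likelihood for P(d) = 1 - exp(-beta * d)
    def nll(b):
        p = np.clip(1 - np.exp(-b * dose), 1e-8, 1 - 1e-8)
        return -np.sum(yy * np.log(p) + (n - yy) * np.log(1 - p))
    return minimize_scalar(nll, bounds=(1e-6, 1.0), method="bounded").x

beta_hat = fit_beta(y)
rng = np.random.default_rng(7)
bmds = []
for _ in range(500):
    y_star = rng.binomial(n, 1 - np.exp(-beta_hat * dose))   # parametric resample
    bmds.append(-np.log(1 - bmr) / fit_beta(y_star))

print(f"BMD = {-np.log(1 - bmr) / beta_hat:.2f},  BMDL (5th pct) = {np.percentile(bmds, 5):.2f}")
```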

9.
Biometrics ; 75(1): 193-201, 2019 03.
Article in English | MEDLINE | ID: mdl-30081432

ABSTRACT

Many modern datasets are sampled with error from complex high-dimensional surfaces. Methods such as tensor product splines or Gaussian processes are effective and well suited for characterizing a surface in two or three dimensions, but they may suffer from difficulties when representing higher-dimensional surfaces. Motivated by high-throughput toxicity testing, where observed dose-response curves are cross-sections of a surface defined by a chemical's structural properties, a model is developed to characterize this surface and predict untested chemicals' dose-responses. This manuscript proposes a novel approach that models the multidimensional surface as a sum of learned basis functions formed as the tensor product of lower-dimensional functions, which are themselves representable by a basis expansion learned from the data. The model is described, and a Gibbs sampling algorithm is proposed. The approach is investigated in a simulation study and through data taken from the US EPA's ToxCast high-throughput toxicity testing platform.
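
The tensor-product construction itself is simple to write down. The sketch below uses fixed polynomial bases purely for illustration (the paper learns its basis functions from the data): the two-dimensional design matrix is the row-wise Kronecker product of two univariate basis expansions, and higher dimensions repeat the same pattern.

```python
import numpy as np

def poly_basis(x, degree=4):
    # univariate polynomial basis expansion (a stand-in for a learned basis)
    return np.column_stack([x**k for k in range(degree + 1)])

rng = np.random.default_rng(3)
x1, x2 = rng.uniform(size=200), rng.uniform(size=200)    # two coordinates per observation

B1, B2 = poly_basis(x1), poly_basis(x2)                   # 200 x 5 each
B = np.einsum("ij,ik->ijk", B1, B2).reshape(len(x1), -1)  # row-wise Kronecker: 200 x 25

# least-squares fit of a toy surface using the tensor-product design matrix
coef, *_ = np.linalg.lstsq(B, np.sin(4 * x1) * np.cos(3 * x2), rcond=None)
print(B.shape, coef.shape)
```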


Subject(s)
Bayes Theorem, Toxicity Tests/statistics & numerical data, Animals, Computer Simulation, Dose-Response Relationship, Drug, Environmental Pollutants/pharmacology, High-Throughput Screening Assays/methods, Humans, Normal Distribution, Quantitative Structure-Activity Relationship, Toxicity Tests/methods
10.
Risk Anal ; 39(3): 616-629, 2019 03.
Article in English | MEDLINE | ID: mdl-30368842

ABSTRACT

Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment are facilitated by the availability of a set of experimental studies that span the range of dose-response patterns observed in practice. We describe the construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors for Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose-response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose-response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose-response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
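
The basic idea of turning historical fits into a prior can be sketched in a few lines; the values below are simulated stand-ins for parameter estimates from the 733 curated datasets, not the database itself.

```python
import numpy as np

rng = np.random.default_rng(11)
# stand-in for log-slope estimates obtained by fitting each historical dataset
historical_log_slopes = rng.normal(loc=-1.0, scale=0.8, size=733)

mu, sigma = historical_log_slopes.mean(), historical_log_slopes.std(ddof=1)
print(f"Empirical prior for the log-slope: Normal(mu={mu:.2f}, sigma={sigma:.2f})")
```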


Subject(s)
Bayes Theorem, Databases, Factual, Risk Assessment/methods, alpha-Chlorohydrin/toxicity, Animals, Dose-Response Relationship, Drug, Humans, Male, Probability, Programming Languages, Public Health, Rats, Rats, Sprague-Dawley, Software, Uncertainty, alpha-Chlorohydrin/analysis
11.
Risk Anal ; 37(10): 1865-1878, 2017 10.
Article in English | MEDLINE | ID: mdl-28032899

ABSTRACT

Human variability is a very important factor considered in human health risk assessment for protecting sensitive populations from chemical exposure. Traditionally, an interhuman uncertainty factor is applied to lower the exposure limit to account for this variability. However, using a fixed uncertainty factor rather than probabilistically accounting for human variability cannot readily support the probabilistic risk assessment advocated by a number of researchers; new methods are needed to quantify human population variability probabilistically. We propose a Bayesian hierarchical model to quantify variability among different populations. This approach jointly characterizes the distribution of risk at background exposure and the sensitivity of response to exposure, which are commonly represented by model parameters. We demonstrate, through both an application to real data and a simulation study, that the proposed hierarchical structure adequately characterizes variability across different populations.
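
The effect of a hierarchical structure is partial pooling: population-specific estimates are shrunk toward a shared mean in proportion to how noisy they are. A toy normal-normal sketch of that behavior (made-up estimates and standard errors, not the paper's model):

```python
import numpy as np

slope_hat = np.array([0.8, 1.4, 0.5, 1.1])      # hypothetical per-population slope estimates
se        = np.array([0.30, 0.15, 0.40, 0.20])  # their standard errors

mu = np.average(slope_hat, weights=1 / se**2)                  # crude shared mean
tau2 = max(np.var(slope_hat, ddof=1) - np.mean(se**2), 1e-6)   # crude between-population variance

shrink = tau2 / (tau2 + se**2)                   # noisier estimates are shrunk more
slope_partial_pool = mu + shrink * (slope_hat - mu)
print(np.round(slope_partial_pool, 2))
```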


Subject(s)
Arsenic/toxicity, Cardiovascular Diseases/chemically induced, Dose-Response Relationship, Drug, Risk Assessment/methods, Algorithms, Bayes Theorem, Genetic Variation, Humans, Markov Chains, Probability, Uncertainty
12.
Risk Anal ; 37(11): 2107-2118, 2017 11.
Article in English | MEDLINE | ID: mdl-28555874

ABSTRACT

Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take into account the response distribution and, consequently, cannot be interpreted in terms of probability statements about the target population. We investigate quantile regression as an alternative to median or mean regression. By defining the dose-response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure-response quantile relationship, which gives the model the flexibility to estimate the quantile dose-response function. We describe this methodology and apply it to both epidemiology and toxicology data.
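
Once a quantile curve has been fit, the benchmark dose is simply the dose at which that curve crosses the impairment threshold. A minimal sketch, assuming a made-up monotone 5th-percentile dose-response function and threshold:

```python
import numpy as np
from scipy.optimize import brentq

def q05_response(dose):
    # stand-in for a fitted monotone (decreasing) 5th-percentile dose-response curve
    return 100.0 - 18.0 * np.log1p(dose)

impairment_threshold = 80.0
bmd = brentq(lambda d: q05_response(d) - impairment_threshold, 1e-6, 100.0)
print(f"BMD at which the 5th percentile reaches the threshold: {bmd:.2f}")
```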

13.
Regul Toxicol Pharmacol ; 67(1): 75-82, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23831127

ABSTRACT

Experiments with relatively high doses are often used to predict risks at appreciably lower doses. A point of departure (PoD) can be calculated as the dose associated with a specified moderate response level that is often within the range of experimental doses considered. A linear extrapolation to lower doses often follows. An alternative to the PoD method is to develop a model that accounts for the model uncertainty in the dose-response relationship and to use this model to estimate the risk at low doses. Two such approaches that account for model uncertainty are model averaging (MA) and semi-parametric methods. We use these methods, along with the PoD approach, in the context of a large (40,000+ animal) bioassay that exhibited sub-linearity. When models are fit to high-dose data and risks at low doses are predicted, the methods that account for model uncertainty produce dose estimates associated with an excess risk that are closer to the observed risk than the PoD linearization. This comparison provides empirical support to accompany previous simulation studies suggesting that methods incorporating model uncertainty provide viable, and arguably preferred, alternatives to linear extrapolation from a PoD.
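
For reference, the PoD linearization being compared against amounts to the small calculation below (hypothetical numbers): take the dose at a 10% extra risk as the PoD and extrapolate risk linearly through the origin.

```python
bmd10 = 12.0          # hypothetical PoD: dose associated with 10% extra risk
slope = 0.10 / bmd10  # linear extrapolation slope (extra risk per unit dose)

low_dose = 0.05
print(f"Linearized extra risk at dose {low_dose}: {slope * low_dose:.2e}")
```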


Subject(s)
Models, Biological, Uncertainty, Animals, Benchmarking, Dose-Response Relationship, Drug, Risk Assessment
14.
Comput Toxicol ; 25, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36909352

ABSTRACT

The need to analyze the complex relationships observed in high-throughput toxicogenomic and other omic platforms has resulted in an explosion of methodological advances in computational toxicology. However, advancements in the literature often outpace the development of software researchers can implement in their pipelines, and existing software is frequently based on pre-specified workflows built from well-vetted assumptions that may not be optimal for novel research questions. Accordingly, there is a need for a stable platform and open-source codebase attached to a programming language that allows users to program new algorithms. To fill this gap, the Biostatistics and Computational Biology Branch of the National Institute of Environmental Health Sciences, in cooperation with the National Toxicology Program (NTP) and US Environmental Protection Agency (EPA), developed ToxicR, an open-source R programming package. The ToxicR platform implements many of the standard analyses used by the NTP and EPA, including dose-response analyses for continuous and dichotomous data that employ Bayesian, maximum likelihood, and model averaging methods, as well as many standard tests the NTP uses in rodent toxicology and carcinogenicity studies, such as the poly-K and Jonckheere trend tests. ToxicR is built on the same codebase as current versions of the EPA's Benchmark Dose software and NTP's BMDExpress software but has increased flexibility because it directly accesses this software. To demonstrate ToxicR, we developed a custom workflow to illustrate its capabilities for analyzing toxicogenomic data. The unique features of ToxicR will allow researchers in other fields to add modules, increasing its functionality in the future.

15.
Comput Toxicol ; 21, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35083394

ABSTRACT

Computational methods for genomic dose-response integrate dose-response modeling with bioinformatics tools to evaluate changes in molecular and cellular functions related to pathogenic processes. These methods use parametric models to describe each gene's dose-response, but such models may not adequately capture expression changes. Additionally, current approaches do not consider gene co-expression networks. When assessing co-expression networks, one typically does not consider the dose-response relationship, resulting in 'co-regulated' gene sets containing genes with different dose-response patterns. To avoid these limitations, we develop an analysis pipeline called Aggregated Local Extrema Splines for High-throughput Analysis (ALOHA), which computes individual genomic dose-response functions using a flexible class of Bayesian shape-constrained splines and clusters gene co-regulation based upon these fits. Using splines, we reduce information loss due to parametric lack-of-fit issues, and because we cluster on dose-response relationships, we better identify co-regulation clusters for genes that have co-expressed dose-response patterns from chemical exposure. The clustered pathways can then be used to estimate a dose associated with a pre-specified biological response, i.e., the benchmark dose (BMD), and to approximate a point-of-departure dose corresponding to minimal adverse response in the whole tissue/organism. We compare our approach to current parametric methods and compare our biologically enriched gene sets to those obtained by clustering on normalized expression data. Using this methodology, we can more effectively extract the underlying structure, leading to more cohesive estimates of gene set potency.
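
The two-step idea, smooth each gene's dose-response and then cluster on the fitted curves rather than on raw expression, can be sketched with off-the-shelf tools; ordinary smoothing splines and k-means below stand in for ALOHA's Bayesian shape-constrained splines and clustering, and the expression matrix is simulated.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
doses = np.array([0.0, 0.3, 1.0, 3.0, 10.0, 30.0])
n_genes = 200
# toy expression matrix (genes x doses): monotone-ish trends plus noise
expr = np.vstack([np.tanh(rng.normal() * np.log1p(doses)) + rng.normal(0, 0.1, len(doses))
                  for _ in range(n_genes)])

# step 1: smooth each gene's dose-response on a common dose grid
grid = np.linspace(doses.min(), doses.max(), 50)
fitted = np.vstack([UnivariateSpline(doses, g, k=3, s=0.05)(grid) for g in expr])

# step 2: cluster genes on the fitted curves, not on the raw expression values
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(fitted)
print(np.bincount(labels))
```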

16.
Ann Appl Stat ; 15(3): 1405-1430, 2021 Sep.
Article in English | MEDLINE | ID: mdl-35765365

ABSTRACT

Today there are approximately 85,000 chemicals regulated under the Toxic Substances Control Act, with around 2,000 new chemicals introduced each year. It is impossible to screen all of these chemicals for potential toxic effects, either via full organism in vivo studies or in vitro high-throughput screening (HTS) programs. Toxicologists face the challenge of choosing which chemicals to screen, and predicting the toxicity of as yet unscreened chemicals. Our goal is to describe how variation in chemical structure relates to variation in toxicological response to enable in silico toxicity characterization designed to meet both of these challenges. With our Bayesian partially Supervised Sparse and Smooth Factor Analysis (BS3FA) model, we learn a distance between chemicals targeted to toxicity, rather than one based on molecular structure alone. Our model also enables the prediction of chemical dose-response profiles based on chemical structure (i.e., without in vivo or in vitro testing) by taking advantage of a large database of chemicals that have already been tested for toxicity in HTS programs. We show superior simulation performance in distance learning and modest to large gains in predictive ability compared to existing methods. Results from the high-throughput screening data application elucidate the relationship between chemical structure and a toxicity-relevant high-throughput assay. An R package for BS3FA is available online at https://github.com/kelrenmor/bs3fa.

17.
Am J Respir Crit Care Med ; 180(3): 257-64, 2009 Aug 01.
Article in English | MEDLINE | ID: mdl-19423717

ABSTRACT

RATIONALE: Previous studies have shown associations between dust exposure or lung burden and emphysema in coal miners, although the separate contributions of the various predictors have not been clearly demonstrated. OBJECTIVES: To quantitatively evaluate the relationship between cumulative exposure to respirable coal mine dust, cigarette smoking, and other factors and emphysema severity. METHODS: The study group included 722 autopsied coal miners and nonminers in the United States. Data on work history, smoking, race, and age at death were obtained from medical records and questionnaires completed by next-of-kin. Emphysema was classified and graded using a standardized schema. Job-specific mean concentrations of respirable coal mine dust were matched with work histories to estimate cumulative exposure. Relationships between various metrics of dust exposure (including cumulative exposure and lung dust burden) and emphysema severity were investigated in weighted least squares regression models. MEASUREMENTS AND MAIN RESULTS: Emphysema severity was significantly elevated in coal miners compared with nonminers among both ever- and never-smokers (P < 0.0001). Cumulative exposure to respirable coal mine dust or coal dust retained in the lungs was a significant predictor of emphysema severity (P < 0.0001) after accounting for cigarette smoking, age at death, and race. The contributions of coal mine dust exposure and cigarette smoking to predicted emphysema severity were similar when averaged over this cohort. CONCLUSIONS: Coal dust exposure, cigarette smoking, age, and race are significant and additive predictors of emphysema severity in this study.
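
The cumulative-exposure metric used in analyses like this is simply the job-specific mean concentration multiplied by the time spent in each job, summed over the work history. A minimal sketch with a hypothetical work history:

```python
work_history = [
    # (job-specific mean respirable dust concentration in mg/m^3, years in job) -- hypothetical
    (2.0, 5),
    (1.2, 10),
    (0.6, 8),
]

cumulative_exposure = sum(conc * years for conc, years in work_history)
print(f"Cumulative exposure: {cumulative_exposure:.1f} mg/m^3-years")
```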


Subject(s)
Coal Mining, Dust, Occupational Diseases/etiology, Pulmonary Emphysema/etiology, Smoking/adverse effects, Tobacco Smoke Pollution/adverse effects, Aged, Autopsy, Female, Humans, Male, Middle Aged, Occupational Diseases/mortality, Occupational Diseases/pathology, Pulmonary Emphysema/mortality, Pulmonary Emphysema/pathology, Severity of Illness Index, Survival Rate/trends, United States/epidemiology
18.
Food Chem Toxicol ; 146: 111831, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33166672

ABSTRACT

Previously, we published selected Eliciting Dose (ED) values (i.e., ED01 and ED05 values) for 14 allergenic foods, predicted to elicit objective allergic symptoms in 1% and 5%, respectively, of the allergic population (Remington et al., 2020). These ED01 and ED05 values were presented and discussed specifically in the context of establishing Reference Doses for allergen management and calculating Action Levels for Precautionary Allergen Labeling (PAL). In the current paper, we publish the full range of ED values for these allergenic foods and provide recommendations for their use, specifically in the context of characterizing the risks of concentrations of (unintended) allergenic proteins in food products. The data provided in this publication give risk assessors access to full population ED distribution information for 14 priority allergenic foods, based on the largest threshold database worldwide. The ED distributions were established using broad international consensus regarding suitable datapoints and methods for establishing individual patients' NOAELs and LOAELs, together with state-of-the-art statistical modelling. Access to these ED data enables risk assessors to use this information for state-of-the-art food allergen risk assessment. This paper contributes to the harmonization of food allergen risk assessment, risk management, and PAL practices.
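
Given a fitted population ED distribution, the reported values are simply its low percentiles. A minimal sketch, assuming a lognormal ED distribution with made-up parameters (not values from the threshold database):

```python
import numpy as np
from scipy.stats import lognorm

geo_mean_mg, geo_sd = 75.0, 3.5                 # hypothetical lognormal parameters (mg protein)
dist = lognorm(s=np.log(geo_sd), scale=geo_mean_mg)

# ED01 and ED05 are the 1st and 5th percentiles of the eliciting-dose distribution
print(f"ED01 = {dist.ppf(0.01):.2f} mg, ED05 = {dist.ppf(0.05):.2f} mg")
```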


Subject(s)
Allergens/administration & dosage, Allergens/toxicity, Food Hypersensitivity, Dose-Response Relationship, Drug, Humans, No-Observed-Adverse-Effect Level, Risk Assessment
19.
Food Chem Toxicol ; 139: 111259, 2020 May.
Article in English | MEDLINE | ID: mdl-32179163

ABSTRACT

Food allergy and allergen management are important global public health issues. In 2011, the first iteration of our allergen threshold database (ATDB) was established based on individual NOAELs and LOAELs from oral food challenges in roughly 1750 allergic individuals. Population minimal eliciting dose (EDp) distributions based on this dataset were published for 11 allergenic foods in 2014. Systematic data collection has continued (2011-2018), and the dataset now contains over 3400 data points. The current study provides new and updated EDp values for 14 allergenic foods and incorporates a newly developed Stacked Model Averaging statistical method for interval-censored data. ED01 and ED05 values, the doses at which 1% and 5%, respectively, of the allergic population would be predicted to experience any objective allergic reaction, were determined. The 14 allergenic foods were cashew, celery, egg, fish, hazelnut, lupine, milk, mustard, peanut, sesame, shrimp (for crustacean shellfish), soy, walnut, and wheat. Updated ED01 estimates ranged from 0.03 mg for walnut protein to 26.2 mg for shrimp protein. ED05 estimates ranged from 0.4 mg for mustard protein to 280 mg for shrimp protein. The ED01 and ED05 values presented here are valuable in the risk assessment and subsequent risk management of allergenic foods.
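
Challenge data are interval-censored: each patient's eliciting dose is known only to lie between their NOAEL and LOAEL. A minimal sketch of the likelihood that exploits this, fitting a single lognormal to made-up challenge intervals (a simplification of the Stacked Model Averaging method, which averages over several such distributions):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

noael = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 0.3, 1.0, 30.0])    # hypothetical mg protein
loael = np.array([0.3, 1.0, 3.0, 10.0, 30.0, 1.0, 3.0, 100.0])

def neg_loglik(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)
    # probability the eliciting dose falls between each patient's NOAEL and LOAEL
    p = norm.cdf(np.log(loael), mu, sigma) - norm.cdf(np.log(noael), mu, sigma)
    return -np.sum(np.log(np.clip(p, 1e-12, None)))

fit = minimize(neg_loglik, x0=np.array([np.log(3.0), 0.0]), method="Nelder-Mead")
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
ed05 = np.exp(mu_hat + sigma_hat * norm.ppf(0.05))
print(f"ED05 estimate: {ed05:.2f} mg protein")
```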


Subject(s)
Allergens/immunology, Food Hypersensitivity/immunology, Allergens/administration & dosage, Animals, Arachis/chemistry, Arachis/immunology, Humans, Juglans/chemistry, Juglans/immunology, Milk/chemistry, Milk/immunology, Nuts/chemistry, Nuts/immunology, Risk Assessment, Sesamum/chemistry, Sesamum/immunology
20.
Risk Anal ; 29(2): 249-56, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19000080

ABSTRACT

With the increased availability of toxicological hazard information arising from multiple experimental sources, risk assessors are often confronted with the challenge of synthesizing all available scientific information into an analysis. This analysis is further complicated because significant between-source heterogeneity (lab-to-lab variability) is often evident. We estimate benchmark doses using hierarchical models to account for the observed heterogeneity. These models are used to construct source-specific and population-average estimates of the benchmark dose (BMD). This is illustrated with an analysis of the U.S. EPA Region IX's reference toxicity database on the effects of sodium chloride on reproduction in Ceriodaphnia dubia. Results show that such models may effectively account for the lab-to-lab heterogeneity while producing BMD estimates that more faithfully reflect the variability of the system under study. Failing to account for such heterogeneity may result in estimates with confidence intervals that are overly narrow.


Subject(s)
Water Pollutants, Chemical/analysis, Algorithms, Animals, Benchmarking, Daphnia, Data Collection, Environmental Exposure, Models, Statistical, Multivariate Analysis, Poisson Distribution, Regression Analysis, Risk, Risk Assessment, Sodium Chloride/toxicity, Software, Water Pollutants, Chemical/toxicity