Results 1 - 20 of 44
1.
J Occup Environ Hyg ; 21(1): 47-57, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37874933

ABSTRACT

The NIOSH Pocket Guide to Chemical Hazards is a trusted resource that displays key information for a collection of chemicals commonly encountered in the workplace. Entries contain chemical structures; occupational exposure limit information, ranging from limits based on full-shift time-weighted averages to acute limits such as short-term exposure limits and immediately dangerous to life or health values; and a variety of other data such as chemical-physical properties and symptoms of exposure. The NIOSH Pocket Guide (NPG) is available as a printed, hardcopy book, a PDF version, an electronic database, and a downloadable application for mobile phones. All formats of the NPG allow users to access the data for each chemical separately; however, the guide does not support data analytics or visualization across chemicals. This project reformatted existing data in the NPG to make it searchable and compatible with exploration and analysis using a web application. The resulting application allows users to investigate the relationships among occupational exposure limits, examine the range and distribution of those limits, sort chemicals by health endpoint, and summarize information of particular interest; these tasks would previously have required manual extraction and analysis of the data. The usability of this application was evaluated among industrial hygienists and researchers; while the existing application seems most relevant to researchers, the open-source code and data are amenable to modification by users to increase customization.


Subjects
Occupational Exposure; United States; National Institute for Occupational Safety and Health, U.S.; Occupational Exposure/analysis; Threshold Limit Values; Workplace
2.
Risk Anal ; 37(11): 2107-2118, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28555874

ABSTRACT

Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take into account the response distribution and, consequently, cannot be interpreted based upon probability statements of the target population. We investigate quantile regression as an alternative to the use of the median or mean regression. By defining the dose-response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, in an effort to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure-response quantile relationship, which gives the model flexibility to estimate the quantal dose-response function. We describe this methodology and apply it to both epidemiology and toxicology data.
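
A minimal sketch of the quantile-regression idea from this abstract, assuming a linear quantile model and simulated data (the paper itself uses Bayesian monotonic semiparametric regression, which this does not reproduce; the threshold and all numbers are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
dose = np.repeat([0.0, 1.0, 2.0, 4.0, 8.0], 40)
# simulated continuous response that declines with dose; noise grows with dose
y = 100.0 - 4.0 * dose + rng.normal(0.0, 5.0 + dose)

def pinball(beta, tau):
    # check (pinball) loss: minimizing it fits the tau-th conditional quantile
    r = y - (beta[0] + beta[1] * dose)
    return np.sum(np.maximum(tau * r, (tau - 1.0) * r))

tau = 0.05  # model the lower 5th percentile of the response distribution
fit = minimize(pinball, x0=[100.0, -4.0], args=(tau,), method="Nelder-Mead")
b0, b1 = fit.x

# benchmark dose: dose at which the tau-quantile crosses an impairment threshold,
# i.e. the dose where 5% of the population falls at or below the threshold
threshold = 80.0  # hypothetical impairment cutoff
bmd = (threshold - b0) / b1
```

Unlike a benchmark dose anchored to a shift in the mean, this estimate carries a direct probability statement about the population, which is the point the abstract makes.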

3.
Regul Toxicol Pharmacol ; 67(1): 75-82, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23831127

ABSTRACT

Experiments with relatively high doses are often used to predict risks at appreciably lower doses. A point of departure (PoD) can be calculated as the dose associated with a specified moderate response level that is often in the range of experimental doses considered. A linear extrapolation to lower doses often follows. An alternative to the PoD method is to develop a model that accounts for the model uncertainty in the dose-response relationship and to use this model to estimate the risk at low doses. Two such approaches that account for model uncertainty are model averaging (MA) and semi-parametric methods. We use these methods, along with the PoD approach, in the context of a large (40,000+ animal) bioassay that exhibited sub-linearity. When models are fit to high-dose data and risks at low doses are predicted, the methods that account for model uncertainty produce dose estimates associated with an excess risk that are closer to the observed risk than the PoD linearization. This comparison provides empirical support to accompany previous simulation studies that suggest methods that incorporate model uncertainty provide viable, and arguably preferred, alternatives to linear extrapolation from a PoD.


Subjects
Models, Biological; Uncertainty; Animals; Benchmarking; Dose-Response Relationship, Drug; Risk Assessment
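
The model-averaging (MA) idea can be illustrated with a small maximum-likelihood sketch: two common quantal dose-response forms are fit to simulated bioassay counts and combined with Akaike weights. The data, the two model forms, and the AIC weighting are assumptions for illustration, not the paper's exact implementation:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

# simulated quantal bioassay with a sub-linear true dose-response
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
n = np.full(dose.size, 50)
rng = np.random.default_rng(7)
k = rng.binomial(n, 0.05 + 0.9 * (1.0 - np.exp(-0.05 * dose**2)))

def prob(theta, model, d):
    if model == "logistic":
        return expit(theta[0] + theta[1] * d)
    g, b = expit(theta[0]), np.exp(theta[1])  # quantal-linear with background g
    return g + (1.0 - g) * (1.0 - np.exp(-b * d))

def nll(theta, model):
    p = np.clip(prob(theta, model, dose), 1e-9, 1 - 1e-9)
    return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))

models = ("logistic", "quantal-linear")
fits, aic = {}, {}
for m in models:
    r = minimize(nll, x0=[-3.0, 0.5], args=(m,), method="Nelder-Mead")
    fits[m], aic[m] = r.x, 2 * 2 + 2 * r.fun  # AIC = 2k + 2 * neg. log-likelihood

# Akaike weights, then a model-averaged risk estimate at a low dose
delta = np.array([aic[m] - min(aic.values()) for m in models])
w = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
risk_low = sum(wi * prob(fits[m], m, 0.1) for wi, m in zip(w, models))
```

Averaging the low-dose risk across models, rather than committing to one parametric form, is what lets the MA estimate track a sub-linear truth better than a single-model linearization.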
4.
Environ Toxicol Chem ; 42(7): 1614-1623, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37014189

ABSTRACT

In aquatic toxicology experiments, organisms are randomly assigned to an exposure group that receives a particular concentration of a toxicant (including a control group with no exposure), and their survival, growth, or reproduction outcomes are recorded. Standard experiments use equal numbers of organisms in each exposure group. In the present study, we explored the potential benefits of modifying the current design of aquatic toxicology experiments when it is of interest to estimate the concentration associated with a specific level of decrease from control reproduction responses. A function of the parameter estimates from fitting a generalized linear regression model, used to describe the relationship between individual responses and the toxicant concentration, provides an estimate of the potency of the toxicant. After comparing different allocations of organisms to concentration groups, we observed that a reallocation of organisms among these groups could provide more precise estimates of toxicity endpoints than the standard design that uses an equal number of organisms in each group; this gain in precision comes without added cost of conducting the experiment. More specifically, assigning more observations to the control zero-concentration condition may result in more precise interval estimates of potency. Environ Toxicol Chem 2023;42:1614-1623. © 2023 SETAC.


Subjects
Cladocera; Water Pollutants, Chemical; Animals; Cladocera/physiology; Reproduction; Linear Models; Water Pollutants, Chemical/toxicity
5.
Comput Toxicol ; 25, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36909352

ABSTRACT

The need to analyze the complex relationships observed in high-throughput toxicogenomic and other omic platforms has resulted in an explosion of methodological advances in computational toxicology. However, advancements in the literature often outpace the development of software researchers can implement in their pipelines, and existing software is frequently based on pre-specified workflows built from well-vetted assumptions that may not be optimal for novel research questions. Accordingly, there is a need for a stable platform and open-source codebase attached to a programming language that allows users to program new algorithms. To fill this gap, the Biostatistics and Computational Biology Branch of the National Institute of Environmental Health Sciences, in cooperation with the National Toxicology Program (NTP) and US Environmental Protection Agency (EPA), developed ToxicR, an open-source R programming package. The ToxicR platform implements many of the standard analyses used by the NTP and EPA, including dose-response analyses for continuous and dichotomous data that employ Bayesian, maximum likelihood, and model averaging methods, as well as many standard tests the NTP uses in rodent toxicology and carcinogenicity studies, such as the poly-K and Jonckheere trend tests. ToxicR is built on the same codebase as current versions of the EPA's Benchmark Dose software and NTP's BMDExpress software but has increased flexibility because it directly accesses this software. To demonstrate ToxicR, we developed a custom workflow to illustrate its capabilities for analyzing toxicogenomic data. The unique features of ToxicR will allow researchers in other fields to add modules, increasing its functionality in the future.

6.
Risk Anal ; 32(7): 1207-18, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22385024

ABSTRACT

Quantitative risk assessment proceeds by first estimating a dose-response model and then inverting this model to estimate the dose that corresponds to some prespecified level of response. The parametric form of the dose-response model often plays a large role in determining this dose. Consequently, the choice of the proper model is a major source of uncertainty when estimating such endpoints. While methods exist that attempt to incorporate the uncertainty by forming an estimate based upon all models considered, such methods may fail when the true model is on the edge of the space of models considered and cannot be formed from a weighted sum of constituent models. We propose a semiparametric model for dose-response data and derive a dose estimate associated with a particular response. In this model formulation, the only restriction on the model form is that it is monotonic. We use this model to estimate the dose-response curve from a long-term cancer bioassay and compare it to methods currently used to account for model uncertainty. A small simulation study shows that the method is superior to model averaging when estimating exposure that arises from a quantal-linear dose-response mechanism, and is similar to these methods when investigating nonlinear dose-response patterns.


Subjects
Bayes Theorem; Dose-Response Relationship, Drug; Models, Statistical; Risk Assessment/methods; Animals; Computer Simulation; Hydrocarbons, Brominated/toxicity; Lung Neoplasms/chemically induced; Rats
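
As a loose frequentist stand-in for the monotonicity-only restriction described in this abstract, a pool-adjacent-violators (PAVA) fit imposes a monotone dose-response shape without any parametric form. This sketch is not the paper's Bayesian semiparametric model; the data and the target extra risk are invented:

```python
import numpy as np

def pava(y, w):
    """Pool-adjacent-violators: weighted least-squares fit under non-decreasing y."""
    blocks = []  # each block holds [mean, weight, count]
    for yi, wi in zip(y, w):
        blocks.append([float(yi), float(wi), 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(w1 * m1 + w2 * m2) / wt, wt, c1 + c2])
    fit = []
    for m, _, c in blocks:
        fit.extend([m] * c)
    return np.array(fit)

# observed tumour proportions with one non-monotone dip (hypothetical bioassay)
dose = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])
n = np.array([50, 50, 50, 50, 50, 50])
p_obs = np.array([0.04, 0.02, 0.10, 0.08, 0.30, 0.52])

p_fit = pava(p_obs, n)

# first tested dose whose monotone fitted risk reaches a target level of 0.10
target = 0.10
bmd_like = dose[int(np.argmax(p_fit >= target))]
```

The fitted curve is a step function; the paper's Bayesian machinery smooths and quantifies uncertainty around exactly this kind of monotone estimate.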
7.
Environ Epidemiol ; 5(2): e144, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33870016

ABSTRACT

Despite the precipitous decline of airborne lead concentrations following the removal of lead in gasoline, lead is still detectable in ambient air in most urban areas. Few studies, however, have examined the health effects of contemporary airborne lead concentrations in children. METHODS: We estimated monthly air lead exposure among 263 children (Cincinnati Childhood Allergy and Air Pollution Study; Cincinnati, OH; 2001-2005) using temporally scaled predictions from a validated land use model and assessed neurobehavioral outcomes at age 12 years using the parent-completed Behavioral Assessment System for Children, 2nd edition. We used distributed lag models to estimate the effect of airborne lead exposure on behavioral outcomes while adjusting for potential confounding by maternal education, community-level deprivation, blood lead concentrations, greenspace, and traffic-related air pollution. RESULTS: We identified sensitive windows during mid- and late childhood for increased anxiety and atypicality scores, whereas sensitive windows for increased aggression and attention problems were identified immediately following birth. The strongest effect was at age 12, where a 1 ng/m3 increase in airborne lead exposure was associated with a 3.1-point (95% confidence interval: 0.4, 5.7) increase in anxiety scores. No sensitive windows were identified for depression, somatization, conduct problems, hyperactivity, or withdrawal behaviors. CONCLUSIONS: We observed associations between exposure to airborne lead concentrations and poor behavioral outcomes at concentrations 10 times lower than the National Ambient Air Quality Standards set by the US Environmental Protection Agency.

8.
Environ Toxicol Chem ; 29(5): 1168-71, 2010 May.
Article in English | MEDLINE | ID: mdl-20821554

ABSTRACT

Smaller organisms may have too little tissue to allow assaying as individuals. To get a sufficient sample for assaying, a collection of smaller individual organisms is pooled together to produce a single observation for modeling and analysis. When a dataset contains a mix of pooled and individual organisms, the variances of the observations are not equal. An unweighted regression method is no longer appropriate because it assumes equal precision among the observations. A weighted regression method is more appropriate and yields more precise estimates because it assigns appropriate weights to the pooled observations. To demonstrate the benefits of a weighted analysis when some observations are pooled, bias and confidence interval (CI) properties were compared for ordinary least squares and weighted least squares t-based confidence intervals. The slope and intercept estimates were unbiased for both weighted and unweighted analyses. While CIs for the slope and intercept achieved nominal coverage, the CI lengths were smaller using a weighted analysis instead of an unweighted analysis, implying that a weighted analysis will yield greater precision.


Subjects
Environmental Monitoring/methods; Least-Squares Analysis; Models, Biological; Models, Statistical; Bias; Sample Size
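
The weighting argument is easy to demonstrate directly: a pooled observation is a mean of m organisms, so its variance is sigma^2/m and its natural weight is m. The data below are simulated, and weights proportional to pool size are the standard choice under that variance model, though the paper's exact setup may differ:

```python
import numpy as np

rng = np.random.default_rng(3)
conc = np.repeat([0.0, 1.0, 2.0, 4.0], 10)
# organisms at the two highest concentrations are pooled in groups of 5
m = np.where(conc < 2.0, 1, 5)
sigma = 2.0
# a pooled response is a mean of m individuals -> standard deviation sigma/sqrt(m)
y = 10.0 - 1.5 * conc + rng.normal(0.0, sigma / np.sqrt(m))

X = np.column_stack([np.ones_like(conc), conc])

def wls(X, y, w):
    # weighted least squares: solve (X' W X) beta = X' W y
    Xw = X * w[:, None]
    return np.linalg.solve(Xw.T @ X, Xw.T @ y)

b_unweighted = wls(X, y, np.ones(len(y)))   # OLS: ignores unequal precision
b_weighted = wls(X, y, m.astype(float))     # WLS: weight = pool size

# with known sigma, the weighted-fit covariance is sigma^2 (X' W X)^{-1}
cov_w = sigma**2 * np.linalg.inv((X * m[:, None]).T @ X)
se_slope_weighted = np.sqrt(cov_w[1, 1])
```

Both fits are unbiased, as the abstract reports; the weighted fit's smaller slope variance is what shortens its confidence intervals.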
9.
Environ Toxicol Chem ; 29(1): 212-9, 2010 Jan.
Article in English | MEDLINE | ID: mdl-20821437

ABSTRACT

Endpoints in aquatic toxicity tests can be measured using a variety of measurement scales, including dichotomous (survival), continuous (growth), and count (number of young). A distribution is assumed for an endpoint and analyses proceed accordingly. In certain situations, the assumed distribution may be incorrect, and this may lead to incorrect statistical inference. The present study considers the analysis of count effects, here motivated by the Ceriodaphnia dubia reproduction study. While the Poisson probability model is a common starting point, this distribution assumes that the mean and variance are the same. This will not be the case if there is some extraneous source of variability in the system, in which case the variability may exceed the mean. A computer simulation study was used to examine the impact of overdispersion or outliers on the analysis of count data. Methods that assumed Poisson or negative binomially distributed outcomes were compared to methods that accommodated this potential overdispersion using quasi-likelihood (QL) or generalized linear mixed models (GLMM). If the data were truly Poisson, the adjusted methods still performed at nominal type I error rates. In the case of overdispersed counts, the methods that assumed a Poisson distribution produced rejection rates that exceeded nominal levels and standard errors for regression coefficients that were too small. The negative binomial methods worked best when the data were, in fact, negative binomial but did not maintain nominal characteristics in other situations. In general, the QL and GLMM methods performed reasonably in the present study, although all procedures suffered some impact in the presence of potential outliers. In particular, QL is arguably preferred because it makes fewer assumptions than the GLMM and performed well over the range of conditions considered.


Subjects
Cladocera/drug effects; Data Interpretation, Statistical; Water Pollutants, Chemical/toxicity; Animals; Computer Simulation; Confidence Intervals; Poisson Distribution; Reproduction/drug effects
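
A minimal version of the quasi-likelihood (QL) adjustment can be written out: fit a Poisson log-linear model by iteratively reweighted least squares, then inflate the standard errors by the Pearson dispersion. The data are simulated negative-binomial-style counts; the paper's simulation design is more elaborate:

```python
import numpy as np

rng = np.random.default_rng(11)
conc = np.repeat([0.0, 0.5, 1.0, 2.0], 10)
# overdispersed counts: Poisson means scaled by gamma noise (mean 1, variance 0.5)
y = rng.poisson(np.exp(3.0 - 0.6 * conc) * rng.gamma(2.0, 0.5, size=conc.size))

X = np.column_stack([np.ones_like(conc), conc])

def poisson_irls(X, y, iters=30):
    # start from a least-squares fit on the log scale to keep exp() stable
    beta, *_ = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)
    for _ in range(iters):
        mu = np.exp(X @ beta)
        z = X @ beta + (y - mu) / mu      # working response (log link)
        XtW = (X * mu[:, None]).T         # working weights = mu for Poisson
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

beta = poisson_irls(X, y)
mu = np.exp(X @ beta)
phi = np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1])  # Pearson dispersion

cov = np.linalg.inv((X * mu[:, None]).T @ X)
se_poisson = np.sqrt(np.diag(cov))        # too small when phi > 1
se_quasi = np.sqrt(phi) * se_poisson      # quasi-likelihood standard errors
```

When phi exceeds 1 the naive Poisson standard errors understate the uncertainty, which is exactly the inflated type I error the simulation study documents.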
10.
Am J Public Health ; 99(8): 1400-8, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19542025

ABSTRACT

OBJECTIVES: We investigated the extent to which the political economy of US states, including the relative power of organized labor, predicts rates of fatal occupational injury. METHODS: We described states' political economies with 6 contextual variables measuring social and political conditions: "right-to-work" laws, union membership density, labor grievance rates, state government debt, unemployment rates, and social wage payments. We obtained data on fatal occupational injuries from the National Traumatic Occupational Fatality surveillance system and population data from the US national census. We used Poisson regression methods to analyze relationships for the years 1980 and 1995. RESULTS: States differed notably with respect to political-economic characteristics and occupational fatality rates, although these characteristics were more homogeneous within rather than between regions. Industry and workforce composition contributed significantly to differences in state injury rates, but political-economic characteristics of states were also significantly associated with injury rates, after adjustment accounting for those factors. CONCLUSIONS: Higher rates of fatal occupational injury were associated with a state policy climate favoring business over labor, with distinct regional clustering of such state policies in the South and Northeast.


Subjects
Occupational Diseases/mortality; Politics; State Government; Wounds and Injuries/mortality; Economics; Employment/statistics & numerical data; Humans; Models, Statistical; Occupational Health; United States/epidemiology
11.
Am J Ind Med ; 52(9): 683-97, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19670260

ABSTRACT

BACKGROUND: The rate of lost-time sprains and strains in private nursing homes is over three times the national average, and for back injuries, almost four times the national average. The Ohio Bureau of Workers' Compensation (BWC) has sponsored interventions that were preferentially promoted to nursing homes in 2000-2001, including training, consultation, and grants up to $40,000 for equipment purchases. METHODS: This study evaluated the impact of BWC interventions on back injury claim rates using BWC data on claims, interventions, and employer payroll for all Ohio nursing homes during 1995-2004 using Poisson regression. A subset of nursing homes was analyzed with more detailed data that allowed estimation of the impact of staffing levels and resident acuity on claim rates. Costs of interventions were compared to the associated savings in claim costs. RESULTS: A $500 equipment purchase per nursing home worker was associated with a 21% reduction in back injury rate. Assuming an equipment life of 10 years, this translates to an estimated $768 reduction in claim costs per worker, a present value of $495 with a 5% discount rate applied. Results for training courses were equivocal. Only those receiving below-median hours had a significant 19% reduction in claim rates. Injury rates did not generally decline with consultation independent of equipment purchases, although possible confounding, misclassification, and bias due to non-random management participation clouds interpretation. In nursing homes with available data, resident acuity was modestly associated with back injury risk, and the injury rate increased with resident-to-staff ratio (acting through three terms: RR = 1.50 for each additional resident per staff member; for the ratio alone, RR = 1.32, 95% CI = 1.18-1.48). In these NHs, an expenditure of $908 per resident care worker (equivalent to $500 per employee in the other model) was also associated with a 21% reduction in injury rate. 
However, with a resident-to-staff ratio greater than 2.0, the same expenditure was associated with a $1,643 reduction in back claim costs over 10 years per employee, a present value of $1,062 with 5% discount rate. CONCLUSIONS: Expenditures for ergonomic equipment in nursing homes by the Ohio BWC were associated with fewer worker injuries and reductions in claim costs that were similar in magnitude to expenditures. Un-estimated benefits and costs also need to be considered in assessing full health and financial impacts.


Subjects
Back Injuries/prevention & control; Inservice Training; Moving and Lifting Patients/instrumentation; Nursing Homes; Occupational Diseases/prevention & control; Back Injuries/economics; Humans; Moving and Lifting Patients/adverse effects; Moving and Lifting Patients/methods; Nursing Assistants/education; Occupational Diseases/economics; Ohio; Workers' Compensation; Workload
12.
Environ Toxicol Chem ; 28(5): 997-1006, 2009 May.
Article in English | MEDLINE | ID: mdl-19049261

ABSTRACT

Historically, death is the most commonly studied effect in aquatic toxicity tests. These tests typically employ a gradient of concentrations, with more than one organism exposed in a series of replicate chambers at each concentration. Whereas a binomial distribution is commonly employed for such effects, variability may exceed that predicted by binomial probability models. This additional variability could result from heterogeneity in the probabilities across the chambers in which the organisms are housed and subsequently exposed to concentrations of toxicants. Incorrectly assuming a binomial distribution for the statistical analysis may lead to incorrect statistical inference. We consider the analysis of grouped binary data, here motivated by the study of survival. We use a computer simulation study to examine the impact of overdispersion or outliers on the analysis of binary data. We compare methods that assume a binomial distribution with generalizations that accommodate this potential overdispersion. These generalizations include adjusting the standard probit model for clustering/correlation or using alternative estimation methods: generalized estimating equations or generalized linear mixed models (GLMM). When data were binomial or overdispersed binomial, none of the models exhibited any significant bias when estimating regression coefficients. When the data were truly binomial, the probit model controlled type I errors, as did the Donald and Donner method and the GLMM method. When data were overdispersed, the probit model no longer controlled type I error, and the standard errors were too small. In general, the Donald and Donner and the GLMM methods performed reasonably based on this study, although all procedures suffered some impact in the presence of potential outliers.


Subjects
Data Interpretation, Statistical; Ecotoxicology/methods; Models, Chemical; Models, Statistical; Water Pollutants; Computer Simulation; Environmental Pollutants; Water/chemistry
13.
Risk Anal ; 29(2): 249-56, 2009 Feb.
Article in English | MEDLINE | ID: mdl-19000080

ABSTRACT

With the increased availability of toxicological hazard information arising from multiple experimental sources, risk assessors are often confronted with the challenge of synthesizing all available scientific information into an analysis. This analysis is further complicated because significant between-source heterogeneity/lab-to-lab variability is often evident. We estimate benchmark doses using hierarchical models to account for the observed heterogeneity. These models are used to construct source-specific and population-average estimates of the benchmark dose (BMD). This is illustrated with an analysis of the U.S. EPA Region IX's reference toxicity database on the effects of sodium chloride on reproduction in Ceriodaphnia dubia. Results show that such models may effectively account for the lab-source heterogeneity while producing BMD estimates that more truly reflect the variability of the system under study. Failing to account for such heterogeneity may result in estimates having confidence intervals that are overly narrow.


Subjects
Water Pollutants, Chemical/analysis; Algorithms; Animals; Benchmarking; Daphnia; Data Collection; Environmental Exposure; Models, Statistical; Multivariate Analysis; Poisson Distribution; Regression Analysis; Risk; Risk Assessment; Sodium Chloride/toxicity; Software; Water Pollutants, Chemical/toxicity
14.
Risk Anal ; 29(4): 558-64, 2009 Apr.
Article in English | MEDLINE | ID: mdl-19144062

ABSTRACT

Worker populations often provide data on adverse responses associated with exposure to potential hazards. The relationship between hazard exposure levels and adverse response can be modeled and then inverted to estimate the exposure associated with some specified response level. One concern is that this endpoint may be sensitive to the concentration metric and other variables included in the model. Further, it may be that the models yielding different risk endpoints are all providing relatively similar fits. We focus on evaluating the impact of exposure on a continuous response by constructing a model-averaged benchmark concentration from a weighted average of model-specific benchmark concentrations. A method for combining the estimates based on different models is applied to lung function in a cohort of miners exposed to coal dust. In this analysis, we see that a small number of the thousands of models considered survive a filtering criterion for use in averaging. Even after filtering, the models considered yield benchmark concentrations that differ by a factor of 2 to 9 depending on the concentration metric and covariates. The model-average BMC captures this uncertainty, and provides a useful strategy for addressing model uncertainty.


Subjects
Benchmarking; Epidemiologic Studies; Anthracosis/epidemiology; Bayes Theorem; Humans
15.
J Gerontol Nurs ; 34(10): 36-44, 2008 Oct.
Article in English | MEDLINE | ID: mdl-18942538

ABSTRACT

The Minimum Data Set (MDS) is a tool used by nursing homes for resident assessment and care planning, indicating facility quality and the extent of residents' care needs. The process by which the MDS is completed by facilities has not been empirically studied. Understanding common strategies and practices for completing the MDS helps further comprehend the validity of the MDS and its relevance for focusing on residents and implementing clinical nursing interventions. This article reports on the responses to a survey questionnaire addressing this process by a sample of nursing homes in Ohio. The MDS assessment was found to be an intensive activity requiring the commitment of multiple staff members. Most facilities employed at least one full-time coordinator for this task. The importance of training was noted by a number of facilities, and the Resident Assessment Instrument manual was highlighted as one of the most valued resources for completing this assessment.


Subjects
Forms and Records Control/organization & administration; Nursing Assessment/organization & administration; Nursing Homes/organization & administration; Process Assessment, Health Care; Vocabulary, Controlled; Aged; Forms and Records Control/statistics & numerical data; Health Care Surveys; Humans; Medicaid; Medicare; Nursing Assessment/statistics & numerical data; Nursing Homes/statistics & numerical data; Ohio; Prospective Payment System; United States; Workload
16.
Environ Toxicol Chem ; 37(6): 1565-1578, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29350430

ABSTRACT

The fish acute toxicity test method is foundational to aquatic toxicity testing strategies, yet the literature lacks a concise sample size assessment. Although various sources address sample size, historical precedent seems to play a larger role than objective measures. We present a novel and comprehensive quantification of the effect of sample size on estimation of the median lethal concentration (LC50), covering a wide range of scenarios. The results put into perspective the practical differences across a range of sample sizes, from n = 5/concentration up to n = 23/concentration. We also provide a framework for setting sample size guidance illustrating ways to quantify the performance of LC50 estimation, which can be used to set sample size guidance given reasonably difficult (or worst-case) scenarios. There is a clear benefit to larger sample size studies: they reduce error in the determination of LC50s, and lead to more robust safe environmental concentration determinations, particularly in cases likely to be called worst-case (shallow slope and true LC50 near the edges of the concentration range). Given that the use of well-justified sample sizes is crucial to reducing uncertainty in toxicity estimates, these results lead us to recommend a reconsideration of the current de minimis 7/concentration sample size for critical studies (e.g., studies needed for a chemical registration, which are being tested for the first time, or involving difficult test substances). Environ Toxicol Chem 2018;37:1565-1578. © 2018 SETAC.


Subjects
Fishes; Toxicity Tests, Acute/methods; Animals; Sample Size; Water Pollutants, Chemical/toxicity
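
The core of this paper's message, that LC50 precision improves with more organisms per concentration, can be reproduced in miniature with a Monte Carlo sketch. The concentrations, slope, and replicate counts here are arbitrary choices for illustration, not the paper's scenarios:

```python
import numpy as np
from scipy.optimize import minimize

x = np.log(np.array([0.5, 1.0, 2.0, 4.0, 8.0]))  # log test concentrations
true_loglc50, slope = np.log(2.0), 1.5

def fit_loglc50(k, n):
    # two-parameter logit fit, parameterized directly by (log LC50, slope)
    def nll(t):
        p = 1.0 / (1.0 + np.exp(-t[1] * (x - t[0])))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(k * np.log(p) + (n - k) * np.log(1 - p))
    return minimize(nll, x0=[0.0, 1.0], method="Nelder-Mead").x[0]

rng = np.random.default_rng(5)

def mad_of_estimates(n_per_conc, reps=200):
    # repeat the whole acute test many times and measure the spread of LC50s
    p_true = 1.0 / (1.0 + np.exp(-slope * (x - true_loglc50)))
    est = np.array([fit_loglc50(rng.binomial(n_per_conc, p_true), n_per_conc)
                    for _ in range(reps)])
    return np.median(np.abs(est - np.median(est)))  # robust spread (MAD)

mad_n5, mad_n20 = mad_of_estimates(5), mad_of_estimates(20)
```

The spread of estimated LC50s shrinks as n per concentration grows; running the same comparison over shallow slopes and edge-of-range LC50s is how the paper builds its worst-case sample size guidance.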
17.
Toxicol Sci ; 99(2): 395-402, 2007 Oct.
Article in English | MEDLINE | ID: mdl-17483121

ABSTRACT

Physiologically based pharmacokinetic (PBPK) models are used in mode-of-action based risk and safety assessments to estimate internal dosimetry in animals and humans. When used in risk assessment, these models can provide a basis for extrapolating between species, doses, and exposure routes or for justifying nondefault values for uncertainty factors. Characterization of uncertainty and variability is increasingly recognized as important for risk assessment; this represents a continuing challenge for both PBPK modelers and users. Current practices show significant progress in specifying deterministic biological models and nondeterministic (often statistical) models, estimating parameters using diverse data sets from multiple sources, using them to make predictions, and characterizing uncertainty and variability of model parameters and predictions. The International Workshop on Uncertainty and Variability in PBPK Models, held 31 Oct-2 Nov 2006, identified the state-of-the-science, needed changes in practice and implementation, and research priorities. For the short term, these include (1) multidisciplinary teams to integrate deterministic and nondeterministic/statistical models; (2) broader use of sensitivity analyses, including for structural and global (rather than local) parameter changes; and (3) enhanced transparency and reproducibility through improved documentation of model structure(s), parameter values, sensitivity and other analyses, and supporting, discrepant, or excluded data. 
Longer-term needs include (1) theoretical and practical methodological improvements for nondeterministic/statistical modeling; (2) better methods for evaluating alternative model structures; (3) peer-reviewed databases of parameters and covariates, and their distributions; (4) expanded coverage of PBPK models across chemicals with different properties; and (5) training and reference materials, such as cases studies, bibliographies/glossaries, model repositories, and enhanced software. The multidisciplinary dialogue initiated by this Workshop will foster the collaboration, research, data collection, and training necessary to make characterizing uncertainty and variability a standard practice in PBPK modeling and risk assessment.


Subjects
Models, Biological; Pharmacokinetics; Animals; Calibration; Humans; Reproducibility of Results; Risk Assessment
18.
J Econ Entomol ; 100(6): 1945-9, 2007 Dec.
Article in English | MEDLINE | ID: mdl-18232415

ABSTRACT

A common measure of relative toxicity is the ratio of median lethal doses estimated in two bioassays. Robertson and Preisler previously proposed a method for constructing a confidence interval for this ratio. The applicability of this technique in common experimental situations, especially those involving small samples, may be questionable because the sampling distribution of the ratio estimator may be highly skewed. To examine this possibility, we conducted a computer simulation experiment to evaluate the coverage properties of the Robertson and Preisler method. The simulation showed that the method provided confidence intervals that performed at the nominal confidence level for the range of responses often observed in pesticide bioassays. Results of this study provide empirical support for the continued use of this technique.


Subjects
Insecta/drug effects; Insecticides/toxicity; Animals; Computer Simulation; Confidence Intervals; Lethal Dose 50; Models, Biological; Models, Statistical
19.
Environ Toxicol Chem ; 25(5): 1441-4, 2006 May.
Article in English | MEDLINE | ID: mdl-16704080

ABSTRACT

Experimenters in toxicology often compare the concentration-response relationship between two distinct populations using the median lethal concentration (LC50). This comparison is sometimes done by calculating the 95% confidence interval for the LC50 for each population, concluding that no significant difference exists if the two confidence intervals overlap. A more appropriate test compares the ratio of the LC50s to 1 or the log(LC50 ratio) to 0. In this ratio test, we conclude that no difference exists in LC50s if the confidence interval for the ratio of the LC50s contains 1 or the confidence interval for the log(LC50 ratio) contains 0. A Monte Carlo simulation study was conducted to compare the confidence interval overlap test to the ratio test. The confidence interval overlap test performs substantially below the nominal alpha = 0.05 level, closer to p = 0.005; therefore, it has considerably less power for detecting true differences compared to the ratio test. The ratio-based method exhibited better type I error rates and superior power properties in comparison to the confidence interval overlap test. Thus, a ratio-based statistical procedure is preferred to using simple overlap of two independently derived confidence intervals.


Subjects
Toxicology; Animals; Confidence Intervals; Cyprinidae; Fluorenes/toxicity; Larva/drug effects; Lethal Dose 50
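
The two procedures compared in this abstract can be made concrete with hypothetical fitted values. The log LC50s and standard errors below are invented; in a real analysis they would come from probit or logit fits to each population:

```python
import numpy as np
from scipy.stats import norm

# hypothetical log LC50 estimates and standard errors from two independent fits
log_lc50_a, se_a = np.log(1.8), 0.08
log_lc50_b, se_b = np.log(2.4), 0.09
z = norm.ppf(0.975)

# overlap test: declare the LC50s different only if the individual CIs are disjoint
lo_a, hi_a = log_lc50_a - z * se_a, log_lc50_a + z * se_a
lo_b, hi_b = log_lc50_b - z * se_b, log_lc50_b + z * se_b
overlap_says_no_difference = (lo_a <= hi_b) and (lo_b <= hi_a)

# ratio test: 95% CI for log(LC50_b / LC50_a); a difference exists if it excludes 0
diff = log_lc50_b - log_lc50_a
se_diff = np.hypot(se_a, se_b)  # SE of a difference of independent estimates
ci = (diff - z * se_diff, diff + z * se_diff)
ratio_says_difference = not (ci[0] <= 0.0 <= ci[1])
```

With these numbers the individual intervals overlap, so the overlap rule declares no difference, even though the ratio test rejects; this is precisely the conservatism, and the resulting loss of power, that the simulation study quantifies.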
20.
Environ Toxicol Chem ; 25(1): 248-52, 2006 Jan.
Article in English | MEDLINE | ID: mdl-16494249

ABSTRACT

The design of ecotoxicological studies requires decisions about the number and spacing of exposure groups tested, the number of replications, the spacing of sampling times, the duration of the study, and other considerations. For example, geometric spacing of sampling times or toxicant concentrations is often used as a default design. Optimal design methods in statistics can suggest alternative spacing of sampling times that yield more precise estimates of regression coefficients. In this study, we use a computer simulation to explore the impact of the spacing of sampling times and other factors on the estimation of uptake and elimination rate constants in an experiment addressing the bioaccumulation of a contaminant. Careful selection of sampling times can result in smaller standard errors for the parameter estimates, thereby allowing the construction of smaller, more precise confidence intervals. Thus, the effort invested in constructing an optimal experimental design may result in more precise inference or in a reduction of replications in an experimental design.


Subjects
Environmental Pollutants/metabolism; Models, Biological; Research Design; Animals; Computer Simulation; Nonlinear Dynamics; Pharmacokinetics; Regression Analysis
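
One way to compare candidate sampling schedules before running a bioaccumulation experiment is through the asymptotic covariance sigma^2 (J'J)^{-1} of a nonlinear least-squares fit, where J is the Jacobian of the uptake model. The one-compartment model form, parameter values, and both schedules below are assumptions for illustration:

```python
import numpy as np

ku, ke, cw, sigma = 2.0, 0.5, 1.0, 0.05  # uptake, elimination, water conc, noise SD

def param_se(times):
    # C(t) = (ku/ke) * cw * (1 - exp(-ke t)); Jacobian w.r.t. (ku, ke) at truth
    e = np.exp(-ke * times)
    d_ku = (cw / ke) * (1.0 - e)
    d_ke = ku * cw * (times * e / ke - (1.0 - e) / ke**2)
    J = np.column_stack([d_ku, d_ke])
    cov = sigma**2 * np.linalg.inv(J.T @ J)  # asymptotic covariance of (ku, ke)
    return np.sqrt(np.diag(cov))

geometric = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])   # default-style spacing
alternative = np.array([0.5, 1.0, 1.5, 6.0, 8.0, 10.0])  # early rise plus plateau
se_geometric, se_alternative = param_se(geometric), param_se(alternative)
```

Scanning many candidate schedules for the one minimizing these standard errors (or the determinant of the covariance, as in D-optimality) is the kind of search the simulation in the abstract performs; smaller predicted standard errors translate directly into the shorter confidence intervals the authors report.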