Results 1 - 20 of 26
1.
Regul Toxicol Pharmacol ; 146: 105525, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37972849

ABSTRACT

In October 2022, the World Health Organization (WHO) convened an expert panel in Lisbon, Portugal, to reevaluate the 2005 WHO TEFs for chlorinated dioxin-like compounds. In contrast to earlier panels, which employed expert judgement and consensus-based assignment of TEF values, the present effort employed an update to the 2006 REP database, a consensus-based weighting scheme, and Bayesian dose-response modeling with meta-analysis to derive "Best-Estimate" TEFs. The updated database contains almost double the number of datasets of the earlier version and includes metadata that informs the weighting scheme. The Bayesian analysis of this dataset yields an unbiased quantitative assessment of the congener-specific potencies with uncertainty estimates. The "Best-Estimate" TEF derived from the model was used to assign 2022 WHO-TEFs for almost all congeners, and these values were not rounded to half-logs as was done previously. The exception was the mono-ortho PCBs, for which the panel agreed to retain the 2005 WHO-TEFs because of the limited and heterogeneous data available for these compounds. Applying the new TEFs to a limited set of dioxin-like chemical concentrations measured in human milk and seafood indicates that total toxic equivalents will tend to be lower than those calculated with the 2005 TEFs.
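The TEQ arithmetic behind that comparison can be sketched in a few lines. The TEF and concentration values below are illustrative placeholders, not the actual 2005 or 2022 WHO assignments.

```python
# Toxic equivalents (TEQ): each congener concentration is weighted by its
# TEF and the products are summed. All numbers here are HYPOTHETICAL
# placeholders, not the real WHO TEF values.

def total_teq(concentrations, tefs):
    """Sum of concentration * TEF over all congeners in the sample."""
    return sum(conc * tefs[name] for name, conc in concentrations.items())

tefs_2005 = {"TCDD": 1.0, "PeCDD": 1.0, "OCDD": 0.0003}   # hypothetical
tefs_2022 = {"TCDD": 1.0, "PeCDD": 0.4, "OCDD": 0.0002}   # hypothetical
sample = {"TCDD": 0.5, "PeCDD": 1.2, "OCDD": 40.0}        # pg/g, hypothetical

teq_2005 = total_teq(sample, tefs_2005)
teq_2022 = total_teq(sample, tefs_2022)
print(teq_2005, teq_2022)  # the updated TEFs give a lower total TEQ here
```

Because TEQ is a simple weighted sum, any congener whose TEF is revised downward lowers the total in direct proportion to its concentration.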


Subjects
Dioxins; Polychlorinated Biphenyls; Polychlorinated Dibenzodioxins; Animals; Humans; Bayes Theorem; Dibenzofurans/toxicity; Polychlorinated Dibenzofurans/toxicity; Dioxins/toxicity; Mammals; Polychlorinated Biphenyls/toxicity; Polychlorinated Dibenzodioxins/toxicity; World Health Organization
2.
Regul Toxicol Pharmacol ; 143: 105464, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37516304

ABSTRACT

In 2005, the World Health Organization (WHO) re-evaluated the Toxic Equivalency Factors (TEFs) developed for dioxin-like compounds believed to act through the Ah receptor, based on an updated database of relative estimated potencies (the REP2004 database). This re-evaluation identified the need for a consistent approach to dose-response modeling. Further, the WHO panel discussed the significant heterogeneity of the experimental datasets and of the dataset quality underlying the REPs in the database. There is thus a critical need for a quantitative, quality-weighted approach to characterize the TEF for each congener. To address this, a multi-tiered approach was developed that combines Bayesian dose-response fitting and meta-regression with a machine learning model that predicts REPs' quality categorizations, in order to estimate the most likely relationship between each congener and its reference and to derive model-predicted TEF uncertainty distributions. As a proof of concept, this 'Best-Estimate TEF workflow' was applied to the REP2004 database to derive TEF point estimates and characterizations of uncertainty for all congeners. Model-TEFs were similar to the 2005 WHO TEFs, with the data-poor congeners having larger levels of uncertainty. This transparent and reproducible computational workflow incorporates the WHO expert panel recommendations and represents a substantial improvement in TEF methodology.


Subjects
Dioxins; Polychlorinated Biphenyls; Dioxins/toxicity; Bayes Theorem; Risk Assessment; Uncertainty; Receptors, Aryl Hydrocarbon
3.
Regul Toxicol Pharmacol ; 141: 105389, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37061082

ABSTRACT

Toxicology analyses are built around dose-response modeling, and these methodologies increasingly use Bayesian estimation techniques. Bayesian estimation is unique in that it includes prior distributional information in the analysis, which may meaningfully impact the dose-response estimate. As such analyses are often used for human health risk assessment, the practitioner must understand the impact of adding prior information to a dose-response study. One proposal in the literature is the flat uniform prior distribution, which places uniform prior probability over the dose-response model's parameters for a chosen range of values. Though the motivation for such a prior is laudable, in that it comes closest to maximum likelihood estimation in seeking unbiased estimates of the dose-response, one can show that such priors add information and may introduce unexpected biases into the analysis. This manuscript shows, through numerous empirical examples, why prior distributions that are non-informative across all endpoints of interest do not exist for dose-response models; that is, a prior chosen to leave one inferential quantity uninformed necessarily informs other quantities of interest.
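The non-existence of a globally non-informative prior can be seen with a small Monte Carlo sketch: a flat prior on a logistic model's intercept and slope induces a far-from-flat prior on the derived ED50. The model, parameter ranges, and derived quantity are illustrative choices, not the manuscript's own examples.

```python
import random

# Flat priors on parameters are not flat on derived quantities.
# Logistic dose-response P(d) = 1/(1+exp(-(a + b*d))) with uniform priors
# on (a, b); the induced prior on ED50 = -a/b piles up near zero.

random.seed(1)
ed50_draws = []
for _ in range(100_000):
    a = random.uniform(-3.0, 3.0)   # "non-informative" intercept prior
    b = random.uniform(0.1, 3.0)    # "non-informative" slope prior
    ed50 = -a / b                   # dose giving a 50% response
    if 0.0 <= ed50 <= 10.0:
        ed50_draws.append(ed50)

# Under a flat induced prior these two bins would hold equal mass.
lo = sum(1 for x in ed50_draws if x < 1.0)
hi = sum(1 for x in ed50_draws if x >= 9.0)
print(lo, hi)  # lo is many times hi: the induced ED50 prior is far from flat
```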


Subjects
Bayes Theorem; Humans; Bias; Risk Assessment
4.
Comput Toxicol ; 25, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36909352

ABSTRACT

The need to analyze the complex relationships observed in high-throughput toxicogenomic and other omic platforms has resulted in an explosion of methodological advances in computational toxicology. However, advances in the literature often outpace the development of software that researchers can implement in their pipelines, and existing software is frequently based on pre-specified workflows built from well-vetted assumptions that may not be optimal for novel research questions. Accordingly, there is a need for a stable platform and open-source codebase, attached to a programming language, that allows users to program new algorithms. To fill this gap, the Biostatistics and Computational Biology Branch of the National Institute of Environmental Health Sciences, in cooperation with the National Toxicology Program (NTP) and the US Environmental Protection Agency (EPA), developed ToxicR, an open-source R package. The ToxicR platform implements many of the standard analyses used by the NTP and EPA, including dose-response analyses for continuous and dichotomous data that employ Bayesian, maximum likelihood, and model averaging methods, as well as many standard tests the NTP uses in rodent toxicology and carcinogenicity studies, such as the poly-K and Jonckheere trend tests. ToxicR is built on the same codebase as current versions of the EPA's Benchmark Dose software and the NTP's BMDExpress software, but it has increased flexibility because users can access that codebase directly. To demonstrate ToxicR, we developed a custom workflow illustrating its capabilities for analyzing toxicogenomic data. The unique features of ToxicR will allow researchers in other fields to add modules, increasing its functionality in the future.

5.
J R Stat Soc Series B Stat Methodol ; 84(4): 1198-1228, 2022 Sep.
Article in English | MEDLINE | ID: mdl-36570797

ABSTRACT

Gaussian processes (GPs) are common components in Bayesian non-parametric models, having a rich methodological literature and strong theoretical grounding. The use of exact GPs in Bayesian models, however, is limited to problems with at most several thousand observations because of their prohibitive computational demands. We develop a posterior sampling algorithm using H-matrix approximations that scales as O(n log² n). We show that this approximation's Kullback-Leibler divergence from the true posterior can be made arbitrarily small. Though multidimensional GPs could be used with our algorithm, d-dimensional surfaces are modeled as tensor products of univariate GPs to minimize the cost of matrix construction and maximize computational efficiency. We illustrate the performance of this fast increased-fidelity approximate GP, FIFA-GP, using both simulated and non-synthetic data sets.
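The tensor-product construction mentioned here exploits separability: on a grid, a product kernel's Gram matrix is exactly the Kronecker product of small univariate Gram matrices, so only small matrices ever need to be built. A minimal sketch of that separability (not the H-matrix sampler itself):

```python
import math

# Separable 2-D covariance k((x,y),(x',y')) = k1(x,x') * k2(y,y'):
# on a grid, the full Gram matrix factorizes as the Kronecker product
# K1 (x) K2 of the two small univariate Gram matrices.

def rbf(u, v, length=1.0):
    return math.exp(-0.5 * ((u - v) / length) ** 2)

xs = [0.0, 0.5, 1.0]
ys = [0.0, 1.0]
K1 = [[rbf(a, b) for b in xs] for a in xs]   # 3x3, built once
K2 = [[rbf(a, b) for b in ys] for a in ys]   # 2x2, built once

# Full 6x6 covariance over the grid, built the expensive direct way...
grid = [(x, y) for x in xs for y in ys]
K_full = [[rbf(x1, x2) * rbf(y1, y2) for (x2, y2) in grid] for (x1, y1) in grid]

# ...and via the Kronecker product of the small matrices.
kron = [[K1[i // 2][j // 2] * K2[i % 2][j % 2] for j in range(6)]
        for i in range(6)]
match = all(abs(K_full[i][j] - kron[i][j]) < 1e-12
            for i in range(6) for j in range(6))
print(match)  # True: the grid covariance factorizes exactly
```

For an n1 x n2 grid this replaces one (n1*n2)^2 construction with two much smaller ones, which is the efficiency the abstract's tensor-product modeling choice targets.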

6.
Comput Toxicol ; 21, 2022 Feb.
Article in English | MEDLINE | ID: mdl-35083394

ABSTRACT

Computational methods for genomic dose-response analysis integrate dose-response modeling with bioinformatics tools to evaluate changes in molecular and cellular functions related to pathogenic processes. These methods use parametric models to describe each gene's dose-response, but such models may not adequately capture expression changes. Additionally, current approaches do not consider gene co-expression networks, and when co-expression networks are assessed, the dose-response relationship is typically ignored, yielding 'co-regulated' gene sets that contain genes with different dose-response patterns. To avoid these limitations, we develop an analysis pipeline called Aggregated Local Extrema Splines for High-throughput Analysis (ALOHA), which computes individual genomic dose-response functions using a flexible class of Bayesian shape-constrained splines and clusters gene co-regulation based upon these fits. Using splines, we reduce information loss due to parametric lack-of-fit, and because we cluster on dose-response relationships, we better identify co-regulation clusters for genes whose dose-response patterns are co-expressed following chemical exposure. The clustered pathways can then be used to estimate a dose associated with a pre-specified biological response, i.e., the benchmark dose (BMD), and to approximate a point-of-departure dose corresponding to minimal adverse response in the whole tissue/organism. We compare our approach to current parametric methods, and our biologically enriched gene sets to those obtained by clustering on normalized expression data. Using this methodology, we can more effectively extract the underlying structure, leading to more cohesive estimates of gene set potency.

7.
Environmetrics ; 33(5), 2022 Aug.
Article in English | MEDLINE | ID: mdl-36589902

ABSTRACT

When estimating a benchmark dose (BMD) from chemical toxicity experiments, model averaging is recommended by the National Institute for Occupational Safety and Health, the World Health Organization, and the European Food Safety Authority. Though numerous studies investigate model-averaged BMD estimation for dichotomous responses, fewer do so for continuous responses. In this setting, model averaging a BMD poses additional problems, as the assumed distribution is essential to many BMD definitions, and distributional uncertainty is underestimated when one error distribution is chosen a priori. Because model averaging combines full models, there is no reason one cannot include multiple error distributions. Consequently, we define a continuous model averaging approach over distributional models and show that it is superior to single-distribution model averaging. To demonstrate this superiority, we apply the method to simulated and experimental response data.
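A simplified sketch of averaging over error distributions: fit normal and lognormal models to the same positive responses and weight them, here by AIC as a stand-in for the posterior model weights used in the paper. The data and the two-model set are illustrative only.

```python
import math, random, statistics

# Two candidate error distributions for the same positive response data,
# combined with Akaike weights (a stand-in for posterior model weights).

random.seed(7)
data = [math.exp(random.gauss(0.0, 0.5)) for _ in range(200)]  # skewed sample

def normal_loglik(x):
    mu, sd = statistics.fmean(x), statistics.pstdev(x)  # MLE fit
    return sum(-0.5 * math.log(2 * math.pi * sd**2)
               - (v - mu)**2 / (2 * sd**2) for v in x)

def lognormal_loglik(x):
    logs = [math.log(v) for v in x]
    mu, sd = statistics.fmean(logs), statistics.pstdev(logs)
    return sum(-math.log(v) - 0.5 * math.log(2 * math.pi * sd**2)
               - (math.log(v) - mu)**2 / (2 * sd**2) for v in x)

# AIC = 2k - 2*loglik with k = 2 parameters per model.
aics = {"normal": 4 - 2 * normal_loglik(data),
        "lognormal": 4 - 2 * lognormal_loglik(data)}
best = min(aics.values())
raw = {m: math.exp(-0.5 * (a - best)) for m, a in aics.items()}
weights = {m: r / sum(raw.values()) for m, r in raw.items()}
print(weights)  # the lognormal model carries most of the weight here
```

Any BMD computed under each distributional model could then be combined with these weights, which is the step where ignoring distributional uncertainty would otherwise understate the overall uncertainty.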

8.
Ann Appl Stat ; 15(3): 1405-1430, 2021 Sep.
Article in English | MEDLINE | ID: mdl-35765365

ABSTRACT

Today there are approximately 85,000 chemicals regulated under the Toxic Substances Control Act, with around 2,000 new chemicals introduced each year. It is impossible to screen all of these chemicals for potential toxic effects, either via full organism in vivo studies or in vitro high-throughput screening (HTS) programs. Toxicologists face the challenge of choosing which chemicals to screen, and predicting the toxicity of as yet unscreened chemicals. Our goal is to describe how variation in chemical structure relates to variation in toxicological response to enable in silico toxicity characterization designed to meet both of these challenges. With our Bayesian partially Supervised Sparse and Smooth Factor Analysis (BS3FA) model, we learn a distance between chemicals targeted to toxicity, rather than one based on molecular structure alone. Our model also enables the prediction of chemical dose-response profiles based on chemical structure (i.e., without in vivo or in vitro testing) by taking advantage of a large database of chemicals that have already been tested for toxicity in HTS programs. We show superior simulation performance in distance learning and modest to large gains in predictive ability compared to existing methods. Results from the high-throughput screening data application elucidate the relationship between chemical structure and a toxicity-relevant high-throughput assay. An R package for BS3FA is available online at https://github.com/kelrenmor/bs3fa.

9.
Risk Anal ; 41(1): 56-66, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33063372

ABSTRACT

To better understand the risk of exposure to food allergens, food challenge studies are designed to slowly increase the dose of an allergen delivered to allergic individuals until an objective reaction occurs. These dose-to-failure studies are used to determine acceptable intake levels and are analyzed using parametric failure time models. Though these models can provide estimates of the survival curve and risk, their parametric form may misrepresent the survival function for doses of interest. Different models that describe the data similarly may produce different dose-to-failure estimates. Motivated by predictive inference, we developed a Bayesian approach to combine survival estimates based on posterior predictive stacking, where the weights are formed to maximize posterior predictive accuracy. The approach defines a model space that is much larger than traditional parametric failure time modeling approaches. In our case, we use the approach to include random effects accounting for frailty components. The methodology is investigated in simulation, and is used to estimate allergic population eliciting doses for multiple food allergens.
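The stacking step can be sketched with two fixed predictive densities and a weight chosen to maximize the log predictive score on held-out observations. The densities and data below are toy choices, not the paper's survival models.

```python
import math

# Predictive stacking sketch: mix two predictive densities with the weight
# that maximizes held-out log predictive accuracy. Densities and data are
# illustrative, not the failure-time models from the paper.

def dens_a(x):  # stand-in predictive density, Normal(0, 1)
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def dens_b(x):  # stand-in predictive density, Normal(2, 1)
    return math.exp(-0.5 * (x - 2.0) ** 2) / math.sqrt(2 * math.pi)

held_out = [1.8, 2.2, 1.5, 2.6, 0.1]   # mostly favors model B

def score(w):
    """Log predictive score of the w*A + (1-w)*B mixture."""
    return sum(math.log(w * dens_a(x) + (1 - w) * dens_b(x)) for x in held_out)

best_w = max((i / 100 for i in range(101)), key=score)
print(best_w)  # well below 0.5: model B dominates the stacked mixture
```

Unlike selecting the single best model, the stacked mixture keeps a small weight on model A to cover the stray observation near 0, which is exactly the predictive-accuracy motivation described above.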


Subjects
Bayes Theorem; Food Hypersensitivity/diagnosis; Risk Assessment/methods; Allergens/administration & dosage; Computer Simulation; Humans; Models, Statistical
10.
Food Chem Toxicol ; 146: 111831, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33166672

ABSTRACT

Previously, we published selected Eliciting Dose (ED) values (i.e., ED01 and ED05 values) for 14 allergenic foods, predicted to elicit objective allergic symptoms in 1% and 5%, respectively, of the allergic population (Remington et al., 2020). These ED01 and ED05 values were presented and discussed specifically in the context of establishing Reference Doses for allergen management and calculating Action Levels for Precautionary Allergen Labeling (PAL). In the current paper, we publish the full range of ED values for these allergenic foods and provide recommendations for their use, specifically for characterizing the risks of concentrations of (unintended) allergenic proteins in food products. The data provided here give risk assessors access to full population ED distributions for 14 priority allergenic foods, based on the largest threshold database worldwide. The ED distributions were established using broad international consensus on suitable datapoints and methods for establishing individual patients' NOAELs and LOAELs, together with state-of-the-art statistical modelling. Access to these ED data enables risk assessors to perform state-of-the-art food allergen risk assessment. This paper contributes to the harmonization of food allergen risk assessment, risk management, and PAL practices.


Subjects
Allergens/administration & dosage; Allergens/toxicity; Food Hypersensitivity; Dose-Response Relationship, Drug; Humans; No-Observed-Adverse-Effect Level; Risk Assessment
11.
Risk Anal ; 40(9): 1706-1722, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32602232

ABSTRACT

For dichotomous dose-response data, model averaging is preferred to estimating the benchmark dose (BMD) from a single model. Challenges remain, however, in implementing these methods for general analyses before model averaging is feasible for many risk assessment applications, and there is little work on Bayesian methods that include informative prior information for both the models and the parameters of the constituent models. This article introduces a novel approach that addresses many of these challenges while providing a fully Bayesian framework. Furthermore, in contrast to methods that use Markov chain Monte Carlo, we approximate the posterior density using maximum a posteriori estimation. The approximation allows for an accurate and reproducible estimate while maintaining the speed of maximum likelihood, which is crucial in many applications such as processing massive high-throughput data sets. We assess this method by applying it to empirical laboratory dose-response data and measuring the coverage of confidence limits for the BMD. We compare the coverage of this method to that of other approaches using the same set of models. Through the simulation study, the method is shown to be markedly superior to the traditional approach of selecting a single preferred model (e.g., from the U.S. EPA BMD software) for the analysis of dichotomous data, and it is comparable or superior to the other approaches.
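A minimal sketch of the model-averaging side of this approach for dichotomous data: two candidate models are fit and their BMDs at 10% extra risk combined with likelihood-based weights. Grid-search MLE and AIC-style weights are stand-ins for the paper's MAP estimation and posterior model weights, and the dose-response counts are hypothetical.

```python
import math

# Model-averaged BMD sketch for dichotomous data: fit logistic and probit
# models, weight by likelihood, and average their BMDs at 10% extra risk.
# Counts are hypothetical; MLE grid search stands in for MAP estimation.

doses = [0.0, 1.0, 2.0, 4.0]
n = [50, 50, 50, 50]
y = [2, 6, 15, 38]              # hypothetical responders per dose group

def logistic(d, a, b):
    return 1.0 / (1.0 + math.exp(-(a + b * d)))

def probit(d, a, b):
    return 0.5 * (1.0 + math.erf((a + b * d) / math.sqrt(2)))

def loglik(p_fn, a, b):
    ll = 0.0
    for d, ni, yi in zip(doses, n, y):
        p = min(max(p_fn(d, a, b), 1e-10), 1 - 1e-10)
        ll += yi * math.log(p) + (ni - yi) * math.log(1 - p)
    return ll

def fit(p_fn):
    grid = [(a / 20, b / 20) for a in range(-80, 1) for b in range(1, 61)]
    return max(grid, key=lambda ab: loglik(p_fn, *ab))

def bmd(p_fn, a, b, extra=0.10):
    p0 = p_fn(0.0, a, b)
    target = p0 + extra * (1 - p0)          # extra-risk definition
    lo, hi = 0.0, 10.0
    for _ in range(60):                      # bisection on the dose axis
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if p_fn(mid, a, b) < target else (lo, mid)
    return (lo + hi) / 2

results = {}
for name, fn in [("logistic", logistic), ("probit", probit)]:
    a, b = fit(fn)
    results[name] = (loglik(fn, a, b), bmd(fn, a, b))

best = max(ll for ll, _ in results.values())
raw = {m: math.exp(ll - best) for m, (ll, _) in results.items()}
w = {m: r / sum(raw.values()) for m, r in raw.items()}
bmd_ma = sum(w[m] * results[m][1] for m in results)
print(bmd_ma)  # model-averaged BMD, between the two single-model BMDs
```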


Subjects
Bayes Theorem; Risk Assessment; Uncertainty; Dose-Response Relationship, Drug; Isocyanates/administration & dosage; Nitrosamines/administration & dosage
12.
Food Chem Toxicol ; 139: 111259, 2020 May.
Article in English | MEDLINE | ID: mdl-32179163

ABSTRACT

Food allergy and allergen management are important global public health issues. In 2011, the first iteration of our allergen threshold database (ATDB) was established based on individual NOAELs and LOAELs from oral food challenges in roughly 1750 allergic individuals. Population minimal eliciting dose (EDp) distributions based on this dataset were published for 11 allergenic foods in 2014. Systematic data collection has continued (2011-2018), and the dataset now contains over 3400 data points. The current study provides new and updated EDp values for 14 allergenic foods and incorporates a newly developed Stacked Model Averaging statistical method for interval-censored data. ED01 and ED05 values, the doses at which 1% and 5%, respectively, of the relevant allergic population would be predicted to experience any objective allergic reaction, were determined. The 14 allergenic foods were cashew, celery, egg, fish, hazelnut, lupine, milk, mustard, peanut, sesame, shrimp (for crustacean shellfish), soy, walnut, and wheat. Updated ED01 estimates ranged from 0.03 mg for walnut protein to 26.2 mg for shrimp protein. ED05 estimates ranged from 0.4 mg for mustard protein to 280 mg for shrimp protein. The ED01 and ED05 values presented here are valuable in the risk assessment and subsequent risk management of allergenic foods.
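Reading EDp values off a fitted population threshold distribution can be sketched as follows, assuming a lognormal distribution of individual eliciting doses with illustrative parameters (not values fitted to the ATDB data):

```python
import math

# EDp sketch: if individual eliciting doses are lognormal(mu, sigma), then
# EDp = exp(mu + sigma * z_p), where z_p is the standard normal p-quantile.
# Parameters below are HYPOTHETICAL, not fitted allergen values.

def ed_p(p, mu, sigma):
    """Dose at which a fraction p of the allergic population reacts."""
    # inverse standard normal CDF via bisection on math.erf
    lo, hi = -10.0, 10.0
    for _ in range(80):
        mid = (lo + hi) / 2
        cdf = 0.5 * (1 + math.erf(mid / math.sqrt(2)))
        lo, hi = (mid, hi) if cdf < p else (lo, mid)
    z = (lo + hi) / 2
    return math.exp(mu + sigma * z)

mu, sigma = math.log(10.0), 1.5   # hypothetical: median ED of 10 mg protein
ed01 = ed_p(0.01, mu, sigma)
ed05 = ed_p(0.05, mu, sigma)
print(ed01, ed05)  # ED01 < ED05 < median, as expected for lower-tail doses
```

The interval-censored fitting that produces mu and sigma is the hard part the paper addresses; once a distribution is in hand, EDp extraction is just the quantile computation above.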


Subjects
Allergens/immunology; Food Hypersensitivity/immunology; Allergens/administration & dosage; Animals; Arachis/chemistry; Arachis/immunology; Humans; Juglans/chemistry; Juglans/immunology; Milk/chemistry; Milk/immunology; Nuts/chemistry; Nuts/immunology; Risk Assessment; Sesamum/chemistry; Sesamum/immunology
13.
Environmetrics ; 31(7), 2020 Nov.
Article in English | MEDLINE | ID: mdl-36052215

ABSTRACT

Protection and safety authorities recommend the use of model averaging to determine the benchmark dose, as a scientifically more advanced approach than the no-observed-adverse-effect-level approach for obtaining a reference point and deriving health-based guidance values. Model averaging, however, depends strongly on the set of candidate dose-response models, and such a set should be rich enough to ensure that a well-fitting model is included. The currently applied set of candidate models for continuous endpoints is typically limited to two models, the exponential and the Hill model, and differs completely from the richer set of candidate models currently used for binary endpoints. The objective of this article is to propose a general and wide framework of dose-response models that can be applied to both continuous and binary endpoints and that covers the current models for both types of endpoint. In combination with the bootstrap, this framework offers a unified approach to benchmark dose estimation. The methodology is illustrated using two data sets, one with a continuous and another with a binary endpoint.

14.
Risk Anal ; 39(3): 616-629, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30368842

ABSTRACT

Quantitative risk assessments for physical, chemical, biological, occupational, or environmental agents rely on scientific studies to support their conclusions. These studies often include relatively few observations, and, as a result, models used to characterize the risk may include large amounts of uncertainty. The motivation, development, and assessment of new methods for risk assessment is facilitated by the availability of a set of experimental studies that span a range of dose-response patterns that are observed in practice. We describe construction of such a historical database focusing on quantal data in chemical risk assessment, and we employ this database to develop priors in Bayesian analyses. The database is assembled from a variety of existing toxicological data sources and contains 733 separate quantal dose-response data sets. As an illustration of the database's use, prior distributions for individual model parameters in Bayesian dose-response analysis are constructed. Results indicate that including prior information based on curated historical data in quantitative risk assessments may help stabilize eventual point estimates, producing dose-response functions that are more stable and precisely estimated. These in turn produce potency estimates that share the same benefit. We are confident that quantitative risk analysts will find many other applications and issues to explore using this database.
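The prior-construction step can be sketched simply: fit each historical dataset, collect a parameter estimate from each, and summarize the collection as a prior for future analyses. The "historical estimates" below are simulated stand-ins, not values derived from the 733 curated datasets.

```python
import math, random, statistics

# Empirical-prior sketch: summarize parameter estimates collected from many
# historical dose-response fits as a prior for new Bayesian analyses.
# The estimates below are SIMULATED stand-ins for real historical fits.

random.seed(3)
historical_log_slopes = [random.gauss(0.7, 0.4) for _ in range(733)]

prior_mu = statistics.fmean(historical_log_slopes)
prior_sd = statistics.stdev(historical_log_slopes)
print(prior_mu, prior_sd)  # summarized as a Normal(prior_mu, prior_sd) prior

def log_prior(log_slope):
    """Log prior density, usable inside a posterior computation."""
    return (-0.5 * math.log(2 * math.pi * prior_sd**2)
            - (log_slope - prior_mu)**2 / (2 * prior_sd**2))
```

As the abstract notes, adding this kind of curated historical information tends to stabilize point estimates when an individual study is small.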


Subjects
Bayes Theorem; Databases, Factual; Risk Assessment/methods; alpha-Chlorohydrin/toxicity; Animals; Dose-Response Relationship, Drug; Humans; Male; Probability; Programming Languages; Public Health; Rats; Rats, Sprague-Dawley; Software; Uncertainty; alpha-Chlorohydrin/analysis
15.
Biometrics ; 75(1): 193-201, 2019 Mar.
Article in English | MEDLINE | ID: mdl-30081432

ABSTRACT

Many modern datasets are sampled with error from complex high-dimensional surfaces. Methods such as tensor product splines or Gaussian processes are effective and well suited for characterizing a surface in two or three dimensions, but they may suffer from difficulties when representing higher dimensional surfaces. Motivated by high throughput toxicity testing where observed dose-response curves are cross sections of a surface defined by a chemical's structural properties, a model is developed to characterize this surface to predict untested chemicals' dose-responses. This manuscript proposes a novel approach that models the multidimensional surface as a sum of learned basis functions formed as the tensor product of lower dimensional functions, which are themselves representable by a basis expansion learned from the data. The model is described and a Gibbs sampling algorithm is proposed. The approach is investigated in a simulation study and through data taken from the US EPA's ToxCast high throughput toxicity testing platform.


Subjects
Bayes Theorem; Toxicity Tests/statistics & numerical data; Animals; Computer Simulation; Dose-Response Relationship, Drug; Environmental Pollutants/pharmacology; High-Throughput Screening Assays/methods; Humans; Normal Distribution; Quantitative Structure-Activity Relationship; Toxicity Tests/methods
16.
Saf Health Work ; 8(2): 206-211, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28593078

ABSTRACT

BACKGROUND: Self-reported low back pain (LBP) has been evaluated in relation to material handling lifting tasks, but little research has focused on relating quantifiable stressors to LBP at the individual level. The National Institute for Occupational Safety and Health (NIOSH) Composite Lifting Index (CLI) has been used to quantify stressors for lifting tasks. A chemical exposure can readily be used as an exposure metric or stressor for chemical risk assessment (RA); defining and quantifying nonchemical lifting stressors and the related adverse responses is more difficult. Stressor-response models appropriate for CLI-LBP associations do not fit easily into common chemical RA modeling techniques (e.g., benchmark dose methods), so different approaches were tried.
METHODS: This work used prospective data from 138 manufacturing workers to consider the linkage of the occupational stressor of material lifting to LBP. The final model used a Bayesian random threshold approach to estimate the probability of an increase in LBP as a threshold step function.
RESULTS: Using maximal and mean CLI values, a significant increase in the probability of LBP was found for values above 1.5.
CONCLUSION: A risk of LBP associated with CLI values > 1.5 existed in this worker population. The relevance for other populations requires further study.
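The random-threshold idea can be sketched as a step function whose break point is chosen by maximum likelihood. The worker data below are hypothetical, not the study's 138-worker dataset, and the grid-search fit stands in for the Bayesian estimation used in the paper.

```python
import math

# Threshold-step sketch: P(low back pain) jumps at an unknown CLI threshold,
# chosen here to maximize the Bernoulli likelihood. Data are HYPOTHETICAL.

cli = [0.8, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.5, 3.0, 3.5]
lbp = [0,   0,   0,   0,   1,   1,   1,   1,   0,   1]   # 1 = reported LBP

def loglik(thresh):
    """Bernoulli log-likelihood with separate rates below/above thresh."""
    below = [y for c, y in zip(cli, lbp) if c <= thresh]
    above = [y for c, y in zip(cli, lbp) if c > thresh]
    ll = 0.0
    for grp in (below, above):
        if not grp:
            continue
        p = min(max(sum(grp) / len(grp), 1e-10), 1 - 1e-10)
        ll += sum(y * math.log(p) + (1 - y) * math.log(1 - p) for y in grp)
    return ll

candidates = [1.1, 1.3, 1.5, 1.7, 1.9]
best = max(candidates, key=loglik)
print(best)  # 1.5: the step location best separating low- and high-risk groups
```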

17.
Risk Anal ; 37(11): 2107-2118, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28555874

ABSTRACT

Quantitative risk assessment often begins with an estimate of the exposure or dose associated with a particular risk level, from which exposure levels posing low risk to populations can be extrapolated. For continuous exposures, this value, the benchmark dose, is often defined by a specified increase (or decrease) from the median or mean response at no exposure. This method of calculating the benchmark dose does not take into account the response distribution and, consequently, cannot be interpreted through probability statements about the target population. We investigate quantile regression as an alternative to median or mean regression. By defining the dose-response quantile relationship and an impairment threshold, we specify a benchmark dose as the dose associated with a specified probability that the population will have a response equal to or more extreme than the specified impairment threshold. In addition, to minimize model uncertainty, we use Bayesian monotonic semiparametric regression to define the exposure-response quantile relationship, which gives the model the flexibility to estimate the quantile dose-response function. We describe this methodology and apply it to both epidemiology and toxicology data.

18.
Risk Anal ; 37(10): 1865-1878, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28032899

ABSTRACT

Human variability is an important factor in human health risk assessment for protecting sensitive populations from chemical exposure. Traditionally, an interhuman uncertainty factor is applied to lower the exposure limit to account for this variability. However, a fixed uncertainty factor, rather than a probabilistic account of human variability, cannot readily support the probabilistic risk assessment advocated by a number of researchers; new methods are needed to quantify human population variability probabilistically. We propose a Bayesian hierarchical model to quantify variability among different populations. This approach jointly characterizes the distribution of risk at background exposure and the sensitivity of response to exposure, which are commonly represented by model parameters. We demonstrate, through both an application to real data and a simulation study, that the proposed hierarchical structure adequately characterizes variability across different populations.


Subjects
Arsenic/toxicity; Cardiovascular Diseases/chemically induced; Dose-Response Relationship, Drug; Risk Assessment/methods; Algorithms; Bayes Theorem; Genetic Variation; Humans; Markov Chains; Probability; Uncertainty
19.
J Am Stat Assoc ; 109(507): 894-904, 2014 Jul.
Article in English | MEDLINE | ID: mdl-25541568

ABSTRACT

The statistics literature on functional data analysis focuses primarily on flexible black-box approaches, which are designed to allow individual curves to have essentially any shape while characterizing variability. Such methods typically cannot incorporate mechanistic information, which is commonly expressed in terms of differential equations. Motivated by studies of muscle activation, we propose a nonparametric Bayesian approach that takes into account mechanistic understanding of muscle physiology. A novel class of hierarchical Gaussian processes is defined that favors curves consistent with differential equations defined on motor, damper, spring systems. A Gibbs sampler is proposed to sample from the posterior distribution and applied to a study of rats exposed to non-injurious muscle activation protocols. Although motivated by muscle force data, a parallel approach can be used to include mechanistic information in broad functional data analysis applications.

20.
Regul Toxicol Pharmacol ; 67(1): 75-82, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23831127

ABSTRACT

Experiments with relatively high doses are often used to predict risks at appreciably lower doses. A point of departure (PoD) can be calculated as the dose associated with a specified moderate response level, often within the range of experimental doses considered; a linear extrapolation to lower doses then follows. An alternative to the PoD method is to develop a model that accounts for model uncertainty in the dose-response relationship and to use this model to estimate the risk at low doses. Two such approaches that account for model uncertainty are model averaging (MA) and semi-parametric methods. We use these methods, along with the PoD approach, in the context of a large bioassay (40,000+ animals) that exhibited sub-linearity. When models are fit to high-dose data and risks at low doses are predicted, the methods that account for model uncertainty produce dose estimates associated with an excess risk that are closer to the observed risk than the PoD linearization. This comparison provides empirical support, complementing previous simulation studies, for the view that methods incorporating model uncertainty are viable, and arguably preferred, alternatives to linear extrapolation from a PoD.
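The contrast between linear extrapolation from a PoD and a model-based estimate under sub-linearity can be sketched directly. The quadratic "true" dose-response below is an illustrative stand-in for the bioassay's sub-linear behavior, not its fitted model.

```python
# PoD linearization vs a sublinear model at low dose. The quadratic
# "true" risk function is ILLUSTRATIVE of sub-linearity only.

def true_risk(d):
    return 0.1 * d * d              # convex: risk falls off faster than linearly

pod = 1.0                            # dose at the chosen response level
pod_risk = true_risk(pod)            # 0.1

def linear_extrapolation(d):
    return pod_risk * d / pod        # straight line through (PoD, pod_risk)

low_dose = 0.01
print(linear_extrapolation(low_dose), true_risk(low_dose))
# 0.001 vs 1e-05: the linear default overstates low-dose risk here
```

This is the pattern the abstract describes: when the dose-response is sub-linear, methods that model the curvature track the observed low-dose risk more closely than the straight line through the PoD.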


Subjects
Models, Biological; Uncertainty; Animals; Benchmarking; Dose-Response Relationship, Drug; Risk Assessment