Results 1 - 20 of 382

1.
BMC Bioinformatics; 25(1): 168, 2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38678218

ABSTRACT

This study investigates the impact of spatio-temporal correlation using four spatio-temporal models: the Spatio-Temporal Poisson Linear Trend Model (SPLTM), the Poisson Temporal Model (TMS), the Spatio-Temporal Poisson ANOVA Model (SPAM), and the Spatio-Temporal Poisson Separable Model (STSM), in the context of food security and nutrition in Africa. Evaluating model goodness of fit using the Watanabe-Akaike information criterion (WAIC) and assessing bias through root mean square error and mean absolute error revealed a consistent monotonic pattern. SPLTM consistently tends to overestimate food security, while TMS exhibits a mixed bias profile, shifting between overestimation and underestimation under different correlation settings. SPAM stands out for its reliability, showing minimal bias and the lowest WAIC across diverse scenarios, while STSM consistently underestimates food security, particularly in regions with low to moderate spatio-temporal correlation. SPAM consistently outperforms the other models, making it a top choice for modeling food security and nutrition dynamics in Africa. This research highlights the impact of spatial and temporal correlations on food security and nutrition patterns and provides guidance for model selection and refinement. Researchers are encouraged to carefully evaluate the bias and goodness-of-fit characteristics of candidate models, ensuring their alignment with the specific attributes of their data and research goals. This knowledge empowers researchers to select models that offer reliability and consistency, enhancing the applicability of their findings.
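
Since the model comparison above rests on WAIC, a minimal numpy sketch of its standard computation may be useful; the log_lik matrix of pointwise posterior log-likelihoods is an assumed input from an MCMC fit of any candidate model, not something reported in the study.

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (S posterior draws x N observations) matrix of
    pointwise log-likelihood values, reported on the deviance scale."""
    # log pointwise predictive density (posterior-mean likelihood per point)
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))
    # effective number of parameters: posterior variance of the log-likelihood
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))
    return -2.0 * (lppd - p_waic)

# toy input: 4000 draws x 100 observations of pointwise log-likelihoods
rng = np.random.default_rng(0)
log_lik = rng.normal(-1.0, 0.1, size=(4000, 100))
print(waic(log_lik))  # lower WAIC indicates the better-supported model
```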


Subjects
Food Security, Africa, Food Security/methods, Spatio-Temporal Analysis, Humans, Computer Simulation, Poisson Distribution
2.
Biometrics; 80(1), 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38364808

ABSTRACT

We aim to estimate parameters in a generalized linear model (GLM) for a binary outcome when, in addition to the raw data from the internal study, more than 1 external study provides summary information in the form of parameter estimates from fitting GLMs with varying subsets of the internal study covariates. We propose an adaptive penalization method that exploits the external summary information and gains efficiency for estimation, and that is both robust and computationally efficient. The robust property comes from exploiting the relationship between parameters of a GLM and parameters of a GLM with omitted covariates and from downweighting external summary information that is less compatible with the internal data through a penalization. The computational burden associated with searching for the optimal tuning parameter for the penalization is reduced by using adaptive weights and by using an information criterion when searching for the optimal tuning parameter. Simulation studies show that the proposed estimator is robust against various types of population distribution heterogeneity and also gains efficiency compared to direct maximum likelihood estimation. The method is applied to improve a logistic regression model that predicts high-grade prostate cancer making use of parameter estimates from 2 external models.
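
Schematically, the adaptive penalization can be written as maximizing the internal-data log-likelihood minus a penalty that shrinks toward each external estimate (the notation here is assumed for illustration, not the authors'):

$$\hat\beta \;=\; \arg\max_{\beta}\; \ell_{\text{int}}(\beta) \;-\; \lambda \sum_{k=1}^{K} w_k \,\bigl\lVert \tilde\theta_k - g_k(\beta) \bigr\rVert^2,$$

where $\tilde\theta_k$ is the estimate reported by external study $k$, $g_k(\beta)$ maps the full internal-model parameters to the reduced GLM with omitted covariates, the adaptive weights $w_k$ downweight external studies that are less compatible with the internal data, and the tuning parameter $\lambda$ is chosen via an information criterion.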


Subjects
Statistical Models, Male, Humans, Linear Models, Regression Analysis, Likelihood Functions, Logistic Models, Computer Simulation
3.
Biometrics; 80(1), 2024 Jan 29.
Article in English | MEDLINE | ID: mdl-38465988

ABSTRACT

Mixed panel count data represent a common complex data structure in longitudinal survey studies. A major challenge in analyzing such data is variable selection and estimation while efficiently incorporating both the panel count and panel binary data components. Analyses in the medical literature have often ignored the panel binary component, treating it as missing along with the unknown panel counts, but such a simplification does not use the full information in the original data. In this research, we put forward a penalized likelihood variable selection and estimation procedure under the proportional mean model. A computationally efficient EM algorithm is developed that ensures sparse estimation for variable selection, and the resulting estimator is shown to have the desirable oracle property. Simulation studies confirmed the good finite-sample properties of the proposed method, and the method is applied to analyze a motivating dataset from the Health and Retirement Study.


Subjects
Algorithms, Likelihood Functions, Computer Simulation, Longitudinal Studies
4.
Stat Med; 2024 Sep 09.
Article in English | MEDLINE | ID: mdl-39248697

ABSTRACT

Clustering functional data aims to identify unique functional patterns in the entire domain, but this can be challenging due to phase variability that distorts the observed patterns. Curve registration can be used to remove this variability, but determining the appropriate level of warping flexibility can be complicated. Curve registration also requires a target to which a functional object is aligned, typically the cross-sectional mean of functional objects within the same cluster. However, this mean is unknown prior to clustering. Furthermore, there is a trade-off between flexible warping and the number of resulting clusters. Removing more phase variability through curve registration can leave less remaining variation in the functional data, resulting in a smaller number of clusters. Thus, the optimal number of clusters and the warping flexibility cannot be uniquely identified. We propose to use external information to solve this identification issue. We define a cross-validated Kullback-Leibler information criterion to select the number of clusters and the warping penalty. The criterion is derived from the predictive classification likelihood considering the joint distribution of both the functional data and the external variable, and it penalizes the uncertainty in the cluster membership. We evaluate our method through simulation and apply it to electrocardiographic data collected in the Chronic Renal Insufficiency Cohort study. We identify two distinct clusters of electrocardiogram (ECG) profiles, with the second cluster exhibiting ST segment depression, an indication of cardiac ischemia, compared to the normal ECG profiles in the first cluster.

5.
Article in English | MEDLINE | ID: mdl-38967731

ABSTRACT

Clinical trial endpoints are often bounded outcome scores (BOS): variables restricted to values within finite intervals. Common analysis approaches may treat the data as continuous, categorical, or a mixture of both. The appearance of BOS data being simultaneously continuous and categorical easily leads to confusion in pharmacometrics regarding the appropriate domain for model evaluation and the circumstances under which data likelihoods can be compared. This commentary aims to clarify these fundamental issues and facilitate appropriate pharmacometric analyses.

6.
J Am Water Resour Assoc; 60(1): 57-78, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38377341

ABSTRACT

Many cold-water-dependent aquatic organisms are experiencing habitat and population declines from increasing water temperatures. Identifying the mechanisms that drive local and regional stream thermal regimes facilitates restoration at ecologically relevant scales. Stream temperatures vary spatially and temporally both within and among river basins. We developed a modeling process to identify statistical relationships between drivers of stream temperature and covariates representing landscape, climate, and management-related processes. The modeling process was tested in 3 study areas of the Pacific Northwest, USA, during the growing season (May [start], August [warmest], September [end]). Across all months and study systems, the covariates with the highest relative importance represented the physical landscape (elevation [1st], catchment area [3rd], main channel slope [5th]) and climate (mean monthly air temperature [2nd] and discharge [4th]). Two management covariates (groundwater use [6th] and riparian shade [7th]) also had high relative importance. Across the growing season (for all basins), local reach slope had high relative importance in May but gave way to the regional main channel slope covariate in August and September. This modeling process identified regionally similar and locally unique relationships among drivers of stream temperature. The high relative importance of management-related covariates suggests potential restoration actions for each system.

7.
Entropy (Basel); 26(1), 2024 Jan 04.
Article in English | MEDLINE | ID: mdl-38248176

ABSTRACT

Change points indicate significant shifts in the statistical properties of data streams at particular time points. Detecting change points efficiently and effectively is essential for understanding the underlying data-generating mechanism in modern data streams with versatile parameter-varying patterns. However, locating multiple change points in noisy data is a highly challenging problem. Although the Bayesian information criterion has been proven an effective way of selecting multiple change points in an asymptotic sense, its finite-sample performance can be deficient. In this article, we review a list of information criterion-based methods for multiple change point detection, including the Akaike information criterion, the Bayesian information criterion, minimum description length, and their variants, with an emphasis on practical applications. Simulation studies are conducted to investigate the actual performance of different information criteria in detecting multiple change points under possible model misspecification. A case study on SCADA signals from wind turbines demonstrates the practical change point detection power of different information criteria. Finally, some key challenges in the development and application of multiple change point detection are presented for future research.
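
To make BIC-based selection concrete, here is a minimal sketch under assumed simplifications (a piecewise-constant Gaussian mean model, with greedy binary segmentation generating the candidate segmentations); it is not the article's exact implementation.

```python
import numpy as np

def sse(x):
    """Sum of squared deviations from the segment mean."""
    return float(np.sum((x - x.mean()) ** 2)) if len(x) else 0.0

def best_split(x):
    """Best single split of a segment, by reduction in SSE."""
    n = len(x)
    if n < 4:
        return None, 0.0
    costs = [sse(x[:t]) + sse(x[t:]) for t in range(2, n - 1)]
    t = int(np.argmin(costs)) + 2
    return t, sse(x) - costs[t - 2]

def binary_segmentation(x, k_max):
    """Greedy candidate segmentations with 0..k_max change points."""
    segs, cps_list = [(0, len(x))], [[]]
    for _ in range(k_max):
        gains = [(best_split(x[a:b]), i) for i, (a, b) in enumerate(segs)]
        (t, gain), i = max(gains, key=lambda g: g[0][1])
        if t is None or gain <= 0:
            break
        a, b = segs.pop(i)
        segs = sorted(segs + [(a, a + t), (a + t, b)])
        cps_list.append(sorted(c for c, _ in segs if c > 0))
    return cps_list

def bic(x, cps):
    """BIC for a piecewise-constant Gaussian mean model: k change points
    contribute k locations + (k+1) segment means + 1 variance parameter."""
    n = len(x)
    bounds = [0] + list(cps) + [n]
    rss = sum(sse(x[a:b]) for a, b in zip(bounds[:-1], bounds[1:]))
    k = len(cps)
    return n * np.log(rss / n) + (2 * k + 1) * np.log(n)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(m, 1.0, 80) for m in (0.0, 3.0, 1.0)])
candidates = binary_segmentation(x, k_max=5)
print(min(candidates, key=lambda c: bic(x, c)))  # expected near [80, 160]
```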

8.
Entropy (Basel); 26(6), 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38920534

ABSTRACT

This paper extends the concept of metrics based on the Bayesian information criterion (BIC) to achieve strongly consistent estimation of partition Markov models (PMMs). We introduce a set of metrics drawn from the family of model selection criteria known as efficient determination criteria (EDC). This generalization extends the range of options available in BIC for penalizing the number of model parameters. We formally specify the relationship that determines how EDC selects a model based on a threshold associated with the metric. Furthermore, we improve the penalty options within EDC, identifying the penalty ln(ln(n)) as a viable choice that maintains strongly consistent estimation of a PMM. To demonstrate the utility of these new metrics, we apply them to the modeling of three DNA sequences of dengue virus type 3, endemic in Brazil in 2023.
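
In schematic form (notation assumed for illustration), the family selects the model $M$ minimizing

$$\mathrm{EDC}(M) \;=\; -2\log \hat L_M \;+\; |M|\,c_n,$$

where $\hat L_M$ is the maximized likelihood, $|M|$ the number of free parameters, and $c_n$ a penalty sequence: $c_n = 2$ recovers AIC, $c_n = \ln n$ recovers BIC, and the paper identifies the lighter penalty $c_n = \ln(\ln(n))$ as still sufficient for strongly consistent PMM estimation.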

9.
Entropy (Basel); 26(7), 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39056962

ABSTRACT

Most statistical modeling applications involve the consideration of a candidate collection of models based on various sets of explanatory variables. The candidate models may also differ in terms of the structural formulations for the systematic component and the posited probability distributions for the random component. A common practice is to use an information criterion to select a model from the collection that provides an optimal balance between fidelity to the data and parsimony. The analyst then typically proceeds as if the chosen model were the only model ever considered. However, such a practice fails to account for the variability inherent in the model selection process, which can lead to inappropriate inferential results and conclusions. In recent years, inferential methods have been proposed for multimodel frameworks that attempt to provide an appropriate accounting of modeling uncertainty. In the frequentist paradigm, such methods should ideally involve model selection probabilities, i.e., the relative frequencies of selection for each candidate model under repeated sampling. Model selection probabilities can be conveniently approximated through bootstrapping. When the Akaike information criterion is employed, Akaike weights are also commonly used as a surrogate for selection probabilities. In this work, we show that the conventional bootstrap approach to approximating model selection probabilities is biased. We propose a simple correction to adjust for this bias. We also argue that Akaike weights do not provide adequate approximations of selection probabilities, although they do offer a crude gauge of model plausibility.
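
A sketch of the two quantities the paper contrasts, Akaike weights and bootstrap model-selection frequencies, for an assumed set of nested OLS candidate models. The bootstrap frequencies computed here are the conventional approximation that the paper shows to be biased; its correction is not reproduced.

```python
import numpy as np

def akaike_weights(aic):
    """Akaike weights from a vector of AIC values."""
    d = np.asarray(aic) - np.min(aic)
    w = np.exp(-0.5 * d)
    return w / w.sum()

def fit_aics(X, y):
    """AIC of Gaussian OLS fits for three nested candidate mean models."""
    n, aics = len(y), []
    for cols in ([0], [0, 1], [0, 1, 2]):        # predictor subsets
        A = np.column_stack([np.ones(n), X[:, cols]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        rss = np.sum((y - A @ beta) ** 2)
        aics.append(n * np.log(rss / n) + 2 * (A.shape[1] + 1))
    return np.array(aics)

def bootstrap_selection_probs(X, y, n_boot=2000, seed=0):
    """How often each candidate wins (lowest AIC) over bootstrap resamples."""
    rng = np.random.default_rng(seed)
    n, wins = len(y), np.zeros(len(fit_aics(X, y)))
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        wins[np.argmin(fit_aics(X[idx], y[idx]))] += 1
    return wins / n_boot

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 3))
y = 1.0 + 0.8 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=120)
print(akaike_weights(fit_aics(X, y)))
print(bootstrap_selection_probs(X, y))
```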

10.
Entropy (Basel); 26(6), 2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38920515

ABSTRACT

Information-theoretic (IT) and multi-model averaging (MMA) statistical approaches are widely used but suboptimal tools for pursuing a multifactorial approach (also known as the method of multiple working hypotheses) in ecology. (1) Conceptually, IT encourages ecologists to perform tests on sets of artificially simplified models. (2) MMA improves on IT model selection by implementing a simple form of shrinkage estimation (a way to make accurate predictions from a model with many parameters relative to the amount of data, by "shrinking" parameter estimates toward zero). However, other shrinkage estimators such as penalized regression or Bayesian hierarchical models with regularizing priors are more computationally efficient and better supported theoretically. (3) In general, the procedures for extracting confidence intervals from MMA are overconfident, providing overly narrow intervals. If researchers want to use limited data sets to accurately estimate the strength of multiple competing ecological processes along with reliable confidence intervals, the current best approach is to use full (maximal) statistical models (possibly with Bayesian priors) after making principled, a priori decisions about model complexity.

11.
Proc Biol Sci; 290(2007): 20231261, 2023 Sep 27.
Article in English | MEDLINE | ID: mdl-37752836

ABSTRACT

The various debates around model selection paradigms are important, but in the absence of a consensus, there is a demonstrable need for a deeper appreciation of existing approaches, at least among the end-users of statistics and model selection tools. In the ecological literature, the Akaike information criterion (AIC) dominates model selection practices, and while it is a relatively straightforward concept, there are what we perceive to be some common misunderstandings around its application. Two specific questions arise with surprising regularity among colleagues and students when interpreting and reporting AIC model tables. The first concerns 'pretending' variables, and specifically a muddled understanding of what this term means. The second concerns p-values and what constitutes statistical support when using AIC. There exists a wealth of technical literature describing AIC and the relationship between p-values and AIC differences. Here, we complement this technical treatment and use simulation to develop some intuition around these important concepts. In doing so we aim to promote better statistical practices when it comes to using, interpreting and reporting models selected using AIC.
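
A small simulation (an assumed setup, not taken from the paper) builds the relevant intuition: a pure-noise 'pretending' variable can never raise AIC by more than 2, so the inflated model always looks competitive under a ΔAIC < 2 rule, and it even wins outright in a sizeable share of replicates.

```python
import numpy as np

rng = np.random.default_rng(6)

def ols_aic(X, y):
    """AIC (up to an additive constant) of a Gaussian least-squares fit."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * X.shape[1]

deltas = []
for _ in range(2000):
    n = 100
    x = rng.normal(size=n)
    y = 1.0 + 0.5 * x + rng.normal(size=n)
    noise = rng.normal(size=n)             # uninformative 'pretending' variable
    X1 = np.column_stack([np.ones(n), x])
    X2 = np.column_stack([X1, noise])
    deltas.append(ols_aic(X2, y) - ols_aic(X1, y))

d = np.array(deltas)
# The extra covariate always reduces RSS a little, so delta-AIC <= 2 by
# construction; it beats the true model (delta < 0) in roughly 16% of runs.
print(d.max(), d.mean(), np.mean(d < 0))
```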


Subjects
Intuition, Students, Humans, Computer Simulation, Consensus
12.
NMR Biomed; 36(7): e4905, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36637237

ABSTRACT

The acquisition of intravoxel incoherent motion (IVIM) data and diffusion tensor imaging (DTI) data from the brain can be integrated into a single measurement, which offers the possibility to determine orientation-dependent (tensorial) perfusion parameters in addition to established IVIM and DTI parameters. The purpose of this study was to evaluate the feasibility of such a protocol with a clinically feasible scan time below 6 min and to use a model-selection approach to find the set of DTI and IVIM tensor parameters that most adequately describes the acquired data. Diffusion-weighted images of the brain were acquired at 3 T in 20 elderly participants with cerebral small vessel disease using a multiband echo-planar imaging sequence with 15 b-values between 0 and 1000 s/mm² and six non-collinear diffusion gradient directions for each b-value. Seven different IVIM-diffusion models with 4 to 14 parameters were implemented, which modeled diffusion and pseudo-diffusion as scalar or tensor quantities. The models were compared with respect to their fitting performance based on the goodness of fit (sum of squared fit residuals, χ²) and their Akaike weights (calculated from the corrected Akaike information criterion). The lowest χ² values were found for the model with the largest number of parameters. However, the significantly highest Akaike weights, indicating the most appropriate models for the acquired data, were found with a nine-parameter IVIM-DTI model (with isotropic perfusion modeling) in normal-appearing white matter (NAWM), and with an 11-parameter model (IVIM-DTI with additional pseudo-diffusion anisotropy) in white matter hyperintensities (WMH) and in gray matter (GM). The latter model also allowed the calculation of the fractional anisotropy of the pseudo-diffusion tensor (median 0.45 in NAWM, 0.23 in WMH, and 0.36 in GM), which is not accessible with the usual IVIM acquisitions based on three orthogonal diffusion-gradient directions.
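
For least-squares fits, the corrected criterion and the Akaike weights referred to above take the familiar forms (a common textbook parameterization; the study's exact variance treatment may differ):

$$\mathrm{AICc} = n\ln\!\left(\frac{\chi^2}{n}\right) + 2k + \frac{2k(k+1)}{n-k-1}, \qquad w_i = \frac{\exp(-\Delta_i/2)}{\sum_j \exp(-\Delta_j/2)}, \quad \Delta_i = \mathrm{AICc}_i - \min_j \mathrm{AICc}_j,$$

where $k$ is the number of model parameters, $n$ the number of data points, and $\chi^2$ the sum of squared fit residuals.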


Subjects
Diffusion Tensor Imaging, White Matter, Humans, Aged, Diffusion Tensor Imaging/methods, Diffusion Magnetic Resonance Imaging/methods, Brain/diagnostic imaging, Perfusion, Motion (Physics)
13.
Stat Med; 42(12): 1909-1930, 2023 May 30.
Article in English | MEDLINE | ID: mdl-37194500

ABSTRACT

In this article, we propose a two-level copula joint model to analyze clinical data with multiple disparate continuous longitudinal outcomes and multiple event-times in the presence of competing risks. At the first level, we use a copula to model the dependence between competing latent event-times, in the process constructing the submodel for the observed event-time, and employ the Gaussian copula to construct the submodel for the longitudinal outcomes that accounts for their conditional dependence; these submodels are glued together at the second level via the Gaussian copula to construct a joint model that incorporates conditional dependence between the observed event-time and the longitudinal outcomes. To have the flexibility to accommodate skewed data and to examine possibly different covariate effects on quantiles of a non-Gaussian outcome, we propose linear quantile mixed models for the continuous longitudinal data. We adopt a Bayesian framework for model estimation and inference via Markov chain Monte Carlo sampling. We examine the performance of the copula joint model through a simulation study and show that our proposed method outperforms the conventional approach that assumes conditional independence, with smaller biases and better coverage probabilities of the Bayesian credible intervals. Finally, we carry out an analysis of clinical data on renal transplantation for illustration.


Subjects
Statistical Models, Humans, Bayes Theorem, Computer Simulation, Linear Models, Probability
14.
Stat Med; 42(26): 4824-4849, 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37670577

ABSTRACT

Recent substantial advances in molecular targeted oncology drug development call for new early-phase clinical trial methodologies that can evaluate efficacy in several subtypes simultaneously and efficiently. The basket trial has attracted much attention as a means to this end, borrowing information across subtypes, which are called baskets. The Bayesian approach is a natural fit, and indeed the majority of existing proposals rely on it. On the other hand, it requires complicated modeling and may not necessarily control the type 1 error probability at the nominal level. In this article, we develop a purely frequentist approach for basket trials based on a one-sample Mantel-Haenszel procedure, relying on a very simple idea for borrowing information under the assumption of a common treatment effect across baskets. We show that the proposed Mantel-Haenszel estimator of the treatment effect is consistent under both the large-strata and sparse-data limiting models (dually consistent) and propose dually consistent variance estimators. The proposed estimators remain interpretable even if the common treatment effect assumption is violated. Basket trials can then be designed in a confirmatory manner. We also propose an information criterion approach to identify effective subclasses of baskets.

15.
Br J Clin Pharmacol; 89(9): 2798-2812, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37186478

ABSTRACT

AIM: Pharmacokinetics have historically been assessed using drug concentration data obtained via blood draws and bench-top analysis. The cumbersome nature of these methods typically constrains studies to at most a dozen concentration measurements per dosing event. This, in turn, limits our statistical power to detect hours-scale, time-varying physiological processes. Given the recent advent of in vivo electrochemical aptamer-based (EAB) sensors, however, we can now obtain hundreds of concentration measurements per administration. Our aim in this paper was to assess the ability of these time-dense datasets to support time-varying pharmacokinetic models with good statistical significance. METHODS: We used seconds-resolved measurements of plasma tobramycin concentrations in rats to statistically compare traditional one- and two-compartment pharmacokinetic models with new models in which the proportional relationship between a drug's plasma concentration and its elimination rate varies in response to changing kidney function. RESULTS: We found that a modified one-compartment model in which the proportionality between the plasma concentration of tobramycin and its elimination rate falls reciprocally with time either matches or is preferred over the standard two-compartment pharmacokinetic model for half of the datasets characterized. When we reduced the impact of the drug's rapid distribution phase on the model, this one-compartment, time-varying model was statistically preferred over the standard one-compartment model for 80% of our datasets. CONCLUSIONS: Our results highlight both the impact that simple physiological changes (such as varying kidney function) can have on drug pharmacokinetics and the ability of high-time-resolution EAB sensor measurements to identify such impacts.
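
One plausible concrete form of the time-varying model described above is a one-compartment equation whose elimination rate constant decays reciprocally with time; the functional form and all parameter values below are illustrative assumptions, not the authors' fitted values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical parameters for illustration only.
k0, tau, C0 = 0.05, 20.0, 10.0   # 1/min, min, mg/L

def dCdt(t, C):
    # k(t) = k0 / (1 + t/tau): elimination slows as time passes
    return -(k0 / (1.0 + t / tau)) * C

sol = solve_ivp(dCdt, (0.0, 240.0), [C0], dense_output=True)
t = np.linspace(0.0, 240.0, 5)
print(np.round(sol.sol(t)[0], 3))
# closed form for comparison: C(t) = C0 * (1 + t/tau) ** (-k0 * tau)
print(np.round(C0 * (1.0 + t / tau) ** (-k0 * tau), 3))
```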


Subjects
Biological Models, Tobramycin, Rats, Animals
16.
Philos Trans A Math Phys Eng Sci; 381(2247): 20220151, 2023 May 15.
Article in English | MEDLINE | ID: mdl-36970817

ABSTRACT

In statistical inference, uncertainty is unknown and all models are wrong. That is to say, a person who makes a statistical model and a prior distribution is simultaneously aware that both are fictional candidates. To study such cases, statistical measures have been constructed, such as cross-validation, information criteria and marginal likelihood; however, their mathematical properties have not yet been completely clarified when statistical models are under- or over-parametrized. We introduce a mathematical theory of Bayesian statistics for unknown uncertainty, which clarifies general properties of cross-validation, information criteria and marginal likelihood, even if an unknown data-generating process is unrealizable by a model or even if the posterior distribution cannot be approximated by any normal distribution. Hence it gives a helpful standpoint for a person who cannot believe in any specific model or prior. This paper consists of three parts. The first is a new result, whereas the second and third are well-known previous results with new experiments. We show that there exists a more precise estimator of the generalization loss than leave-one-out cross-validation, that there exists a more accurate approximation of the marginal likelihood than the Bayesian information criterion, and that the optimal hyperparameters for the generalization loss and the marginal likelihood differ. This article is part of the theme issue 'Bayesian inference: challenges, perspectives, and prospects'.
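
Both of the generalization-loss estimators under discussion can be computed from the same matrix of pointwise posterior log-likelihoods; a numpy sketch using the standard formulas (the input matrix is assumed, and the paper's sharper estimator is not reproduced here):

```python
import numpy as np
from scipy.special import logsumexp

def waic_gen_loss(log_lik):
    """WAIC estimate of the generalization loss (per observation)."""
    S = log_lik.shape[0]
    lppd = logsumexp(log_lik, axis=0) - np.log(S)   # log E[p(y_i | theta)]
    p_eff = np.var(log_lik, axis=0, ddof=1)         # functional variance
    return float(np.mean(-lppd + p_eff))

def is_loo_gen_loss(log_lik):
    """Importance-sampling LOO-CV estimate of the generalization loss,
    using 1 / E[1 / p(y_i | theta)] as the held-out predictive density."""
    S = log_lik.shape[0]
    log_loo = -(logsumexp(-log_lik, axis=0) - np.log(S))
    return float(np.mean(-log_loo))

rng = np.random.default_rng(7)
ll = rng.normal(-1.0, 0.1, size=(4000, 50))
print(waic_gen_loss(ll), is_loo_gen_loss(ll))
```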

17.
Int J Health Geogr; 22(1): 30, 2023 Nov 08.
Article in English | MEDLINE | ID: mdl-37940917

ABSTRACT

BACKGROUND: Correctly identifying spatial disease clusters is a fundamental concern in public health and epidemiology. The spatial scan statistic is widely used for detecting spatial disease clusters in spatial epidemiology and disease surveillance. Many studies default to a maximum reported cluster size (MRCS) set at 50% of the total population when searching for spatial clusters. However, this default setting can report clusters larger than the true clusters, including less relevant regions. For the Poisson, Bernoulli, ordinal, normal, and exponential models, a Gini coefficient has been developed to optimize the MRCS. Yet, no such measure is available for the multinomial model. RESULTS: We propose two versions of a spatial cluster information criterion (SCIC) for selecting the optimal MRCS value for the multinomial-based spatial scan statistic. Our simulation study suggests that SCIC improves the accuracy of reporting true clusters. Analysis of Korea Community Health Survey (KCHS) data further demonstrates that our method identifies more meaningful small clusters than the default setting does. CONCLUSIONS: Our method improves the performance of the spatial scan statistic by optimizing the MRCS value under the multinomial model. In public health and disease surveillance, the proposed method can provide more accurate and meaningful spatial cluster detection for multinomial data, such as disease subtypes.


Subjects
Disease Outbreaks, Statistical Models, Humans, Cluster Analysis, Computer Simulation, Public Health
18.
J Biopharm Stat; 1-25, 2023 Jul 17.
Article in English | MEDLINE | ID: mdl-37455635

ABSTRACT

We propose a new approach to selecting the regularization parameter in penalized regression, using a new version of the generalized information criterion (GIC). We prove the identifiability of the bridge regression model as a prerequisite for statistical modeling. We then propose an asymptotically efficient generalized information criterion (AGIC) and prove that it has asymptotic loss efficiency. We also verify that AGIC performs better than older versions of GIC. Furthermore, based on numerical studies, we propose MSE search paths to order the features selected by lasso regression; the MSE search paths compensate for the lasso's lack of feature ordering. The performance of AGIC and other types of GIC is compared using MSE and model utility in a simulation study. We apply AGIC and the other criteria to breast cancer, prostate cancer, and Parkinson's disease datasets. The results confirm the superiority of AGIC in almost all situations.
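
A sketch of the general recipe, scoring a lasso path with a GIC and keeping the regularization parameter that minimizes it; the penalty sequence a_n below is one choice from the GIC literature, not the paper's AGIC.

```python
import numpy as np
from sklearn.linear_model import Lasso

def gic(y, y_hat, df, a_n):
    """GIC for a Gaussian working model: n*log(RSS/n) + a_n * df."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + a_n * df

rng = np.random.default_rng(3)
n, p = 200, 30
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]                  # three true signals
y = X @ beta + rng.normal(size=n)

a_n = np.log(np.log(n)) * np.log(p)          # one published penalty sequence
lambdas = np.logspace(-3, 0, 40)
scores = []
for lam in lambdas:
    fit = Lasso(alpha=lam).fit(X, y)
    df = np.count_nonzero(fit.coef_) + 1     # +1 for the intercept
    scores.append(gic(y, fit.predict(X), df, a_n))
best = lambdas[int(np.argmin(scores))]
print(best, np.count_nonzero(Lasso(alpha=best).fit(X, y).coef_))
```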

19.
Sensors (Basel); 23(22), 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-38005654

ABSTRACT

A noise-resistant linearization model that reveals the true nonlinearity of the sensor is essential for retrieving accurate physical displacement from the signals captured by sensing electronics. In this paper, we propose a novel information-driven smoothing spline linearization method, which integrates one new and three standard information criteria into a smoothing spline for the linearization of high-precision displacement sensors. Using theoretical analysis and Monte Carlo simulation, the proposed linearization method is shown to outperform traditional polynomial and spline linearization methods for high-precision displacement sensors with a noise-to-range ratio at the 10⁻⁵ level. Validation experiments were carried out on two different types of displacement sensors to benchmark the performance of the proposed method against polynomial models and the non-smoothing cubic spline. The results show that the proposed method with the new modified Akaike information criterion stands out among the linearization methods and can improve the residual nonlinearity by over 50% compared to the standard polynomial model. After linearization via the proposed method, the residual nonlinearities reach as low as ±0.0311% F.S. (full scale of range) for the 1.5 mm range chromatic confocal displacement sensor, and ±0.0047% F.S. for the 100 mm range laser triangulation displacement sensor.
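
A sketch of the underlying idea using scipy's smoothing spline, with an AIC-style score in which the number of spline coefficients stands in for the effective degrees of freedom; this is a simplification for illustration, not the paper's modified criterion.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

def spline_aic(x, y, s):
    """AIC-style score for a smoothing spline with smoothing factor s,
    using the coefficient count as a crude proxy for degrees of freedom."""
    spl = UnivariateSpline(x, y, s=s)
    n = len(y)
    rss = max(spl.get_residual(), 1e-300)    # guard against log(0)
    k = len(spl.get_coeffs())
    return n * np.log(rss / n) + 2.0 * k, spl

# synthetic calibration curve: mild nonlinearity, noise/range ratio ~1e-5
rng = np.random.default_rng(4)
x = np.linspace(0.0, 1.0, 400)
y = x + 0.02 * np.sin(6.0 * x) + rng.normal(0.0, 1e-5, x.size)

sigma2 = 1e-5 ** 2
for s in x.size * sigma2 * np.array([0.3, 1.0, 3.0, 10.0]):
    score, spl = spline_aic(x, y, s)
    print(s, len(spl.get_coeffs()), round(score, 1))
# keep the smoothing factor with the lowest score as the linearization model
```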

20.
Sensors (Basel); 23(2), 2023 Jan 07.
Article in English | MEDLINE | ID: mdl-36679490

ABSTRACT

The acoustic emission (AE) technique is one of the most widely used in the field of structural monitoring. Its popularity mainly stems from the fact that it is a non-destructive technique (NDT) and allows the passive monitoring of structures. The technique employs piezoelectric sensors to measure the elastic ultrasonic wave that propagates in the material as a result of the abrupt release of energy when a crack forms. The recorded signal can be analyzed to obtain information about the source crack, its position, and its typology (Mode I, Mode II). Over the years, many techniques have been developed for the localization, characterization, and quantification of damage from the study of acoustic emission. The onset time of the signal is an essential piece of information to be derived from waveform analysis; combined with triangulation, it allows the crack location to be identified. Many methods can be found in the literature to identify, with increasing accuracy, the onset time of the P-wave, and the precision of onset time detection directly affects the accuracy of crack localization. In this paper, two techniques for determining the onset time of acoustic emission signals are presented. The first method is based on the Akaike information criterion (AIC), while the second relies on artificial intelligence (AI). A recurrent convolutional neural network (R-CNN) designed for sound event detection (SED) is trained on three different datasets composed of seismic signals and acoustic emission signals and then tested on a real-world acoustic emission dataset. The new method takes advantage of the similarities between acoustic emissions, seismic signals, and sound signals, enhancing the accuracy of onset time determination.
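
A common formulation of the AIC onset picker is the Maeda form, sketched below under the assumption of stationary variances on either side of the split; this is not necessarily the paper's exact implementation.

```python
import numpy as np

def aic_onset(x, margin=10):
    """Maeda-style AIC picker: split the trace at sample k, model each side
    as stationary Gaussian noise, and take the split minimizing
    AIC(k) = k*ln(var(x[:k])) + (n-k-1)*ln(var(x[k:]))."""
    n = len(x)
    ks = np.arange(margin, n - margin)   # margin keeps both variances nonzero
    aic = np.array([
        k * np.log(np.var(x[:k])) + (n - k - 1) * np.log(np.var(x[k:]))
        for k in ks
    ])
    return int(ks[np.argmin(aic)])

# toy trace: quiet background, then an emission-like burst at sample 500
rng = np.random.default_rng(5)
trace = np.concatenate([rng.normal(0, 0.1, 500), rng.normal(0, 1.0, 300)])
print(aic_onset(trace))  # expected close to 500
```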


Subjects
Acoustics, Artificial Intelligence, Sound, Neural Networks (Computer), Ultrasound