Results 1-20 of 53
1.
Biometrics; 79(2): 975-987, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34825704

ABSTRACT

In many randomized clinical trials of therapeutics for COVID-19, the primary outcome is an ordinal categorical variable, and interest focuses on the odds ratio (OR; active agent vs control) under the assumption of a proportional odds model. Although at the final analysis the outcome will be determined for all subjects, at an interim analysis, the status of some participants may not yet be determined, for example, because ascertainment of the outcome may not be possible until some prespecified follow-up time. Accordingly, the outcome from these subjects can be viewed as censored. A valid interim analysis can be based on data only from those subjects with full follow-up; however, this approach is inefficient, as it does not exploit additional information that may be available on those for whom the outcome is not yet available at the time of the interim analysis. Appealing to the theory of semiparametrics, we propose an estimator for the OR in a proportional odds model with censored, time-lagged categorical outcome that incorporates additional baseline and time-dependent covariate information and demonstrate that it can result in considerable gains in efficiency relative to simpler approaches. A byproduct of the approach is a covariate-adjusted estimator for the OR based on the full data that would be available at a final analysis.
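For reference on the model being estimated, the sketch below fits a plain proportional odds model to simulated data and reports the treatment OR; this is the basic fit that the proposed censoring- and covariate-aware estimator improves upon, not the authors' method, and it assumes the statsmodels package.

    # Plain proportional odds fit on a simulated 4-category ordinal outcome;
    # the true log-OR is 0.5, so the printed OR should be near exp(0.5) ~ 1.65.
    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(0)
    n = 500
    treat = rng.integers(0, 2, n)                 # 1 = active, 0 = control
    latent = 0.5 * treat + rng.logistic(size=n)   # true log-OR = 0.5
    outcome = pd.cut(latent, [-np.inf, -1, 0, 1, np.inf], labels=False)

    fit = OrderedModel(outcome, treat[:, None], distr="logit").fit(
        method="bfgs", disp=False)
    print("estimated OR (active vs control):", np.exp(fit.params[0]))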


Subjects
COVID-19, Humans, Odds Ratio, Treatment Outcome
2.
NEJM Evid; 2(3): EVIDctcs2200301, 2023 Mar.
Article in English | MEDLINE | ID: mdl-38320019

ABSTRACT

Monitoring U.S. Government-Supported Covid-19 Vaccine Trials. Operation Warp Speed was a partnership created to accelerate the development of Covid-19 vaccines. The National Institutes of Health oversaw a single data and safety monitoring board that reviewed and monitored all Operation Warp Speed trials. This article describes the challenges faced in monitoring these trials and provides ideas for similar future endeavors.


Subjects
COVID-19 Vaccines, COVID-19, United States, Humans, Clinical Trials Data Monitoring Committees, National Institutes of Health (U.S.)
3.
Stat Med; 41(28): 5517-5536, 2022 Dec 10.
Article in English | MEDLINE | ID: mdl-36117235

ABSTRACT

The primary analysis in two-arm clinical trials usually involves inference on a scalar treatment effect parameter; for example, depending on the outcome, the difference of treatment-specific means, risk difference, risk ratio, or odds ratio. Most clinical trials are monitored for the possibility of early stopping. Because ordinarily the outcome on any given subject can be ascertained only after some time lag, at the time of an interim analysis, among the subjects already enrolled, the outcome is known for only a subset and is effectively censored for those who have not been enrolled sufficiently long for it to be observed. Typically, the interim analysis is based only on the data from subjects for whom the outcome has been ascertained. A goal of an interim analysis is to stop the trial as soon as the evidence is strong enough to do so, suggesting that the analysis ideally should make the most efficient use of all available data, thus including information on censoring as well as other baseline and time-dependent covariates in a principled way. A general group sequential framework is proposed for clinical trials with a time-lagged outcome. Treatment effect estimators that take account of censoring and incorporate covariate information at an interim analysis are derived using semiparametric theory and are demonstrated to lead to stronger evidence for early stopping than standard approaches. The associated test statistics are shown to have the independent increments structure, so that standard software can be used to obtain stopping boundaries.
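To make the independent-increments point concrete: with information fractions t1 < t2, the interim and final Wald statistics are jointly normal under the null with correlation sqrt(t1/t2), so a two-look boundary can be found with standard tools. The sketch below is a minimal simulation, not the paper's estimator; the information fractions and the Pocock-style common boundary are illustrative.

    # Find a common two-look rejection boundary c with overall level alpha,
    # using the joint normality implied by independent increments.
    import numpy as np
    from scipy.optimize import brentq

    t1, t2, alpha = 0.5, 1.0, 0.05     # hypothetical information fractions
    rho = np.sqrt(t1 / t2)
    rng = np.random.default_rng(0)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]],
                                size=500_000)

    def overall_type1(c):              # P(reject at look 1 or look 2 | H0)
        return np.mean((np.abs(z[:, 0]) >= c) | (np.abs(z[:, 1]) >= c))

    c = brentq(lambda b: overall_type1(b) - alpha, 1.5, 3.5)
    print(f"common boundary: {c:.2f}") # near 2.18, vs 1.96 with one look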


Subjects
Research Design, Humans, Randomized Controlled Trials as Topic, Odds Ratio
4.
Biometrics; 78(3): 825-838, 2022 Sep.
Article in English | MEDLINE | ID: mdl-34174097

ABSTRACT

The COVID-19 pandemic due to the novel coronavirus SARS-CoV-2 has inspired remarkable breakthroughs in the development of vaccines against the virus and the launch of several phase 3 vaccine trials in Summer 2020 to evaluate vaccine efficacy (VE). Trials of vaccine candidates using mRNA delivery systems developed by Pfizer-BioNTech and Moderna have shown substantial VEs of 94-95%, leading the US Food and Drug Administration to issue Emergency Use Authorizations and subsequent widespread administration of the vaccines. As the trials continue, a key issue is the possibility that VE may wane over time. Ethical considerations dictate that trial participants be unblinded and those randomized to placebo be offered study vaccine, leading to trial protocol amendments specifying unblinding strategies. Crossover of placebo subjects to vaccine complicates inference on waning of VE. We focus on the particular features of the Moderna trial and propose a statistical framework based on a potential outcomes formulation within which we develop methods for inference on potential waning of VE over time and estimation of VE at any postvaccination time. The framework clarifies assumptions made regarding individual- and population-level phenomena and acknowledges the possibility that subjects who are more or less likely to become infected may be crossed over to vaccine differentially over time. The principles of the framework can be adapted straightforwardly to other trials.
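For orientation on the VE scale mentioned above, the standard incidence-rate definition VE = 1 - IRR can be computed directly; the counts below are illustrative, and this is not the paper's potential-outcomes estimator.

    # VE from illustrative case counts and person-years in each arm.
    cases_vax, py_vax = 11, 3_300.0
    cases_pbo, py_pbo = 185, 3_300.0
    irr = (cases_vax / py_vax) / (cases_pbo / py_pbo)  # incidence rate ratio
    print(f"VE = {1 - irr:.1%}")                       # about 94%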


Subjects
COVID-19 Vaccines, COVID-19, COVID-19/prevention & control, Humans, Pandemics/prevention & control, Randomized Controlled Trials as Topic, Research Design, SARS-CoV-2, Vaccine Efficacy
6.
J Infect Dis; 224(12): 1995-2000, 2021 Dec 15.
Article in English | MEDLINE | ID: mdl-34008027

ABSTRACT

To speed the development of vaccines against SARS-CoV-2, the United States Federal Government has funded multiple phase 3 trials of candidate vaccines. A single 11-member data and safety monitoring board (DSMB) monitors all government-funded trials to ensure coordinated oversight, promote harmonized designs, and allow shared insights related to safety across trials. DSMB reviews encompass 3 domains: (1) the conduct of trials, including overall and subgroup accrual and data quality and completeness; (2) safety, including individual events of concern and comparisons by randomized group; and (3) interim analyses of efficacy when event-driven milestones are met. Challenges have included the scale and pace of the trials, the frequency of safety events related to the combined enrollment of over 100 000 participants, many of whom are older adults or have comorbid conditions that place them at independent risk of serious health events, and the politicized environment in which the trials have taken place.


Subjects
COVID-19 Vaccines/adverse effects, COVID-19 Vaccines/immunology, COVID-19/prevention & control, Aged, COVID-19 Vaccines/administration & dosage, Humans, SARS-CoV-2, United States, Vaccines
7.
Biometrics; 74(4): 1180-1192, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29775203

ABSTRACT

Clinicians often make multiple treatment decisions at key points over the course of a patient's disease. A dynamic treatment regime is a sequence of decision rules, each mapping a patient's observed history to the set of available, feasible treatment options at each decision point, and thus formalizes this process. An optimal regime is one leading to the most beneficial outcome on average if used to select treatment for the patient population. We propose a method for estimation of an optimal regime involving two decision points when the outcome of interest is a censored survival time, which is based on maximizing a locally efficient, doubly robust, augmented inverse probability weighted estimator for average outcome over a class of regimes. By casting this optimization as a classification problem, we exploit well-studied classification techniques such as support vector machines to characterize the class of regimes and facilitate implementation via a backward iterative algorithm. Simulation studies of performance and application of the method to data from a sequential, multiple assignment randomized clinical trial in acute leukemia are presented.
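A minimal single-decision sketch of the classification idea follows; the paper treats two decision points and censored survival and augments the weights, whereas everything below is simulated, omits the augmentation terms, and assumes scikit-learn.

    # IPW contrasts become labels and case weights for an off-the-shelf
    # classifier; the fitted separating rule is the estimated regime.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    n = 400
    x = rng.normal(size=(n, 2))
    a = rng.integers(0, 2, n)                       # randomized, P(A=1) = 0.5
    y = x[:, 0] * (2 * a - 1) + rng.normal(size=n)  # benefit flips with x1

    pi = 0.5
    contrast = (a / pi - (1 - a) / (1 - pi)) * y    # per-subject IPW contrast
    labels = (contrast > 0).astype(int)             # recommend treatment?
    clf = SVC(kernel="linear").fit(x, labels, sample_weight=np.abs(contrast))
    print("rule coefficients:", clf.coef_[0])       # weight on x1 dominates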


Subjects
Biometry/methods, Decision Support Techniques, Outcome Assessment, Health Care/methods, Support Vector Machine, Survival Analysis, Acute Disease, Algorithms, Computer Simulation, Humans, Leukemia, Outcome Assessment, Health Care/standards, Randomized Controlled Trials as Topic
8.
Biometrics; 74(3): 900-909, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29359317

ABSTRACT

We consider estimating the effect that discontinuing a beneficial treatment will have on the distribution of a time-to-event clinical outcome, and in particular assessing whether there is a period of time over which the beneficial effect may continue after discontinuation. There are two major challenges. The first is to make a distinction between mandatory discontinuation, where by necessity treatment has to be terminated, and optional discontinuation, which is decided by the preference of the patient or physician. The innovation in this article is to cast the intervention in the form of a dynamic regime "terminate treatment optionally at time v unless a mandatory treatment-terminating event occurs prior to v" and to consider estimating the distribution of time to event as a function of the treatment regime v. The second challenge arises from biases associated with the nonrandom assignment of treatment regimes because, naturally, optional treatment discontinuation is left to the patient and physician, and so time to discontinuation may depend on the patient's disease status. To address this issue, we develop dynamic-regime Marginal Structural Models and use inverse probability of treatment weighting to estimate the impact of time to treatment discontinuation on a time-to-event outcome, compared to the effect of not discontinuing treatment. We illustrate our methods using the IMPROVE-IT data on cardiovascular disease.
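A minimal sketch of the weighting ingredient appears below; the article's dynamic-regime marginal structural model is more involved, and the data, model, and coefficients here are simulated and illustrative, assuming scikit-learn.

    # Model the probability of optional discontinuation from disease-status
    # covariates and form stabilized inverse-probability weights.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 1000
    x = rng.normal(size=(n, 3))                   # disease-status covariates
    p = 1 / (1 + np.exp(-(x @ np.array([0.8, -0.5, 0.3]))))
    d = rng.binomial(1, p)                        # 1 = optional discontinuation

    ps = LogisticRegression().fit(x, d).predict_proba(x)[:, 1]
    w = np.where(d == 1, d.mean() / ps,           # stabilized weights
                 (1 - d.mean()) / (1 - ps))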


Subjects
Survival Analysis, Withholding Treatment/statistics & numerical data, Cardiovascular Diseases/therapy, Computer Simulation, Humans, Kaplan-Meier Estimate, Models, Statistical, Time-to-Treatment
9.
J Am Stat Assoc; 113(524): 1541-1549, 2018.
Article in English | MEDLINE | ID: mdl-30774169

ABSTRACT

Precision medicine is currently a topic of great interest in clinical and intervention science. A key component of precision medicine is that it is evidence-based, i.e., data-driven, and consequently there has been tremendous interest in estimation of precision medicine strategies using observational or randomized study data. One way to formalize precision medicine is through a treatment regime, which is a sequence of decision rules, one per stage of clinical intervention, that map up-to-date patient information to a recommended treatment. An optimal treatment regime is defined as one maximizing the mean of some cumulative clinical outcome if applied to a population of interest. It is well known that even under simple generative models an optimal treatment regime can be a highly nonlinear function of patient information. Consequently, a focal point of recent methodological research has been the development of flexible models for estimating optimal treatment regimes. However, in many settings, estimation of an optimal treatment regime is an exploratory analysis intended to generate new hypotheses for subsequent research and not to directly dictate treatment to new patients. In such settings, an estimated treatment regime that is interpretable in a domain context may be of greater value than an unintelligible treatment regime built using "black-box" estimation methods. We propose an estimator of an optimal treatment regime composed of a sequence of decision rules, each expressible as a list of "if-then" statements that can be presented as either a paragraph or as a simple flowchart that is immediately interpretable to domain experts. The discreteness of these lists precludes smooth, i.e., gradient-based, methods of estimation and leads to non-standard asymptotics. Nevertheless, we provide a computationally efficient estimation algorithm, prove consistency of the proposed estimator, and derive rates of convergence. We illustrate the proposed methods using a series of simulation examples and an application to data from a sequential clinical trial on bipolar disorder.
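To show the kind of object being estimated, here is a hypothetical one-stage decision rule in the "if-then" list form the paper advocates; the variables and thresholds are invented purely for illustration.

    # A decision list: clauses are checked in order; the first match decides.
    def stage1_rule(patient):
        if patient["severity"] >= 7:
            return "medication_A"
        if patient["age"] < 40 and patient["prior_episodes"] <= 1:
            return "psychotherapy"
        return "medication_B"          # default clause closes the list

    print(stage1_rule({"severity": 4, "age": 35, "prior_episodes": 0}))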

10.
Lifetime Data Anal; 23(4): 585-604, 2017 Oct.
Article in English | MEDLINE | ID: mdl-27480339

ABSTRACT

A treatment regime at a single decision point is a rule that assigns a treatment, among the available options, to a patient based on the patient's baseline characteristics. The value of a treatment regime is the average outcome of a population of patients if they were all treated in accordance with the treatment regime, where large values are desirable. The optimal treatment regime is the regime that results in the greatest value. Typically, the optimal treatment regime is estimated by positing a regression relationship for the outcome of interest as a function of treatment and baseline characteristics. However, this can lead to suboptimal treatment regimes when the regression model is misspecified. We instead consider value search estimators for the optimal treatment regime, whereby we directly estimate the value for any treatment regime and then maximize this estimator over a class of regimes. For many studies the primary outcome of interest is survival time, which is often censored. We derive a locally efficient, doubly robust, augmented inverse probability weighted complete-case estimator for the value function with censored survival data and study the large-sample properties of this estimator. The optimization is realized from a weighted classification perspective that allows us to use available off-the-shelf software. In some studies one treatment may have greater toxicity or side effects; thus we also consider estimating a quality-adjusted optimal treatment regime that allows a patient to trade some additional risk of death in order to avoid the more invasive treatment.
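A minimal uncensored sketch of the value-search idea follows; the paper's estimator additionally handles censoring through the augmented complete-case estimator described above, while the data here are simulated.

    # Estimate the IPW value of each rule d(x) = 1{x > c} on a grid of
    # cutoffs and keep the maximizer; the true optimal cutoff is c = 0.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 2000
    x = rng.normal(size=n)
    a = rng.integers(0, 2, n)                      # randomized, P(A=1) = 0.5
    y = 1 + x * (2 * a - 1) + rng.normal(size=n)   # larger outcome is better

    def ipw_value(c):                              # E[ 1{A = d(X)} Y / 0.5 ]
        follows = a == (x > c).astype(int)
        return np.mean(follows * y / 0.5)

    grid = np.linspace(-2, 2, 81)
    print("treat when x >", grid[np.argmax([ipw_value(c) for c in grid])])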


Subjects
Models, Statistical, Survival Analysis, Computer Simulation, Coronary Artery Bypass, Coronary Artery Disease/mortality, Coronary Artery Disease/therapy, Decision Making, Humans, Life Tables, Monte Carlo Method, Percutaneous Coronary Intervention, Treatment Outcome
11.
Ann Am Thorac Soc; 14(2): 172-181, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27779905

ABSTRACT

RATIONALE: Lung transplantation is an accepted and increasingly employed treatment for advanced lung diseases, but the anticipated survival benefit of lung transplantation is poorly understood. OBJECTIVES: To determine whether and for which patients lung transplantation confers a survival benefit in the modern era of U.S. lung allocation. METHODS: Data on 13,040 adults listed for lung transplantation between May 2005 and September 2011 were obtained from the United Network for Organ Sharing. A structural nested accelerated failure time model was used to model the survival benefit of lung transplantation over time. The effects of patient, donor, and transplant center characteristics on the relative survival benefit of transplantation were examined. MEASUREMENTS AND MAIN RESULTS: Overall, 73.8% of transplant recipients were predicted to achieve a 2-year survival benefit with lung transplantation. The survival benefit of transplantation varied by native disease group (P = 0.062), with 2-year expected benefit in 39.2 and 98.9% of transplants occurring in those with obstructive lung disease and cystic fibrosis, respectively, and by lung allocation score at the time of transplantation (P < 0.001), with net 2-year benefit in only 6.8% of transplants occurring for lung allocation score less than 32.5 and in 99.9% of transplants for lung allocation score exceeding 40. CONCLUSIONS: A majority of adults undergoing transplantation experience a survival benefit, with the greatest potential benefit in those with higher lung allocation scores or restrictive native lung disease or cystic fibrosis. These results provide novel information to assess the expected benefit of lung transplantation at an individual level and to enhance lung allocation policy.


Subjects
Cystic Fibrosis/mortality, Lung Diseases, Obstructive/mortality, Lung Transplantation/mortality, Tissue Donors/statistics & numerical data, Tissue and Organ Procurement, Waiting Lists/mortality, Adult, Cystic Fibrosis/surgery, Female, Health Care Rationing/standards, Humans, Lung Diseases, Obstructive/surgery, Male, Middle Aged, Patient Selection, Registries, Retrospective Studies, Survival Rate, Time Factors, United States/epidemiology, Young Adult
12.
J Biom Biostat; 7(1), 2016 Feb.
Article in English | MEDLINE | ID: mdl-27175309

ABSTRACT

Often, sample size is not fixed by design. A key example is a sequential trial with a stopping rule, where stopping is based on what has been observed at an interim look. While such designs are used for time and cost efficiency, and hypothesis testing theory has been well developed, estimation following a sequential trial is a challenging, still controversial problem. Progress has been made in the literature, predominantly for normal outcomes and/or for a deterministic stopping rule. Here, we place these settings in the broader context of outcomes following an exponential family distribution, with a stochastic stopping rule that includes a deterministic rule and a completely random sample size as special cases. It is shown that the estimation problem is usually simpler than often thought. In particular, it is established that the ordinary sample average is a very sensible choice, contrary to commonly encountered statements. We study (1) the so-called incompleteness property of the sufficient statistics, (2) a general class of linear estimators, and (3) joint and conditional likelihood estimation. Apart from the general exponential family setting, normal and binary outcomes are considered as key examples. While our results hold for a general number of looks, for ease of exposition we focus on the simple yet generic setting of two possible sample sizes, N = n or N = 2n.
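The two-sample-size setting is easy to explore by simulation; in the minimal sketch below the stopping threshold is hypothetical, and the normal outcome matches one of the key examples above.

    # Simulate a trial that stops at N = n when the stage-1 mean is large,
    # and otherwise continues to N = 2n; then inspect the sample average.
    import numpy as np

    rng = np.random.default_rng(4)
    mu, n, reps = 0.0, 50, 100_000
    est = np.empty(reps)
    for r in range(reps):
        stage1 = rng.normal(mu, 1.0, n)
        if stage1.mean() > 0.2:                 # stop early at N = n
            est[r] = stage1.mean()
        else:                                   # continue to N = 2n
            both = np.concatenate([stage1, rng.normal(mu, 1.0, n)])
            est[r] = both.mean()
    print("marginal bias of the sample average:", est.mean() - mu)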

13.
Lifetime Data Anal; 22(2): 280-298, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26025499

ABSTRACT

In randomized clinical trials, the log-rank test is often used to test the null hypothesis of equality of the treatment-specific survival distributions. In observational studies, however, the ordinary log-rank test is no longer guaranteed to be valid. In such studies we must be cautious about potential confounders; that is, covariates that affect both the treatment assignment and the survival distribution. In this paper, two cases are considered: the first is when it is believed that all the potential confounders are captured in the primary database; the second is when a substudy is conducted to capture additional confounding covariates. We generalize the augmented inverse probability weighted complete-case estimators for the treatment-specific survival distribution proposed in Bai et al. (Biometrics 69:830-839, 2013) and develop log-rank-type tests in both cases. The consistency and double robustness of the proposed test statistics are shown in simulation studies. These statistics are then applied to the data from the observational study that motivated this research.
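For contrast with the observational-data setting the paper addresses, the ordinary log-rank test on randomized (unconfounded) data looks like the minimal simulated sketch below, which assumes the lifelines package.

    # Two-sample log-rank test with independent censoring.
    import numpy as np
    from lifelines.statistics import logrank_test

    rng = np.random.default_rng(5)
    def simulate(scale, n=150):
        t = rng.exponential(scale, n)          # latent event time
        c = rng.exponential(3.0, n)            # independent censoring time
        return np.minimum(t, c), t <= c
    d0, e0 = simulate(1.0)                     # control arm
    d1, e1 = simulate(1.3)                     # treated arm
    res = logrank_test(d0, d1, event_observed_A=e0, event_observed_B=e1)
    print("log-rank p-value:", res.p_value)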


Subjects
Observational Studies as Topic/statistics & numerical data, Survival Analysis, Computer Simulation, Coronary Artery Disease/mortality, Coronary Artery Disease/therapy, Humans, Models, Statistical, Probability, Proportional Hazards Models, Sampling Studies
14.
Stat Biosci; 7(2): 187-205, 2015 Oct.
Article in English | MEDLINE | ID: mdl-26478751

ABSTRACT

Group sequential trials are one important instance of studies for which the sample size is not fixed a priori but rather takes one of a finite set of pre-specified values, dependent on the observed data. Much work has been devoted to the inferential consequences of this design feature. Molenberghs et al. (2012) and Milanzi et al. (2012) reviewed and extended the existing literature, focusing on a collection of seemingly disparate, but related, settings, namely completely random sample sizes, group sequential studies with deterministic and random stopping rules, incomplete data, and random cluster sizes. They showed that the ordinary sample average is a viable option for estimation following a group sequential trial, for a wide class of stopping rules and for random outcomes with a distribution in the exponential family. Their results are somewhat surprising in the sense that the sample average is not optimal, and further, there does not exist an optimal, or even unbiased, linear estimator. However, the sample average is asymptotically unbiased, both conditionally upon the observed sample size and marginalized over it. By exploiting ignorability they showed that the sample average is the conventional maximum likelihood estimator. They also showed that a conditional maximum likelihood estimator is finite-sample unbiased but is less efficient than the sample average and has a larger mean squared error. Asymptotically, the sample average and the conditional maximum likelihood estimator are equivalent. This previous work is restricted, however, to the situation in which the random sample size can take only two values, N = n or N = 2n. In this paper, we consider the more practically useful setting of sample sizes in the finite set {n1, n2, …, nL}. It is shown that the sample average is then a justifiable estimator, in the sense that it follows from joint likelihood estimation, and it is consistent and asymptotically unbiased. We also show why simulations can give the false impression of bias in the sample average when considered conditional upon the sample size. The consequence is that no corrections need to be made to estimators following sequential trials. When small-sample bias is of concern, the conditional likelihood estimator provides a relatively straightforward modification of the sample average. Finally, it is shown that classical likelihood-based standard errors and confidence intervals can be applied, obviating the need for technical corrections.

16.
J Stat Softw; 56: 2, 2014 Jan 01.
Article in English | MEDLINE | ID: mdl-24688453

ABSTRACT

Generalized linear and nonlinear mixed models (GLMMs and NLMMs) are commonly used to represent non-Gaussian or nonlinear longitudinal or clustered data. A common assumption is that the random effects are Gaussian. However, this assumption may be unrealistic in some applications, and misspecification of the random effects density may lead to maximum likelihood parameter estimators that are inconsistent, biased, and inefficient. Because testing whether the random effects are Gaussian is difficult, previous research has recommended using a flexible random effects density. However, computational limitations have precluded widespread use of flexible random effects densities for GLMMs and NLMMs. We develop a SAS macro, SNP_NLMM, that overcomes the computational challenges to fit GLMMs and NLMMs where the random effects are assumed to follow a smooth density that can be represented by the seminonparametric formulation proposed by Gallant and Nychka (1987). The macro is flexible enough to allow for any density of the response conditional on the random effects and any nonlinear mean trajectory. We demonstrate the SNP_NLMM macro on a GLMM of the disease progression of toenail infection and on an NLMM of intravenous drug concentration over time.
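The seminonparametric representation itself is compact; below is a minimal sketch in Python rather than SAS, with an illustrative polynomial degree and coefficients.

    # Seminonparametric density: a squared polynomial times a normal base
    # density, renormalized to integrate to one (Gallant and Nychka, 1987).
    import numpy as np
    from scipy.stats import norm
    from scipy.integrate import quad

    coef = [1.0, 0.4, -0.2]                    # c0 + c1*b + c2*b**2
    unnorm = lambda b: np.polyval(coef[::-1], b) ** 2 * norm.pdf(b)
    const, _ = quad(unnorm, -np.inf, np.inf)   # normalizing constant
    snp_pdf = lambda b: unnorm(b) / const      # flexible, smooth density
    print(snp_pdf(0.0))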

17.
Stat Methods Med Res; 23(1): 11-41, 2014 Feb.
Article in English | MEDLINE | ID: mdl-22514029

ABSTRACT

The vast majority of settings for which frequentist statistical properties are derived assume a fixed, a priori known sample size. Familiar properties then follow, such as the consistency, asymptotic normality, and efficiency of the sample average for the mean parameter, under a wide range of conditions. We are concerned here with the alternative situation in which the sample size is itself a random variable that may depend on the data being collected. Further, the rule governing this may be deterministic or probabilistic. There are many important practical examples of such settings, including missing data, sequential trials, and informative cluster size. It is well known that special issues can arise when evaluating the properties of statistical procedures under such sampling schemes, and much has been written about specific areas (Grambsch P. Sequential sampling based on the observed Fisher information to guarantee the accuracy of the maximum likelihood estimator. Ann Stat 1983; 11: 68-77; Barndorff-Nielsen O and Cox DR. The effect of sampling rules on likelihood statistics. Int Stat Rev 1984; 52: 309-326). Our aim is to place these various related examples into a single framework derived from the joint modeling of the outcomes and sampling process, and so to derive generic results that in turn provide insight, and in some cases practical consequences, for different settings. It is shown that, even in the simplest case of estimating a mean, some of the results appear counterintuitive. In many examples, the sample average may exhibit small-sample bias and, even when it is unbiased, may not be optimal. Indeed, there may be no minimum variance unbiased estimator for the mean. Such results follow directly from key attributes such as non-ancillarity of the sample size and incompleteness of the minimal sufficient statistic of the sample size and sample sum. Although our results have direct and obvious implications for estimation following group sequential trials, there are also ramifications for a range of other settings, such as random cluster sizes, censored time-to-event data, and the joint modeling of longitudinal and time-to-event data. Here, we use the simplest group sequential setting to develop and explicate the main results. Some implications for random sample sizes and missing data are also considered. Consequences for other related settings will be considered elsewhere.


Subjects
Models, Statistical, Sample Size, Likelihood Functions, Probability
18.
Stat Sci; 29(4): 640-661, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25620840

ABSTRACT

In clinical practice, physicians make a series of treatment decisions over the course of a patient's disease based on his/her baseline and evolving characteristics. A dynamic treatment regime is a set of sequential decision rules that operationalizes this process. Each rule corresponds to a decision point and dictates the next treatment action based on the accrued information. Using existing data, a key goal is estimating the optimal regime, that is, the regime that, if followed by the patient population, would yield the most favorable outcome on average. Q- and A-learning are two main approaches for this purpose. We provide a detailed account of these methods, study their performance, and illustrate them using data from a depression study.
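A minimal one-decision Q-learning sketch follows; the review covers multiple decision points and A-learning as well, whereas the data and model below are simulated and assume scikit-learn.

    # Posit an outcome regression with a treatment interaction, then choose
    # the action that maximizes the fitted Q-function at a given history.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(6)
    n = 500
    x = rng.normal(size=n)
    a = rng.integers(0, 2, n)
    y = x + (1 - 2 * x) * a + rng.normal(size=n)  # effect of a flips with x
    q = LinearRegression().fit(np.column_stack([x, a, x * a]), y)

    def optimal_action(x0):
        q0 = q.predict([[x0, 0.0, 0.0]])[0]       # fitted Q(x0, a=0)
        q1 = q.predict([[x0, 1.0, x0]])[0]        # fitted Q(x0, a=1)
        return int(q1 > q0)                       # truth: treat when x0 < 0.5

    print(optimal_action(-1.0), optimal_action(1.0))   # expect 1 0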

19.
Biometrika; 100(3), 2013.
Article in English | MEDLINE | ID: mdl-24302771

ABSTRACT

A dynamic treatment regime is a list of sequential decision rules for assigning treatment based on a patient's history. Q- and A-learning are two main approaches for estimating the optimal regime, i.e., that yielding the most beneficial outcome in the patient population, using data from a clinical trial or observational study. Q-learning requires postulated regression models for the outcome, while A-learning involves models for that part of the outcome regression representing treatment contrasts and for treatment assignment. We propose an alternative to Q- and A-learning that maximizes a doubly robust augmented inverse probability weighted estimator for population mean outcome over a restricted class of regimes. Simulations demonstrate the method's performance and robustness to model misspecification, which is a key concern.

20.
Biometrics; 69(4): 830-839, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24117096

ABSTRACT

Observational studies are frequently conducted to compare the effects of two treatments on survival. For such studies we must be concerned about confounding; that is, there may be covariates that affect both the treatment assignment and the survival distribution. With confounding, the usual treatment-specific Kaplan-Meier estimator might be a biased estimator of the underlying treatment-specific survival distribution. This article has two aims. The first is to use semiparametric theory to derive a doubly robust estimator of the treatment-specific survival distribution in cases where it is believed that all the potential confounders are captured. In cases where not all potential confounders have been captured, one may conduct a substudy using a stratified sampling scheme to capture additional covariates that may account for confounding. The second aim is to derive a doubly robust estimator of the treatment-specific survival distributions and its variance estimator under such a stratified sampling scheme. Simulation studies are conducted to show consistency and double robustness. These estimators are then applied to the data from the ASCERT study that motivated this research.
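A minimal inverse-probability-weighted sketch of the first aim appears below; the article adds an augmentation term to obtain double robustness, while the data here are simulated and the lifelines and scikit-learn packages are assumed.

    # Weight subjects by inverse propensity, then fit weighted
    # treatment-specific Kaplan-Meier curves.
    import numpy as np
    from lifelines import KaplanMeierFitter
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    n = 800
    x = rng.normal(size=(n, 2))
    a = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))       # confounded treatment
    t = rng.exponential(np.exp(0.3 * a + 0.5 * x[:, 0]))  # event time
    c = rng.exponential(2.0, n)                           # censoring time
    dur, event = np.minimum(t, c), (t <= c).astype(int)

    ps = LogisticRegression().fit(x, a).predict_proba(x)[:, 1]
    w = np.where(a == 1, 1 / ps, 1 / (1 - ps))            # inverse propensity
    km1 = KaplanMeierFitter().fit(dur[a == 1], event[a == 1], weights=w[a == 1])
    km0 = KaplanMeierFitter().fit(dur[a == 0], event[a == 0], weights=w[a == 0])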


Subjects
Coronary Artery Disease/mortality, Coronary Artery Disease/surgery, Data Interpretation, Statistical, Observational Studies as Topic/methods, Outcome Assessment, Health Care/methods, Survival Analysis, Humans, Prevalence, Reproducibility of Results, Sample Size, Sensitivity and Specificity, Statistical Distributions, Treatment Outcome, United States/epidemiology