Results 1 - 20 of 47
1.
BMC Med Res Methodol ; 23(1): 279, 2023 11 24.
Article in English | MEDLINE | ID: mdl-38001458

ABSTRACT

BACKGROUND: Clinical trials often seek to determine the superiority, equivalence, or non-inferiority of an experimental condition (e.g., a new drug) compared to a control condition (e.g., a placebo or an existing drug). The use of frequentist statistical methods to analyze data from these designs is ubiquitous even though such methods have several limitations. Bayesian inference remedies many of these shortcomings and allows for intuitive interpretations, but it is currently difficult for applied researchers to implement. RESULTS: We outline the frequentist conceptualization of superiority, equivalence, and non-inferiority designs and discuss its disadvantages. Subsequently, we explain how Bayes factors can be used to compare the relative plausibility of competing hypotheses. We present baymedr, an R package and web application that provide user-friendly tools for the computation of Bayes factors for superiority, equivalence, and non-inferiority designs. Instructions on how to use baymedr are provided, and an example illustrates how existing results can be reanalyzed with baymedr. CONCLUSIONS: Our baymedr R package and web application enable researchers to conduct Bayesian superiority, equivalence, and non-inferiority tests. baymedr is characterized by a user-friendly implementation, making it convenient for researchers who are not statistical experts. Using baymedr, it is possible to calculate Bayes factors from both raw data and summary statistics.


Subjects
Research Design; Humans; Bayes Theorem
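The abstract notes that baymedr accepts both raw data and summary statistics. As a minimal sketch in R, here is how a superiority and a non-inferiority Bayes factor might be computed from summary statistics; all trial numbers are invented, and the argument names follow the package's summary-statistic interface as documented in the paper, so they should be verified against the current baymedr reference before use.

```r
# Minimal sketch: Bayes factors from summary statistics with baymedr.
# All numbers invented; check argument names against the package docs.
# install.packages("baymedr")
library(baymedr)

# Superiority design: is the experimental group better than control?
super_bf(n_x = 138, n_y = 142,           # control / experimental sample sizes
         mean_x = 18.3, mean_y = 15.9,   # group means on the outcome
         sd_x = 6.2, sd_y = 5.8)         # group standard deviations

# Non-inferiority design with a prespecified margin on the raw scale
infer_bf(n_x = 138, n_y = 142,
         mean_x = 18.3, mean_y = 15.9,
         sd_x = 6.2, sd_y = 5.8,
         ni_margin = 2, ni_margin_std = FALSE)  # margin of 2 raw units
```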
2.
PLoS One ; 18(10): e0292279, 2023.
Article in English | MEDLINE | ID: mdl-37788282

ABSTRACT

BACKGROUND: Publishing study results in scientific journals has been the standard way of disseminating science. However, getting results published may depend on their statistical significance. The consequence is that the published record of scientific knowledge might be biased, a phenomenon known as publication bias. The main objective of the present study is to gain more insight into publication bias by examining it at the author, reviewer, and editor level. Additionally, we make a direct comparison between publication bias induced by authors, by reviewers, and by editors. We approached our participants by e-mail, asking them to fill out an online survey. RESULTS: Our findings suggest that statistically significant findings have a higher likelihood of being published than statistically non-significant findings, because (1) authors (n = 65) are more likely to write up and submit articles with significant results than articles with non-significant results (median effect size 1.10, BF10 = 1.09 × 10^7); (2) reviewers (n = 60) give more favourable reviews to articles with significant results than to articles with non-significant results (median effect size 0.58, BF10 = 4.73 × 10^2); and (3) editors (n = 171) are more likely to accept for publication articles with significant results than articles with non-significant results (median effect size 0.94, BF10 = 7.63 × 10^7). Evidence on differences in the relative contributions to publication bias by authors, reviewers, and editors is ambiguous (editors vs reviewers: BF10 = 0.31; reviewers vs authors: BF10 = 3.11; editors vs authors: BF10 = 0.42). DISCUSSION: One of the main limitations was that, rather than investigating publication bias directly, we studied the potential for publication bias. Another limitation was the low response rate to the survey.


Subjects
Authorship; Writing; Humans; Publication Bias; Surveys and Questionnaires; Electronic Mail
3.
R Soc Open Sci ; 10(2): 210586, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36756069

ABSTRACT

Increased execution of replication studies contributes to the effort to restore the credibility of empirical research. However, a second generation of problems arises: the number of potential replication targets is at a serious mismatch with available resources. Given limited resources, replication target selection should be well justified, systematic, and transparently communicated. At present, guidance on what to consider when selecting a replication target is limited to theoretical discussion, self-reported justifications, and a few formalized suggestions. In this Registered Report, we proposed a study involving the scientific community to create a list of considerations for consultation when selecting a replication target in psychology. We employed a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, we surveyed psychologists who had previously selected a replication target about their considerations. Third, we incorporated the results into the preliminary list and sent the updated list to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we established consensus regarding what to consider when selecting a replication target. The resulting checklist can be used for transparently communicating the rationale for selecting studies for replication.

4.
PLoS One ; 18(1): e0274429, 2023.
Article in English | MEDLINE | ID: mdl-36701303

ABSTRACT

As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce the repliCATS (Collaborative Assessments for Trustworthy Science) process, a new method for eliciting expert predictions about the replicability of research. This process is a structured expert elicitation approach based on a modified Delphi technique, applied to the evaluation of research claims in the social and behavioural sciences. The utility of processes that predict replicability lies in their capacity to test scientific claims without the costs of full replication. Experimental data support the validity of this process: a validation study produced a classification accuracy of 84% and an area under the curve (AUC) of 0.94, meeting or exceeding the accuracy of other techniques used to predict replicability. The repliCATS process provides other benefits. It is highly scalable and can be deployed both for rapid assessment of small numbers of claims and for assessment of high volumes of claims over an extended period through an online elicitation platform, having been used to assess 3,000 research claims over an 18-month period. It can be implemented in a range of ways, and we describe one such implementation. An important advantage of the repliCATS process is that it collects qualitative data with the potential to provide insight into the limits of generalizability of scientific claims. The primary limitation of the repliCATS process is its reliance on human-derived predictions, with consequent costs in terms of participant fatigue, although careful design can minimise these costs. The repliCATS process has potential applications in alternative peer review and in the allocation of effort for replication studies.


Subjects
Behavioral Sciences; Data Accuracy; Humans; Reproducibility of Results; Costs and Cost Analysis; Peer Review
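The classification accuracy and AUC reported in this abstract are standard validation metrics for probabilistic predictions. A self-contained sketch in base R, with invented data standing in for elicited replicability scores; the AUC is computed via its rank interpretation, i.e., the probability that a claim that replicated receives a higher score than one that did not.

```r
# Toy validation of elicited replicability scores (all data invented).
set.seed(1)
truth <- rbinom(200, 1, 0.5)                       # 1 = claim replicated
score <- plogis(1.5 * (truth - 0.5) + rnorm(200))  # elicited P(replication)

accuracy <- mean((score > 0.5) == (truth == 1))    # accuracy at a 0.5 cutoff

# AUC: probability a replicating claim outranks a non-replicating one
pos <- score[truth == 1]
neg <- score[truth == 0]
auc <- mean(outer(pos, neg, ">")) + 0.5 * mean(outer(pos, neg, "=="))
c(accuracy = accuracy, AUC = auc)
```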
5.
Psychol Methods ; 28(3): 740-755, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34735173

ABSTRACT

Some important research questions require the ability to find evidence for two conditions being practically equivalent. This is impossible to accomplish within the traditional frequentist null hypothesis significance testing framework; hence, other methodologies must be utilized. We explain and illustrate three approaches for finding evidence for equivalence: The frequentist two one-sided tests procedure, the Bayesian highest density interval region of practical equivalence procedure, and the Bayes factor interval null procedure. We compare the classification performances of these three approaches for various plausible scenarios. The results indicate that the Bayes factor interval null approach compares favorably to the other two approaches in terms of statistical power. Critically, compared with the Bayes factor interval null procedure, the two one-sided tests and the highest density interval region of practical equivalence procedures have limited discrimination capabilities when the sample size is relatively small: Specifically, in order to be practically useful, these two methods generally require over 250 cases within each condition when rather large equivalence margins of approximately .2 or .3 are used; for smaller equivalence margins even more cases are required. Because of these results, we recommend that researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence, especially for studies that are constrained on sample size. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Research Design; Humans; Bayes Theorem; Sample Size
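For readers who want to see the mechanics of the first of the three procedures compared here, the frequentist two one-sided tests (TOST) can be run in base R alone. The sketch below uses simulated data under a true null effect and an equivalence margin of 0.3 (all numbers illustrative); the Bayes factor interval-null counterpart favoured in the paper would require a package such as baymedr or BayesFactor.

```r
# Two one-sided tests (TOST) for equivalence, base R only (illustrative).
set.seed(123)
m <- 0.3                                   # equivalence margin (raw units, sd = 1)
x <- rnorm(300)                            # condition 1, true effect = 0
y <- rnorm(300)                            # condition 2

p_lower <- t.test(x, y, mu = -m, alternative = "greater")$p.value
p_upper <- t.test(x, y, mu =  m, alternative = "less")$p.value
tost_p  <- max(p_lower, p_upper)           # equivalence declared if < alpha
tost_p
```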
6.
Psychol Methods ; 28(3): 558-579, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35298215

ABSTRACT

The last 25 years have shown a steady increase in attention to the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses; Bayesian evidence synthesis; Bayesian variable selection and model averaging; and Bayesian evaluation of cognitive models. We elaborate on what each application entails, give illustrative examples, and provide an overview of key references and software, with links to other applications. The article concludes with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Bayes Theorem; Behavioral Research; Psychology; Humans; Behavioral Research/methods; Psychology/methods; Software; Research Design
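Two of the applications reviewed here, point-null and interval hypotheses, can be tried out directly with the widely used BayesFactor R package. A minimal sketch with simulated data; the prior scale is left at the package default.

```r
# Default Bayes factor t-tests with the BayesFactor package (simulated data).
library(BayesFactor)
set.seed(42)
x <- rnorm(50, mean = 0.4)   # "treatment" scores
y <- rnorm(50)               # "control" scores

ttestBF(x = x, y = y)                               # point null: delta = 0
ttestBF(x = x, y = y, nullInterval = c(-0.1, 0.1))  # interval: |delta| <= 0.1
```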
7.
Psychon Bull Rev ; 29(1): 70-87, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34254263

ABSTRACT

The practice of sequentially testing a null hypothesis as data are collected, until the null hypothesis is rejected, is known as optional stopping. It is well known that optional stopping is problematic in the context of p value-based null hypothesis significance testing: the false-positive rate quickly exceeds the single test's significance level. However, the state of affairs under null hypothesis Bayesian testing, where p values are replaced by Bayes factors, has perhaps surprisingly been much less settled. Rouder (2014) used simulations to defend the use of optional stopping under null hypothesis Bayesian testing. The idea behind these simulations is closely related to sampling from prior predictive distributions. Deng et al. (2016) and Hendriksen et al. (2020) have provided mathematical evidence that optional stopping under null hypothesis Bayesian testing is indeed unproblematic under some conditions. These papers are, however, exceedingly technical for most researchers in the applied social sciences. In this paper, we provide mathematical derivations concerning Rouder's approximate simulation results for the two Bayesian hypothesis tests that he considered. The key idea is to consider the probability distribution of the Bayes factor, regarded as a random variable across repeated sampling. This paper therefore offers an intuitive perspective on the literature, and we believe it is a valid contribution towards understanding the practice of optional stopping in the context of Bayesian hypothesis testing.


Subjects
Research Design; Bayes Theorem; Computer Simulation; Humans; Probability
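The paper's key idea, treating the Bayes factor as a random variable across repeated sampling, can be made concrete with a small simulation. The sketch below uses a test for which BF10 has a closed form (normal data with known variance, H0: mu = 0 vs H1: mu ~ N(0, tau^2)); this is a simplified stand-in for the tests Rouder considered, with all settings invented.

```r
# Optional stopping under a simple Bayesian test with a closed-form BF10.
# H0: mu = 0 vs H1: mu ~ N(0, tau^2); data are N(mu, sigma^2), sigma known.
set.seed(7)
sigma <- 1; tau <- 1; n_max <- 500; threshold <- 10

bf10 <- function(xbar, n) {
  # marginal likelihood of the sample mean under H1 divided by that under H0
  dnorm(xbar, 0, sqrt(sigma^2 / n + tau^2)) / dnorm(xbar, 0, sigma / sqrt(n))
}

hit <- replicate(2000, {
  x   <- rnorm(n_max, 0, sigma)                  # data generated under H0
  bfs <- bf10(cumsum(x) / seq_len(n_max), seq_len(n_max))
  any(bfs > threshold)                           # stop as soon as BF10 > 10
})
mean(hit)  # long-run rate of misleading evidence despite optional stopping
```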
8.
Psychol Methods ; 27(3): 451-465, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34881956

ABSTRACT

Tendeiro and Kiers (2019) provide a detailed and scholarly critique of Null Hypothesis Bayesian Testing (NHBT) and its central component, the Bayes factor, which allows researchers to update knowledge and quantify statistical evidence. Tendeiro and Kiers conclude that NHBT constitutes an improvement over frequentist p-values, but primarily elaborate on a list of 11 "issues" with NHBT. We believe that several of the issues identified by Tendeiro and Kiers are of central importance for elucidating the complementary roles of hypothesis testing versus parameter estimation, and for appreciating the virtue of statistical thinking over the conduct of statistical rituals. But although we agree with many of their thoughtful recommendations, we believe that Tendeiro and Kiers are overly pessimistic, and that several of their "issues" with NHBT may in fact be conceived of as pronounced advantages. We illustrate our arguments with simple, concrete examples and end with a critical discussion of one of Tendeiro and Kiers' recommendations, namely that "estimation of the full posterior distribution offers a more complete picture" than a Bayes factor hypothesis test. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subjects
Knowledge; Research Design; Bayes Theorem; Humans
9.
Nat Hum Behav ; 5(11): 1473-1480, 2021 11.
Article in English | MEDLINE | ID: mdl-34764461

ABSTRACT

We argue that statistical practice in the social and behavioural sciences benefits from transparency, a fair acknowledgement of uncertainty and openness to alternative interpretations. Here, to promote such a practice, we recommend seven concrete statistical procedures: (1) visualizing data; (2) quantifying inferential uncertainty; (3) assessing data preprocessing choices; (4) reporting multiple models; (5) involving multiple analysts; (6) interpreting results modestly; and (7) sharing data and code. We discuss their benefits and limitations, and provide guidelines for adoption. Each of the seven procedures finds inspiration in Merton's ethos of science as reflected in the norms of communalism, universalism, disinterestedness and organized scepticism. We believe that these ethical considerations, as well as their statistical consequences, establish common ground among data analysts, despite continuing disagreements about the foundations of statistical inference.


Assuntos
Estatística como Assunto , Interpretação Estatística de Dados , Humanos , Disseminação de Informação , Modelos Estatísticos , Projetos de Pesquisa/normas , Estatística como Assunto/métodos , Estatística como Assunto/normas , Incerteza
10.
Elife ; 10, 2021 11 09.
Article in English | MEDLINE | ID: mdl-34751133

ABSTRACT

Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.


Subjects
Consensus; Data Analysis; Datasets as Topic; Research
11.
Psychol Med ; 51(16): 2752-2761, 2021 12.
Article in English | MEDLINE | ID: mdl-34620261

ABSTRACT

Approval and prescription of psychotropic drugs should be informed by the strength of evidence for efficacy. Using a Bayesian framework, we examined (1) whether psychotropic drugs are supported by substantial evidence at the time of approval by the Food and Drug Administration (FDA), and (2) whether there are systematic differences across drug groups. Data from short-term, placebo-controlled phase II/III clinical trials for 15 antipsychotics, 16 antidepressants for depression, nine antidepressants for anxiety, and 20 drugs for attention deficit hyperactivity disorder (ADHD) were extracted from FDA reviews. Bayesian model-averaged meta-analysis was performed and the strength of evidence was quantified (i.e., BF_BMA). Strength of evidence and extent of trialling varied between drugs. Median evidential strength was extreme for ADHD medication (BF_BMA = 1820.4), moderate for antipsychotics (BF_BMA = 365.4), and considerably lower, and more frequently classified as weak or moderate, for antidepressants for depression (BF_BMA = 94.2) and anxiety (BF_BMA = 49.8). Varying median effect sizes (ES_schizophrenia = 0.45, ES_depression = 0.30, ES_anxiety = 0.37, ES_ADHD = 0.72), sample sizes (N_schizophrenia = 324, N_depression = 218, N_anxiety = 254, N_ADHD = 189.5), and numbers of trials (k_schizophrenia = 3, k_depression = 5.5, k_anxiety = 3, k_ADHD = 2) might account for these differences. Although most drugs were supported by strong evidence at the time of approval, some had only moderate or ambiguous evidence. These results show the need for more systematic quantification and classification of the statistical evidence for psychotropic drugs. Evidential strength should be communicated transparently and clearly to clinical decision makers.


Subjects
Antipsychotic Agents; Attention Deficit Disorder with Hyperactivity; Humans; Antipsychotic Agents/therapeutic use; Bayes Theorem; Psychotropic Drugs/therapeutic use; Antidepressive Agents/therapeutic use; Attention Deficit Disorder with Hyperactivity/drug therapy
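The model-averaged Bayes factors (BF_BMA) reported here combine fixed-effect and random-effects meta-analytic models. One publicly available implementation of this idea is the metaBMA R package; the sketch below follows its documented interface as we recall it, with invented trial summaries, and should be checked against the package reference before use.

```r
# Bayesian model-averaged meta-analysis (invented trial summaries).
# install.packages("metaBMA")
library(metaBMA)

trials <- data.frame(
  yi  = c(0.42, 0.31, 0.55, 0.28),  # per-trial standardized effect sizes
  sei = c(0.15, 0.12, 0.18, 0.14)   # corresponding standard errors
)

fit <- meta_bma(y = yi, SE = sei, data = trials)  # default priors
fit  # model-averaged posterior and inclusion Bayes factor
```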
12.
R Soc Open Sci ; 8(9): 191354, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34527263

ABSTRACT

Current discussions on improving the reproducibility of science often revolve around statistical innovations. However, a valid operationalization of phenomena is equally important for improving methodological rigour. Operationalization is the process of translating theoretical constructs into measurable laboratory quantities. The validity of operationalization is therefore central to the quality of empirical studies. But do differences in the validity of operationalization affect the way scientists evaluate scientific literature? To investigate this, we manipulated the strength of operationalization of three published studies and sent them to researchers via e-mail. In the first task, researchers were presented with a summary of the Method and Results sections from one of the studies and were asked to guess, via a multiple-choice questionnaire, the hypothesis that was investigated. In a second task, researchers were asked to rate the perceived quality of the study. Our results show that (1) researchers are better at inferring the underlying research question from empirical results when the operationalization is more valid, but (2) differences in validity are only partly reflected in judgements of a study's quality. Taken together, these results partially corroborate the notion that researchers' evaluations of research results are not affected by operationalization validity.

13.
PLoS One ; 16(7): e0255093, 2021.
Article in English | MEDLINE | ID: mdl-34297766

ABSTRACT

BACKGROUND: Following testing in clinical trials, the use of remdesivir for treatment of COVID-19 has been authorized in parts of the world, including the USA and Europe. Early authorizations were largely based on results from two clinical trials. A third study, published by Wang et al., was underpowered and deemed inconclusive. Although regulators have shown an interest in interpreting the Wang et al. study, under a frequentist framework it is difficult to determine whether the non-significant finding was caused by a lack of power or by the absence of an effect. Bayesian hypothesis testing does allow for quantification of evidence in favor of the absence of an effect. FINDINGS: Results of our Bayesian reanalysis of the three trials show ambiguous evidence for the primary outcome of clinical improvement and moderate evidence against the secondary outcome of decreased mortality rate. Additional analyses of three studies published after initial marketing approval support these findings. CONCLUSIONS: We recommend that regulatory bodies take all available evidence into account for endorsement decisions. A Bayesian approach can be beneficial, in particular in the case of statistically non-significant results. This is especially pressing when only limited clinical efficacy data are available.


Subjects
Adenosine Monophosphate/analogs & derivatives; Alanine/analogs & derivatives; COVID-19 Drug Treatment; COVID-19/epidemiology; SARS-CoV-2; Adenosine Monophosphate/administration & dosage; Alanine/administration & dosage; Clinical Trials as Topic; Europe/epidemiology; Humans; Treatment Outcome; United States/epidemiology
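As a hedged illustration of how a Bayes factor for a secondary outcome such as mortality might be computed from published counts, here is a 2x2 contingency-table test with the BayesFactor package. The counts are invented, and the actual reanalysis in the paper used more elaborate models than this sketch.

```r
# Bayes factor for a 2x2 mortality table (all counts invented).
library(BayesFactor)
deaths <- matrix(c( 32,  54,    # died:     treatment, control
                   490, 468),   # survived: treatment, control
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("died", "survived"),
                                 c("treatment", "control")))

# Column totals (group sizes) are fixed by design, hence fixedMargin = "cols"
contingencyTableBF(deaths, sampleType = "indepMulti", fixedMargin = "cols")
```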
14.
R Soc Open Sci ; 8(5): 201697, 2021 May 19.
Article in English | MEDLINE | ID: mdl-34017596

ABSTRACT

In response to the much-debated crisis of confidence, replication studies are becoming increasingly common. Multiple frequentist and Bayesian measures have been proposed to evaluate whether a replication is successful, but little is known about which method best captures replication success. This study is one of the first attempts to compare a number of quantitative measures of replication success with respect to their ability to draw the correct inference when the underlying truth is known, while taking publication bias into account. Our results show that Bayesian metrics seem to slightly outperform frequentist metrics across the board. Generally, meta-analytic approaches seem to slightly outperform metrics that evaluate single studies, except in the scenario of extreme publication bias, where this pattern reverses.

15.
R Soc Open Sci ; 7(4): 181351, 2020 Apr.
Article in English | MEDLINE | ID: mdl-32431853

ABSTRACT

The crisis of confidence has undermined the trust that researchers place in the findings of their peers. To increase trust in research, initiatives such as preregistration have been suggested, which aim to prevent various questionable research practices. As it stands, however, no empirical evidence exists that preregistration does increase perceptions of trust. The picture may be complicated by a researcher's familiarity with the author of the study, regardless of the preregistration status of the research. This registered report presents an empirical assessment of the extent to which preregistration increases the trust of 209 active academics in the reported outcomes, and of how familiarity with another researcher influences that trust. Contrary to our expectations, we report ambiguous Bayes factors and conclude that we do not have strong evidence with which to answer our research questions. Our findings are presented along with evidence that our manipulations were ineffective for many participants, leading to the exclusion of 68% of complete datasets and, as a consequence, an underpowered design. We discuss other limitations and confounds that may explain why the findings of the study deviate from a previously conducted pilot study. We reflect on the benefits of using the registered report submission format in light of our results. The OSF page for this registered report and its pilot can be found at http://dx.doi.org/10.17605/OSF.IO/B3K75.

17.
Psychol Rev ; 127(2): 186-215, 2020 03.
Article in English | MEDLINE | ID: mdl-31580104

ABSTRACT

Independent racing evidence-accumulator models have proven fruitful in advancing our understanding of rapid decisions, mainly in the case of binary choice, where they can be estimated relatively easily and are known to account for a range of benchmark phenomena. Typically, such models assume a one-to-one mapping between accumulators and responses. We explore an alternative independent-race framework in which more than one accumulator can be associated with each response, and a response is triggered when a sufficient number of the accumulators associated with it reach their thresholds. Each accumulator is primarily driven by the difference in evidence supporting one response over another (i.e., that response's "advantage"), with secondary inputs corresponding to the total evidence for both responses and a constant term. We use Brown and Heathcote's (2008) linear ballistic accumulator (LBA) to instantiate the framework in a mathematically tractable measurement model (i.e., a model whose parameters can be successfully recovered from data). We show that this "advantage LBA" model provides a detailed quantitative account of a variety of benchmark binary and multiple-choice phenomena that traditional independent accumulator models struggle with: in binary choice, the effects of additive versus multiplicative changes to input values; in multiple choice, the effects of manipulating the strength of lure (i.e., nontarget) stimuli, and Hick's law. We conclude that the advantage LBA provides a tractable new avenue for understanding the dynamics of decisions among multiple choices. (PsycINFO Database Record (c) 2020 APA, all rights reserved).


Subjects
Choice Behavior; Models, Theoretical; Neural Inhibition; Reaction Time; Humans
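To give a concrete feel for this model class, the sketch below simulates a two-accumulator linear ballistic accumulator race in which each accumulator's mean drift is a constant plus an advantage (difference) term plus a sum term, as in the framework described above. All parameter values are invented, and the published advantage LBA is richer than this toy version, for instance in how accumulators map to responses in multiple choice.

```r
# Toy two-alternative race with advantage-coded LBA drifts (invented values).
rlba_trial <- function(S, v0 = 1, wd = 2, ws = 0.2,
                       A = 0.5, b = 1, t0 = 0.2, s = 0.3) {
  # mean drift = constant + advantage term + sum term
  v <- c(v0 + wd * (S[1] - S[2]) + ws * (S[1] + S[2]),
         v0 + wd * (S[2] - S[1]) + ws * (S[1] + S[2]))
  d     <- pmax(rnorm(2, v, s), 1e-6)  # trial-to-trial drift variability
  start <- runif(2, 0, A)              # uniform start points
  time  <- (b - start) / d             # linear ballistic rise to threshold b
  c(rt = t0 + min(time), response = which.min(time))
}

set.seed(11)
trials <- t(replicate(1000, rlba_trial(S = c(0.7, 0.3))))
mean(trials[, "response"] == 1)  # choice proportion for the stronger stimulus
head(trials)                     # simulated response times and choices
```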
18.
BMC Med Res Methodol ; 19(1): 218, 2019 11 27.
Article in English | MEDLINE | ID: mdl-31775644

ABSTRACT

BACKGROUND: Until recently, a typical rule for the endorsement of new medications by the Food and Drug Administration has been the existence of at least two statistically significant clinical trials favoring the new medication. This rule has consequences for the true positive rate (endorsement of an effective treatment) and the false positive rate (endorsement of an ineffective treatment). METHODS: In this paper, we compare true positive and false positive rates for different evaluation criteria through simulations that rely on (1) conventional p-values; (2) confidence intervals based on meta-analyses assuming fixed or random effects; and (3) Bayes factors. We varied the threshold levels for statistical evidence, the thresholds for what constitutes a clinically meaningful treatment effect, and the number of trials conducted. RESULTS: Our results show that Bayes factors, meta-analytic confidence intervals, and p-values often perform similarly. Bayes factors may perform better when the number of trials conducted is high, when trials have small sample sizes, and when clinically meaningful effects are not small, particularly in fields where the proportion of non-zero effects is relatively large. CONCLUSIONS: Thinking about realistic effect sizes in conjunction with desirable levels of statistical evidence, as well as quantifying statistical evidence with Bayes factors, may help improve decision-making in some circumstances.


Subjects
Bayes Theorem; Clinical Trials as Topic; Data Interpretation, Statistical; Drug Approval; False Negative Reactions; False Positive Reactions; Humans; Predictive Value of Tests; Sample Size
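The consequences of the "at least two significant trials" rule are easy to explore by simulation. A base-R sketch (all design numbers invented): for a given true effect delta, it estimates the probability that every one of k trials reaches two-sided significance, which is the endorsement probability under the rule.

```r
# True/false positive rates of the "two significant trials" rule (base R).
set.seed(99)
endorse_rate <- function(delta, n = 100, k = 2, alpha = 0.05, reps = 5000) {
  mean(replicate(reps, {
    p <- replicate(k, t.test(rnorm(n, delta), rnorm(n))$p.value)
    all(p < alpha)   # endorse only if every trial is significant
  }))
}

endorse_rate(delta = 0.3)  # true positive rate for a moderately effective drug
endorse_rate(delta = 0)    # false positive rate, roughly alpha^2
```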
19.
Int J Methods Psychiatr Res ; 28(4): e1795, 2019 12.
Article in English | MEDLINE | ID: mdl-31264326

ABSTRACT

OBJECTIVES: In this study, we examined the consequences of ignoring violations of the assumptions underlying the use of sum scores in assessing attention problems (AP), and whether psychometrically more refined models improve predictions of relevant outcomes in adulthood. METHODS: Data from the Tracking Adolescents' Individual Lives Survey were used. AP symptom properties were examined using the AP scale of the Child Behavior Checklist at age 11. Consequences of model violations were evaluated in relation to psychopathology, educational attainment, financial status, and the ability to form relationships in adulthood. RESULTS: Results showed that symptoms differed with respect to information and difficulty. Moreover, evidence of multidimensionality was found, with two groups of items measuring sluggish cognitive tempo and attention deficit hyperactivity disorder symptoms. Item response theory analyses indicated that a bifactor model fitted these data better than competing models. In terms of accuracy of predicting functional outcomes, sum scores were robust against violations of assumptions in some situations. Nevertheless, AP scores derived from the bifactor model showed some superiority over sum scores. CONCLUSION: These findings show that more accurate predictions of later-life difficulties can be made if one uses a more suitable psychometric model to assess AP severity in children. This has important implications for research and clinical practice.


Subjects
Attention Deficit Disorder with Hyperactivity/diagnosis; Behavior Rating Scale/standards; Child Behavior Disorders/diagnosis; Psychiatric Status Rating Scales/standards; Psychometrics/standards; Adolescent; Adult; Child; Female; Humans; Longitudinal Studies; Male; Models, Statistical; Severity of Illness Index; Young Adult
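Fitting a bifactor item response model of the kind reported here can be sketched with the mirt R package. The data below are simulated, the split of items into two specific factors (standing in for sluggish cognitive tempo vs ADHD symptoms) is invented, and argument details should be checked against the mirt documentation.

```r
# Bifactor IRT sketch with mirt: general factor + two specific factors.
# Simulated binary items; the item-to-factor layout is invented.
library(mirt)
set.seed(5)

a <- cbind(rlnorm(10, 0.2, 0.2),               # general-factor slopes
           c(rlnorm(5, 0.2, 0.2), rep(0, 5)),  # specific factor 1 items
           c(rep(0, 5), rlnorm(5, 0.2, 0.2)))  # specific factor 2 items
dat <- simdata(a = a, d = rnorm(10), N = 1000, itemtype = "dich")

specific <- c(rep(1, 5), rep(2, 5))  # which specific factor each item loads on
fit <- bfactor(dat, specific)
summary(fit)                         # standardized loadings per factor
```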
20.
BMC Med Res Methodol ; 19(1): 71, 2019 03 29.
Article in English | MEDLINE | ID: mdl-30925900

ABSTRACT

BACKGROUND: In clinical trials, study designs may focus on assessing the superiority, equivalence, or non-inferiority of a new medicine or treatment relative to a control. Typically, evidence in each of these paradigms is quantified with a variant of the null hypothesis significance test. A null hypothesis is assumed (a null effect for superiority; inferiority by at least a specified margin for non-inferiority; inferiority or superiority by at least a specified margin for equivalence), after which the probability of obtaining data more extreme than those observed under the null hypothesis is quantified by a p-value. Although ubiquitous in clinical testing, the null hypothesis significance test can lead to a number of difficulties in interpreting the statistical evidence. METHODS: We advocate quantifying evidence instead by means of Bayes factors and highlight how these can be calculated for the different types of research design. RESULTS: We illustrate Bayes factors in practice with reanalyses of data from existing published studies. CONCLUSIONS: Bayes factors for superiority, non-inferiority, and equivalence designs allow for explicit quantification of evidence in favor of the null hypothesis. They also allow for interim testing without the need to employ explicit corrections for multiple testing.


Subjects
Algorithms; Bayes Theorem; Evidence-Based Medicine/statistics & numerical data; Outcome Assessment, Health Care/statistics & numerical data; Research Design; Biometry/methods; Evidence-Based Medicine/methods; Humans; Outcome Assessment, Health Care/methods; Therapeutic Equivalency