Results 1 - 20 of 52
1.
BMC Med Res Methodol ; 23(1): 279, 2023 Nov 24.
Article in English | MEDLINE | ID: mdl-38001458

ABSTRACT

BACKGROUND: Clinical trials often seek to determine the superiority, equivalence, or non-inferiority of an experimental condition (e.g., a new drug) compared to a control condition (e.g., a placebo or an already existing drug). The use of frequentist statistical methods to analyze data for these types of designs is ubiquitous even though they have several limitations. Bayesian inference remedies many of these shortcomings and allows for intuitive interpretations, but it is currently difficult to implement for the applied researcher. RESULTS: We outline the frequentist conceptualization of superiority, equivalence, and non-inferiority designs and discuss its disadvantages. Subsequently, we explain how Bayes factors can be used to compare the relative plausibility of competing hypotheses. We present baymedr, an R package and web application that provide user-friendly tools for the computation of Bayes factors for superiority, equivalence, and non-inferiority designs. Instructions on how to use baymedr are provided, and an example illustrates how existing results can be reanalyzed with baymedr. CONCLUSIONS: Our baymedr R package and web application enable researchers to conduct Bayesian superiority, equivalence, and non-inferiority tests. baymedr is characterized by a user-friendly implementation, making it convenient for researchers who are not statistical experts. Using baymedr, it is possible to calculate Bayes factors based on either raw data or summary statistics.
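
To make this concrete, below is a minimal Python sketch of the kind of default Bayes factor that baymedr reports for a superiority-style two-sample comparison: the Jeffreys-Zellner-Siow (JZS) Bayes factor with a Cauchy prior on the standardized effect size, computed from a t statistic and the group sizes. This is not baymedr's own code (baymedr is an R package, and its interface and default priors may differ); it is an illustration of the underlying computation.

```python
import numpy as np
from scipy import integrate

def jzs_bf10(t, n1, n2, r=np.sqrt(2) / 2):
    """BF10 for a two-sample t test with a Cauchy(0, r) prior on the
    standardized effect size (Rouder et al., 2009)."""
    v = n1 + n2 - 2              # degrees of freedom
    n_eff = n1 * n2 / (n1 + n2)  # effective sample size
    def integrand(g):
        # Cauchy prior expressed as a normal-inverse-gamma mixture over g
        prior_g = r / np.sqrt(2 * np.pi) * g ** -1.5 * np.exp(-r ** 2 / (2 * g))
        like = (1 + n_eff * g) ** -0.5 * \
               (1 + t ** 2 / ((1 + n_eff * g) * v)) ** (-(v + 1) / 2)
        return like * prior_g
    m1, _ = integrate.quad(integrand, 0, np.inf)   # marginal likelihood under H1
    m0 = (1 + t ** 2 / v) ** (-(v + 1) / 2)        # marginal likelihood under H0
    return m1 / m0

print(jzs_bf10(t=2.3, n1=60, n2=60))  # BF10 > 1 favors a group difference
```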


Subject(s)
Research Design , Humans , Bayes Theorem
2.
Psychol Med ; 51(16): 2752-2761, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34620261

ABSTRACT

Approval and prescription of psychotropic drugs should be informed by the strength of evidence for efficacy. Using a Bayesian framework, we examined (1) whether psychotropic drugs are supported by substantial evidence (at the time of approval by the Food and Drug Administration), and (2) whether there are systematic differences across drug groups. Data from short-term, placebo-controlled phase II/III clinical trials for 15 antipsychotics, 16 antidepressants for depression, nine antidepressants for anxiety, and 20 drugs for attention deficit hyperactivity disorder (ADHD) were extracted from FDA reviews. Bayesian model-averaged meta-analysis was performed and the strength of evidence was quantified with the model-averaged Bayes factor (BF_BMA). Strength of evidence and trialling varied between drugs. Median evidential strength was extreme for ADHD medication (BF_BMA = 1820.4), moderate for antipsychotics (BF_BMA = 365.4), and considerably lower and more frequently classified as weak or moderate for antidepressants for depression (BF_BMA = 94.2) and anxiety (BF_BMA = 49.8). Varying median effect sizes (ES_schizophrenia = 0.45, ES_depression = 0.30, ES_anxiety = 0.37, ES_ADHD = 0.72), sample sizes (N_schizophrenia = 324, N_depression = 218, N_anxiety = 254, N_ADHD = 189.5), and numbers of trials (k_schizophrenia = 3, k_depression = 5.5, k_anxiety = 3, k_ADHD = 2) might account for these differences. Although most drugs were supported by strong evidence at the time of approval, some only had moderate or ambiguous evidence. These results show the need for more systematic quantification and classification of statistical evidence for psychotropic drugs. Evidential strength should be communicated transparently and clearly to clinical decision makers.
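
BF_BMA itself comes from Bayesian model-averaged meta-analysis, which averages over fixed- and random-effects models; the sketch below illustrates the simpler fixed-effect ingredient of such an analysis. The function name and the prior scale tau are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy import stats

def fixed_effect_meta_bf10(y, se, tau=0.5):
    """BF10 for H1: common effect mu ~ N(0, tau^2) vs H0: mu = 0, given
    per-trial effect estimates y with standard errors se (normal approximation)."""
    y, se = np.asarray(y, float), np.asarray(se, float)
    k = len(y)
    cov0 = np.diag(se ** 2)                   # H0: sampling noise only
    cov1 = cov0 + tau ** 2 * np.ones((k, k))  # H1: a shared effect adds tau^2
    log_m0 = stats.multivariate_normal.logpdf(y, mean=np.zeros(k), cov=cov0)
    log_m1 = stats.multivariate_normal.logpdf(y, mean=np.zeros(k), cov=cov1)
    return np.exp(log_m1 - log_m0)

# three hypothetical trials with consistent positive effects
print(fixed_effect_meta_bf10(y=[0.40, 0.25, 0.35], se=[0.15, 0.12, 0.20]))
```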


Subject(s)
Antipsychotic Agents , Attention Deficit Disorder with Hyperactivity , Humans , Antipsychotic Agents/therapeutic use , Bayes Theorem , Psychotropic Drugs/therapeutic use , Antidepressive Agents/therapeutic use , Attention Deficit Disorder with Hyperactivity/drug therapy
3.
Proc Natl Acad Sci U S A ; 115(11): 2607-2612, 2018 Mar 13.
Article in English | MEDLINE | ID: mdl-29531092

ABSTRACT

We describe and demonstrate an empirical strategy useful for discovering and replicating empirical effects in psychological science. The method involves the design of a metastudy, in which many independent experimental variables (potential moderators of an empirical effect) are indiscriminately randomized. Radical randomization yields rich datasets that can be used to test the robustness of an empirical claim to some of the vagaries and idiosyncrasies of experimental protocols and enhances the generalizability of these claims. The strategy is made feasible by advances in hierarchical Bayesian modeling that allow for the pooling of information across unlike experiments and designs, and it is proposed here as a gold standard for both replication and exploratory research. The practical feasibility of the strategy is demonstrated with a replication of a study on subliminal priming.
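
The pooling step can be loosely illustrated without the paper's full hierarchical Bayesian machinery. The sketch below uses the classical DerSimonian-Laird random-effects estimator as a stand-in (an illustrative substitute, not the authors' model): it pools per-cell effect estimates from a radically randomized metastudy while explicitly estimating how much the effect varies across protocol variants.

```python
import numpy as np

def random_effects_pool(y, se):
    """Pool effect estimates y (with standard errors se) across heterogeneous
    design cells; returns the pooled effect and the between-cell variance tau^2
    (DerSimonian-Laird estimator)."""
    y, v = np.asarray(y, float), np.asarray(se, float) ** 2
    w = 1 / v
    mu_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fixed) ** 2)      # heterogeneity statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-cell variance
    w_re = 1 / (v + tau2)                    # down-weight noisy cells
    return np.sum(w_re * y) / np.sum(w_re), tau2

# hypothetical effect estimates from five randomized protocol variants
print(random_effects_pool(y=[0.30, 0.15, 0.45, 0.05, 0.25],
                          se=[0.10, 0.12, 0.15, 0.11, 0.09]))
```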


Subject(s)
Biomedical Research/standards , Research Design/standards , Bayes Theorem , Data Interpretation, Statistical , Humans , Random Allocation
4.
BMC Med Res Methodol ; 19(1): 218, 2019 Nov 27.
Article in English | MEDLINE | ID: mdl-31775644

ABSTRACT

BACKGROUND: Until recently, a typical rule for the endorsement of new medications by the Food and Drug Administration has been the existence of at least two statistically significant clinical trials favoring the new medication. This rule has consequences for the true positive rate (endorsement of an effective treatment) and the false positive rate (endorsement of an ineffective treatment). METHODS: In this paper, we compare true positive and false positive rates for different evaluation criteria through simulations that rely on (1) conventional p-values; (2) confidence intervals based on meta-analyses assuming fixed or random effects; and (3) Bayes factors. We varied threshold levels for statistical evidence, thresholds for what constitutes a clinically meaningful treatment effect, and the number of trials conducted. RESULTS: Our results show that Bayes factors, meta-analytic confidence intervals, and p-values often have similar performance. Bayes factors may perform better when the number of trials conducted is high and when trials have small sample sizes and clinically meaningful effects are not small, particularly in fields where the number of non-zero effects is relatively large. CONCLUSIONS: Thinking about realistic effect sizes in conjunction with desirable levels of statistical evidence, as well as quantifying statistical evidence with Bayes factors, may help improve decision-making in some circumstances.
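
The two-trials rule is straightforward to study by simulation. The sketch below (illustrative, not the authors' simulation code) estimates the endorsement rate under the rule that every conducted trial must be one-sided significant at level alpha; with two trials and a true null effect, the false positive rate is roughly alpha squared.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def endorse_rate(delta, n=100, n_trials=2, alpha=0.05, reps=10_000):
    """Probability that all n_trials trials are one-sided significant at alpha,
    for a true standardized effect delta (delta=0 gives the false positive rate)."""
    hits = 0
    for _ in range(reps):
        ok = True
        for _ in range(n_trials):
            x = rng.normal(delta, 1, n)   # treatment arm
            y = rng.normal(0, 1, n)       # control arm
            _, p = stats.ttest_ind(x, y, alternative='greater')
            if p >= alpha:
                ok = False
                break
        hits += ok
    return hits / reps

print(endorse_rate(0.0))  # false positive rate, close to 0.05**2 = 0.0025
print(endorse_rate(0.3))  # true positive rate for a moderate effect
```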


Subject(s)
Bayes Theorem , Clinical Trials as Topic , Data Interpretation, Statistical , Drug Approval , False Negative Reactions , False Positive Reactions , Humans , Predictive Value of Tests , Sample Size
5.
BMC Med Res Methodol ; 19(1): 71, 2019 Mar 29.
Article in English | MEDLINE | ID: mdl-30925900

ABSTRACT

BACKGROUND: In clinical trials, study designs may focus on assessment of superiority, equivalence, or non-inferiority of a new medicine or treatment as compared to a control. Typically, evidence in each of these paradigms is quantified with a variant of the null hypothesis significance test. A null hypothesis is assumed (a null effect for superiority; inferiority by a specific amount for non-inferiority; and inferiority or superiority by a specific amount for equivalence), after which the probability of obtaining data at least as extreme as those observed under the null hypothesis is quantified by a p-value. Although ubiquitous in clinical testing, the null hypothesis significance test can lead to a number of difficulties in the interpretation of statistical evidence. METHODS: We advocate quantifying evidence instead by means of Bayes factors and highlight how these can be calculated for different types of research design. RESULTS: We illustrate Bayes factors in practice with reanalyses of data from existing published studies. CONCLUSIONS: Bayes factors for superiority, non-inferiority, and equivalence designs allow for explicit quantification of evidence in favor of the null hypothesis. They also allow for interim testing without the need to employ explicit corrections for multiple testing.
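
For the equivalence case, the interval-null Bayes factor has a convenient closed form under a normal approximation: with a shared encompassing prior, the Bayes factor for H0: |delta| <= margin versus H1: |delta| > margin equals the posterior odds of the interval divided by its prior odds. The sketch below illustrates this identity; the function name and prior scale are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np
from scipy import stats

def equivalence_bf(d, se, margin, prior_sd=1.0):
    """BF for H0: |delta| <= margin vs H1: |delta| > margin, given an effect
    estimate d ~ N(delta, se^2) and encompassing prior delta ~ N(0, prior_sd^2)."""
    post_var = 1 / (1 / prior_sd ** 2 + 1 / se ** 2)
    post = stats.norm(post_var * d / se ** 2, np.sqrt(post_var))
    prior = stats.norm(0, prior_sd)
    p_post = post.cdf(margin) - post.cdf(-margin)     # posterior mass in interval
    p_prior = prior.cdf(margin) - prior.cdf(-margin)  # prior mass in interval
    return (p_post / (1 - p_post)) / (p_prior / (1 - p_prior))

# a small observed difference with a tight standard error favors equivalence
print(equivalence_bf(d=0.02, se=0.08, margin=0.2))
```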


Subject(s)
Algorithms , Bayes Theorem , Evidence-Based Medicine/statistics & numerical data , Outcome Assessment, Health Care/statistics & numerical data , Research Design , Biometry/methods , Evidence-Based Medicine/methods , Humans , Outcome Assessment, Health Care/methods , Therapeutic Equivalency
6.
Aust N Z J Psychiatry ; 52(5): 435-445, 2018 May.
Article in English | MEDLINE | ID: mdl-29103308

ABSTRACT

OBJECTIVE: Parenthood is central to the personal and social identity of many people. For individuals with psychotic disorders, parenthood is often associated with formidable challenges. We aimed to identify predictors of adequate parenting among parents with psychotic disorders. METHODS: Data pertaining to 234 parents with psychotic disorders living with dependent children were extracted from a population-based prevalence study, the 2010 second Australian national survey of psychosis, and analysed using confirmatory factor analysis. Parenting outcome was defined as quality of care of children, based on participant report and interviewer enquiry/exploration, and included level of participation, interest and competence in childcare during the last 12 months. RESULTS: Five hypothesis-driven latent variables were constructed and labelled psychosocial support, illness severity, substance abuse/dependence, adaptive functioning and parenting role. Importantly, 75% of participants were not identified as having any dysfunction in the quality of care provided to their child(ren). Severity of illness and adaptive functioning were reliably associated with quality of childcare. Psychosocial support, substance abuse/dependence and parenting role had an indirect relationship to the outcome variable via their association with severity of illness and/or adaptive functioning. CONCLUSION: The majority of parents in the current sample provided adequate parenting. However, greater symptom severity and poorer adaptive functioning ultimately leave parents with significant difficulties and in need of assistance to manage their parenting obligations. As symptoms and functioning can change episodically for people with psychotic illness, provision of targeted and flexible support that can deliver temporary assistance during times of need is necessary. This would maximise the quality of care provided to vulnerable children, with potential long-term benefits.


Subject(s)
Adaptation, Psychological , Child Rearing , Child of Impaired Parents , Parenting , Parents , Psychotic Disorders , Severity of Illness Index , Adult , Australia , Child , Factor Analysis, Statistical , Female , Health Surveys , Humans , Male , Middle Aged , Social Support , Young Adult
7.
Cogn Psychol ; 78: 78-98, 2015 May.
Article in English | MEDLINE | ID: mdl-25868112

ABSTRACT

In a world of limited resources, scarcity and rivalry are central challenges for decision makers: animals foraging for food, corporations seeking maximal profits, and athletes training to win all strive against others competing for the same goals. In this article, we establish the role of competitive pressures in facilitating optimal decision making in simple sequential binary choice tasks. In two experiments, competition was introduced with a computerized opponent whose choice behavior reinforced one of two strategies: if the opponent probabilistically imitated participant choices, probability matching was optimal; if the opponent was indifferent, probability maximizing was optimal. We observed accurate asymptotic strategy use in both conditions irrespective of the provision of outcome probabilities, suggesting that participants were sensitive to the differences in opponent behavior. An analysis of reinforcement learning models established that computational conceptualizations of opponent behavior are critical to account for the observed divergence in strategy adoption. Our results provide a novel appraisal of probability matching and show how this individually 'irrational' choice phenomenon can be socially adaptive under competition.
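
The opponent manipulation can be reproduced with a small expected-payoff calculation. The payoff scheme below is an illustrative assumption (a correct prediction earns the reward, shared when the opponent is also correct), not necessarily the experiment's exact rules; under it, the optimum against an imitating opponent sits at probability matching, while against an indifferent opponent maximizing is optimal.

```python
import numpy as np

def expected_payoff(s, p=0.7, opponent='imitate'):
    """Per-trial expected payoff when choosing outcome A with probability s.
    The outcome is A with probability p; a correct prediction earns 1,
    shared (0.5 each) when the opponent predicts correctly too.
    'imitate': the opponent copies your choices, so it also picks A w.p. s;
    'indifferent': the opponent picks A w.p. 0.5."""
    q = s if opponent == 'imitate' else 0.5
    win_a = s * p * ((1 - q) + 0.5 * q)              # you pick A, outcome is A
    win_b = (1 - s) * (1 - p) * (q + 0.5 * (1 - q))  # you pick B, outcome is B
    return win_a + win_b

grid = np.linspace(0, 1, 101)
for opp in ('imitate', 'indifferent'):
    best = grid[np.argmax([expected_payoff(s, opponent=opp) for s in grid])]
    print(opp, '-> optimal P(choose A) =', round(best, 2))
# imitate -> 0.7 (probability matching); indifferent -> 1.0 (maximizing)
```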


Subject(s)
Choice Behavior , Competitive Behavior , Risk , Uncertainty , Adolescent , Decision Making , Female , Humans , Male , Probability , Reinforcement, Psychology , Young Adult
8.
Behav Brain Sci ; 37(1): 32-3, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24461083

ABSTRACT

Newell & Shanks (N&S) show that there is no convincing evidence that processes assumed to be unconscious and superior are indeed unconscious. We take their argument one step further by showing that there is also no convincing evidence that these processes are superior. We review alternative paradigms that may provide more convincing tests of the superiority of (presumed) unconscious processes.


Subject(s)
Decision Making , Unconscious, Psychology , Humans
9.
Sci Rep ; 14(1): 12120, 2024 May 27.
Article in English | MEDLINE | ID: mdl-38802451

ABSTRACT

Much of the scientific literature in the social and behavioural sciences bases its conclusions on one or more hypothesis tests. As such, it is important to obtain more knowledge about how researchers in the social and behavioural sciences interpret quantities that result from hypothesis tests, such as p-values and Bayes factors. In the present study, we explored the relationship between obtained statistical evidence and the degree of belief or confidence that there is a positive effect in the population of interest. In particular, we were interested in the existence of a so-called cliff effect: a qualitative drop in the degree of belief that there is a positive effect around certain threshold values of statistical evidence (e.g., at p = 0.05). We compared this relationship for p-values to the relationship for corresponding degrees of evidence quantified through Bayes factors, and we examined whether this relationship was affected by two different modes of presentation (in one mode the functional form of the relationship across values was implicit to the participant, whereas in the other mode it was explicit). We found evidence for a higher proportion of cliff effects in p-value conditions than in BF conditions (N = 139), but we did not obtain a clear indication of whether presentation mode had an effect on the proportion of cliff effects. PROTOCOL REGISTRATION: The stage 1 protocol for this Registered Report was accepted in principle on 2 June 2023. The protocol, as accepted by the journal, can be found at: https://doi.org/10.17605/OSF.IO/5CW6P .

10.
Psychol Methods ; 29(3): 603-605, 2024 Jun.
Article in English | MEDLINE | ID: mdl-39311828

ABSTRACT

Linde et al. (2021) compared the "two one-sided tests", the "highest density interval-region of practical equivalence", and the "interval Bayes factor" approaches to establishing equivalence in terms of power and Type I error rate using typical decision thresholds. They found that the interval Bayes factor approach exhibited a higher power but also a higher Type I error rate than the other approaches. In response, Campbell and Gustafson (2022) showed that the performances of the three approaches can approximate one another when they are calibrated to have the same Type I error rate. In this article, we argue that these results have little bearing on how these approaches are used in practice; a concrete example is used to highlight this important point.
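
The calibration Campbell and Gustafson describe can be sketched by simulation: choose the Bayes factor decision threshold so that, when the true effect lies exactly on the non-equivalence boundary, equivalence is wrongly declared at the target alpha rate. The code below is an illustrative normal-approximation sketch, not either paper's implementation.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def interval_bf(d, se, margin, prior_sd=1.0):
    """Equivalence BF (posterior odds / prior odds of |delta| <= margin)."""
    pv = 1 / (1 / prior_sd ** 2 + 1 / se ** 2)
    post = stats.norm(pv * d / se ** 2, np.sqrt(pv))
    prior = stats.norm(0, prior_sd)
    po = post.cdf(margin) - post.cdf(-margin)
    pr = prior.cdf(margin) - prior.cdf(-margin)
    return (po / (1 - po)) / (pr / (1 - pr))

def calibrated_threshold(n, margin=0.2, alpha=0.05, reps=20_000):
    """BF cutoff whose Type I error rate is alpha when the true standardized
    effect sits on the non-equivalence boundary (|delta| = margin)."""
    se = np.sqrt(2 / n)               # SE of a standardized mean difference
    d = rng.normal(margin, se, reps)  # estimates under the boundary truth
    bfs = interval_bf(d, se, margin)
    return np.quantile(bfs, 1 - alpha)  # declare equivalence when BF > cutoff

print(calibrated_threshold(n=50))   # the calibrated cutoff depends strongly on n
print(calibrated_threshold(n=250))
```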


Subject(s)
Bayes Theorem , Humans , Psychology/methods , Psychology/standards , Data Interpretation, Statistical
11.
Front Med (Lausanne) ; 11: 1409259, 2024.
Article in English | MEDLINE | ID: mdl-39086943

ABSTRACT

Medicine regulators need to judge whether a drug's favorable effects outweigh its unfavorable effects based on a dossier submitted by an applicant, such as a pharmaceutical company. Because scientific knowledge is inherently uncertain, regulators also need to judge the credibility of these effects by identifying and evaluating uncertainties. We performed an ethnographic study of assessment procedures at the Dutch Medicines Evaluation Board (MEB) and describe how regulators evaluate the credibility of an applicant's claims about the benefits and risks of a drug in practice. Our analysis shows that regulators use an investigative approach, which illustrates the effort required to identify uncertainties. Moreover, we show that regulators' expectations about the presentation, the design, and the results of studies can shape how they perceive a medicine's dossier. We highlight the importance of regulatory experience and expertise in the identification and evaluation of uncertainties. In light of our observations, we provide two recommendations to reduce avoidable uncertainty: less reliance on evidence generated by the applicant; and better communication about, and enforcement of, regulatory frameworks toward drug developers.

12.
J Clin Epidemiol ; 174: 111479, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39047916

ABSTRACT

OBJECTIVES: To quantify the strength of statistical evidence of randomized controlled trials (RCTs) for novel cancer drugs approved by the Food and Drug Administration in the last 2 decades. STUDY DESIGN AND SETTING: We used data on overall survival (OS), progression-free survival, and tumor response for novel cancer drugs approved for the first time by the Food and Drug Administration between January 2000 and December 2020. We assessed strength of statistical evidence by calculating Bayes factors (BFs) for all available endpoints, and we pooled evidence using Bayesian fixed-effect meta-analysis for indications approved based on 2 RCTs. Strength of statistical evidence was compared among endpoints, approval pathways, lines of treatment, and types of cancer. RESULTS: We analysed the available data from 82 RCTs corresponding to 68 indications supported by a single RCT and 7 indications supported by 2 RCTs. Median strength of statistical evidence was ambiguous for OS (BF = 1.9; interquartile range [IQR] 0.5-14.5), and strong for progression-free survival (BF = 24,767.8; IQR 109.0-7.3 × 10^6) and tumor response (BF = 113.9; IQR 3.0-547,100). Overall, 44 indications (58.7%) were approved without clear statistical evidence for OS improvements and 7 indications (9.3%) were approved without statistical evidence for improvements on any endpoint. Strength of statistical evidence was lower for accelerated approval compared to nonaccelerated approval across all 3 endpoints. No meaningful differences were observed for line of treatment and cancer type. This analysis is limited to statistical evidence. We did not consider nonstatistical factors (e.g., risk of bias, quality of the evidence). CONCLUSION: BFs offer novel insights into the strength of statistical evidence underlying cancer drug approvals. Most novel cancer drugs lack strong statistical evidence that they improve OS, and a few lack statistical evidence for efficacy altogether. These cases require a transparent and clear explanation. When evidence is ambiguous, additional postmarketing trials could reduce uncertainty.
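
For survival endpoints, a Bayes factor can be approximated directly from a reported hazard ratio (HR) and its 95% confidence interval, via a normal likelihood on the log hazard ratio and the Savage-Dickey density ratio. The sketch below is a simplified analogue of this kind of calculation; the prior scale and the example numbers are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy import stats

def bf10_from_hr(hr, ci_low, ci_high, prior_sd=0.5):
    """Approximate BF10 for theta = log(HR), with likelihood log(hr) ~ N(theta, se^2),
    H0: theta = 0, and H1: theta ~ N(0, prior_sd^2).
    Savage-Dickey: BF01 = posterior density at 0 / prior density at 0."""
    log_hr = np.log(hr)
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)  # back out the SE from the CI
    post_var = 1 / (1 / prior_sd ** 2 + 1 / se ** 2)
    post_mean = post_var * log_hr / se ** 2
    bf01 = stats.norm.pdf(0, post_mean, np.sqrt(post_var)) / stats.norm.pdf(0, 0, prior_sd)
    return 1 / bf01

# hypothetical OS result: HR 0.72 (95% CI 0.58 to 0.89)
print(bf10_from_hr(0.72, 0.58, 0.89))
```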

13.
R Soc Open Sci ; 11(7): 240125, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39050728

ABSTRACT

Many-analysts studies explore how well an empirical claim withstands plausible alternative analyses of the same dataset by multiple, independent analysis teams. Conclusions from these studies typically rely on a single outcome metric (e.g. effect size) provided by each analysis team. Although informative about the range of plausible effects in a dataset, a single effect size from each team does not provide a complete, nuanced understanding of how analysis choices are related to the outcome. We used the Delphi consensus technique with input from 37 experts to develop an 18-item subjective evidence evaluation survey (SEES) to evaluate how each analysis team views the methodological appropriateness of the research design and the strength of evidence for the hypothesis. We illustrate the usefulness of the SEES in providing richer evidence assessment with pilot data from a previous many-analysts study.

14.
Mem Cognit ; 41(3): 329-38, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23135749

ABSTRACT

Probability matching in sequential decision making is a striking violation of rational choice that has been observed in hundreds of experiments. Recent studies have demonstrated that matching persists even in described tasks in which all the information required for identifying a superior alternative strategy (maximizing) is present before the first choice is made. These studies have also indicated that maximizing increases when (1) the asymmetry in the availability of matching and maximizing strategies is reduced and (2) normatively irrelevant outcome feedback is provided. In the two experiments reported here, we examined the joint influences of these factors, revealing that strategy availability and outcome feedback operate on different time courses. Both behavioral and modeling results showed that while availability of the maximizing strategy increases the choice of maximizing early during the task, feedback appears to act more slowly to erode misconceptions about the task and to reinforce optimal responding. The results illuminate the interplay between "top-down" identification of choice strategies and "bottom-up" discovery of those strategies via feedback.


Subject(s)
Decision Making , Feedback, Psychological , Probability , Problem Solving , Adult , Choice Behavior , Female , Humans , Male , Models, Psychological , Random Allocation , Young Adult
15.
Behav Brain Sci ; 36(3): 300-2, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23673047

ABSTRACT

We focus on two issues: (1) an unusual, counterintuitive prediction that quantum probability (QP) theory appears to make regarding multiple sequential judgments, and (2) the extent to which QP is an appropriate and comprehensive benchmark for assessing judgment. These issues highlight how QP theory can fall prey to the same problems of arbitrariness that Pothos & Busemeyer (P&B) discuss as plaguing other models.


Subject(s)
Cognition , Models, Psychological , Probability Theory , Quantum Theory , Humans
16.
PLoS One ; 18(10): e0292279, 2023.
Article in English | MEDLINE | ID: mdl-37788282

ABSTRACT

BACKGROUND: Publishing study results in scientific journals has been the standard way of disseminating science. However, getting results published may depend on their statistical significance. The consequence of this is that the representation of scientific knowledge might be biased. This type of bias has been called publication bias. The main objective of the present study is to get more insight into publication bias by examining it at the author, reviewer, and editor level. Additionally, we make a direct comparison between publication bias induced by authors, by reviewers, and by editors. We approached our participants by e-mail, asking them to fill out an online survey. RESULTS: Our findings suggest that statistically significant findings have a higher likelihood of being published than statistically non-significant findings, because (1) authors (n = 65) are more likely to write up and submit articles with significant results compared to articles with non-significant results (median effect size 1.10, BF_10 = 1.09 × 10^7); (2) reviewers (n = 60) give more favourable reviews to articles with significant results compared to articles with non-significant results (median effect size 0.58, BF_10 = 4.73 × 10^2); and (3) editors (n = 171) are more likely to accept for publication articles with significant results compared to articles with non-significant results (median effect size 0.94, BF_10 = 7.63 × 10^7). Evidence on differences in the relative contributions to publication bias by authors, reviewers, and editors is ambiguous (editors vs reviewers: BF_10 = 0.31, reviewers vs authors: BF_10 = 3.11, and editors vs authors: BF_10 = 0.42). DISCUSSION: One of the main limitations was that rather than investigating publication bias directly, we studied the potential for publication bias. Another limitation was the low response rate to the survey.


Subject(s)
Authorship , Writing , Humans , Publication Bias , Surveys and Questionnaires , Electronic Mail
17.
Psychol Methods ; 28(3): 740-755, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34735173

ABSTRACT

Some important research questions require the ability to find evidence for two conditions being practically equivalent. This is impossible to accomplish within the traditional frequentist null hypothesis significance testing framework; hence, other methodologies must be utilized. We explain and illustrate three approaches for finding evidence for equivalence: The frequentist two one-sided tests procedure, the Bayesian highest density interval region of practical equivalence procedure, and the Bayes factor interval null procedure. We compare the classification performances of these three approaches for various plausible scenarios. The results indicate that the Bayes factor interval null approach compares favorably to the other two approaches in terms of statistical power. Critically, compared with the Bayes factor interval null procedure, the two one-sided tests and the highest density interval region of practical equivalence procedures have limited discrimination capabilities when the sample size is relatively small: Specifically, in order to be practically useful, these two methods generally require over 250 cases within each condition when rather large equivalence margins of approximately .2 or .3 are used; for smaller equivalence margins even more cases are required. Because of these results, we recommend that researchers rely more on the Bayes factor interval null approach for quantifying evidence for equivalence, especially for studies that are constrained on sample size.
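
The sample-size message for the frequentist arm is easy to verify: the power of the two one-sided tests (TOST) procedure to declare equivalence can be simulated directly. The sketch below (illustrative, not the authors' simulation code) shows power rising only at large per-group n for an equivalence margin of 0.2, in line with the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def tost_power(n, true_delta=0.0, margin=0.2, alpha=0.05, reps=10_000):
    """Probability that TOST declares equivalence, for a standardized true
    effect true_delta and an equivalence margin of +/- margin."""
    wins = 0
    for _ in range(reps):
        x = rng.normal(true_delta, 1, n)
        y = rng.normal(0, 1, n)
        sp2 = ((n - 1) * x.var(ddof=1) + (n - 1) * y.var(ddof=1)) / (2 * n - 2)
        se = np.sqrt(sp2 * 2 / n)
        df = 2 * n - 2
        d = x.mean() - y.mean()
        p_low = 1 - stats.t.cdf((d + margin) / se, df)  # H0: delta <= -margin
        p_high = stats.t.cdf((d - margin) / se, df)     # H0: delta >= +margin
        wins += max(p_low, p_high) < alpha              # both one-sided tests reject
    return wins / reps

for n in (50, 100, 250, 500):
    print(n, tost_power(n))  # power is essentially zero until n grows large
```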


Subject(s)
Research Design , Humans , Bayes Theorem , Sample Size
18.
R Soc Open Sci ; 10(2): 210586, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36756069

ABSTRACT

Increased execution of replication studies contributes to the effort to restore the credibility of empirical research. However, a second generation of problems arises: the number of potential replication targets is seriously mismatched with the available resources. Given limited resources, replication target selection should be well-justified, systematic, and transparently communicated. At present, guidance on what to consider when selecting a replication target is limited to theoretical discussion, self-reported justifications, and a few formalized suggestions. In this Registered Report, we proposed a study involving the scientific community to create a list of considerations for consultation when selecting a replication target in psychology. We employed a modified Delphi approach. First, we constructed a preliminary list of considerations. Second, we surveyed psychologists who had previously selected a replication target with regard to their considerations. Third, we incorporated the results into the preliminary list of considerations and sent the updated list to a group of individuals knowledgeable about concerns regarding replication target selection. Over the course of several rounds, we established consensus regarding what to consider when selecting a replication target. The resulting checklist can be used for transparently communicating the rationale for selecting studies for replication.

19.
PLoS One ; 18(1): e0274429, 2023.
Article in English | MEDLINE | ID: mdl-36701303

ABSTRACT

As replications of individual studies are resource intensive, techniques for predicting replicability are required. We introduce the repliCATS (Collaborative Assessments for Trustworthy Science) process, a new method for eliciting expert predictions about the replicability of research. This process is a structured expert elicitation approach based on a modified Delphi technique applied to the evaluation of research claims in social and behavioural sciences. The utility of such processes lies in their capacity to test scientific claims without the costs of full replication. Experimental data support the validity of this process, with a validation study producing a classification accuracy of 84% and an Area Under the Curve of 0.94, meeting or exceeding the accuracy of other techniques used to predict replicability. The repliCATS process provides other benefits. It is highly scalable: it can be deployed both for rapid assessment of small numbers of claims and, through an online elicitation platform, for assessment of high volumes of claims over an extended period, having been used to assess 3000 research claims over an 18-month period. It is available to be implemented in a range of ways, and we describe one such implementation. An important advantage of the repliCATS process is that it collects qualitative data that has the potential to provide insight into the limits of generalizability of scientific claims. The primary limitation of the repliCATS process is its reliance on human-derived predictions, with consequent costs in terms of participant fatigue, although careful design can minimise these costs. The repliCATS process has potential applications in alternative peer review and in the allocation of effort for replication studies.


Subject(s)
Behavioral Sciences , Data Accuracy , Humans , Reproducibility of Results , Costs and Cost Analysis , Peer Review
20.
Psychol Methods ; 28(3): 558-579, 2023 Jun.
Article in English | MEDLINE | ID: mdl-35298215

ABSTRACT

The last 25 years have shown a steady increase in attention to the Bayes factor as a tool for hypothesis evaluation and model selection. The present review highlights the potential of the Bayes factor in psychological research. We discuss six types of applications: Bayesian evaluation of point null, interval, and informative hypotheses, Bayesian evidence synthesis, Bayesian variable selection and model averaging, and Bayesian evaluation of cognitive models. We elaborate on what each application entails, give illustrative examples, and provide an overview of key references and software, with links to other applications. The article concludes with a discussion of the opportunities and pitfalls of Bayes factor applications and a sketch of corresponding future research lines.
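
One listed application, Bayesian evaluation of informative (e.g., order-constrained) hypotheses, has a particularly compact estimator: under an encompassing prior, the Bayes factor for an order constraint is the proportion of posterior draws satisfying the constraint divided by the corresponding prior proportion. The sketch below illustrates this with a toy conjugate-normal example; all numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def order_constraint_bf(post_draws, prior_draws):
    """Encompassing-prior BF for H: mu1 > mu2 > mu3, estimated as the
    posterior proportion of draws satisfying the constraint over the
    prior proportion (Klugkist & Hoijtink style estimator)."""
    def prop(d):
        return np.mean((d[:, 0] > d[:, 1]) & (d[:, 1] > d[:, 2]))
    return prop(post_draws) / prop(prior_draws)

# toy data: three group means, conjugate normal model with known sigma = 1
means, n, prior_sd = np.array([0.6, 0.3, 0.1]), 50, 1.0
post_var = 1 / (1 / prior_sd ** 2 + n)
post_mean = post_var * n * means
post = rng.normal(post_mean, np.sqrt(post_var), size=(100_000, 3))
prior = rng.normal(0.0, prior_sd, size=(100_000, 3))
print(order_constraint_bf(post, prior))  # BF > 1 favors the hypothesized ordering
```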


Subject(s)
Bayes Theorem , Behavioral Research , Psychology , Humans , Behavioral Research/methods , Psychology/methods , Software , Research Design