ABSTRACT
Mediation analysis investigates the covariation of variables in a population of interest. In contrast, the resolution level of psychological theory, at its core, aims to reach all the way to the behaviors, mental processes, and relationships of individual persons. It would be a logical error to presume that the population-level pattern of behavior revealed by a mediation analysis directly describes all, or even many, individual members of the population. Instead, to reconcile collective covariation with theoretical claims about individual behavior, one needs to look beyond abstract aggregate trends. Taking data quality as a given and a mediation model's estimated parameters as accurate population-level depictions, what can one say about the number of people properly described by the linkages in that mediation analysis? How many individuals are exceptions to that pattern or pathway? How can we bridge the gap between psychological theory and analytic method? We provide a simple framework for understanding how many people actually align with the pattern of relationships revealed by a population-level mediation. Additionally, for those individuals who are exceptions to that pattern, we tabulate how many people mismatch which features of the mediation pattern. Consistent with the person-oriented research paradigm, understanding the distribution of alignment and mismatches goes beyond the realm of traditional variable-level mediation analysis. Yet, such a tabulation is key to designing potential interventions. It provides the basis for predicting how many people stand to either benefit from, or be disadvantaged by, which type of intervention.
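The abstract's central question, how many individuals actually align with a population-level mediation pattern, can be illustrated with a minimal simulation. This is a hypothetical sketch, not the authors' framework: it assumes person-specific path coefficients for a mediation X -> M -> Y that scatter normally around positive population-level means, and tabulates sign matches.

```python
import random

# Hypothetical illustration: simulate person-specific path coefficients
# for a mediation X -> M -> Y, then tabulate how many individuals match
# the sign pattern of the population-level (average) paths.
random.seed(1)
N = 10_000
mean_a, mean_b = 0.3, 0.3   # positive population-level a and b paths (assumed)
sd = 0.4                    # between-person heterogeneity (assumed)

counts = {"a+/b+": 0, "a+/b-": 0, "a-/b+": 0, "a-/b-": 0}
for _ in range(N):
    a_i = random.gauss(mean_a, sd)  # person i's X -> M path
    b_i = random.gauss(mean_b, sd)  # person i's M -> Y path
    key = ("a+" if a_i > 0 else "a-") + "/" + ("b+" if b_i > 0 else "b-")
    counts[key] += 1

aligned = counts["a+/b+"] / N  # fraction matching the population-level pattern
print(counts, round(aligned, 2))
```

With these assumed values, only roughly 60% of simulated individuals match both positive population-level paths; the remaining cells of the tabulation show which feature of the mediation pattern each exception mismatches.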
Subjects
Mediation Analysis, Humans, Psychological Theory
ABSTRACT
Psychology and neighboring disciplines are currently consumed with a replication crisis. Recent work has shown that replication can have the unintended consequence of perpetuating unwarranted conclusions when repeating an incorrect line of scientific reasoning from one study to another. This tutorial shows how decision researchers can derive logically coherent predictions from their theory by keeping track of the heterogeneity of preference the theory permits, rather than dismissing such heterogeneity as a nuisance. As an illustration, we reanalyze the data of Barron and Ursino (2013). By keeping track of the heterogeneity of preferences permitted by Cumulative Prospect Theory, we show how the analysis and conclusions of Barron and Ursino (2013) change. This tutorial is intended as a blueprint for graduate student projects that dig deeply into the merits of prior studies and/or that supplement replication studies with a quality check.
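The idea of "keeping track of the heterogeneity of preference the theory permits" can be sketched as follows: enumerate which pairwise preference patterns Cumulative Prospect Theory allows across several lottery pairs over a grid of parameter values, rather than assuming one representative parameter setting. The lotteries, the grid, and the functional forms (Tversky-Kahneman 1992) are illustrative assumptions, not taken from Barron and Ursino (2013).

```python
# Sketch: enumerate which pairwise preference patterns Cumulative
# Prospect Theory permits across two hypothetical lottery pairs,
# over a grid of parameter values.

def w(p, gamma):
    # Tversky-Kahneman (1992) probability weighting function (assumed form)
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

def cpt_value(lottery, alpha, gamma):
    # lottery: list of (outcome, probability), two nonnegative outcomes
    (x1, p1), (x2, _) = sorted(lottery, reverse=True)  # rank-order: x1 >= x2
    return w(p1, gamma) * x1**alpha + (1 - w(p1, gamma)) * x2**alpha

pairs = [
    ([(3, 1.0), (0, 0.0)], [(4, 0.8), (0, 0.2)]),    # safe vs risky
    ([(3, 0.25), (0, 0.75)], [(4, 0.2), (0, 0.8)]),  # common-ratio versions
]

patterns = set()
for alpha in (0.5, 0.7, 0.9, 1.0):       # curvature of the value function
    for gamma in (0.4, 0.6, 0.8, 1.0):   # curvature of the weighting function
        patterns.add(tuple(
            "A" if cpt_value(A, alpha, gamma) > cpt_value(B, alpha, gamma) else "B"
            for A, B in pairs
        ))
print(sorted(patterns))  # every pattern the theory permits on this grid
```

On this grid the theory permits several distinct patterns, including the common-ratio reversal ("A", "B"); a prediction test that presumes a single pattern would therefore misrepresent the theory.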
Subjects
Statistical Data Interpretation, Psychological Theory, Psychology, Research Design, Humans, Psychology/methods, Psychology/standards, Research Design/standards
ABSTRACT
We investigated whether older adults are more likely than younger adults to violate a foundational property of rational decision making, the axiom of transitive preference. Our experiment consisted of two groups, older (ages 60-75; 21 participants) and younger (ages 18-30; 20 participants) adults. We used Bayesian model selection to investigate whether individuals were better described via (transitive) weak order-based decision strategies or (possibly intransitive) lexicographic semiorder decision strategies. We found weak evidence for the hypothesis that older adults violate transitivity at a higher rate than younger adults. At the same time, a hierarchical Bayesian analysis suggests that, in this study, the distribution of decision strategies across individuals is similar for both older and younger adults.
Assuntos
Envelhecimento Cognitivo/fisiologia , Tomada de Decisões/fisiologia , Adolescente , Adulto , Fatores Etários , Idoso , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Adulto JovemRESUMO
We explore the implication of viewing a psychological theory as the logical conjunction of all its predictions. Even if several predictions derived from a theory are descriptive of behavior in separate studies, the theory as a whole may fail to be descriptive of any single individual. We discuss what proportion of a population satisfies a theory's joint predictions as a function of the true effect sizes and the proportion of variance attributable to individual differences. Unless there are no individual differences, even very well replicated effects may fail to establish that the combination of predictions that have been tested accurately describes even one person. Every additional study that contributes another effect, rather than strengthening support for the theory, may further limit its scope. Using four illustrative examples from cognitive and social psychology, we show how, in particular, small effect sizes dramatically limit the scope of psychological theories unless every small effect coincides with little to no individual differences. In some cases, this 'paradox' can be overcome by casting theories in such a way that they apply to everyone in a target population, without exception. Rather than relegating heterogeneity to a nuisance component of statistical models and data analysis, explicitly keeping track of heterogeneity in hypothetical constructs makes it possible to understand and quantify theoretical scope. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
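The abstract's arithmetic can be made concrete under simplifying assumptions: suppose each effect is a between-person normal shift of standardized size d with between-person SD sigma, so the fraction of people whose individual effect has the predicted direction is the normal CDF of d/sigma, and suppose the effects are independent across people so these fractions multiply. The numbers below are illustrative.

```python
from math import erf, sqrt

def phi(x):
    # standard normal CDF, via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

# Assumed numbers: four independent effects, each of standardized size
# d = 0.2 (a "small" effect), with between-person SD sigma = 1.
d, sigma, k = 0.2, 1.0, 4
per_effect = phi(d / sigma)   # fraction of people showing one effect
joint = per_effect ** k       # fraction satisfying all k predictions
print(round(per_effect, 3), round(joint, 3))
```

Under these assumptions, each effect describes about 58% of people, yet the conjunction of four such well-replicated effects describes only about 11% of the population.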
Subjects
Statistical Data Interpretation, Individuality, Psychological Models, Statistical Models, Psychological Theory, Psychology/standards, Research Design/standards, Humans
ABSTRACT
The most eye-catching feature of Hertwig and Pleskac's (2018) comment is their virtual silence about Regenwetter and Robinson's (2017) core message. Regenwetter and Robinson warn of a logical disconnect between some psychological constructs and certain types of theoretical predictions about human behavior. Scientific "predictions" that do not actually follow from the underlying theory can, in turn, lead to completely uninformative behavioral measures. Regenwetter and Robinson trace this construct-behavior gap to logical reasoning fallacies that seem common in behavioral decision research. They also document how a logically flawed line of scientific reasoning is often immune to discovery by replication. Hence, 'successful' replication can perpetuate unwarranted conclusions and, consequently, obfuscate science. Hertwig and Pleskac's commentary is striking in that it says almost nothing about the construct-behavior gap, it barely touches on logical reasoning fallacies, and it ignores Regenwetter and Robinson's core warning that replication and repetition of unsubstantiated conclusions hinder science. In this reply, we also point out errors and misinterpretations in Hertwig and Pleskac's commentary, and we rebut alleged problems with Regenwetter and Robinson's approach and findings.
Subjects
Decision Making, Problem Solving, Behavioral Research, Humans
ABSTRACT
Statistical analyses of data often add some additional constraints to a theory and leave out others, so as to convert the theory into a testable hypothesis. In the case of binary data, such as yes/no responses, or such as the presence/absence of a symptom or a behavior, theories often actually predict that certain response probabilities change monotonically in a specific direction and/or that certain response probabilities are bounded from above or below in specific ways. A regression analysis is not really true to such a theory in that it may leave out parsimonious constraints and in that extraneous assumptions like linearity or log-linearity, or even the assumption of a functional relationship, are dictated by the method rather than the theory. That mismatch may well bias the results of empirical analysis and jeopardize attempts at meaningful replication of psychological research. This tutorial shows how contemporary order-constrained methods can shed more light on such questions, using far weaker auxiliary assumptions, while also formulating more detailed, nuanced, and concise hypotheses, and allowing for quantitative model selection.
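A trivial sketch of the kind of constraint the abstract describes, with made-up data: a theory that predicts response probabilities are nondecreasing across ordered conditions and bounded below by one half, checked directly on observed proportions rather than through a fitted line.

```python
# Illustrative only: a theory predicting that response probabilities are
# (i) nondecreasing across ordered conditions and (ii) at least 0.5,
# checked directly on observed proportions instead of through a fitted
# linear or log-linear regression.
props = [0.55, 0.61, 0.60, 0.74]  # made-up observed proportions

monotone = all(p1 <= p2 for p1, p2 in zip(props, props[1:]))
bounded = all(p >= 0.5 for p in props)
print(monotone, bounded)  # the monotonicity constraint fails at 0.61 -> 0.60
```

In practice the small dip from 0.61 to 0.60 need not reflect a real violation, since proportions carry sampling variability; quantifying that is exactly what order-constrained statistical inference is for.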
Subjects
Behavioral Research/methods, Biomedical Research/methods, Statistical Data Interpretation, Statistical Models, Psychology/methods, Regression Analysis, Humans
ABSTRACT
In so-called random preference models of probabilistic choice, a decision maker chooses according to an unspecified probability distribution over preference states. The most prominent case arises when preference states are linear orders or weak orders of the choice alternatives. The literature has documented that actually evaluating whether decision makers' observed choices are consistent with such a probabilistic model of choice poses computational difficulties. This severely limits the possible scale of empirical work in behavioral economics and related disciplines. We propose a family of column-generation-based algorithms for performing such tests. We evaluate our algorithms on various sets of instances. We observe substantial improvements in computation time and conclude that we can efficiently test substantially larger data sets than previously possible.
ABSTRACT
After more than 50 years of probabilistic choice modeling in Economics, Marketing, Political Science, Psychology, and related disciplines, theoretical and computational advances give scholars access to a sophisticated array of modeling and inference resources. We review some important, but perhaps often overlooked, properties of major classes of probabilistic choice models. For within-respondent applications, we discuss which models require repeated choices by an individual to be independent and response probabilities to be stationary. We show how some model classes, but not others, are invariant over variable preferences, variable utilities, or variable choice probabilities. These models, but not others, accommodate pooling of responses or averaging of choice proportions within participant when underlying parameters vary across observations. These, but not others, permit pooling/averaging across respondents in the presence of individual differences. We also review the role of independence and stationarity in statistical inference, including for probabilistic choice models that, themselves, do not require those properties.
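A tiny numeric illustration of why pooling across respondents is safe under some model readings but not others; the respondents and choice probabilities are hypothetical.

```python
# Hypothetical: two respondents choose option A over B with
# probabilities 0.9 and 0.1 (say, 90 and 10 choices of A in 100 trials).
p_individual = [0.9, 0.1]
p_pooled = sum(p_individual) / len(p_individual)  # = 0.5

# Under a random preference model, 0.5 is a legitimate population-level
# choice probability: a 50/50 mixture of the two preference states.
# Under a fixed-preference-plus-error model with a small error rate,
# each individual's p must lie near 0 or 1, and the pooled 0.5 is
# consistent with neither respondent.
print(p_pooled)
```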
ABSTRACT
Mathematical psychology has a long tradition of modeling probabilistic choice via distribution-free random utility models and associated random preference models. For such models, the predicted choice probabilities often form a bounded and convex polyhedral set, or polytope. Polyhedral combinatorics have thus played a key role in studying the mathematical structure of these models. However, standard methods for characterizing the polytopes of such models are subject to a combinatorial explosion in complexity as the number of choice alternatives increases. Specifically, this is the case for random preference models based on linear, weak, semi- and interval orders. For these, a complete, linear description of the polytope is currently known only for, at most, 5-8 choice alternatives. We leverage the method of extended formulations to break through those boundaries. For each of the four types of preferences, we build an appropriate network, and show that the associated network flow polytope provides an extended formulation of the polytope of the choice model. This extended formulation has a simple linear description that is more parsimonious than descriptions obtained by standard methods for large numbers of choice alternatives. The result is a computationally less demanding way of testing the probabilistic choice model on data. We sketch how the latter interfaces with recent developments in contemporary statistics.
ABSTRACT
The selective integration model of Tsetsos et al. (2016a) is a biologically motivated computational framework that aims to model intransitive preference and choice. Tsetsos et al. (2016a) concluded that a noisy system can lead to violations of transitivity in otherwise rational agents optimizing a task. We show how their model can be interpreted from a Fechnerian perspective and within a random utility framework. Specifically, we spell out the connection between the selective integration model and two probabilistic models of transitive preference, weak stochastic transitivity and the triangle inequalities, tested by Tsetsos et al. (2016a).
ABSTRACT
Behavioral decision research compares theoretical constructs like preferences to behavior such as observed choices. Three fairly common links from constructs to behavior are (1) to tally, across participants and decision problems, the number of choices consistent with one predicted pattern of pairwise preferences; (2) to compare what most people choose in each decision problem against a predicted preference pattern; or (3) to enumerate the decision problems in which two experimental conditions generate a 1-sided significant difference in choice frequency 'consistent' with the theory. Although simple, these theoretical links are heuristics. They are subject to well-known reasoning fallacies, most notably the fallacy of sweeping generalization and the fallacy of composition. No amount of replication can alleviate these fallacies. On the contrary, reiterating logically inconsistent theoretical reasoning over and again across studies obfuscates science. As a case in point, we consider pairwise choices among simple lotteries and the hypotheses of overweighting or underweighting of small probabilities, as well as the description-experience gap. We discuss ways to avoid reasoning fallacies in bridging the conceptual gap between hypothetical constructs, such as "overweighting," and observable pairwise choice data. Although replication is invaluable, successful replication of hard-to-interpret results is not. Behavioral decision research stands to gain much theoretical and empirical clarity by spelling out precise and formally explicit theories of how hypothetical constructs translate into observable behavior.
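The fallacy of composition behind link (2) can be shown with three lines of hypothetical data: the problem-by-problem majority ("modal") pattern can match no individual participant.

```python
from collections import Counter

# Hypothetical choice data: rows = participants, columns = three binary
# decision problems (1 = chose the "predicted" option, 0 = the other).
data = [
    (1, 1, 0),
    (1, 0, 1),
    (0, 1, 1),
]

# Problem-by-problem majority: the "modal" pattern.
modal = tuple(int(sum(col) > len(data) / 2) for col in zip(*data))
print(modal)                 # (1, 1, 1)
print(modal in data)         # False: no participant shows this pattern
print(Counter(data)[modal])  # 0 participants match the modal pattern
```

Concluding from the modal pattern (1, 1, 1) that participants' preferences match the predicted pattern would be exactly the compositional inference the abstract warns against.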
Subjects
Behavioral Research, Choice Behavior, Decision Making, Risk-Taking, Humans, Psychological Models, Probability
ABSTRACT
Loomes (2010, Psychological Review) proposed the Perceived Relative Argument Model (PRAM) as a novel descriptive theory for risky choice. PRAM differs from models like prospect theory in that decision makers do not compare 2 prospects by first assigning each prospect an overall utility and then choosing the prospect with the higher overall utility. Instead, the decision maker determines the relative argument for one or the other prospect separately for outcomes and probabilities, before reaching an overall pairwise preference. Loomes (2010) did not model variability in choice behavior. We consider 2 types of "stochastic specification" of PRAM. In one, a decision maker has a fixed preference, and choice variability is caused by occasional errors/trembles. In the other, the parameters of the perception functions for outcomes and for probabilities are random, with no constraints on their joint distribution. State-of-the-art frequentist and Bayesian "order-constrained" inference suggest that PRAM accounts poorly for individual subject laboratory data from 67 participants. This conclusion is robust across 7 different utility functions for money and remains largely unaltered when considering a prior unpublished version of PRAM (Loomes, 2006) that featured an additional free parameter in the perception function for probabilities.
Subjects
Choice Behavior/physiology, Psychological Models, Humans
ABSTRACT
The goal of this paper is to make modeling and quantitative testing accessible to behavioral decision researchers interested in substantive questions. We provide a novel, rigorous, yet very general, quantitative diagnostic framework for testing theories of binary choice. This permits the nontechnical scholar to proceed far beyond traditionally rather superficial methods of analysis, and it permits the quantitatively savvy scholar to triage theoretical proposals before investing effort into complex and specialized quantitative analyses. Our theoretical framework links static algebraic decision theory with observed variability in behavioral binary choice data. The paper is supplemented with a custom-designed public-domain statistical analysis package, the QTest software. We illustrate our approach with a quantitative analysis using published laboratory data, including tests of novel versions of "Random Cumulative Prospect Theory." A major asset of the approach is the potential to distinguish decision makers who have a fixed preference and commit errors in observed choices from decision makers who waver in their preferences.
ABSTRACT
Theories of rational choice often make the structural consistency assumption that every decision maker's binary strict preference among choice alternatives forms a strict weak order. Likewise, the very concept of a utility function over lotteries in normative, prescriptive, and descriptive theory is mathematically equivalent to strict weak order preferences over those lotteries, while intransitive heuristic models violate such weak orders. Using new quantitative interdisciplinary methodologies, we dissociate the variability of choices from the structural inconsistency of preferences. We show that laboratory choice behavior among stimuli of a classical "intransitivity" paradigm is, in fact, consistent with variable strict weak order preferences. We find that decision makers act in accordance with a restrictive mathematical model that, for the behavioral sciences, is extraordinarily parsimonious. Our findings suggest that the best place to invest future behavioral decision research is not in the development of new intransitive decision models but rather in the specification of parsimonious models consistent with strict weak order(s), as well as heuristics and other process models that explain why preferences appear to be weakly ordered.
Subjects
Behavioral Research/statistics & numerical data, Decision Making, Psychological Models, Probability, Psychological Theory, Choice Behavior, Statistical Data Interpretation, Humans
ABSTRACT
Transitivity of preferences is a fundamental principle shared by most major contemporary rational, prescriptive, and descriptive models of decision making. To have transitive preferences, a person, group, or society that prefers choice option x to y and y to z must prefer x to z. Any claim of empirical violations of transitivity by individual decision makers requires evidence beyond a reasonable doubt. We discuss why unambiguous evidence is currently lacking and how to clarify the issue. In counterpoint to Tversky's (1969) seminal "Intransitivity of Preferences," we reconsider his data as well as those from more than 20 other studies of intransitive human or animal decision makers. We challenge the standard operationalizations of transitive preferences and discuss pervasive methodological problems in the collection, modeling, and analysis of relevant empirical data. For example, violations of weak stochastic transitivity do not imply violations of transitivity of preference. Building on past multidisciplinary work, we use parsimonious mixture models, where the space of permissible preference states is the family of (transitive) strict linear orders. We show that the data from many of the available studies designed to elicit intransitive choice are consistent with transitive preferences.
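The abstract's point that violations of weak stochastic transitivity do not imply intransitive preferences can be verified on made-up choice probabilities for three alternatives.

```python
from itertools import permutations

# Made-up binary choice probabilities P[(x, y)] = Pr(choose x over y).
P = {("a", "b"): 0.6, ("b", "c"): 0.6, ("a", "c"): 0.4}
for (x, y), p in list(P.items()):
    P[(y, x)] = 1 - p  # complementary probabilities

def wst_holds(P, items=("a", "b", "c")):
    # Weak stochastic transitivity:
    # P(x,y) >= .5 and P(y,z) >= .5 imply P(x,z) >= .5.
    return all(
        not (P[(x, y)] >= 0.5 and P[(y, z)] >= 0.5) or P[(x, z)] >= 0.5
        for x, y, z in permutations(items, 3)
    )

def triangle_holds(P, items=("a", "b", "c")):
    # Triangle inequalities: P(x,y) + P(y,z) - P(x,z) <= 1 for all
    # distinct x, y, z. For three alternatives these characterize
    # probability mixtures of (transitive) strict linear orders.
    return all(
        P[(x, y)] + P[(y, z)] - P[(x, z)] <= 1
        for x, y, z in permutations(items, 3)
    )

print(wst_holds(P), triangle_holds(P))  # False True
```

These proportions violate weak stochastic transitivity yet satisfy the triangle inequalities, so they are consistent with a mixture of transitive linear-order preferences, illustrating why a WST violation is not evidence of intransitive preference.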
Subjects
Choice Behavior, Decision Making, Female, Humans, Illinois, Male, Psychological Models, Stochastic Processes
ABSTRACT
As Duncan Luce and other prominent scholars have pointed out on several occasions, testing algebraic models against empirical data raises difficult conceptual, mathematical, and statistical challenges. Empirical data often result from statistical sampling processes, whereas algebraic theories are nonprobabilistic. Many probabilistic specifications lead to statistical boundary problems and are subject to nontrivial order constrained statistical inference. The present paper discusses Luce's challenge for a particularly prominent axiom: Transitivity. The axiom of transitivity is a central component in many algebraic theories of preference and choice. We offer the currently most complete solution to the challenge in the case of transitivity of binary preference on the theory side and two-alternative forced choice on the empirical side, explicitly for up to five, and implicitly for up to seven, choice alternatives. We also discuss the relationship between our proposed solution and weak stochastic transitivity. We recommend abandoning the latter as a model of transitive individual preferences.
ABSTRACT
For centuries, the mathematical aggregation of preferences by groups, organizations, or society itself has received keen interdisciplinary attention. Extensive theoretical work in economics and political science throughout the second half of the 20th century has highlighted the idea that competing notions of rational social choice intrinsically contradict each other. This has led some researchers to consider coherent democratic decision making to be a mathematical impossibility. Recent empirical work in psychology qualifies that view. This nontechnical review sketches a quantitative research paradigm for the behavioral investigation of mathematical social choice rules on real ballots, experimental choices, or attitudinal survey data. The article poses a series of open questions. Some classical work makes assumptions about voter preferences that are descriptively invalid. Do such technical assumptions lead the theory astray? How can empirical work inform the formulation of meaningful theoretical primitives? Classical "impossibility results" leverage the fact that certain desirable mathematical properties logically cannot hold in all conceivable electorates. Do these properties nonetheless hold true in empirical distributions of preferences? Will future behavioral analyses continue to contradict the expectations of established theory? Under what conditions do competing consensus methods yield identical outcomes and why do they do so?
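The flavor of the classical impossibility results can be reproduced with the textbook Condorcet profile (this example is standard social choice theory, not data from the review): pairwise majority aggregation of individually transitive ballots can cycle.

```python
# Textbook Condorcet profile: each voter ranks three candidates,
# best to worst, and every individual ballot is transitive.
ballots = [
    ["a", "b", "c"],
    ["b", "c", "a"],
    ["c", "a", "b"],
]

def majority_prefers(x, y, ballots):
    # True if a strict majority of voters rank x above y
    wins = sum(b.index(x) < b.index(y) for b in ballots)
    return wins > len(ballots) / 2

for x, y in [("a", "b"), ("b", "c"), ("c", "a")]:
    print(x, ">", y, majority_prefers(x, y, ballots))
# Majority prefers a to b, b to c, and c to a: an intransitive cycle,
# even though every individual ballot is transitive.
```

Whether such cyclic profiles actually arise in empirical distributions of preferences is exactly the kind of question the review poses.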
ABSTRACT
Behavioural social choice has been proposed as a social choice parallel to seminal developments in other decision sciences, such as behavioural decision theory, behavioural economics, behavioural finance and behavioural game theory. Behavioural paradigms compare how rational actors should make certain types of decisions with how real decision makers behave empirically. We highlight that important theoretical predictions in social choice theory change dramatically under even minute violations of standard assumptions. Empirical data violate those critical assumptions. We argue that the nature of preference distributions in electorates is ultimately an empirical question, which social choice theory has often neglected. We also emphasize important insights for research on decision making by individuals. When researchers aggregate individual choice behaviour in laboratory experiments to report summary statistics, they are implicitly applying social choice rules. Thus, they should be aware of the potential for aggregation paradoxes. We hypothesize that such problems may substantially mar the conclusions of a number of (sometimes seminal) papers in behavioural decision research.