1.
Synthese ; 204(1): 3, 2024.
Article in English | MEDLINE | ID: mdl-38911049

ABSTRACT

Our best current science seems to suggest the laws of physics and the initial conditions of our universe are fine-tuned for the possibility of life. A significant number of scientists and philosophers believe that the fine-tuning is evidence for the multiverse hypothesis. This paper will focus on a much-discussed objection to the inference from the fine-tuning to the multiverse: the charge that this line of reasoning commits the inverse gambler's fallacy. Despite the existence of a literature going back decades, this philosophical debate has made little contact with scientific discussion of fine-tuning and the multiverse, which mainly revolves around a specific form of the multiverse hypothesis rooted in eternal inflation combined with string theory. Because of this, potentially important implications from science to philosophy, and vice versa, have been left underexplored. In this paper, I will take a first step at joining up these two discussions, by arguing that attention to the eternal inflation + string theory conception of the multiverse supports the inverse gambler's fallacy charge. It does this by supporting the idea that our universe is contingently fine-tuned, thus addressing the concern that proponents of the inverse gambler's fallacy charge have assumed this without argument.

2.
Heliyon ; 10(9): e30094, 2024 May 15.
Article in English | MEDLINE | ID: mdl-38694114

ABSTRACT

Opportunity actualization is a critical competency attributed to entrepreneurs and has received widespread attention in the entrepreneurship literature. However, knowledge of Entrepreneurial Opportunity Abandonment (EOA) decisions is limited. We therefore explore the relatively under-studied EOA, analyzing why entrepreneurs commit decision errors, abandoning potentially viable opportunities (type I error) or pursuing non-opportunity spaces (type II error) that they ultimately forsake later. Through a scoping literature review, we highlight deeper psychological variables that shape the entrepreneurial opportunity behavior triggering EOA decisions. We discuss entrepreneurs' cognitive limitations in articulating, concretizing, and communicating an opportunity. We argue that varying construal mindsets cause reification fallacies and create perceptual blocks in enunciating an opportunity idea. Further, subjective stakeholder feedback and biased information exchange largely shape EOA decisions, mediated by the information-processing capacity of entrepreneurs. Finally, we propose four entrepreneurial decision-limiting hypotheses that require empirical investigation.

3.
J Clin Transl Sci ; 8(1): e78, 2024.
Article in English | MEDLINE | ID: mdl-38745875

ABSTRACT

Introduction: Screening for health-related social needs (HRSNs) within health systems is a widely accepted recommendation that is nonetheless challenging to implement. Aggregate area-level metrics of social determinants of health (SDoH) are easily accessible and have been used as proxies in the interim. However, gaps remain in our understanding of the relationships between these measurement methodologies. This study assesses the relationships between three area-level SDoH measures, the Area Deprivation Index (ADI), the Social Deprivation Index (SDI), and the Social Vulnerability Index (SVI), and individual HRSNs among patients within one large urban health system. Methods: Patients screened for HRSNs between 2018 and 2019 (N = 45,312) were included in the analysis. Multivariable logistic regression models assessed the association between area-level SDoH scores and individual HRSNs. Bivariate choropleth maps displayed the intersection of area-level SDoH and individual HRSNs, and the sensitivity, specificity, and positive and negative predictive values of the three area-level metrics were assessed in relation to individual HRSNs. Results: The SDI and SVI were significantly associated with HRSNs in areas with high SDoH scores, with strong specificity and positive predictive values (∼83% and ∼78%) but poor sensitivity and negative predictive values (∼54% and ∼62%). The strength of these associations and predictive values was poor in areas with low SDoH scores. Conclusions: While limitations exist in utilizing area-level SDoH metrics as proxies for individual social risk, understanding where and how these data can be useful in combination is critical both for meeting the immediate needs of individuals and for strengthening the advocacy platform needed for resource allocation across communities.
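
For readers less familiar with the reported classification metrics, the following minimal sketch shows how sensitivity, specificity, PPV, and NPV are computed when an area-level "high deprivation" flag is treated as a predictor of an individual positive HRSN screen. The 2x2 counts are hypothetical, chosen only to echo the pattern of values reported above; they are not the study's data.

    # Hypothetical 2x2 counts (not the study's data), chosen to echo the reported pattern.
    tp, fp = 780, 220    # area flagged high-deprivation: individual screens positive / negative
    fn, tn = 660, 1077   # area not flagged:              individual screens positive / negative

    sensitivity = tp / (tp + fn)   # ~54%
    specificity = tn / (tn + fp)   # ~83%
    ppv = tp / (tp + fp)           # ~78%
    npv = tn / (tn + fn)           # ~62%
    print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, PPV {ppv:.0%}, NPV {npv:.0%}")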

4.
Digit Health ; 10: 20552076241254019, 2024.
Article in English | MEDLINE | ID: mdl-38766362

ABSTRACT

The growing and ubiquitous digitalization trends embodied in eHealth initiatives have led to the widespread adoption of digital solutions in the healthcare sector. These initiatives have been heralded as a potent transformative force aiming to improve healthcare delivery, enhance patient outcomes and increase the efficiency of healthcare systems. However, despite the significant potential and possibilities offered by eHealth initiatives, the article highlights the importance of critically examining their implications and cautions against the misconception that technology alone can solve complex public health concerns and healthcare challenges. It emphasizes the need to critically consider the sociocultural context, education and training, organizational and institutional aspects, regulatory frameworks, user involvement and other important factors when implementing eHealth initiatives. Disregarding these crucial elements can render eHealth initiatives inefficient or even counterproductive. In view of that, the article identifies failures and fallacies that can hinder the success of eHealth initiatives and highlights areas where they often fall short of meeting rising and unjustified expectations. To address these challenges, the article recommends a more realistic and evidence-based approach to planning and implementing eHealth initiatives. It calls for consistent research agendas, appropriate evaluation methodologies and strategic orientations within eHealth initiatives. By adopting this approach, eHealth initiatives can contribute to the achievement of societal goals and the realization of the key health priorities and development imperatives of healthcare systems on a global scale.

5.
Environ Manage ; 2024 May 29.
Article in English | MEDLINE | ID: mdl-38811434

ABSTRACT

Local actors have growing prominence in climate governance, but key capacities and powers remain with national policymakers. Coordination between national and local climate action is therefore of increasing importance. Underappreciated in the existing academic and policy literature, coordination between actors at different scales can be affected not only by politics and institutional arrangements, but also by methods of data analysis. Exploring two datasets of GHG emissions by local area in England (one of consumption-based emissions, the other of territorial emissions), this paper shows the potential for a data scaling problem known as the modifiable areal unit problem and its possible consequences for the efficacy and equity of climate action. While this analysis is conceptual and does not identify specific instances of the modifiable areal unit problem or its consequences, it calls attention to methods of data analysis as possible contributors to climate governance challenges. Among other areas, future analysis is needed to explore how data scaling and other aspects of data processing and analysis may affect our understanding of non-state actors' contribution to climate action.
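
As a toy illustration of the modifiable areal unit problem discussed here (invented numbers and area names, not the paper's data), the sketch below re-aggregates the same small-area per-capita emissions under two different boundary sets; the apparent disparity between larger areas changes completely even though the underlying data do not.

    # Hypothetical per-capita emissions for four small areas (arbitrary units).
    wards = {"w1": 10, "w2": 90, "w3": 20, "w4": 80}

    zoning_a = {"north": ["w1", "w2"], "south": ["w3", "w4"]}
    zoning_b = {"east": ["w1", "w3"], "west": ["w2", "w4"]}

    for name, zoning in (("A", zoning_a), ("B", zoning_b)):
        means = {zone: sum(wards[w] for w in members) / len(members)
                 for zone, members in zoning.items()}
        print(f"zoning {name}: {means}")
    # Zoning A shows no disparity (50 vs 50); zoning B shows a large one (15 vs 85).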

7.
Endeavour ; 48(1): 100919, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38520917

ABSTRACT

This article is both a comment on the collection of papers, "Specialists with Spirit: Re-Enchanting the Vocation of Science," offered as a tribute to Klaas van Berkel, and an attempt to add historical depth to present-day sensibilities about the academic discipline called the history of science: Is it a special sort of inquiry? Is science as its subject matter a special sort of culture? Max Weber's 1917 Science as a Vocation lecture, and its continuing appropriations, is a focal point for addressing these questions.

8.
Eur J Cancer ; 194: 113357, 2023 11.
Article in English | MEDLINE | ID: mdl-37827064

ABSTRACT

BACKGROUND: The 'Table 1 Fallacy' refers to the unsound use of significance testing for comparing the distributions of baseline variables between randomised groups to draw erroneous conclusions about balance or imbalance. We performed a cross-sectional study of the Table 1 Fallacy in phase III oncology trials. METHODS: From ClinicalTrials.gov, 1877 randomised trials were screened. Multivariable logistic regressions evaluated predictors of the Table 1 Fallacy. RESULTS: A total of 765 randomised controlled trials involving 553,405 patients were analysed. The Table 1 Fallacy was observed in 25% of trials (188 of 765), with 3% of comparisons deemed significant (59 of 2353), approximating the typical 5% type I error assertion probability. Application of trial-level multiplicity corrections reduced the rate of significant findings to 0.3% (six of 2345 tests). Factors associated with lower odds of the Table 1 Fallacy included industry sponsorship (adjusted odds ratio [aOR] 0.29, 95% confidence interval [CI] 0.18-0.47; multiplicity-corrected P < 0.0001), larger trial size (≥795 versus <280 patients; aOR 0.32, 95% CI 0.19-0.53; multiplicity-corrected P = 0.0008), and publication in a European versus American journal (aOR 0.06, 95% CI 0.03-0.13; multiplicity-corrected P < 0.0001). CONCLUSIONS: This study highlights the persistence of the Table 1 Fallacy in contemporary oncology randomised controlled trials, with one of every four trials testing for baseline differences after randomisation. Significance testing is a suboptimal method for identifying unsound randomisation procedures and may encourage misleading inferences. Journal-level enforcement is a possible strategy to help mitigate this fallacy.
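
The mechanism behind roughly 5% of baseline comparisons turning out "significant" can be illustrated with a small simulation, sketched below under simple assumptions (normally distributed baseline variables, two-sided t-tests at alpha = 0.05); this is an illustration of the fallacy, not the authors' analysis code.

    # Simulate properly randomised trials: both arms come from the same population,
    # so every baseline difference flagged at p < 0.05 is a false positive.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_trials, n_vars, n_per_arm = 1000, 10, 200
    hits = total = 0
    for _ in range(n_trials):
        for _ in range(n_vars):
            arm_a = rng.normal(size=n_per_arm)
            arm_b = rng.normal(size=n_per_arm)
            _, p = stats.ttest_ind(arm_a, arm_b)
            hits += p < 0.05
            total += 1
    print(f"'Significant' baseline comparisons: {hits / total:.1%}")  # close to 5%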


Subjects
Neoplasms, Humans, Prevalence, Cross-Sectional Studies, Neoplasms/epidemiology, Neoplasms/therapy, Randomized Controlled Trials as Topic
9.
J Intell ; 11(7)2023 Jul 22.
Article in English | MEDLINE | ID: mdl-37504792

ABSTRACT

In some instances, such as in sports, individuals will cheer on the player with the "hot hand". But is the hot hand phenomenon a fallacy? The current research investigated (1) whether the hot hand fallacy (HHF) was related to risky decisions during a gambling scenario, and (2) whether metacognitive awareness might be related to optimal decisions. After measuring baseline tendencies to use the hot hand heuristic, participants were presented with a series of prior card gambling results that included either winning streaks or losing streaks and asked to choose one of two cards: a good card or a bad card. In addition, we examined whether high metacognitive awareness (measured as the ability to discriminate between correct and incorrect responses) would be negatively related to the risky decisions induced by the hot hand heuristic. The results showed that our predictions were partially supported. For winning streaks, individuals with a weak baseline tendency to use the heuristic made fewer risky decisions as metacognitive awareness increased. However, those with a strong baseline tendency to use the hot hand showed no such decrease in risky decisions with higher metacognitive awareness. On the whole, these complex data suggest that further research on the HHF would be helpful for implementing novel ways of avoiding the fallacy, if needed.

10.
Cureus ; 15(6): e40242, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37440801

ABSTRACT

This manuscript presents a concise approach to tackling the widespread misuse of statistical significance in scientific research, focusing on public health. It offers practical guidance for conducting accurate statistical evaluations and promoting easily understandable results based on actual evidence. When conducting a statistical study to inform decision-making, it is recommended to follow a step-by-step sequence while considering various factors. Firstly, multiple target hypotheses should be adopted to assess the compatibility of experimental data with different models. Reporting all P-values in full, rounded to a single non-zero significant digit, enhances transparency and reduces the likelihood of exaggerating the state of the evidence. Detailed documentation of the procedures used to evaluate the compatibility between test assumptions and data should be provided for rigorous assessment. A descriptive evaluation of results can be aided by using statistical compatibility ranges, which help avoid misrepresenting the evidence. Separately evaluating and reporting statistical compatibility and effect size prevents the magnitude fallacy. Additionally, reporting measures of statistical effect size enables evaluation of sectoral relevance, such as clinical significance. Multiple compatibility intervals, such as 99%, 95%, and 90% confidence intervals, should be reported to allow readers to assess the variation of P-values with the width of the interval. These recommendations aim to enhance the robustness and interpretability of statistical analyses and promote transparent reporting of findings. The author encourages journal adoption of similar frameworks to enhance scientific rigor, particularly in the field of medical science.
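
To make the recommendation on multiple compatibility intervals concrete, here is a brief sketch (not taken from the article) computing 90%, 95%, and 99% intervals for a hypothetical effect estimate and standard error using the usual normal approximation.

    from scipy.stats import norm

    estimate, std_error = 1.8, 0.7   # hypothetical effect size and its standard error
    for level in (0.90, 0.95, 0.99):
        z = norm.ppf(1 - (1 - level) / 2)                  # two-sided critical value
        lo, hi = estimate - z * std_error, estimate + z * std_error
        print(f"{level:.0%} compatibility interval: ({lo:.2f}, {hi:.2f})")
    # The intervals widen as the confidence level rises, letting readers see how the evidence
    # looks under progressively stricter standards rather than a single significance cut-off.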

11.
Behav Brain Sci ; : 1-68, 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37357710

ABSTRACT

When a measure becomes a target, it ceases to be a good measure. For example, when standardized test scores in education become targets, teachers may start 'teaching to the test', leading to a breakdown of the relationship between the measure (test performance) and the underlying goal (quality education). Similar phenomena have been named and described across a broad range of contexts, such as economics, academia, machine learning, and ecology. Yet it remains unclear whether these phenomena bear only superficial similarities, or if they derive from some fundamental unifying mechanism. Here, we propose such a unifying mechanism, which we label proxy failure. We first review illustrative examples and their labels, such as the 'Cobra effect', 'Goodhart's law', and 'Campbell's law'. Second, we identify central prerequisites and constraints of proxy failure, noting that it is often only a partial failure or divergence. We argue that whenever incentivization or selection is based on an imperfect proxy measure of the underlying goal, a pressure arises which tends to make the proxy a worse approximation of the goal. Third, we develop this perspective for three concrete contexts, namely neuroscience, economics and ecology, highlighting similarities and differences. Fourth, we outline consequences of proxy failure, suggesting it is key to understanding the structure and evolution of goal-oriented systems. Our account draws on a broad range of disciplines, but we can only scratch the surface within each. We thus hope the present account elicits a collaborative enterprise, entailing both critical discussion and extensions in contexts we have missed.
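
One way to see the core claim, that selecting on an imperfect proxy degrades its relationship to the goal, is the toy simulation below. It is a sketch under stated assumptions (goal value and a "gaming" component are independent standard normals, and the proxy is their sum), not a model from the paper.

    import random

    random.seed(0)
    n = 100_000
    agents = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]   # (goal, gaming)
    by_proxy = sorted(agents, key=lambda a: a[0] + a[1], reverse=True)      # proxy = goal + gaming

    top = by_proxy[: n // 100]                                              # select the top 1% on the proxy
    mean_goal_via_proxy = sum(goal for goal, _ in top) / len(top)
    mean_goal_direct = sum(sorted(g for g, _ in agents)[-n // 100:]) / (n // 100)
    print(f"mean goal when selecting on the proxy: {mean_goal_via_proxy:.2f}")
    print(f"mean goal when selecting on the goal:  {mean_goal_direct:.2f}")
    # Strong selection on the proxy also rewards the gaming component, so the realised
    # goal value among those selected falls well short of what direct selection achieves.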

12.
Front Psychol ; 14: 1132168, 2023.
Article in English | MEDLINE | ID: mdl-37063564

ABSTRACT

In real life, we often have to make judgements under uncertainty. One such judgement task is estimating the probability of a given event based on uncertain evidence for the event, such as estimating the chances of actual fire when the fire alarm goes off. On the one hand, previous studies have shown that human subjects often significantly misestimate the probability in such cases. On the other hand, these studies have offered divergent explanations as to the exact causes of these judgment errors (or, synonymously, biases). For instance, different studies have attributed the errors to the neglect (or underweighting) of the prevalence (or base rate) of the given event, or the overweighting of the evidence for the individual event ('individuating information'), etc. However, whether or to what extent any such explanation can fully account for the observed errors remains unclear. To help fill this gap, we studied the probability estimation performance of non-professional subjects under four different real-world problem scenarios: (i) Estimating the probability of cancer in a mammogram given the relevant evidence from a computer-aided cancer detection system, (ii) estimating the probability of drunkenness based on breathalyzer evidence, and (iii & iv) estimating the probability of an enemy sniper based on two different sets of evidence from a drone reconnaissance system. In each case, we quantitatively characterized the contributions of the various potential explanatory variables to the subjects' probability judgements. We found that while the various explanatory variables together accounted for about 30 to 45% of the overall variance of the subjects' responses depending on the problem scenario, no single factor was sufficient to account for more than 53% of the explainable variance (or about 16 to 24% of the overall variance), let alone all of it. Further analyses of the explained variance revealed the surprising fact that no single factor accounted for significantly more than its 'fair share' of the variance. Taken together, our results demonstrate quantitatively that it is statistically untenable to attribute the errors of probabilistic judgement to any single cause, including base rate neglect. A more nuanced and unifying explanation would be that the actual biases reflect a weighted combination of multiple contributing factors, the exact mix of which depends on the particular problem scenario.
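
For the mammogram-style scenario, the normative benchmark against which such estimates are compared is Bayes' rule. The short worked example below uses hypothetical numbers (a 1% base rate, 90% hit rate, 9% false-positive rate), not the study's actual stimuli, to show how far the correct posterior can sit below the hit rate.

    prevalence = 0.01            # base rate: P(cancer)
    hit_rate = 0.90              # P(positive flag | cancer)
    false_alarm_rate = 0.09      # P(positive flag | no cancer)

    p_flag = hit_rate * prevalence + false_alarm_rate * (1 - prevalence)
    posterior = hit_rate * prevalence / p_flag
    print(f"P(cancer | positive flag) = {posterior:.1%}")   # about 9%, far below the 90% hit rate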

13.
Forensic Sci Med Pathol ; 19(4): 605-612, 2023 12.
Article in English | MEDLINE | ID: mdl-37099196

ABSTRACT

de Boer et al. criticize the conclusions in our 2020 paper on the validity of Excited Delirium Syndrome (ExDS) as "egregiously misleading." Our conclusion was that there "is no existing evidence that indicates that ExDS is inherently lethal in the absence of aggressive restraint." The basis for de Boer and colleagues' criticism of our paper is that the ExDS literature does not provide an unbiased view of the lethality of the condition, and therefore the true epidemiologic features of ExDS cannot be determined from what has been published. The criticism is unrelated to the goals or methods of the study, however. Our stated purpose was to investigate "how the term ExDS has evolved in the literature and been endowed with a uniquely lethal quality," and whether there is "evidence for ExDS as a unique cause of a death that would have occurred regardless of restraint, or a label used when a restrained and agitated person dies, and which erroneously directs attention away from the role of restraint in explaining the death." We cannot fathom how de Boer et al. missed this clearly stated description of the study rationale, or why they would endorse a series of fallacious and meaningless claims that gave the appearance that they failed to grasp the basic design of the study. We do, however, acknowledge and thank these authors for pointing out three minor citation errors and an equally minor table formatting error, none of which altered the reported results and conclusions in the slightest.


Subjects
Delirium, Police, Humans, Aggression, Causality, Physical Restraint/adverse effects
14.
Psychon Bull Rev ; 30(4): 1564-1574, 2023 Aug.
Article in English | MEDLINE | ID: mdl-36795245

ABSTRACT

Humans and other animals are capable of reasoning. However, there are overwhelming examples of errors or anomalies in reasoning. In two experiments, we studied whether rats, like humans, estimate the conjunction of two events as more likely than each event independently, a phenomenon that has been called the conjunction fallacy. In both experiments, rats learned through food reinforcement to press a lever under some cue conditions but not others. Sound B was rewarded whereas sound A was not; however, B presented with the visual cue Y was not rewarded, whereas A presented with the visual cue X was rewarded (i.e., A-, AX+, B+, BY-). Both visual cues were presented in the same bulb. After training, rats received test sessions in which A and B were presented with the bulb either explicitly off or occluded by a metal piece. Thus, in the occluded condition, it was ambiguous whether the trials were of the elements alone (A or B) or of the compounds (AX or BY). Rats responded in the occluded condition as if the compound cues were most likely present. The second experiment investigated whether this error in probability estimation in Experiment 1 could be due to a conjunction fallacy, and whether it could be attenuated by increasing the ratio of element/compound trials from the original 50-50 to 70-30 and 90-10. Only the 90-10 condition (where 90% of the training trials were of just A or just B) did not show a conjunction fallacy, though it emerged in all groups with additional training. These findings open new avenues for exploring the mechanisms behind the conjunction fallacy effect.


Assuntos
Sinais (Psicologia) , Resolução de Problemas , Humanos , Ratos , Animais , Probabilidade , Reforço Psicológico , Recompensa
15.
Argumentation ; 37(2): 253-267, 2023.
Article in English | MEDLINE | ID: mdl-36817945

ABSTRACT

The appearance condition of fallacies refers to the phenomenon of weak arguments, or moves in argumentation, appearing to be okay when really they are not. Not all theorists agree that the appearance condition should be part of the conception of fallacies, but this essay explores some of the consequences of including it. In particular, the differences between committing a fallacy, causing a fallacy, and observing a fallacy are identified. The remainder of the paper is given over to discussing possible causes of mistakenly perceiving weak argumentation moves as okay. Among these are argument-caused misperception, perspective-caused misperception, discursive-environment-caused misperception, and perceiver-caused misperception. The discussion aims to be sufficiently general so that it can accommodate different models and standards of argumentation that make a place for fallacies.

16.
Cogn Sci ; 47(1): e13211, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36680427

ABSTRACT

Beliefs like the Gambler's Fallacy and the Hot Hand have interested cognitive scientists, economists, and philosophers for centuries. We propose that these judgment patterns arise from the observer's mental models of the sequence-generating mechanism, moderated by the strength of belief in an a priori base rate. In six behavioral experiments, participants observed one of three mechanisms generating sequences of eight binary events: a random mechanical device, an intentional goal-directed actor, and a financial market. We systematically manipulated participants' beliefs about the base rate probabilities at which different outcomes were generated by each mechanism. Participants judged 18 sequences of outcomes produced by a mechanism with either an unknown base rate, a specified distribution of three equiprobable base rates, or a precise, fixed base rate. Six target sequences ended in streaks of between two and seven identical outcomes. The most common predictions for subsequent events were best described as pragmatic belief updating, expressed as an increasingly strong expectation that a streak of identical signals would repeat as the length of that streak increased. The exception to this pattern was for sequences generated by a random mechanical device with a fixed base rate of .50. Under this specific condition, participants exhibited a bias toward reversal of streaks, and this bias was larger when participants were asked to make a dichotomous choice versus a numerical probability rating. We review alternate accounts for the anomalous judgments of sequences and conclude with our favored interpretation that is based on Rabin's version of Tversky & Kahneman's Law of Small Numbers.
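
The "pragmatic belief updating" pattern has a simple normative analogue: when the base rate itself is uncertain, a Bayesian observer's expectation that a streak will continue grows with streak length, whereas a known, fixed base rate of .50 keeps that expectation at .50. The sketch below is an illustration assuming a uniform Beta(1,1) prior (Laplace's rule of succession), not the authors' model.

    # P(next outcome repeats the streak) after observing k identical outcomes,
    # assuming a uniform Beta(1,1) prior over the unknown base rate.
    for k in range(2, 8):
        p_repeat = (k + 1) / (k + 2)   # Laplace's rule of succession
        print(f"streak of {k}: P(repeat) = {p_repeat:.2f}   (known .50 base rate: 0.50)")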


Subjects
Gambling, Humans, Gambling/psychology, Problem Solving, Judgment, Probability, Psychological Models
17.
Hum Factors ; 65(4): 592-617, 2023 06.
Article in English | MEDLINE | ID: mdl-34233530

ABSTRACT

OBJECTIVE: Three experiments sought to understand performance limitations in controlling a ship attempting to meet another moving ship that approached from various trajectories. The influence of uncertainty, resulting from occasional unpredictable delays in one's own movement, was examined. BACKGROUND: Cognitive elements of rendezvous have been little studied. Related work, such as the planning fallacy and the bias toward underestimating time-to-contact, implies a tendency toward late arrival at a rendezvous. METHODS: In a simplified simulation, participants controlled the speed and/or heading of their own ship once per scenario to try to rendezvous with another ship. Forty-five scenarios of approximately 30 s were conducted with different starting geometries and, in two of three experiments, with different frequencies and lengths of the unexpected delays. RESULTS: Perfect rendezvous were hard to obtain, with a general tendency to arrive late and pass behind the target vessel, although this was dependent on the angle of approach and relative speed. When occasional delays were introduced, less frequent but longer delays disrupted performance more than shorter but more frequent delays. Where delays were possible but no delay occurred, there was no longer evidence of a general tendency to pass behind the target ship more frequently. Additionally, people did not wait to see whether the unpredictable delays would occur before executing a course of action. Different control strategies were deployed, and dual-axis control was preferred. CONCLUSIONS: The tendency to arrive late and the influence of the possibility of uncertain delays are discussed in relation to control strategies.


Subjects
Uncertainty, Humans, Computer Simulation
18.
Theor Med Bioeth ; 44(1): 41-56, 2023 02.
Article in English | MEDLINE | ID: mdl-36273366

ABSTRACT

Research into cognitive enhancement is highly controversial, and arguments for and against it have failed to identify the logical fallacy underlying this debate: the fallacy of composition. The fallacy of composition is a lesser-known fallacy of ambiguity, but it has been explored and applied extensively in other fields, including economics. The fallacy, which occurs when characteristics of the parts of a whole are incorrectly extended to the whole itself, yielding a false conclusion, should be addressed in the debate on cognitive enhancement and within education. Within cognitive enhancement, the premise that individual distinct cognitive processes can be enhanced by cognitive enhancers leads to the conclusion that they must enhance cognition overall, and this idea is pervasive in the literature. If the goal of cognitive enhancement is to enhance cognition or learning, and not merely individual cognitive processes, then this is a clear example of the fallacy of composition. The ambiguity of "cognitive," "cognition," and "enhancement" only perpetuates this fallacy and creates more confusion surrounding the purposes and goals of enhancement. Identifying this fallacy does not threaten the existing body of research; however, it provides a novel framework to explore new avenues for research, education, and enhancement, particularly through education reform initiatives. Education enhances and facilitates learning, and improvements to education could be considered cognitive enhancements. Furthermore, the same fallacy is ubiquitous in education; educators commit it by "teaching to the test" and prioritizing memorization over generalizable skills such as critical thinking and problem solving. We will explore these new avenues for research and highlight principles of learning success from other disciplines to create a clearer understanding of the means and ends of cognitive enhancement. Recognizing the pervasiveness of the composition fallacy in cognitive enhancement and education will lead to greater clarity of normative positions and insights into student learning that steer away from fallacious reasoning.


Subjects
Cognition, Logic, Humans
19.
J Postgrad Med ; 69(1): 35-40, 2023.
Article in English | MEDLINE | ID: mdl-36255018

ABSTRACT

The McNamara fallacy refers to the tendency to focus on numbers, metrics, and quantifiable data while disregarding the meaningful qualitative aspects. This paper reviews the existence of such a fallacy in medical education. Competency-based medical education (CBME) has been introduced in India with the goal of having Indian Medical Graduates competent in five different roles: Clinician, Communicator, Leader and member of the health care team, Professional, and Lifelong learner. If we focus only on numbers and structure to assess the competencies pertaining to these roles, we would fall prey to the McNamara fallacy. To assess these roles in the real sense, we need to embrace qualitative assessment methods and appreciate their value in competency-based education. This can be done by using various workplace-based assessments, choosing tools based on educational impact rather than psychometric properties, using narratives and descriptive evaluation, giving grades instead of marks, and improving the quality of the questions asked in various exams. There are challenges in adopting qualitative assessment, from moving past the objective-subjective debate to developing expertise in conducting and documenting such assessments and adding the rigor of qualitative research methods to enhance their credibility. The perspective on assessment thus needs a paradigm shift: we need to assess the important rather than just making the assessed important, and this would be crucial for the success of the CBME curriculum.


Assuntos
Educação Baseada em Competências , Educação Médica , Humanos , Educação Baseada em Competências/métodos , Currículo , Competência Clínica , Índia
20.
Assessment ; 30(8): 2626-2643, 2023 12.
Article in English | MEDLINE | ID: mdl-36129155

ABSTRACT

This study examines the congruency between the recently introduced Dark Factor of Personality (D) and Antagonism (A; low Agreeableness) from the Five-Factor Model of personality. Using two samples (Ns of 365 and 600), we examined simple zero-order correlations between D and A (rs of .69 and .64). In addition, we used a range of relevant external criteria (e.g., antisocial behavior, aggression, domains and facets of personality, Diagnostic and Statistical Manual of Mental Disorders [DSM] personality disorders [PDs], impulsivity, and political skill) to examine the degree of absolute similarity in the relations that D and A bear to these criteria. These similarity coefficients were then compared with the similarities produced by measures of constructs different from D and A but similar among themselves (i.e., psychopathy and narcissism in both samples, plus depression in Sample 1). The degree of similarity between D and A (rICCs = .96 and .93) is consistent with what is observed between other measures of the same construct. We conclude that D and A yield largely identical empirical correlates and thus likely represent an instance of the jangle fallacy. We believe that future efforts would be better spent furthering the literature around the well-established Agreeableness versus Antagonism construct.


Subjects
Machiavellianism, Personality, Humans, Antisocial Personality Disorder/diagnosis, Personality Disorders/diagnosis, Narcissism