Results 1 - 14 of 14
1.
Personal Disord ; 14(1): 19-28, 2023 01.
Article in English | MEDLINE | ID: mdl-36848070

ABSTRACT

The field of personality disorder research has grown since the publication of the Diagnostic and Statistical Manual of Mental Disorders, Third Edition, in 1980, with a notable evolution in the way that personality disorders are defined and operationalized. In evaluating this research, it is necessary to consider the range of sampling practices used. The goal of this study was to describe current sampling methods in personality disorder research and provide recommendations to guide sample design in future personality disorder research. To do this, we coded sampling practices described in recent empirical articles published in four journals that showcase research on personality disorders. We summarized aspects of sampling design, including the combination of study question and sample characteristics (e.g., sample size, sample source, and use of screening), study design, and demographic representation of samples. Findings reveal a need for studies to better consider whether their samples are fit for purpose and to make explicit their target population and sampling frame, as well as the specific procedures (i.e., recruitment) used to carry out sampling. We also discuss issues that arise when attempting to capture low base-rate pathology, which is often associated with high comorbidity. We emphasize a process-oriented approach to developing a sampling strategy for personality disorders research. (PsycInfo Database Record (c) 2023 APA, all rights reserved).


Subjects
Personality Disorders, Personality, Humans, Personality Disorders/diagnosis, Diagnostic and Statistical Manual of Mental Disorders, Research Design
2.
Eval Rev ; 44(4): 325-353, 2020 08.
Article in English | MEDLINE | ID: mdl-30866674

ABSTRACT

BACKGROUND: Bayesian statistics have become popular in the social sciences, in part because they are thought to present more useful information than traditional frequentist statistics. Unfortunately, little is known about whether or how interpretations of frequentist and Bayesian results differ. OBJECTIVES: We test whether presenting Bayesian or frequentist results based on the same underlying data influences the decisions people make. RESEARCH DESIGN: Participants were randomly assigned to read Bayesian and frequentist interpretations of hypothetical evaluations of new education technologies of various degrees of uncertainty, ranging from posterior probabilities of 99.8% to 52.9%, which correspond to frequentist p values of .001 and .65, respectively. SUBJECTS: Across three studies, 933 U.S. adults were recruited from Amazon Mechanical Turk. MEASURES: The primary outcome was the proportion of participants who recommended adopting the new technology. We also measured respondents' certainty in their choice and (in Study 3) how easy it was to understand the results. RESULTS: When presented with Bayesian results, participants were more likely to recommend switching to the new technology. This finding held across all degrees of uncertainty, but especially when the frequentist results reported a p value >.05. Those who recommended change based on Bayesian results were more certain about their choice. All respondents reported that the Bayesian display was easier to understand. CONCLUSIONS: Presenting the same data in either frequentist or Bayesian terms can influence the decisions that people make. This finding highlights the importance of understanding the impact of statistical presentation on how audiences interpret evaluation results.
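The two presentations the study contrasts can be sketched numerically. The snippet below is a minimal illustration (not the paper's actual materials or computation), assuming a normal likelihood with known standard error and a flat prior, under which the posterior for the effect is Normal(estimate, se²); the function names are mine.

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def frequentist_p(estimate, se):
    # Two-sided p value for H0: effect = 0.
    z = estimate / se
    return 2.0 * (1.0 - normal_cdf(abs(z)))

def bayesian_posterior(estimate, se):
    # Posterior P(effect > 0) under a flat prior on the effect.
    return normal_cdf(estimate / se)

# Same data, two displays:
est, se = 2.0, 1.0
p = frequentist_p(est, se)          # ~0.046
post = bayesian_posterior(est, se)  # ~0.977
```

The same estimate thus reads as "p < .05" in one frame and "97.7% probability the new technology is better" in the other, which is the kind of contrast participants responded to.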


Subjects
Bayes Theorem, Choice Behavior, Research Personnel/psychology, Education, Program Evaluation/statistics & numerical data, Technology, United States
3.
J Abnorm Psychol ; 129(1): 49-55, 2020 Jan.
Article in English | MEDLINE | ID: mdl-31868387

ABSTRACT

Clinical psychological research studies often require individuals with specific characteristics. The Internet can be used to recruit broadly, enabling the recruitment of rare groups such as people with specific psychological disorders. However, Internet-based research relies on participant self-report to determine eligibility, and thus data quality depends on participant honesty. For rare groups, even low levels of participant dishonesty can lead to a substantial proportion of fraudulent survey responses, and all studies will include careless respondents who do not pay attention to questions, do not understand them, or provide intentionally wrong responses. Poor-quality responses should be thought of as categorically different from high-quality responses: including them will lead to overestimation of the prevalence of rare groups and incorrect estimates of scale reliability, means, and correlations between constructs. We demonstrate that, for these reasons, including poor-quality responses (which are usually positively skewed) will lead to several data-quality problems, including spurious associations between measures. We provide recommendations about how to ensure that fraudulent participants are detected and excluded from self-report research studies. (PsycINFO Database Record (c) 2019 APA, all rights reserved).
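Screening of the kind recommended here is commonly built from a few simple indicators. The sketch below is illustrative only (the function names, field names, and thresholds are mine, not the paper's): an attention-check flag, an implausibly fast completion time, and a "longstring" index for straight-lining.

```python
def longstring(responses):
    # Length of the longest run of identical consecutive answers;
    # very long runs suggest straight-lining.
    best = run = 1
    for prev, cur in zip(responses, responses[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

def flag_careless(row, min_seconds=60, max_run=8):
    # row: {"responses": [...], "attention_passed": bool, "seconds": float}
    # Returns the list of reasons a response looks like poor quality.
    reasons = []
    if not row["attention_passed"]:
        reasons.append("failed attention check")
    if row["seconds"] < min_seconds:
        reasons.append("implausibly fast")
    if longstring(row["responses"]) >= max_run:
        reasons.append("straight-lining")
    return reasons
```

Responses flagged for any reason would then be inspected or excluded before prevalence, reliability, or correlation estimates are computed.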


Assuntos
Fraude , Sujeitos da Pesquisa , Pesquisa , Humanos , Reprodutibilidade dos Testes , Autorrelato , Inquéritos e Questionários
4.
Behav Res Methods ; 51(5): 2022-2038, 2019 10.
Article in English | MEDLINE | ID: mdl-31512174

ABSTRACT

Amazon Mechanical Turk (MTurk) is widely used by behavioral scientists to recruit research participants. MTurk offers advantages over traditional student subject pools, but it also has important limitations. In particular, the MTurk population is small and potentially overused, and some groups of interest to behavioral scientists are underrepresented and difficult to recruit. Here we examined whether online research panels can avoid these limitations. Specifically, we compared sample composition, data quality (measured by effect sizes, internal reliability, and attention checks), and the non-naivete of participants recruited from MTurk and Prime Panels, an aggregate of online research panels. Prime Panels participants were more diverse in age, family composition, religiosity, education, and political attitudes. Prime Panels participants also reported less exposure to classic protocols and produced larger effect sizes, but only after participants who failed a screening task were excluded. We conclude that online research panels offer a unique opportunity for research, yet one with some important trade-offs.


Assuntos
Ciências Sociais , Atenção , Pesquisa Comportamental/métodos , Crowdsourcing , Confiabilidade dos Dados , Humanos , Internet , Programas de Rastreamento , Reprodutibilidade dos Testes , Estudantes
6.
Behav Brain Sci ; 41: e144, 2018 01.
Article in English | MEDLINE | ID: mdl-31064537

ABSTRACT

Data collection in psychology increasingly relies on "open populations" of participants recruited online, which presents both opportunities and challenges for replication. Reduced costs and the possibility of accessing the same populations allow for more informative replications. However, researchers should ensure the directness of their replications by dealing with the threats of participant non-naiveté and selection effects.


Assuntos
Pesquisadores , Coleta de Dados
7.
Trends Cogn Sci ; 21(10): 736-748, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28803699

ABSTRACT

Crowdsourcing data collection from research participants recruited from online labor markets is now common in cognitive science. We review who is in the crowd and who can be reached by the average laboratory. We discuss reproducibility and review some recent methodological innovations for online experiments. We consider the design of research studies and arising ethical issues. We review how to code experiments for the web, what is known about video and audio presentation, and the measurement of reaction times. We close with comments about the high levels of experience of many participants and an emerging tragedy of the commons.


Assuntos
Ciência Cognitiva , Crowdsourcing , Coleta de Dados/métodos , Humanos , Reprodutibilidade dos Testes
8.
Sci Data ; 3: 160082, 2016 Oct 11.
Article in English | MEDLINE | ID: mdl-27727246

ABSTRACT

We present the data from a crowdsourced project seeking to replicate findings in independent laboratories before (rather than after) they are published. In this Pre-Publication Independent Replication (PPIR) initiative, 25 research groups attempted to replicate 10 moral judgment effects from a single laboratory's research pipeline of unpublished findings. The 10 effects were investigated using online/lab surveys containing psychological manipulations (vignettes) followed by questionnaires. Results revealed a mix of reliable, unreliable, and culturally moderated findings. Unlike any previous replication project, this dataset includes the data from not only the replications but also from the original studies, creating a unique corpus that researchers can use to better understand reproducibility and irreproducibility in science.


Assuntos
Princípios Morais , Reprodutibilidade dos Testes , Humanos
9.
Science ; 351(6277): 1037, 2016 Mar 04.
Article in English | MEDLINE | ID: mdl-26941312

ABSTRACT

Gilbert et al. conclude that evidence from the Open Science Collaboration's Reproducibility Project: Psychology indicates high reproducibility, given the study methodology. Their very optimistic assessment is limited by statistical misconceptions and by causal inferences from selectively interpreted, correlational data. Using the Reproducibility Project: Psychology data, both optimistic and pessimistic conclusions about reproducibility are possible, and neither is yet warranted.


Assuntos
Pesquisa Comportamental , Psicologia , Editoração , Pesquisa
10.
Annu Rev Clin Psychol ; 12: 53-81, 2016.
Article in English | MEDLINE | ID: mdl-26772208

ABSTRACT

Crowdsourcing has had a dramatic impact on the speed and scale at which scientific research can be conducted. Clinical scientists have particularly benefited from readily available research study participants and streamlined recruiting and payment systems afforded by Amazon Mechanical Turk (MTurk), a popular labor market for crowdsourcing workers. MTurk has been used in this capacity for more than five years. The popularity and novelty of the platform have spurred numerous methodological investigations, making it the most studied nonprobability sample available to researchers. This article summarizes what is known about MTurk sample composition and data quality, with an emphasis on findings relevant to clinical psychological research. It then addresses methodological issues with using MTurk (many of which are common to other nonprobability samples but unfamiliar to clinical science researchers) and suggests concrete steps to avoid these issues or minimize their impact.


Assuntos
Pesquisa Biomédica/estatística & dados numéricos , Crowdsourcing/estatística & dados numéricos , Pesquisa Biomédica/normas , Crowdsourcing/normas , Humanos
11.
Psychol Sci ; 26(7): 1131-9, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26063440

ABSTRACT

Although researchers often assume their participants are naive to experimental materials, this is not always the case. We investigated how prior exposure to a task affects subsequent experimental results. Participants in this study completed the same set of 12 experimental tasks at two points in time, first as a part of the Many Labs replication project and again a few days, a week, or a month later. Effect sizes were markedly lower in the second wave than in the first. The reduction was most pronounced when participants were assigned to a different condition in the second wave. We discuss the methodological implications of these findings.
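The drop in effect sizes between waves would typically be quantified as a standardized mean difference such as Cohen's d. A minimal sketch with a pooled standard deviation (my own helper, not the study's analysis code):

```python
from math import sqrt

def cohens_d(group_a, group_b):
    # Standardized mean difference with a pooled SD across two groups.
    na, nb = len(group_a), len(group_b)
    ma = sum(group_a) / na
    mb = sum(group_b) / nb
    def ss(xs, m):
        # Sum of squared deviations from the mean.
        return sum((x - m) ** 2 for x in xs)
    pooled_sd = sqrt((ss(group_a, ma) + ss(group_b, mb)) / (na + nb - 2))
    return (ma - mb) / pooled_sd
```

Computing d separately for wave-1 and wave-2 data makes the attenuation from prior exposure directly comparable across tasks.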


Assuntos
Participação do Paciente/métodos , Seleção de Pacientes , Adulto , Feminino , Humanos , Masculino
12.
Behav Res Methods ; 46(1): 112-30, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23835650

ABSTRACT

Crowdsourcing services, particularly Amazon Mechanical Turk, have made it easy for behavioral scientists to recruit research participants. However, researchers have overlooked crucial differences between crowdsourcing and traditional recruitment methods that provide unique opportunities and challenges. We show that crowdsourced workers are likely to participate across multiple related experiments and that researchers are overzealous in the exclusion of research participants. We describe how both of these problems can be avoided using advanced interface features that also allow prescreening and longitudinal data collection. Using these techniques can minimize the effects of previously ignored drawbacks and expand the scope of crowdsourcing as a tool for psychological research.
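The cross-experiment participation problem reduces to bookkeeping over worker IDs. A minimal sketch (the data layout and function names are mine, not the paper's): find workers who appear in more than one study, and build a prescreening exclusion list for a given study from all prior participation.

```python
def repeat_workers(studies):
    # studies: {study_name: set of worker IDs}
    # Returns the IDs that appear in more than one study.
    seen, repeats = set(), set()
    for ids in studies.values():
        repeats |= seen & ids
        seen |= ids
    return repeats

def exclusion_list(studies, current):
    # IDs in `current` that already took any other related study.
    prior = set().union(*(ids for name, ids in studies.items() if name != current))
    return studies[current] & prior
```

In practice the same set logic runs at recruitment time, so non-naive workers are screened out before they enter a related experiment rather than excluded afterward.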


Assuntos
Pesquisa Comportamental/métodos , Crowdsourcing/métodos , Coleta de Dados/métodos , Coleta de Dados/normas , Vigilância da População/métodos , Sujeitos da Pesquisa/psicologia , Análise e Desempenho de Tarefas , Análise de Variância , Letramento em Saúde , Voluntários Saudáveis , Humanos , Disseminação de Informação , Internet , Estudos Longitudinais , Masculino , Prática Psicológica , Pesquisadores , Jogos de Vídeo/psicologia , Jogos de Vídeo/estatística & dados numéricos
13.
Conscious Cogn ; 22(4): 1195-205, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24021848

ABSTRACT

This research examined how and why group membership diminishes the attribution of mind to individuals. We found that mind attribution was inversely related to the size of the group to which an individual belonged (Experiment 1). Mind attribution was affected by group membership rather than the total number of entities perceived at once (Experiment 2). Moreover, mind attribution to an individual varied with the perception that the individual was a group member. Participants attributed more mind to an individual that appeared distinct or distant from other group members than to an individual that was perceived to be similar or proximal to a cohesive group (Experiments 3 and 4). This effect occurred for both human and nonhuman targets, and was driven by the perception of the target as an entitative group member rather than by the knowledge that the target was an entitative group member (Experiment 5).


Assuntos
Identificação Social , Percepção Social , Adulto , Feminino , Humanos , Masculino , Pessoa de Meia-Idade , Teoria da Mente , Adulto Jovem
14.
Psychol Sci ; 23(4): 370-4, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22395129

ABSTRACT

In two experiments, we tested for a causal link between thought speed and risk taking. In Experiment 1, we manipulated thought speed by presenting neutral-content text at either a fast or a slow pace and having participants read the text aloud. In Experiment 2, we manipulated thought speed by presenting fast-, medium-, or slow-paced movie clips that contained similar content. Participants who were induced to think more quickly took more risks with actual money in Experiment 1 and reported greater intentions to engage in real-world risky behaviors, such as unprotected sex and illegal drug use, in Experiment 2. These experiments provide evidence that faster thinking induces greater risk taking.


Assuntos
Tomada de Decisões , Assunção de Riscos , Pensamento , Adulto , Humanos , Fatores de Tempo