Results 1 - 20 of 22
1.
Annu Rev Psychol ; 73: 719-748, 2022 01 04.
Article in English | MEDLINE | ID: mdl-34665669

ABSTRACT

Replication - an important, uncommon, and misunderstood practice - is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.


Subjects
Research Design, Humans, Reproducibility of Results
2.
PLoS Biol ; 14(5): e1002456, 2016 05.
Article in English | MEDLINE | ID: mdl-27171007

ABSTRACT

Beginning January 2014, Psychological Science gave authors the opportunity to signal open data and materials if they qualified for badges that accompanied published articles. Before badges, less than 3% of Psychological Science articles reported open data. After badges, 23% reported open data, with an accelerating trend; 39% reported open data in the first half of 2015, an increase of more than an order of magnitude from baseline. There was no change over time in the low rates of data sharing among comparison journals. Moreover, reporting openness does not guarantee openness. When badges were earned, reportedly available data were more likely to be actually available, correct, usable, and complete than when badges were not earned. Open materials also increased to a weaker degree, and there was more variability among comparison journals. Badges are simple, effective signals to promote open practices and improve preservation of data and materials by using independent repositories.


Subjects
Psychology, Publishing/organization & administration, Serial Publications/statistics & numerical data, Cost-Benefit Analysis, Information Dissemination, Internet, Publishing/trends, Serial Publications/economics
3.
Proc Natl Acad Sci U S A ; 113(19): 5206-11, 2016 May 10.
Article in English | MEDLINE | ID: mdl-27114514

ABSTRACT

Reconsolidation theory proposes that retrieval can destabilize an existing memory trace, opening a time-dependent window during which that trace is amenable to modification. Support for the theory is largely drawn from nonhuman animal studies that use invasive pharmacological or electroconvulsive interventions to disrupt a putative postretrieval restabilization ("reconsolidation") process. In human reconsolidation studies, however, it is often claimed that postretrieval new learning can be used as a means of "updating" or "rewriting" existing memory traces. This proposal warrants close scrutiny because the ability to modify information stored in the memory system has profound theoretical, clinical, and ethical implications. The present study aimed to replicate and extend a prominent 3-day motor-sequence learning study [Walker MP, Brakefield T, Hobson JA, Stickgold R (2003) Nature 425(6958):616-620] that is widely cited as a convincing demonstration of human reconsolidation. However, in four direct replication attempts (n = 64), we did not observe the critical impairment effect that has previously been taken to indicate disruption of an existing motor memory trace. In three additional conceptual replications (n = 48), we explored the broader validity of reconsolidation-updating theory by using a declarative recall task and sequences similar to phone numbers or computer passwords. Rather than inducing vulnerability to interference, memory retrieval appeared to aid the preservation of existing sequence knowledge relative to a no-retrieval control group. These findings suggest that memory retrieval followed by new learning does not reliably induce human memory updating via reconsolidation.


Subjects
Learning/physiology, Memory/physiology, Mental Recall/physiology, Reinforcement (Psychology), Adolescent, Adult, Female, Humans, Male, Middle Aged, Young Adult
4.
Psychol Sci ; : 9567976231221573, 2023 Dec 27.
Article in English | MEDLINE | ID: mdl-38150599
5.
Behav Brain Sci ; 41: e132, 2018 01.
Article in English | MEDLINE | ID: mdl-31064517

ABSTRACT

Replication is the cornerstone of science - but when and why? Not all studies need replication, especially when resources are limited. We propose that a decision-making framework based on Bayesian philosophy of science provides a basis for choosing which studies to replicate.
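The abstract above does not spell out the framework's mechanics, but its core intuition can be illustrated with a toy Bayesian calculation (all quantities here are assumed for illustration, not drawn from the paper): a replication shifts belief most for claims whose truth is most uncertain, which is one principled basis for prioritizing replication targets.

```python
# Illustrative sketch only -- not the authors' actual framework.
# Expected shift in belief about a claim from running one replication,
# given an assumed prior P(true), replication power, and alpha.

def posterior(prior: float, power: float, alpha: float, significant: bool) -> float:
    """Bayes' rule: P(claim true | replication outcome)."""
    if significant:
        like_true, like_false = power, alpha
    else:
        like_true, like_false = 1 - power, 1 - alpha
    num = like_true * prior
    return num / (num + like_false * (1 - prior))

def expected_belief_change(prior: float, power: float = 0.9, alpha: float = 0.05) -> float:
    """Expected |posterior - prior|, weighting each outcome by its marginal probability."""
    p_sig = power * prior + alpha * (1 - prior)
    shift_sig = abs(posterior(prior, power, alpha, True) - prior)
    shift_ns = abs(posterior(prior, power, alpha, False) - prior)
    return p_sig * shift_sig + (1 - p_sig) * shift_ns

# A claim we are maximally uncertain about gains more from replication
# than one we already believe strongly:
assert expected_belief_change(0.5) > expected_belief_change(0.95)
```

Under these assumed numbers, a claim at 50% prior credence yields an expected belief shift of roughly 0.42, versus roughly 0.08 for a claim already at 95% credence.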


Subjects
Decision Making, Philosophy, Bayes Theorem, Research
8.
Nat Hum Behav ; 7(1): 15-26, 2023 01.
Article in English | MEDLINE | ID: mdl-36707644

ABSTRACT

Flexibility in the design, analysis and interpretation of scientific studies creates a multiplicity of possible research outcomes. Scientists are granted considerable latitude to selectively use and report the hypotheses, variables and analyses that create the most positive, coherent and attractive story while suppressing those that are negative or inconvenient. This creates a risk of bias that can lead to scientists fooling themselves and fooling others. Preregistration involves declaring a research plan (for example, hypotheses, design and statistical analyses) in a public registry before the research outcomes are known. Preregistration (1) reduces the risk of bias by encouraging outcome-independent decision-making and (2) increases transparency, enabling others to assess the risk of bias and calibrate their confidence in research outcomes. In this Perspective, we briefly review the historical evolution of preregistration in medicine, psychology and other domains, clarify its pragmatic functions, discuss relevant meta-research, and provide recommendations for scientists and journal editors.


Subjects
Mental Processes, Research Design, Humans, Registries
9.
R Soc Open Sci ; 10(10): 230568, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37830032

ABSTRACT

Background. Although preregistration can reduce researcher bias and increase transparency in primary research settings, it is less applicable to secondary data analysis. An alternative method that affords additional protection from researcher bias, which cannot be gained from conventional forms of preregistration alone, is an Explore and Confirm Analysis Workflow (ECAW). In this workflow, a data management organization initially provides access to only a subset of their dataset to researchers who request it. The researchers then prepare an analysis script based on the subset of data, upload the analysis script to a registry, and then receive access to the full dataset. ECAWs aim to achieve similar goals to preregistration, but make access to the full dataset contingent on compliance. The present survey aimed to garner information from the research community where ECAWs could be applied - employing the Avon Longitudinal Study of Parents and Children (ALSPAC) as a case example. Methods. We emailed a Web-based survey to researchers who had previously applied for access to ALSPAC's transgenerational observational dataset. Results. We received 103 responses, for a 9% response rate. The results suggest that - at least among our sample of respondents - ECAWs hold the potential to serve their intended purpose and appear relatively acceptable. For example, only 10% of respondents disagreed that ALSPAC should run a study on ECAWs (versus 55% who agreed). However, as many as 26% of respondents agreed that they would be less willing to use ALSPAC data if they were required to use an ECAW (versus 45% who disagreed). Conclusion. Our data and findings provide information for organizations and individuals interested in implementing ECAWs and related interventions. Preregistration: https://osf.io/g2fw5. Deviations from the preregistration are outlined in electronic supplementary material A.

10.
Perspect Psychol Sci ; 17(1): 239-251, 2022 01.
Article in English | MEDLINE | ID: mdl-33682488

ABSTRACT

Psychologists are navigating an unprecedented period of introspection about the credibility and utility of their discipline. Reform initiatives emphasize the benefits of transparency and reproducibility-related research practices; however, adoption across the psychology literature is unknown. Estimating the prevalence of such practices will help to gauge the collective impact of reform initiatives, track progress over time, and calibrate future efforts. To this end, we manually examined a random sample of 250 psychology articles published between 2014 and 2017. Over half of the articles were publicly available (154/237, 65%, 95% confidence interval [CI] = [59%, 71%]); however, sharing of research materials (26/183; 14%, 95% CI = [10%, 19%]), study protocols (0/188; 0%, 95% CI = [0%, 1%]), raw data (4/188; 2%, 95% CI = [1%, 4%]), and analysis scripts (1/188; 1%, 95% CI = [0%, 1%]) was rare. Preregistration was also uncommon (5/188; 3%, 95% CI = [1%, 5%]). Many articles included a funding disclosure statement (142/228; 62%, 95% CI = [56%, 69%]), but conflict-of-interest statements were less common (88/228; 39%, 95% CI = [32%, 45%]). Replication studies were rare (10/188; 5%, 95% CI = [3%, 8%]), and few studies were included in systematic reviews (21/183; 11%, 95% CI = [8%, 16%]) or meta-analyses (12/183; 7%, 95% CI = [4%, 10%]). Overall, the results suggest that transparency and reproducibility-related research practices were far from routine. These findings establish baseline prevalence estimates against which future progress toward increasing the credibility and utility of psychology research can be compared.
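The bracketed interval estimates above are consistent with a Wilson score interval for a binomial proportion; the sketch below is illustrative (the authors' exact method is not stated in this abstract) but reproduces the first quoted interval, 154/237 → 95% CI [59%, 71%].

```python
from math import sqrt

def wilson_ci(k: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion k/n."""
    p = k / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = (z / (1 + z * z / n)) * sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_ci(154, 237)  # public-availability estimate from the abstract
print(f"{154/237:.0%} [{lo:.0%}, {hi:.0%}]")  # -> 65% [59%, 71%]
```

Unlike the naive Wald interval, the Wilson interval stays within [0, 1] and behaves sensibly for proportions near 0% or 100%, such as the 0/188 protocol-sharing estimate above.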


Subjects
Publications, Research Design, Humans, Prevalence, Reproducibility of Results, Systematic Reviews as Topic
11.
R Soc Open Sci ; 9(8): 220139, 2022 Aug.
Article in English | MEDLINE | ID: mdl-36039285

ABSTRACT

Journals exert considerable control over letters, commentaries and online comments that criticize prior research (post-publication critique). We assessed policies (Study One) and practice (Study Two) related to post-publication critique at 15 top-ranked journals in each of 22 scientific disciplines (N = 330 journals). Two-hundred and seven (63%) journals accepted post-publication critique and often imposed limits on length (median 1000, interquartile range (IQR) 500-1200 words) and time-to-submit (median 12, IQR 4-26 weeks). The most restrictive limits were 175 words and two weeks; some policies imposed no limits. Of 2066 randomly sampled research articles published in 2018 by journals accepting post-publication critique, 39 (1.9%, 95% confidence interval [1.4, 2.6]) were linked to at least one post-publication critique (there were 58 post-publication critiques in total). Of the 58 post-publication critiques, 44 received an author reply, of which 41 asserted that original conclusions were unchanged. Clinical Medicine had the most active culture of post-publication critique: all journals accepted post-publication critique and published the most post-publication critique overall, but also imposed the strictest limits on length (median 400, IQR 400-550 words) and time-to-submit (median 4, IQR 4-6 weeks). Our findings suggest that top-ranked academic journals often pose serious barriers to the cultivation, documentation and dissemination of post-publication critique.

12.
R Soc Open Sci ; 8(1): 201494, 2021 Jan.
Article in English | MEDLINE | ID: mdl-33614084

ABSTRACT

For any scientific report, repeating the original analyses upon the original data should yield the original outcomes. We evaluated analytic reproducibility in 25 Psychological Science articles awarded open data badges between 2014 and 2015. Initially, 16 (64%, 95% confidence interval [43,81]) articles contained at least one 'major numerical discrepancy' (>10% difference) prompting us to request input from original authors. Ultimately, target values were reproducible without author involvement for 9 (36% [20,59]) articles; reproducible with author involvement for 6 (24% [8,47]) articles; not fully reproducible with no substantive author response for 3 (12% [0,35]) articles; and not fully reproducible despite author involvement for 7 (28% [12,51]) articles. Overall, 37 major numerical discrepancies remained out of 789 checked values (5% [3,6]), but original conclusions did not appear affected. Non-reproducibility was primarily caused by unclear reporting of analytic procedures. These results highlight that open data alone is not sufficient to ensure analytic reproducibility.
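As a minimal sketch of the paper's >10% criterion for a 'major numerical discrepancy' (the authors' exact operationalization may differ), a reproduced value can be flagged by its relative difference from the reported value:

```python
# Hypothetical sketch of a "major numerical discrepancy" check,
# i.e. a reproduced value differing from the reported one by more than 10%.

def is_major_discrepancy(reported: float, reproduced: float, threshold: float = 0.10) -> bool:
    """True if the relative difference from the reported value exceeds the threshold."""
    if reported == 0:
        # No relative scale to compare against; any nonzero deviation is flagged.
        return reproduced != 0
    return abs(reproduced - reported) / abs(reported) > threshold

assert is_major_discrepancy(2.50, 2.10)      # 16% off -> major
assert not is_major_discrepancy(2.50, 2.45)  # 2% off -> minor
```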

13.
PLoS One ; 15(10): e0239598, 2020.
Article in English | MEDLINE | ID: mdl-33002031

ABSTRACT

Scientific claims in biomedical research are typically derived from statistical analyses. However, misuse or misunderstanding of statistical procedures and results permeate the biomedical literature, affecting the validity of those claims. One approach journals have taken to address this issue is to enlist expert statistical reviewers. How many journals do this, how statistical review is incorporated, and how its value is perceived by editors is of interest. Here we report an expanded version of a survey conducted more than 20 years ago by Goodman and colleagues (1998) with the intention of characterizing contemporary statistical review policies at leading biomedical journals. We received eligible responses from 107 of 364 (28%) journals surveyed, across 57 fields, mostly from editors in chief. 34% (36/107) rarely or never used specialized statistical review, 34% (36/107) used it for 10-50% of their articles, and 23% used it for all articles. These numbers have changed little since 1998 in spite of dramatically increased concern about research validity. The vast majority of editors regarded statistical review as having substantial incremental value beyond regular peer review and expressed comparatively little concern about the potential increase in reviewing time, cost, and difficulty identifying suitable statistical reviewers. Improved statistical education of researchers and different ways of employing statistical expertise are needed. Several proposals are discussed.


Subjects
Periodicals as Topic, Statistics as Topic, Biomedical Research/methods, Biomedical Research/standards, Biomedical Research/statistics & numerical data, Data Interpretation (Statistical), Editorial Policies, Humans, Peer Review, Periodicals as Topic/standards, Periodicals as Topic/statistics & numerical data, Reproducibility of Results, Statistics as Topic/methods, Statistics as Topic/standards, Surveys and Questionnaires
14.
J Exp Psychol Appl ; 26(3): 411-421, 2020 Sep.
Article in English | MEDLINE | ID: mdl-31971418

ABSTRACT

Teachers around the world hold a considerable number of misconceptions about education. Consequently, schools can become epicenters for dubious practices that might jeopardize the quality of teaching and negatively influence students' wellbeing. The main objective of this study was to assess the efficacy of refutation texts in the correction of erroneous ideas among in-service teachers. The results of Experiment 1 indicate that refutation texts can be an effective means to correct false ideas among educators, even for strongly endorsed misconceptions. However, the results of Experiment 2 suggest that these effects may be short-lived. Furthermore, attempts to correct misconceptions seemed to have no beneficial effect on teachers' intention to implement educational practices that are based on those erroneous beliefs. The implications of these results for the training of preservice and in-service teachers are discussed. (PsycInfo Database Record (c) 2020 APA, all rights reserved).


Subjects
Communication, Evidence-Based Practice, Intention, School Teachers/psychology, Adult, Female, Humans, Male, Schools, Spain, Surveys and Questionnaires
15.
R Soc Open Sci ; 7(2): 190806, 2020 Feb.
Article in English | MEDLINE | ID: mdl-32257301

ABSTRACT

Serious concerns about research quality have catalysed a number of reform initiatives intended to improve transparency and reproducibility and thus facilitate self-correction, increase efficiency and enhance research credibility. Meta-research has evaluated the merits of some individual initiatives; however, this may not capture broader trends reflecting the cumulative contribution of these efforts. In this study, we manually examined a random sample of 250 articles in order to estimate the prevalence of a range of transparency and reproducibility-related indicators in the social sciences literature published between 2014 and 2017. Few articles indicated availability of materials (16/151, 11% [95% confidence interval, 7% to 16%]), protocols (0/156, 0% [0% to 1%]), raw data (11/156, 7% [2% to 13%]) or analysis scripts (2/156, 1% [0% to 3%]), and no studies were pre-registered (0/156, 0% [0% to 1%]). Some articles explicitly disclosed funding sources (or lack of; 74/236, 31% [25% to 37%]) and some declared no conflicts of interest (36/236, 15% [11% to 20%]). Replication studies were rare (2/156, 1% [0% to 3%]). Few studies were included in evidence synthesis via systematic review (17/151, 11% [7% to 16%]) or meta-analysis (2/151, 1% [0% to 3%]). Less than half the articles were publicly available (101/250, 40% [34% to 47%]). Minimal adoption of transparency and reproducibility-related research practices could be undermining the credibility and efficiency of social science research. The present study establishes a baseline that can be revisited in the future to assess progress.

16.
Trends Cogn Sci ; 23(10): 815-818, 2019 10.
Article in English | MEDLINE | ID: mdl-31421987

ABSTRACT

Preregistration clarifies the distinction between planned and unplanned research by reducing unnoticed flexibility. This improves credibility of findings and calibration of uncertainty. However, making decisions before conducting analyses requires practice. During report writing, respecting both what was planned and what actually happened requires good judgment and humility in making claims.


Subjects
Registries, Research, Humans, Reproducibility of Results, Research Design
17.
PLoS One ; 13(8): e0201856, 2018.
Article in English | MEDLINE | ID: mdl-30071110

ABSTRACT

The vast majority of scientific articles published to date have not been accompanied by concomitant publication of the underlying research data upon which they are based. This state of affairs precludes the routine re-use and re-analysis of research data, undermining the efficiency of the scientific enterprise, and compromising the credibility of claims that cannot be independently verified. It may be especially important to make data available for the most influential studies that have provided a foundation for subsequent research and theory development. Therefore, we launched an initiative - the Data Ark - to examine whether we could retrospectively enhance the preservation and accessibility of important scientific data. Here we report the outcome of our efforts to retrieve, preserve, and liberate data from 111 of the most highly-cited articles published in psychology and psychiatry between 2006-2011 (n = 48) and 2014-2016 (n = 63). Most data sets were not made available (76/111, 68%, 95% CI [60, 77]), some were only made available with restrictions (20/111, 18%, 95% CI [10, 27]), and few were made available in a completely unrestricted form (15/111, 14%, 95% CI [5, 22]). Where extant data sharing systems were in place, they usually (17/22, 77%, 95% CI [54, 91]) did not allow unrestricted access. Authors reported several barriers to data sharing, including issues related to data ownership and ethical concerns. The Data Ark initiative could help preserve and liberate important scientific data, surface barriers to data sharing, and advance community discussions on data stewardship.


Subjects
Data Curation, Information Dissemination, Psychiatry, Psychology, Publishing, Scholarly Communication, Bibliometrics, Datasets as Topic, Humans, Ownership, Periodicals as Topic, Scholarly Communication/ethics
18.
R Soc Open Sci ; 5(8): 180448, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30225032

ABSTRACT

Access to data is a critical feature of an efficient, progressive and ultimately self-correcting scientific ecosystem. But the extent to which in-principle benefits of data sharing are realized in practice is unclear. Crucially, it is largely unknown whether published findings can be reproduced by repeating reported analyses upon shared data ('analytic reproducibility'). To investigate this, we conducted an observational evaluation of a mandatory open data policy introduced at the journal Cognition. Interrupted time-series analyses indicated a substantial post-policy increase in data availability statements (104/417, 25% pre-policy to 136/174, 78% post-policy), although not all data appeared reusable (23/104, 22% pre-policy to 85/136, 62% post-policy). For 35 of the articles determined to have reusable data, we attempted to reproduce 1324 target values. Ultimately, 64 values could not be reproduced within a 10% margin of error. For 22 articles all target values were reproduced, but 11 of these required author assistance. For 13 articles at least one value could not be reproduced despite author assistance. Importantly, there were no clear indications that original conclusions were seriously impacted. Mandatory open data policies can increase the frequency and quality of data sharing. However, suboptimal data curation, unclear analysis specification and reporting errors can impede analytic reproducibility, undermining the utility of data sharing and the credibility of scientific findings.

19.
J Exp Psychol Gen ; 145(5): 655-63, 2016 May.
Article in English | MEDLINE | ID: mdl-27077759

ABSTRACT

When a series of studies fails to replicate a well-documented effect, researchers might be tempted to use a "vote counting" approach to decide whether the effect is reliable - that is, simply comparing the number of successful and unsuccessful replications. Vohs's (2015) response to the absence of money priming effects reported by Rohrer, Pashler, and Harris (2015) provides an example of this approach. Unfortunately, vote counting is a poor strategy to assess the reliability of psychological findings because it neglects the impact of selection bias and questionable research practices. In the present comment, we show that a range of meta-analytic tools indicate irregularities in the money priming literature discussed by Rohrer et al. and Vohs, which all point to the conclusion that these effects are distorted by selection bias, reporting biases, or p-hacking. This could help to explain why money-priming effects have proven unreliable in a number of direct replication attempts in which biases have been minimized through preregistration or transparent reporting. Our major conclusion is that the simple proportion of significant findings is a poor guide to the reliability of research and that preregistered replications are an essential means to assess the reliability of money-priming effects.
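The selection-bias argument above can be made concrete with back-of-the-envelope arithmetic (the publication probabilities below are assumptions for illustration, not figures from the paper): even when no true effect exists, selective publication leaves a literature in which a substantial share of the published "votes" are successes.

```python
# Illustrative arithmetic (assumed numbers): with no true effect, every
# significant result is a false positive, yet selective publication makes
# the published vote count look mixed rather than uniformly null.
alpha = 0.05          # false-positive rate per study
p_pub_sig = 1.00      # assumption: significant results are always published
p_pub_ns = 0.10       # assumption: null results are published 10% of the time

share_sig_published = (alpha * p_pub_sig) / (
    alpha * p_pub_sig + (1 - alpha) * p_pub_ns
)
print(f"{share_sig_published:.0%} of published studies are 'successes'")  # -> 34%
```

A naive vote counter reading this literature would see roughly one "success" for every two "failures" and might conclude the effect is contested, when in fact it does not exist at all; meta-analytic bias diagnostics, not raw counts, are needed to detect this.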


Subjects
Emotions, Motivation, Politics, Poverty/psychology, Thinking, Female, Humans, Male
20.
Vision Res ; 99: 124-33, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24231115

ABSTRACT

Previous experience is thought to facilitate our ability to extract spatial and temporal regularities from cluttered scenes. However, little is known about how we may use this knowledge to predict future events. Here we test whether exposure to temporal sequences facilitates the visual recognition of upcoming stimuli. We presented observers with a sequence of leftwards and rightwards oriented gratings that was interrupted by a test stimulus. Observers were asked to indicate whether the orientation of the test stimulus matched their expectation based on the preceding sequence. Our results demonstrate that exposure to temporal sequences without feedback facilitates our ability to predict an upcoming stimulus. In particular, observers' performance improved following exposure to structured but not random sequences. Improved performance lasted for a prolonged period and generalized to untrained stimulus orientations rather than sequences of different global structure, suggesting that observers acquire knowledge of the sequence structure rather than its items. Further, this learning was compromised when observers performed a dual task resulting in increased attentional load. These findings suggest that exposure to temporal regularities in a scene allows us to accumulate knowledge about its global structure and predict future events.


Subjects
Attention/physiology, Learning/physiology, Time Perception/physiology, Visual Perception/physiology, Adult, Analysis of Variance, Female, Humans, Male, Photic Stimulation/methods, Reaction Time/physiology, Young Adult