Results 1 - 20 of 90
1.
Am Psychol ; 2024 May 06.
Article in English | MEDLINE | ID: mdl-38709631

ABSTRACT

Open data collected from research participants creates a tension between scholarly values of transparency and sharing, on the one hand, and privacy and security, on the other hand. A common solution is to make data sets anonymous by removing personally identifying information (e.g., names or worker IDs) before sharing. However, ostensibly anonymized data sets may be at risk of re-identification if they include demographic information. In the present article, we provide researchers with broadly applicable guidance and tangible tools so that they can engage in open science practices without jeopardizing participants' privacy. Specifically, we (a) review current privacy standards, (b) describe computer science data protection frameworks and their adaptability to the social sciences, (c) provide practical guidance for assessing and addressing re-identification risk, (d) introduce two open-source algorithms developed for psychological scientists, MinBlur and MinBlurLite, to increase privacy while maintaining the integrity of open data, and (e) highlight aspects of ethical data sharing that require further attention. Ultimately, the risk of re-identification should not dissuade engagement with open science practices. Instead, technical innovations should be developed and harnessed so that science can be as open as possible to promote transparency and sharing and as closed as necessary to maintain privacy and security. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
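
The MinBlur and MinBlurLite algorithms are not reproduced here, but the kind of re-identification risk assessment the article recommends can be illustrated with a k-anonymity-style check: count how many respondents share each combination of quasi-identifying demographic fields and flag the rare combinations. The following Python sketch is only an illustration under that assumption; the column names and the threshold k are hypothetical.

    # Minimal sketch (not the MinBlur algorithm itself): flag demographic
    # combinations shared by fewer than k respondents, since such rows are
    # the easiest to re-identify. Column names and the threshold are hypothetical.
    from collections import Counter

    def risky_rows(records, quasi_identifiers=("age", "gender", "zip_code"), k=5):
        """Return indices of records whose quasi-identifier combination
        occurs fewer than k times in the data set."""
        keys = [tuple(r[q] for q in quasi_identifiers) for r in records]
        counts = Counter(keys)
        return [i for i, key in enumerate(keys) if counts[key] < k]

    # Example: the second record is unique on (age, gender, zip_code), so it
    # would be flagged for coarsening (e.g., age bands) before sharing.
    data = [
        {"age": 34, "gender": "F", "zip_code": "97201"},
        {"age": 71, "gender": "M", "zip_code": "02139"},
        {"age": 34, "gender": "F", "zip_code": "97201"},
    ]
    print(risky_rows(data, k=2))  # -> [1]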

2.
Nat Hum Behav ; 8(2): 311-319, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37945809

ABSTRACT

Failures to replicate evidence of new discoveries have forced scientists to ask whether this unreliability is due to suboptimal implementation of methods or whether presumptively optimal methods are not, in fact, optimal. This paper reports an investigation by four coordinated laboratories of the prospective replicability of 16 novel experimental findings using rigour-enhancing practices: confirmatory tests, large sample sizes, preregistration and methodological transparency. In contrast to past systematic replication efforts that reported replication rates averaging 50%, replication attempts here produced the expected effects with significance testing (P < 0.05) in 86% of attempts, slightly exceeding the maximum expected replicability based on observed effect sizes and sample sizes. When one lab attempted to replicate an effect discovered by another lab, the effect size in the replications was 97% that in the original study. This high replication rate justifies confidence in rigour-enhancing methods to increase the replicability of new discoveries.


Subjects
Research Design; Social Behavior; Humans; Prospective Studies; Reproducibility of Results; Sample Size
3.
Annu Rev Psychol ; 73: 719-748, 2022 01 04.
Article in English | MEDLINE | ID: mdl-34665669

ABSTRACT

Replication (an important, uncommon, and misunderstood practice) is gaining appreciation in psychology. Achieving replicability is important for making research progress. If findings are not replicable, then prediction and theory development are stifled. If findings are replicable, then interrogation of their meaning and validity can advance knowledge. Assessing replicability can be productive for generating and testing hypotheses by actively confronting current understandings to identify weaknesses and spur innovation. For psychology, the 2010s might be characterized as a decade of active confrontation. Systematic and multi-site replication projects assessed current understandings and observed surprising failures to replicate many published findings. Replication efforts highlighted sociocultural challenges such as disincentives to conduct replications and a tendency to frame replication as a personal attack rather than a healthy scientific practice, and they raised awareness that replication contributes to self-correction. Nevertheless, innovation in doing and understanding replication and its cousins, reproducibility and robustness, has positioned psychology to improve research practices and accelerate progress.


Subjects
Research Design; Humans; Reproducibility of Results
4.
Elife ; 10, 2021 12 07.
Article in English | MEDLINE | ID: mdl-34874005

ABSTRACT

Replicability is an important feature of scientific research, but aspects of contemporary research culture, such as an emphasis on novelty, can make replicability seem less important than it should be. The Reproducibility Project: Cancer Biology was set up to provide evidence about the replicability of preclinical research in cancer biology by repeating selected experiments from high-impact papers. A total of 50 experiments from 23 papers were repeated, generating data about the replicability of a total of 158 effects. Most of the original effects were positive effects (136), with the rest being null effects (22). A majority of the original effect sizes were reported as numerical values (117), with the rest being reported as representative images (41). We employed seven methods to assess replicability, and some of these methods were not suitable for all the effects in our sample. One method compared effect sizes: for positive effects, the median effect size in the replications was 85% smaller than the median effect size in the original experiments, and 92% of replication effect sizes were smaller than the original. The other methods were binary - the replication was either a success or a failure - and five of these methods could be used to assess both positive and null effects when effect sizes were reported as numerical values. For positive effects, 40% of replications (39/97) succeeded according to three or more of these five methods, and for null effects 80% of replications (12/15) were successful on this basis; combining positive and null effects, the success rate was 46% (51/112). A successful replication does not definitively confirm an original finding or its theoretical interpretation. Equally, a failure to replicate does not disconfirm a finding, but it does suggest that additional investigation is needed to establish its reliability.


Subjects
Biomedical Research/methods; Neoplasms; Reproducibility of Results; Animals; Humans; Research Design/standards
5.
Elife ; 10, 2021 12 07.
Article in English | MEDLINE | ID: mdl-34874008

ABSTRACT

We conducted the Reproducibility Project: Cancer Biology to investigate the replicability of preclinical research in cancer biology. The initial aim of the project was to repeat 193 experiments from 53 high-impact papers, using an approach in which the experimental protocols and plans for data analysis had to be peer reviewed and accepted for publication before experimental work could begin. However, the various barriers and challenges we encountered while designing and conducting the experiments meant that we were only able to repeat 50 experiments from 23 papers. Here we report these barriers and challenges. First, many original papers failed to report key descriptive and inferential statistics: the data needed to compute effect sizes and conduct power analyses was publicly accessible for just 4 of 193 experiments. Moreover, despite contacting the authors of the original papers, we were unable to obtain these data for 68% of the experiments. Second, none of the 193 experiments were described in sufficient detail in the original paper to enable us to design protocols to repeat the experiments, so we had to seek clarifications from the original authors. While authors were extremely or very helpful for 41% of experiments, they were minimally helpful for 9% of experiments, and not at all helpful (or did not respond to us) for 32% of experiments. Third, once experimental work started, 67% of the peer-reviewed protocols required modifications to complete the research and just 41% of those modifications could be implemented. Cumulatively, these three factors limited the number of experiments that could be repeated. This experience draws attention to a basic and fundamental concern about replication - it is hard to assess whether reported findings are credible.


Subjects
Biomedical Research/methods; Neoplasms; Reproducibility of Results; Animals; Humans; Research Design
6.
Elife ; 10, 2021 11 09.
Article in English | MEDLINE | ID: mdl-34751133

ABSTRACT

Any large dataset can be analyzed in a number of ways, and it is possible that the use of different analysis strategies will lead to different results and conclusions. One way to assess whether the results obtained depend on the analysis strategy chosen is to employ multiple analysts and leave each of them free to follow their own approach. Here, we present consensus-based guidance for conducting and reporting such multi-analyst studies, and we discuss how broader adoption of the multi-analyst approach has the potential to strengthen the robustness of results and conclusions obtained from analyses of datasets in basic and applied research.


Subjects
Consensus; Data Analysis; Datasets as Topic; Research
8.
R Soc Open Sci ; 8(7): 181308, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34295507

ABSTRACT

There is evidence that prediction markets are useful tools to aggregate information on researchers' beliefs about scientific results, including the outcome of replications. In this study, we use prediction markets to forecast the results of novel experimental designs that test established theories. We set up prediction markets for hypotheses tested in the Defense Advanced Research Projects Agency's (DARPA) Next Generation Social Science (NGS2) programme. Researchers were invited to bet on whether 22 hypotheses would be supported or not. We define support as a test result in the same direction as hypothesized, with a Bayes factor of at least 10 (i.e., the observed data are at least 10 times more likely under the tested hypothesis than under the null hypothesis). In addition to betting on this binary outcome, we asked participants to bet on the expected effect size (in Cohen's d) for each hypothesis. Our goal was to have at least 50 participants sign up for these markets. Although we met this goal, only 39 participants actually traded. Participants also completed a survey on both the binary result and the effect size. We find that neither prediction markets nor surveys performed well in predicting outcomes for NGS2.
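
For reference, the support criterion above rests on the standard Bayes factor, which compares how well the observed data are predicted under the tested hypothesis versus the null hypothesis:

    BF_{10} = \frac{P(D \mid H_1)}{P(D \mid H_0)} \ge 10,

where D denotes the observed data, H_1 the tested hypothesis, and H_0 the null hypothesis. This is the general definition rather than a formula specific to the NGS2 analyses; the same-direction requirement in the abstract is an additional condition on top of this threshold.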

9.
Nat Hum Behav ; 5(8): 990-997, 2021 08.
Article in English | MEDLINE | ID: mdl-34168323

ABSTRACT

In registered reports (RRs), initial peer review and in-principle acceptance occur before knowing the research outcomes. This combats publication bias and distinguishes planned from unplanned research. How RRs could improve the credibility of research findings is straightforward, but there is little empirical evidence. Also, there could be unintended costs such as reducing novelty. Here, 353 researchers peer reviewed a pair of papers from 29 published RRs from psychology and neuroscience and 57 non-RR comparison papers. RRs numerically outperformed comparison papers on all 19 criteria (mean difference 0.46, scale range -4 to +4) with effects ranging from RRs being statistically indistinguishable from comparison papers in novelty (0.13, 95% credible interval [-0.24, 0.49]) and creativity (0.22, [-0.14, 0.58]) to sizeable improvements in rigour of methodology (0.99, [0.62, 1.35]) and analysis (0.97, [0.60, 1.34]) and overall paper quality (0.66, [0.30, 1.02]). RRs could improve research quality while reducing publication bias and ultimately improve the credibility of the published literature.


Subjects
Peer Review, Research; Registries; Research/standards; Data Analysis; Humans; Neurosciences; Psychology; Research Design/standards; Research Report/standards
10.
J Particip Med ; 13(1): e23011, 2021 Mar 29.
Article in English | MEDLINE | ID: mdl-33779573

ABSTRACT

Sharing clinical trial data can provide value to research participants and communities by accelerating the development of new knowledge and therapies as investigators merge data sets to conduct new analyses, reproduce published findings to raise standards for original research, and learn from the work of others to generate new research questions. Nonprofit funders, including disease advocacy and patient-focused organizations, play a pivotal role in the promotion and implementation of data sharing policies. Funders are uniquely positioned to promote and support a culture of data sharing by serving as trusted liaisons between potential research participants and investigators who wish to access these participants' networks for clinical trial recruitment. In short, nonprofit funders can drive policies and influence research culture. The purpose of this paper is to detail a set of aspirational goals and forward-thinking, collaborative data sharing solutions that nonprofit funders can fold into existing funding policies. These goals convey the complexity of the opportunities and challenges facing nonprofit funders and the appropriate prioritization of data sharing within their organizations. They may also serve as a starting point for a data sharing toolkit for nonprofit funders of clinical trials, one that provides the clarity of mission and the mechanisms to enforce the data sharing practices their communities already expect are happening.

11.
Pers Soc Psychol Bull ; 47(2): 185-200, 2021 02.
Article in English | MEDLINE | ID: mdl-32493120

ABSTRACT

This meta-analysis tested theoretical predictions from balanced identity theory (BIT) and evaluated the validity of the zero points of the Implicit Association Test (IAT) and the self-report measures used to test these predictions. Twenty-one researchers contributed individual subject data from 36 experiments (total N = 12,773) that used both explicit and implicit measures of the social-cognitive constructs. The meta-analysis confirmed predictions of BIT's balance-congruity principle and simultaneously validated interpretation of the IAT's zero point as indicating absence of preference between two attitude objects. Statistical power afforded by the sample size enabled the first confirmations of balance-congruity predictions with self-report measures. Beyond these empirical results, the meta-analysis introduced a within-study statistical test of the balance-congruity principle, finding that it had greater efficiency than the previous best method. The meta-analysis's full data set has been publicly archived to enable further studies of interrelations among attitudes, stereotypes, and identities.


Subjects
Attitude; Models, Psychological; Stereotyping; Female; Humans; Male; Reproducibility of Results; Self Concept; Self Report; Social Identification; Statistics as Topic
12.
PLoS Biol ; 18(12): e3000937, 2020 12.
Article in English | MEDLINE | ID: mdl-33296358

ABSTRACT

Researchers face many, often seemingly arbitrary, choices in formulating hypotheses, designing protocols, collecting data, analyzing data, and reporting results. Opportunistic use of "researcher degrees of freedom" aimed at obtaining statistical significance increases the likelihood of obtaining and publishing false-positive results and overestimated effect sizes. Preregistration is a mechanism for reducing such degrees of freedom by specifying designs and analysis plans before observing the research outcomes. The effectiveness of preregistration may depend, in part, on whether the process facilitates sufficiently specific articulation of such plans. In this preregistered study, we compared 2 formats of preregistration available on the OSF: Standard Pre-Data Collection Registration and Prereg Challenge Registration (now called "OSF Preregistration," http://osf.io/prereg/). The Prereg Challenge format was a "structured" workflow with detailed instructions and an independent review to confirm completeness; the "Standard" format was "unstructured" with minimal direct guidance to give researchers flexibility for what to prespecify. Results of comparing random samples of 53 preregistrations from each format indicate that the "structured" format restricted the opportunistic use of researcher degrees of freedom better (Cliff's Delta = 0.49) than the "unstructured" format, but neither eliminated all researcher degrees of freedom. We also observed very low concordance among coders about the number of hypotheses (14%), indicating that they are often not clearly stated. We conclude that effective preregistration is challenging, and registration formats that provide effective guidance may improve the quality of research.
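
Cliff's Delta, the effect size reported above for the structured-versus-unstructured comparison, is a standard nonparametric statistic: the probability that a score from one group exceeds a score from the other, minus the reverse probability. A minimal Python sketch of the statistic follows; the example scores are hypothetical and are not the study's data.

    # Minimal sketch of Cliff's Delta: delta = P(x > y) - P(x < y),
    # ranging from -1 to +1. The example scores below are hypothetical.
    def cliffs_delta(xs, ys):
        greater = sum(1 for x in xs for y in ys if x > y)
        less = sum(1 for x in xs for y in ys if x < y)
        return (greater - less) / (len(xs) * len(ys))

    structured = [4, 5, 5, 3, 4]      # hypothetical specificity ratings
    unstructured = [2, 3, 4, 2, 3]
    print(cliffs_delta(structured, unstructured))  # -> 0.76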


Subjects
Data Collection/methods; Research Design/statistics & numerical data; Data Collection/standards; Data Collection/trends; Humans; Quality Control; Registries/statistics & numerical data; Research Design/trends
13.
R Soc Open Sci ; 7(10): 201520, 2020 Oct.
Article in English | MEDLINE | ID: mdl-33204484

ABSTRACT

Preprints increase accessibility and can speed scholarly communication if researchers view them as credible enough to read and use. Preprint services do not provide the heuristic cues of a journal's reputation, selection, and peer-review processes that, regardless of their flaws, are often used as a guide for deciding what to read. We conducted a survey of 3759 researchers across a wide range of disciplines to determine the importance of different cues for assessing the credibility of individual preprints and preprint services. We found that cues related to information about open science content and independent verification of author claims were rated as highly important for judging preprint credibility, and peer views and author information were rated as less important. As of early 2020, very few preprint services display any of the most important cues. By adding such cues, services may be able to help researchers better assess the credibility of preprints, enabling scholars to more confidently use preprints, thereby accelerating scientific communication and discovery.

15.
PLoS Biol ; 18(3): e3000691, 2020 03.
Article in English | MEDLINE | ID: mdl-32218571

ABSTRACT

Credibility of scientific claims is established with evidence for their replicability using new data. According to common understanding, replication is repeating a study's procedure and observing whether the prior finding recurs. This definition is intuitive, easy to apply, and incorrect. We propose that replication is a study for which any outcome would be considered diagnostic evidence about a claim from prior research. This definition reduces emphasis on operational characteristics of the study and increases emphasis on the interpretation of possible outcomes. The purpose of replication is to advance theory by confronting existing understanding with new evidence. Ironically, the value of replication may be strongest when existing understanding is weakest. Successful replication provides evidence of generalizability across the conditions that inevitably differ from the original study; unsuccessful replication indicates that the reliability of the finding may be more constrained than recognized previously. Defining replication as a confrontation of current theoretical expectations clarifies its important, exciting, and generative role in scientific progress.


Subjects
Research Design/statistics & numerical data; Research Design/standards; Data Interpretation, Statistical; Humans; Reproducibility of Results; Statistics as Topic
16.
Proc Natl Acad Sci U S A ; 117(3): 1389-1394, 2020 01 21.
Article in English | MEDLINE | ID: mdl-31919283

ABSTRACT

We report a randomized trial of a research ethics training intervention designed to enhance ethics communication in university science and engineering laboratories, focusing specifically on authorship and data management. The intervention is a project-based research ethics curriculum that was designed to enhance the ability of science and engineering research laboratory members to engage in the reason-giving and interpersonal communication necessary for ethical practice. The randomized trial was fielded in active faculty-led laboratories at two US research-intensive institutions. Here, we show that laboratory members perceived improvements in the quality of discourse on research ethics within their laboratories and enhanced awareness of the relevance and reasons for that discourse for their work, as measured by a survey administered over 4 months after the intervention. This training represents a paradigm shift compared with more typical module-based or classroom ethics instruction, which is divorced from the everyday workflow and practices within laboratories. It is designed to cultivate a campus culture of ethical science and engineering research in the very work settings where laboratory members interact.

17.
Gates Open Res ; 3: 1442, 2019.
Article in English | MEDLINE | ID: mdl-31850398

ABSTRACT

Serious concerns about the way research is organized collectively are increasingly being raised. They include the escalating costs of research and lower research productivity, low public trust in researchers to report the truth, lack of diversity, poor community engagement, ethical concerns over research practices, and irreproducibility. Open science (OS) collaborations comprise a set of practices, including open access publication, open data sharing and the absence of restrictive intellectual property rights, with which institutions, firms, governments and communities are experimenting in order to overcome these concerns. We gathered two groups of international representatives from a large variety of stakeholders to construct a toolkit to guide and facilitate data collection about OS and non-OS collaborations. Ultimately, the toolkit will be used to assess and study the impact of OS collaborations on research and innovation. The toolkit contains the following four elements: 1) an annual report form of quantitative data to be completed by OS partnership administrators; 2) a series of semi-structured interview guides for stakeholders; 3) a survey form for participants in OS collaborations; and 4) a set of other quantitative measures best collected by other organizations, such as research foundations and governmental or intergovernmental agencies. We opened our toolkit to community comment and input. We present the resulting toolkit for use by government and philanthropic grantors, institutions, researchers and community organizations with the aim of measuring the implementation and impact of OS partnerships across these organizations. We invite these and other stakeholders not only to measure, but also to share the resulting data so that social scientists and policy makers can analyse the data across projects.

18.
Trends Cogn Sci ; 23(10): 815-818, 2019 10.
Article in English | MEDLINE | ID: mdl-31421987

ABSTRACT

Preregistration clarifies the distinction between planned and unplanned research by reducing unnoticed flexibility. This improves credibility of findings and calibration of uncertainty. However, making decisions before conducting analyses requires practice. During report writing, respecting both what was planned and what actually happened requires good judgment and humility in making claims.


Subjects
Registries; Research; Humans; Reproducibility of Results; Research Design
19.
Perspect Psychol Sci ; 14(5): 711-733, 2019 09.
Article in English | MEDLINE | ID: mdl-31260639

ABSTRACT

Most scientific research is conducted by small teams of investigators who together formulate hypotheses, collect data, conduct analyses, and report novel findings. These teams operate independently as vertically integrated silos. Here we argue that scientific research that is horizontally distributed can provide substantial complementary value, aiming to maximize available resources, promote inclusiveness and transparency, and increase rigor and reliability. This alternative approach enables researchers to tackle ambitious projects that would not be possible under the standard model. Crowdsourced scientific initiatives vary in the degree of communication between project members, from largely independent work curated by a coordination team to crowd collaboration on shared activities. The potential benefits and challenges of large-scale collaboration span the entire research process: ideation, study design, data collection, data analysis, reporting, and peer review. Complementing traditional small science with crowdsourced approaches can accelerate the progress of science and improve the quality of scientific research.


Subjects
Crowdsourcing/methods; Interprofessional Relations; Science/methods; Utopias; Cooperative Behavior; Data Analysis; Data Collection/methods; Humans; Peer Review; Publishing; Research Design; Writing
20.
J Pers Soc Psychol ; 117(3): 522-559, 2019 Sep.
Article in English | MEDLINE | ID: mdl-31192631

ABSTRACT

Using a novel technique known as network meta-analysis, we synthesized evidence from 492 studies (87,418 participants) to investigate the effectiveness of procedures in changing implicit measures, which we define as response biases on implicit tasks. We also evaluated these procedures' effects on explicit and behavioral measures. We found that implicit measures can be changed, but effects are often relatively weak (|ds| < .30). Most studies focused on producing short-term changes with brief, single-session manipulations. Procedures that associate sets of concepts, invoke goals or motivations, or tax mental resources changed implicit measures the most, whereas procedures that induced threat, affirmation, or specific moods/emotions changed implicit measures the least. Bias tests suggested that implicit effects could be inflated relative to their true population values. Procedures changed explicit measures less consistently and to a smaller degree than implicit measures and generally produced trivial changes in behavior. Finally, changes in implicit measures did not mediate changes in explicit measures or behavior. Our findings suggest that changes in implicit measures are possible, but those changes do not necessarily translate into changes in explicit measures or behavior. (PsycINFO Database Record (c) 2019 APA, all rights reserved).


Subjects
Network Meta-Analysis; Psychological Tests; Psychology, Social; Social Perception; Humans