Results 1 - 6 of 6
1.
Elife ; 12, 2023 May 22.
Article in English | MEDLINE | ID: mdl-37211820

ABSTRACT

Supervision is one important way to socialize Ph.D. candidates into open and responsible research. We hypothesized that open science practices (here, publishing open access and sharing data) would be more likely in empirical publications that were part of a Ph.D. thesis when the Ph.D. candidate's supervisor engaged in these practices than when the supervisor did not, or did so less often. Starting from thesis repositories at four Dutch University Medical Centers, we included 211 pairs of supervisors and Ph.D. candidates, resulting in a sample of 2062 publications. We determined open access status using UnpaywallR and open data using Oddpub, and additionally screened publications with potential open data statements manually. Eighty-three percent of our sample was published openly, and 9% had open data statements. Having a supervisor who published open access more often than the national average was associated with 1.99 times the odds of publishing open access; however, this effect became nonsignificant when correcting for institution. Having a supervisor who shared data was associated with 2.22 (CI: 1.19-4.12) times the odds of sharing data compared to having a supervisor who did not. This odds ratio increased to 4.6 (CI: 1.86-11.35) after removing false positives. The prevalence of open data in our sample was comparable to international studies; open access rates were higher. Whilst Ph.D. candidates spearhead initiatives to promote open science, this study adds value by investigating the role of supervisors in promoting open science.
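As a rough illustration of the odds-ratio statistic reported above, the following sketch computes an odds ratio with a Wald (log-scale) 95% confidence interval from a 2x2 table. The counts are hypothetical and are not the study's data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (log/Woolf method) for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, NOT taken from the study:
print(odds_ratio_ci(30, 70, 15, 85))
```

An odds ratio of 2.22 with a CI of 1.19-4.12, as in the abstract, means the interval excludes 1, hence the association is statistically significant at the 5% level.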

2.
PLoS One ; 17(6): e0269492, 2022.
Article in English | MEDLINE | ID: mdl-35749396

ABSTRACT

Concerns about research waste have fueled debate about incentivizing individual researchers and research institutions to conduct responsible research. We showed stakeholders a proof-of-principle dashboard with quantitative metrics of responsible research practices at University Medical Centers (UMCs). Our research question was: What are stakeholders' views on a dashboard that displays the adoption of responsible research practices at the UMC level? We recruited stakeholders (UMC leadership, support staff, funders, and experts in responsible research) to participate in online interviews. We applied content analysis to understand what stakeholders considered the strengths, weaknesses, opportunities, and threats of the dashboard and its metrics. Twenty-eight international stakeholders participated in online interviews. Stakeholders considered the dashboard helpful in providing a baseline before designing interventions and appreciated the focus on concrete behaviors. The main weaknesses concerned the lack of an overall narrative justifying the choice of metrics. Stakeholders hoped the dashboard would be supplemented with other metrics in the future but feared that making the dashboard public might put UMCs in a bad light. Our findings furthermore suggest a need for discussion with stakeholders to develop an overarching framework for responsible research evaluation and to get research institutions on board.


Subject(s)
Benchmarking, Humans
3.
Res Integr Peer Rev ; 4: 25, 2019.
Article in English | MEDLINE | ID: mdl-31819806

ABSTRACT

BACKGROUND: There is increasing evidence that research misbehaviour is common, especially in its minor forms. Previous studies on research misbehaviour primarily focused on the biomedical and social sciences, and evidence from the natural sciences and humanities is scarce. We investigated what academic researchers in Amsterdam perceived to be detrimental research misbehaviours in their respective disciplinary fields. METHODS: We used an explanatory sequential mixed methods design. First, survey participants from four disciplinary fields rated the perceived frequency and impact of research misbehaviours from a list of 60. We then combined these into a top-five ranking of the most detrimental research misbehaviours at the aggregate level, stratified by disciplinary field. Second, in focus group interviews, participants from each academic rank and disciplinary field were asked to reflect on the research misbehaviours most relevant to their disciplinary field. We used a participative ranking methodology to lead participants toward consensus on which research misbehaviours are most detrimental. RESULTS: In total, 1080 researchers completed the survey (response rate: 15%) and 61 participated in the focus groups (three to eight researchers per group). Insufficient supervision consistently ranked highest in the survey regardless of disciplinary field, and the focus groups confirmed this. Important themes in the focus groups were insufficient supervision, sloppy science, and sloppy peer review. Biomedical researchers and social science researchers were primarily concerned with sloppy science and insufficient supervision. Natural sciences and humanities researchers discussed sloppy reviewing and theft of ideas by reviewers, a form of plagiarism. Focus group participants further provided examples of particular research misbehaviours they were confronted with and how these impacted their work as researchers.
CONCLUSION: We found that insufficient supervision and various forms of sloppy science scored highly on aggregate detrimental impact across all disciplinary fields. Researchers from the natural sciences and humanities also perceived nepotism to have major impact at the aggregate level, and researchers from the natural sciences regarded fabrication of data as having major impact as well. The focus group interviews helped us understand how researchers interpreted 'insufficient supervision'. In addition, the focus group participants added insight into sloppy science in practice. Researchers from the natural sciences and humanities added new research misbehaviours specific to their disciplinary fields to the list, such as the stealing of ideas before publication. This improves our understanding of research misbehaviour beyond the social and biomedical fields.

4.
PLoS One ; 14(6): e0217931, 2019.
Article in English | MEDLINE | ID: mdl-31216293

ABSTRACT

Publications largely determine whether a researcher can stay in academia ("publish or perish"). While some pressure to publish may incentivise high-quality research, too much publication pressure is likely to have detrimental effects on both the scientific enterprise and individual researchers. Our research question was: What is the level of perceived publication pressure in the four academic institutions in Amsterdam, and does the pressure to publish differ between academic ranks and disciplinary fields? Surveying researchers in Amsterdam with the revised Publication Pressure Questionnaire, we find that a negative attitude towards the current publication climate is present across academic ranks and disciplinary fields. Postdocs and assistant professors (M = 3.42) perceive the greatest publication stress, and PhD students (M = 2.44) perceive a significant lack of resources to relieve publication stress. The results indicate the need for a healthier publication climate in which the quality and integrity of research is rewarded.


Subject(s)
Academic Performance/standards, Publications/standards, Researchers, Universities, Academic Performance/trends, Bibliometrics, Employment, Humans, Publications/trends, Research, Research Report/standards, Research Report/trends, Social Sciences/standards, Surveys and Questionnaires
5.
Res Integr Peer Rev ; 4: 7, 2019.
Article in English | MEDLINE | ID: mdl-31007948

ABSTRACT

BACKGROUND: The emphasis on impact factors and the quantity of publications intensifies competition between researchers. This competition was traditionally considered an incentive to produce high-quality work, but it has unwanted side-effects, such as publication pressure. To measure the effect of publication pressure on researchers, the Publication Pressure Questionnaire (PPQ) was developed. Use of the PPQ brought to light some issues that motivated a revision. METHOD: We constructed two new subscales based on work stress models using the facet method. We administered the revised PPQ (PPQr) to a convenience sample together with the Maslach Burnout Inventory (MBI) and the Work Design Questionnaire (WDQ). To assess which items best measured publication pressure, we carried out a principal component analysis (PCA). Reliability was considered sufficient when Cronbach's alpha > 0.7. Finally, we administered the PPQr to a larger, independent sample of researchers to check the reliability of the revised version. RESULTS: Three components were identified as 'stress', 'attitude', and 'resources'. We selected 3 x 6 = 18 items with high loadings in the three-component solution. Based on the convenience sample, Cronbach's alphas were 0.83 for stress, 0.80 for attitude, and 0.76 for resources. We checked the validity of the PPQr by inspecting its correlations with the MBI and the WDQ. Stress correlated 0.62 with the MBI's emotional exhaustion subscale, and resources correlated 0.50 with the relevant WDQ subscales. To assess the internal structure of the PPQr in the independent reliability sample, we conducted a principal component analysis; the three-component solution explains 50% of the variance. Cronbach's alphas were 0.80, 0.78, and 0.75 for stress, attitude, and resources, respectively. CONCLUSION: We conclude that the PPQr is a valid and reliable instrument to measure publication pressure in academic researchers from all disciplinary fields. The PPQr relates strongly to burnout and could also be useful for policy makers and research institutions in assessing the degree of publication pressure in their institutes.
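The reliability statistic used above, Cronbach's alpha, can be computed directly from an item-score matrix. A minimal sketch, using hypothetical Likert-scale data rather than the PPQr responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 5-respondent, 3-item data (not the PPQr sample):
scores = [[3, 4, 3], [2, 2, 3], [4, 5, 4], [3, 3, 2], [5, 4, 5]]
alpha = cronbach_alpha(scores)  # compare against the 0.7 sufficiency threshold
```

A scale would pass the sufficiency criterion mentioned in the abstract when this value exceeds 0.7.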

6.
PLoS One ; 14(1): e0210599, 2019.
Article in English | MEDLINE | ID: mdl-30657778

ABSTRACT

Breaches of research integrity have shocked the academic community. Initially, explanations were sought at the level of individual researchers, but over time there was increasing recognition of the important role that the research integrity climate may play in influencing researchers' (mis)behavior. In this study, we aimed to assess whether researchers from different academic ranks and disciplinary fields experience the research integrity climate differently. We sent an online questionnaire to academic researchers in Amsterdam using the Survey of Organizational Research Climate. Bonferroni-corrected mean differences showed that junior researchers (PhD students, postdocs, and assistant professors) perceive the research integrity climate more negatively than senior researchers (associate and full professors). Junior researchers find their supervisors less committed to talking about key research integrity principles than senior researchers do (MD = -.39, CI = -.55, -.24). PhD students perceive more competition and suspicion among colleagues (MD = -.19, CI = -.35, -.05) than associate and full professors. We found that researchers from the natural sciences overall express a more positive perception of the research integrity climate. Researchers from the social sciences and the humanities perceive less fairness in their departments' expectations regarding publishing and acquiring funding compared to the natural sciences and biomedical sciences (MD = -.44, CI = -.74, -.15; MD = -.36, CI = -.61, -.11). The results suggest that department leaders in the humanities and social sciences should do more to set fairer expectations for their researchers, and that senior scientists should ensure junior researchers are socialized into research integrity practices and foster a climate in their group where suspicion among colleagues has no place.
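The Bonferroni correction applied to the mean differences above has a simple form: with m comparisons, each test is evaluated at significance level alpha / m. A minimal sketch with made-up p-values (not the study's results):

```python
def bonferroni(p_values, alpha=0.05):
    """Return, per test, whether it stays significant after Bonferroni
    correction: reject H0 only when p < alpha / (number of tests)."""
    threshold = alpha / len(p_values)
    return [p < threshold for p in p_values]

# Three hypothetical p-values; with m = 3 the per-test threshold is 0.05/3.
decisions = bonferroni([0.001, 0.02, 0.04])
```

This controls the family-wise error rate at alpha across all comparisons, at the cost of reduced power for each individual test.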


Subject(s)
Biomedical Research/ethics, Interdisciplinary Research/ethics, Perception, Researchers/statistics & numerical data, Surveys and Questionnaires, Female, Humans, Male, Netherlands, Regression Analysis