Results 1 - 7 of 7
1.
PLoS One ; 18(3): e0283893, 2023.
Article in English | MEDLINE | ID: mdl-37000889

ABSTRACT

In our study, we have empirically studied the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in the question whether the assessment of a paper can be influenced by numerical information that acts as an anchor (e.g. citation impact). We undertook a survey of corresponding authors with an available email address in the Web of Science database. The authors were asked to assess the quality of papers that they had cited in previous papers. Some authors were assigned to three treatment groups that received further information alongside the cited paper: citation impact information, information on the publishing journal (journal impact factor), or a numerical access code to enter the survey. The control group did not receive any further numerical information. We are interested in whether possible adjustments in the assessments can be produced not only by quality-related information (citation impact or journal impact), but also by numbers that are not related to quality, i.e. the access code. Our results show that the quality assessments of papers seem to depend on the citation impact information of single papers. The other anchors, such as an arbitrary number (an access code) and journal impact information, did not play an (important) role in the assessments of papers. The results point to a possible anchoring bias caused by insufficient adjustment: it seems that the respondents assessed cited papers differently when they observed paper impact values in the survey. We conclude that initiatives aiming to reduce the use of journal impact information in research evaluation either were already successful or overestimated the influence of this information.


Subjects
Journal Impact Factor , Publishing , Databases, Factual , Control Groups
2.
PLoS One ; 16(9): e0257307, 2021.
Article in English | MEDLINE | ID: mdl-34587179

ABSTRACT

In our planned study, we shall empirically study the assessment of cited papers within the framework of the anchoring-and-adjustment heuristic. We are interested in the question whether citation decisions are (mainly) driven by the quality of cited references. The design of our study is oriented towards the study by Teplitskiy, Duede [10]. We shall undertake a survey of corresponding authors with an available email address in the Web of Science database. The authors will be asked to assess the quality of papers that they cited in previous papers. Some authors will be assigned to three treatment groups that receive further information alongside the cited paper: citation information, information on the publishing journal (journal impact factor), or a numerical access code to enter the survey. The control group will not receive any further numerical information. In the statistical analyses, we estimate how (strongly) the quality assessments of the cited papers are adjusted by the respondents to the anchor value (citation, journal, or access code). Thus, we are interested in whether possible adjustments in the assessments can be produced not only by quality-related information (citation or journal), but also by numbers that are not related to quality, i.e. the access code. The results of the study may have important implications for quality assessments of papers by researchers and the role of numbers, citations, and journal metrics in assessment processes.


Subjects
Bibliometrics , Journal Impact Factor , Publications , Publishing/statistics & numerical data , Researchers , Data Management , Databases, Factual , Humans , Internet , Surveys and Questionnaires
3.
PLoS One ; 14(1): e0210160, 2019.
Article in English | MEDLINE | ID: mdl-30682052

ABSTRACT

In research into higher education, the evaluation of completion and dropout rates has generated a steady stream of interest for decades. While most studies calculate rates for both phenomena using only student and graduate numbers, we propose to additionally consider the budget available to universities. We transfer the idea of the excellence shift indicator [1] from the research area to the teaching area, in particular to the completion rate of educational entities. The graduation shift shows the institutions' ability to produce graduates as measured against their basic academic teaching efficiency. An important advantage of the graduation shift is that it avoids the well-known heterogeneity problem in efficiency measurements. Our study is based on German universities of applied sciences. Given their politically determined focus on education, this dataset is well-suited for introducing and evaluating the graduation shift. Using a comprehensive dataset covering the years 2008 to 2013, we show that the graduation shift produces results that correlate closely with the results of the well-known graduation rate and standard Data Envelopment Analysis (DEA). Compared to the graduation rate, the graduation shift is preferable because it takes the budget of institutions into account. Compared to the DEA, the computation of the graduation shift is easy, the results are robust, and non-economists can understand them. Thus, we recommend the graduation shift as an alternative method of efficiency measurement in the teaching area.


Subjects
Educational Measurement/statistics & numerical data , Efficiency , Science/education , Student Dropouts/statistics & numerical data , Universities/organization & administration , Datasets as Topic , Germany , Humans , Science/economics , Teaching/organization & administration , Teaching/statistics & numerical data , Universities/economics , Universities/statistics & numerical data
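The abstract's core idea, relating graduate output to the budget available to an institution, can be made concrete with a toy calculation. The formula below is invented for illustration (an institution's share of all graduates minus its share of the total budget) and is not the graduation shift indicator as defined in the paper or in [1]:

```python
def budget_adjusted_output(graduates, budget, sample):
    """Toy indicator: the institution's share of all graduates in the
    sample minus its share of the total budget. Positive values mean
    it produces more graduates than its funding share alone suggests.
    Illustrative only; not the graduation shift formula from the paper."""
    total_graduates = sum(g for g, _ in sample)
    total_budget = sum(b for _, b in sample)
    return graduates / total_graduates - budget / total_budget

# (graduates, budget in million EUR) for three hypothetical universities
unis = [(400, 20.0), (300, 30.0), (300, 50.0)]
for grads, budget in unis:
    print(round(budget_adjusted_output(grads, budget, unis), 2))
```

A positive value (the first university above) flags an institution that graduates students cheaply relative to the sample; a negative value flags the opposite.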
4.
J R Soc Med ; 106(1): 19-29, 2013 Jan.
Article in English | MEDLINE | ID: mdl-23358276

ABSTRACT

OBJECTIVE: To investigate whether the h index (a bibliometric tool which is increasingly used to assess and appraise an individual's research performance) could be improved to better measure the academic performance and citation profile of individual healthcare researchers. DESIGN: Cohort study. SETTING: Faculty of Medicine, Imperial College London, UK. PARTICIPANTS: Publication lists from 1 January 2000 until 31 December 2009 for 501 academic healthcare researchers from the Faculty of Medicine. MAIN OUTCOME MEASURES: The h index for each researcher was calculated over a nine-year period. The citation count for each researcher was differentiated into high (h(2) upper), core (h(2) centre) and low (h(2) lower) visibility areas. A segmented regression model (sRM) was used to statistically estimate the number of high-visibility publications (sRM value). The validity of the h index and the other proposed adjuncts was analysed against academic rank and conventional bibliometric indicators. RESULTS: Construct validity was demonstrated for the h index, h(2) upper, h(2) centre, h(2) lower and sRM value (all P < 0.05). Convergent validity of the h index and sRM value was shown by significant correlations with total number of publications (r = 0.89 and 0.86, respectively, P < 0.05) and total number of citations (r = 0.96 and 0.65, respectively, P < 0.05). Significant differences in h index and sRM value existed between non-physician and physician researchers (P < 0.05). CONCLUSIONS: This study supports the construct validity of the h index as a measure of healthcare researchers' academic rank. It also identifies the assessment value of our developed indices of h(2) upper, h(2) centre, h(2) lower and sRM. These can be applied in combination with the h index to provide additional objective evidence to appraise the performance and impact of an academic healthcare researcher.


Subjects
Bibliometrics , Health Services Research , Publications , Publishing , Researchers , Cohort Studies , Humans , London
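The h index at the centre of this study has a simple operational definition: a researcher has index h if h of their papers each have at least h citations. A minimal Python sketch of that definition (not the study's own code, which additionally computes the h(2) partitions and the sRM estimate):

```python
def h_index(citations):
    """Return the largest h such that the researcher has h papers
    with at least h citations each (Hirsch's definition)."""
    # Sort citation counts in descending order, then find the last
    # position where the count still meets or exceeds its 1-based rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have >= 4 citations
print(h_index([25, 8, 5, 3, 3]))  # 3
print(h_index([]))                # 0
```

Note how the single highly cited paper in the second list does not raise the index, which is exactly the insensitivity that the h(2) upper/centre/lower split described above is meant to address.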
6.
PLoS One ; 3(10): e3480, 2008.
Article in English | MEDLINE | ID: mdl-18941530

ABSTRACT

Does peer review fulfill its declared objective of identifying the best science and the best scientists? In order to answer this question, we analyzed the Long-Term Fellowship and the Young Investigator programmes of the European Molecular Biology Organization. Both programmes aim to identify and support the best postdoctoral fellows and young group leaders in the life sciences. We examined the association between the selection decisions and the scientific performance of the applicants. Our study involved publication and citation data for 668 applicants to the Long-Term Fellowship programme from the year 1998 (130 approved, 538 rejected) and 297 applicants to the Young Investigator programme (39 approved, 258 rejected) from the years 2001 and 2002. If quantity and impact of research publications are used as a criterion for scientific achievement, the results of (zero-truncated) negative binomial models show that the peer review process indeed selects scientists who perform at a higher level than the rejected ones subsequent to application. We determined the extent of errors due to over-estimation (type I errors) and under-estimation (type II errors) of future scientific performance. Our statistical analyses point out that between 26% and 48% of the decisions to award or reject an application show one of the two error types. Even though the selection committee did not correctly estimate the future performance of some applicants, the results show a statistically significant association between selection decisions and the applicants' scientific achievements, if quantity and impact of research publications are used as a criterion for scientific achievement.


Subjects
Fellowships and Scholarships/methods , Peer Review, Research/standards , Achievement , Europe , Molecular Biology , Publications/statistics & numerical data
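The two error types in the abstract can be illustrated with a toy classification: an approved applicant who later performed below some performance threshold counts as a type I error (over-estimation), and a rejected applicant who performed at or above it counts as a type II error (under-estimation). The data and the fixed threshold below are invented for illustration; the study's actual criterion was model-based, using (zero-truncated) negative binomial regressions:

```python
def decision_errors(records, threshold):
    """Count type I errors (approved applicants whose later performance
    fell below the threshold) and type II errors (rejected applicants
    whose later performance met or exceeded it)."""
    type_1 = sum(1 for approved, perf in records if approved and perf < threshold)
    type_2 = sum(1 for approved, perf in records if not approved and perf >= threshold)
    return type_1, type_2

# (approved?, later citation impact) for six hypothetical applicants
applicants = [(True, 12.0), (True, 3.0), (False, 15.0),
              (False, 2.0), (True, 9.0), (False, 1.5)]
t1, t2 = decision_errors(applicants, threshold=5.0)
print(t1, t2)  # 1 1: one over-estimated approval, one under-estimated rejection
```

In the study's terms, the share of decisions showing either error type would here be 2 of 6, i.e. about 33%, which falls within the 26%-48% range reported above purely by construction of the example.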