Results 1 - 20 of 72
1.
Health Res Policy Syst ; 21(1): 136, 2023 Dec 18.
Article in English | MEDLINE | ID: mdl-38110938

ABSTRACT

Research Impact Assessment (RIA) represents one of a suite of policies intended to improve the impact generated from investment in health and medical research (HMR). Positivist indicator-based approaches to RIA are widely implemented but increasingly criticised as theoretically problematic, unfair, and burdensome. This commentary proposes that there are useful outcomes that emerge from the process of applying an indicator-based RIA framework, separate from those encapsulated in the metrics themselves. The aim of this commentary is to demonstrate how the act of conducting an indicator-based approach to RIA can serve to optimise the productive gains from the investment in HMR. Prior research found that the issues regarding RIA are less about the choice of indicators/metrics, and more about the discussions prompted and activities incentivised by the process. This insight provides an opportunity to utilise indicator-based methods to purposely optimise research impact. An indicator-based RIA framework specifically designed to optimise research impacts should: focus on researchers and the research process, rather than institution-level measures; utilise a project-level unit of analysis that provides control to researchers and supports collaboration and accountability; provide for prospective implementation of RIA and the prospective orientation of research; establish a line of sight to the ultimate anticipated beneficiaries and impacts; include process metrics/indicators to acknowledge interim steps on the pathway to final impacts; integrate 'next' users and prioritise the utilisation of research outputs as a critical measure; integrate and align the incentives for researchers/research projects arising from RIA with those existing within the prevailing research system; integrate with existing peer-review processes; and adopt a system-wide approach in which incremental improvements in the probability of translation from individual research projects yield higher impact across the whole funding portfolio. Optimisation of the impacts from HMR investment represents the primary purpose of Research Impact policy. The process of conducting an indicator-based approach to RIA, which engages the researcher during the inception and planning phase, can directly contribute to this goal through improvements in the probability that an individual project will generate interim impacts. The research project funding process represents a promising forum to integrate this approach within the existing research system.


Subjects
Biomedical Research , Motivation , Humans , Prospective Studies , Efficiency , Benchmarking
2.
Health Res Policy Syst ; 21(1): 43, 2023 Jun 05.
Article in English | MEDLINE | ID: mdl-37277824

ABSTRACT

BACKGROUND: In prior research, we identified and prioritized ten measures to assess research performance that comply with the San Francisco Declaration on Research Assessment, a principle adopted worldwide that discourages metrics-based assessment. Given the shift away from assessment based on Journal Impact Factor, we explored potential barriers to implementing and adopting the prioritized measures. METHODS: We identified administrators and researchers across six research institutes, conducted telephone interviews with consenting participants, and used qualitative description and inductive content analysis to derive themes. RESULTS: We interviewed 18 participants: 6 administrators (research institute business managers and directors) and 12 researchers (7 on appointment committees) who varied by career stage (2 early, 5 mid, 5 late). Participants appreciated that the measures were similar to those currently in use, comprehensive, relevant across disciplines, and generated using a rigorous process. They also said the reporting template was easy to understand and use. In contrast, a few administrators thought the measures were not relevant across disciplines. A few participants said it would be time-consuming and difficult to prepare narratives when reporting the measures, and several thought that it would be difficult to objectively evaluate researchers from a different discipline without considerable effort to read their work. Strategies viewed as necessary to overcome barriers and support implementation of the measures included high-level endorsement of the measures, an official launch accompanied by a multi-pronged communication strategy, training for both researchers and evaluators, administrative support or automated reporting for researchers, guidance for evaluators, and sharing of approaches across research institutes. 
CONCLUSIONS: While participants identified many strengths of the measures, they also identified a few limitations and offered corresponding strategies, which we will apply at our organization, to address those barriers. Ongoing work is needed to develop a framework to help evaluators translate the measures into an overall assessment. Given how little prior research has identified research assessment measures and strategies to support their adoption, this research may be of interest to other organizations that assess the quality and impact of research.

3.
Sci Eng Ethics ; 29(4): 26, 2023 Jul 04.
Article in English | MEDLINE | ID: mdl-37403005

ABSTRACT

In recent years, the changing landscape for the conduct and assessment of research and of researchers has increased scrutiny of the reward systems of science. In this context, correcting the research record, including retractions, has gained attention and space in the publication system. One question is the possible influence of retractions on the careers of scientists. It might be assessed, for example, through citation patterns or productivity rates for authors who have had one or more retractions. This is an emerging issue today, with growing discussions in the research community about impact. We have explored the influence of retractions on grant review criteria. Here, we present results of a qualitative study exploring the views of a group of six representatives of funding agencies from different countries and of a follow-up survey of 224 reviewers in the US. These reviewers have served on panels for the National Science Foundation, the National Institutes of Health, and/or a few other agencies. We collected their perceptions about the influence of self-correction of the literature and of retractions on grant decisions. Our results suggest that correcting the research record, whether for honest error or misconduct, is perceived by most respondents as an important mechanism to strengthen the reliability of science. However, retractions and self-correction of the literature at large are not factors influencing grant review, and dealing with retractions in reviewing grants remains an open question for funders.


Subjects
Biomedical Research , Scientific Misconduct , United States , Reproducibility of Results , National Institutes of Health (U.S.) , Financing, Organized
4.
Health Res Policy Syst ; 18(1): 6, 2020 Jan 20.
Article in English | MEDLINE | ID: mdl-31959198

ABSTRACT

BACKGROUND: Public research funding agencies and research organisations are increasingly accountable for the wider impacts of the research they support. While research impact assessment (RIA) frameworks and tools exist, little is known and shared about how these organisations implement RIA activities in practice. METHODS: We conducted a review of academic literature to search for research organisations' published experiences of RIAs. We followed this with semi-structured interviews with a convenience sample (n = 7) of representatives of four research organisations deploying strategies to support and assess research impact. RESULTS: We found only five studies reporting empirical evidence on how research organisations put RIA principles into practice. From our interviews, we observed a disconnect between published RIA frameworks and tools, and the realities of organisational practices, which tended not to be reported. We observed varying maturity and readiness with respect to organisations' structural set-ups for conducting RIAs, particularly relating to leadership, skills for evaluation and automating RIA data collection. Key processes for RIA included efforts to engage researcher communities to articulate and plan for impact, using a diversity of methods, frameworks and indicators, and supporting a learning approach. We observed outcomes of RIAs as having supported a dialogue to orient research to impact, underpinned shared learning from analyses of research, and provided evidence of the value of research in different domains and to different audiences. CONCLUSIONS: Putting RIA principles and frameworks into practice is still in its early stages for research organisations.
We recommend that organisations (1) get set up by considering upfront the resources, time and leadership required to embed impact strategies throughout the organisation and wider research 'ecosystem', and develop methodical approaches to assessing impact; (2) work together by engaging researcher communities and wider stakeholders as a core part of impact pathway planning and subsequent assessment; and (3) recognise the benefits that RIA can bring about as a means to improve mutual understanding of the research process between different actors with an interest in research.


Subjects
Academies and Institutes/organization & administration , Research Support as Topic/statistics & numerical data , Academies and Institutes/standards , Community Participation , Humans , Leadership
5.
Health Res Policy Syst ; 16(1): 28, 2018 Mar 16.
Article in English | MEDLINE | ID: mdl-29548331

ABSTRACT

BACKGROUND: The question of how to measure, assess and optimise the returns from investment in health and medical research (HMR) is a highly policy-relevant issue. Research Impact Assessment Frameworks (RIAFs) provide a conceptual measurement framework to assess the impact from HMR. The aims of this study were (1) to elicit the views of Medical Research Institutes (MRIs) regarding objectives, definitions, methods, barriers, potential scope and attitudes towards RIAFs, and (2) to investigate whether an assessment framework should represent a retrospective reflection of research impact or a prospective approach integrated into the research process. The wider objective was to inform the development of a draft RIAF for Australia's MRIs. METHODS: Purposive sampling to derive a heterogeneous sample of Australian MRIs was used alongside semi-structured interviews with senior executives responsible for research translation or senior researchers affected by research impact initiatives. Thematic analysis of the interview transcriptions using the framework approach was then performed. RESULTS: Interviews were conducted with senior representatives from 15 MRIs. Participants understood the need for greater research translation/impact, but varied in their comprehension and implementation of RIAFs. Common concerns included the time lag to the generation of societal impacts from basic or discovery science, and whether impact reflected a narrow commercialisation agenda. Broad support emerged for the use of metrics, case study and economic methods. Support was also provided for the rationale of both standardised and customised metrics. Engendering cultural change in the approach to research translation was acknowledged as both a barrier to greater impact and a critical objective for the assessment process. 
Participants perceived that the existing research environment incentivised the generation of academic publications and track records, and often conflicted with the generation of wider impacts. The potential to improve the speed of translation through prospective implementation of impact assessment was supported, albeit that the mechanism required development. CONCLUSION: The study found that the issues raised regarding research impact assessment are less about methods and metrics, and more about the research activities that the measurement of research translation and impact may or may not incentivise. Consequently, if impact assessment is to contribute to optimisation of the health gains from the public, corporate and philanthropic investment entrusted to the institutes, then further inquiry into how the assessment process may re-align research behaviour must be prioritised.


Subjects
Academies and Institutes , Attitude , Biomedical Research , Evaluation Studies as Topic , Research Personnel , Australia , Health Policy , Humans , Prospective Studies , Qualitative Research , Research Design , Retrospective Studies , Translational Research, Biomedical
6.
Malar J ; 15(1): 585, 2016 Dec 06.
Article in English | MEDLINE | ID: mdl-27919257

ABSTRACT

BACKGROUND: As global research investment increases, attention inevitably turns to assessing and measuring the outcomes and impact of research programmes. Research can have many different outcomes, such as producing advances in scientific knowledge, building research capacity and, ultimately, health and broader societal benefits. The aim of this study was to test the use of a Delphi methodology as a way of gathering views from malaria research experts on research priorities and eliciting relative valuations of the different types of health research impact. METHODS: An international Delphi survey of 60 malaria research experts was used to understand views on research outcomes and priorities within malaria and across global health more widely. RESULTS: The study demonstrated the application of the Delphi technique to eliciting views on malaria-specific research priorities, wider global health research priorities and the values assigned to different types of research impact. In terms of the most important past research successes, the development of new anti-malarial drugs and insecticide-treated bed nets were rated as the most important. When asked about research priorities for future funding, respondents ranked tackling emerging drug and insecticide resistance the highest. With respect to research impact, the panel valued research that focuses on health and health sector benefits and informing policy and product development. Contributions to scientific knowledge, although highly valued, came lower down the ranking, suggesting that efforts to move research discoveries to health products and services are valued more highly than pure advances in scientific knowledge. CONCLUSIONS: Although the Delphi technique has been used to elicit views on research questions in global health, this was the first time it had been used to assess how a group of research experts value or rank different types of research impact.
The results suggest that it is feasible to inject the views of a key stakeholder group into the research prioritization process, and that the Delphi approach is a useful tool for eliciting views on the value or importance of research impact. Future work will explore other methods for assessing and valuing research impact and test the feasibility of developing a composite tool for measuring research outcomes weighted by the values of different stakeholders.


Subjects
Biomedical Research/trends , Global Health , Malaria/diagnosis , Malaria/drug therapy , Research , Delphi Technique , Humans , Malaria/epidemiology , Malaria/prevention & control , Surveys and Questionnaires
7.
Sci Eng Ethics ; 22(1): 227-35, 2016 Feb.
Article in English | MEDLINE | ID: mdl-25689931

ABSTRACT

This article deals with a modern disease of academic science that consists of an enormous increase in the number of scientific publications without a corresponding advance of knowledge. Findings are sliced as thin as salami and submitted to different journals to produce more papers. If we consider academic papers as a kind of scientific 'currency' that is backed by gold bullion in the central bank of 'true' science, then we are witnessing an article-inflation phenomenon, a scientometric bubble that is most harmful for science and promotes an unethical and antiscientific culture among researchers. The main problem behind the scenes is that the impact factor is used as a proxy for quality. Therefore, not only for convenience, but also based on ethical principles of scientific research, we adhere to the San Francisco Declaration on Research Assessment when it emphasizes "the need to eliminate the use of journal-based metrics in funding, appointment and promotion considerations; and the need to assess research on its own merits rather than on the journal in which the research is published". Our message is mainly addressed to the funding agencies and universities that award tenure or grants and manage research programmes, especially in developing countries. The message is also addressed to well-established scientists who have the power to change things when they participate in committees for grants and jobs.


Subjects
Bibliometrics , Biomedical Research , Journal Impact Factor , Knowledge , Publishing , Biomedical Research/ethics , Biomedical Research/standards , Ethics, Research , Financial Support , Humans , Publishing/ethics , Publishing/standards , Universities
8.
Elife ; 13, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38420960

ABSTRACT

What happened when eLife decided to eliminate accept/reject decisions after peer review?


Subjects
Peer Review, Research , Peer Review
9.
Elife ; 13, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39041434

ABSTRACT

When deciding which submissions should be peer reviewed, eLife editors consider whether they will be able to find high-quality reviewers, and whether the reviews will be valuable to the scientific community.


Subjects
Peer Review, Research , Editorial Policies , Periodicals as Topic , Peer Review/standards , Humans
10.
Intern Emerg Med ; 19(1): 39-47, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37921985

ABSTRACT

Quantitative bibliometric indicators are widely used and widely misused for research assessments. Some metrics have acquired major importance in shaping and rewarding the careers of millions of scientists. Given their perceived prestige, they may be widely gamed in the current "publish or perish" or "get cited or perish" environment. This review examines several gaming practices, including authorship-based, citation-based, editorial-based, and journal-based gaming as well as gaming with outright fabrication. Different patterns are discussed, including massive authorship of papers without meriting credit (gift authorship), team work with over-attribution of authorship to too many people (salami slicing of credit), massive self-citations, citation farms, H-index gaming, journalistic (editorial) nepotism, journal impact factor gaming, paper mills and spurious content papers, and spurious massive publications for studies with demanding designs. For all of those gaming practices, quantitative metrics and analyses may be able to help in their detection and in placing them into perspective. A portfolio of quantitative metrics may also include indicators of best research practices (e.g., data sharing, code sharing, protocol registration, and replications) and poor research practices (e.g., signs of image manipulation). Rigorous, reproducible, transparent quantitative metrics that also inform about gaming may strengthen the legacy and practices of quantitative appraisals of scientific work.
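Several of the citation-based gaming patterns listed above, such as massive self-citation, are detectable with simple counts over citation links. As an illustrative sketch only (the record format here is hypothetical; a real analysis would query a citation database such as Scopus or OpenAlex), an analyst might estimate an author's self-citation rate:

```python
# Each paper record lists its authors and the papers citing it --
# a made-up structure used purely for illustration.

def self_citation_rate(author, papers):
    """Fraction of incoming citations to `author`'s papers that come
    from papers `author` also coauthored."""
    incoming = self_cites = 0
    for paper in papers:
        if author not in paper["authors"]:
            continue  # only count citations to this author's own papers
        for citing in paper["cited_by"]:
            incoming += 1
            if author in citing["authors"]:
                self_cites += 1
    return self_cites / incoming if incoming else 0.0

records = [
    {"authors": ["Silva"], "cited_by": [{"authors": ["Silva", "Wu"]},
                                        {"authors": ["Khan"]}]},
]
print(self_citation_rate("Silva", records))  # 0.5
```

A rate far above the field norm does not prove gaming, but it flags a profile for the kind of closer, contextual appraisal the review recommends.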


Subjects
Bibliometrics , Journal Impact Factor , Humans , Publishing , Authorship
11.
Article in English | MEDLINE | ID: mdl-39183330

ABSTRACT

BACKGROUND: Although the Borderline Personality Features Scale for Children (BPFS-C) is one of the most popular measures of borderline pathology in adolescents, only one study has evaluated its clinical cut-off scores, using a small sample without a healthy comparison group (Chang B, Sharp C, Ha C. The Criterion Validity of the Borderline Personality Features Scale for Children in an Adolescent Inpatient Setting. J Personal Disord. 2011;25(4):492-503. https://doi.org/10.1521/pedi.2011.25.4.492 ). The purpose of the current study was to replicate and address the limitations of the prior study by Chang et al. in order to more definitively establish clinical cut-off scores for the self- and parent-report versions of the BPFS-C to detect clinical and sub-clinical borderline personality disorder (BPD) in a large sample of adolescents with BPD, other psychopathology, and no psychopathology. METHODS: A total of 900 adolescents aged 12-17 participated in this study. The clinical sample consisted of 622 adolescents recruited from an inpatient psychiatric facility, and the healthy control sample consisted of 278 adolescents recruited from the community. All participants completed the BPFS-C and were administered the Child Interview for DSM-IV Borderline Personality Disorder (CI-BPD). RESULTS: Using three-way ROC analyses, cut-off scores on the self- and parent-report versions of the BPFS-C were established that distinguish adolescents with BPD from those with subclinical BPD, and those with subclinical BPD from healthy adolescents. CONCLUSIONS: These findings support the use of both versions of the BPFS-C to detect adolescents with BPD and sub-clinical BPD.

12.
Epilepsy Behav ; 28(3): 522-9, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23706263

ABSTRACT

BACKGROUND: There has been a rapid expansion in the number of research papers published on clinical epilepsy topics and in the number of journals in the medical field. In this expanding publishing environment, the question arises as to how much of the published medical literature has 'enduring value' in terms of advancing knowledge in any significant way. METHODS: We developed a methodology to assess the enduring value of papers published in the field of clinical epilepsy and established its internal validity. We studied 300 research papers published in 1981, 1991, and 2001 (100 in each year) and assessed their enduring value in four domains: citations in the last year, citations in the last 10 years, citations in the standard epilepsy textbook, and a subjective assessment by an experienced epileptologist. RESULTS: Of the 300 papers, 214 (71%) were categorized as having 'no enduring value', and only 11 (4%) were identified as having 'high enduring value'. The 'high enduring value' papers could generally be identified immediately on publication by high initial citation values, and were also more likely to be published in journals with a high impact factor. The commonest characteristics of papers with no enduring value were that they reported research that was inherently unimportant (55.6%), not novel (38.8%), or had significant methodological flaws (22.0%). CONCLUSIONS: Although there are other reasons for publishing papers, the fact that the great majority of published papers lack enduring value in terms of advancing knowledge should be a concern to the medical and scientific community.


Subjects
Epilepsy , Journal Impact Factor , Publishing , Research , Epilepsy/therapy , Humans , Publishing/statistics & numerical data
13.
Res Integr Peer Rev ; 8(1): 13, 2023 Sep 05.
Article in English | MEDLINE | ID: mdl-37667388

ABSTRACT

BACKGROUND: Scientific productivity is often evaluated by means of cumulative citation metrics. Different metrics produce different incentives. The H-index assigns full credit from a citation to each coauthor, and thus may encourage multiple collaborations in mid-list author roles. In contrast, the Hm-index assigns only a fraction 1/k of citation credit to each of k coauthors of an article, and thus may encourage research done by smaller teams, and in first or last author roles. Whether H and Hm indices are influenced by different authorship patterns has not been examined. METHODS: Using a publicly available Scopus database, I examined associations between the numbers of research articles published as single, first, mid-list, or last author between 1990 and 2019, and the H-index and the Hm-index, among 18,231 leading researchers in the health sciences. RESULTS: Adjusting for career duration and other article types, the H-index was negatively associated with the number of single author articles (partial Pearson r -0.06) and first author articles (-0.08), but positively associated with the number of mid-list (0.64) and last author articles (0.21). In contrast, all associations were positive for the Hm-index (0.04 for single author articles, 0.18 for first author articles, 0.24 for mid-list articles, and 0.46 for last author articles). CONCLUSION: The H-index and the Hm-index do not reflect the same authorship patterns: the full-credit H-index is predominantly associated with mid-list authorship, whereas the partial-credit Hm-index is driven by more balanced publication patterns, and is most strongly associated with last-author articles. Since performance metrics may act as incentives, the selection of a citation metric should receive careful consideration.
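The full-credit versus fractional-credit contrast described above can be made concrete. The following is a minimal sketch (the input format and function names are illustrative; the Hm computation assumes the fractional effective-rank definition summarized in the abstract, where each of k coauthors advances the rank by 1/k):

```python
# Each paper is a (citation_count, n_authors) pair -- a hypothetical
# input format chosen for illustration.

def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def hm_index(papers):
    """Fractional-credit variant: each of k coauthors advances the
    effective rank by only 1/k, so team papers count for less."""
    hm = 0.0
    r_eff = 0.0  # cumulative fractional rank
    for citations, k in sorted(papers, reverse=True):
        r_eff += 1.0 / k
        if citations >= r_eff:
            hm = r_eff
        else:
            break
    return hm

papers = [(10, 1), (8, 4), (6, 4), (1, 2)]
print(h_index([c for c, _ in papers]))  # 3
print(hm_index(papers))                 # 1.5
```

Here the solo paper carries full weight in both indices, while each four-author paper advances the Hm rank by only 0.25, which is exactly the mid-list discount whose incentive effects the study examines.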

14.
Heliyon ; 9(11): e21592, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38027555

ABSTRACT

Artificial Intelligence (AI) is a rapidly developing field of research that attracts significant funding from both the state and industry players. Such interest is driven by the wide range of AI technology applications in many fields. Since most AI research topics fall within computer science, where a significant share of research results is published in conference proceedings, the same holds for AI. The world leaders in artificial intelligence research are China and the United States. The authors conducted a comparative analysis of the bibliometric indicators of AI conference papers from these two countries based on Scopus data. The analysis aimed to identify conferences that receive above-average citation rates and to suggest publication strategies that would help authors from these countries participate in conferences likely to provide better dissemination of their research results. The results showed that, although Chinese researchers publish more AI papers than those from the United States, US conference papers are cited more frequently. The authors also conducted a correlation analysis of the MNCS index, which revealed no strong correlation between MNCS USA vs. MNCS China, MNCS China/MNCS USA vs. MSAR, and MNCS China/MNCS USA vs. CORE ranking indicators.
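The MNCS indicator compared across countries above is, in essence, the mean of each paper's citation count divided by the expected (world-average) count for comparable papers. A minimal sketch, assuming those expected values are supplied from a bibliometric database rather than computed here:

```python
# Each paper is (citations, expected_citations), where the expected value
# is the world average for the same field, year, and document type --
# reference data a real analysis would take from Scopus or similar.

def mncs(papers):
    """Mean Normalized Citation Score of a paper set; 1.0 means the set
    is cited exactly at the world average."""
    if not papers:
        return 0.0
    return sum(cites / expected for cites, expected in papers) / len(papers)

portfolio = [(12, 6.0), (3, 6.0), (9, 6.0)]
print(round(mncs(portfolio), 2))  # 1.33 -- cited a third above world average
```

Because each paper is normalized before averaging, a portfolio dominated by a few highly cited papers in low-citation fields can still score above a larger portfolio of averagely cited papers, which is what makes cross-country comparisons like MNCS USA vs. MNCS China meaningful.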

15.
Cancer Rep (Hoboken) ; 6(1): e1650, 2023 Jan.
Article in English | MEDLINE | ID: mdl-35689556

ABSTRACT

PURPOSE: To evaluate the cancer research effort of some major countries over two 5-year periods (2010-2014 and 2015-2019) on the basis of scientific publications and interventional clinical trial metrics, and to analyze the relationship between research effort and cancer burden (incidence and mortality). MATERIALS AND METHODS: Clinical trials were extracted from ClinicalTrials.gov using a specific query. Publications were identified in Web of Science (WoS) using a query based on keywords and were then analyzed using InCites, a bibliometric tool. Bibliometric indicators were computed per country and per period. RESULTS: During 2010-2019, 1 120 821 cancer-related publications were identified in WoS, with 447 900 and 672 921 (+50%) articles published in 2010-2014 and 2015-2019, respectively. Overall, 38% and 7% of the articles were published in oncology and cell biology journals, respectively. Exactly 30% of the published articles were contributed by the USA. In the study period, China strongly increased its production and overspecialization. Apart from China, which had a low normalized citation impact (NCI), almost all countries increased their NCIs; in particular, France's NCI increased from 1.69 to 2.44. As for clinical trials, over 36 856 were opened worldwide during that period. Over 17 000 (46.5%) opened in the USA, which remained the leader during the study period. China ranked second worldwide in terms of the number of open trials in 2015-2019. Across the 17 cancer localizations, the results revealed no evident relationship between cancer burden and research effort. CONCLUSION: The results may provide a scientific basis for decision-making for continued research. Based on bibliometric data, this type of study can aid public health policymaking and lead to more transparent public fund allocation.


Subjects
Bibliometrics , Medical Oncology , Humans , China , Clinical Trials as Topic
16.
F1000Res ; 12: 1241, 2023.
Article in English | MEDLINE | ID: mdl-38813348

ABSTRACT

Background: Research and researchers are heavily evaluated, and over the past decade it has become widely acknowledged that the consequences of evaluating the research enterprise, and particularly individual researchers, are considerable. This has resulted in the publishing of several guidelines and principles to support moving towards more responsible research assessment (RRA). To ensure that research evaluation is meaningful, responsible, and effective, the International Network of Research Management Societies (INORMS) Research Evaluation Group created the SCOPE framework, enabling evaluators to deliver on existing principles of RRA. SCOPE bridges the gap between principles and their implementation by providing a structured five-stage framework by which evaluations can be designed and implemented, as well as evaluated. Methods: SCOPE is a step-by-step process designed to help plan, design, and conduct research evaluations, as well as to check the effectiveness of existing evaluations. In this article, four case studies are presented to show how SCOPE has been used in practice to provide value-based research evaluation. Results: This article situates SCOPE within the international work towards more meaningful and robust research evaluation practices and shows through the four case studies how it can be used by different organisations to develop evaluations at different levels of granularity and in different settings. Conclusions: The article demonstrates that the SCOPE framework is rooted firmly in the existing literature. In addition, it is argued that SCOPE does not simply translate existing principles of RRA into practice, but provides additional considerations not always addressed in existing RRA principles and practices, thus playing a specific role in the delivery of RRA. Furthermore, the use cases show the value of SCOPE across a range of settings, including different institutional types, sizes, and missions.


Subjects
Research Design , Humans , Research
17.
Front Res Metr Anal ; 8: 1179376, 2023.
Article in English | MEDLINE | ID: mdl-37705872

ABSTRACT

The academic research assessment system, the academic reward system, and the academic publishing system are interrelated mechanisms that facilitate the scholarly production of knowledge. This article considers these systems using a Foucauldian lens to examine the power/knowledge relationships found within and through them. A brief description of the various systems is introduced, followed by examples of instances where Foucault's concepts of power, knowledge, discourse, and power/knowledge are useful for providing a broader understanding of the norms and rules associated with each system, and of how these systems form a network of power relationships that reinforce and shape one another.

18.
Front Res Metr Anal ; 8: 1067981, 2023.
Article in English | MEDLINE | ID: mdl-37601533

ABSTRACT

Charities investing on rare disease research greatly contribute to generate ground-breaking knowledge with the clear goal of finding a cure for their condition of interest. Although the amount of their investments may be relatively small compared to major funders, the advocacy groups' clear mission promotes innovative research and aggregates highly motivated and mission-oriented scientists. Here, we illustrate the case of Fondazione italiana di ricerca per la Sclerosi Laterale Amiotrofica (AriSLA), the main Italian funding agency entirely dedicated to amyotrophic lateral sclerosis research. An international benchmark analysis of publications derived from AriSLA-funded projects indicated that their mean relative citation ratio values (iCite dashboard, National Institutes of Health, U.S.) were very high, suggesting a strong influence on the referring international scientific community. An interesting trend of research toward translation based on the "triangle of biomedicine" and paper citations (iCite) was also observed. Qualitative analysis on researchers' accomplishments was convergent with the bibliometric data, indicating a high level of performance of several working groups, lines of research that speak of progression toward clinical translation, and one study that has progressed from the investigation of cellular mechanisms to a Phase 2 international clinical trial. The key elements of the success of the AriSLA investment lie in: (i) the clear definition of the objectives (research with potential impact on patients, no matter how far), (ii) a rigorous peer-review process entrusted to an international panel of experts, (iii) diversification of the portfolio with ad hoc selection criteria, which also contributed to bringing new experts and younger scientists to the field, and (iv) a close interaction of AriSLA stakeholders with scientists, who developed a strong sense of belonging. 
Periodic review of the portfolio of investments is a vital practice for funding agencies. Sharing information between funding agencies about their policies, research assessment methods, and outcomes helps guide the international debate on funding strategies and research directions to be undertaken, particularly in the field of rare diseases, where synergy is a relevant enabling factor.

19.
Front Res Metr Anal ; 8: 1064230, 2023.
Article in English | MEDLINE | ID: mdl-36741346

ABSTRACT

Retractions are among the effective measures to strengthen the self-correction of science and the quality of the literature. When it comes to self-retractions for honest errors, exposing one's own failures is not a trivial matter for researchers. However, self-correcting data, results, and/or conclusions has increasingly been perceived as a good research practice, although rewarding such practice challenges traditional models of research assessment. In this context, it is timely to investigate who has self-retracted for honest error in terms of country, field, and gender. We show results on these three factors, focusing on gender, as data are scarce on the representation of female scientists in efforts to set the research record straight. We collected 3,822 retraction records, including research articles, review papers, meta-analyses, and letters under the category "error" from the Retraction Watch Database for the 2010-2021 period. We screened the dataset for research articles (2,906) and then excluded retractions by publishers, editors, or third parties, and those mentioning any investigation issues. We analyzed the content of each retraction manually to include only those indicating that they were requested by authors and attributed solely to unintended mistakes. We categorized the records according to country, field, and gender, after selecting research articles with a sole corresponding author. Gender was predicted using Genderize, at a 90% probability threshold for the final sample (n = 281). Our results show that female scientists account for 25% of self-retractions for honest error, with the highest share for women affiliated with US institutions.
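The screening pipeline described in this abstract can be sketched in code. The sketch below is purely illustrative: the record fields, the `screen` function, and the in-memory gender lookup are assumptions standing in for the authors' actual workflow, the Retraction Watch Database schema, and the Genderize API.

```python
# Illustrative retraction records; field names are assumptions, not the
# Retraction Watch Database schema.
RECORDS = [
    {"type": "Research Article", "initiated_by": "authors", "investigation": False,
     "honest_error": True, "corresponding_authors": ["Maria"]},
    {"type": "Letter", "initiated_by": "authors", "investigation": False,
     "honest_error": True, "corresponding_authors": ["John"]},
    {"type": "Research Article", "initiated_by": "publisher", "investigation": False,
     "honest_error": True, "corresponding_authors": ["Wei"]},
]

# Stand-in for the Genderize API: first name -> (predicted gender, probability).
GENDER_LOOKUP = {"Maria": ("female", 0.98), "John": ("male", 0.99)}

def screen(records, gender_lookup, threshold=0.90):
    """Apply the abstract's screening steps and attach a gender prediction."""
    kept = []
    for r in records:
        # Step 1: keep research articles only (2,906 of 3,822 in the paper).
        if r["type"] != "Research Article":
            continue
        # Step 2: exclude retractions by publishers/editors/third parties
        # and any record mentioning investigation issues.
        if r["initiated_by"] != "authors" or r["investigation"]:
            continue
        # Step 3: honest (unintended) error only, sole corresponding author.
        if not r["honest_error"] or len(r["corresponding_authors"]) != 1:
            continue
        # Step 4: keep only predictions at or above the probability threshold.
        name = r["corresponding_authors"][0]
        gender, prob = gender_lookup.get(name, (None, 0.0))
        if prob >= threshold:
            kept.append({**r, "gender": gender})
    return kept

final_sample = screen(RECORDS, GENDER_LOOKUP)
```

Run on the toy data above, only the first record survives all four filters; the paper's final sample (n = 281) resulted from applying analogous steps, plus manual content analysis, at scale.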

20.
Zdr Varst ; 62(3): 109-112, 2023 Sep.
Article in English | MEDLINE | ID: mdl-37327133

ABSTRACT

The COVID-19 pandemic has led to a surge in scientific publications, some of which have bypassed the usual peer-review processes, leading to an increase in unsupported claims being referenced. Therefore, the need for references in scientific articles is increasingly being questioned. The practice of relying solely on quantitative measures, such as impact factor, is also considered inadequate by many experts. This can lead to researchers choosing research ideas that are likely to generate favourable metrics instead of interesting and important topics. Evaluating the quality and scientific value of articles requires a rethinking of current approaches, with a move away from purely quantitative methods. Artificial intelligence (AI)-based tools are making scientific writing easier and less time-consuming, which is likely to further increase the number of scientific publications, potentially leading to higher quality articles. AI tools for searching, analysing, synthesizing, evaluating and writing scientific literature are increasingly being developed. These tools deeply analyse the content of articles, consider their scientific impact, and prioritize the retrieved literature based on this information, presenting it in simple visual graphs. They also help authors to quickly and easily analyse and synthesize knowledge from the literature, prepare summaries of key information, aid in organizing references, and improve manuscript language. The language model ChatGPT has already greatly changed the way people communicate with computers, bringing it closer to human communication. However, while AI tools are helpful, they must be used carefully and ethically. In summary, AI has already changed the way we write articles, and its use in scientific publishing will continue to enhance and streamline the process.
