Results 1 - 5 of 5
1.
J Med Internet Res; 26: e52001, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924787

ABSTRACT

BACKGROUND: Due to recent advances in artificial intelligence (AI), language model applications can generate logical text output that is difficult to distinguish from human writing. ChatGPT (OpenAI) and Bard (subsequently rebranded as "Gemini"; Google AI) were developed using distinct approaches, but little is known about how their abilities to generate scientific abstracts differ. The use of AI to write scientific abstracts in the field of spine surgery is at the center of much debate and controversy.

OBJECTIVE: The objective of this study is to assess the reproducibility of structured abstracts generated by ChatGPT and Bard compared with human-written abstracts in the field of spine surgery.

METHODS: In total, 60 abstracts on spine topics were randomly selected from 7 reputable journals, and their titles were supplied to ChatGPT and Bard as prompts to generate abstracts. A total of 174 abstracts, divided into human-written, ChatGPT-generated, and Bard-generated abstracts, were evaluated for compliance with the structured format of the journal guidelines and for consistency of content. The likelihood of plagiarism and of AI output was assessed using the iThenticate and ZeroGPT programs, respectively. A total of 8 reviewers in the spinal field evaluated 30 randomly extracted abstracts to determine whether they were produced by AI or by human authors.

RESULTS: The proportion of abstracts that met journal formatting guidelines was greater among ChatGPT abstracts (34/60, 56.7%) than among those generated by Bard (6/54, 11.1%; P<.001). However, a higher proportion of Bard abstracts (49/54, 90.7%) had word counts that met journal guidelines compared with ChatGPT abstracts (30/60, 50%; P<.001). The similarity index was significantly lower among ChatGPT-generated abstracts (20.7%) than among Bard-generated abstracts (32.1%; P<.001). The AI-detection program predicted that 21.7% (13/60) of the human group, 63.3% (38/60) of the ChatGPT group, and 87% (47/54) of the Bard group were possibly generated by AI, with an area under the curve value of 0.863 (P<.001). The mean detection rate by human reviewers was 53.8% (SD 11.2%), yielding a sensitivity of 56.3% and a specificity of 48.4%: 56.3% (63/112) of the actual human-written abstracts were recognized as human-written, and 48.4% (62/128) of the AI-generated abstracts were recognized as AI-generated.

CONCLUSIONS: Both ChatGPT and Bard can be used to help write abstracts, but most AI-generated abstracts are currently considered unethical due to high plagiarism and AI-detection rates. ChatGPT-generated abstracts appear to be superior to Bard-generated abstracts in meeting journal formatting guidelines. Because humans cannot accurately distinguish abstracts written by humans from those produced by AI programs, it is crucial to exercise special caution and to examine the ethical boundaries of using AI programs, including ChatGPT and Bard.
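
The reviewer-performance figures above follow directly from the reported counts. As a minimal illustration of that arithmetic (the variable names are ours, not the authors'), the stated fractions reproduce the sensitivity and specificity:

```python
# Recompute the human-reviewer performance figures quoted in the abstract.
# Counts come from the abstract itself; the naming is purely illustrative.

human_total = 112        # human-written abstracts shown to reviewers
human_called_human = 63  # correctly recognized as human-written

ai_total = 128           # AI-generated abstracts shown to reviewers
ai_called_ai = 62        # correctly recognized as AI-generated

sensitivity = human_called_human / human_total  # 63/112
specificity = ai_called_ai / ai_total           # 62/128

print(f"Sensitivity: {sensitivity:.1%}")  # 56.3%
print(f"Specificity: {specificity:.1%}")  # 48.4%
```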


Subject(s)
Abstracting and Indexing, Spine, Humans, Spine/surgery, Abstracting and Indexing/standards, Abstracting and Indexing/methods, Reproducibility of Results, Artificial Intelligence, Writing/standards
2.
Int J Cancer; 150(8): 1233-1243, 2022 Apr 15.
Article in English | MEDLINE | ID: mdl-34807460

ABSTRACT

Biomedical researchers routinely use a variety of biological models and resources, such as cultured cell lines, antibodies, and laboratory animals. Unfortunately, these resources are not flawless: cell lines can be misidentified; for antibodies, problems with specificity, lot-to-lot consistency, and sensitivity are common; and the reliability of animal models is questioned due to poor translation of animal studies to human clinical trials. In some cases, these problems can render the results of a study meaningless. In response, some journals have implemented guidelines regarding the use and reporting of cell lines, antibodies, and laboratory animals. In our study, we use a portfolio of existing and newly created datasets to investigate the identification and authentication information reported for cell lines, antibodies, and organisms before and after guideline introduction, compared with journals without guidelines. We observed a general improvement in reporting quality over time, which the implementation of guidelines accelerated only in some cases. We therefore conclude that the effectiveness of journal guidelines is likely to be context dependent, affected by factors such as implementation conditions, research community support, monitoring, and resource availability. Hence, journal reporting guidelines are not in themselves a quick fix for shortcomings in biomedical resource documentation, even though they can be part of the solution.
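
One way to read this design is as a difference-in-differences comparison: the change in reporting quality at journals that introduced guidelines is contrasted with the change over the same period at journals that did not. A minimal sketch of that comparison, using hypothetical proportions (the study's actual figures are in the paper, not the abstract):

```python
# Hypothetical proportions of articles with complete resource reporting;
# placeholder values only -- the study's real numbers are not in the abstract.
with_guidelines    = {"before": 0.30, "after": 0.55}
without_guidelines = {"before": 0.28, "after": 0.40}

change_with = with_guidelines["after"] - with_guidelines["before"]
change_without = without_guidelines["after"] - without_guidelines["before"]

# The difference-in-differences estimate separates the improvement
# attributable to guidelines from the general trend over time.
did = change_with - change_without
print(f"Change with guidelines:    {change_with:+.0%}")
print(f"Change without guidelines: {change_without:+.0%}")
print(f"Difference-in-differences: {did:+.0%}")
```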


Subject(s)
Biomedical Research/standards, Editorial Policies, Periodicals as Topic/standards, Publishing/standards, Animals, Antibodies, Cell Line, Disease Models, Animal, Humans
3.
Br J Soc Psychol; 62(4): 1635-1653, 2023 Oct.
Article in English | MEDLINE | ID: mdl-36076340

ABSTRACT

Opening data promises to improve research rigour and democratize knowledge production. But it also raises practical, theoretical, and ethical considerations, for qualitative researchers in particular. Discussion about open data in qualitative social psychology predates the replication crisis, yet the nuances of this ongoing discussion have not been translated into current journal guidelines on open data. In this article, we summarize ongoing debates about open data from qualitative perspectives, and through a content analysis of 261 journals, we establish the state of current journal policies for open data in the domain of social psychology. We critically discuss how current common expectations for open data may not be adequate for establishing qualitative rigour, can introduce ethical challenges, and may place those who wish to use qualitative approaches at a disadvantage in peer review and publication processes. We advise that future open data guidelines should reflect the nuance of arguments surrounding data sharing in qualitative research and move away from a universal "one-size-fits-all" approach to data sharing. This article outlines the past, present, and potential future of open data guidelines in social-psychological journals. We conclude by offering recommendations for how journals might more inclusively consider the use of open data in qualitative methods, whilst recognizing and allowing space for the diverse perspectives, needs, and contexts of all forms of social-psychological research.


Subject(s)
Periodicals as Topic, Humans, Qualitative Research, Dissent and Disputes, Knowledge, Longitudinal Studies
4.
PeerJ; 8: e9300, 2020.
Article in English | MEDLINE | ID: mdl-32547887

ABSTRACT

BACKGROUND: Despite the widespread use of antibodies as a research tool, problems with specificity, lot-to-lot consistency, and sensitivity commonly occur and may be important contributing factors to the 'replication crisis' in biomedical research. This makes the validation of antibodies, and the accurate reporting of this validation in the scientific literature, extremely important. Therefore, some journals now require authors to comply with antibody reporting guidelines.

METHODS: We used a quasi-experimental approach to assess the effectiveness of such journal guidelines in improving antibody reporting in the scientific literature. In a sample of 120 publications, we compared the reporting of antibody validation and identification information in two journals with guidelines (Nature and the Journal of Comparative Neurology) and two journals without guidelines (Science and Neuroscience), before and after the introduction of these guidelines.

RESULTS: Our results suggest that the implementation of antibody reporting guidelines might have some influence on the reporting of antibody validation information. The percentage of validated antibodies per article increased slightly from 39% to 57% in journals with guidelines, whereas it decreased from 23% to 14% in journals without guidelines. Furthermore, the reporting of validation information for all primary antibodies increased by 23 percentage points in the journals with guidelines (OR = 2.80, 95% CI = 0.96-∞; adjusted p = 1, one-tailed), compared with a decrease of 13 percentage points in journals without guidelines. The guidelines appear to be more effective in improving the reporting of antibody identification information: the reporting of identification information for all primary antibodies used in a study increased by 58 percentage points (OR = 17.8, 95% CI = 4.8-∞; adjusted p = 0.0003, one-tailed) in journals with guidelines. This percentage also increased slightly in journals without guidelines (by 18 percentage points), suggesting an overall increased awareness of the importance of antibody identifiability. This pattern suggests that reporting guidelines mostly influence the reporting of information that is relatively easy to provide, as does the small accompanying increase in validation reported via references to the scientific literature or the manufacturer's data.

CONCLUSION: Combined with the results of previous studies on journal guidelines, our study suggests that, by themselves, journal antibody guidelines may have a limited effect on validation practices, since they mostly seem to improve antibody identification rather than actual experimental validation. These guidelines may therefore require additional measures to ensure effective implementation. However, given the exploratory nature of our study and our small sample size, we must remain alert to other factors that might have played a role in the observed changes in antibody reporting behaviour.
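
The odds ratios above, with their one-tailed tests and infinite upper confidence bounds, are the kind of output produced by a one-sided Fisher exact test on a 2x2 table of articles before and after guideline introduction. A minimal sketch with hypothetical counts (the abstract does not give the underlying contingency tables):

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table -- placeholder counts, not the study's data:
# rows    = after / before guideline introduction
# columns = all primary antibodies identified / not all identified
table = [[24, 6],
         [10, 20]]

# One-sided test, matching the one-tailed p values reported above.
odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"OR = {odds_ratio:.1f}, one-tailed p = {p_value:.4f}")
```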

5.
Ecol Evol; 6(21): 7717-7726, 2016 Nov.
Article in English | MEDLINE | ID: mdl-30128123

ABSTRACT

Most top-impact-factor ecology journals indicate a preference or requirement for short manuscripts; some state clearly defined word limits, whereas others indicate a preference for more concise papers. Yet evidence from a variety of academic fields indicates that, within journals, longer papers are both more positively reviewed by referees and more highly cited. We examine the relationship between citations received and manuscript length, number of authors, and number of references cited for papers published in 32 ecology journals between 2009 and 2012. We find that longer papers, papers with more authors, and papers that cite more references are cited more. Although paper length, author count, and references cited all positively covary, an increase in each independently predicts an increase in citations received, with the estimated relationship positive for every journal we examined. That all three variables covary positively with citations suggests that papers presenting more data and ideas, and a greater diversity of them, are more impactful. We suggest that the imposition of arbitrary manuscript length limits discourages the publication of more impactful studies. We propose that journals abolish arbitrary word or page limits, avoid declining papers (or requiring that they be shortened) on the basis of length alone, irrespective of content, and adopt the philosophy that papers should be as long as they need to be.
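
The claim that length, author count, and reference count each independently predict citations suggests a count-data regression with all three predictors entered together. A minimal sketch of such a model on simulated data (the study's actual dataset and model specification may differ):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in for the 2009-2012 ecology-journal sample; all numbers
# here are fabricated for illustration, not the study's data.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "pages":   rng.integers(4, 30, n),
    "authors": rng.integers(1, 12, n),
    "refs":    rng.integers(20, 120, n),
})
# In this simulation, citation counts rise with all three predictors.
rate = np.exp(0.5 + 0.03 * df["pages"] + 0.05 * df["authors"] + 0.005 * df["refs"])
df["citations"] = rng.poisson(rate)

# Poisson regression: each positive coefficient indicates an independent
# association with citations, holding the other predictors fixed.
X = sm.add_constant(df[["pages", "authors", "refs"]])
fit = sm.GLM(df["citations"], X, family=sm.families.Poisson()).fit()
print(fit.params)
```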
