Results 1 - 20 of 352
1.
Res Integr Peer Rev ; 9(1): 9, 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39175039

ABSTRACT

BACKGROUND: This study was conducted to assess the knowledge and current practices regarding plagiarism among the journal editors of Nepal. METHODS: This web-based, questionnaire-based analytical cross-sectional study was conducted among journal editors working across various journals in Nepal. All journal editors from NepJOL-indexed journals in Nepal who provided e-consent were included in the study using a convenience sampling technique. A final set of questionnaires was prepared using Google Forms, including six knowledge questions, three practice questions (with subsets) for authors, and four (with subsets) for editors. These were distributed to journal editors in Nepal via email, Facebook Messenger, Viber, and WhatsApp. Reminders were sent weekly, up to three times. Data analysis was performed in R. Frequencies and percentages were calculated for the demographic variables, correct responses regarding knowledge, and practices related to plagiarism. Independent t-tests and one-way ANOVA were used to compare mean knowledge scores across demographic variables. For all tests, statistical significance was set at p < 0.05. RESULTS: A total of 147 participants completed the survey. The mean age of the participants was 43.61 ± 8.91 years. Nearly all participants were aware of plagiarism, and most had heard of both Turnitin and iThenticate. Slightly more than three-fourths correctly identified that citation and referencing can avoid plagiarism. The overall mean knowledge score was 5.32 ± 0.99, with no significant differences across demographic variables. As authors, 4% admitted to copying sections of others' work without acknowledgment and reusing their own published work without proper citations. Just over one-fifth did not use plagiarism detection software when writing research articles. Fewer than half reported that their journals used authentic plagiarism detection software. Four-fifths of them suspected plagiarism in the manuscripts assigned to them through their journals.
Three out of every five participants reported plagiarism found in assigned manuscripts to the respective authors. Nearly all participants believed every journal must have plagiarism-detection software. CONCLUSIONS: Although journal editors' knowledge and practices regarding plagiarism appear to be high, they are still not satisfactory. Journals are strongly recommended to use authentic plagiarism detection software, and editors should be adequately trained and keep their knowledge of plagiarism up to date.
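As a hedged illustration of the analysis described in entry 1 (the study itself used R), the independent-samples t statistic for comparing mean knowledge scores between two demographic groups can be computed as below; all scores and group labels are invented for illustration:

```python
# Minimal, stdlib-only sketch: equal-variance two-sample t statistic for
# mean plagiarism-knowledge scores (0-6 scale) between two hypothetical
# groups of editors. The study ran its t-tests and ANOVA in R with
# significance set at p < 0.05.
from math import sqrt
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled-variance two-sample t statistic."""
    n1, n2 = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((n1 - 1) * variance(a) + (n2 - 1) * variance(b)) / (n1 + n2 - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / n1 + 1 / n2))

# Invented knowledge scores for two demographic groups.
group_a = [5.0, 5.5, 6.0, 4.5, 5.0, 6.0]
group_b = [5.5, 5.0, 6.0, 5.0, 4.5, 5.5]

print(round(t_statistic(group_a, group_b), 3))
```

A small t statistic (here well below common critical values) would match the study's finding of no significant demographic differences.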

2.
Account Res ; : 1-19, 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39153004

ABSTRACT

BACKGROUND: The study examines the prevalence of plagiarism in hijacked journals, a category of problematic journals that have proliferated over the past decade. METHODS: A quasi-random sample of 936 papers published in 58 hijacked journals that provided free access to their archive as of June 2021 was selected for the analysis. The study utilizes Urkund (Ouriginal) software and manual verification to investigate plagiarism and finds a significant prevalence of plagiarism in hijacked journals. RESULTS: Out of the analyzed sample papers, 618 (66%) were found to contain instances of plagiarism, and 28% of papers from the sample (n = 259) displayed text similarities of 25% or more. The analysis reveals that a majority of authors originate from developing and ex-Soviet countries, with limited affiliation ties to developed countries and scarce international cooperation in papers submitted to hijacked journals. The absence of rigorous publication requirements, peer review processes, and plagiarism checks in hijacked journals creates an environment where authors can publish texts with a significant amount of plagiarism. CONCLUSIONS: These findings suggest a tendency for fraudulent journals to attract authors who do not uphold scientific integrity principles. The legitimization of papers from hijacked journals in bibliographic databases, along with their citation, poses significant challenges to scientific integrity.

3.
Cureus ; 16(7): e64513, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39139346

ABSTRACT

Introduction Plagiarism is appropriating another person's ideas, words, results, or processes without giving appropriate credit, usually claiming them as one's own. Thus, plagiarism is a dishonest act of fraud or cheating. Objectives The objective of this study is to assess the perception of plagiarism among medical postgraduate (PG) students. Materials & Methods An educational observational study was conducted among second-year PG students to assess their perception of plagiarism, using pre-test and post-test questionnaires administered around an orientation session on plagiarism and data analysis held before the start of dissertation work. The questions covered awareness of and attitudes towards plagiarism. Results A survey involving 91 PG students assessed their understanding of plagiarism. Remarkably, the majority (97.7%) demonstrated awareness of plagiarism, yet only 18.6% had authored a published article. About 30% of the students had resorted to plagiarism at some point during their academic pursuits. Approximately 70.9% of the PG students were acquainted with the university's plagiarism policy. The survey highlighted a notable enhancement in plagiarism awareness among PG students, with their attitudes toward plagiarism evolving after participating in the session. Conclusion Plagiarism can be avoided by implementing rigorous guidelines, ensuring strict policy adherence, and providing comprehensive training before commencing work. Training, retraining, and strict institutional policies will help increase awareness about plagiarism and reduce the percentage of plagiarism in scientific writing.

4.
JMIR Med Educ ; 10: e53308, 2024 Jul 08.
Article in English | MEDLINE | ID: mdl-38989841

ABSTRACT

Background: The introduction of ChatGPT by OpenAI has garnered significant attention. Among its capabilities, paraphrasing stands out. Objective: This study aims to investigate the levels of plagiarism in the paraphrased text produced by this chatbot. Methods: Three texts of varying lengths were presented to ChatGPT. ChatGPT was then instructed to paraphrase the provided texts using five different prompts. In the subsequent stage of the study, the texts were divided into separate paragraphs, and ChatGPT was requested to paraphrase each paragraph individually. Lastly, in the third stage, ChatGPT was asked to paraphrase the texts it had previously generated. Results: The average plagiarism rate in the texts generated by ChatGPT was 45% (SD 10%). ChatGPT exhibited a substantial reduction in plagiarism for the provided texts (mean difference -0.51, 95% CI -0.54 to -0.48; P<.001). Furthermore, when comparing the second attempt with the initial attempt, a significant decrease in the plagiarism rate was observed (mean difference -0.06, 95% CI -0.08 to -0.03; P<.001). The number of paragraphs in the texts demonstrated a noteworthy association with the percentage of plagiarism, with texts consisting of a single paragraph exhibiting the lowest plagiarism rate (P<.001). Conclusions: Although ChatGPT demonstrates a notable reduction of plagiarism within texts, the existing levels of plagiarism remain relatively high. This underscores a crucial caution for researchers when incorporating this chatbot into their work.


Subject(s)
Plagiarism, Humans, Writing
5.
Heliyon ; 10(12): e32976, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38984302

ABSTRACT

The use of AI-generated texts has grown extensively since the advent of large language models. Although the use of AI text generators, such as ChatGPT, is beneficial, it also threatens academic integrity, as students may misuse it. In this work, we propose a technique leveraging the intrinsic stylometric features of documents to detect ChatGPT-based plagiarism. The stylometric features were normalized and fed to classical classifiers, such as k-Nearest Neighbors, Decision Tree, and Naïve Bayes, as well as ensemble classifiers, such as XGBoost and Stacking. A thorough examination of the classifiers was conducted using cross-fold validation, hyperparameter tuning, and multiple training iterations. The results show the efficacy of both classical and ensemble learning classifiers in distinguishing between human and ChatGPT writing styles, with noteworthy performance by XGBoost, which achieved 100% accuracy, recall, and precision. Moreover, the proposed XGBoost classifier outperformed the state-of-the-art result on the same dataset and same classifier, highlighting the superiority of the proposed style-based feature extraction method over TF-IDF techniques. The ensemble learning classifiers were also applied to a generated dataset of mixed texts, in which paragraphs are written by ChatGPT and humans. The results show that 98% of the documents were classified correctly as either mixed or human. The final contribution is the authorship attribution of the paragraphs of a single document, where the accuracy reached 92.3%.
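As a hedged, stdlib-only sketch of the stylometric idea in entry 5, the example below derives a few simple style features per document and classifies with one nearest neighbour; the features, text snippets, and labels are invented stand-ins for the study's richer feature set and classifiers (k-NN, Decision Tree, Naïve Bayes, XGBoost, Stacking):

```python
# Represent each document by simple stylometric features, then classify a
# query document by its nearest labelled neighbour in feature space.
import re
from math import sqrt

def stylometric_features(text):
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return (
        len(words) / max(len(sentences), 1),                      # mean sentence length
        sum(map(len, words)) / max(len(words), 1),                # mean word length
        len({w.lower() for w in words}) / max(len(words), 1),     # type-token ratio
    )

def nearest_neighbour_label(query, labelled):
    dist = lambda a, b: sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labelled, key=lambda item: dist(query, item[0]))[1]

# Invented training snippets with known provenance.
train = [
    (stylometric_features(
        "The results, frankly, surprised us all. Nobody expected it."), "human"),
    (stylometric_features(
        "Furthermore, it is important to note that the aforementioned "
        "considerations collectively demonstrate the significance of the topic."), "ai"),
]

query = stylometric_features(
    "It is worth noting that these findings comprehensively illustrate "
    "the relevance of the subject matter.")
print(nearest_neighbour_label(query, train))  # → ai
```

In a real setting the features would be normalized and the classifier trained on thousands of documents, but the pipeline shape (feature extraction, then classification) is the same.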

6.
J Med Internet Res ; 26: e52001, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924787

ABSTRACT

BACKGROUND: Due to recent advances in artificial intelligence (AI), language model applications can generate logical text output that is difficult to distinguish from human writing. ChatGPT (OpenAI) and Bard (subsequently rebranded as "Gemini"; Google AI) were developed using distinct approaches, but little has been studied about the differences in their capability to generate abstracts. The use of AI to write scientific abstracts in the field of spine surgery is at the center of much debate and controversy. OBJECTIVE: The objective of this study is to assess the reproducibility of the structured abstracts generated by ChatGPT and Bard compared to human-written abstracts in the field of spine surgery. METHODS: In total, 60 abstracts on spine topics were randomly selected from 7 reputable journals and used as ChatGPT and Bard input statements to generate abstracts based on supplied paper titles. A total of 174 abstracts, divided into human-written abstracts, ChatGPT-generated abstracts, and Bard-generated abstracts, were evaluated for compliance with the structured format of journal guidelines and consistency of content. The likelihood of plagiarism and AI output was assessed using the iThenticate and ZeroGPT programs, respectively. A total of 8 reviewers in the spinal field evaluated 30 randomly extracted abstracts to determine whether they were produced by AI or human authors. RESULTS: The proportion of abstracts that met journal formatting guidelines was greater among ChatGPT abstracts (34/60, 56.6%) compared with those generated by Bard (6/54, 11.1%; P<.001). However, a higher proportion of Bard abstracts (49/54, 90.7%) had word counts that met journal guidelines compared with ChatGPT abstracts (30/60, 50%; P<.001). The similarity index was significantly lower among ChatGPT-generated abstracts (20.7%) compared with Bard-generated abstracts (32.1%; P<.001).
The AI-detection program predicted that 21.7% (13/60) of the human group, 63.3% (38/60) of the ChatGPT group, and 87% (47/54) of the Bard group were possibly generated by AI, with an area under the curve value of 0.863 (P<.001). The mean detection rate by human reviewers was 53.8% (SD 11.2%), achieving a sensitivity of 56.3% and a specificity of 48.4%. A total of 56.3% (63/112) of the actual human-written abstracts and 55.9% (62/128) of AI-generated abstracts were recognized as human-written and AI-generated by human reviewers, respectively. CONCLUSIONS: Both ChatGPT and Bard can be used to help write abstracts, but most AI-generated abstracts are currently considered unethical due to high plagiarism and AI-detection rates. ChatGPT-generated abstracts appear to be superior to Bard-generated abstracts in meeting journal formatting guidelines. Because humans are unable to accurately distinguish abstracts written by humans from those produced by AI programs, it is crucial to exercise special caution and examine the ethical boundaries of using AI programs, including ChatGPT and Bard.


Subject(s)
Abstracting and Indexing, Spine, Humans, Spine/surgery, Abstracting and Indexing/standards, Abstracting and Indexing/methods, Reproducibility of Results, Artificial Intelligence, Writing/standards
7.
Am J Obstet Gynecol ; 231(2): 276.e1-276.e10, 2024 08.
Article in English | MEDLINE | ID: mdl-38710267

ABSTRACT

BACKGROUND: ChatGPT, a publicly available artificial intelligence large language model, has allowed for sophisticated artificial intelligence technology on demand. Indeed, use of ChatGPT has already begun to make its way into medical research. However, the medical community has yet to understand the capabilities and ethical considerations of artificial intelligence within this context, and unknowns exist regarding ChatGPT's writing abilities, accuracy, and implications for authorship. OBJECTIVE: We hypothesize that human reviewers and artificial intelligence detection software differ in their ability to correctly identify original published abstracts and artificial intelligence-written abstracts in the subjects of Gynecology and Urogynecology. We also suspect that concrete differences in writing errors, readability, and perceived writing quality exist between original and artificial intelligence-generated text. STUDY DESIGN: Twenty-five articles published in high-impact medical journals and a collection of Gynecology and Urogynecology journals were selected. ChatGPT was prompted to write 25 corresponding artificial intelligence-generated abstracts, providing the abstract title, journal-dictated abstract requirements, and select original results. The original and artificial intelligence-generated abstracts were reviewed by blinded Gynecology and Urogynecology faculty and fellows to identify the writing as original or artificial intelligence-generated. All abstracts were analyzed by publicly available artificial intelligence detection software GPTZero, Originality, and Copyleaks, and were assessed for writing errors and quality by artificial intelligence writing assistant Grammarly. 
RESULTS: A total of 157 reviews of 25 original and 25 artificial intelligence-generated abstracts were conducted by 26 faculty and 4 fellows; 57% of original abstracts and 42.3% of artificial intelligence-generated abstracts were correctly identified, yielding an average accuracy of 49.7% across all abstracts. All 3 artificial intelligence detectors rated the original abstracts as less likely to be artificial intelligence-written than the ChatGPT-generated abstracts (GPTZero, 5.8% vs 73.3%; P<.001; Originality, 10.9% vs 98.1%; P<.001; Copyleaks, 18.6% vs 58.2%; P<.001). The performance of the 3 artificial intelligence detection programs differed when analyzing all abstracts (P=.03), original abstracts (P<.001), and artificial intelligence-generated abstracts (P<.001). Grammarly text analysis identified more writing issues and correctness errors in original than in artificial intelligence abstracts, including a lower Grammarly score, reflective of poorer writing quality (82.3 vs 88.1; P=.006), more total writing issues (19.2 vs 12.8; P<.001), critical issues (5.4 vs 1.3; P<.001), confusing words (0.8 vs 0.1; P=.006), misspelled words (1.7 vs 0.6; P=.02), incorrect determiner use (1.2 vs 0.2; P=.002), and comma misuse (0.3 vs 0.0; P=.005). CONCLUSION: Human reviewers are unable to detect the subtle differences between human and ChatGPT-generated scientific writing because of artificial intelligence's ability to generate tremendously realistic text. Artificial intelligence detection software improves the identification of artificial intelligence-generated writing, but still lacks complete accuracy and requires programmatic improvements to achieve optimal detection.
Given that reviewers and editors may be unable to reliably detect artificial intelligence-generated texts, clear guidelines for reporting artificial intelligence use by authors and implementing artificial intelligence detection software in the review process will need to be established as artificial intelligence chatbots gain more widespread use.


Subject(s)
Artificial Intelligence, Gynecology, Urology, Humans, Abstracting and Indexing, Periodicals as Topic, Software, Writing, Authorship
8.
J Plast Reconstr Aesthet Surg ; 93: 136-139, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38691949

ABSTRACT

BACKGROUND: Various studies of retracted publications have determined that the rate of retraction has increased in recent years. Although this trend may apply to any field, there is a paucity of literature exploring the publication of erroneous studies within plastic and reconstructive surgery. The present study aims to identify trends in the frequency of and reasons for retraction of plastic and reconstructive surgery studies, with analysis by subspecialty and journal. METHODS: A database search was conducted for retracted papers within plastic and reconstructive surgery. The initial search yielded 2347 results, which were analyzed by two independent reviewers. Seventy-seven studies were jointly identified for data collection. RESULTS: The most common reasons for retraction were duplication (n = 20, 25.9%), request of author (n = 15, 19.5%), plagiarism (n = 9, 11.6%), error (n = 9, 11.6%), fraud (n = 2, 2.6%), and conflict of interest (n = 1, 1.3%). Fifteen were basic science studies (19.4%), 58 were clinical science studies (75.3%), and 4 were not categorized (5.2%). Subspecialties of retracted papers were maxillofacial (n = 29, 37.7%), reconstructive (n = 17, 22.0%), wound healing (n = 8, 10.4%), burn (n = 6, 7.8%), esthetics (n = 5, 6.5%), breast (n = 3, 3.9%), and trauma (n = 1, 1.3%). The mean impact factor was 2.9, and the average time from publication to retraction was 32 months. CONCLUSION: Analysis of retracted plastic surgery studies revealed a recent rise in the frequency of retractions, spanning a wide spectrum of journals and subspecialties.


Subject(s)
Plastic Surgery Procedures, Retraction of Publication as Topic, Surgery, Plastic, Humans, Surgery, Plastic/trends, Plastic Surgery Procedures/trends, Plastic Surgery Procedures/methods, Scientific Misconduct/statistics & numerical data, Biomedical Research, Plagiarism, Periodicals as Topic/statistics & numerical data
9.
J Empir Res Hum Res Ethics ; 19(1-2): 58-70, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38404000

ABSTRACT

The main purpose of this study was to translate the Plagiarism Attitude Scale into Turkish and validate it for use in Turkish settings, in order to better understand research integrity attitudes and awareness of the Turkish academic and student community, while also contributing an instrument for research in this area. The research was designed and conducted with 483 participants. In the process of adapting the scale to Turkish, language, content, and construct validity analyses were performed. Following the completion of the validity phase, the reliability of the scale was examined using Cronbach's alpha coefficient and the split-half method. The results indicate that the scale's language and content validity are deemed sufficient. According to the findings of the research, the Plagiarism Attitude Scale, in its adapted Turkish version, is considered a valid and reliable tool. The use of this Turkish scale will assist local researchers in sharing their unique perspectives and help the international community better understand research ethics concerns in Türkiye. Additionally, this scale will serve as a valuable resource for planning educational programs.


Subject(s)
Language, Plagiarism, Humans, Reproducibility of Results, Turkey, Surveys and Questionnaires, Psychometrics
10.
J Osteopath Med ; 124(5): 187-194, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38407191

ABSTRACT

CONTEXT: This narrative review article explores research integrity and the implications of scholarly work in medical education. The paper describes how the current landscape of medical education emphasizes research and scholarly activity for medical students, resident physicians, and faculty physician educators. There is a gap in the existing literature that fully explores research integrity, the challenges surrounding the significant pressure to perform scholarly activity, and the potential for ethical lapses by those involved in medical education. OBJECTIVES: The objectives of this review article are to provide a background on authorship and publication safeguards, outline common types of research misconduct, describe the implications of publication in medical education, discuss the consequences of ethical breaches, and outline possible solutions to promote research integrity in academic medicine. METHODS: To complete this narrative review, the authors explored the current literature utilizing multiple databases beginning in June of 2021, and they completed the literature review in January of 2023. To capture the wide scope of the review, numerous searches were performed. A number of Medical Subject Headings (MeSH) terms were utilized to identify relevant articles. The MeSH terms included "scientific misconduct," "research misconduct," "authorship," "plagiarism," "biomedical research/ethics," "faculty, medical," "fellowships and scholarships," and "internship and residency." Additional references were accessed to include medical school and residency accreditation standards, residency match statistics, regulatory guidelines, and standard definitions. RESULTS: Within the realm of academic medicine, research misconduct and misrepresentation continue to occur without clear solutions. There is a wide range of severity in breaches of research integrity, ranging from minor infractions to fraud. 
Throughout the medical education system in the United States, there is pressure to publish research and scholarly work. Higher rates of publications are associated with a successful residency match for students and academic promotion for faculty physicians. For those who participate in research misconduct, there is a multitude of potential adverse consequences. Potential solutions to ensure research integrity exist but are not without barriers to implementation. CONCLUSIONS: Pressure in the world of academic medicine to publish contributes to the potential for research misconduct and authorship misrepresentation. Lapses in research integrity can result in a wide range of potentially adverse consequences for the offender, their institution, the scientific community, and the public. If adopted, universal research integrity policies and procedures could make major strides in eliminating research misconduct in the realm of academic medicine.


Subject(s)
Publishing, Scientific Misconduct, Scientific Misconduct/ethics, Publishing/ethics, Publishing/standards, Humans, Authorship, Biomedical Research/ethics, Biomedical Research/standards, Education, Medical/standards, Ethics, Research
11.
Sci Eng Ethics ; 30(1): 4, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38345671

ABSTRACT

The past decade has seen extensive research on the systematic causes of research misconduct. Meanwhile, less attention has been paid to the variation in academic misconduct between research fields, as most empirical studies focus on one particular discipline. We propose that academic discipline is one of several systematic factors that might contribute to academic misbehavior. Drawing on a neo-institutional approach, we argue that in developing countries the norm of textual originality has drawn unequal support across research fields, depending on each field's level of internationalization. Using plagiarism detection software, we analyzed 2,405 doctoral dissertations randomly selected from all dissertations defended in Russia between 2007 and 2015. We measured the globalization of each academic discipline by calculating the share of publications indexed in the global citation database relative to overall output. Our results showed that, with an average share of detected borrowings of over 19%, the incidence of plagiarism in Russia is remarkably higher than in Western countries. Overall, disciplines closely follow the pattern of higher globalization being associated with a lower percentage of borrowed text. We also found that plagiarism is less prevalent at research-oriented institutions supporting global ethical standards. Our findings suggest that it might be misleading to measure the prevalence of academic misconduct in developing countries without paying attention to variations at the disciplinary level.


Subject(s)
Plagiarism, Scientific Misconduct, Organizations, Software
12.
Data Brief ; 52: 109857, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38161660

ABSTRACT

Plagiarism detection (PD) is the process of identifying instances where someone has presented another person's work or ideas as their own. Plagiarism detection is categorized into two types. (i) Intrinsic plagiarism detection primarily concerns the assessment of authorship consistency within a single document, aiming to identify portions of the text that may have been copied or paraphrased from elsewhere within the same document. Author clustering, closely related to intrinsic plagiarism detection, involves grouping documents based on their stylistic and linguistic characteristics to identify common authors or sources within a given dataset. (ii) Extrinsic plagiarism detection, on the other hand, involves the comparative analysis of a suspicious document against a set of external source documents, seeking shared phrases, sentences, or paragraphs, often referred to as text reuse or verbatim copying. Detecting plagiarism in documents is a long-established task in NLP with remarkable contributions in multiple applications. A great deal of research has already been conducted for English and other languages, but the Urdu language needs far more attention, especially in the intrinsic plagiarism detection domain. The major reason is that Urdu is a low-resource language, and unfortunately no high-quality benchmark corpus has been available for intrinsic plagiarism detection in Urdu. This study presents a high-quality benchmark corpus comprising 10,872 documents. The corpus is structured at two granularity levels: sentence level and paragraph level. This dataset serves multifaceted purposes, facilitating intrinsic plagiarism detection, verbatim text reuse identification, and author clustering in the Urdu language. It also holds significance for natural language processing researchers and practitioners, as it facilitates the development of specialized plagiarism detection models tailored to Urdu.
These models can play a vital role in education and publishing by improving the accuracy of plagiarism detection, effectively addressing a gap and enhancing the overall ability to identify copied content in Urdu writing.

13.
Dev World Bioeth ; 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38193632

ABSTRACT

We aimed to conduct a scoping review to assess the profile of retracted health sciences articles authored by individuals affiliated with academic institutions in Latin America and the Caribbean (LAC). We systematically searched seven databases (PubMed, Scopus, Web of Science, Embase, Medline/Ovid, Scielo, and LILACS). We included articles published in peer-reviewed journals between 2003 and 2022 that had at least one author with an institutional affiliation in LAC. Data were collected on the year of publication, study design, authors' countries of origin, number of authors, subject matter of the manuscript, scientific journals of publication, retraction characteristics, and reasons for retraction. We included 147 articles, with observational studies being the most common design (41.5%). The LAC countries with the highest number of retractions were Brazil (n = 69), Colombia (n = 16), and Mexico (n = 15). The areas of study with the highest number of retractions were infectology (n = 21) and basic sciences (n = 15). A retraction label was applied to 89.1% of the articles, 70.7% were retracted by journal editors, and 89.1% followed international retraction guidelines. The primary reasons for retraction included errors in procedures or data collection (n = 39), inconsistency in results or conclusions (n = 37), plagiarism (n = 21), and suspected scientific fraud (n = 19). In conclusion, most retractions of scientific publications in health sciences in LAC adhered to international guidelines and were linked to methodological issues in execution and scientific misconduct. Efforts should be directed toward ensuring the integrity of scientific research in the field of health.

14.
Account Res ; : 1-16, 2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38290700

ABSTRACT

The present study explores the major reasons for committing plagiarism, as reported in the published literature. One hundred sixty-six peer-reviewed articles retrieved from the Scopus database were examined to identify studies exploring the most common reasons for academic cheating among students and researchers across disciplines in higher education. An analysis of the collected literature reveals that 19 studies were conducted to identify the perceived reasons for committing plagiarism. Four of these studies examined similar constructs, namely: busy schedule, homework overload, and laziness; easy accessibility of electronic resources; poor knowledge of research writing and correct citation; and lack of serious penalties. The pooled mean and standard deviation of the four studies reveal that easy accessibility of electronic resources (Mean = 3.6, SD = 0.81), unawareness of instructions (Mean = 3.0, SD = 0.89), and busy schedule, homework overload, and laziness (Mean = 2.89, SD = 1.0) are the most important perceived reasons for committing plagiarism. The study findings could help create effective interventions and robust anti-plagiarism policies for academic institutions, administrators, and policymakers in detecting academic dishonesty while emphasizing the value of integrity in academic pursuits.
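The pooling of per-study means and standard deviations described in entry 14 can be sketched mechanically; the example below uses the standard combined-variance formula with invented sample sizes, since the abstract does not report per-study ns:

```python
# Pool per-study means and SDs: the pooled mean is the sample-size-weighted
# mean, and the pooled variance combines within-study variance with
# between-study spread around the pooled mean.
from math import sqrt

# Hypothetical (n, mean, sd) triples for four studies rating one perceived
# reason for plagiarism on a 1-5 Likert scale.
studies = [(120, 3.7, 0.8), (85, 3.4, 0.9), (200, 3.6, 0.7), (60, 3.8, 0.85)]

n_total = sum(n for n, _, _ in studies)
pooled_mean = sum(n * m for n, m, _ in studies) / n_total

# Combined variance: within-study sums of squares plus each study's
# deviation from the pooled mean, divided by total df.
pooled_var = sum((n - 1) * sd ** 2 + n * (m - pooled_mean) ** 2
                 for n, m, sd in studies) / (n_total - 1)
pooled_sd = sqrt(pooled_var)

print(f"pooled mean = {pooled_mean:.2f}, pooled SD = {pooled_sd:.2f}")
```

With these invented inputs the pooled mean lands near the individual study means, as expected, while the pooled SD slightly exceeds the average within-study SD because of between-study spread.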

16.
Int J Qual Stud Health Well-being ; 19(1): 2295151, 2024 Dec.
Article in English | MEDLINE | ID: mdl-38126140

ABSTRACT

Purpose: The purpose of this article is to explore the interrelationship between research ethics and research integrity with a focus on the primary forms of research misconduct, including plagiarism, fabrication, and falsification. It also details the main factors for their occurrence and the possible ways of mitigating their use among scholars. Methods: The method employed a detailed examination of the main ethical dilemmas, as delineated in the literature, as well as the factors leading to these ethical breaches and the strategies to mitigate them. Further, the teaching experiences of the primary author are reflected in the development of the model. Results: The results of this article are represented in a model illustrating the interrelationship between research ethics and research integrity. Further, a significant aspect of our article is the identification of novel forms of research misconduct concerning the use of irrelevant or forced citations or references. Conclusion: In conclusion, the article highlights the substantial positive effects that adherence to research ethics and integrity has on the academic well-being of scholars.


Subject(s)
Biomedical Research, Scientific Misconduct, Humans, Plagiarism, Research Ethics
17.
J Nurs Scholarsh ; 56(3): 478-485, 2024 05.
Article in English | MEDLINE | ID: mdl-38124265

ABSTRACT

INTRODUCTION: The output of scholarly publications in the scientific literature has increased exponentially in recent years, and this growth has been accompanied by an increase in retractions. Although some of these may be attributed to publishing errors, many are the result of unsavory research practices. The purposes of this study were to identify the number of retracted articles in nursing and the reasons for the retractions, to analyze the retraction notices, and to determine the length of time for an article in nursing to be retracted. DESIGN: This was an exploratory study. METHODS: A search of PubMed/MEDLINE, the Cumulative Index to Nursing and Allied Health Literature, and the Retraction Watch database was conducted to identify retracted articles in nursing and their retraction notices. RESULTS: Between 1997 and 2022, 123 articles published in the nursing literature were retracted. Ten different reasons for retraction were used to categorize these articles, with nearly one-third of the retractions (n = 37, 30.1%) not specifying a reason. Sixty-eight percent (n = 77) were retracted because of an actual or potential ethical concern: duplicate publication, data issues, plagiarism, authorship issues, or copyright. CONCLUSION: Nurses rely on nursing-specific scholarly literature as evidence for clinical decisions. The findings demonstrated that retractions are increasing within the published nursing literature, and it was evident that retraction notices do not prevent previously published work from being cited. This study addressed a gap in knowledge about article retractions specific to nursing.


Subject(s)
Nursing Research, Retraction of Publication as Topic, Humans, Scientific Misconduct/statistics & numerical data, Periodicals as Topic/statistics & numerical data, Publishing/statistics & numerical data, Plagiarism
18.
Colomb Med (Cali) ; 54(3): e1015868, 2023.
Article in English | MEDLINE | ID: mdl-38089825

ABSTRACT

This statement revises our earlier "WAME Recommendations on ChatGPT and Chatbots in Relation to Scholarly Publications" (January 20, 2023). The revision reflects the proliferation of chatbots and their expanding use in scholarly publishing over the last few months, as well as emerging concerns regarding lack of authenticity of content when using chatbots. These recommendations are intended to inform editors and help them develop policies for the use of chatbots in papers published in their journals. They aim to help authors and reviewers understand how best to attribute the use of chatbots in their work and to address the need for all journal editors to have access to manuscript screening tools. In this rapidly evolving field, we will continue to modify these recommendations as the software and its applications develop.




Subject(s)
Artificial Intelligence, Publishing, Humans
19.
J Korean Med Sci ; 38(47): e405, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38050915

ABSTRACT

The concept of research integrity (RI) refers to a set of moral and ethical standards that serve as the foundation for the execution of research activities. Integrity in research means incorporating the principles of honesty, transparency, and respect for ethical standards and norms throughout all stages of the research endeavor, encompassing study design, data collection, analysis, reporting, and publishing. Preserving RI is of utmost importance to uphold the credibility and amplify the influence of scientific research, and to prevent and deal with instances of scientific misconduct. Researchers, institutions, journals, and readers share responsibility for preserving RI. Researchers must adhere to the highest ethical standards. Institutions have a role in establishing an atmosphere that supports integrity while providing useful guidance, instruction, and assistance to researchers. Editors and reviewers act as gatekeepers, upholding quality and ethical standards in the dissemination of research results through publishing. Readers play a key role in the detection and reporting of fraudulent activity by critically evaluating content. The struggle against scientific misconduct is multidimensional and continuous; it requires a collaborative effort and adherence to the principles of honesty, transparency, and rigorous science. By supporting a culture of RI, the scientific community can preserve its core principles and continue to contribute to society's well-being. RI not only aids present research but also lays the foundation for future scientific advancements.


Subject(s)
Biomedical Research, Scientific Misconduct, Humans, Publishing, Research Design, Researchers
20.
J Med Internet Res ; 25: e51229, 2023 12 25.
Article in English | MEDLINE | ID: mdl-38145486

ABSTRACT

BACKGROUND: ChatGPT may act as a research assistant, helping organize the direction of thinking and summarize research findings. However, few studies have examined the quality, similarity (how closely generated abstracts match the originals), and accuracy of abstracts generated by ChatGPT when researchers provide full-text basic research papers. OBJECTIVE: We aimed to assess the applicability of an artificial intelligence (AI) model in generating abstracts for basic preclinical research. METHODS: We selected 30 basic research papers from Nature, Genome Biology, and Biological Psychiatry. Excluding the abstracts, we input the full text into ChatPDF, an application of a language model based on ChatGPT, and prompted it to generate abstracts in the same style as the original papers. A total of 8 experts were invited to evaluate the quality of these abstracts (on a Likert scale of 0-10) and to identify, in a blinded fashion, which abstracts were generated by ChatPDF. These abstracts were also evaluated for their similarity to the original abstracts and the accuracy of the AI-generated content. RESULTS: The quality of the ChatGPT-generated abstracts was lower than that of the actual abstracts (10-point Likert scale: mean 4.72, SD 2.09 vs mean 8.09, SD 1.03; P<.001). The difference in quality was large in the unstructured format (mean difference -4.33; 95% CI -4.79 to -3.86; P<.001) but smaller in the 4-subheading structured format (mean difference -2.33; 95% CI -2.79 to -1.86). Among the 30 ChatGPT-generated abstracts, 3 reached wrong conclusions, and 10 were flagged as AI content. The mean percentage of similarity between the original and the generated abstracts was low (2.10%-4.40%). The blinded reviewers achieved a 93% (224/240) accuracy rate in identifying which abstracts were written using ChatGPT. CONCLUSIONS: Using ChatGPT to generate a scientific abstract may not raise similarity issues when real full texts written by humans are used. However, the quality of the ChatGPT-generated abstracts was suboptimal, and their accuracy was not 100%.
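The similarity percentages in this abstract come from dedicated detection tools; as a rough sketch of what such a score measures, a simple word n-gram overlap can be computed as follows (the function and method are illustrative assumptions, not the study's actual tooling):

```python
def ngram_similarity(generated, original, n=3):
    """Percentage of word n-grams in `generated` that also occur in
    `original` -- a crude proxy for text-overlap similarity scores."""
    def grams(text):
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    g_gen, g_orig = grams(generated), grams(original)
    if not g_gen:
        return 0.0
    return 100.0 * len(g_gen & g_orig) / len(g_gen)
```

Under this kind of measure, a low score (like the 2.10%-4.40% reported) indicates that the generated text shares few exact phrase sequences with the original.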


Subject(s)
Artificial Intelligence, Research, Humans, Cross-Sectional Studies, Researchers, Language
...