Results 1 - 20 of 369
1.
Am J Obstet Gynecol; 231(2): 276.e1-276.e10, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38710267

ABSTRACT

BACKGROUND: ChatGPT, a publicly available artificial intelligence large language model, has allowed for sophisticated artificial intelligence technology on demand. Indeed, use of ChatGPT has already begun to make its way into medical research. However, the medical community has yet to understand the capabilities and ethical considerations of artificial intelligence within this context, and unknowns exist regarding ChatGPT's writing abilities, accuracy, and implications for authorship. OBJECTIVE: We hypothesize that human reviewers and artificial intelligence detection software differ in their ability to correctly identify original published abstracts and artificial intelligence-written abstracts in the subjects of Gynecology and Urogynecology. We also suspect that concrete differences in writing errors, readability, and perceived writing quality exist between original and artificial intelligence-generated text. STUDY DESIGN: Twenty-five articles published in high-impact medical journals and a collection of Gynecology and Urogynecology journals were selected. ChatGPT was prompted to write 25 corresponding artificial intelligence-generated abstracts, providing the abstract title, journal-dictated abstract requirements, and select original results. The original and artificial intelligence-generated abstracts were reviewed by blinded Gynecology and Urogynecology faculty and fellows to identify the writing as original or artificial intelligence-generated. All abstracts were analyzed by the publicly available artificial intelligence detection software GPTZero, Originality, and Copyleaks, and were assessed for writing errors and quality by the artificial intelligence writing assistant Grammarly. RESULTS: A total of 157 reviews of 25 original and 25 artificial intelligence-generated abstracts were conducted by 26 faculty and 4 fellows; 57% of original abstracts and 42.3% of artificial intelligence-generated abstracts were correctly identified, yielding an average accuracy of 49.7% across all abstracts. All 3 artificial intelligence detectors rated the original abstracts as less likely to be artificial intelligence-written than the ChatGPT-generated abstracts (GPTZero, 5.8% vs 73.3%; P<.001; Originality, 10.9% vs 98.1%; P<.001; Copyleaks, 18.6% vs 58.2%; P<.001). The performance of the 3 artificial intelligence detection software tools differed when analyzing all abstracts (P=.03), original abstracts (P<.001), and artificial intelligence-generated abstracts (P<.001). Grammarly text analysis identified more writing issues and correctness errors in original than in artificial intelligence abstracts, including a lower Grammarly score reflective of poorer writing quality (82.3 vs 88.1; P=.006), more total writing issues (19.2 vs 12.8; P<.001), critical issues (5.4 vs 1.3; P<.001), confusing words (0.8 vs 0.1; P=.006), misspelled words (1.7 vs 0.6; P=.02), incorrect determiner use (1.2 vs 0.2; P=.002), and comma misuse (0.3 vs 0.0; P=.005). CONCLUSION: Human reviewers are unable to detect the subtle differences between human and ChatGPT-generated scientific writing because of artificial intelligence's ability to generate tremendously realistic text. Artificial intelligence detection software improves the identification of artificial intelligence-generated writing but still lacks complete accuracy and requires programmatic improvements to achieve optimal detection. Given that reviewers and editors may be unable to reliably detect artificial intelligence-generated texts, clear guidelines for reporting artificial intelligence use by authors and for implementing artificial intelligence detection software in the review process will need to be established as artificial intelligence chatbots gain more widespread use.


Subjects
Artificial Intelligence, Gynecology, Urology, Humans, Abstracting and Indexing, Periodicals as Topic, Software, Writing, Authorship
2.
J Med Internet Res; 26: e52001, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924787

ABSTRACT

BACKGROUND: Due to recent advances in artificial intelligence (AI), language model applications can generate logical text output that is difficult to distinguish from human writing. ChatGPT (OpenAI) and Bard (subsequently rebranded as "Gemini"; Google AI) were developed using distinct approaches, but little has been studied about the difference in their capability to generate abstracts. The use of AI to write scientific abstracts in the field of spine surgery is the center of much debate and controversy. OBJECTIVE: The objective of this study is to assess the reproducibility of the structured abstracts generated by ChatGPT and Bard compared to human-written abstracts in the field of spine surgery. METHODS: In total, 60 abstracts dealing with spine sections were randomly selected from 7 reputable journals and used as ChatGPT and Bard input statements to generate abstracts based on supplied paper titles. A total of 174 abstracts, divided into human-written abstracts, ChatGPT-generated abstracts, and Bard-generated abstracts, were evaluated for compliance with the structured format of journal guidelines and consistency of content. The likelihood of plagiarism and AI output was assessed using the iThenticate and ZeroGPT programs, respectively. A total of 8 reviewers in the spine field evaluated 30 randomly extracted abstracts to determine whether they were produced by AI or human authors. RESULTS: The proportion of abstracts that met journal formatting guidelines was greater among ChatGPT abstracts (34/60, 56.6%) compared with those generated by Bard (6/54, 11.1%; P<.001). However, a higher proportion of Bard abstracts (49/54, 90.7%) had word counts that met journal guidelines compared with ChatGPT abstracts (30/60, 50%; P<.001). The similarity index was significantly lower among ChatGPT-generated abstracts (20.7%) compared with Bard-generated abstracts (32.1%; P<.001). The AI-detection program predicted that 21.7% (13/60) of the human group, 63.3% (38/60) of the ChatGPT group, and 87% (47/54) of the Bard group were possibly generated by AI, with an area under the curve value of 0.863 (P<.001). The mean detection rate by human reviewers was 53.8% (SD 11.2%), achieving a sensitivity of 56.3% and a specificity of 48.4%. A total of 56.3% (63/112) of the actual human-written abstracts and 55.9% (62/128) of AI-generated abstracts were recognized as human-written and AI-generated by human reviewers, respectively. CONCLUSIONS: Both ChatGPT and Bard can be used to help write abstracts, but most AI-generated abstracts are currently considered unethical due to high plagiarism and AI-detection rates. ChatGPT-generated abstracts appear to be superior to Bard-generated abstracts in meeting journal formatting guidelines. Because humans are unable to accurately distinguish abstracts written by humans from those produced by AI programs, it is crucial to exercise special caution and examine the ethical boundaries of using AI programs, including ChatGPT and Bard.
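
The reviewer sensitivity and specificity reported here are plain confusion-matrix ratios. The short Python sketch below (not code from the study) recomputes them from the counts quoted in the abstract; treating "human-written" as the positive class is an assumption of the sketch, chosen because it reproduces the quoted figures.

```python
# Minimal sketch (not code from the study): recomputing the blinded reviewers'
# sensitivity and specificity from the counts quoted in the abstract.
# Assumption: "human-written" is treated as the positive class, which is the
# convention that reproduces the quoted 56.3% / 48.4% figures.

def confusion_metrics(correct_pos, total_pos, correct_neg, total_neg):
    sensitivity = correct_pos / total_pos  # positives correctly identified
    specificity = correct_neg / total_neg  # negatives correctly identified
    accuracy = (correct_pos + correct_neg) / (total_pos + total_neg)
    return sensitivity, specificity, accuracy

# 63/112 human-written abstracts were called "human";
# 62/128 AI-generated abstracts were called "AI".
sens, spec, acc = confusion_metrics(correct_pos=63, total_pos=112,
                                    correct_neg=62, total_neg=128)
print(f"sensitivity={sens:.4f}, specificity={spec:.4f}, accuracy={acc:.4f}")
# sensitivity=0.5625 (~56.3%), specificity=0.4844 (~48.4%), accuracy=0.5208
```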


Subjects
Abstracting and Indexing, Spine, Humans, Spine/surgery, Abstracting and Indexing/standards, Abstracting and Indexing/methods, Reproducibility of Results, Artificial Intelligence, Writing/standards
3.
J Nurs Scholarsh; 56(3): 478-485, 2024 May.
Article in English | MEDLINE | ID: mdl-38124265

ABSTRACT

INTRODUCTION: The output of scholarly publications in the scientific literature has increased exponentially in recent years. This increase in literature has been accompanied by an increase in retractions. Although some of these may be attributed to publishing errors, many are the result of unsavory research practices. The purposes of this study were to identify the number of retracted articles in nursing and the reasons for the retractions, analyze the retraction notices, and determine the length of time for an article in nursing to be retracted. DESIGN: This was an exploratory study. METHODS: A search of PubMed/MEDLINE, the Cumulative Index to Nursing and Allied Health Literature, and Retraction Watch databases was conducted to identify retracted articles in nursing and their retraction notices. RESULTS: Between 1997 and 2022, 123 articles published in the nursing literature were retracted. Ten different reasons for retraction were used to categorize these articles, with one-third of the retractions (n = 37, 30.1%) not specifying a reason. Sixty-eight percent (n = 77) were retracted because of an actual or a potential ethical concern: duplicate publication, data issues, plagiarism, authorship issues, and copyright. CONCLUSION: Nurses rely on nursing-specific scholarly literature as evidence for clinical decisions. The findings demonstrated that retractions are increasing within the published nursing literature. In addition, it was evident that retraction notices do not prevent previously published work from being cited. This study addressed a gap in knowledge about article retractions specific to nursing.


Subjects
Nursing Research, Retraction of Publication as Topic, Humans, Scientific Misconduct/statistics & numerical data, Periodicals as Topic/statistics & numerical data, Publishing/statistics & numerical data, Plagiarism
4.
Dev World Bioeth; 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-38193632

ABSTRACT

We aimed to conduct a scoping review to assess the profile of retracted health sciences articles authored by individuals affiliated with academic institutions in Latin America and the Caribbean (LAC). We systematically searched seven databases (PubMed, Scopus, Web of Science, Embase, Medline/Ovid, Scielo, and LILACS). We included articles published in peer-reviewed journals between 2003 and 2022 that had at least one author with an institutional affiliation in LAC. Data were collected on the year of publication, study design, authors' countries of origin, number of authors, subject matter of the manuscript, scientific journals of publication, retraction characteristics, and reasons for retraction. We included 147 articles, the majority being observational studies (41.5%). The LAC countries with the highest number of retractions were Brazil (n = 69), Colombia (n = 16), and Mexico (n = 15). The areas of study with the highest number of retractions were infectology (n = 21) and basic sciences (n = 15). A retraction label was applied to 89.1% of the articles, 70.7% were retracted by journal editors, and 89.1% followed international retraction guidelines. The primary reasons for retraction included errors in procedures or data collection (n = 39), inconsistency in results or conclusions (n = 37), plagiarism (n = 21), and suspected scientific fraud (n = 19). In conclusion, most retractions of scientific publications in health sciences in LAC adhered to international guidelines and were linked to methodological issues in execution and scientific misconduct. Efforts should be directed toward ensuring the integrity of scientific research in the field of health.

5.
Sci Eng Ethics; 30(1): 4, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38345671

ABSTRACT

The past decade has seen extensive research carried out on the systematic causes of research misconduct. Simultaneously, less attention has been paid to the variation in academic misconduct between research fields, as most empirical studies focus on one particular discipline. We propose that academic discipline is one of several systematic factors that might contribute to academic misbehavior. Drawing on a neo-institutional approach, we argue that in developing countries, the norm of textual originality has not drawn equal support across research fields, depending on each field's level of internationalization. Using plagiarism detection software, we analyzed 2,405 doctoral dissertations randomly selected from all dissertations defended in Russia between 2007 and 2015. We measured the globalization of each academic discipline by calculating the share of publications indexed in the global citation database in relation to overall output. Our results showed that, with an average share of detected borrowings of over 19%, the incidence of plagiarism in Russia is remarkably higher than in Western countries. Overall, disciplines closely follow the pattern of higher globalization being associated with a lower percentage of borrowed text. We also found that plagiarism is less prevalent at research-oriented institutions supporting global ethical standards. Our findings suggest that it might be misleading to measure the prevalence of academic misconduct in developing countries without paying attention to variations at the disciplinary level.


Subjects
Plagiarism, Scientific Misconduct, Organizations, Software
6.
Eur Spine J; 32(11): 3704-3712, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37725162

ABSTRACT

PURPOSE: The number of articles retracted by peer-reviewed journals has increased in recent years. This study systematically reviews retracted publications in the spine surgery literature. METHODS: A search of PubMed MEDLINE, Ovid EMBASE, Retraction Watch, and the independent websites of 15 spine surgery-related journals from inception to September of 2022 was performed without language restrictions. PRISMA guidelines were followed, and title/abstract screening and full-text screening were conducted independently and in duplicate by two reviewers. Study characteristics and bibliometric information for each publication were extracted. RESULTS: Of 250 studies collected from the search, 65 met the inclusion criteria. The most common reason for retraction was data error (n = 15, 21.13%), followed by plagiarism (n = 14, 19.72%) and submission to another journal (n = 14, 19.72%). Most studies pertained to degenerative pathologies of the spine (n = 32, 80.00%). Most articles had no indication of retraction in their manuscript (n = 24, 36.92%), while others had a watermark or notice at the beginning of the article. The median number of citations per retracted publication was 10.0 (IQR 3-29), and the median 4-year impact factor of the journals was 5.05 (IQR 3.20-6.50). On multivariable linear regression, the difference in years from publication to retraction (p = 0.0343, β = 6.56, 95% CI 0.50-12.62) and the journal 4-year impact factor (p = 0.0029, β = 7.47, 95% CI 2.66-12.28) were positively associated with the total number of citations per retracted publication. Most articles originated from China (n = 30, 46.15%), followed by the United States (n = 12, 18.46%) and Germany (n = 3, 4.62%). The most common study design was the retrospective cohort study (n = 14, 21.54%). CONCLUSIONS: The retraction of publications in spine surgery has increased in recent years. Researchers consulting this body of literature should remain vigilant. Institutions and journals should collaborate to increase publication transparency and scientific integrity.
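
For readers unfamiliar with the reporting format above, the β coefficients and 95% confidence intervals come from an ordinary least-squares regression of citation counts on years-to-retraction and journal impact factor. The sketch below shows how such a model is typically fit with statsmodels; the variable names and synthetic data are assumptions for illustration, not the study's dataset.

```python
# Illustrative only: fitting a multivariable linear regression of citation
# counts on years-to-retraction and journal impact factor, in the style of the
# analysis described above. The toy data are synthetic, not the study's data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 65
df = pd.DataFrame({
    "years_to_retraction": rng.uniform(0, 10, n),
    "impact_factor_4yr": rng.uniform(1, 8, n),
})
# Synthetic outcome with positive associations, mimicking the reported direction.
df["citations"] = (6 * df["years_to_retraction"]
                   + 7 * df["impact_factor_4yr"]
                   + rng.normal(0, 15, n))

X = sm.add_constant(df[["years_to_retraction", "impact_factor_4yr"]])
model = sm.OLS(df["citations"], X).fit()
print(model.params)       # beta coefficients
print(model.conf_int())   # 95% confidence intervals
print(model.pvalues)      # p-values
```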


Subjects
Biomedical Research, Scientific Misconduct, Humans, Retrospective Studies, Plagiarism, Journal Impact Factor, Research Design
7.
J Med Internet Res; 25: e48529, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37801343

ABSTRACT

We examined the gender distribution of authors of retracted articles in 134 medical journals across 10 disciplines, compared it with the gender distribution of authors of all published articles, and found that women were underrepresented among authors of retracted articles, and, in particular, of articles retracted for misconduct.


Subjects
Biomedical Research, Periodicals as Topic, Scientific Misconduct, Female, Humans, Plagiarism, Retrospective Studies, Publications
8.
J Med Internet Res; 25: e51229, 2023 Dec 25.
Article in English | MEDLINE | ID: mdl-38145486

ABSTRACT

BACKGROUND: ChatGPT may act as a research assistant to help organize the direction of thinking and summarize research findings. However, few studies have examined the quality, similarity (abstracts being similar to the original one), and accuracy of the abstracts generated by ChatGPT when researchers provide full-text basic research papers. OBJECTIVE: We aimed to assess the applicability of an artificial intelligence (AI) model in generating abstracts for basic preclinical research. METHODS: We selected 30 basic research papers from Nature, Genome Biology, and Biological Psychiatry. Excluding abstracts, we inputted the full text into ChatPDF, an application of a language model based on ChatGPT, and we prompted it to generate abstracts with the same style as used in the original papers. A total of 8 experts were invited to evaluate the quality of these abstracts (based on a Likert scale of 0-10) and identify which abstracts were generated by ChatPDF, using a blind approach. These abstracts were also evaluated for their similarity to the original abstracts and the accuracy of the AI content. RESULTS: The quality of ChatGPT-generated abstracts was lower than that of the actual abstracts (10-point Likert scale: mean 4.72, SD 2.09 vs mean 8.09, SD 1.03; P<.001). The difference in quality was significant in the unstructured format (mean difference -4.33; 95% CI -4.79 to -3.86; P<.001) but minimal in the 4-subheading structured format (mean difference -2.33; 95% CI -2.79 to -1.86). Among the 30 ChatGPT-generated abstracts, 3 showed wrong conclusions, and 10 were identified as AI content. The mean percentage of similarity between the original and the generated abstracts was not high (2.10%-4.40%). The blinded reviewers achieved a 93% (224/240) accuracy rate in guessing which abstracts were written using ChatGPT. CONCLUSIONS: Using ChatGPT to generate a scientific abstract may not lead to issues of similarity when using real full texts written by humans. However, the quality of the ChatGPT-generated abstracts was suboptimal, and their accuracy was not 100%.


Subjects
Artificial Intelligence, Research, Humans, Cross-Sectional Studies, Research Personnel, Language
9.
J Korean Med Sci; 38(45): e373, 2023 Nov 20.
Article in English | MEDLINE | ID: mdl-37987104

ABSTRACT

Plagiarism is among the most prevalent forms of misconduct reported in scientific writing and a common cause of article retraction in scholarly journals. Plagiarism of ideas is not acceptable by any means. However, plagiarism of text is a matter of debate from culture to culture. Herein, I wish to offer a bird's-eye view of plagiarism, particularly plagiarism of text, in scientific writing. A text similarity score, as a signal of text plagiarism, is not an appropriate index on its own, and an expert should examine the similarity with sufficient scrutiny. Text recycling might be acceptable in scientific writing in certain instances, provided that the authors can correctly construe the text they borrow. With the introduction of artificial intelligence-based writing tools, which help authors to write their manuscripts, the incidence of text plagiarism might increase. However, after a while, when a universal artificial intelligence writing tool takes over, no one will need to worry about text plagiarism, as the incentive to commit plagiarism will be abolished, I believe.


Subjects
Plagiarism, Scientific Misconduct, Humans, Publishing, Artificial Intelligence, Writing
10.
J Korean Med Sci; 38(31): e240, 2023 Aug 07.
Article in English | MEDLINE | ID: mdl-37550808

ABSTRACT

Plagiarism is among the most commonly identified forms of scientific misconduct in submitted manuscripts. Some journals routinely check the level of text similarity in submitted manuscripts at the time of submission and reject the submission outright if the text similarity score exceeds a set cut-off value (e.g., 20%). Herein, I present a manuscript with 32% text similarity, yet without any instances of text plagiarism. This underlines the fact that text similarity is not necessarily tantamount to text plagiarism. Every instance of text similarity should be examined with scrutiny by a trained person in the editorial office. A high text similarity score does not always imply plagiarism; a low score, on the other hand, does not guarantee the absence of plagiarism. There is no text similarity cut-off that reliably implies text plagiarism.
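
To make the workflow criticized here concrete, the sketch below shows a similarity-score triage step at submission that uses the 20% cut-off mentioned in the editorial but routes flagged manuscripts to expert review rather than rejecting them automatically. The function and field names are hypothetical and not drawn from any real editorial system.

```python
# Hypothetical sketch of similarity-score triage at submission. The 20% cut-off
# mirrors the example in the abstract; names and thresholds are illustrative,
# not taken from any real submission system. In line with the editorial's
# argument, a high score triggers expert review rather than automatic rejection.
from dataclasses import dataclass

@dataclass
class Submission:
    manuscript_id: str
    similarity_score: float  # percentage reported by a similarity checker

def triage(sub: Submission, cutoff: float = 20.0) -> str:
    if sub.similarity_score > cutoff:
        # High similarity is a signal, not proof of plagiarism (e.g., the 32%
        # case described above): send to a trained editor for scrutiny.
        return "flag_for_expert_review"
    # A low score does not guarantee the absence of plagiarism either, so
    # routine editorial checks still apply.
    return "proceed_with_routine_checks"

print(triage(Submission("MS-001", similarity_score=32.0)))  # flag_for_expert_review
print(triage(Submission("MS-002", similarity_score=12.5)))  # proceed_with_routine_checks
```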


Subjects
Periodicals as Topic, Scientific Misconduct, Plagiarism
11.
J Korean Med Sci; 38(46): e390, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38013646

ABSTRACT

BACKGROUND: Retraction is a correction process for the scientific literature that acts as a barrier to the dissemination of articles that have serious faults or misleading data. The purpose of this study was to investigate the characteristics of retracted papers from Kazakhstan. METHODS: Utilizing data from Retraction Watch, this cross-sectional descriptive analysis documented all retracted papers from Kazakhstan without regard to publication dates. The following data were recorded: publication title, DOI number, number of authors, publication date, retraction date, source, publication type, subject category of publication, collaborating country, and retraction reason. Source index status, Scopus citation value, and Altmetric Attention Score were obtained. RESULTS: Following the search, a total of 92 retracted papers were discovered. One duplicate article was excluded, leaving 91 publications for analysis. Most articles were retracted in 2022 (n = 22) and 2018 (n = 19). Among the identified publications, 49 (53.9%) were research articles, 39 (42.9%) were conference papers, 2 (2.2%) were review articles, and 1 (1.1%) was a book chapter. Russia (n = 24) and China (n = 5) were the most frequent collaborating countries on the retracted publications. Fake or biased peer review (n = 38), plagiarism (n = 25), and duplication (n = 14) were the leading causes of retraction. CONCLUSION: The vast majority of the publications were research articles and conference papers. Russia was the leading collaborating country. The most prominent retraction reasons were fake or biased peer review, plagiarism, and duplication. Efforts to raise researchers' understanding of the grounds for retraction and of ethical research practices are required in Kazakhstan.


Subjects
Biomedical Research, Scientific Misconduct, Humans, Kazakhstan, Cross-Sectional Studies, Plagiarism, Peer Review, Publications
12.
J Korean Med Sci; 38(47): e405, 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38050915

ABSTRACT

The concept of research integrity (RI) refers to a set of moral and ethical standards that serve as the foundation for the execution of research activities. Integrity in research is the incorporation of principles of honesty, transparency, and respect for ethical standards and norms throughout all stages of the research endeavor, encompassing study design, data collection, analysis, reporting, and publishing. The preservation of RI is of utmost importance to uphold the credibility and amplify the influence of scientific research while also preventing and dealing with instances of scientific misconduct. Researchers, institutions, journals, and readers share responsibility for preserving RI. Researchers must adhere to the highest ethical standards. Institutions have a role in establishing an atmosphere that supports integrity ideals while also providing useful guidance, instruction, and assistance to researchers. Editors and reviewers act as protectors, upholding quality and ethical standards in the dissemination of research results through publishing. Readers play a key role in the detection and reporting of fraudulent activity by critically evaluating content. The struggle against scientific misconduct has multiple dimensions and is continuous. It requires a collaborative effort and adherence to the principles of honesty, transparency, and rigorous science. By supporting a culture of RI, the scientific community can preserve its core principles and continue to contribute appropriately to society's well-being. Such a culture not only aids present research but also lays the foundation for future scientific advances.


Subjects
Biomedical Research, Scientific Misconduct, Humans, Publishing, Research Design, Research Personnel
13.
J Korean Med Sci; 38(12): e88, 2023 Mar 27.
Article in English | MEDLINE | ID: mdl-36974397

ABSTRACT

Plagiarism is one of the most frequent forms of research misconduct in South and East Asian countries. This narrative review examines the factors contributing to research misconduct, emphasizing plagiarism, particularly in South, East, and Southeast Asian countries. We conducted a PubMed and Scopus search in January 2022 using the terms plagiarism, Asia, South Asia, East Asia, Southeast Asia, research misconduct, and retractions. Articles with missing abstracts, incomplete information about plagiarism, publication dates before 2010, and those unrelated to South, East, and Southeast Asian countries were excluded. The Retraction Watch database was searched for articles retracted between 9 January 2020 and 9 January 2022. A total of 159 articles were identified, of which 21 were included in the study using the database search criteria mentioned above. The review of articles identified a lack of training in scientific writing and research ethics, publication pressure, permissive attitudes, and inadequate regulatory measures as the primary reasons behind research misconduct in scientific publications. Plagiarism remains a common cause of unethical publications and retractions in South, East, and Southeast Asia. Researchers lack training in scientific writing, and substantial gaps exist in understanding the various forms of plagiarism, which contribute heavily to the problem. There is an urgent need to foster high standards of research ethics and to adhere to journal policies. Providing appropriate training in scientific writing to researchers may help improve knowledge of the different types of plagiarism and promote the use of antiplagiarism software, leading to a substantial reduction in the problem.


Subjects
Biomedical Research, Scientific Misconduct, Humans, Plagiarism, PubMed, Writing, Asia
14.
J Korean Med Sci; 38(40): e324, 2023 Oct 16.
Article in English | MEDLINE | ID: mdl-37846787

ABSTRACT

BACKGROUND: Retraction is an essential procedure for correcting the scientific literature and informing readers about articles containing significant errors or omissions. Ethical violations are one of the significant triggers of the retraction process. The objective of this study was to evaluate the characteristics of articles retracted from the medical literature due to ethical violations. METHODS: The Retraction Watch Database was utilized for this descriptive study. The 'ethical violations' and 'medicine' options were chosen. The date range was 2010 to 2023. The collected data included the number of authors, the date of publication and retraction, the journal of publication, the indexing status of the journal, the country of the corresponding author, the subject area of the article, and the particular retraction reasons. RESULTS: A total of 177 articles were analyzed. The most retractions were detected in 2019 (n = 29) and 2012 (n = 28). The median time between an article's first publication date and its retraction date was 647 (0-4,295) days. The leading countries were China (n = 47), the USA (n = 25), South Korea (n = 23), Iran (n = 14), and India (n = 12). The main causes of retraction were ethical approval issues (n = 65), data-related concerns (n = 51), informed consent issues (n = 45), and fake or biased peer review (n = 30). CONCLUSION: Unethical behavior is one of the most significant obstacles to scientific advancement. Obtaining appropriate ethics committee approvals and informed consent forms is crucial in ensuring the ethical conduct of medical research. It is the responsibility of journal editors to ensure that raw data are controlled and that peer review processes are conducted effectively. It is essential to educate young researchers on unethical practices and the negative outcomes that may result from them.


Subjects
Biomedical Research, Medicine, Scientific Misconduct, Humans, Peer Review, Data Collection, Plagiarism
15.
Health Info Libr J; 40(4): 440-446, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37806782

ABSTRACT

The artificial intelligence (AI) tool ChatGPT, which is based on a large language model (LLM), is gaining popularity in academic institutions, notably in the medical field. This article provides a brief overview of the capabilities of ChatGPT for medical writing and its implications for academic integrity. It lists AI generative tools, describes their common uses in medical writing, and lists AI generative text detection tools. It also offers recommendations for policymakers, information professionals, and medical faculty on the constructive use of AI generative tools and related technology, and highlights the role of health sciences librarians and educators in protecting students from generating text through ChatGPT in their academic work.


Subjects
Librarians, Medical Writing, Humans, Artificial Intelligence, Schools, Language
16.
Nurs Ethics; 9697330231200568, 2023 Oct 07.
Article in English | MEDLINE | ID: mdl-37804005

ABSTRACT

BACKGROUND: Minimal research has been done to determine how well European nursing students understand the core principles of academic integrity and how often they deviate from good academic practice. AIM: The aim of this study was to find out what educational needs nursing students have in terms of academic integrity. RESEARCH DESIGN: A quantitative cross-sectional study in the form of a survey of nursing students was conducted via questionnaire in the fall of 2020. PARTICIPANTS: The sample was composed of 79 students in the BScN and MScN programs at Zürich University of Applied Sciences. ETHICAL CONSIDERATIONS: An application for a non-competence clearance was approved by the Ethics Committee in Zurich (BASEC No. Req-2020-00868). The survey was anonymous, and informed consent was obtained prior to participation. RESULTS: The participants had a high level of confidence in their own knowledge but were in many cases unable to correctly identify clear-cut examples of misconduct and to differentiate them from questionable practices. About 13% of the participants admitted that during their university education they had copied shorter passages from other sources into their own text without marking them as quotes. CONCLUSIONS: The study documents extensive knowledge gaps among nursing students regarding both academic misconduct and questionable practices and indicates a need for improved academic integrity training.

17.
Entropy (Basel); 25(9), 2023 Aug 29.
Article in English | MEDLINE | ID: mdl-37761570

ABSTRACT

The majority of recent research on text similarity has focused on machine learning strategies to combat plagiarism in the educational environment. When an idea is copied rather than the exact wording, it becomes difficult to use a plagiarism detection (PD) system in practice, and the system fails. In cases such as active-to-passive conversion, phrase structure changes, synonym substitution, and sentence reordering, present approaches may not be adequate for plagiarism detection. In this article, semantic extraction and the quantum genetic algorithm (QGA) are integrated in a unified framework to identify idea plagiarism, with the aim of enhancing the performance of existing methods in terms of detection accuracy and computational time. Semantic similarity measures, which use the WordNet database to extract semantic information, are used to capture a document's idea. In addition, the QGA is adapted to identify the interconnected, cohesive sentences that effectively convey the source document's main idea. QGAs are formulated using the quantum computing paradigm based on qubits and the superposition of states. By using the qubit chromosome as a representation rather than the more traditional binary, numeric, or symbolic representations, the QGA is able to express a linear superposition of solutions, with the aim of increasing gene diversity. Due to its fast convergence and strong global search capacity, the QGA is well suited to a parallel structure. The proposed model was assessed using the PAN 13-14 dataset, and the results indicate the model's ability to achieve a significant detection improvement over some of the compared models. The proposed PD model achieves approximately 20%, 15%, and 10% increases in TPR, PPV, and F-score, respectively, compared with GA- and hierarchical GA (HGA)-based PD methods. Furthermore, the accuracy rate rises by approximately 10-15% for each increase in the number of samples in the dataset.
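
As a concrete illustration of the WordNet-based semantic similarity idea mentioned above, the sketch below scores two sentences with NLTK's path similarity. It is a simplified stand-in for the semantic-extraction step only; the greedy word matching and averaging are assumptions of this sketch, and none of it reproduces the paper's QGA framework.

```python
# Simplified illustration of WordNet-based sentence similarity (NLTK), in the
# spirit of the semantic-extraction step described above. This is NOT the
# paper's QGA framework: the greedy max-matching and averaging scheme below is
# an assumption of this sketch. Requires: pip install nltk
import nltk
from nltk.corpus import wordnet as wn

nltk.download("wordnet", quiet=True)

def word_similarity(w1: str, w2: str) -> float:
    """Best WordNet path similarity over all synset pairs (0 if none found)."""
    scores = [s1.path_similarity(s2) or 0.0
              for s1 in wn.synsets(w1) for s2 in wn.synsets(w2)]
    return max(scores, default=0.0)

def sentence_similarity(sent_a: str, sent_b: str) -> float:
    """Average, over words in sent_a, of each word's best match in sent_b."""
    words_a = [w.lower() for w in sent_a.split()]
    words_b = [w.lower() for w in sent_b.split()]
    if not words_a or not words_b:
        return 0.0
    best = [max(word_similarity(wa, wb) for wb in words_b) for wa in words_a]
    return sum(best) / len(best)

print(sentence_similarity("The vehicle was repaired quickly",
                          "The car was fixed fast"))
```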

18.
High Educ (Dordr); 85(2): 247-263, 2023.
Article in English | MEDLINE | ID: mdl-35431322

ABSTRACT

Plagiarism is a serious type of scholastic misconduct. In Rwanda, no research has been conducted to assess university students' attitudes towards and knowledge of plagiarism, or whether they have the skills to avoid plagiarizing. This study was conducted to assess knowledge of and attitudes towards plagiarism, as well as the ability to recognize plagiaristic writing, among university students in Rwanda. An online questionnaire containing 10 knowledge questions, 10 attitude statements, and 5 writing cases with excerpts to test identification of plagiarism was administered between February and April 2021. Of the 330 university students from 40 universities who completed the survey, 75.8% had a high knowledge level (score ≥ 80%), but only 11.6% scored highly in recognizing plagiaristic writing (score ≥ 80%). There was no statistically significant association between knowledge level and ability to recognize plagiaristic writing (P = 0.109). Both diploma/certificate and bachelor's students had lower odds than master's students of having high knowledge and of having a high ability to recognize plagiaristic writing. Although respondents generally disapproved of plagiarism, approximately half indicated that plagiarism is sometimes unavoidable and that self-plagiarism should not be punished in the same way as plagiarism of others' work. Inter-collegial collaboration on effective plagiarism policies and training programs is needed.

19.
High Educ (Dordr); 85(5): 979-997, 2023.
Article in English | MEDLINE | ID: mdl-35669590

ABSTRACT

In academia, plagiarism is considered detrimental to the advancement of science, and plagiarists can face sanctions. However, the plagiarism cases involving three university rectors in Indonesia stand out, as the accused were able to maintain that they had not committed academic misconduct despite the evidence found. By analyzing the three rectors' cases, the present study aims to answer how power relations play a role in plagiarism discourse in Indonesia, particularly in determining what is considered academic misconduct and what is not. Employing critical discourse analysis, we found that when an accusation of plagiarism appears during a rectorial election, the accused can claim that the accusation was meant to undermine them as a political opponent. When the accused plagiarists win the election, they have more power to deny and counter the accusations of plagiarism. The findings indicate that plagiarism issues can be politicized: those in power can use them as a tool to undermine their political opponents, whereas the accused plagiarists can claim that the actual problem is personal and not about plagiarism. It is also shown that, in practice, whether something is called plagiarism is subject to interpretation by those in power. Supplementary Information: The online version contains supplementary material available at 10.1007/s10734-022-00875-z.

20.
J Acad Ethics; 21(2): 231-249, 2023.
Article in English | MEDLINE | ID: mdl-35815317

ABSTRACT

A high level of professional integrity is expected from healthcare professionals, and the literature suggests a relationship between unethical behavior of healthcare professionals and poor academic integrity behavior at medical school. While academic integrity is well researched in Western countries, it is not so in the Middle East, which is characterized by different cultural values that may influence students' academic integrity conduct. We conducted a cross-sectional study among health-professions students at a university in the Middle East to assess perceptual differences on various cheating behaviors, as well as to explore the reasons underlying cheating behavior. A validated survey instrument disseminated among first- and second-year undergraduate students resulted in 211 complete responses, and these data were analyzed using descriptive and inferential statistics. Pearson's chi-square/Fisher's exact test was applied to test the association of various factors with academic misconduct. The major determinants of academic misconduct were investigated using a binary logistic regression model. The analysis showed that prior cheating behavior was the only factor significantly associated with cheating at the university (p < 0.001). No association was found between cheating behavior and age, college/major, awareness regarding academic integrity, or perception of faculty response. The reasons students provided for cheating were mainly academic workload and pressure to get a good grade. Various suggestions are made to enhance academic integrity among health-professions students, including organizing workshops and events at the university to increase awareness and create a culture of academic integrity, and providing peer guidance as well as emotional and social support. Supplementary information: The online version contains supplementary material available at 10.1007/s10805-022-09452-6.
