Results 1 - 20 of 1,880
2.
PLoS Biol ; 19(3): e3001161, 2021 03.
Article in English | MEDLINE | ID: mdl-33788834

ABSTRACT

Scientists routinely use images to display data. Readers often examine figures first; therefore, it is important that figures are accessible to a broad audience. Many resources discuss fraudulent image manipulation and technical specifications for image acquisition; however, data on the legibility and interpretability of images are scarce. We systematically examined these factors in non-blot images published in the top 15 journals in 3 fields: plant sciences, cell biology, and physiology (n = 580 papers). Common problems included missing scale bars, misplaced or poorly marked insets, images or labels that were not accessible to colorblind readers, and insufficient explanations of colors, labels, annotations, or the species and tissue or object depicted in the image. Papers in which all image-based figures met all of the good practice criteria examined were uncommon (physiology 16%, cell biology 12%, plant sciences 2%). We present detailed descriptions and visual examples to help scientists avoid common pitfalls when publishing images. Our recommendations address image magnification, scale information, insets, annotation, and color and may encourage discussion about quality standards for bioimage publishing.


Subjects
Pictorial Works as Topic/trends, Writing/standards, Biomedical Research, Communication, Humans, Periodicals as Topic, Publications/standards, Publishing/trends, Scholarly Communication
7.
Int J Gynecol Cancer ; 34(5): 669-674, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38627032

ABSTRACT

OBJECTIVE: To determine if reviewer experience impacts the ability to discriminate between human-written and ChatGPT-written abstracts. METHODS: Thirty reviewers (10 seniors, 10 juniors, and 10 residents) were asked to differentiate between 10 ChatGPT-written and 10 human-written (fabricated) abstracts. For the study, 10 gynecologic oncology abstracts were fabricated by the authors. For each human-written abstract, a matching ChatGPT abstract was generated using the same title and the same fabricated results. A web-based questionnaire was used to gather demographic data and to record the reviewers' evaluation of the 20 abstracts. Comparative statistics and multivariable regression were used to identify factors associated with a higher correct identification rate. RESULTS: Each of the 30 reviewers evaluated the 20 abstracts, giving a total of 600 abstract evaluations. The reviewers correctly identified 300/600 (50%) of the abstracts: 139/300 (46.3%) of the ChatGPT-generated abstracts and 161/300 (53.7%) of the human-written abstracts (p=0.07). Human-written abstracts had a higher rate of correct identification (median (IQR) 56.7% (49.2-64.1%) vs 45.0% (43.2-48.3%), p=0.023). Senior reviewers had a higher correct identification rate (60%) than junior reviewers and residents (45% each; p=0.043 and p=0.002, respectively). In a linear regression model including the experience level of the reviewers, familiarity with artificial intelligence (AI), and the country in which the majority of medical training was completed (English speaking vs non-English speaking), the experience of the reviewer (β=10.2 (95% CI 1.8 to 18.7)) and familiarity with AI (β=7.78 (95% CI 0.6 to 15.0)) were independently associated with the correct identification rate (p=0.019 and p=0.035, respectively). In a correlation analysis, the number of publications by the reviewer was positively correlated with the correct identification rate (r(28)=0.61, p<0.001). CONCLUSION: A total of 46.3% of abstracts written by ChatGPT were detected by reviewers. The correct identification rate increased with reviewer and publication experience.
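As a quick check on the arithmetic behind these headline figures, the identification rates can be re-derived from the counts reported in the abstract. The short Python sketch below does only that; the helper function name is ours and illustrative, not part of the study.

    # Re-derives the identification rates reported in the abstract above.
    # The counts come from the abstract itself; the helper is illustrative only.

    def identification_rate(correct: int, total: int) -> float:
        """Percentage of abstracts identified correctly."""
        return 100 * correct / total

    overall = identification_rate(300, 600)   # all 600 evaluations
    chatgpt = identification_rate(139, 300)   # ChatGPT-written abstracts
    human = identification_rate(161, 300)     # human-written abstracts

    print(f"overall {overall:.1f}%, ChatGPT {chatgpt:.1f}%, human {human:.1f}%")
    # prints: overall 50.0%, ChatGPT 46.3%, human 53.7%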


Subjects
Abstracting and Indexing, Humans, Abstracting and Indexing/standards, Female, Peer Review, Research, Writing/standards, Gynecology, Surveys and Questionnaires, Publishing/statistics & numerical data
8.
Curr Urol Rep ; 25(7): 163-168, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38836977

ABSTRACT

PURPOSE OF REVIEW: It is incumbent upon training programs to set the foundation for evidence-based practices and to create opportunities for trainees to develop into academic leaders. As dedicated resident research time and funding have declined in recent years, residency programs and the field at large will need to create new ways to incorporate scholarly activity into residency curricula. RECENT FINDINGS: Literature across specialties demonstrates barriers to resident involvement, including lack of time, cost, and the absence of scholarly mentorship. Peer review stands as a ready-made solution that can be formalized into a collaborative relationship with journals. A formal relationship between professional societies, academic journals, and residencies can facilitate the use of peer review as a teaching tool for residency programs.


Subjects
Internship and Residency, Urology, Urology/education, Internship and Residency/methods, Humans, Biomedical Research/education, Peer Review, Writing/standards, Peer Review, Research, Education, Medical, Graduate/methods, Curriculum
9.
Adv Health Sci Educ Theory Pract ; 29(3): 721-723, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38900340

ABSTRACT

This column is intended to address the kinds of knotty problems and dilemmas with which many scholars grapple in studying health professions education. In this article, the authors address the challenges of proofreading a manuscript. Emerging researchers might assume that someone on the production team will catch any errors, but this is not always the case. The authors emphasize the importance of guiding mentees to take the process of preparing a manuscript for submission seriously.


Subjects
Writing, Humans, Writing/standards, Publishing/standards, Health Occupations/education
10.
J Med Internet Res ; 26: e52001, 2024 Jun 26.
Article in English | MEDLINE | ID: mdl-38924787

ABSTRACT

BACKGROUND: Due to recent advances in artificial intelligence (AI), language model applications can generate logical text output that is difficult to distinguish from human writing. ChatGPT (OpenAI) and Bard (subsequently rebranded as "Gemini"; Google AI) were developed using distinct approaches, but little has been studied about differences in their ability to generate abstracts. The use of AI to write scientific abstracts in the field of spine surgery is the center of much debate and controversy. OBJECTIVE: The objective of this study was to assess the reproducibility of structured abstracts generated by ChatGPT and Bard compared with human-written abstracts in the field of spine surgery. METHODS: In total, 60 abstracts dealing with spine sections were randomly selected from 7 reputable journals and used as ChatGPT and Bard input statements to generate abstracts based on the supplied paper titles. A total of 174 abstracts, divided into human-written abstracts, ChatGPT-generated abstracts, and Bard-generated abstracts, were evaluated for compliance with the structured format of journal guidelines and consistency of content. The likelihood of plagiarism and AI output was assessed using the iThenticate and ZeroGPT programs, respectively. A total of 8 reviewers in the spinal field evaluated 30 randomly extracted abstracts to determine whether they were produced by AI or human authors. RESULTS: The proportion of abstracts that met journal formatting guidelines was greater among ChatGPT abstracts (34/60, 56.6%) than among those generated by Bard (6/54, 11.1%; P<.001). However, a higher proportion of Bard abstracts (49/54, 90.7%) had word counts that met journal guidelines compared with ChatGPT abstracts (30/60, 50%; P<.001). The similarity index was significantly lower among ChatGPT-generated abstracts (20.7%) than among Bard-generated abstracts (32.1%; P<.001). The AI-detection program predicted that 21.7% (13/60) of the human group, 63.3% (38/60) of the ChatGPT group, and 87% (47/54) of the Bard group were possibly generated by AI, with an area under the curve value of 0.863 (P<.001). The mean detection rate by human reviewers was 53.8% (SD 11.2%), achieving a sensitivity of 56.3% and a specificity of 48.4%. A total of 56.3% (63/112) of the actual human-written abstracts and 55.9% (62/128) of the AI-generated abstracts were recognized as human-written and AI-generated by human reviewers, respectively. CONCLUSIONS: Both ChatGPT and Bard can be used to help write abstracts, but most AI-generated abstracts are currently considered unethical due to high plagiarism and AI-detection rates. ChatGPT-generated abstracts appear to be superior to Bard-generated abstracts in meeting journal formatting guidelines. Because humans are unable to accurately distinguish abstracts written by humans from those produced by AI programs, it is crucial to exercise special caution and examine the ethical boundaries of using AI programs, including ChatGPT and Bard.
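For readers unfamiliar with the reviewer-performance metrics quoted above, the Python sketch below re-derives sensitivity and specificity from the counts given in the abstract, treating "human-written" as the positive class; the function names are ours and purely illustrative.

    # Sensitivity/specificity of the human reviewers, computed from the counts
    # stated in the abstract (63/112 human-written abstracts judged human-written;
    # 62/128 AI-generated abstracts judged AI-generated).

    def sensitivity(true_pos: int, positives: int) -> float:
        """Fraction of truly human-written abstracts judged human-written."""
        return true_pos / positives

    def specificity(true_neg: int, negatives: int) -> float:
        """Fraction of truly AI-generated abstracts judged AI-generated."""
        return true_neg / negatives

    print(f"sensitivity ~ {sensitivity(63, 112):.1%}")  # ~56%, as reported
    print(f"specificity ~ {specificity(62, 128):.1%}")  # ~48%, as reported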


Subjects
Abstracting and Indexing, Spine, Humans, Spine/surgery, Abstracting and Indexing/standards, Abstracting and Indexing/methods, Reproducibility of Results, Artificial Intelligence, Writing/standards
11.
J Korean Med Sci ; 39(38): e297, 2024 Oct 07.
Article in English | MEDLINE | ID: mdl-39376192

ABSTRACT

Nurses constitute nearly 50% of the worldwide health workforce, and the World Health Organization has advocated for an expansion of their roles to guarantee fair health care and address the increasing need for services. The growing specialization in nursing practice has led to a rise in educational options for nurses, including the growth of PhD programs. These programs play a crucial role in preparing nurse researchers and educators, and their growth underlines the importance of evidence-based practice and high-quality academic writing in nursing. The article highlights the importance of nurses' involvement in creating evidence-based practice guidelines: their active engagement is essential to ensure that these guidelines are practical, relevant, and grounded in real-world clinical experience. The advancement of nursing depends largely on using rigorous research procedures to generate, analyze, and disseminate knowledge and data. The current article discusses essential research methodologies, including interviews, surveys, and bibliometric and altmetric analyses. It also aims to address concerns about inadequate writing skills, plagiarism, and insufficient comprehension of ethical norms in research and publishing. The recommended strategies to promote nursing research and publications include enhancing writing skills through specialized education, embracing open-access publishing, and using social media for broader dissemination after publication. Implementing these approaches would increase the quality and impact of nursing publications and reinforce nursing's role in shaping health policy and enhancing patient care.


Subjects
Publishing, Writing, Writing/standards, Humans, Nursing Research
12.
J Korean Med Sci ; 39(32): e231, 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39164055

ABSTRACT

Reporting standards are essential to health research as they improve accuracy and transparency. Over time, significant changes have occurred in the requirements for reporting research to ensure comprehensive and transparent reporting across a range of study domains and to foster methodological rigor. The Declaration of Helsinki, the Consolidated Standards of Reporting Trials (CONSORT), the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement, and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) are just a few of the historic initiatives that have increased research transparency. Artificial intelligence (AI), in particular large language models such as ChatGPT, has transformed academic writing by enhancing discoverability, facilitating statistical analysis, improving article quality, and reducing language barriers. However, concerns remain about potential errors and the need for transparency when AI tools are used. Modifying reporting rules to cover AI-driven writing tools such as ChatGPT is ethically and practically challenging. In academic writing, safeguards for accuracy, privacy, and accountability are necessary because of concerns about bias, openness, data limitations, and potential legal ramifications. The CONSORT-AI and Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT)-AI Steering Group has extended the CONSORT guidelines to clinical trials involving AI, and newer checklists such as METRICS and CLEAR help to promote transparency in AI studies. Responsible use of these technologies in research, and the adoption of writing software, requires interdisciplinary collaboration and ethical assessment. This article explores the impact of AI technologies, specifically ChatGPT, on existing reporting standards and the need for revised guidelines for open, reproducible, and robust scientific publications.


Subjects
Artificial Intelligence, Software, Writing, Humans, Writing/standards, Research Design/standards
13.
Int J Toxicol ; 43(4): 421-424, 2024.
Article in English | MEDLINE | ID: mdl-38767005

ABSTRACT

Peer review is essential to preserving the integrity of the scientific publication process. Peer reviewers must adhere to the norms of the peer review process, including confidentiality, avoidance of actual or apparent conflicts of interest, timeliness, constructiveness, and thoroughness. This mini review discusses some of the different formats in which peer review can occur, along with the advantages and disadvantages of each. It then offers advice for prospective reviewers, as well as a suggested format for writing a review.


Subjects
Peer Review, Research, Peer Review, Research/standards, Humans, Peer Review/standards, Publishing/standards, Writing/standards
14.
BMC Med Educ ; 24(1): 354, 2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38553693

ABSTRACT

BACKGROUND: Writing multiple choice questions (MCQs) for medical exams is challenging. It requires extensive medical knowledge, time, and effort from medical educators. This systematic review focuses on the application of large language models (LLMs) in generating medical MCQs. METHODS: The authors searched for studies published up to November 2023. Search terms focused on LLM-generated MCQs for medical examinations. Non-English studies, studies outside the year range, and studies not focusing on AI-generated multiple-choice questions were excluded. MEDLINE was used as the search database. Risk of bias was evaluated using a tailored QUADAS-2 tool. RESULTS: Overall, eight studies published between April 2023 and October 2023 were included. Six studies used ChatGPT 3.5, while two employed GPT-4. Five studies showed that LLMs can produce competent questions valid for medical exams. Three studies used LLMs to write medical questions but did not evaluate the validity of the questions. One study conducted a comparative analysis of different models, and another compared LLM-generated questions with those written by humans. All studies presented faulty questions that were deemed inappropriate for medical exams, and some questions required additional modification in order to qualify. Two studies were at high risk of bias. CONCLUSIONS: LLMs can be used to write MCQs for medical examinations, but their limitations cannot be ignored. Further study in this field is essential, and more conclusive evidence is needed. Until then, LLMs may serve as a supplementary tool for writing medical examinations. The review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.
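To make concrete what "LLM-generated MCQs" look like in practice, here is a minimal, hypothetical Python sketch of the kind of prompt an educator might send to a language model to draft one question. The template wording and function name are ours, not drawn from any of the reviewed studies, and any generated item would still need expert review before exam use.

    # Builds a hypothetical prompt asking an LLM to draft a single-best-answer
    # MCQ; sending it to a model and vetting the output are separate steps.

    def build_mcq_prompt(topic: str, learning_objective: str, n_options: int = 5) -> str:
        last_label = chr(ord("A") + n_options - 1)
        return (
            f"Write one single-best-answer multiple choice question on {topic} "
            f"for a postgraduate medical examination.\n"
            f"Learning objective: {learning_objective}\n"
            f"Provide {n_options} answer options labelled A-{last_label}, "
            f"indicate the correct option, and give a brief explanation."
        )

    print(build_mcq_prompt("community-acquired pneumonia",
                           "select first-line empirical antibiotic therapy"))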


Subjects
Educational Measurement, Humans, Educational Measurement/methods, Writing/standards, Language, Education, Medical
15.
Can J Surg ; 67(3): E243-E246, 2024.
Article in English | MEDLINE | ID: mdl-38843943

ABSTRACT

Summary: Letters of recommendation are increasingly important for the residency match. We assessed whether an artificial intelligence (AI) tool could help in writing letters of recommendation by analyzing recommendation letters written by 3 academic staff and AI-generated duplicate versions for 13 applicants. The preferred letters were selected by 3 blinded orthopedic program directors based on a predetermined set of criteria. The first orthopedic program director selected the AI letter for 31% of applicants, and the 2 remaining program directors selected the AI letter for 38% of applicants; the staff-written versions were selected more often by all of the program directors (p < 0.05). The first program director recognized only 15% of the AI-written letters, the second identified 92%, and the third identified 77% of the AI-written letters (p < 0.05).


Subjects
Artificial Intelligence, Internship and Residency, Humans, Writing/standards, Orthopedics/education, Orthopedics/standards, Correspondence as Topic, Personnel Selection/methods, Personnel Selection/standards
16.
Croat Med J ; 65(2): 93-100, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38706235

ABSTRACT

AIM: To evaluate the quality of ChatGPT-generated case reports and assess the ability of ChatGPT to peer review medical articles. METHODS: This study was conducted from February to April 2023. First, ChatGPT 3.0 was used to generate 15 case reports, which were then peer-reviewed by expert human reviewers. Second, ChatGPT 4.0 was employed to peer review 15 published short articles. RESULTS: ChatGPT was capable of generating case reports, but these reports exhibited inaccuracies, particularly in referencing. The case reports received mixed ratings from peer reviewers, with 33.3% of the professionals recommending rejection. The reports' overall merit score was 4.9±1.8 out of 10. ChatGPT's review capabilities were weaker than its text generation abilities: as a peer reviewer, it did not recognize major inconsistencies in articles that had undergone significant content changes. CONCLUSION: While ChatGPT demonstrated proficiency in generating case reports, there were limitations in terms of consistency and accuracy, especially in referencing.


Subjects
Peer Review, Humans, Peer Review/standards, Writing/standards, Peer Review, Research/standards
17.
J Public Health Manag Pract ; 30: S6-S14, 2024.
Article in English | MEDLINE | ID: mdl-38870354

ABSTRACT

CONTEXT: Contributing to the evidence base, by disseminating findings through written products such as journal articles, is a core competency for public health practitioners. Disseminating practice-based evidence that supports improving cardiovascular health is necessary for filling literature gaps, generating health policies and laws, and translating evidence-based strategies into practice. However, a gap exists in the dissemination of practice-based evidence in public health. Public health practitioners face various dissemination barriers (eg, lack of time and resources, staff turnover) which, more recently, were compounded by the COVID-19 pandemic. PROGRAM: The Centers for Disease Control and Prevention's Division for Heart Disease and Stroke Prevention (DHDSP) partnered with the National Network of Public Health Institutes to implement a multimodal approach to build writing capacity among recipients funded by three DHDSP cooperative agreements. This project aimed to enhance public health practitioners' capacity to translate and disseminate their evaluation findings. IMPLEMENTATION: Internal evaluation technical assistance expertise and external subject matter experts helped to implement this project and to develop tailored multimodal capacity-building activities. These activities included online peer-to-peer discussion posts, virtual writing workshops, resource documents, one-to-one writing coaching sessions, an online toolkit, and a supplemental issue in a peer-reviewed journal. EVALUATION: Findings from an informal process evaluation demonstrate positive results. Most participants were engaged and satisfied with the project's activities. Across eight workshops, participants reported increased knowledge (≥94%) and enhanced confidence in writing (≥98%). The majority of participants (83%) reported that disseminating evaluation findings improved program implementation. Notably, 30 abstracts were submitted for a journal supplement and 23 articles were submitted for consideration. DISCUSSION: This multimodal approach serves as a promising model that enhances public health practitioners' capacity to disseminate evaluation findings during times of evolving health needs.


Subjects
COVID-19, Capacity Building, Information Dissemination, Public Health, Writing, Humans, United States, Public Health/methods, Writing/standards, COVID-19/prevention & control, COVID-19/epidemiology, Information Dissemination/methods, Capacity Building/methods, Cardiovascular Diseases/prevention & control, SARS-CoV-2, Centers for Disease Control and Prevention, U.S./organization & administration
18.
Nature ; 603(7899): 191-192, 2022 03.
Article in English | MEDLINE | ID: mdl-35228710