4.
Croat Med J ; 65(2): 93-100, 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38706235

ABSTRACT

AIM: To evaluate the quality of ChatGPT-generated case reports and to assess the ability of ChatGPT to peer review medical articles. METHODS: This study was conducted from February to April 2023. First, ChatGPT 3.0 was used to generate 15 case reports, which were then peer-reviewed by expert human reviewers. Second, ChatGPT 4.0 was employed to peer review 15 published short articles. RESULTS: ChatGPT was capable of generating case reports, but these reports contained inaccuracies, particularly in referencing. The case reports received mixed ratings from peer reviewers, with 33.3% of professionals recommending rejection. The reports' overall merit score was 4.9±1.8 out of 10. ChatGPT's review capabilities were weaker than its text-generation abilities: as a peer reviewer, the AI did not recognize major inconsistencies in articles that had undergone significant content changes. CONCLUSION: While ChatGPT demonstrated proficiency in generating case reports, there were limitations in consistency and accuracy, especially in referencing.
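The review arm of this design is straightforward to reproduce programmatically. Below is a minimal sketch, assuming the OpenAI Python client and a gpt-4 model name as a stand-in for the "ChatGPT 4.0" web interface the authors used; the prompt wording, function name, and model choice are illustrative assumptions, not the study's protocol.

```python
# Hypothetical sketch of the study's second arm: asking a GPT model to
# peer review a short article. The authors worked through the ChatGPT
# web interface; the client, model name, and prompt here are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def review_article(manuscript_text: str) -> str:
    """Request a structured peer review of a manuscript."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumed stand-in for "ChatGPT 4.0"
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a peer reviewer for a medical journal. "
                    "Assess the methods, results, internal consistency, "
                    "and referencing, then recommend: accept, minor "
                    "revision, major revision, or reject."
                ),
            },
            {"role": "user", "content": manuscript_text},
        ],
    )
    return response.choices[0].message.content
```

A check worth automating, given the finding above, is whether the model flags deliberately introduced inconsistencies; the study suggests it often does not.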


Subjects
Peer Review , Humans , Peer Review/standards , Writing/standards , Peer Review, Research/standards
10.
PLoS One ; 19(4): e0300710, 2024.
Article in English | MEDLINE | ID: mdl-38598482

ABSTRACT

How do author perceptions match up to the outcomes of the peer-review process and to the perceptions of others? In a top-tier computer science conference (NeurIPS 2021) with more than 23,000 submitting authors and 9,000 submitted papers, we surveyed the authors on three questions: (i) their predicted probability of acceptance for each of their papers, (ii) their perceived ranking of their own papers based on scientific contribution, and (iii) the change in their perception of their own papers after seeing the reviews. The salient results are: (1) Authors overestimated the acceptance probability of their papers roughly three-fold: the median prediction was 70% against an approximately 25% acceptance rate. (2) Female authors exhibited a marginally higher (statistically significant) miscalibration than male authors; predictions by authors invited to serve as meta-reviewers or reviewers were similarly calibrated to each other, but better calibrated than those of authors who were not invited to review. (3) Authors' relative ranking of the scientific contribution of two submissions they made generally agreed with their predicted acceptance probabilities (93% agreement), but in a notable 7% of responses authors predicted a worse outcome for their better paper. (4) The author-provided rankings disagreed with the peer-review decisions about a third of the time; when co-authors ranked their jointly authored papers, they disagreed at a similar rate, about a third of the time. (5) At least 30% of respondents, for both accepted and rejected papers, said that their perception of their own paper improved after the review process. Stakeholders in peer review should take these findings into account when setting their expectations of the process.
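The headline miscalibration figure is simply the ratio of the median self-predicted acceptance probability to the realized acceptance rate. Here is a minimal sketch on synthetic data, assuming one predicted probability and one final decision per paper; the distributions are made up for illustration and do not reflect the NeurIPS 2021 survey data.

```python
# Illustrative calibration check on synthetic data; nothing here is the
# actual NeurIPS 2021 survey data.
import numpy as np

rng = np.random.default_rng(0)
n_papers = 9_000                           # roughly the number of submissions

predicted = rng.beta(7, 3, size=n_papers)  # optimistic author predictions
accepted = rng.random(n_papers) < 0.25     # ~25% acceptance rate

median_prediction = np.median(predicted)   # paper reports ~0.70
acceptance_rate = accepted.mean()          # paper reports ~0.25

print(f"median prediction: {median_prediction:.2f}")
print(f"acceptance rate:   {acceptance_rate:.2f}")
print(f"overestimate:      {median_prediction / acceptance_rate:.1f}x")  # ~3x
```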


Subjects
Peer Review, Research , Peer Review , Male , Female , Humans , Surveys and Questionnaires
14.
Int J Gynecol Cancer ; 34(5): 669-674, 2024 May 06.
Article in English | MEDLINE | ID: mdl-38627032

ABSTRACT

OBJECTIVE: To determine whether reviewer experience affects the ability to discriminate between human-written and ChatGPT-written abstracts. METHODS: Thirty reviewers (10 seniors, 10 juniors, and 10 residents) were asked to differentiate between 10 ChatGPT-written and 10 human-written (fabricated) abstracts. The 10 human-written gynecologic oncology abstracts were fabricated by the authors; for each one, a matching ChatGPT abstract was generated using the same title and the same fabricated results. A web-based questionnaire was used to gather demographic data and to record the reviewers' evaluations of the 20 abstracts. Comparative statistics and multivariable regression were used to identify factors associated with a higher correct identification rate. RESULTS: The 30 reviewers each evaluated 20 abstracts, giving a total of 600 evaluations. The reviewers correctly identified 300/600 (50%) of the abstracts: 139/300 (46.3%) of the ChatGPT-generated abstracts and 161/300 (53.7%) of the human-written abstracts (p=0.07). Human-written abstracts had a higher rate of correct identification (median (IQR) 56.7% (49.2-64.1%) vs 45.0% (43.2-48.3%), p=0.023). Senior reviewers had a higher correct identification rate (60%) than junior reviewers and residents (45% each; p=0.043 and p=0.002, respectively). In a linear regression model including the experience level of the reviewers, familiarity with artificial intelligence (AI), and the country in which the majority of medical training was received (English-speaking vs non-English-speaking), the experience of the reviewer (β=10.2 (95% CI 1.8 to 18.7)) and familiarity with AI (β=7.78 (95% CI 0.6 to 15.0)) were independently associated with the correct identification rate (p=0.019 and p=0.035, respectively). In a correlation analysis, the number of publications by the reviewer was positively correlated with the correct identification rate (r(28)=0.61, p<0.001). CONCLUSION: Reviewers detected only 46.3% of the abstracts written by ChatGPT. The correct identification rate increased with reviewer and publication experience.
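The two headline statistics map onto standard tests. The sketch below reruns them with scipy: the ChatGPT-vs-human comparison uses the 139/300 and 161/300 counts reported above, while the publication-count correlation runs on fabricated reviewer data, so the Poisson and noise parameters are illustrative assumptions only.

```python
# Re-derivation of the two headline statistics; the contingency counts
# come from the abstract, the reviewer-level data below are made up.
import numpy as np
from scipy import stats

# 2x2 table of correct vs incorrect identifications by abstract origin.
table = np.array([
    [139, 300 - 139],   # ChatGPT-written abstracts
    [161, 300 - 161],   # human-written abstracts
])
chi2, p, dof, _ = stats.chi2_contingency(table, correction=False)
print(f"chi2={chi2:.2f}, p={p:.3f}")  # p ~= 0.07, matching the abstract

# Pearson correlation between publication count and identification rate.
# n=30 reviewers gives df = n - 2 = 28, hence the r(28) notation.
rng = np.random.default_rng(1)
publications = rng.poisson(20, size=30)               # fabricated counts
id_rate = 0.30 + 0.01 * publications + rng.normal(0, 0.05, size=30)
r, p_corr = stats.pearsonr(publications, id_rate)
print(f"r(28)={r:.2f}, p={p_corr:.4f}")
```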


Subjects
Abstracting and Indexing , Humans , Abstracting and Indexing/standards , Female , Peer Review, Research , Writing/standards , Gynecology , Surveys and Questionnaires , Publishing/statistics & numerical data
15.
J Prim Care Community Health ; 15: 21501319241252235, 2024.
Article in English | MEDLINE | ID: mdl-38682542

ABSTRACT

Journal editors depend on peer reviewers to make decisions about submitted manuscripts. These reviewers help evaluate the methods, the results, the discussion of the results, and the overall organization and presentation of the manuscript. In addition, reviewers can help identify important mistakes and possible misconduct. Editors frequently have difficulty obtaining enough timely peer reviews; this increases the workload of editors and journal managers and can delay the publication of clinical and research studies. This commentary discusses the importance of peer review and makes suggestions that could increase the participation of academic faculty and researchers in this important activity.


Subjects
Editorial Policies , Peer Review, Research , Periodicals as Topic , Humans , Peer Review, Research/standards , Peer Review , Publishing/standards