Results 1 - 10 of 10
1.
Article in English | MEDLINE | ID: mdl-38744934

ABSTRACT

BACKGROUND: Generative Pre-trained Transformer (GPT) chatbots have gained popularity since the public release of ChatGPT. Studies have evaluated the ability of different GPT models to provide information about medical conditions. To date, no study has assessed the quality of ChatGPT responses to prostate cancer-related questions from both the physician and public perspectives while optimizing outputs for patient consumption. METHODS: Nine prostate cancer-related questions, identified through Google Trends (Global), were categorized into diagnosis, treatment, and postoperative follow-up. These questions were processed using ChatGPT 3.5, and the responses were recorded. The responses were then fed back into ChatGPT to create simplified summaries understandable at a sixth-grade level. Readability of both the original ChatGPT responses and the layperson summaries was evaluated using validated readability tools. A survey was conducted among urology providers (urologists and urologists in training) to rate the original ChatGPT responses for accuracy, completeness, and clarity on a 5-point Likert scale. In addition, two independent reviewers evaluated the layperson summaries on a correctness trifecta: accuracy, completeness, and decision-making sufficiency. Public assessment of the simplified summaries' clarity and understandability was carried out through Amazon Mechanical Turk (MTurk); participants rated the clarity and demonstrated their understanding through a multiple-choice question. RESULTS: GPT-generated output was deemed correct by 71.7% to 94.3% of raters (36 urologists, 17 urology residents) across 9 scenarios. The GPT-generated simplified layperson summaries of this output were rated as accurate in 8 of 9 (88.9%) scenarios and sufficient for a patient to make a decision in 8 of 9 (88.9%) scenarios. Mean readability of the layperson summaries was better than that of the original GPT outputs ([original ChatGPT v. simplified ChatGPT, mean (SD), p-value] Flesch Reading Ease: 36.5 (9.1) v. 70.2 (11.2), p < 0.0001; Gunning Fog: 15.8 (1.7) v. 9.5 (2.0), p < 0.0001; Flesch-Kincaid Grade Level: 12.8 (1.2) v. 7.4 (1.7), p < 0.0001; Coleman-Liau: 13.7 (2.1) v. 8.6 (2.4), p = 0.0002; SMOG Index: 11.8 (1.2) v. 6.7 (1.8), p < 0.0001; Automated Readability Index: 13.1 (1.4) v. 7.5 (2.1), p < 0.0001). MTurk workers (n = 514) rated the layperson summaries as correct (89.5-95.7%) and correctly understood the content (63.0-87.4%). CONCLUSION: GPT shows promise for delivering accurate patient education on prostate cancer-related content, but the technology is not specifically designed for delivering information to patients. Prompting the model for accuracy, completeness, clarity, and readability may enhance its utility in GPT-powered medical chatbots.
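The readability comparison above relies on standard formula-based indices that can be reproduced programmatically. A minimal sketch, assuming the open-source Python package textstat and two short placeholder strings rather than the study's actual ChatGPT outputs:

```python
# Sketch: score an original ChatGPT answer and its simplified summary with the
# same readability indices reported in the abstract. Assumes the `textstat`
# package (pip install textstat); the two texts below are placeholders, not
# the study's actual outputs.
import textstat

original_response = "Prostate-specific antigen screening is recommended ..."  # placeholder
layperson_summary = "A PSA test is a simple blood test ..."                   # placeholder

indices = {
    "Flesch Reading Ease": textstat.flesch_reading_ease,          # higher = easier
    "Gunning Fog": textstat.gunning_fog,                          # lower = easier
    "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade,
    "Coleman-Liau": textstat.coleman_liau_index,
    "SMOG Index": textstat.smog_index,
    "Automated Readability Index": textstat.automated_readability_index,
}

for name, score in indices.items():
    print(f"{name}: original={score(original_response):.1f}, "
          f"simplified={score(layperson_summary):.1f}")
```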

2.
Surgery ; 175(6): 1496-1502, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38582732

ABSTRACT

Generative artificial intelligence (GAI) can collect, extract, digest, and generate information in a way that is understandable to humans. As the first surgical applications of GAI emerge, this perspective paper aims to provide a comprehensive overview of current applications and future perspectives for GAI in surgery, from preoperative planning to training. Before surgery, GAI can support planning and decision-making by extracting patient information and providing patients with information and simulations regarding the procedure. Intraoperatively, GAI can document data that are normally not captured, such as intraoperative adverse events, or provide information to support decision-making. Postoperatively, GAI can help with patient discharge and follow-up. The ability to provide real-time feedback and store it for later review is an important capability of GAI. GAI applications are emerging as highly specialized, task-specific tools for data extraction, synthesis, presentation, and communication within the realm of surgery, and GAI has the potential to play a pivotal role in facilitating interaction between surgeons and artificial intelligence.


Subject(s)
Artificial Intelligence; Humans; Surgical Procedures, Operative/methods
3.
BMJ ; 384: e077192, 2024 01 31.
Article in English | MEDLINE | ID: mdl-38296328

ABSTRACT

OBJECTIVES: To determine the extent and content of academic publishers' and scientific journals' guidance for authors on the use of generative artificial intelligence (GAI). DESIGN: Cross-sectional, bibliometric study. SETTING: Websites of academic publishers and scientific journals, screened on 19-20 May 2023, with the search updated on 8-9 October 2023. PARTICIPANTS: Top 100 largest academic publishers and top 100 highly ranked scientific journals, regardless of subject, language, or country of origin. Publishers were identified by the total number of journals in their portfolio, and journals were identified through the Scimago journal rank using the Hirsch index (H index) as an indicator of journal productivity and impact. MAIN OUTCOME MEASURES: The primary outcomes were the content of GAI guidelines listed on the websites of the top 100 academic publishers and scientific journals, and the consistency of guidance between the publishers and their affiliated journals. RESULTS: Among the top 100 largest publishers, 24% provided guidance on the use of GAI, of which 15 (63%) were among the top 25 publishers. Among the top 100 highly ranked journals, 87% provided guidance on GAI. Of the publishers and journals with guidelines, 96% and 98%, respectively, prohibited the inclusion of GAI as an author. Only one journal (1%) explicitly prohibited the use of GAI in the generation of a manuscript, and two (8%) publishers and 19 (22%) journals indicated that their guidelines applied exclusively to the writing process. Regarding disclosure of GAI use, 75% of publishers and 43% of journals included specific disclosure criteria. Where to disclose the use of GAI varied: in the methods or acknowledgments, in the cover letter, or in a new section. Variability was also found in how GAI guidelines shared between journals and publishers could be accessed. GAI guidelines in 12 journals directly conflicted with those developed by their publishers. The guidelines developed by top medical journals were broadly similar to those of academic journals. CONCLUSIONS: Guidance on the use of GAI by authors is lacking from some top publishers and journals. Among those that provided guidelines, the allowable uses of GAI and how it should be disclosed varied substantially, and this heterogeneity persisted in some instances between affiliated publishers and journals. Lack of standardization places a burden on authors and could limit the effectiveness of the regulations. As GAI continues to grow in popularity, standardized guidelines to protect the integrity of scientific output are needed.


Subject(s)
Artificial Intelligence; Periodicals as Topic; Humans; Cross-Sectional Studies; Publishing; Bibliometrics
5.
Eur Urol ; 85(2): 146-153, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37926642

ABSTRACT

BACKGROUND: Since its release in November 2022, ChatGPT has captivated society and shown potential for various aspects of health care. OBJECTIVE: To investigate the potential use of ChatGPT, a large language model (LLM), in urology by gathering opinions from urologists worldwide. DESIGN, SETTING, AND PARTICIPANTS: An open web-based survey was distributed via social media and e-mail chains to urologists between April 20, 2023 and May 5, 2023. Participants were asked to answer questions related to their knowledge of and experience with artificial intelligence, as well as their opinions on the potential use of ChatGPT/LLMs in research and clinical practice. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Data are reported as the mean and standard deviation for continuous variables, and the frequency and percentage for categorical variables. Charts and tables are used as appropriate, with descriptions of the chart types and the measures used. The data are reported in accordance with the Checklist for Reporting Results of Internet E-Surveys (CHERRIES). RESULTS AND LIMITATIONS: A total of 456 individuals completed the survey (64% completion rate). Nearly half (47.7%) reported that they use ChatGPT/LLMs in their academic practice, with fewer using the technology in clinical practice (19.8%). More than half (62.2%) believe there are potential ethical concerns when using ChatGPT for scientific or academic writing, and 53% reported that they have experienced limitations when using ChatGPT in academic practice. CONCLUSIONS: Urologists recognise the potential of ChatGPT/LLMs in research but have concerns regarding ethics and patient acceptance. There is a desire for regulations and guidelines to ensure appropriate use, and measures should be taken to establish rules that maximise safety and efficiency when using this novel technology. PATIENT SUMMARY: A survey asked 456 urologists from around the world about using an artificial intelligence tool called ChatGPT in their work. Almost half of them use ChatGPT for research, but not many use it for patient care. The responders think ChatGPT could be helpful, but they worry about problems such as ethics and want rules to make sure it is used safely.


Subject(s)
Urology; Humans; Artificial Intelligence; Cross-Sectional Studies; Prospective Studies; Language
7.
Urol Pract ; 10(5): 436-443, 2023 09.
Article in English | MEDLINE | ID: mdl-37410015

ABSTRACT

INTRODUCTION: This study assessed ChatGPT's ability to generate readable, accurate, and clear layperson summaries of urological studies, and compared ChatGPT-generated summaries with original abstracts and author-written patient summaries to determine its effectiveness as a potential solution for creating accessible medical literature for the public. METHODS: Articles from the top 5 ranked urology journals were selected. A ChatGPT prompt was developed following guidelines to maximize readability, accuracy, and clarity while minimizing variability. Readability scores and grade-level indicators were calculated for the ChatGPT summaries, original abstracts, and patient summaries. Two physicians independently rated the accuracy and clarity of the ChatGPT-generated layperson summaries. Statistical analyses were conducted to compare readability scores. Cohen's κ coefficient was used to assess interrater reliability for the correctness and clarity evaluations. RESULTS: A total of 256 journal articles were included. The ChatGPT-generated summaries were created in an average time of 17.5 (SD 15.0) seconds. The readability scores of the ChatGPT-generated summaries were significantly better than those of the original abstracts, with Global Readability Score 54.8 (12.3) vs 29.8 (18.5), Flesch-Kincaid Reading Ease 54.8 (12.3) vs 29.8 (18.5), Flesch-Kincaid Grade Level 10.4 (2.2) vs 13.5 (4.0), Gunning Fog Score 12.9 (2.6) vs 16.6 (4.1), SMOG Index 9.1 (2.0) vs 12.0 (3.0), Coleman-Liau Index 12.9 (2.1) vs 14.9 (3.7), and Automated Readability Index 11.1 (2.5) vs 12.0 (5.7); P < .0001 for all except the Automated Readability Index, for which P = .037. The correctness rate of ChatGPT outputs was >85% across all categories assessed, with interrater agreement (Cohen's κ) between the 2 independent physician reviewers ranging from 0.76 to 0.95. CONCLUSIONS: ChatGPT can create accurate summaries of scientific abstracts for patients, with well-crafted prompts enhancing user-friendliness. Although the summaries are satisfactory, expert verification is necessary for improved accuracy.
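The interrater agreement reported here follows the standard Cohen's κ definition, κ = (p_o − p_e) / (1 − p_e), where p_o is the observed agreement and p_e is the agreement expected by chance. A minimal sketch with hypothetical rater labels, not the study's data:

```python
# Sketch: Cohen's kappa for two raters scoring the same items as correct (1)
# or incorrect (0). The label lists below are illustrative, not the study's data.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items where both raters gave the same label.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[label] / n) * (freq_b[label] / n)
              for label in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

rater_1 = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # hypothetical correctness ratings
rater_2 = [1, 1, 1, 0, 1, 0, 0, 1, 1, 1]
print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")
```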


Subject(s)
Health Literacy; Urology; Humans; Reproducibility of Results; Comprehension; Language
9.
Eur Urol Focus ; 9(6): 1068-1071, 2023 11.
Article in English | MEDLINE | ID: mdl-37349181

ABSTRACT

We evaluated the comprehensibility, for the general public, of patient summaries provided by urology journals. The WebFX online tool was used to assess the readability of abstracts and patient summaries by scoring the text according to established readability indices. A total of 266 articles were included, and statistical analysis was performed to compare the readability of abstracts and patient summaries, stratified by article type and text type. The results show that patient summaries consistently performed worse than abstracts on all readability metrics, and that the readability levels of both abstracts and patient summaries were, on average, more advanced than recommended guidelines. This study suggests that patient summaries provided by these urology journals may not be easily understood by the general population, and tools should be developed to help urological researchers improve the accessibility of their work. PATIENT SUMMARY: We checked how easy it is to read and understand patient summaries and abstracts of research articles from four urology journals. We found that the summaries and abstracts were too hard to read. This study shows that we need to make these summaries easier to read for everyone.
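A check like the one described above can also be scripted directly from the published index formulas rather than run through the WebFX page. A minimal sketch of the Flesch-Kincaid Grade Level calculation, using a crude syllable heuristic and a made-up sample summary (not text from the journals studied), compared against the sixth-grade target mentioned in entry 1:

```python
# Sketch: Flesch-Kincaid Grade Level from its published formula,
# 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59,
# with a rough vowel-group syllable heuristic (real tools count syllables better).
# The sample text is a placeholder, not taken from any of the journals studied.
import re

def count_syllables(word):
    # Crude heuristic: count groups of consecutive vowels.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade_level(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

summary = ("This treatment removes the prostate through small cuts. "
           "Most patients go home the next day.")  # placeholder patient summary
grade = fk_grade_level(summary)
print(f"Estimated grade level: {grade:.1f}")
print("Within a sixth-grade target" if grade <= 6 else "Above a sixth-grade target")
```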


Subject(s)
Urology; Humans; Comprehension; Research Design
10.
Sci Immunol ; 6(61)2021 07 01.
Article in English | MEDLINE | ID: mdl-34210785

ABSTRACT

A central feature of the SARS-CoV-2 pandemic is that some individuals become severely ill or die, whereas others have only a mild disease course or are asymptomatic. Here we report development of an improved multimeric αβ T cell staining reagent platform, with each maxi-ferritin "spheromer" displaying 12 peptide-MHC complexes. Spheromers stain specific T cells more efficiently than peptide-MHC tetramers and capture a broader portion of the sequence repertoire for a given peptide-MHC. Analyzing the response in unexposed individuals, we find that T cells recognizing peptides conserved amongst coronaviruses are more abundant and tend to have a "memory" phenotype, compared to those unique to SARS-CoV-2. Significantly, CD8+ T cells with these conserved specificities are much more abundant in COVID-19 patients with mild disease versus those with a more severe illness, suggesting a protective role.


Subject(s)
CD8-Positive T-Lymphocytes/immunology; COVID-19/immunology; Epitopes, T-Lymphocyte/immunology; SARS-CoV-2/immunology; Severity of Illness Index; CD8-Positive T-Lymphocytes/pathology; COVID-19/pathology; Female; Humans; Male