Results 1 - 5 of 5
1.
J Urol ; 2024 Aug 14. doi: 10.1097/JU.0000000000004199
Article in English | MEDLINE | ID: mdl-39141845

ABSTRACT

PURPOSE: This cross-sectional study assessed a generative-AI platform to automate the creation of accurate, appropriate, and compelling social-media (SoMe) posts from urological journal articles. MATERIALS AND METHODS: One hundred SoMe posts from the X (Twitter) profiles of the top 3 urology journals were collected from Aug-2022 to Oct-2023. A freeware GPT tool was developed to auto-generate SoMe posts, which included a title summary, key findings, pertinent emojis, hashtags, and a DOI link to the article. Three physicians independently evaluated the GPT-generated posts for achieving a tetrafecta of accuracy and appropriateness criteria. Fifteen scenarios were created from 5 randomly selected posts from each journal. Each scenario contained both the original and the GPT-generated post for the same article. Five questions were formulated to investigate the posts' likability, shareability, engagement, understandability, and comprehensiveness. The paired posts were then randomized and presented to blinded academic authors and, through Amazon Mechanical Turk (AMT), to general-public responders for preference evaluation. RESULTS: Median (IQR) time for post auto-generation was 10.2 seconds (8.5-12.5). Of the 150 rated GPT-generated posts, 115 (76.6%) met the correctness tetrafecta: 144 (96%) accurately summarized the title, 147 (98%) accurately presented the article's main findings, 131 (87.3%) appropriately used emojis, and 138 (92%) appropriately used hashtags. A total of 258 academic urologists and 493 AMT responders answered the surveys; the GPT-generated posts consistently outperformed the original journals' posts for both academicians and AMT responders (P < .05). CONCLUSIONS: Generative AI can automate the creation of SoMe posts from urology journal abstracts that are both accurate and preferred over the original posts by the academic community and the general public.
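As a rough illustration of the kind of pipeline this abstract describes, the sketch below drafts a post from an article's title, abstract, and DOI using the OpenAI Python client. The authors' freeware tool, its prompt wording, and its model are not described in the abstract, so every identifier here (generate_post, the prompt text, the gpt-4o-mini model name) is an assumption for illustration, not the study's implementation.

```python
# Hypothetical sketch only: the study's actual tool, prompt, and model are not
# public, so these choices are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Write a social media post (max 280 characters) for the urology article "
    "below. Summarize the title, state the key finding, add pertinent emojis "
    "and hashtags, and end with the DOI link.\n\n"
    "Title: {title}\nAbstract: {abstract}\nDOI: https://doi.org/{doi}"
)

def generate_post(title: str, abstract: str, doi: str) -> str:
    """Return one draft social media post for the given article."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name, not the study's model
        messages=[{"role": "user",
                   "content": PROMPT.format(title=title, abstract=abstract, doi=doi)}],
    )
    return response.choices[0].message.content
```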

3.
Urol Oncol ; 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39179437

ABSTRACT

OBJECTIVE: To evaluate the learning curve of transperineal (TP) magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) fusion prostate biopsy (PBx). MATERIALS AND METHODS: Consecutive patients undergoing MRI followed by TP PBx from May/2017 to January/2023 were prospectively enrolled (IRB# HS-13-00663). All participants underwent MRI followed by a 12 to 14 core systematic PBx (SB), with at least 2 additional targeted biopsy (TB) cores per PIRADS ≥3 lesion. The biopsies were performed transperineally using an organ-tracking image-fusion system. The cohort was divided into chronological quintiles, and an inflection-point analysis was performed to determine proficiency. Operative time was defined from insertion to removal of the TRUS probe from the patient's rectum. Grade Group ≥2 defined clinically significant prostate cancer (CSPCa). Statistical significance was set at P < 0.05. RESULTS: A total of 370 patients were included and divided into quintiles of 74 patients each. MRI findings and PIRADS distribution were similar between quintiles (P = 0.08). CSPCa detection with SB+TB was consistent across quintiles: PIRADS 1 and 2 (range, 0%-18%; P = 0.25); PIRADS 3 to 5 (range, 46%-70%; P = 0.12). CSPCa detection on PIRADS 3 to 5 TB alone, for quintiles 1 to 5, was 44%, 58%, 66%, 41%, and 53%, respectively (P = 0.08). The median operative time significantly decreased for PIRADS 1 and 2 (33 min to 13 min; P < 0.01) and PIRADS 3 to 5 (48 min to 19 min; P < 0.01), reaching a plateau after 156 cases. Complications were not significantly different across quintiles (range, 0-5.4%; P = 0.3). CONCLUSIONS: CSPCa detection remained consistently satisfactory throughout the learning curve of TP MRI/TRUS fusion prostate biopsy, while operative time significantly decreased, with proficiency achieved after 156 cases.
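The abstract reports an inflection-point analysis of operative time without specifying the method. One common approach is a piecewise-linear ("broken-stick") fit in which operative time declines with case number until a breakpoint and then plateaus; the sketch below is a minimal illustration of that assumption, and the function names, starting values, and quintile helper are hypothetical rather than the study's code.

```python
# Minimal sketch of a broken-stick fit for a learning curve; the study's actual
# inflection-point method is not stated, so this model is an assumption.
import numpy as np
from scipy.optimize import curve_fit

def broken_stick(case, breakpoint, slope, plateau):
    """Operative time falls linearly with case number until `breakpoint`, then flattens."""
    return np.where(case < breakpoint, plateau + slope * (case - breakpoint), plateau)

def estimate_inflection(case_number: np.ndarray, op_time_min: np.ndarray) -> float:
    """Fit the broken-stick model and return the estimated proficiency breakpoint."""
    p0 = [len(case_number) / 2, -0.2, float(np.min(op_time_min))]  # rough starting guesses
    params, _ = curve_fit(broken_stick, case_number, op_time_min, p0=p0)
    return float(params[0])

def quintile_medians(op_time_min: np.ndarray) -> list[float]:
    """Median operative time per chronological quintile, mirroring the study design."""
    return [float(np.median(q)) for q in np.array_split(op_time_min, 5)]
```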

4.
Article in English | MEDLINE | ID: mdl-38744934

ABSTRACT

BACKGROUND: Generative Pre-trained Transformer (GPT) chatbots have gained popularity since the public release of ChatGPT. Studies have evaluated the ability of different GPT models to provide information about medical conditions. To date, no study has assessed the quality of ChatGPT responses to prostate cancer-related questions from both the physician and public perspectives while optimizing outputs for patient consumption. METHODS: Nine prostate cancer-related questions, identified through Google Trends (Global), were categorized into diagnosis, treatment, and postoperative follow-up. These questions were processed using ChatGPT 3.5, and the responses were recorded. Subsequently, these responses were re-inputted into ChatGPT to create simplified summaries understandable at a sixth-grade level. Readability of both the original ChatGPT responses and the layperson summaries was evaluated using validated readability tools. A survey was conducted among urology providers (urologists and urologists in training) to rate the original ChatGPT responses for accuracy, completeness, and clarity using a 5-point Likert scale. Furthermore, two independent reviewers evaluated the layperson summaries on a correctness trifecta: accuracy, completeness, and decision-making sufficiency. Public assessment of the simplified summaries' clarity and understandability was carried out through Amazon Mechanical Turk (MTurk), where participants rated the clarity and demonstrated their understanding through a multiple-choice question. RESULTS: GPT-generated output was deemed correct by 71.7% to 94.3% of raters (36 urologists, 17 urology residents) across 9 scenarios. GPT-generated simplified layperson summaries of this output were rated as accurate in 8 of 9 (88.9%) scenarios and sufficient for a patient to make a decision in 8 of 9 (88.9%) scenarios. Layperson summaries were more readable than the original GPT outputs (original ChatGPT vs. simplified ChatGPT, mean (SD): Flesch Reading Ease 36.5 (9.1) vs. 70.2 (11.2), p < 0.0001; Gunning Fog 15.8 (1.7) vs. 9.5 (2.0), p < 0.0001; Flesch-Kincaid Grade Level 12.8 (1.2) vs. 7.4 (1.7), p < 0.0001; Coleman-Liau 13.7 (2.1) vs. 8.6 (2.4), p = 0.0002; SMOG Index 11.8 (1.2) vs. 6.7 (1.8), p < 0.0001; Automated Readability Index 13.1 (1.4) vs. 7.5 (2.1), p < 0.0001). MTurk workers (n = 514) rated the layperson summaries as correct (89.5-95.7%) and correctly understood the content (63.0-87.4%). CONCLUSION: GPT shows promise for delivering correct patient education content on prostate cancer, but the technology is not designed for delivering medical information to patients. Prompting the model to respond with accuracy, completeness, clarity, and readability may enhance its utility in GPT-powered medical chatbots.
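The readability comparison above relies on standard formulas (Flesch Reading Ease, Gunning Fog, Flesch-Kincaid, Coleman-Liau, SMOG, ARI). The sketch below shows one way such scores can be computed with the open-source textstat package; the abstract does not name the study's readability tool, so this library choice and the compare_readability helper are assumptions for illustration.

```python
# Minimal sketch using the textstat package (pip install textstat); the study's
# actual readability tooling is not named, so this choice is an assumption.
import textstat

METRICS = {
    "Flesch Reading Ease": textstat.flesch_reading_ease,
    "Gunning Fog": textstat.gunning_fog,
    "Flesch-Kincaid Grade Level": textstat.flesch_kincaid_grade,
    "Coleman-Liau Index": textstat.coleman_liau_index,
    "SMOG Index": textstat.smog_index,
    "Automated Readability Index": textstat.automated_readability_index,
}

def compare_readability(original: str, simplified: str) -> dict:
    """Return each readability score for the original and simplified texts."""
    return {name: (round(fn(original), 1), round(fn(simplified), 1))
            for name, fn in METRICS.items()}
```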
