Results 1 - 5 of 5
1.
J Urol ; 2024 Aug 14. doi: 10.1097/JU.0000000000004199
Article in English | MEDLINE | ID: mdl-39141845

ABSTRACT

PURPOSE: This cross-sectional study assessed a generative-AI platform for automating the creation of accurate, appropriate, and compelling social-media (SoMe) posts from urological journal articles. MATERIALS AND METHODS: One hundred SoMe posts from the X (Twitter) profiles of the top 3 urology journals were collected from Aug-2022 to Oct-2023. A freeware GPT tool was developed to auto-generate SoMe posts comprising a title summary, key findings, pertinent emojis, hashtags, and a DOI link to the article. Three physicians independently evaluated GPT-generated posts against a tetrafecta of accuracy and appropriateness criteria. Fifteen scenarios were created from 5 randomly selected posts from each journal. Each scenario contained both the original and the GPT-generated post for the same article. Five questions were formulated to investigate the posts' likability, shareability, engagement, understandability, and comprehensiveness. The paired posts were then randomized and presented to blinded academic authors and to the general public through Amazon Mechanical Turk (AMT) responders for preference evaluation. RESULTS: Median (IQR) time for post auto-generation was 10.2 seconds (8.5-12.5). Of the 150 rated GPT-generated posts, 115 (76.6%) met the correctness tetrafecta: 144 (96%) accurately summarized the title, 147 (98%) accurately presented the article's main findings, and 131 (87.3%) and 138 (92%) appropriately used emojis and hashtags, respectively. A total of 258 academic urologists and 493 AMT responders answered the surveys; the GPT-generated posts consistently outperformed the original journals' posts among both academicians and AMT responders (P < .05). CONCLUSIONS: Generative AI can automate the creation of SoMe posts from urology journal abstracts that are both accurate and preferred by the academic community and the general public.
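The post format the study describes (title summary, key findings, emojis, hashtags, and a DOI link) can be sketched as a simple template. The class and field names below are illustrative assumptions, not the authors' actual tool:

```python
from dataclasses import dataclass, field

@dataclass
class SoMePost:
    # Hypothetical fields mirroring the elements the study describes:
    # title summary, key findings, emojis, hashtags, and a DOI link.
    title_summary: str
    key_findings: list = field(default_factory=list)
    emojis: str = ""
    hashtags: list = field(default_factory=list)
    doi: str = ""

    def render(self) -> str:
        """Assemble the post text in the order the study lists."""
        findings = "\n".join(f"- {f}" for f in self.key_findings)
        tags = " ".join(self.hashtags)
        return f"{self.emojis} {self.title_summary}\n{findings}\n{tags}\nhttps://doi.org/{self.doi}"

post = SoMePost(
    title_summary="AI can draft accurate social-media posts from journal abstracts",
    key_findings=["76.6% of generated posts met all four correctness criteria"],
    emojis="🤖📄",
    hashtags=["#Urology", "#MedTwitter"],
    doi="10.xxxx/example",  # placeholder DOI, not a real identifier
)
print(post.render())
```

The study's GPT tool presumably produced each field with a language model; only the assembly step is sketched here.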

2.
Int Braz J Urol ; 50(5): 616-628, 2024.
Article in English | MEDLINE | ID: mdl-39106117

ABSTRACT

PURPOSE: To compare transperineal (TP) vs transrectal (TR) magnetic resonance imaging (MRI) and transrectal ultrasound (TRUS) fusion-guided prostate biopsy (PBx) in a large, ethnically diverse, and multiracial cohort. MATERIALS AND METHODS: Consecutive patients who underwent multiparametric (mp) MRI followed by TP or TR TRUS fusion-guided PBx were identified from a prospective database (IRB #HS-13-00663). All patients underwent mpMRI followed by 12-14 core systematic PBx. A minimum of two additional target-biopsy cores were taken per PIRADS≥3 lesion. The endpoint was the detection of clinically significant prostate cancer (CSPCa; Grade Group, GG≥2). Statistical significance was defined as p<0.05. RESULTS: A total of 1491 patients met the inclusion criteria: 480 underwent TP and 1011 TR PBx. Overall, 11% of patients were Asian, 5% African American, 14% Hispanic, 14% other, and 56% White, with similar distributions between TP and TR (p=0.4). For PIRADS 3-5, CSPCa detection was significantly higher with TP PBx (61% vs 54%, p=0.03) than with TR PBx, but not for PIRADS 1-2 (13% vs 13%, p=1.0). After adjusting for confounders on multivariable analysis, Black race, but not the PBx approach (TP vs TR), was an independent predictor of CSPCa detection. The median maximum cancer core length (11 vs 8mm; p<0.001) and percent (80% vs 60%; p<0.001) were greater for TP PBx, even after adjusting for confounders. CONCLUSIONS: In a large and diverse cohort, Black race, but not the biopsy approach, was an independent predictor of CSPCa detection. TP and TR PBx yielded similar CSPCa detection rates; however, TP PBx was histologically more informative.
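The detection-rate contrast reported above (e.g., 61% vs 54%, p=0.03) is the kind of comparison typically made with a two-proportion z-test. A minimal sketch using the normal approximation; the counts below are illustrative, not the study's raw data:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test (pooled normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                       # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled standard error
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Illustrative counts chosen to give 61% vs 54% (assumed, not from the paper):
z, p = two_proportion_z(244, 400, 459, 850)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Whether such a difference survives adjustment for confounders, as in the study's multivariable analysis, is a separate question that this unadjusted test does not answer.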


Subjects
Image-Guided Biopsy, Prostate, Prostatic Neoplasms, Ultrasonography, Interventional, Humans, Male, Prostatic Neoplasms/pathology, Prostatic Neoplasms/diagnostic imaging, Image-Guided Biopsy/methods, Middle Aged, Aged, Ultrasonography, Interventional/methods, Prostate/pathology, Prostate/diagnostic imaging, Perineum, Interventional Magnetic Resonance Imaging/methods, Neoplasm Grading, Multiparametric Magnetic Resonance Imaging/methods, Reproducibility of Results
4.
Article in English | MEDLINE | ID: mdl-38744934

ABSTRACT

BACKGROUND: Generative Pre-trained Transformer (GPT) chatbots have gained popularity since the public release of ChatGPT. Studies have evaluated the ability of different GPT models to provide information about medical conditions. To date, no study has assessed the quality of ChatGPT outputs to prostate cancer-related questions from both the physician and public perspectives while optimizing outputs for patient consumption. METHODS: Nine prostate cancer-related questions, identified through Google Trends (Global), were categorized into diagnosis, treatment, and postoperative follow-up. These questions were processed using ChatGPT 3.5, and the responses were recorded. Subsequently, these responses were re-inputted into ChatGPT to create simplified summaries understandable at a sixth-grade level. Readability of both the original ChatGPT responses and the layperson summaries was evaluated using validated readability tools. A survey was conducted among urology providers (urologists and urologists in training) to rate the original ChatGPT responses for accuracy, completeness, and clarity using a 5-point Likert scale. Furthermore, two independent reviewers evaluated the layperson summaries on a correctness trifecta: accuracy, completeness, and decision-making sufficiency. Public assessment of the simplified summaries' clarity and understandability was carried out through Amazon Mechanical Turk (MTurk). Participants rated the clarity and demonstrated their understanding through a multiple-choice question. RESULTS: GPT-generated output was deemed correct by 71.7% to 94.3% of raters (36 urologists, 17 urology residents) across 9 scenarios. GPT-generated simplified layperson summaries of this output were rated as accurate in 8 of 9 (88.9%) scenarios and sufficient for a patient to make a decision in 8 of 9 (88.9%) scenarios. Layperson summaries were significantly more readable than the original GPT outputs (original ChatGPT vs simplified ChatGPT, mean (SD): Flesch Reading Ease 36.5 (9.1) vs 70.2 (11.2), p < 0.0001; Gunning Fog 15.8 (1.7) vs 9.5 (2.0), p < 0.0001; Flesch-Kincaid Grade Level 12.8 (1.2) vs 7.4 (1.7), p < 0.0001; Coleman-Liau 13.7 (2.1) vs 8.6 (2.4), p = 0.0002; SMOG Index 11.8 (1.2) vs 6.7 (1.8), p < 0.0001; Automated Readability Index 13.1 (1.4) vs 7.5 (2.1), p < 0.0001). MTurk workers (n = 514) rated the layperson summaries as correct (89.5-95.7%) and correctly understood the content (63.0-87.4%). CONCLUSION: GPT shows promise for correct patient education on prostate cancer-related content, but the technology is not designed for delivering patient information. Prompting the model to respond with accuracy, completeness, clarity, and readability may enhance its utility in GPT-powered medical chatbots.
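The first of the metrics above, Flesch Reading Ease, is computed from the standard published formula: 206.835 − 1.015 × (words/sentences) − 84.6 × (syllables/words); higher scores mean easier text. A minimal sketch, using a crude vowel-group heuristic to approximate syllable counts (validated tools use more careful syllabification):

```python
import re

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentences) - 84.6*(syllables/words).
    Syllables are approximated by counting vowel groups per word."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word: str) -> int:
        # crude approximation: each run of vowels counts as one syllable
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (206.835
            - 1.015 * (n_words / sentences)
            - 84.6 * (total_syllables / n_words))

easy = flesch_reading_ease("The cat sat on the mat. It was happy.")
hard = flesch_reading_ease(
    "Multiparametric magnetic resonance imaging facilitates "
    "clinically significant carcinoma identification.")
print(f"easy: {easy:.1f}, hard: {hard:.1f}")
```

The study's jump from a mean of 36.5 to 70.2 corresponds roughly to moving from college-level to plain conversational English on this scale.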

5.
Eur Urol Focus ; 9(6): 873-887, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38036339

ABSTRACT

CONTEXT: Carbon footprint (CF) has emerged as an important factor when assessing health care interventions. OBJECTIVE: To investigate the reduction in CF for patients utilizing telemedicine. EVIDENCE ACQUISITION: The PubMed, Scopus, and Web of Science databases were queried for studies describing telemedicine consultation and reporting carbon emissions saved and the carbon emissions of telemedicine devices as primary outcomes, and travel distance, time and cost savings, and safety as secondary outcomes. Outcomes were tabulated and calculated per consultation. Carbon emissions and travel distances were also calculated for each total study cohort. Risk of bias was assessed using the Newcastle-Ottawa scale, and the Oxford level of evidence was determined. EVIDENCE SYNTHESIS: A total of 48 studies met the inclusion criteria, covering 68 465 481 telemedicine consultations and savings of 691 825 tons of CO2 emissions and 3 318 464 047 km of travel distance. Carbon savings were mostly estimated by applying a conversion factor to the travel distance avoided. Medical specialties used telemedicine to connect specialists with patients at home (n = 25) or at a local center (n = 6). Surgical specialties used telemedicine for virtual preoperative assessment (n = 9), follow-up (n = 4), and general consultation (n = 4). The savings per consultation were 21.9-632.17 min and $1.85-$325. More studies focused on the COVID-19 time frame (n = 33) than on the period before the pandemic (n = 15). The studies are limited by their carbon-savings calculations, which rely mostly on travel distance, and by the lack of appropriate follow-up to analyze the real impact on travel and appointments. CONCLUSIONS: Telemedicine reduces the CF of the health care sector. Expanding the use of telemedicine and educating providers and patients could further decrease CO2 emissions and save both money and time. PATIENT SUMMARY: We reviewed 48 studies on the use of telemedicine. We found that people used their cars less and saved time, money, and CO2 emissions when they used teleconsultations. Some studies only looked at how much CO2 from driving was saved, so there might be more to learn about the benefits of teleconsultations. Online doctor appointments are not only good for our planet but also help patients save time and money. This review is registered in the PROSPERO database for systematic reviews (CRD42023456839).
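The distance-based carbon accounting most of the included studies used reduces to multiplying avoided travel by an emission factor. A minimal sketch; the 0.17 kg CO2/km factor is an assumed order-of-magnitude value for an average passenger car, not a figure taken from the review:

```python
def co2_saved_kg(round_trip_km: float,
                 emission_factor_kg_per_km: float = 0.17) -> float:
    """CO2 avoided by replacing one in-person visit with a teleconsultation.
    The default emission factor is an assumed average-car value,
    not one reported by the review."""
    return round_trip_km * emission_factor_kg_per_km

# e.g., a 60 km round trip avoided -> roughly 10 kg of CO2 per consultation
print(co2_saved_kg(60))
```

As the review notes, this approach ignores the (small but nonzero) emissions of the telemedicine devices themselves, so it gives an upper bound on the net saving.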


Subjects
Carbon Footprint, Telemedicine, Humans, Carbon, Carbon Dioxide/analysis, Delivery of Health Care, Referral and Consultation, Systematic Reviews as Topic