1.
Article in English | MEDLINE | ID: mdl-38744934

ABSTRACT

BACKGROUND: Generative Pre-trained Transformer (GPT) chatbots have gained popularity since the public release of ChatGPT. Studies have evaluated the ability of different GPT models to provide information about medical conditions. To date, no study has assessed the quality of ChatGPT responses to prostate cancer-related questions from both the physician and public perspectives while optimizing outputs for patient consumption.

METHODS: Nine prostate cancer-related questions, identified through Google Trends (Global), were categorized into diagnosis, treatment, and postoperative follow-up. These questions were processed using ChatGPT 3.5, and the responses were recorded. Subsequently, these responses were re-inputted into ChatGPT to create simplified summaries understandable at a sixth-grade level. Readability of both the original ChatGPT responses and the layperson summaries was evaluated using validated readability tools. A survey was conducted among urology providers (urologists and urologists in training) to rate the original ChatGPT responses for accuracy, completeness, and clarity on a 5-point Likert scale. Furthermore, two independent reviewers evaluated the layperson summaries on a correctness trifecta: accuracy, completeness, and decision-making sufficiency. Public assessment of the simplified summaries' clarity and understandability was carried out through Amazon Mechanical Turk (MTurk); participants rated the clarity and demonstrated their understanding through a multiple-choice question.

RESULTS: GPT-generated output was deemed correct by 71.7% to 94.3% of raters (36 urologists, 17 urology residents) across the 9 scenarios. GPT-generated simplified layperson summaries of this output were rated as accurate in 8 of 9 (88.9%) scenarios and sufficient for a patient to make a decision in 8 of 9 (88.9%) scenarios. Mean readability of the layperson summaries was higher than that of the original GPT outputs (original ChatGPT v. simplified ChatGPT, mean (SD): Flesch Reading Ease 36.5 (9.1) v. 70.2 (11.2), p < 0.0001; Gunning Fog 15.8 (1.7) v. 9.5 (2.0), p < 0.0001; Flesch-Kincaid Grade Level 12.8 (1.2) v. 7.4 (1.7), p < 0.0001; Coleman-Liau 13.7 (2.1) v. 8.6 (2.4), p = 0.0002; SMOG Index 11.8 (1.2) v. 6.7 (1.8), p < 0.0001; Automated Readability Index 13.1 (1.4) v. 7.5 (2.1), p < 0.0001). MTurk workers (n = 514) rated the layperson summaries as correct (89.5-95.7%) and correctly understood the content (63.0-87.4%).

CONCLUSION: GPT shows promise for delivering correct patient education on prostate cancer-related content, but the technology is not designed to deliver information to patients. Prompting the model to respond with accuracy, completeness, clarity, and readability may enhance its utility in GPT-powered medical chatbots.
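The study's two-step pipeline (generate an answer, re-input it for a sixth-grade summary, then score both versions for readability) can be outlined in a few lines of code. The sketch below is a minimal illustration, assuming the openai (v1+) and textstat Python packages; the model name, prompt wording, and sample question are assumptions for illustration, not the authors' exact protocol.

    import textstat
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt: str) -> str:
        # Single-turn chat completion; gpt-3.5-turbo stands in for "ChatGPT 3.5".
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def readability(text: str) -> dict:
        # The six validated readability indices reported in the study.
        return {
            "Flesch Reading Ease": textstat.flesch_reading_ease(text),
            "Gunning Fog": textstat.gunning_fog(text),
            "Flesch-Kincaid Grade": textstat.flesch_kincaid_grade(text),
            "Coleman-Liau": textstat.coleman_liau_index(text),
            "SMOG": textstat.smog_index(text),
            "Automated Readability Index": textstat.automated_readability_index(text),
        }

    question = "What are the treatment options for localized prostate cancer?"  # illustrative
    original = ask(question)
    simplified = ask(  # re-input the response to produce a layperson summary
        "Rewrite the following text so it is understandable at a "
        f"sixth-grade reading level:\n\n{original}"
    )

    for label, text in (("original", original), ("simplified", simplified)):
        print(label, readability(text))

Note that Flesch Reading Ease rises as text gets easier, while the five grade-level indices fall; every comparison reported above moves in the easier direction for the simplified summaries.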

3.
Eur Urol Focus; 9(6): 873-887, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38036339

ABSTRACT

CONTEXT: Carbon footprint (CF) has emerged as an important factor when assessing health care interventions.

OBJECTIVE: To investigate the reduction in CF for patients utilizing telemedicine.

EVIDENCE ACQUISITION: The PubMed, Scopus, and Web of Science databases were queried for studies describing telemedicine consultations and reporting carbon emissions saved and the carbon emissions of telemedicine devices as primary outcomes, and travel distance, time and cost savings, and safety as secondary outcomes. Outcomes were tabulated and calculated per consultation. Carbon emissions and travel distances were also calculated for each total study cohort. Risk of bias was assessed using the Newcastle-Ottawa scale, and the Oxford level of evidence was determined.

EVIDENCE SYNTHESIS: A total of 48 studies met the inclusion criteria, covering 68 465 481 telemedicine consultations and savings of 691 825 tons of CO2 emissions and 3 318 464 047 km of travel distance. Carbon savings were mostly reported as the estimated travel distance saved multiplied by a conversion factor. Medical specialties used telemedicine to connect specialists with patients at home (n = 25) or at a local center (n = 6). Surgical specialties used telemedicine for virtual preoperative assessment (n = 9), follow-up (n = 4), and general consultation (n = 4). The savings per consultation were 21.9-632.17 min and $1.85-$325. More studies covered the COVID-19 time frame (n = 33) than the period before the pandemic (n = 15). The studies are limited by their estimation methods, which mostly derive carbon savings from travel distance, and by a lack of follow-up data to analyze the real impact on travel and appointments.

CONCLUSIONS: Telemedicine reduces the CF of the health care sector. Expanding the use of telemedicine and educating providers and patients could further decrease CO2 emissions and save both money and time.

PATIENT SUMMARY: We reviewed 48 studies on the use of telemedicine. We found that people used their cars less and saved time and money, as well as CO2 emissions, when they used teleconsultations. Some studies only looked at how much CO2 from driving was saved, so there might be more to learn about the benefits of teleconsultations. The use of online doctor appointments is not only good for our planet but also helps patients save time and money. This review is registered in the PROSPERO database for systematic reviews (CRD42023456839).
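Most of the included studies estimated carbon savings indirectly, by converting avoided travel distance with a per-kilometre emission factor. A minimal sketch of that conversion, assuming an illustrative average-car factor of 0.17 kg CO2e per km (real factors vary by country, fuel, and vehicle fleet):

    # Distance-based carbon estimate, as used by most included studies:
    # avoided round-trip distance x per-km emission factor = CO2e saved.
    CAR_EMISSION_FACTOR_KG_PER_KM = 0.17  # illustrative assumption, not from the review

    def co2e_saved_kg(round_trip_km: float,
                      factor: float = CAR_EMISSION_FACTOR_KG_PER_KM) -> float:
        return round_trip_km * factor

    # Example: avoiding a 120 km round trip saves roughly 20.4 kg CO2e.
    print(co2e_saved_kg(120.0))

For scale, dividing the review's aggregate figures (691 825 t of CO2 over 3 318 464 047 km) implies an average factor of roughly 0.21 kg CO2e per km, in the same range as the illustrative value above.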


Subject(s)
Carbon Footprint, Telemedicine, Humans, Carbon, Carbon Dioxide/analysis, Delivery of Health Care, Referral and Consultation, Systematic Reviews as Topic