1.
Acad Radiol ; 31(1): 338-342, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37709612

ABSTRACT

RATIONALE AND OBJECTIVES: With recent advances in the power and accessibility of artificial intelligence (AI) large language models (LLMs), patients may increasingly turn to these platforms with questions about radiologic examinations and procedures, despite valid concerns about the accuracy of the information provided. This study assessed the accuracy and completeness of patient-education information provided by the Bing Chatbot, an LLM powered by ChatGPT, for common radiologic exams.

MATERIALS AND METHODS: We selected three common radiologic examinations and procedures: computed tomography (CT) of the abdomen, magnetic resonance imaging (MRI) of the spine, and bone biopsy. For each, ten questions were posed to the chatbot in two trials under three different chatbot settings. Two reviewers independently rated the chatbot's responses for accuracy and completeness against an accepted online resource, radiologyinfo.org.

RESULTS: Of the 360 reviews performed, 336 (93%) were rated "entirely correct" and 24 (7%) "mostly correct," indicating a high level of reliability. For completeness, 65% of responses were rated "complete" and 35% "mostly complete." The "More Creative" chatbot setting produced a higher proportion of responses rated "entirely correct," but there were otherwise no significant differences in ratings across chatbot settings or exam types. Responses were written at an eighth-grade reading level.

CONCLUSION: The Bing Chatbot provided accurate responses that answered all or most aspects of each question, tending to err on the side of caution for nuanced questions. Importantly, no responses were inaccurate or had the potential to cause harm or confusion for the user. LLM chatbots thus show potential to enhance patient education in radiology and could be integrated into patient portals for purposes such as exam preparation and results interpretation.


Subject(s)
Artificial Intelligence; Radiology; Humans; Reproducibility of Results; Patient Education as Topic; Radiography
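
As a rough illustration of the analysis this abstract describes, the Python sketch below tallies accuracy ratings by chatbot setting and tests whether the "entirely correct" proportion differs across settings. It is a minimal sketch, not the study's code: the per-setting counts are invented (only their totals match the reported 336/24 split), the "More Balanced" and "More Precise" labels are assumed from Bing Chat's published modes, and the chi-square test is one plausible choice since the abstract does not name its statistical method.

    # Hypothetical illustration only; counts per setting are invented.
    from scipy.stats import chi2_contingency

    # Rows: chatbot settings; columns: ["entirely correct", "mostly correct"].
    # Per-row counts are made up, but they sum to the study's 336 and 24.
    observed = [
        [118, 2],   # "More Creative" (named in the abstract)
        [110, 10],  # "More Balanced" (assumed setting name)
        [108, 12],  # "More Precise"  (assumed setting name)
    ]

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")

    total_correct = sum(row[0] for row in observed)
    total = sum(map(sum, observed))
    print(f"entirely correct: {total_correct}/{total} = {total_correct/total:.0%}")

A chi-square test of independence on a settings-by-rating contingency table is a standard way to check the kind of claim the abstract makes, namely that ratings did not differ significantly across settings.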
2.
Clin Orthop Relat Res ; 471(10): 3237-42, 2013 Oct.
Article in English | MEDLINE | ID: mdl-23801062

ABSTRACT

BACKGROUND: A low response rate is believed to decrease the validity of survey studies, yet the factors associated with survey nonresponse are poorly characterized in orthopaedic research.

QUESTIONS/PURPOSES: This study asked whether (1) psychologic factors; (2) demographics; (3) illness-related factors; and (4) pain predict a lower likelihood of a patient returning a mailed survey.

METHODS: One hundred four adult new or return patients completed questionnaires, including the Pain Catastrophizing Scale, the Patient Health Questionnaire-9 depression scale, the Short Health Anxiety Index, demographics, and a 0 to 10 pain scale, during a routine visit to a hand and upper extremity surgeon. Of these patients, 38% had undergone surgery; the remainder were seen for various other conditions. Six months after the visit, patients were mailed the DASH questionnaire and a 0 to 10 scale rating their satisfaction with the visit. Bivariate analysis and logistic regression were used to identify risk factors for nonresponse at the follow-up of this study. The cohort consisted of 57 women and 47 men (mean age, 51 years) with various diagnoses. Thirty-five patients (34%) returned the questionnaire. Responders were satisfied with their visit (mean satisfaction, 8.7) and had a mean DASH score of 9.6.

RESULTS: Compared with patients who returned the questionnaires, nonresponders had higher pain catastrophizing scores, were younger, were more frequently male, and had more pain at enrollment. In logistic regression, male sex (odds ratio [OR], 2.6), pain (OR, 1.3), and younger age (OR, 1.03) were associated with not returning the questionnaire.

CONCLUSIONS: Survey studies should be interpreted in light of the fact that, in a hand surgery practice, patients who do not return questionnaires differ from those who do. Hand surgery studies that rely on questionnaire evaluation remote from enrollment should include tactics to improve the response of younger, male patients with more pain.

LEVEL OF EVIDENCE: Level II, prognostic study. See Guidelines for Authors for a complete description of levels of evidence.


Subject(s)
Data Collection; Hand/surgery; Health Surveys; Orthopedic Procedures; Patient Satisfaction; Adolescent; Adult; Aged; Aged, 80 and over; Female; Health Status; Humans; Male; Middle Aged; Pain Measurement; Pain, Postoperative/diagnosis; Postal Service; Research Design; Self Report; Surveys and Questionnaires
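
To make this abstract's regression concrete, here is a minimal Python sketch of fitting a nonresponse model and converting its coefficients to odds ratios with statsmodels. All patient-level data are simulated: only the cohort size (104), the 0 to 10 pain scale, and the mean age of 51 are taken from the abstract, and the simulated effect sizes are arbitrary rather than the study's estimates.

    # Simulated illustration only; not the study's data or estimates.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 104  # cohort size from the abstract
    df = pd.DataFrame({
        "male": rng.integers(0, 2, n),   # 1 = male
        "pain": rng.integers(0, 11, n),  # 0-10 pain scale
        "age": rng.normal(51, 15, n),    # mean age 51; the spread is invented
    })

    # Simulated outcome: 1 = did NOT return the mailed questionnaire.
    logit = -1.5 + 0.9 * df["male"] + 0.25 * df["pain"] - 0.03 * (df["age"] - 51)
    df["nonresponse"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

    X = sm.add_constant(df[["male", "pain", "age"]])
    fit = sm.Logit(df["nonresponse"], X).fit(disp=0)
    print(np.exp(fit.params))  # exponentiated coefficients = odds ratios

An exponentiated coefficient above 1 (as reported for male sex and pain) means higher odds of nonresponse; for age, an odds ratio below 1 per year of increasing age expresses the same association the abstract reports for younger age.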