Results 1 - 20 of 672
1.
BMC Public Health ; 24(1): 2393, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-39227851

ABSTRACT

BACKGROUND: Oncological patients have high information needs that are often unmet. Patient versions of oncological clinical practice guidelines (PVG) translate clinical practice guidelines into laypersons' language and might help to address patients' information needs. Currently, 30 oncological PVG have been published in Germany and more are being developed. Following a large multi-phase project on oncological PVG in Germany, recommendations to improve the use and dissemination of PVG were adopted in a multi-stakeholder workshop. METHODS: Organisations representing users of PVG (patients, medical personnel, and multipliers), creators, initiators/funding organisations of PVG, and organisations with methodological expertise in the development of clinical practice guidelines or in patient health information were invited to participate. The workshop included a World Café for discussion of pre-selected recommendations and a structured consensus procedure for all recommendations. Recommendations with agreement of > 75% were approved; recommendations with ≤ 75% agreement were rejected. RESULTS: The workshop took place on 24th April 2023 in Cologne, Germany. Overall, 23 people from 24 organisations participated in the discussion. Of 35 suggested recommendations, 28 reached consensus and were approved. The recommendations referred to the topics dissemination (N = 13), design and format (N = 7), (digital) links (N = 5), digitalisation (N = 4), up-to-dateness (N = 3), and use of the PVG in collaboration between healthcare providers and patients (N = 3). CONCLUSION: The practical recommendations consider various perspectives and can help to improve the use and dissemination of oncological PVG in Germany. The inclusion of different stakeholders could facilitate the transfer of the results into practice.


Subject(s)
Practice Guidelines as Topic , Humans , Germany , Neoplasms/therapy , Information Dissemination/methods , Medical Oncology/standards , Stakeholder Participation
2.
Virchows Arch ; 2024 Aug 17.
Article in English | MEDLINE | ID: mdl-39153109

ABSTRACT

Pathologists have closely collaborated with clinicians, mainly urologists, to update the Gleason grading system to reflect current practice in prostate cancer (PCa) diagnosis, prognosis, and treatment. This collaboration has extended to patient advocacy and patient information. Ten common questions that patients ask pathologists about PCa grading, together with the pathologists' answers, are reported.

3.
Cureus ; 16(7): e64114, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39119408

ABSTRACT

INTRODUCTION: ChatGPT (OpenAI, San Francisco, CA, USA) is a novel artificial intelligence (AI) application that is used by millions of people, and the numbers are growing by the day. Because it has the potential to be a source of patient information, this study aimed to evaluate the ability of ChatGPT to answer frequently asked questions (FAQs) about asthma with consistent reliability, acceptability, and easy readability. METHODS: We collected 30 FAQs about asthma from the Global Initiative for Asthma website. ChatGPT was asked each question twice, by two different users, to assess consistency. The responses were evaluated by five board-certified internal medicine physicians for reliability and acceptability. The consistency of responses was determined by the differences in evaluation between the two answers to the same question. The readability of all responses was measured using the Flesch Reading Ease Scale (FRES), the Flesch-Kincaid Grade Level (FKGL), and the Simple Measure of Gobbledygook (SMOG). RESULTS: Sixty responses were collected for evaluation. Fifty-six (93.33%) of the responses were of good reliability. The average rating of the responses was 3.65 out of 4 total points. The evaluators judged 78.3% (n=47) of the responses acceptable as the sole answer for an asthmatic patient. Only two (6.67%) of the 30 questions had inconsistent answers. The average readability of all responses was 33.50±14.37 on the FRES, 12.79±2.89 on the FKGL, and 13.47±2.38 on the SMOG. CONCLUSION: Compared to online websites, we found that ChatGPT can be a reliable and acceptable source of information for asthma patients in terms of information quality. However, all responses were of difficult readability, and none met the recommended readability levels. Therefore, the readability of this AI application requires improvement to be more suitable for patients.
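The three readability measures used in this study (FRES, FKGL, SMOG) are closed-form formulas over word, sentence, and syllable counts. A minimal sketch of the standard formulas follows; the counts are passed in directly here, since syllable estimation itself is heuristic and varies between tools:

```python
import math

def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease (FRES): higher scores mean easier text."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level (FKGL): approximate US school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def smog_index(sentences: int, polysyllables: int) -> float:
    """SMOG: grade estimate from words of 3+ syllables, normalized to 30 sentences."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

# A 100-word passage in 10 sentences with 130 syllables:
print(round(flesch_reading_ease(100, 10, 130), 1))
print(round(flesch_kincaid_grade(100, 10, 130), 2))
```

By these formulas, the study's average FRES of 33.50 and FKGL of 12.79 both indicate college-level text, well above the sixth-to-eighth-grade level usually recommended for patient materials.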

4.
J Med Internet Res ; 26: e55138, 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39141910

ABSTRACT

BACKGROUND: OpenAI's ChatGPT is a source of advanced online health information (OHI) that may be integrated into individuals' health information-seeking routines. However, concerns have been raised about its factual accuracy and impact on health outcomes. To forecast implications for medical practice and public health, more information is needed on who uses the tool, how often, and for what. OBJECTIVE: This study aims to characterize the reasons for and types of ChatGPT OHI use and describe the users most likely to engage with the platform. METHODS: In this cross-sectional survey, patients received invitations to participate via the ResearchMatch platform, a nonprofit affiliate of the National Institutes of Health. A web-based survey measured demographic characteristics, use of ChatGPT and other sources of OHI, experience characterization, and resultant health behaviors. Descriptive statistics were used to summarize the data. Both 2-tailed t tests and Pearson chi-square tests were used to compare users of ChatGPT OHI to nonusers. RESULTS: Of 2406 respondents, 21.5% (n=517) reported using ChatGPT for OHI. ChatGPT users were younger than nonusers (32.8 vs 39.1 years, P<.001), with lower advanced degree attainment (BA or higher; 49.9% vs 67%, P<.001) and greater use of transient health care (ED and urgent care; P<.001). ChatGPT users were more avid consumers of general non-ChatGPT OHI (percentage with weekly or greater OHI-seeking frequency in the past 6 months, 28.2% vs 22.8%, P<.001). Overall, 39.3% (n=206) of respondents endorsed using the platform for OHI 2-3 times weekly or more, and most sought the tool to determine whether a consultation was required (47.4%, n=245) or to explore alternative treatment (46.2%, n=239). Use characterization was favorable, as many believed ChatGPT to be just as or more useful than other OHI sources (87.7%, n=429) and their doctor (81%, n=407). About one-third of respondents requested a referral (35.6%, n=184) or changed medications (31%, n=160) based on the information received from ChatGPT. Although many users reported skepticism regarding the ChatGPT output (67.9%, n=336), most turned to their physicians (67.5%, n=349). CONCLUSIONS: This study underscores the significant role of AI-generated OHI in shaping health-seeking behaviors and the potential evolution of patient-provider interactions. Given the proclivity of these users to enact health behavior changes based on AI-generated content, there is an opportunity for physicians to guide ChatGPT OHI users on an informed and examined use of the technology.
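The user-versus-nonuser comparisons in this study rely on Pearson chi-square tests of 2×2 contingency tables. A stdlib-only sketch of the statistic follows; the illustrative cell counts are reconstructed from the reported percentages (28.2% of 517 users vs. 22.8% of 1889 nonusers seeking OHI weekly or more) and are not the study's raw data:

```python
def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]].
    Returns the test statistic only; a p-value would additionally require
    the chi-square distribution with 1 degree of freedom."""
    n = a + b + c + d
    observed = [a, b, c, d]
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Weekly-or-more OHI seekers vs. not, among ChatGPT users (n=517) and
# nonusers (n=1889); cell counts reconstructed from the reported
# percentages (28.2% vs. 22.8%), illustrative only.
stat = chi_square_2x2(146, 371, 431, 1458)
print(round(stat, 2))  # exceeds 3.84, the 5% critical value at 1 df
```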


Subject(s)
Artificial Intelligence , Humans , Cross-Sectional Studies , United States , Male , Female , Adult , Surveys and Questionnaires , Middle Aged , Aged , Young Adult , Information Seeking Behavior
5.
Orthop J Sports Med ; 12(7): 23259671241257516, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39139744

ABSTRACT

Background: The consumer availability and automated response functions of Chat Generative Pre-trained Transformer, version 4 (ChatGPT-4), a large language model, position this application to be used for patient health queries, and it may serve as an adjunct to minimize administrative and clinical burden. Purpose: To evaluate the ability of ChatGPT-4 to respond to patient inquiries concerning ulnar collateral ligament (UCL) injuries and compare these results with the performance of Google. Study Design: Cross-sectional study. Methods: Google Web Search was used as a benchmark, as it is the most widely used search engine worldwide and the only search engine that generates frequently asked questions (FAQs) when prompted with a query, allowing comparisons through a systematic approach. The query "ulnar collateral ligament reconstruction" was entered into Google, and the top 10 FAQs, answers, and their sources were recorded. ChatGPT-4 was prompted to perform a Google search of FAQs with the same query and to record the sources of answers for comparison. This process was then replicated to obtain 10 new questions requiring numeric instead of open-ended responses. Finally, responses were graded independently for clinical accuracy (grade 0 = inaccurate, grade 1 = somewhat accurate, grade 2 = accurate) by 2 fellowship-trained sports medicine surgeons (D.W.A., J.S.D.) blinded to the search engine and answer source. Results: ChatGPT-4 used a greater proportion of academic sources than Google to provide answers to the top 10 FAQs, although this was not statistically significant (90% vs 50%; P = .14). In terms of question overlap, 40% of the most common questions on Google and ChatGPT-4 were the same. When comparing FAQs with numeric responses, 20% of answers were completely overlapping, 30% demonstrated partial overlap, and the remaining 50% did not demonstrate any overlap.
All sources used by ChatGPT-4 to answer these FAQs were academic, while only 20% of sources used by Google were academic (P = .0007). The remaining Google sources included social media (40%), medical practices (20%), single-surgeon websites (10%), and commercial websites (10%). The mean (± standard deviation) accuracy for answers given by ChatGPT-4 was significantly greater compared with Google for the top 10 FAQs (1.9 ± 0.2 vs 1.2 ± 0.6; P = .001) and top 10 questions with numeric answers (1.8 ± 0.4 vs 1 ± 0.8; P = .013). Conclusion: ChatGPT-4 is capable of providing responses with clinically relevant content concerning UCL injuries and reconstruction. ChatGPT-4 utilized a greater proportion of academic websites to provide responses to FAQs representative of patient inquiries compared with Google Web Search and provided significantly more accurate answers. Moving forward, ChatGPT has the potential to be used as a clinical adjunct when answering queries about UCL injuries and reconstruction, but further validation is warranted before integrated or autonomous use in clinical settings.

6.
Digit Health ; 10: 20552076241269538, 2024.
Article in English | MEDLINE | ID: mdl-39148811

ABSTRACT

Objectives: To assess the quality and alignment of ChatGPT's cancer treatment recommendations (RECs) with National Comprehensive Cancer Network (NCCN) guidelines and expert opinions. Methods: Three urologists performed quantitative and qualitative assessments in October 2023, analyzing responses from ChatGPT-4 and ChatGPT-3.5 to 108 prostate, kidney, and bladder cancer prompts using two zero-shot prompt templates. Performance evaluation involved five measures: the ratios of expert-approved, expert-disagreed, and NCCN-aligned RECs to total ChatGPT RECs, plus coverage of and adherence to the NCCN guidelines. Experts rated the responses' quality on a 1-5 scale considering correctness, comprehensiveness, specificity, and appropriateness. Results: ChatGPT-4 outperformed ChatGPT-3.5 in prostate cancer inquiries, with an average word count of 317.3 versus 124.4 (p < 0.001) and 6.1 versus 3.9 RECs (p < 0.001). Its rater-approved REC ratio (96.1% vs. 89.4%) and alignment with NCCN guidelines (76.8% vs. 49.1%, p = 0.001) were superior, and it scored significantly better on all quality dimensions. Across 108 prompts covering three cancers, ChatGPT-4 produced an average of 6.0 RECs per case, with an 88.5% approval rate from raters, 86.7% NCCN concordance, and only a 9.5% disagreement rate. It achieved high marks in correctness (4.5), comprehensiveness (4.4), specificity (4.0), and appropriateness (4.4). Subgroup analyses across cancer types, disease statuses, and different prompt templates are reported. Conclusions: ChatGPT-4 demonstrated significant improvement in providing accurate and detailed treatment recommendations for urological cancers in line with clinical guidelines and expert opinion. However, it is vital to recognize that AI tools are not without flaws and should be utilized with caution. ChatGPT could supplement, but not replace, personalized advice from healthcare professionals.

7.
Angle Orthod ; 2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39194996

ABSTRACT

OBJECTIVES: To evaluate the reliability of information produced by the artificial intelligence-based program ChatGPT in terms of accuracy and relevance, as assessed by orthodontists, dental students, and individuals seeking orthodontic treatment. MATERIALS AND METHODS: Frequently asked questions in four basic areas of orthodontics were prepared and posed to ChatGPT (Version 4.0), and the answers were evaluated by three different groups (senior dental students, individuals seeking orthodontic treatment, and orthodontists). The questions covered four basic areas of orthodontics: clear aligners (CA), lingual orthodontics (LO), esthetic braces (EB), and temporomandibular disorders (TMD). The answers were evaluated with the Global Quality Scale (GQS) and the Quality Criteria for Consumer Health Information (DISCERN) scale. RESULTS: The total mean DISCERN score for answers on CA was 51.7 ± 9.38 for students, 57.2 ± 10.73 for patients, and 47.4 ± 4.78 for orthodontists (P = .001). GQS scores for LO were compared among groups: students (3.53 ± 0.78), patients (4.40 ± 0.72), and orthodontists (3.63 ± 0.72) (P < .001). In the intergroup comparison of ChatGPT evaluations about TMD on the DISCERN scale, the highest value was given by the patient group (57.83 ± 11.47) and the lowest by the orthodontist group (45.90 ± 11.84). For information quality about EB, GQS scores were >3 in all three groups (students: 3.50 ± 0.78; patients: 4.17 ± 0.87; orthodontists: 3.50 ± 0.82). CONCLUSIONS: ChatGPT has significant potential for patient information and education in the field of orthodontics if it is developed further and the necessary updates are made.

8.
Front Psychol ; 15: 1378854, 2024.
Article in English | MEDLINE | ID: mdl-38962233

ABSTRACT

Background: The provision of audio recordings of their own medical encounters to patients, termed consultation recordings, has demonstrated promising benefits, particularly in addressing information needs of cancer patients. While this intervention has been explored globally, there is limited research specific to Germany. This study investigates the attitudes and experiences of cancer patients in Germany toward consultation recordings. Methods: We conducted a nationwide cross-sectional quantitative online survey, informed by semi-structured interviews with cancer patients. The survey assessed participants' attitudes, experiences and desire for consultation recordings in the future. The data was analyzed using descriptive statistics and subgroup analyses. Results: A total of 287 adult cancer patients participated. An overwhelming majority (92%) expressed a (very) positive attitude. Overall, participants strongly endorsed the anticipated benefits of the intervention, such as improved recall and enhanced understanding. Some participants expressed concerns that physicians might feel pressured and could become more reserved in their interactions with the use of such recordings. While a small proportion (5%) had prior experience with audio recording medical encounters, the majority (92%) expressed interest in having consultation recordings in the future. Discussion: We observed positive attitudes of cancer patients in Germany toward consultation recordings, paralleling international research findings. Despite limited experiences, participants acknowledged the potential benefits of the intervention, particularly related to recalling and comprehending information from medical encounters. Our findings suggest that the potential of the intervention is currently underutilized in German cancer care. 
While acknowledging the possibility of a positive bias in our results, we conclude that this study represents an initial exploration of the intervention's potential within the German cancer care context, laying the groundwork for its further evaluation.

9.
BMC Oral Health ; 24(1): 798, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39010000

ABSTRACT

BACKGROUND: The aim of this study was to evaluate the content and quality of videos about bruxism treatments on YouTube, a platform frequently used by patients today to obtain information. METHODS: A YouTube search was performed using the keywords "bruxism treatment" and "teeth grinding treatment". "The sort by relevance" filter was used for both search terms and the first 150 videos were saved. A total of 139 videos that met the study criteria were included in the study. Videos were classified as poor, moderate or excellent based on a usefulness score that evaluated content quality. The modified DISCERN tool was also used to evaluate video quality. Additionally, videos were categorized according to the upload source, target audience and video type. The types of treatments mentioned in the videos and the demographic data of the videos were recorded. RESULTS: According to the usefulness score, 59% of the videos were poor-quality, 36.7% were moderate-quality and 4.3% were excellent-quality. Moderate-quality videos had a higher interaction index than excellent-quality videos (p = 0.039). The video duration of excellent-quality videos was longer than that of moderate and poor-quality videos (p = 0.024, p = 0.002). Videos with poor-quality content were found to have significantly lower DISCERN scores than videos with moderate (p < 0.001) and excellent-quality content (p = 0.008). Additionally, there was a significantly positive and moderate (r = 0.446) relationship between DISCERN scores and content usefulness scores (p < 0.001). There was only a weak positive correlation between DISCERN scores and video length (r = 0.359; p < 0.001). The videos uploaded by physiotherapists had significantly higher views per day and viewing rate than videos uploaded by medical doctors (p = 0.037), university-hospital-institute (p = 0.024) and dentists (p = 0.006). 
The videos uploaded by physiotherapists also had notably higher numbers of likes and comments than videos uploaded by medical doctors (p = 0.023 and p = 0.009, respectively), university-hospital-institutes (p = 0.003 and p = 0.008, respectively), and dentists (p = 0.002 and p = 0.002, respectively). CONCLUSIONS: Although the majority of videos on YouTube about bruxism treatments are produced by professionals, most contain limited information, which may lead patients to question treatment methods. Health professionals should warn patients about this potentially misleading content and direct them to reliable sources.
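The correlations reported in this study (e.g., r = 0.446 between DISCERN scores and content usefulness scores) are rank correlations. A minimal stdlib sketch of Spearman's rho follows, assuming no tied values; real analyses assign average ranks to ties:

```python
def ranks(values):
    """Rank values from 1..n (assumes no ties; ties would need average ranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman_rho(x, y):
    """Spearman correlation: the Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Perfectly monotone score pairs give rho = 1.0
print(spearman_rho([10, 20, 30, 40], [1, 3, 7, 9]))
```

Because rho depends only on ranks, it captures the monotone DISCERN-usefulness relationship without assuming either scale is interval-valued.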


Subject(s)
Bruxism , Social Media , Video Recording , Humans , Bruxism/therapy , Reproducibility of Results
10.
World J Urol ; 42(1): 455, 2024 Jul 29.
Article in English | MEDLINE | ID: mdl-39073590

ABSTRACT

PURPOSE: Large language models (LLMs) are a form of artificial intelligence (AI) that uses deep learning techniques to understand, summarize, and generate content. The potential benefits of LLMs in healthcare are predicted to be immense. The objective of this study was to examine the quality of patient information leaflets (PILs) produced by 3 LLMs on urological topics. METHODS: Prompts were created to generate PILs from 3 LLMs: ChatGPT-4, PaLM 2 (Google Bard), and Llama 2 (Meta), across four urology topics (circumcision, nephrectomy, overactive bladder syndrome, and transurethral resection of the prostate). PILs were evaluated using a quality assessment checklist. PIL readability was assessed with the Average Reading Level Consensus Calculator. RESULTS: PILs generated by PaLM 2 had the highest overall average quality score (3.58), followed by Llama 2 (3.34) and ChatGPT-4 (3.08). PaLM 2-generated PILs were of the highest quality for all topics except TURP, and PaLM 2 was the only LLM to include images. Medical inaccuracies were present in all generated content, including instances of significant error. Readability analysis identified PaLM 2-generated PILs as the simplest (age 14-15 average reading level); Llama 2 PILs were the most difficult (age 16-17 average). CONCLUSION: While LLMs can generate PILs that may help reduce healthcare professional workload, generated content requires clinician input for accuracy and for the inclusion of health literacy aids, such as images. LLM-generated PILs were above the average reading level for adults, necessitating improvement in LLM algorithms and/or prompt design. How satisfied patients are with LLM-generated PILs remains to be evaluated.


Subject(s)
Artificial Intelligence , Urology , Humans , Patient Education as Topic/methods , Language , Urologic Diseases/surgery
11.
Neuroophthalmology ; 48(4): 257-266, 2024.
Article in English | MEDLINE | ID: mdl-38933748

ABSTRACT

Most cases of optic neuritis (ON) occur in women and in patients between the ages of 15 and 45 years, a key demographic of individuals who seek health information on the internet. As clinical providers strive to ensure patients have accessible information to understand their condition, assessing the standard of online resources is essential. This study assessed the quality, content, accountability, and readability of freely available online information about optic neuritis. This cross-sectional study analyzed 11 freely available medical sites with information on optic neuritis and used PubMed as a gold standard for comparison. Twelve questions were composed to cover the information most relevant to patients, and each website was independently examined by four neuro-ophthalmologists. Readability was analyzed using an online readability tool. The Journal of the American Medical Association (JAMA) benchmarks, four criteria designed to assess the quality of health information, were used to evaluate the accountability of each website. On average, websites scored 27.98 (SD ± 9.93, 95% CI 24.96-31.00) of 48 potential points (58.3%) on the twelve questions. There were significant differences in the comprehensiveness and accuracy of content across websites (p < .001). The mean reading grade level of the websites was 11.90 (SD ± 2.52, 95% CI 8.83-15.25). No website achieved all four JAMA benchmarks. Interobserver reliability was robust between three of the four neuro-ophthalmologist (NO) reviewers (ρ = 0.77 between NO3 and NO2, ρ = 0.91 between NO3 and NO1, ρ = 0.74 between NO2 and NO1; all p < .05). The quality of freely available online information detailing optic neuritis varies by source, with significant room for improvement. The material presented is difficult to interpret and exceeds the recommended reading level for health information.
Most websites reviewed did not provide comprehensive information regarding non-therapeutic aspects of the disease. Ophthalmology organizations should be encouraged to create content that is more accessible to the general public.

12.
Br J Hosp Med (Lond) ; 85(6): 1-9, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38941972

ABSTRACT

Aims/Background Seroma formation is the most common complication following breast surgery. However, there is little evidence on the readability of online patient education materials on this issue. This study aimed to assess the accessibility and readability of the relevant online information. Methods This systematic review of the literature identified 37 relevant websites for further analysis. The readability of each online article was assessed using a range of readability formulae. Results The average Flesch Reading Ease score for all patient education materials was 53.9 (± 21.9), and the average Flesch-Kincaid reading grade level was 7.32 (± 3.1), suggesting that they were 'fairly difficult' to read and written above the recommended reading level. Conclusion Online patient education materials regarding post-surgery breast seroma are at a higher-than-recommended reading grade level for the public. Improvement would allow all patients, regardless of literacy level, to access such resources to aid decision-making around undergoing breast surgery.


Subject(s)
Comprehension , Health Literacy , Internet , Patient Education as Topic , Seroma , Humans , Seroma/etiology , Patient Education as Topic/methods , Female , Postoperative Complications , Breast Diseases/surgery , Mastectomy/adverse effects , Consumer Health Information/standards
13.
Article in English | MEDLINE | ID: mdl-38928992

ABSTRACT

INTRODUCTION: Podcasts have emerged as a promising tool in patient preparation for hospital visits. However, the nuanced experiences of patients who engage with this medium remain underexplored. OBJECTIVES: This study explored patients' experiences of receiving information by way of podcasts prior to their hospital visits. METHODS: Semi-structured interviews were conducted with patients with suspected chronic obstructive pulmonary disease (COPD), lung cancer, or sleep apnea. The method of data analysis chosen was thematic analysis. RESULTS: Based on data from 24 interviews, five key themes were identified: technical challenges in utilization of podcasts; individual preferences for information prior to hospital visits; building trust and reducing anxiety through podcasts; the role of podcasts as an accessible and convenient source of information; and enhancement of engagement and empowerment through podcasts. Additionally, the study highlighted the critical importance of tailoring podcasts' content to individual preferences to optimize the delivery of healthcare information. CONCLUSIONS: Podcasts can serve as a meaningful supplement to traditional information sources for patients. However, it is important to recognize that not all patients may be able to engage with this medium effectively due to technical challenges or personal preferences.


Subject(s)
Pulmonary Disease, Chronic Obstructive , Humans , Male , Female , Middle Aged , Aged , Adult , Webcasts as Topic , Lung Neoplasms , Interviews as Topic , Aged, 80 and over
14.
Article in English | MEDLINE | ID: mdl-38888980

ABSTRACT

AIM: To explore the knowledge and unmet informational needs of candidates for left ventricular assist devices (LVADs), as well as of patients, caregivers, and family members, by analyzing social media data from the MyLVAD.com website. METHODS AND RESULTS: A qualitative content analysis method was employed, systematically examining and categorizing forum posts and comments published on the MyLVAD.com website from March 2015 to February 2023. The data was collected using an automated script to retrieve threads from MyLVAD.com, focusing on genuine questions reflecting information and knowledge gaps. The study received approval from an ethics committee. The research team developed and continuously updated categorization matrices to organize information into categories and subcategories systematically. From 856 posts and comments analyzed, 435 contained questions representing informational needs, of which six main categories were identified: clothing, complications/adverse effects, LVAD pros and cons, self-care, therapy, and recent LVAD implantation. The self-care category, which includes managing the driveline site and understanding equipment functionality, was the most prominent, reflecting nearly half of the questions. Other significant areas of inquiry included complications/adverse effects and the pros and cons of LVAD. CONCLUSION: The analysis of social media data from MyLVAD.com reveals significant unmet informational needs among LVAD candidates, patients, and their support networks. Unlike traditional data, this social media-based research provides an unbiased view of patient conversations, offering valuable insights into their real-world concerns and knowledge gaps. The findings underscore the importance of tailored educational resources to address these unmet needs, potentially enhancing LVAD patient care.

15.
JMIR Form Res ; 8: e50087, 2024 Jun 06.
Article in English | MEDLINE | ID: mdl-38843520

ABSTRACT

BACKGROUND: With the global increase in cesarean deliveries, breech presentation is the third most common indication for elective cesarean delivery. Implementation of external cephalic version (ECV), in which the position of the baby is manipulated externally to prevent breech presentation at term, remains suboptimal. Increasing the knowledge of caretakers and patients supports the uptake of ECV. In recent decades, the internet has become the most important source of information for both patients and health care professionals. However, the use and availability of the internet also bring concerns, since the information is often not regulated or reviewed. Information needs to be understandable, correct, and easily obtainable for the patient. Owing to its global reach, YouTube has great potential to both hinder and support the spread of medical information and can therefore be used as a tool for shared decision-making. OBJECTIVE: The objective of this study was to investigate the available information on YouTube about ECV and assess the quality and usefulness of the information in the videos. METHODS: A YouTube search was performed with five search terms, and the first 35 results were selected for analysis. A quality assessment scale was developed to quantify the accuracy of the medical information in each video. The main outcome measure was the usefulness score, dividing the videos into useful, slightly useful, and not useful categories. The source of upload was divided into five subcategories and two broad categories (medical or nonmedical). Secondary outcomes included audience engagement, misinformation, and encouraging or discouraging ECV. RESULTS: Among the 70 videos, only 14% (n=10) were defined as useful. Every useful video was uploaded by an educational channel or health care professional, and 80% (8/10) were derived from a medical source. Over half of the not useful videos were uploaded by birth attendants and vloggers. Videos uploaded by birth attendants scored the highest on audience engagement. The presence of misinformation was low across all groups. Two-thirds of the vloggers encouraged ECV to their viewers. CONCLUSIONS: Only a minor percentage of videos about ECV on YouTube are considered useful. Vloggers often encourage their audience to opt for ECV. Videos with higher audience engagement had lower usefulness scores than videos with lower audience engagement. Creators of medically accurate videos should cooperate with sources that have high audience engagement to support the uptake of ECV by creating more awareness and a positive attitude toward the procedure, thereby lowering the chance of a cesarean delivery due to breech presentation at term.

16.
JMIR Ment Health ; 11: e58129, 2024 Aug 02.
Article in English | MEDLINE | ID: mdl-38876484

ABSTRACT

BACKGROUND: Due to recent advances in artificial intelligence, large language models (LLMs) have emerged as a powerful tool for a variety of language-related tasks, including sentiment analysis, and summarization of provider-patient interactions. However, there is limited research on these models in the area of crisis prediction. OBJECTIVE: This study aimed to evaluate the performance of LLMs, specifically OpenAI's generative pretrained transformer 4 (GPT-4), in predicting current and future mental health crisis episodes using patient-provided information at intake among users of a national telemental health platform. METHODS: Deidentified patient-provided data were pulled from specific intake questions of the Brightside telehealth platform, including the chief complaint, for 140 patients who indicated suicidal ideation (SI), and another 120 patients who later indicated SI with a plan during the course of treatment. Similar data were pulled for 200 randomly selected patients, treated during the same time period, who never endorsed SI. In total, 6 senior Brightside clinicians (3 psychologists and 3 psychiatrists) were shown patients' self-reported chief complaint and self-reported suicide attempt history but were blinded to the future course of treatment and other reported symptoms, including SI. They were asked a simple yes or no question regarding their prediction of endorsement of SI with plan, along with their confidence level about the prediction. GPT-4 was provided with similar information and asked to answer the same questions, enabling us to directly compare the performance of artificial intelligence and clinicians. RESULTS: Overall, the clinicians' average precision (0.7) was higher than that of GPT-4 (0.6) in identifying the SI with plan at intake (n=140) versus no SI (n=200) when using the chief complaint alone, while sensitivity was higher for the GPT-4 (0.62) than the clinicians' average (0.53). 
The addition of suicide attempt history increased the clinicians' average sensitivity (0.59) and precision (0.77) while increasing the GPT-4 sensitivity (0.59) but decreasing the GPT-4 precision (0.54). Performance decreased comparatively when predicting future SI with plan (n=120) versus no SI (n=200) with a chief complaint only for the clinicians (average sensitivity=0.4; average precision=0.59) and the GPT-4 (sensitivity=0.46; precision=0.48). The addition of suicide attempt history increased performance comparatively for the clinicians (average sensitivity=0.46; average precision=0.69) and the GPT-4 (sensitivity=0.74; precision=0.48). CONCLUSIONS: GPT-4, with a simple prompt design, produced results on some metrics that approached those of a trained clinician. Additional work must be done before such a model can be piloted in a clinical setting. The model should undergo safety checks for bias, given evidence that LLMs can perpetuate the biases of the underlying data on which they are trained. We believe that LLMs hold promise for augmenting the identification of higher-risk patients at intake and potentially delivering more timely care to patients.
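The precision and sensitivity figures quoted in the abstract above are the standard confusion-matrix ratios. As a reader's aid, here is a minimal sketch of how they are computed; the counts used are illustrative assumptions chosen only to reproduce the GPT-4 chief-complaint-only figures (sensitivity 0.62, precision 0.6), not the study's actual data.

```python
def sensitivity(tp, fn):
    # Sensitivity (recall): of all true "SI with plan" cases,
    # the fraction the model correctly flagged.
    return tp / (tp + fn)

def precision(tp, fp):
    # Precision: of all cases the model flagged as "SI with plan",
    # the fraction that truly were.
    return tp / (tp + fp)

# Hypothetical counts: 87 true positives out of the n=140 "SI with plan"
# group (so 53 false negatives), plus 58 false positives from the
# n=200 "no SI" group.
tp, fn, fp = 87, 53, 58
print(round(sensitivity(tp, fn), 2))  # 0.62
print(round(precision(tp, fp), 2))    # 0.6
```

Note that, as the abstract illustrates, the two metrics can move in opposite directions when the input changes (adding suicide attempt history raised GPT-4 sensitivity comparisons while lowering its precision), which is why both are reported.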


Subject(s)
Suicidal Ideation , Telemedicine , Humans , Male , Female , Adult , Middle Aged , Artificial Intelligence , Suicide, Attempted/psychology , Mental Health Teletherapy
17.
Pediatr Surg Int ; 40(1): 150, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833023

ABSTRACT

BACKGROUND: Recent data highlight the internet's pivotal role as the primary information source for patients. In this study, we emulate a patient's/caregiver's quest for online information concerning chest deformities and assess the quality of available information. METHODS: We conducted an internet search using a combination of the terms "pectus excavatum," "pectus excavatum surgery," "funnel chest," "pectus excavatum repair" and identified the first 100 relevant websites from the three most popular search engines: Google, Yahoo, and Bing. These websites were evaluated using the modified Ensuring Quality Information for Patients (EQIP) instrument. RESULTS: Of the 300 websites generated, 140 (46.7%) were included in our evaluation after elimination of duplicates, non-English websites, and those targeting medical professionals. The EQIP scores in the final sample ranged from 8 to 32/36, with a median score of 22. The largest share of the evaluated websites (32.8%) originated from hospitals, yet none met all 36 EQIP criteria. DISCUSSION: None of the evaluated websites pertaining to pectus excavatum achieved a flawless "content quality" score. The diverse array of websites potentially complicates patients' efforts to navigate toward high-quality resources. Barriers in accessing high-quality online patient information may contribute to disparities in referral, patient engagement, treatment satisfaction, and overall quality of life. LEVEL OF EVIDENCE: IV.


Subject(s)
Funnel Chest , Internet , Humans , Funnel Chest/surgery , Thoracic Wall/abnormalities , Patient Education as Topic/methods , Consumer Health Information , Information Sources
18.
Trials ; 25(1): 372, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38858790

ABSTRACT

BACKGROUND: Retaining participants in randomised controlled trials (RCTs) is challenging and trial teams are often required to use strategies to ensure retention or improve it. Other than monetary incentives, there is no requirement to disclose the use of retention strategies to the participant. Additionally, not all retention strategies are developed at the planning stage (i.e. post-funding, during protocol development); however, some protocols do include participant retention strategies, as retention is considered and planned for early in the trial planning stage. It is as yet unknown whether these plans are communicated in the corresponding participant information leaflets (PILs). The purpose of our study was to determine whether PILs communicate plans to promote participant retention and, if so, whether these are outlined in the corresponding trial protocol. METHODS: Ninety-two adult PILs and their 90 corresponding protocols from Clinical Trial Units (CTUs) in the UK were analysed. Directed (deductive) content analysis was used to analyse the participant retention text from the PILs. Data were presented using a narrative summary and frequencies where appropriate. RESULTS: Plans to promote participant retention were communicated in 81.5% (n = 75/92) of PILs. Fifty-seven percent (n = 43/75) of PILs communicated plans to use "combined strategies" to promote participant retention. The most common individual retention strategy was telling the participants that data collection for the trial would be scheduled during routine care visits (16%; n = 12/75 PILs). The importance of retention and the impact that missing or deleted data (deleting data collected prior to withdrawal) has on the ability to answer the research question were explained in 6.5% (n = 6/92) and 5.4% (n = 5/92) of PILs respectively. 
Out of the 59 PILs and 58 matching protocols that both communicated plans to use strategies to promote participant retention, 18.6% (n = 11/59) communicated the same information, the remaining 81.4% (n = 48/59) of PILs either only partially communicated (45.8%; n = 27/59) the same information or did not communicate the same information (35.6%; n = 21/59) as the protocol with regard to the retention strategy(ies). CONCLUSION: Retention strategies are frequently communicated to potential trial participants in PILs; however, the information provided often differs from the content in the corresponding protocol. Participant retention considerations are best done at the planning stage of the trial and we encourage trial teams to be consistent in the communication of these strategies in both the protocol and PIL.


Subject(s)
Pamphlets , Randomized Controlled Trials as Topic , Humans , Randomized Controlled Trials as Topic/methods , Adult , Communication , Patient Selection , Research Subjects/psychology , Patient Education as Topic/methods , Clinical Trial Protocols as Topic , Health Knowledge, Attitudes, Practice , United Kingdom , Research Design , Patient Dropouts
19.
BMC Womens Health ; 24(1): 346, 2024 Jun 15.
Article in English | MEDLINE | ID: mdl-38877503

ABSTRACT

BACKGROUND: Approximately 13% of women in the United States of reproductive age seek infertility services. Assisted reproductive technology (ART), including in vitro fertilization, is used to help patients achieve pregnancy. Many people are not familiar with these treatments prior to becoming patients and possess knowledge gaps about care. METHODS: This study employed qualitative methods to investigate how patients interact with information sources during care. Patients who underwent ART including embryo transfer between January 2017 and April 2022 at a large urban healthcare center were eligible. Semi-structured, in-depth interviews were conducted between August and October 2022. Fifteen females with an average age of 39 years participated. Reflexive thematic analysis was performed. RESULTS: Two main themes emerged. Participants (1) utilized clinic-provided information and then turned to outside sources to fill knowledge gaps; (2) struggled to learn about costs, insurance, and mental health resources to support care. Participants preferred clinic-provided resources and then utilized academic sources, the internet, and social media when they had unfulfilled information needs. Knowledge gaps related to cost, insurance, and mental health support were reported. CONCLUSION: ART clinics can consider providing more information about cost, insurance, and mental health support to patients. TRIAL REGISTRATION: The Massachusetts General Hospital Institutional Review Board approved this study (#2022P000474) and informed consent was obtained from each participant.


Subject(s)
Information Seeking Behavior , Qualitative Research , Reproductive Techniques, Assisted , Humans , Female , Adult , Reproductive Techniques, Assisted/psychology , Health Knowledge, Attitudes, Practice , Middle Aged , United States , Pregnancy
20.
J Surg Res ; 299: 205-212, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38772269

ABSTRACT

INTRODUCTION: Critical limb-threatening ischemia (CLTI) is associated with a high risk of amputation, yet patients undergoing amputation due to CLTI have little knowledge of the amputation process and the rehabilitation that awaits. The aim of the present study was to develop and validate information material for patients undergoing amputation. METHODS: Nine participants were included in the study. Two focus group interviews were performed with seven patients who had undergone lower extremity amputation due to CLTI within the past 2 y. Additionally, two individual interviews were carried out. A semistructured interview guide was used, and the interviews were transcribed verbatim and analysed using qualitative content analysis with a deductive approach. RESULTS: Three themes were identified as essential for the design of the written information: Perspectives on design and formatting, Providing information to enhance participation in care, and Accessibility to information and support. The prototyped information leaflet was perceived as acceptable, useable, relevant, and comprehensible by the participants. CONCLUSIONS: For patients to actively engage in their care, it is vital that their information needs are met and that they are provided with psychosocial support when needed. Written and oral information should be provided by a trusted healthcare professional.


Subject(s)
Amputation, Surgical , Focus Groups , Lower Extremity , Patient Education as Topic , Qualitative Research , Humans , Amputation, Surgical/psychology , Male , Female , Aged , Middle Aged , Lower Extremity/surgery , Lower Extremity/blood supply , Ischemia/etiology , Ischemia/surgery , Aged, 80 and over , Interviews as Topic , Pamphlets , Chronic Limb-Threatening Ischemia/surgery