Results 1 - 3 of 3
1.
Article in English | MEDLINE | ID: mdl-39008829

ABSTRACT

OBJECTIVE: Returning aggregate study results is an important ethical responsibility to promote trust and inform decision making, but the practice of providing results to a lay audience is not widely adopted. Barriers include the significant cost and time required to develop lay summaries and the scarce infrastructure for returning them to the public. Our study aims to generate, evaluate, and implement ChatGPT 4 lay summaries of scientific abstracts on a national clinical study recruitment platform, ResearchMatch, to facilitate timely and cost-effective return of study results at scale.

MATERIALS AND METHODS: We engineered prompts to summarize abstracts at a literacy level accessible to the public, prioritizing succinctness, clarity, and practical relevance. Researchers and volunteers assessed ChatGPT-generated lay summaries across five dimensions: accuracy, relevance, accessibility, transparency, and harmfulness. We used precision analysis and adaptive random sampling to determine the optimal number of summaries for evaluation, ensuring high statistical precision.

RESULTS: ChatGPT achieved 95.9% (95% CI, 92.1-97.9) accuracy and 96.2% (92.4-98.1) relevance across 192 summary sentences from 33 abstracts based on researcher review. Of 34 volunteers, 85.3% (69.9-93.6) perceived ChatGPT-generated summaries as more accessible, and 73.5% (56.9-85.4) as more transparent, than the original abstract. None of the summaries were deemed harmful. We expanded ResearchMatch's technical infrastructure to automatically generate and display lay summaries for over 750 published studies that resulted from the platform's recruitment mechanism.

DISCUSSION AND CONCLUSION: Implementing AI-generated lay summaries on ResearchMatch demonstrates the potential of a scalable framework, generalizable to broader platforms, for enhancing research accessibility and transparency.
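The abstract reports proportions with 95% confidence intervals (e.g., 95.9% accuracy, CI 92.1-97.9, over 192 sentences), a shape consistent with a Wilson score interval for a binomial proportion. A minimal sketch of that calculation, assuming 184 of 192 sentences were rated accurate — a count inferred from the rounded percentage, not stated in the abstract — and not claiming this is the exact method the authors used:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - margin, center + margin

# Hypothetical count: ~95.9% of 192 summary sentences rated accurate
lo, hi = wilson_ci(184, 192)
print(f"{184/192:.1%} (95% CI, {lo:.1%}-{hi:.1%})")
```

With these assumed counts the interval lands close to the reported 92.1-97.9 range; small differences are expected from rounding and from whatever interval method the study actually applied.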

2.
medRxiv; 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38562678

ABSTRACT

Suicide prevention requires risk identification, appropriate intervention, and follow-up. Traditional risk identification relies on patient self-reporting, support network reporting, or face-to-face screening with validated instruments or a history and physical exam. In the last decade, statistical risk models have been studied and, more recently, deployed to augment clinical judgment. Models have generally been found to have low precision or to be problematic at scale due to low incidence. Few have been tested in clinical practice, and none, to our knowledge, have been tested in clinical trials.

Methods: We report the results of a pragmatic randomized controlled trial (RCT) in three outpatient adult Neurology clinic settings. This two-arm trial compared the effectiveness of Interruptive and Non-Interruptive Clinical Decision Support (CDS) in prompting further screening of suicidal ideation for patients predicted to be high risk by a real-time, validated statistical model of suicide attempt risk, with the decision to screen as the primary end point. Secondary outcomes included rates of suicidal ideation and attempts in both arms. Manual chart review of every trial encounter was used to determine whether a suicide risk assessment was subsequently documented.

Results: From August 16, 2022, through February 16, 2023, our study randomized 596 patient encounters across 561 patients for providers to receive either Interruptive or Non-Interruptive CDS in a 1:1 ratio. Adjusting for provider cluster effects, Interruptive CDS led to significantly more decisions to screen (42% = 121/289 encounters) than Non-Interruptive CDS (4% = 12/307) (odds ratio = 17.7, p-value < 0.001). Secondarily, no documented episodes of suicidal ideation or attempts occurred in either arm. While the proportion of documented assessments among those noting the decision to screen was higher in the Non-Interruptive arm (92% = 11/12) than in the Interruptive arm (52% = 63/121), Interruptive CDS was associated with more frequent documentation of suicide risk assessment overall (63/289 encounters compared to 11/307, p-value < 0.001).

Conclusions: In this pragmatic RCT of real-time predictive CDS to guide suicide risk assessment, Interruptive CDS led to more decisions to screen and more documented suicide risk assessments. Well-powered, large-scale trials randomizing this type of CDS against standard of care are indicated to measure effectiveness in reducing suicidal self-harm. ClinicalTrials.gov Identifier: NCT05312437.
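The primary result (121/289 vs 12/307 decisions to screen) can be checked against the reported odds ratio of 17.7 directly from the 2x2 table. A minimal sketch; the crude odds ratio happens to land near the cluster-adjusted value the abstract reports, and the Wald interval here is an assumption, not the method the trial used:

```python
import math

# 2x2 table from the abstract: decision to screen, by CDS arm
a, b = 121, 289 - 121   # Interruptive arm: screened, not screened
c, d = 12, 307 - 12     # Non-Interruptive arm: screened, not screened

odds_ratio = (a * d) / (b * c)

# Wald 95% CI on the log odds ratio (interval method assumed for illustration)
se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(f"crude OR = {odds_ratio:.1f} (95% CI, {lo:.1f}-{hi:.1f})")
```

Note the published 17.7 is adjusted for provider cluster effects, so agreement with the crude table value is approximate rather than guaranteed.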

3.
JMIR AI; 2: e52888, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-38875540

ABSTRACT

BACKGROUND: Artificial intelligence (AI) and machine learning (ML) technologies continue to be designed and developed rapidly, despite major limitations in the field's current capacity, as a practice and discipline, to address sociohumanitarian issues and complexities. From these limitations emerges an imperative to strengthen AI and ML literacy in underserved communities and to build a more diverse AI and ML design and development workforce engaged in health research.

OBJECTIVE: AI and ML have the potential to account for and assess a variety of factors that contribute to health and disease and to improve prevention, diagnosis, and therapy. Here, we describe recent activities within the Artificial Intelligence/Machine Learning Consortium to Advance Health Equity and Researcher Diversity (AIM-AHEAD) Ethics and Equity Workgroup (EEWG) that led to deliverables that will help put ethics and fairness at the forefront of AI and ML applications to build equity in biomedical research, education, and health care.

METHODS: The AIM-AHEAD EEWG was created in 2021, with 3 cochairs and 51 members in year 1 and 2 cochairs and ~40 members in year 2. Members in both years included AIM-AHEAD principal investigators, coinvestigators, leadership fellows, and research fellows. The EEWG used a modified Delphi approach, with polling, ranking, and other exercises, to facilitate discussions around the tangible steps, key terms, and definitions needed to ensure that ethics and fairness are at the forefront of AI and ML applications to build equity in biomedical research, education, and health care.

RESULTS: The EEWG developed a set of ethics and equity principles, a glossary, and an interview guide. The ethics and equity principles comprise 5 core principles, each with subparts, which articulate best practices for working with stakeholders from historically and presently underrepresented communities. The glossary contains 12 terms and definitions, with particular emphasis on the optimal development, refinement, and implementation of AI and ML in health equity research. To accompany the glossary, the EEWG developed a concept relationship diagram that describes the logical flow of, and relationships among, the definitional concepts. Lastly, the interview guide provides questions that can be used or adapted to garner stakeholder and community perspectives on the principles and glossary.

CONCLUSIONS: Ongoing engagement around our principles and glossary is needed to identify and anticipate potential limitations in their use in AI and ML research settings, especially at institutions with limited resources. This requires time, careful consideration, and honest discussion of what makes an engagement incentive meaningful enough to support and sustain full engagement. By slowing down to meet historically and presently underresourced institutions and communities where they are, and where they are capable of engaging and competing, there is higher potential to achieve the needed diversity, ethics, and equity in AI and ML implementation in health research.
