Results 1 - 20 of 21
1.
J Biomed Inform ; 156: 104663, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38838949

ABSTRACT

OBJECTIVE: This study aims to investigate the association between social determinants of health (SDoH) and clinical research recruitment outcomes and recommends evidence-based strategies to enhance equity. MATERIALS AND METHODS: Data were collected from the internal clinical study manager database, clinical data warehouse, and clinical research registry. Study characteristics (e.g., study phase) and sociodemographic information were extracted. Median neighborhood income, distance from the study location, and Area Deprivation Index (ADI) were calculated. Mixed-effect generalized regression was used to account for clustering effects, with false discovery rate adjustment for multiple testing. A stratified analysis was performed to examine the impact in distinct medical departments. RESULTS: The study sample consisted of 3,962 individuals with a mean age of 61.5 years; 53.6 % were male, 54.2 % White, and 49.1 % non-Hispanic or Latino. Study characteristics revealed a variety of protocols across different departments, with cardiology having the highest percentage of participants (46.4 %). Industry funding was the most common (74.5 %), and digital advertising and personal outreach were the main recruitment methods (58.9 % and 90.8 %, respectively). DISCUSSION: The analysis demonstrated significant associations between participant characteristics and research participation, including biological sex, age, ethnicity, and language. The stratified analysis revealed other significant associations for recruitment strategies. SDoH are crucial to clinical research recruitment, and this study presents evidence-based solutions for equity and inclusivity. Researchers can tailor recruitment strategies to overcome barriers and increase participant diversity by identifying participant characteristics and research involvement status. CONCLUSION: The findings highlight the relevance of clinical research inequities and equitable representation of historically underrepresented populations. We need to improve recruitment strategies to promote diversity and inclusivity in research.

2.
AIDS Behav ; 2024 May 04.
Article in English | MEDLINE | ID: mdl-38703337

ABSTRACT

Effective recruitment strategies are pivotal to the success of informatics-based intervention trials, particularly for people living with HIV (PLWH), where engagement can be challenging. Although informatics interventions are recognized for improving health outcomes, the effectiveness of their recruitment strategies remains unclear. We investigated the application of a social marketing framework in navigating the nuances of recruitment for informatics-based intervention trials for PLWH by examining participant experiences and perceptions. We used qualitative descriptive methodology to conduct semi-structured interviews with 90 research participants from four informatics-based intervention trials. Directed inductive and deductive content analyses were guided by Howcutt et al.'s social marketing framework on applying the decision-making process to research recruitment. The majority of participants were male (86.7%), living in the Northeast United States (56%), and identified as Black (32%) or White (32%). Most participants (60%) completed the interview remotely. Sixteen subthemes emerged from five themes: motivation, perception, attitude formation, integration, and learning. Findings from our interview data suggest that concepts from Howcutt et al.'s framework informed participants' decisions to participate in an informatics-based intervention trial. We found that perceptions of trust in the research process were integral to participants across the four trials. However, the recruitment approach and communication medium preferences varied between older and younger age groups. A social marketing framework can provide insight into improving the research recruitment process. Future work should delve into the complex interplay between the type of informatics-based interventions, trust in the research process, and communication preferences, and how these factors collectively influence participants' willingness to engage.

3.
J Biomed Inform ; 154: 104649, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38697494

ABSTRACT

OBJECTIVE: Automated identification of eligible patients is a bottleneck of clinical research. We propose Criteria2Query (C2Q) 3.0, a system that leverages GPT-4 for the semi-automatic transformation of clinical trial eligibility criteria text into executable clinical database queries. MATERIALS AND METHODS: C2Q 3.0 integrated three GPT-4 prompts for concept extraction, SQL query generation, and reasoning. Each prompt was designed and evaluated separately. The concept extraction prompt was benchmarked against manual annotations from 20 clinical trials by two evaluators, who later also measured SQL generation accuracy and identified errors in GPT-generated SQL queries from 5 clinical trials. The reasoning prompt was assessed by three evaluators on four metrics: readability, correctness, coherence, and usefulness, using corrected SQL queries and an open-ended feedback questionnaire. RESULTS: Out of 518 concepts from 20 clinical trials, GPT-4 achieved an F1-score of 0.891 in concept extraction. For SQL generation, 29 errors spanning seven categories were detected, with logic errors being the most common (n = 10; 34.48 %). Reasoning evaluations yielded a high coherence rating (mean 4.70) but relatively lower readability (mean 3.95). Mean scores for correctness and usefulness were 3.97 and 4.37, respectively. CONCLUSION: GPT-4 significantly improves the accuracy of extracting clinical trial eligibility criteria concepts in C2Q 3.0. Continued research is warranted to ensure the reliability of large language models.


Subject(s)
Clinical Trials as Topic , Humans , Natural Language Processing , Software , Patient Selection
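The extract-then-query pipeline this abstract describes can be illustrated with a minimal sketch: compose a concept-extraction prompt for an LLM, then render one extracted concept as a SQL condition. The prompt wording, table, and column names below are invented for illustration (loosely OMOP-style) and are not the actual C2Q 3.0 prompts or schema.

```python
# Minimal sketch of (1) a concept-extraction prompt and (2) concept-to-SQL
# rendering, in the spirit of Criteria2Query 3.0. All names are illustrative.

def build_extraction_prompt(criterion: str) -> str:
    """Compose a prompt asking an LLM to list clinical concepts in a criterion."""
    return (
        "Extract every clinical concept (condition, drug, lab, procedure) from "
        "the eligibility criterion below. Return one concept per line as "
        "name|domain.\n"
        f"Criterion: {criterion}"
    )

def concept_to_sql(name: str, domain: str) -> str:
    """Render one extracted concept as a query on a hypothetical OMOP-style table."""
    table = {"condition": "condition_occurrence", "drug": "drug_exposure"}[domain]
    return f"SELECT person_id FROM {table} WHERE concept_name = '{name}'"

prompt = build_extraction_prompt("History of type 2 diabetes treated with metformin")
sql = concept_to_sql("type 2 diabetes", "condition")
```

In the real system, the LLM's line-per-concept output would be parsed and each concept mapped to standard terminology before query generation.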
4.
JAMIA Open ; 7(1): ooae021, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38455840

ABSTRACT

Objective: To automate scientific claim verification using PubMed abstracts. Materials and Methods: We developed CliVER, an end-to-end scientific Claim VERification system that leverages retrieval-augmented techniques to automatically retrieve relevant clinical trial abstracts, extract pertinent sentences, and use the PICO framework to support or refute a scientific claim. We also created an ensemble of three state-of-the-art deep learning models to classify rationales as support, refute, or neutral. We then constructed CoVERt, a new COVID VERification dataset comprising 15 PICO-encoded drug claims accompanied by 96 manually selected and labeled clinical trial abstracts that either support or refute each claim. We used CoVERt and SciFact (a public scientific claim verification dataset) to assess CliVER's performance in predicting labels. Finally, we compared CliVER to clinicians in the verification of 19 claims from 6 disease domains, using 189 648 PubMed abstracts extracted from January 2010 to October 2021. Results: In the evaluation of label prediction accuracy on CoVERt, CliVER achieved a notable F1 score of 0.92, highlighting the efficacy of the retrieval-augmented models. The ensemble model outperformed each individual state-of-the-art model by an absolute 3% to 11% in F1 score. Moreover, when compared with four clinicians, CliVER achieved a precision of 79.0% for abstract retrieval, 67.4% for sentence selection, and 63.2% for label prediction. Conclusion: CliVER demonstrates its early potential to automate scientific claim verification using retrieval-augmented strategies to harness the wealth of clinical trial abstracts in PubMed. Future studies are warranted to further test its clinical utility.
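The retrieval step of a claim-verification pipeline like the one described above can be sketched with a toy lexical ranker: score each abstract by token overlap with the claim and sort. CliVER itself uses stronger retrieval-augmented models; the claim and abstract texts below are invented examples, not study data.

```python
# Toy sketch of abstract retrieval for claim verification: rank candidate
# abstracts by token overlap with the claim (a crude stand-in for the
# retrieval-augmented ranking CliVER uses).
from collections import Counter

def overlap_score(claim: str, abstract: str) -> float:
    """Fraction of the claim's tokens (with multiplicity) found in the abstract."""
    c, a = Counter(claim.lower().split()), Counter(abstract.lower().split())
    shared = sum(min(c[w], a[w]) for w in c)
    return shared / max(1, sum(c.values()))

abstracts = [
    "remdesivir shortened recovery time in hospitalized covid-19 patients",
    "statins reduce ldl cholesterol in adults",
]
claim = "remdesivir improves recovery in covid-19 patients"
ranked = sorted(abstracts, key=lambda ab: overlap_score(claim, ab), reverse=True)
```

The top-ranked abstracts would then feed the sentence-selection and label-prediction stages.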

5.
J Am Med Inform Assoc ; 31(5): 1062-1073, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38447587

ABSTRACT

BACKGROUND: Alzheimer's disease and related dementias (ADRD) affect over 55 million globally. Current clinical trials suffer from low recruitment rates, a challenge potentially addressable via natural language processing (NLP) technologies for researchers to effectively identify eligible clinical trial participants. OBJECTIVE: This study investigates the sociotechnical feasibility of NLP-driven tools for ADRD research prescreening and analyzes the effect of the tools' cognitive complexity on usability to identify cognitive support strategies. METHODS: A randomized experiment was conducted with 60 clinical research staff using three prescreening tools (Criteria2Query, Informatics for Integrating Biology and the Bedside [i2b2], and Leaf). Cognitive task analysis was employed to analyze the usability of each tool using the Health Information Technology Usability Evaluation Scale. Data analysis involved calculating descriptive statistics, interrater agreement via intraclass correlation coefficient, cognitive complexity, and Generalized Estimating Equations models. RESULTS: Leaf scored highest for usability, followed by Criteria2Query and i2b2. Cognitive complexity was found to be affected by age, computer literacy, and number of criteria, but was not significantly associated with usability. DISCUSSION: Adopting NLP for ADRD prescreening demands careful task delegation, comprehensive training, precise translation of eligibility criteria, and increased research accessibility. The study highlights the relevance of these factors in enhancing NLP-driven tools' usability and efficacy in clinical research prescreening. CONCLUSION: User-modifiable NLP-driven prescreening tools were favorably received, with system type, evaluation sequence, and user's computer literacy influencing usability more than cognitive complexity. The study emphasizes NLP's potential in improving recruitment for clinical trials, endorsing a mixed-methods approach for future system evaluation and enhancements.


Subject(s)
Alzheimer Disease , Medical Informatics , Humans , Natural Language Processing , Feasibility Studies , Eligibility Determination
6.
Appl Clin Inform ; 15(2): 306-312, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38442909

ABSTRACT

OBJECTIVES: Large language models (LLMs) such as ChatGPT, built on the Generative Pre-trained Transformer (GPT) architecture, are powerful algorithms that have been shown to produce human-like text from input data. Several potential clinical applications of this technology have been proposed and evaluated by biomedical informatics experts. However, few have surveyed health care providers for their opinions about whether the technology is fit for use. METHODS: We distributed a validated mixed-methods survey to gauge practicing clinicians' comfort with LLMs for a breadth of tasks in clinical practice, research, and education, which were selected from the literature. RESULTS: A total of 30 clinicians fully completed the survey. Of the 23 tasks, 16 were rated positively by more than 50% of the respondents. Based on our qualitative analysis, health care providers considered LLMs to have excellent synthesis skills and efficiency. However, our respondents had concerns that LLMs could generate false information and propagate training data bias. Our survey respondents were most comfortable with scenarios that allow LLMs to function in an assistive role, like a physician extender or trainee. CONCLUSION: In a mixed-methods survey of clinicians about LLM use, health care providers were encouraging of having LLMs in health care for many tasks, and especially in assistive roles. There is a need for continued human-centered development of both LLMs and artificial intelligence in general.


Subject(s)
Algorithms , Artificial Intelligence , Humans , Health Facilities , Health Personnel , Language
7.
Matern Child Health J ; 28(3): 578-586, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38147277

ABSTRACT

INTRODUCTION: Stigma and bias related to race and other minoritized statuses may underlie disparities in pregnancy and birth outcomes. One emerging method to identify bias is the study of stigmatizing language in the electronic health record. The objective of our study was to develop automated natural language processing (NLP) methods to identify two types of stigmatizing language: marginalizing language and its complement, power/privilege language, accurately and automatically in labor and birth notes. METHODS: We analyzed notes for all birthing people > 20 weeks' gestation admitted for labor and birth at two hospitals during 2017. We then employed text preprocessing techniques, specifically using TF-IDF values as inputs, and tested machine learning classification algorithms to identify stigmatizing and power/privilege language in clinical notes. The algorithms assessed included Decision Trees, Random Forest, and Support Vector Machines. Additionally, we applied a feature importance evaluation method (InfoGain) to discern words that are highly correlated with these language categories. RESULTS: For marginalizing language, Decision Trees yielded the best classification with an F-score of 0.73. For power/privilege language, Support Vector Machines performed optimally, achieving an F-score of 0.91. These results demonstrate the effectiveness of the selected machine learning methods in classifying language categories in clinical notes. CONCLUSION: We identified well-performing machine learning methods to automatically detect stigmatizing language in clinical notes. To our knowledge, this is the first study to use NLP performance metrics to evaluate the performance of machine learning methods in discerning stigmatizing language. Future studies should delve deeper into refining and evaluating NLP methods, incorporating the latest algorithms rooted in deep learning.


Subject(s)
Algorithms , Natural Language Processing , Female , Humans , Electronic Health Records , Machine Learning , Language
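The InfoGain feature-scoring step this abstract mentions, used to find words highly correlated with a language category, can be sketched directly: information gain is the drop in label entropy when notes are split by the presence of a word. The notes and labels below are fabricated toy data, not study data.

```python
# Sketch of InfoGain word scoring for labeled clinical notes:
# gain(word) = H(labels) - H(labels | word present/absent).
import math

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    counts = {l: labels.count(l) for l in set(labels)}
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def info_gain(word, notes, labels):
    """Reduction in label entropy from splitting notes on word presence."""
    with_w = [l for doc, l in zip(notes, labels) if word in doc.split()]
    without = [l for doc, l in zip(notes, labels) if word not in doc.split()]
    n = len(labels)
    cond = sum(len(part) / n * entropy(part) for part in (with_w, without) if part)
    return entropy(labels) - cond

notes = ["patient refuses advice", "pleasant and cooperative",
         "refuses medication again", "well groomed professional"]
labels = ["marginalizing", "privilege", "marginalizing", "privilege"]
```

A word like "refuses" that perfectly separates the two categories gets the maximum gain of 1 bit on this toy data; in the study, such scores ranked words as features for the TF-IDF-based classifiers.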
8.
J Clin Transl Sci ; 7(1): e199, 2023.
Article in English | MEDLINE | ID: mdl-37830010

ABSTRACT

Background: Randomized clinical trials (RCTs) are the foundation for medical advances, but participant recruitment remains a persistent barrier to their success. This retrospective data analysis aims to (1) identify clinical trial features associated with successful participant recruitment measured by accrual percentage and (2) compare the characteristics of the RCTs by assessing the most and least successful recruitment, which are indicated by varying thresholds of accrual percentage such as ≥ 90% vs ≤ 10%, ≥ 80% vs ≤ 20%, and ≥ 70% vs ≤ 30%. Methods: Data from the internal research registry at Columbia University Irving Medical Center and Aggregated Analysis of ClinicalTrials.gov were collected for 393 randomized interventional treatment studies closed to further enrollment. We compared two regularized linear regression models and six tree-based machine learning models for accrual percentage (i.e., reported accrual to date divided by the target accrual) prediction. The outperforming model and Tree SHapley Additive exPlanations were used for feature importance analysis for participant recruitment. The identified features were compared between the two subgroups. Results: CatBoost regressor outperformed the others. Key features positively associated with recruitment success, as measured by accrual percentage, include government funding and compensation. Meanwhile, cancer research and non-conventional recruitment methods (e.g., websites) are negatively associated with recruitment success. Statistically significant subgroup differences (corrected p-value < .05) were found in 15 of the top 30 most important features. Conclusion: This multi-source retrospective study highlighted key features influencing RCT participant recruitment, offering actionable steps for improvement, including flexible recruitment infrastructure and appropriate participant compensation.
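The feature-importance idea behind the Tree SHAP analysis above can be illustrated with its simpler cousin, permutation importance: shuffle one feature column and measure how much the model's error grows. The model, features, and data below are toy stand-ins (a hand-coded linear scorer), not the paper's CatBoost regressor.

```python
# Sketch of permutation feature importance for an accrual-percentage model:
# importance = increase in MSE after shuffling one feature column.
import random

def mse(model, X, y):
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, col, seed=0):
    rng = random.Random(seed)
    base = mse(model, X, y)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)                      # break the feature-target link
    Xp = [list(row) for row in X]
    for row, v in zip(Xp, shuffled):
        row[col] = v
    return mse(model, Xp, y) - base

# Toy model: accrual percentage driven entirely by feature 0 (e.g., compensation);
# feature 1 is ignored, so its importance is exactly zero.
model = lambda x: 50 + 30 * x[0]
X = [[0, 1], [1, 0], [0, 0], [1, 1]]
y = [model(x) for x in X]
```

Tree SHAP additionally attributes each individual prediction to features, which is what lets the study compare feature effects across subgroups.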

9.
J Am Med Inform Assoc ; 30(12): 1895-1903, 2023 11 17.
Article in English | MEDLINE | ID: mdl-37615994

ABSTRACT

OBJECTIVE: Outcomes are important clinical study information. Despite progress in automated extraction of PICO (Population, Intervention, Comparison, and Outcome) entities from PubMed, rarely are these entities encoded by standard terminology to achieve semantic interoperability. This study aims to evaluate the suitability of the Unified Medical Language System (UMLS) and SNOMED-CT in encoding outcome concepts in randomized controlled trial (RCT) abstracts. MATERIALS AND METHODS: We iteratively developed and validated an outcome annotation guideline and manually annotated clinically significant outcome entities in the Results and Conclusions sections of 500 randomly selected RCT abstracts on PubMed. The extracted outcomes were fully, partially, or not mapped to the UMLS via MetaMap based on established heuristics. Manual UMLS browser search was performed for select unmapped outcome entities to further differentiate between UMLS and MetaMap errors. RESULTS: Only 44% of 2617 outcome concepts were fully covered in the UMLS, among which 67% were complex concepts that required the combination of 2 or more UMLS concepts to represent them. SNOMED-CT was present as a source in 61% of the fully mapped outcomes. DISCUSSION: Domains such as Metabolism and Nutrition, and Infections and Infectious Diseases need expanded outcome concept coverage in the UMLS and MetaMap. Future work is warranted to similarly assess the terminology coverage for P, I, C entities. CONCLUSION: Computational representation of clinical outcomes is important for clinical evidence extraction and appraisal and yet faces challenges from the inherent complexity and lack of coverage of these concepts in UMLS and SNOMED-CT, as demonstrated in this study.


Subject(s)
Systematized Nomenclature of Medicine , Unified Medical Language System , PubMed , Randomized Controlled Trials as Topic
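The fully/partially/not-mapped distinction used in this study can be sketched as a small heuristic: an outcome phrase is fully mapped if the whole phrase is a known concept, partially mapped if only some of its tokens are, and otherwise unmapped. The tiny vocabulary below is an invented stand-in for the UMLS, and the heuristic is far cruder than MetaMap's candidate matching.

```python
# Sketch of classifying terminology coverage of an outcome phrase against a
# toy concept vocabulary (illustrative; not the UMLS or MetaMap heuristics).
def mapping_status(phrase: str, vocabulary: set) -> str:
    p = phrase.lower()
    if p in vocabulary:
        return "fully mapped"        # whole phrase is a single known concept
    if any(tok in vocabulary for tok in p.split()):
        return "partially mapped"    # only sub-concepts are known
    return "not mapped"

vocab = {"mortality", "systolic blood pressure", "pain"}
```

The "partially mapped" bucket corresponds to the complex outcomes the study found, which need two or more UMLS concepts combined to be represented.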
10.
NPJ Digit Med ; 6(1): 158, 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37620423

ABSTRACT

Recent advances in large language models (LLMs) have demonstrated remarkable successes in zero- and few-shot performance on various downstream tasks, paving the way for applications in high-stakes domains. In this study, we systematically examine the capabilities and limitations of LLMs, specifically GPT-3.5 and ChatGPT, in performing zero-shot medical evidence summarization across six clinical domains. We conduct both automatic and human evaluations, covering several dimensions of summary quality. Our study demonstrates that automatic metrics often do not strongly correlate with the quality of summaries. Furthermore, informed by our human evaluations, we define a terminology of error types for medical evidence summarization. Our findings reveal that LLMs could be susceptible to generating factually inconsistent summaries and making overly convincing or uncertain statements, leading to potential harm due to misinformation. Moreover, we find that models struggle to identify the salient information and are more error-prone when summarizing over longer textual contexts.
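The claim that automatic metrics often do not strongly correlate with summary quality is typically checked with a rank correlation between metric scores and human ratings. A minimal sketch (the metric and rating values below are invented, not the study's data):

```python
# Sketch of Spearman rank correlation between an automatic summarization
# metric and human quality ratings (no-ties formula).
def rank(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(a, b):
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

rouge = [0.41, 0.38, 0.45, 0.30]   # automatic metric scores (invented)
human = [3.0, 4.5, 2.0, 4.0]       # human quality ratings (invented)
```

A correlation near zero or negative, as in this toy example, is the kind of signal behind the study's conclusion that automatic metrics alone are unreliable for medical evidence summaries.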

11.
AMIA Jt Summits Transl Sci Proc ; 2023: 281-290, 2023.
Article in English | MEDLINE | ID: mdl-37350899

ABSTRACT

Participant recruitment continues to be a challenge to the success of randomized controlled trials, resulting in increased costs, extended trial timelines and delayed treatment availability. The literature provides evidence that study design features (e.g., trial phase, study site involvement) and trial sponsor are significantly associated with recruitment success. Principal investigators oversee the conduct of clinical trials, including recruitment. Through a cross-sectional survey and a thematic analysis of free-text responses, we assessed the perceptions of sixteen principal investigators regarding success factors for participant recruitment. Study site involvement and funding source do not necessarily make recruitment easier or more challenging from the perspective of the principal investigators. The most commonly used recruitment strategies are also the most effort-inefficient (e.g., in-person recruitment, reviewing the electronic medical records for prescreening). Finally, we recommended actionable steps, such as improving staff support and leveraging informatics-driven approaches, to allow clinical researchers to enhance participant recruitment.

12.
medRxiv ; 2023 Apr 24.
Article in English | MEDLINE | ID: mdl-37162998

ABSTRACT

Recent advances in large language models (LLMs) have demonstrated remarkable successes in zero- and few-shot performance on various downstream tasks, paving the way for applications in high-stakes domains. In this study, we systematically examine the capabilities and limitations of LLMs, specifically GPT-3.5 and ChatGPT, in performing zero-shot medical evidence summarization across six clinical domains. We conduct both automatic and human evaluations, covering several dimensions of summary quality. Our study has demonstrated that automatic metrics often do not strongly correlate with the quality of summaries. Furthermore, informed by our human evaluations, we define a terminology of error types for medical evidence summarization. Our findings reveal that LLMs could be susceptible to generating factually inconsistent summaries and making overly convincing or uncertain statements, leading to potential harm due to misinformation. Moreover, we find that models struggle to identify the salient information and are more error-prone when summarizing over longer textual contexts.

13.
J Biomed Inform ; 142: 104375, 2023 06.
Article in English | MEDLINE | ID: mdl-37141977

ABSTRACT

OBJECTIVE: Feasible, safe, and inclusive eligibility criteria are crucial to successful clinical research recruitment. Existing expert-centered methods for eligibility criteria selection may not be representative of real-world populations. This paper presents a novel model called OPTEC (OPTimal Eligibility Criteria) based on the Multiple Attribute Decision Making method and boosted by an efficient greedy algorithm. METHODS: The model systematically identifies the optimal criteria combination for a given medical condition with the optimal tradeoff among feasibility, patient safety, and cohort diversity. The model offers flexibility in attribute configurations and generalizability to various clinical domains. The model was evaluated on two clinical domains (i.e., Alzheimer's disease and Neoplasm of pancreas) using two datasets (i.e., the MIMIC-III dataset and the NewYork-Presbyterian/Columbia University Irving Medical Center (NYP/CUIMC) database). RESULTS: Using OPTEC, we simulated the process of automatically optimizing eligibility criteria according to user-specified prioritization preferences and generated recommendations based on the top-ranked criteria combinations (top 0.41-2.75%). Harnessing the power of the model, we designed an interactive criteria recommendation system and conducted a case study with an experienced clinical researcher using the think-aloud protocol. CONCLUSIONS: The results demonstrated that OPTEC could be used to recommend feasible eligibility criteria combinations, and to provide actionable recommendations for clinical study designers to construct a feasible, safe, and diverse cohort definition during early study design.


Subject(s)
Algorithms , Research Design , Humans , Patient Selection , Eligibility Determination , Research Personnel
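The tradeoff OPTEC optimizes can be illustrated with a greatly simplified greedy sketch: score each candidate criterion as a weighted sum of feasibility, safety, and diversity attributes and keep the top-k. The criteria, attribute scores, and weights below are invented; OPTEC's actual attribute model and greedy search over combinations are richer.

```python
# Simplified greedy sketch of eligibility-criteria selection under a weighted
# feasibility/safety/diversity tradeoff (illustrative scores, not OPTEC's model).
def greedy_select(criteria, weights, k):
    """Pick k criteria maximizing the weighted sum of their attribute scores."""
    def score(name):
        return sum(w * criteria[name][attr] for attr, w in weights.items())
    return sorted(criteria, key=score, reverse=True)[:k]

criteria = {
    "age 50-90":         {"feasibility": 0.9, "safety": 0.8, "diversity": 0.9},
    "MMSE 10-26":        {"feasibility": 0.6, "safety": 0.7, "diversity": 0.5},
    "no anticoagulants": {"feasibility": 0.4, "safety": 0.9, "diversity": 0.3},
}
weights = {"feasibility": 0.4, "safety": 0.3, "diversity": 0.3}
top2 = greedy_select(criteria, weights, k=2)
```

Changing the weights models the user-specified prioritization preferences the abstract mentions: a safety-heavy weighting would promote "no anticoagulants" up the ranking.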
14.
Nurs Inq ; 30(3): e12557, 2023 07.
Article in English | MEDLINE | ID: mdl-37073504

ABSTRACT

The presence of stigmatizing language in the electronic health record (EHR) has been used to measure implicit biases that underlie health inequities. The purpose of this study was to identify the presence of stigmatizing language in the clinical notes of pregnant people during the birth admission. We conducted a qualitative analysis on N = 1117 birth admission EHR notes from two urban hospitals in 2017. We identified stigmatizing language categories, such as Disapproval (39.3%), Questioning patient credibility (37.7%), Difficult patient (21.3%), Stereotyping (1.6%), and Unilateral decisions (1.6%) in 61 notes (5.4%). We also defined a new stigmatizing language category indicating Power/privilege. This was present in 37 notes (3.3%) and signaled approval of social status, upholding a hierarchy of bias. The stigmatizing language was most frequently identified in birth admission triage notes (16%) and least frequently in social work initial assessments (13.7%). We found that clinicians from various disciplines recorded stigmatizing language in the medical records of birthing people. This language was used to question birthing people's credibility and convey disapproval of decision-making abilities for themselves or their newborns. We reported a Power/privilege language bias in the inconsistent documentation of traits considered favorable for patient outcomes (e.g., employment status). Future work on stigmatizing language may inform tailored interventions to improve perinatal outcomes for all birthing people and their families.


Subject(s)
Language , Stereotyping , Infant, Newborn , Pregnancy , Female , Humans , Electronic Health Records
15.
Int J Med Inform ; 171: 104985, 2023 03.
Article in English | MEDLINE | ID: mdl-36638583

ABSTRACT

BACKGROUND: Participant recruitment is a barrier to successful clinical research. One strategy to improve recruitment is to conduct eligibility prescreening, a resource-intensive process where clinical research staff manually review electronic health records data to identify potentially eligible patients. Criteria2Query (C2Q) was developed to address this problem by capitalizing on natural language processing to generate queries to identify eligible participants from clinical databases semi-autonomously. OBJECTIVE: We examined clinical research staff's perceived usability of C2Q for clinical research eligibility prescreening. METHODS: Twenty clinical research staff evaluated the usability of C2Q using a cognitive walkthrough with a think-aloud protocol and a Post-Study System Usability Questionnaire. On-screen activity and audio were recorded and transcribed. After every five evaluators completed an evaluation, usability problems were rated by informatics experts and prioritized for system refinement. There were four iterations of system refinement based on the evaluation feedback. Guided by the Organizational Framework for Intuitive Human-Computer Interaction, we performed a directed deductive content analysis of the verbatim transcriptions. RESULTS: Evaluators were aged 24 to 46 years (mean 33.8; SD 7.32), demonstrated high computer literacy (mean 6.36; SD 0.17), and were predominantly female (75 %), White (35 %), and clinical research coordinators (45 %). C2Q demonstrated high usability during the final cycle (2.26 out of 7 [lower scores are better]; SD 0.74). The number of unique usability issues decreased after each refinement. Fourteen subthemes emerged from three themes: seeking user goals, performing well-learned tasks, and determining what to do next. CONCLUSIONS: The cognitive walkthrough with a think-aloud protocol informed iterative system refinement and demonstrated the usability of C2Q by clinical research staff. Key recommendations for system development and implementation include improving system intuitiveness and overall user experience through comprehensive consideration of user needs and requirements for task completion.


Subject(s)
Natural Language Processing , User-Computer Interface , Humans , Female , Young Adult , Adult , Middle Aged , Computers , Electronic Health Records , Records
16.
J Am Med Inform Assoc ; 29(7): 1161-1171, 2022 06 14.
Article in English | MEDLINE | ID: mdl-35426943

ABSTRACT

OBJECTIVE: To combine machine efficiency and human intelligence for converting complex clinical trial eligibility criteria text into cohort queries. MATERIALS AND METHODS: Criteria2Query (C2Q) 2.0 was developed to enable real-time user intervention for criteria selection and simplification, parsing error correction, and concept mapping. The accuracy, precision, recall, and F1 score of enhanced modules for negation scope detection, temporal and value normalization were evaluated using a previously curated gold standard, the annotated eligibility criteria of 1010 COVID-19 clinical trials. The usability and usefulness were evaluated by 10 research coordinators in a task-oriented usability evaluation using 5 Alzheimer's disease trials. Data were collected by user interaction logging, a demographic questionnaire, the Health Information Technology Usability Evaluation Scale (Health-ITUES), and a feature-specific questionnaire. RESULTS: The accuracies of negation scope detection, temporal and value normalization were 0.924, 0.916, and 0.966, respectively. C2Q 2.0 achieved a moderate usability score (3.84 out of 5) and a high learnability score (4.54 out of 5). On average, 9.9 modifications were made for a clinical study. Experienced researchers made more modifications than novice researchers. The most frequent modification was deletion (5.35 per study). Furthermore, the evaluators favored cohort queries resulting from modifications (score 4.1 out of 5) and the user engagement features (score 4.3 out of 5). DISCUSSION AND CONCLUSION: Features to engage domain experts and to overcome the limitations in automated machine output are shown to be useful and user-friendly. We concluded that human-computer collaboration is key to improving the adoption and user-friendliness of natural language processing.


Subject(s)
COVID-19 , Artificial Intelligence , Eligibility Determination/methods , Humans , Natural Language Processing , Patient Selection
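The value-normalization module evaluated in this study can be illustrated with a small regex sketch that parses expressions like "HbA1c > 7.5%" into an (attribute, operator, number) triple. The regex and output shape below are assumptions for illustration, not C2Q 2.0's actual implementation.

```python
# Sketch of value normalization for eligibility criteria: extract
# (attribute, comparison operator, numeric value) from free text.
import re

VALUE_RE = re.compile(
    r"(?P<attr>[A-Za-z][A-Za-z0-9 ]*?)\s*(?P<op>>=|<=|>|<|=)\s*(?P<num>\d+(?:\.\d+)?)"
)

def normalize_value(criterion: str):
    """Return (attribute, operator, value) for the first comparison found, else None."""
    m = VALUE_RE.search(criterion)
    if not m:
        return None
    return (m.group("attr").strip(), m.group("op"), float(m.group("num")))

parsed = normalize_value("HbA1c > 7.5%")
```

In C2Q 2.0, the human-in-the-loop interface lets coordinators correct exactly the cases such a parser gets wrong, which is the point of the paper's machine-plus-human design.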
17.
J Am Med Inform Assoc ; 29(1): 197-206, 2021 12 28.
Article in English | MEDLINE | ID: mdl-34725689

ABSTRACT

OBJECTIVE: We conducted a systematic review to assess the effect of natural language processing (NLP) systems in improving the accuracy and efficiency of eligibility prescreening during the clinical research recruitment process. MATERIALS AND METHODS: Guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards of quality for reporting systematic reviews, a protocol for study eligibility was developed a priori and registered in the PROSPERO database. Using predetermined inclusion criteria, studies published from database inception through February 2021 were identified from 5 databases. The Joanna Briggs Institute Critical Appraisal Checklist for Quasi-experimental Studies was adapted to determine the study quality and the risk of bias of the included articles. RESULTS: Eleven studies representing 8 unique NLP systems met the inclusion criteria. These studies demonstrated moderate study quality and exhibited heterogeneity in the study design, setting, and intervention type. All 11 studies evaluated the NLP system's performance for identifying eligible participants; 7 studies evaluated the system's impact on time efficiency; 4 studies evaluated the system's impact on workload; and 2 studies evaluated the system's impact on recruitment. DISCUSSION: NLP systems in clinical research eligibility prescreening are an understudied but promising field that requires further research to assess its impact on real-world adoption. Future studies should be centered on continuing to develop and evaluate relevant NLP systems to improve enrollment into clinical studies. CONCLUSION: Understanding the role of NLP systems in improving eligibility prescreening is critical to the advancement of clinical research recruitment.


Subject(s)
Eligibility Determination , Natural Language Processing , Checklist , Data Management , Humans , Research Design
18.
Int J Med Inform ; 153: 104529, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34385097

ABSTRACT

OBJECTIVE: The widespread and frequent use of mobile technology among adolescents, including sexual minority adolescents, presents an opportunity for the development of mobile health (mHealth) technology to combat the continuing HIV epidemic among young men who have sex with men (YMSM). We analyzed perceptions of the quality and impact of an HIV prevention mobile app on sexual risk reduction among YMSM. METHODS: Participants were recruited from a larger randomized controlled trial of the MyPEEPS Mobile app among YMSM aged 13-18 years. Data were collected via semi-structured interviews to assess quality and user satisfaction with the MyPEEPS Mobile app, using analysis informed by the Information Systems Success framework. Interview data were transcribed verbatim and analyzed across six themes: information quality, net benefit, user satisfaction, product quality, service quality, and health care barriers. RESULTS: Interviews were conducted with 40 YMSM (45% Hispanic; 80% non-White; 88% non-rural resident; 28% aged 17 years). Participants' responses indicated that information quality was high; they reported that the app information was concise, easy to understand, useful, and relevant to their lives. The net benefits cited were improvements in decision-making skills, health behaviors, and communication skills with partner(s), along with increased knowledge of HIV risk. There was general user satisfaction and enjoyment when using the app, although most participants did not intend to reuse the app unless new activities were added. Participants stated that the product quality of the app was good due to its personalization, representation of the LGBTQIA+ community, and user-friendly interface. Although no major technical issues were reported, participants suggested that adaptation to a native app, rather than a web app, would improve service quality through faster loading speed. Participants also identified some health care barriers that were minimized by app use.
CONCLUSIONS: The MyPEEPS Mobile app is a well-received, functional, and entertaining mHealth HIV prevention tool that may improve HIV prevention skills and reduce HIV risk among YMSM.


Subject(s)
HIV Infections, Mobile Applications, Sexual Health, Sexual and Gender Minorities, Adolescent, HIV Infections/prevention & control, Homosexuality, Male, Humans, Information Systems, Male
19.
Stud Health Technol Inform ; 281: 984-988, 2021 May 27.
Article in English | MEDLINE | ID: mdl-34042820

ABSTRACT

Clinical trial eligibility criteria are important for selecting the right participants for clinical trials. However, they are often complex and not directly computable. This paper presents the participatory design of a human-computer collaboration method for criteria simplification that combines natural language processing with user-centered eligibility criteria simplification. A case study on the ARCADIA trial shows how criteria were simplified for structured database querying by clinical researchers, and identifies rules for criteria simplification and concept normalization.
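As a rough illustration of what "making criteria computable" means, the sketch below maps one hypothetical free-text minimum-age criterion to a structured filter. The regex pattern and field names are assumptions for illustration, not the simplification rules derived in the study.

```python
import re

def simplify_age_criterion(criterion: str):
    """Map a free-text minimum-age criterion to a structured
    (field, operator, value) triple suitable for database querying.
    Returns None when no age threshold can be recognized."""
    match = re.search(r"age[sd]?\s*(?:>=|of)?\s*(?:at least\s*)?(\d+)",
                      criterion, re.IGNORECASE)
    if match is None:
        return None
    return ("age", ">=", int(match.group(1)))

# A structured triple like this can then be rendered as a query
# fragment, e.g. "age >= 45", for clinical researchers to run.
filt = simplify_age_criterion("Patients aged 45 years or older")
```

The human-in-the-loop part of the method described above is exactly what such a sketch omits: researchers review and correct the machine-proposed structured form before it is used for querying.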


Subject(s)
Natural Language Processing, Research Personnel, Databases, Factual, Eligibility Determination, Humans
20.
J Am Med Inform Assoc ; 28(3): 616-621, 2021 Mar 01.
Article in English | MEDLINE | ID: mdl-33216120

ABSTRACT

Clinical trials are the gold standard for generating reliable medical evidence. The biggest bottleneck in clinical trials is recruitment. To facilitate recruitment, tools for patient search of relevant clinical trials have been developed, but users often suffer from information overload. With nearly 700 coronavirus disease 2019 (COVID-19) trials conducted in the United States as of August 2020, it is imperative to enable rapid recruitment to these studies. The COVID-19 Trial Finder was designed to facilitate patient-centered search of COVID-19 trials, first by location and radius distance from trial sites, and then by brief, dynamically generated medical questions that allow users to prescreen their eligibility for nearby COVID-19 trials with minimal human-computer interaction. A simulation study using 20 publicly available patient case reports demonstrates its precision and effectiveness.
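The location-and-radius step can be sketched with a standard great-circle (haversine) distance filter. The site names and coordinates below are invented for illustration; the abstract does not describe the Trial Finder's implementation at this level of detail.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two latitude/longitude points, in miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 3958.8 * asin(sqrt(a))  # 3958.8 = Earth's mean radius in miles

# Hypothetical trial sites: (name, latitude, longitude)
sites = [
    ("Site A", 40.84, -73.94),   # New York area
    ("Site B", 34.05, -118.24),  # Los Angeles area
]

def sites_within(user_lat, user_lon, radius_miles, sites):
    """Return the names of trial sites within the requested radius."""
    return [name for name, lat, lon in sites
            if haversine_miles(user_lat, user_lon, lat, lon) <= radius_miles]
```

Filtering by distance first, as the tool does, shrinks the candidate set before any eligibility questions are asked, which is what keeps the subsequent dynamically generated questioning brief.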


Subject(s)
COVID-19, Clinical Trials as Topic, Abstracting and Indexing, Adult, Aged, Aged, 80 and over, Child, Preschool, Eligibility Determination, Female, Humans, Information Storage and Retrieval, Male, Middle Aged, Patient Selection