1.
Otolaryngol Head Neck Surg ; 170(6): 1512-1518, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38488302

ABSTRACT

OBJECTIVE: The Centers for Medicare & Medicaid Services "OpenPayments" database tracks industry payments to US physicians to improve transparency around research conflicts of interest (COIs), but manually cross-checking article authors against this database is labor-intensive. This study aims to assess the potential of large language models (LLMs) like ChatGPT to automate COI data analysis in medical publications. STUDY DESIGN: An observational study analyzing the accuracy of ChatGPT in automating the cross-checking of COI disclosures in medical research articles against the OpenPayments database. SETTING: Publications on Food and Drug Administration-approved biologics for chronic rhinosinusitis with nasal polyposis: omalizumab, mepolizumab, and dupilumab. METHODS: First, ChatGPT evaluated author affiliations from PubMed to identify authors based in the United States. Second, for author names matching 1 or more payment recipients in OpenPayments, ChatGPT compared the author's affiliation with the OpenPayments recipient metadata. Third, ChatGPT scrutinized the full-article COI statements, producing a matrix of disclosures for each author against each relevant company (Sanofi, Regeneron, Genentech, Novartis, and GlaxoSmithKline). A random subset of responses was manually checked for accuracy. RESULTS: In total, 78 relevant articles and 294 unique US authors were included, leading to 980 LLM queries. Manual verification showed accuracies of 100% (200/200; 95% confidence interval [CI]: 98.1%-100%) for country analysis, 97.4% (113/116; 95% CI: 92.7%-99.1%) for matching author affiliations with OpenPayments metadata, and 99.2% (1091/1100; 95% CI: 98.5%-99.6%) for COI statement data extraction. CONCLUSION: LLMs have robust potential to automate author- and company-specific COI cross-checking against the OpenPayments database. Our findings pave the way for streamlined, efficient, and accurate COI assessment that could be widely employed across medical research.


Subjects
Conflict of Interest , Conflict of Interest/economics , Humans , United States , Disclosure , Drug Industry/economics , Drug Industry/ethics , Biomedical Research/ethics , Biomedical Research/economics , Authorship , Factual Databases
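To make the described pipeline concrete, here is a minimal sketch of the kind of affiliation-matching query outlined in the METHODS, assuming the OpenAI Python client; the model name, prompt wording, and example records are illustrative placeholders, not the authors' actual pipeline.

```python
# Sketch: ask an LLM whether a PubMed author affiliation plausibly matches an
# OpenPayments recipient record. The records, prompt, and model choice below
# are hypothetical; only the openai client calls are real API.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

author = {
    "name": "Jane Doe",
    "affiliation": "Department of Otolaryngology, Example University, Boston, MA, USA",
}
openpayments_recipient = {
    "name": "Jane Doe",
    "specialty": "Otolaryngology",
    "city": "Boston",
    "state": "MA",
}

prompt = (
    "Do the following two records plausibly refer to the same physician? "
    "Answer YES or NO, then give a one-sentence reason.\n\n"
    f"PubMed author: {author}\n"
    f"OpenPayments recipient: {openpayments_recipient}"
)

response = client.chat.completions.create(
    model="gpt-4",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```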
2.
J Clin Transl Sci ; 8(1): e53, 2024.
Article in English | MEDLINE | ID: mdl-38544748

ABSTRACT

Background: Incarceration is a significant social determinant of health, contributing to high morbidity, mortality, and racialized health inequities. However, incarceration status is largely invisible to health services research because it is inadequately captured in the clinical electronic health record (EHR). This study aims to develop, train, and validate natural language processing (NLP) techniques to more effectively identify incarceration status in the EHR. Methods: The study population consisted of adult patients (≥18 years old) who presented to the emergency department between June 2013 and August 2021. The EHR database was filtered for notes containing specific incarceration-related terms, and a random selection of 1000 notes was then annotated for incarceration and further stratified into specific statuses of prior history, recent, and current incarceration. For NLP model development, 80% of the notes were used to train the Longformer-based and RoBERTa algorithms. The remaining 20% of the notes underwent analysis with GPT-4. Results: There were 849 unique patients across 989 visits in the 1000 annotated notes. Manual annotation revealed that 559 of 1000 notes (55.9%) contained evidence of incarceration history. ICD-10 codes (sensitivity: 4.8%, specificity: 99.1%, F1-score: 0.09) demonstrated inferior performance to RoBERTa NLP (sensitivity: 78.6%, specificity: 73.3%, F1-score: 0.79), Longformer NLP (sensitivity: 94.6%, specificity: 87.5%, F1-score: 0.93), and GPT-4 (sensitivity: 100%, specificity: 61.1%, F1-score: 0.86). Conclusions: Our advanced NLP models demonstrate a high degree of accuracy in identifying incarceration status from clinical notes. Further research is needed to explore their scaled implementation in population health initiatives and to assess their potential to mitigate health disparities through tailored system interventions.
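As a rough illustration of the note-classification step, the sketch below runs a RoBERTa-style sequence classifier over a clinical note with the Hugging Face transformers pipeline; the checkpoint name, labels, and note text are placeholders, since the study's fine-tuned weights and label scheme are not reproduced here.

```python
# Sketch: classify an ED note for incarceration status with a RoBERTa-style
# sequence classifier. The checkpoint is a generic placeholder, not the
# study's fine-tuned model, so its output labels are illustrative only.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="roberta-base",  # placeholder; the study trained task-specific weights
)

note = "Patient reports he was released from county jail two weeks ago ..."
# truncation=True guards against notes longer than the model's 512-token limit
print(classifier(note, truncation=True))  # e.g. [{'label': 'LABEL_1', 'score': 0.87}]
```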

4.
JMIR Med Educ ; 9: e45312, 2023 02 08.
Article in English | MEDLINE | ID: mdl-36753318

ABSTRACT

BACKGROUND: Chat Generative Pre-trained Transformer (ChatGPT) is a 175-billion-parameter natural language processing model that can generate conversation-style responses to user input. OBJECTIVE: This study aimed to evaluate the performance of ChatGPT on questions within the scope of the United States Medical Licensing Examination (USMLE) Step 1 and Step 2 exams, as well as to analyze its responses for user interpretability. METHODS: We used 2 sets of multiple-choice questions to evaluate ChatGPT's performance, each with questions pertaining to Step 1 and Step 2. The first set was derived from AMBOSS, a commonly used question bank for medical students, which also provides statistics on question difficulty and on performance relative to its user base. The second set was the National Board of Medical Examiners (NBME) free 120 questions. ChatGPT's performance was compared to that of 2 other large language models, GPT-3 and InstructGPT. The text output of each ChatGPT response was evaluated across 3 qualitative metrics: logical justification of the answer selected, presence of information internal to the question, and presence of information external to the question. RESULTS: Across the 4 data sets, AMBOSS-Step1, AMBOSS-Step2, NBME-Free-Step1, and NBME-Free-Step2, ChatGPT achieved accuracies of 44% (44/100), 42% (42/100), 64.4% (56/87), and 57.8% (59/102), respectively. ChatGPT outperformed InstructGPT by 8.15% on average across all data sets, and GPT-3 performed similarly to random chance. The model demonstrated a significant decrease in performance as question difficulty increased (P=.01) within the AMBOSS-Step1 data set. Logical justification for ChatGPT's answer selection was present in 100% of outputs for the NBME data sets. Information internal to the question was present in 96.8% (183/189) of all questions. The presence of information external to the question was 44.5% and 27% lower for incorrect answers relative to correct answers on the NBME-Free-Step1 (P<.001) and NBME-Free-Step2 (P=.001) data sets, respectively. CONCLUSIONS: ChatGPT marks a significant improvement in natural language processing models on the task of medical question answering. By performing above the 60% threshold on the NBME-Free-Step1 data set, we show that the model achieves the equivalent of a passing score for a third-year medical student. Additionally, we highlight ChatGPT's capacity to provide logic and informational context in the majority of its answers. Taken together, these findings make a compelling case for the potential of ChatGPT as an interactive medical education tool to support learning.
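For context, a minimal sketch of how one might pose a multiple-choice item to a chat model and score it against an answer key follows; the question, options, answer key, and model name are hypothetical placeholders, not items from AMBOSS or the NBME sets.

```python
# Sketch: send one multiple-choice question to a chat model and score the reply
# against a known key. Question content and model choice are illustrative.
from openai import OpenAI

client = OpenAI()

question = {
    "stem": "A 24-year-old man presents with ... Which of the following is the most likely diagnosis?",
    "options": {"A": "Option A", "B": "Option B", "C": "Option C", "D": "Option D"},
    "answer": "C",  # hypothetical key
}

prompt = (
    question["stem"]
    + "\n"
    + "\n".join(f"{letter}. {text}" for letter, text in question["options"].items())
    + "\nAnswer with a single letter, then justify your choice."
)

reply = client.chat.completions.create(
    model="gpt-4",  # illustrative; the study evaluated ChatGPT
    messages=[{"role": "user", "content": prompt}],
).choices[0].message.content

predicted = reply.strip()[0].upper()  # take the first character as the chosen letter
print(predicted == question["answer"], reply)
```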

5.
Anesth Analg ; 136(2): 317-326, 2023 02 01.
Article in English | MEDLINE | ID: mdl-35726884

ABSTRACT

BACKGROUND: Prolonged opioid use after surgery (POUS), defined as the filling of at least 1 opioid prescription between 90 and 180 days after surgery, has been shown to increase health care costs and utilization in adult populations. However, its economic burden has not been studied in adolescent patients. We hypothesized that adolescents with POUS would have higher health care costs and utilization than patients without POUS. METHODS: Opioid-naive patients 12 to 21 years of age in the United States who received outpatient prescription opioids after surgery were identified from insurance claims in the Optum Clinformatics Data Mart Database from January 1, 2003, to June 30, 2019. The primary outcomes were total health care costs and visits in the 730-day period after the surgical encounter in patients with POUS versus those without POUS. Multivariable regression analyses were used to determine adjusted differences in health care costs and visits. RESULTS: A total of 126,338 unique patients undergoing 132,107 procedures were included in the analysis, with 4867 patients meeting criteria for POUS, for an incidence of 3.9%. Adjusted mean total health care costs in the 730 days after surgery were $4604 (95% confidence interval [CI], $4027-$5181) higher in patients with POUS than in patients without POUS. Patients with POUS had increases in mean adjusted inpatient length of stay (0.26 days greater [95% CI, 0.22-0.30]), inpatient visits (0.07 greater [95% CI, 0.07-0.08]), emergency visits (0.96 greater [95% CI, 0.89-1.03]), and outpatient/other visits (5.78 greater [95% CI, 5.37-6.19]) in the 730 days after surgery (P < .001 for all comparisons). CONCLUSIONS: In adolescents, POUS was associated with increased total health care costs and utilization in the 730 days after the surgical encounter. Given the increased health care burden associated with POUS in adolescents, further investigation of preventive measures for high-risk individuals and additional study of the relationship between opioid prescribing and outcomes may be warranted.


Subjects
Opioid Analgesics , Opioid-Related Disorders , Adult , Humans , Adolescent , United States/epidemiology , Opioid Analgesics/adverse effects , Caregiver Burden , Opioid-Related Disorders/diagnosis , Opioid-Related Disorders/epidemiology , Health Care Costs , Outpatients , Retrospective Studies
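A minimal sketch of the POUS definition used above (any opioid fill 90 to 180 days after the index surgery), applied to hypothetical claims tables with pandas; the column names and example rows are assumptions, not the Optum Clinformatics schema.

```python
# Sketch: flag prolonged opioid use after surgery (POUS) as any opioid fill
# 90-180 days after the index surgery. Tables and column names are illustrative.
import pandas as pd

surgeries = pd.DataFrame({
    "patient_id": [1, 2],
    "surgery_date": pd.to_datetime(["2018-01-10", "2018-03-05"]),
})
opioid_fills = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "fill_date": pd.to_datetime(["2018-01-12", "2018-05-01", "2018-03-20"]),
})

merged = surgeries.merge(opioid_fills, on="patient_id", how="left")
days_after_surgery = (merged["fill_date"] - merged["surgery_date"]).dt.days
merged["pous_fill"] = days_after_surgery.between(90, 180)

# One POUS flag per surgical encounter
pous = merged.groupby(["patient_id", "surgery_date"])["pous_fill"].any()
print(pous)  # patient 1's fill at day 111 makes that encounter POUS-positive
```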