Results 1 - 4 of 4
1.
JMIR AI ; 3: e54371, 2024 Aug 13.
Article in English | MEDLINE | ID: mdl-39137416

ABSTRACT

BACKGROUND: Although uncertainties exist regarding implementation, artificial intelligence-driven generative language models (GLMs) have enormous potential in medicine. Deployment of GLMs could improve patient comprehension of clinical texts and help address low health literacy. OBJECTIVE: The goal of this study is to evaluate the potential of ChatGPT-3.5 and GPT-4 to tailor the complexity of medical information to a patient-specified education level, which is crucial if they are to serve as tools for addressing low health literacy. METHODS: Input templates related to 2 prevalent chronic diseases-type II diabetes and hypertension-were designed. Each clinical vignette was adjusted for hypothetical patient education levels to evaluate output personalization. To assess the success of a GLM (GPT-3.5 and GPT-4) in tailoring output writing, the readability of pre- and posttransformation outputs was quantified using the Flesch reading ease score (FKRE) and the Flesch-Kincaid grade level (FKGL). RESULTS: Responses (n=80) were generated using GPT-3.5 and GPT-4 across 2 clinical vignettes. For GPT-3.5, FKRE means were 57.75 (SD 4.75), 51.28 (SD 5.14), 32.28 (SD 4.52), and 28.31 (SD 5.22) for 6th grade, 8th grade, high school, and bachelor's, respectively; FKGL mean scores were 9.08 (SD 0.90), 10.27 (SD 1.06), 13.4 (SD 0.80), and 13.74 (SD 1.18). GPT-3.5 aligned with the prespecified education level only at the bachelor's degree. Conversely, GPT-4's FKRE mean scores were 74.54 (SD 2.6), 71.25 (SD 4.96), 47.61 (SD 6.13), and 13.71 (SD 5.77), with FKGL mean scores of 6.3 (SD 0.73), 6.7 (SD 1.11), 11.09 (SD 1.26), and 17.03 (SD 1.11) for the same respective education levels. GPT-4 met the target readability for all groups except the 6th-grade FKRE average.
Both GLMs produced outputs with statistically significant differences in mean FKRE and FKGL across input education levels (FKRE: 6th grade P<.001; 8th grade P<.001; high school P<.001; bachelor's P=.003; FKGL: 6th grade P=.001; 8th grade P<.001; high school P<.001; bachelor's P<.001). CONCLUSIONS: GLMs can change the structure and readability of medical text outputs according to the input-specified education level. However, GLMs categorize input education designations into 3 broad tiers of output readability: easy (6th and 8th grade), medium (high school), and difficult (bachelor's degree). This is the first result to suggest that there are broad boundaries on how finely GLMs can tailor output text simplification. Future research must establish how GLMs can reliably personalize medical texts to prespecified education levels to enable a broader impact on health care literacy.
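The FKRE and FKGL metrics used in this study are standard readability formulas based on sentence length and syllable counts. A minimal Python sketch, using a crude vowel-group heuristic for syllables (production implementations use pronunciation dictionaries or more careful heuristics), could look like:

```python
import re

def _counts(text):
    # Naive tokenization: sentences end with . ! ?; words are alphabetic runs.
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    def syllables(word):
        # Crude heuristic: one syllable per vowel group, minimum one per word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
    total_syllables = sum(syllables(w) for w in words)
    return sentences, len(words), total_syllables

def flesch_reading_ease(text):
    # FKRE: higher scores mean easier text (90+ ~ 5th grade, <30 ~ college graduate).
    s, w, syl = _counts(text)
    return 206.835 - 1.015 * (w / s) - 84.6 * (syl / w)

def fk_grade_level(text):
    # FKGL: approximate US school grade level required to understand the text.
    s, w, syl = _counts(text)
    return 0.39 * (w / s) + 11.8 * (syl / w) - 15.59
```

With these two functions, the study's pre/post comparison amounts to scoring each GLM output and checking whether the score lands in the band expected for the requested education level.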

2.
Article in English | MEDLINE | ID: mdl-38188191

ABSTRACT

Objective: We aimed to elucidate associations between the geographic location, size, and ranking of the medical schools that orthopaedic surgery residents graduate from and the residencies that they match into, both before and after the COVID-19 pandemic, by examining the 2017 to 2022 orthopaedic surgery residency cohorts. Methods: Demographics were extracted using the Doximity Residency Navigator platform, the 2021 US News and World Report, and program websites. Medical schools were classified as large if they had >613 medical students. Postgraduate year 1 (PGY-1) (2021 match) and PGY-2 (2022 match) residents were classified as the COVID-19 cohort. Location was categorized as Northeast, Midwest, South, and West. Chi-square tests, Cohen's h, and descriptive statistics were used for analysis, with statistical significance set at p <0.05. Results: Four thousand two hundred forty-three residents from 160 accredited US orthopaedic residency programs (78.4%) were included. Northeastern applicants were most likely to match in the same region (p <0.01), and southern applicants were most likely to match at their home program (p <0.001). Applicants affected by the COVID-19 pandemic did not differ from their predecessors with regard to matching in the same region (p = 0.637) or at their home program (p = 0.489). Applicants from public medical schools were more likely to match in the same region and at their home program (p <0.001), whereas those from private medical schools were more likely to match at top-ranked residencies (p <0.001). Students from both top 25- and top 50-ranked medical schools were more likely to match at their home program (p <0.01) and attend top 20-ranked residency programs (p <0.0001). Conclusion: These results demonstrate significant associations between matched residencies and the attended medical schools' geographic location, school type, and ranking.
During the pandemic, geographic trends were overall unchanged, whereas residents from large or lower-ranked schools were more likely to match at home programs, and those from private or top-ranked schools were less likely to attend top residencies.
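The statistics named above are simple to compute for a 2x2 contingency table. A minimal standard-library sketch of Cohen's h (effect size for a difference between two proportions) and the Pearson chi-square statistic follows; the study's actual contingency tables are not reproduced here, so the example values are illustrative only:

```python
import math

def cohens_h(p1, p2):
    # Cohen's h: arcsine-transformed difference between two proportions.
    # Conventional thresholds: ~0.2 small, ~0.5 medium, ~0.8 large.
    return 2 * math.asin(math.sqrt(p1)) - 2 * math.asin(math.sqrt(p2))

def chi_square_2x2(a, b, c, d):
    # Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]],
    # via the shortcut formula n(ad - bc)^2 / ((a+b)(c+d)(a+c)(b+d)).
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

The chi-square statistic would then be compared against the chi-square distribution with 1 degree of freedom to obtain the p-values reported in the abstract.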

3.
World Neurosurg ; 184: 253-266.e2, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38141755

ABSTRACT

OBJECTIVE: With no cure for Alzheimer disease (AD), current efforts focus on therapeutics that prevent further cognitive impairment. Deep brain stimulation (DBS) has been studied for its potential to mitigate AD symptoms. This systematic review investigates the efficacy of current and previous DBS targets in slowing cognitive decline in AD. METHODS: A systematic review of the literature was performed through a search of the PubMed, Scopus, and Web of Science databases. Human studies between 1994 and 2023 were included. Sample size, cognitive outcomes, and complications were recorded for each study. RESULTS: Fourteen human studies were included: 7 studies with 6 distinct cohorts (n = 56) targeted the fornix, 6 studies with 3 distinct cohorts (n = 17) targeted the nucleus basalis of Meynert (NBM), and 1 study (n = 3) investigated DBS of the ventral striatum (VS). The Alzheimer's Disease Assessment Scale-Cognitive Subscale, Mini-Mental State Examination, and Clinical Dementia Rating Scale Sum of Boxes were used as the primary outcomes. In 5 of 6 cohorts where DBS targeted the fornix, cognitive decline was slowed based on the Alzheimer's Disease Assessment Scale-Cognitive Subscale or Mini-Mental State Examination scores. In 2 of 3 NBM cohorts, a similar reduction was reported. When DBS targeted the VS, the patients' Clinical Dementia Rating Scale Sum of Boxes scores indicated a slowed decline. CONCLUSIONS: This review summarizes current evidence on the therapeutic benefit of DBS of the fornix, NBM, and VS and addresses variability in study designs. Because of varying study parameters, outcome measures, and study durations, and limited cohort sizes, definitive conclusions regarding the utility of DBS for AD cannot be made. Further investigation is needed to determine the safety and efficacy of DBS for AD.


Subjects
Alzheimer Disease, Cognitive Dysfunction, Deep Brain Stimulation, Deep Brain Stimulation/methods, Humans, Alzheimer Disease/therapy, Alzheimer Disease/psychology, Cognitive Dysfunction/therapy, Cognitive Dysfunction/etiology, Cognitive Dysfunction/psychology, Treatment Outcome, Fornix
4.
JMIR Med Educ ; 9: e49877, 2023 Nov 10.
Article in English | MEDLINE | ID: mdl-37948112

ABSTRACT

BACKGROUND: The transition to clinical clerkships can be difficult for medical students, as it requires synthesizing and applying preclinical information in diagnostic and therapeutic decisions. ChatGPT, a generative language model with many medical applications owing to its creativity, memory, and accuracy, can help students in this transition. OBJECTIVE: This paper assesses ChatGPT 3.5's ability to perform interactive clinical simulations and shows this tool's benefit to medical education. METHODS: Simulation starting prompts were refined using ChatGPT 3.5 in Google Chrome. Starting prompts were selected based on assessment format, stepwise progression of simulation events and questions, free-response question type, responsiveness to user inputs, postscenario feedback, and medical accuracy of that feedback. The chosen scenarios were advanced cardiac life support and medical intensive care (for sepsis and pneumonia). RESULTS: Two starting prompts were chosen. Prompt 1 was developed through 3 test simulations and used successfully in 2 simulations. Prompt 2 was developed through 10 additional test simulations and used successfully in 1 simulation. CONCLUSIONS: ChatGPT is capable of creating simulations for early clinical education. These simulations let students practice novel parts of the clinical curriculum, such as forming independent diagnostic and therapeutic impressions over an entire patient encounter. Furthermore, the simulations can adapt to user inputs in a way that replicates real life more accurately than premade question bank clinical vignettes. Finally, ChatGPT can create potentially unlimited free simulations with specific feedback, which increases access for medical students of lower socioeconomic status and from underresourced medical schools.
However, no tool is perfect, and ChatGPT is no exception; there are concerns about simulation accuracy and replicability that need to be addressed to further optimize ChatGPT's performance as an educational resource.
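The abstract does not reproduce the study's actual starting prompts, but the selection criteria it lists (stepwise progression, free-response questions, responsiveness to user inputs, postscenario feedback) suggest a template structure. The sketch below is a hypothetical illustration of such a template, not the prompt used in the study:

```python
# Hypothetical starting-prompt template illustrating the criteria the study
# names; the study's actual prompts are not published in this abstract.
SIMULATION_PROMPT = """You are running an interactive {scenario} simulation \
for a medical student.
Rules:
1. Present the case one step at a time; wait for my response before continuing.
2. Ask only free-response questions (no multiple choice).
3. Adapt subsequent events to my diagnostic and therapeutic decisions.
4. After the scenario ends, give feedback on my performance with \
guideline-based reasoning.
Begin with the initial patient presentation."""

def build_prompt(scenario):
    # Fill in a clinical scenario, e.g. "advanced cardiac life support".
    return SIMULATION_PROMPT.format(scenario=scenario)
```

A template like this would then be pasted (or sent via an API) as the first message of a chat session, with each subsequent student reply driving the next simulation step.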
