Results 1 - 3 of 3
1.
N Am Spine Soc J ; 19: 100333, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39040948

ABSTRACT

Background: ChatGPT is an advanced language AI able to generate responses to clinical questions regarding lumbar disc herniation with radiculopathy. Artificial intelligence (AI) tools are increasingly being considered to assist clinicians in decision-making. This study compared ChatGPT-3.5 and ChatGPT-4.0 responses to established NASS clinical guidelines and evaluated concordance.

Methods: ChatGPT-3.5 and ChatGPT-4.0 were prompted with 15 questions from the 2012 NASS Clinical Guidelines for the diagnosis and treatment of lumbar disc herniation with radiculopathy. Clinical questions, organized into categories, were entered directly into ChatGPT as unmodified queries. Language output was assessed by two independent authors on September 26, 2023, according to operationally defined parameters of accuracy, over-conclusiveness, supplementary information, and incompleteness. ChatGPT-3.5 and ChatGPT-4.0 performance was compared via chi-square analyses.

Results: Of the 15 responses produced by ChatGPT-3.5, 7 (47%) were accurate, 7 (47%) were over-conclusive, 15 (100%) contained supplementary information, and 6 (40%) were incomplete. For ChatGPT-4.0, 10 (67%) were accurate, 5 (33%) were over-conclusive, 10 (67%) contained supplementary information, and 6 (40%) were incomplete. The difference in supplementary information (100% vs. 67%; p=.014) was statistically significant, whereas accuracy (47% vs. 67%; p=.269), over-conclusiveness (47% vs. 33%; p=.456), and incompleteness (40% vs. 40%; p=1.000) did not differ significantly between ChatGPT-3.5 and ChatGPT-4.0. Both models yielded 100% accuracy in the definition and history and physical examination categories. Diagnostic testing yielded 0% accuracy for ChatGPT-3.5 and 100% for ChatGPT-4.0; nonsurgical interventions, 50% and 63%, respectively; and surgical interventions, 0% and 33%, respectively.

Conclusions: ChatGPT-4.0 provided less supplementary information and higher overall accuracy across question categories than ChatGPT-3.5. ChatGPT showed reasonable concordance with NASS guidelines, but clinicians should be cautious about using ChatGPT in its current state, as it fails to safeguard against misinformation.
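The supplementary-information comparison above (15/15 vs. 10/15; p=.014) is a standard 2x2 chi-square test of proportions. A minimal sketch in Python, assuming a Pearson chi-square without Yates continuity correction (that assumption reproduces the reported p-value; the abstract does not state which variant was used):

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]], with the p-value from the 1-df chi-square
    distribution via the complementary error function."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    chi2 = 0.0
    for obs, row, col in ((a, row1, col1), (b, row1, col2),
                          (c, row2, col1), (d, row2, col2)):
        exp = row * col / n  # expected count under independence
        chi2 += (obs - exp) ** 2 / exp
    # For 1 df, chi-square is the square of a standard normal,
    # so the survival function is erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Supplementary information: 15/15 for ChatGPT-3.5 vs. 10/15 for ChatGPT-4.0
chi2, p = chi2_2x2(15, 0, 10, 5)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # chi2 = 6.00, p = 0.014
```

The same function applied to the accuracy (7/15 vs. 10/15), over-conclusiveness, and incompleteness tables yields the non-significant p-values reported above.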

2.
Cureus ; 16(3): e57319, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38690503

ABSTRACT

The intracellular coccobacillus Rickettsia rickettsii causes Rocky Mountain spotted fever (RMSF), a potentially fatal illness. This bacterium is transmitted to humans through a tick vector. Patients classically present with a triad of symptoms: fever, headache, and a rash that begins on the extremities and spreads proximally to the trunk. Diagnosis of this disease can prove difficult when patients have unusual symptoms, such as hypertensive crisis. In this case report, we present a 29-year-old male who arrived at the emergency room with altered mental status and a hypertensive crisis after his family reported one week of changes in his behavior. The patient had no evidence of ticks, tick bites, fever, or rash. Positive findings in the emergency room included a WBC count of 14.9 × 10⁹/L. All other physical examination, imaging, and laboratory findings were non-contributory. The patient was promptly given IV hydralazine to control his blood pressure and empiric IV ceftriaxone for potential infection, and he was admitted for observation. Over the course of three days, the WBC count decreased and his altered mental status improved. On day 3, the patient remembered a tick crawling across his hand, which prompted the ordering of immunoglobulin levels for tick-borne illnesses; IgM for RMSF was positive. This case illustrates the need for clinicians to keep RMSF high on the differential, even in the presence of a paucity of symptoms, as prompt treatment with doxycycline can be lifesaving. This case may also be one of the first reported in the literature in which hypertension was a symptom of RMSF. It is plausible, however, that this patient's hypertension was due to an acute stress response.

3.
Hand (N Y) ; : 15589447241232095, 2024 Feb 27.
Article in English | MEDLINE | ID: mdl-38414220

ABSTRACT

BACKGROUND: The National Institutes of Health (NIH) and the American Medical Association (AMA) recommend a sixth-grade reading level for patient-directed content. This study aims to quantitatively evaluate the readability of online information sources related to carpal tunnel surgery using established readability indices. METHODS: Google searches for the queries "carpal tunnel release" and "carpal tunnel decompression surgery" were performed, and the first 20 websites were identified per query; health-specific clickthrough rate (CTR) data were used to select these first 20 search engine results for each query. WebFX online software tools were used to determine readability. Indices included the Flesch-Kincaid Reading Ease, Flesch-Kincaid Grade Level, Coleman-Liau Index, Automated Readability Index, Gunning Fog Score, and Simple Measure of Gobbledygook Index. RESULTS: "Carpal tunnel release" had a mean readability of 8.46, and "carpal tunnel decompression surgery" had a mean readability of 8.70. Mean readability scores among the indices used for both search queries ranged from 6.17 to 14.0. The total mean readability for carpal tunnel surgery information was 8.58, corresponding to approximately a ninth-grade reading level in the United States. CONCLUSION: The average readability of online carpal tunnel surgery content is three grade levels above the recommended sixth-grade level for patient-directed materials. This discrepancy indicates that existing online materials related to carpal tunnel surgery are more difficult to understand than the standards set by the NIH and AMA.
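The Flesch-Kincaid Grade Level used above maps text statistics to a U.S. school grade via 0.39 × (words/sentence) + 11.8 × (syllables/word) − 15.59. A minimal sketch in Python, using a crude vowel-group syllable heuristic (dedicated tools such as the WebFX checker count syllables more carefully, so exact scores will differ):

```python
import re

def count_syllables(word):
    """Crude heuristic: count vowel groups, dropping most silent final e's."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1 and not word.endswith(("le", "ee")):
        n -= 1
    return max(n, 1)

def fk_grade_level(text):
    """Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59"""
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

# A short plain-language sample scores well below sixth grade:
sample = ("The doctor makes a small cut in your wrist. "
          "This takes pressure off the nerve in your hand.")
print(round(fk_grade_level(sample), 1))  # → 2.3
```

Longer sentences and polysyllabic clinical terms ("decompression," "radiculopathy") push the words-per-sentence and syllables-per-word ratios up, which is why surgical patient materials drift toward the ninth-grade scores reported here.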
