Results 1 - 3 of 3
1.
J Neurointerv Surg ; 16(3): 253-260, 2024 Feb 12.
Article in English | MEDLINE | ID: mdl-38184368

ABSTRACT

BACKGROUND: Artificial intelligence (AI) has become a promising tool in medicine. ChatGPT, a chatbot built on a large language model, shows promise in supporting clinical practice. We assessed the potential of ChatGPT as a clinical reasoning tool for mechanical thrombectomy in patients with stroke.

METHODS: An internal validation of ChatGPT's abilities was first performed using artificially created patient scenarios, followed by an assessment of real patient scenarios from the medical center's stroke database. All patients with large vessel occlusions who underwent mechanical thrombectomy at Tulane Medical Center between January 1, 2022 and December 31, 2022 were included in the study. ChatGPT's performance in evaluating which patients should undergo mechanical thrombectomy was compared with the decisions made by board-certified stroke neurologists and neurointerventionalists. The interpretation skills, clinical reasoning, and accuracy of ChatGPT were analyzed.

RESULTS: 102 patients with large vessel occlusions underwent mechanical thrombectomy. ChatGPT agreed with the physician's decision whether or not to pursue thrombectomy in 54.3% of cases. ChatGPT made mistakes in 8.8% of cases, consisting of mathematical, logical, and misinterpretation errors. In the internal validation phase, ChatGPT provided nuanced clinical reasoning and performed multi-step thinking, although with an increased rate of mistakes.

CONCLUSION: ChatGPT shows promise in clinical reasoning, including the ability to factor in a patient's underlying comorbidities when considering mechanical thrombectomy. However, ChatGPT is also prone to errors and, in its present form, should not be relied on as a sole decision-making tool, but it has potential to help clinicians work more efficiently.


Subject(s)
Artificial Intelligence , Stroke , Humans , Stroke/diagnostic imaging , Stroke/surgery , Clinical Reasoning , Databases, Factual , Thrombectomy
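The study's headline metric is a simple agreement rate between ChatGPT's recommendation and the physicians' decision. A minimal sketch of that comparison (not the study's code; the case data below is hypothetical, whereas the study used 102 real cases):

```python
# Illustrative sketch: comparing a model's thrombectomy recommendations
# against physician decisions, case by case.

def agreement_rate(model_decisions, physician_decisions):
    """Percent of cases where the model matched the physician's decision."""
    assert len(model_decisions) == len(physician_decisions)
    matches = sum(m == p for m, p in zip(model_decisions, physician_decisions))
    return 100 * matches / len(model_decisions)

# Hypothetical example: True = pursue thrombectomy, False = do not.
model = [True, True, False, True, False]
physician = [True, False, False, True, True]
rate = agreement_rate(model, physician)  # 3 of 5 match -> 60.0
```

In the study, the same comparison over the 102 real cases yielded 54.3% agreement.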
2.
BMJ Neurol Open ; 5(2): e000530, 2023.
Article in English | MEDLINE | ID: mdl-37936648

ABSTRACT

Background and objectives: ChatGPT has shown promise in healthcare. To assess the utility of this novel tool in healthcare education, we evaluated ChatGPT's performance in answering neurology board exam questions.

Methods: Neurology board-style examination questions were accessed from BoardVitals, a commercial neurology question bank. ChatGPT was provided the full question prompt and multiple answer choices, and was given up to three attempts to select the correct answer. A total of 560 questions (14 blocks of 40 questions) were used, although image-based questions were excluded because ChatGPT cannot process visual input. The artificial intelligence (AI) answers were then compared with human user data provided by the question bank to gauge its performance.

Results: Of 509 eligible questions across 14 question blocks, ChatGPT correctly answered 335 (65.8%) on the first attempt and 383 (75.3%) within three attempts, scoring at approximately the 26th and 50th percentiles, respectively. The highest-performing subjects were pain (100%), epilepsy & seizures (85%), and genetics (82%), while the lowest-performing were imaging/diagnostic studies (27%), critical care (41%), and cranial nerves (48%).

Discussion: This study found that ChatGPT performed similarly to its human counterparts. The AI's accuracy increased with multiple attempts, and its performance fell within the expected range of neurology resident learners. This study demonstrates ChatGPT's potential in processing specialised medical information. Future studies should better define the scope to which AI can be integrated into medical decision making.
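The reported accuracies follow directly from the counts in the abstract. A minimal check (not the study's code; note that 383/509 rounds to 75.2% here, slightly below the 75.3% reported, presumably a difference in rounding; the percentile mapping comes from the question bank's user data and is not reproduced):

```python
# Recomputing ChatGPT's accuracy from the counts given in the abstract.
eligible = 509            # questions remaining after excluding image-based items
first_try_correct = 335   # correct on the first attempt
within_three_correct = 383  # correct within three attempts

def accuracy(correct, total):
    """Percent correct, rounded to one decimal place."""
    return round(100 * correct / total, 1)

first_try = accuracy(first_try_correct, eligible)        # 65.8
within_three = accuracy(within_three_correct, eligible)  # 75.2
```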

3.
ACS Nano ; 2023 Jan 03.
Article in English | MEDLINE | ID: mdl-36595218

ABSTRACT

Interferon-gamma release assays (IGRAs) that measure pathogen-specific T-cell response rates can provide a more reliable estimate of protection than specific antibody levels but have limited potential for widespread use due to their workflow, personnel, and instrumentation demands. The major vaccines for SARS-CoV-2 have demonstrated substantial efficacy against all of its current variants, but approaches are needed to determine how these vaccines will perform against future variants, as they arise, to inform vaccine and public health policies. Here we describe a rapid, sensitive, nanolayer polylysine-integrated microfluidic chip IGRA read by a fluorescent microscope that has a 5 h sample-to-answer time and uses ∼25 µL of a fingerstick whole blood sample. Results from this assay correlated with those of a comparable clinical IGRA when used to evaluate the T-cell response to SARS-CoV-2 peptides in a population of vaccinated and/or infected individuals. Notably, this streamlined and inexpensive assay is suitable for high-throughput analyses in resource-limited settings for other infectious diseases.
