1.
Anesthesiology; 140(4): 701-714, 2024 Apr 01.
Article in English | MEDLINE | ID: mdl-38207329

ABSTRACT

BACKGROUND: Understanding factors that explain why some women experience greater postoperative pain and consume more opioids after cesarean delivery is crucial to building an evidence base for personalized prevention. Comprehensive psychosocial assessment with validated questionnaires in the preoperative period can be time-consuming. A three-item questionnaire has shown promise as a simpler tool to integrate into clinical practice, but its brevity may limit its ability to explain heterogeneity in psychosocial pain modulators among individuals. This study compared the explanatory ability of three models: (1) the three-item questionnaire; (2) a 58-item (long) questionnaire comprising validated instruments (e.g., the Brief Pain Inventory and the Patient-Reported Outcomes Measurement Information System [PROMIS]) plus the three-item questionnaire; and (3) a novel 19-item (brief) questionnaire assessing several psychosocial factors plus the three-item questionnaire. The study also explored the utility of adding a pragmatic quantitative sensory test to the models.

METHODS: In this prospective, observational study, 545 women undergoing cesarean delivery completed questionnaires before surgery. Pain during the local anesthetic skin wheal before spinal placement served as a pragmatic quantitative sensory test. Postoperatively, pain and opioid consumption were assessed. Linear regression analysis assessed model fit and the association of model items with pain and opioid consumption during the 48 h after surgery.

RESULTS: Each of the three models explained a modest amount of the variability in postoperative pain and opioid consumption. Both the brief and long questionnaire models performed better than the three-item questionnaire but were statistically indistinguishable from each other. Items independently associated with pain and opioid consumption included anticipated postsurgical pain medication requirement, surgical anxiety, poor sleep, pre-existing pain, and catastrophic thinking about pain. The quantitative sensory test was itself independently associated with pain across models but only modestly improved the models for postoperative pain.

CONCLUSIONS: The brief questionnaire may be more clinically feasible than longer validated questionnaires while still performing better, and integrating a more comprehensive psychosocial assessment, than the three-item questionnaire.
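A minimal sketch of the kind of model comparison the methods describe: fit linear regression models of increasing size and compare explained variance (adjusted R²), then add the quantitative sensory test as an extra covariate. The variable names, which items belong to which model, and the synthetic data are illustrative assumptions, not the study's dataset or analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 545  # sample size from the abstract; all variables below are invented
df = pd.DataFrame({
    "anticipated_med_need": rng.integers(0, 11, n),   # hypothetical 0-10 item
    "surgical_anxiety": rng.integers(0, 11, n),
    "poor_sleep": rng.integers(0, 11, n),
    "preexisting_pain": rng.integers(0, 11, n),
    "pain_catastrophizing": rng.integers(0, 53, n),
    "skin_wheal_pain": rng.integers(0, 11, n),         # pragmatic QST score
})
# Synthetic 48-h pain outcome, loosely tied to two predictors for illustration.
df["pain_48h"] = (0.3 * df["anticipated_med_need"]
                  + 0.2 * df["surgical_anxiety"]
                  + rng.normal(0, 2, n))

models = {
    "three_item": "pain_48h ~ anticipated_med_need + surgical_anxiety + poor_sleep",
    "brief": ("pain_48h ~ anticipated_med_need + surgical_anxiety + poor_sleep"
              " + preexisting_pain + pain_catastrophizing"),
}
for name, formula in models.items():
    fit = smf.ols(formula, data=df).fit()
    print(f"{name}: adj R^2 = {fit.rsquared_adj:.3f}")

# Adding the pragmatic quantitative sensory test (pain with the local
# anesthetic skin wheal) as an extra covariate, as the study explored.
qst = smf.ols(models["brief"] + " + skin_wheal_pain", data=df).fit()
print(f"brief + QST: adj R^2 = {qst.rsquared_adj:.3f}")
```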


Subject(s)
Analgesics, Opioid; Postoperative Pain; Pregnancy; Humans; Female; Analgesics, Opioid/therapeutic use; Prospective Studies; Postoperative Pain/prevention & control; Surveys and Questionnaires; Phenotype
4.
Anesth Analg; 138(6): e37-e38, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38771606

Subject(s)
Humans
5.
BJA Open; 10: 100280, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38764485

ABSTRACT

Background: Patients are increasingly using artificial intelligence (AI) chatbots to seek answers to medical queries.

Methods: Ten frequently asked questions in anaesthesia were posed to three AI chatbots: ChatGPT4 (OpenAI), Bard (Google), and Bing Chat (Microsoft). Each chatbot's answers were evaluated in a randomised, blinded order by five residency programme directors from 15 medical institutions in the USA. Three medical content quality categories (accuracy, comprehensiveness, safety) and three communication quality categories (understandability, empathy/respect, ethics) were scored from 1 to 5 (1 representing worst, 5 representing best).

Results: ChatGPT4 and Bard outperformed Bing Chat (median [inter-quartile range] scores: 4 [3-4], 4 [3-4], and 3 [2-4], respectively; P<0.001 with all metrics combined). All AI chatbots performed poorly in accuracy (score ≥4 from 58%, 48%, and 36% of experts for ChatGPT4, Bard, and Bing Chat, respectively), comprehensiveness (score ≥4 from 42%, 30%, and 12% of experts, respectively), and safety (score ≥4 from 50%, 40%, and 28% of experts, respectively). Notably, answers from ChatGPT4, Bard, and Bing Chat differed statistically in comprehensiveness (ChatGPT4, 3 [2-4] vs Bing Chat, 2 [2-3], P<0.001; Bard, 3 [2-4] vs Bing Chat, 2 [2-3], P=0.002). All large language model chatbots performed well, with no statistical difference, for understandability (P=0.24), empathy (P=0.032), and ethics (P=0.465).

Conclusions: In answering anaesthesia patients' frequently asked questions, the chatbots performed well on communication metrics but were suboptimal on medical content metrics. Overall, ChatGPT4 and Bard were comparable to each other, and both outperformed Bing Chat.
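A minimal sketch of the style of comparison reported in the results: per-chatbot median [IQR] scores, an omnibus test across the three chatbots, then a pairwise follow-up. The abstract does not name the statistical tests, so the Kruskal-Wallis and Mann-Whitney choices below, the number of ratings, and the scores themselves are assumptions for illustration only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical 1-5 expert ratings for a single metric (e.g. comprehensiveness).
scores = {
    "ChatGPT4": rng.integers(2, 6, size=50),
    "Bard": rng.integers(2, 6, size=50),
    "Bing Chat": rng.integers(1, 5, size=50),
}

# Summarise each chatbot as median [inter-quartile range], as in the abstract.
for name, s in scores.items():
    q1, med, q3 = np.percentile(s, [25, 50, 75])
    print(f"{name}: median {med:.0f} [IQR {q1:.0f}-{q3:.0f}]")

# Omnibus comparison across the three chatbots.
h, p = stats.kruskal(*scores.values())
print(f"Kruskal-Wallis: H={h:.2f}, P={p:.3f}")

# Pairwise follow-up, e.g. ChatGPT4 vs Bing Chat.
u, p_pair = stats.mannwhitneyu(scores["ChatGPT4"], scores["Bing Chat"])
print(f"ChatGPT4 vs Bing Chat: U={u:.1f}, P={p_pair:.3f}")
```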
