1.
Clin Infect Dis; 78(4): 825-832, 2024 Apr 10.
Article in English | MEDLINE | ID: mdl-37823416

ABSTRACT

BACKGROUND: The development of chatbot artificial intelligence (AI) has raised major questions about its use in healthcare. We assessed the quality and safety of the management suggested by Chat Generative Pre-training Transformer 4 (ChatGPT-4) in real-life practice for patients with positive blood cultures. METHODS: Over a 4-week period in a tertiary care hospital, data from consecutive infectious diseases (ID) consultations for a first positive blood culture were prospectively provided to ChatGPT-4, which was asked to propose a comprehensive management plan (suspected/confirmed diagnosis, workup, antibiotic therapy, source control, follow-up). We compared the management plan suggested by ChatGPT-4 with the plan suggested by ID consultants based on literature and guidelines. Comparisons were performed by 2 ID physicians not involved in patient management. RESULTS: Forty-four cases with a first episode of positive blood culture were included. ChatGPT-4 provided detailed and well-written responses in all cases. The AI's diagnoses were identical to those of the consultant in 26 (59%) cases. Suggested diagnostic workups were satisfactory (ie, no missing important diagnostic tests) in 35 (80%) cases; empirical antimicrobial therapies were adequate in 28 (64%) cases and harmful in 1 (2%). Source control plans were inadequate in 4 (9%) cases. Definitive antibiotic therapies were optimal in 16 (36%) patients and harmful in 2 (5%). Overall, management plans were considered optimal in only 1 patient, satisfactory in 17 (39%), and harmful in 7 (16%). CONCLUSIONS: The use of ChatGPT-4 without consultant input remains hazardous when seeking expert medical advice in 2023, especially for severe IDs.
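The study's core procedure, feeding structured case data to ChatGPT-4 and requesting a plan under fixed headings, maps naturally onto a chat-completion call. A minimal sketch follows, assuming the openai Python SDK (v1.x) and an API key in the environment; the case details and prompt wording are illustrative, not the authors' actual protocol.

```python
# Sketch: submitting an anonymized positive-blood-culture case to GPT-4 and
# requesting a structured management plan, mirroring the study's procedure.
# Assumes the openai Python SDK (v1.x) and OPENAI_API_KEY in the environment;
# the case details and prompt wording are illustrative, not the authors'.
from openai import OpenAI

client = OpenAI()

case_summary = (  # hypothetical case, for illustration only
    "62-year-old man, first positive blood culture growing Escherichia coli, "
    "fever 39.1 C, recent urinary catheterization, no known allergies."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are an infectious diseases consultant."},
        {"role": "user",
         "content": (
             f"{case_summary}\n\n"
             "Propose a comprehensive management plan covering: suspected/"
             "confirmed diagnosis, diagnostic workup, empirical and definitive "
             "antibiotic therapy, source control, and follow-up."
         )},
    ],
)
print(response.choices[0].message.content)
```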


Subject(s)
Physicians, Sepsis, Humans, Artificial Intelligence, Prospective Studies, Software
2.
Int J Legal Med; 138(3): 1173-1178, 2024 May.
Article in English | MEDLINE | ID: mdl-38172326

ABSTRACT

Technology has greatly influenced and radically changed human life, from communication to creativity and from productivity to entertainment. The authors, starting from considerations concerning the implementation of new technologies with a strong impact on people's everyday lives, take up Collingridge's dilemma and relate it to the application of AI in healthcare. Collingridge's dilemma is an ethical and epistemological problem concerning the relationship between technology and society which involves two approaches. The proactive approach and socio-technological experimentation taken into account in the dilemma are discussed, the former taking health technology assessment (HTA) processes as a reference and the latter the AI studies conducted so far. To help prevent the critical issues raised, the use of the medico-legal method is proposed, which classically lies between the prevention of possible adverse events and the reconstruction of how these occurred. The authors believe that this methodology, adopted as a European guideline in the medico-legal field for the assessment of medical liability, can be adapted to AI applied to the healthcare scenario and used for the assessment of liability issues. The topic deserves further investigation and will certainly be taken into consideration as a possible key to future scenarios.


Subject(s)
Artificial Intelligence, Delivery of Health Care, Humans, Delivery of Health Care/methods, Liability, Legal
3.
Ann Emerg Med; 84(2): 128-138, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38483426

ABSTRACT

STUDY OBJECTIVE: The workload of clinical documentation contributes to health care costs and professional burnout. The advent of generative artificial intelligence language models presents a promising solution. The perspective of clinicians may contribute to effective and responsible implementation of such tools. This study sought to evaluate 3 uses of generative artificial intelligence for clinical documentation in pediatric emergency medicine, measuring time savings, effort reduction, and physician attitudes and identifying potential risks and barriers. METHODS: This mixed-methods study was performed with 10 pediatric emergency medicine attending physicians from a single pediatric emergency department. Participants were asked to write a supervisory note for 4 clinical scenarios of varying complexity, twice without any assistance and twice with the assistance of ChatGPT Version 4.0. Participants also evaluated 2 additional ChatGPT-generated clinical summaries: a structured handoff and a visit summary for a family written at an 8th-grade reading level. Finally, a semistructured interview was performed to assess physicians' perspectives on the use of ChatGPT in pediatric emergency medicine. Main outcomes and measures included between-subjects comparisons of the effort and time taken to complete the supervisory note with and without ChatGPT assistance. Effort was measured using a self-reported Likert scale of 0 to 10. Physicians' scoring of and attitudes toward the ChatGPT-generated summaries were measured using a 0 to 10 Likert scale and open-ended questions. Summaries were scored for completeness, accuracy, efficiency, readability, and overall satisfaction. A thematic analysis was performed to analyze the content of the open-ended questions and identify key themes. RESULTS: ChatGPT yielded a 40% reduction in time and a 33% decrease in effort for supervisory notes in intricate cases, with no discernible effect on simpler notes. ChatGPT-generated summaries for structured handoffs and family letters were highly rated, ranging from 7.0 to 9.0 out of 10, and most participants favored their inclusion in clinical practice. However, there were several critical reservations, from which a set of general recommendations for applying ChatGPT to clinical summaries was formulated. CONCLUSION: Pediatric emergency medicine attendings in our study perceived that ChatGPT can deliver high-quality summaries while saving time and effort in many scenarios, but not all.
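The headline comparison, time and effort for the same note-writing task with and without ChatGPT assistance, can be illustrated with a simple paired analysis. This is a sketch under assumptions: the timings are invented placeholders, and the Wilcoxon signed-rank test stands in for whatever comparison the authors actually ran (the paper describes between-subjects comparisons).

```python
# Sketch: paired comparison of note-writing time for the same physicians
# with vs. without ChatGPT assistance. The timings are invented
# placeholders, and the Wilcoxon signed-rank test is an assumed choice;
# the paper reports between-subjects comparisons.
from scipy.stats import wilcoxon

minutes_unassisted = [12.0, 15.5, 9.0, 14.0, 11.5, 13.0, 16.0, 10.5, 12.5, 14.5]
minutes_assisted   = [ 7.5,  9.0, 8.5,  8.0,  7.0,  8.5, 10.0,  9.5,  7.5,  9.0]

stat, p = wilcoxon(minutes_unassisted, minutes_assisted)
reduction = 1 - sum(minutes_assisted) / sum(minutes_unassisted)
print(f"p = {p:.3f}; overall time reduction = {reduction:.0%}")
```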


Subject(s)
Artificial Intelligence, Emergency Service, Hospital, Humans, Physicians/psychology, Female, Male, Attitude of Health Personnel, Pediatric Emergency Medicine, Documentation/methods, Documentation/standards, Emergency Medicine, Electronic Health Records, Adult
4.
Anesth Analg; 138(5): 938-950, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38055624

ABSTRACT

BACKGROUND: This study explored physician anesthesiologists' knowledge, exposure, and perceptions of artificial intelligence (AI) and their associations with attitudes and expectations regarding its use in clinical practice. The findings highlight the importance of understanding anesthesiologists' perspectives for the successful integration of AI into anesthesiology, as AI has the potential to revolutionize the field. METHODS: A cross-sectional survey of 27,056 US physician anesthesiologists was conducted to assess their knowledge, perceptions, and expectations regarding the use of AI in clinical practice. The primary outcome measured was attitude toward the use of AI in clinical practice, with scores of 4 or 5 on a 5-point Likert scale indicating positive attitudes. The anticipated impact of AI on various aspects of professional work was measured using a 3-point Likert scale. Logistic regression was used to explore the relationship between participant responses and attitudes toward the use of AI in clinical practice. RESULTS: The 2021 survey of 27,056 US physician anesthesiologists received 1086 responses (4% response rate). Most respondents were male (71%) and active clinicians (93%); 34% were under 45 years of age. A majority of anesthesiologists (61%) had some knowledge of AI, and 48% had a positive attitude toward using AI in clinical practice. While most respondents believed that AI can improve health care efficiency (79%), timeliness (75%), and effectiveness (69%), they were concerned that its integration into anesthesiology could lead to decreased demand for anesthesiologists (45%) and decreased earnings (45%). Within a decade, respondents expected AI to outperform them in predicting adverse perioperative events (83%), formulating pain management plans (67%), and conducting airway exams (45%). The absence of algorithmic transparency (60%), an ambiguous environment regarding malpractice (47%), and the possibility of medical errors (47%) were cited as significant barriers to the use of AI in clinical practice. Respondents indicated that their motivation to use AI in clinical practice stemmed from its potential to enhance patient outcomes (81%), lower health care expenditures (54%), reduce bias (55%), and boost productivity (53%). Variables associated with positive attitudes toward AI use in clinical practice included male gender (odds ratio [OR], 1.7; P < .001), 20+ years of experience (OR, 1.8; P < .01), higher AI knowledge (OR, 2.3; P = .01), and greater AI openness (OR, 10.6; P < .01). Anxiety about future earnings was associated with negative attitudes toward AI use in clinical practice (OR, 0.54; P < .01). CONCLUSIONS: Understanding anesthesiologists' perspectives on AI is essential for the effective integration of AI into anesthesiology, as AI has the potential to revolutionize the field.
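The reported associations (e.g., OR 1.7 for male gender, OR 2.3 for higher AI knowledge) come from a logistic regression of positive attitude on respondent characteristics. A minimal sketch of that kind of model follows, assuming statsmodels; the column names and the handful of example rows are fabricated for illustration only.

```python
# Sketch: logistic regression of positive attitude toward clinical AI on
# respondent characteristics, reporting odds ratios. Column names and the
# example rows are fabricated for illustration; they are not survey data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "positive_attitude": [1, 0, 1, 1, 0, 1, 0, 1, 0, 1, 1, 0],  # Likert 4-5 -> 1
    "male":              [1, 0, 1, 1, 0, 0, 1, 1, 1, 0, 1, 0],
    "experience_20plus": [1, 0, 0, 1, 0, 1, 0, 1, 0, 1, 1, 0],
    "ai_knowledge":      [3, 1, 2, 3, 1, 2, 2, 3, 1, 2, 1, 2],  # higher = more
})

model = smf.logit(
    "positive_attitude ~ male + experience_20plus + ai_knowledge", data=df
).fit(disp=False)
print(np.exp(model.params))  # exponentiated coefficients = odds ratios
```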


Subject(s)
Anesthetics, Physicians, Humans, Male, Female, Anesthesiologists, Cross-Sectional Studies, Artificial Intelligence, Surveys and Questionnaires
5.
J Clin Densitom; 27(2): 101480, 2024.
Article in English | MEDLINE | ID: mdl-38401238

ABSTRACT

BACKGROUND: Artificial intelligence (AI) large language models (LLMs) such as ChatGPT have demonstrated the ability to pass standardized exams. These models are not trained for a specific task, but instead trained to predict sequences of text from large corpora of documents sourced from the internet. It has been shown that even models trained on this general task can pass exams in a variety of domain-specific fields, including the United States Medical Licensing Examination. We asked whether large language models would perform as well on much narrower subdomain tests designed for medical specialists. Furthermore, we wanted to better understand how progressive generations of GPT (generative pre-trained transformer) models may be evolving in the completeness and sophistication of their responses even while generational training remains general. In this study, we evaluated the performance of two versions of GPT (GPT-3 and GPT-4) on their ability to pass the certification exam given to physicians to work as osteoporosis specialists and become certified clinical densitometrists (CCDs). The CCD exam has a possible score range of 150 to 400; a score of 300 is required to pass. METHODS: A 100-question multiple-choice practice exam was obtained from a third-party exam preparation website that mimics the accredited certification tests given by the ISCD (International Society for Clinical Densitometry). The exam was administered to two versions of GPT, the free version (GPT Playground) and ChatGPT+, which are based on GPT-3 and GPT-4, respectively (OpenAI, San Francisco, CA). The systems were prompted with the exam questions verbatim. If the response was purely textual and did not specify which of the multiple-choice answers to select, the authors matched the text to the closest answer. Each exam was graded and an estimated ISCD score was provided by the exam website. In addition, each response was evaluated by a rheumatologist CCD and ranked for accuracy using a 5-level scale. The two GPT versions were compared in terms of response accuracy and length. RESULTS: The average response length was 11.6 ± 19 words for GPT-3 and 50.0 ± 43.6 words for GPT-4. GPT-3 answered 62 questions correctly, resulting in a failing ISCD score of 289, whereas GPT-4 answered 82 questions correctly, with a passing score of 342. GPT-3 scored highest on the "Overview of Low Bone Mass and Osteoporosis" category (72% correct), while GPT-4 scored well above 80% accuracy on all categories except "Imaging Technology in Bone Health" (65% correct). Regarding subjective accuracy, GPT-3 answered 23 questions with nonsensical or totally wrong responses, while GPT-4 had no responses in that category. CONCLUSION: If this had been an actual certification exam, GPT-4 would now have a CCD suffix to its name, even after being trained using general internet knowledge. Clearly, more goes into physician training than can be captured in this exam. However, GPT algorithms may prove to be valuable physician aids in the diagnosis and monitoring of osteoporosis and other diseases.
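One mechanical step the methods describe, mapping a free-text model answer onto the closest multiple-choice option, is easy to sketch. The following is one simple way to approximate it with the standard library; the question options and model response are hypothetical.

```python
# Sketch: mapping a free-text GPT answer onto the closest multiple-choice
# option, one simple way to approximate the authors' matching step.
# The question options and the model response are hypothetical.
import difflib

options = {
    "A": "Dual-energy X-ray absorptiometry of the lumbar spine and hip",
    "B": "Quantitative ultrasound of the calcaneus",
    "C": "Plain radiography of the thoracic spine",
    "D": "Single-photon absorptiometry of the forearm",
}

gpt_free_text = "The preferred test is DXA scanning of the spine and hip."

best = max(
    options,
    key=lambda k: difflib.SequenceMatcher(
        None, gpt_free_text.lower(), options[k].lower()
    ).ratio(),
)
print(f"Free-text response mapped to option {best}")
```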


Subject(s)
Artificial Intelligence, Certification, Humans, Osteoporosis/diagnosis, Clinical Competence, Educational Measurement/methods, United States
6.
Clin Exp Dermatol; 49(7): 715-718, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38180108

ABSTRACT

BACKGROUND: ChatGPT is a free artificial intelligence (AI)-based natural language processing tool that generates complex responses to inputs from users. OBJECTIVES: To determine whether ChatGPT is able to generate high-quality responses to patient-submitted questions in the patient portal. METHODS: Patient-submitted questions and the corresponding responses from their dermatology physician were extracted from the electronic medical record for analysis. The questions were input into ChatGPT (version 3.5) and the outputs extracted for analysis, with manual removal of verbiage pertaining to ChatGPT's inability to provide medical advice. Ten blinded reviewers (seven physicians and three nonphysicians) rated and selected their preference in terms of 'overall quality', 'readability', 'accuracy', 'thoroughness' and 'level of empathy' of the physician- and ChatGPT-generated responses. RESULTS: Thirty-one messages and responses were analysed. Physician-generated responses were vastly preferred over the ChatGPT responses by the physician and nonphysician reviewers and received significantly higher ratings for 'readability' and 'level of empathy'. CONCLUSIONS: The results of this study suggest that physician-generated responses to patients' portal messages are still preferred over ChatGPT responses, but generative AI tools may be helpful in generating first drafts of responses and providing information on education resources for patients.
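With ten reviewers rating paired physician and ChatGPT responses to the same 31 messages, the rating comparison is naturally a paired test. A sketch follows using the Wilcoxon signed-rank test on 'level of empathy' scores; the ratings are placeholders, and the test choice is an assumption, not necessarily the authors' analysis.

```python
# Sketch: paired test on reviewer ratings of physician vs. ChatGPT responses
# to the same portal messages (e.g., 'level of empathy' on a 0-10 scale).
# Ratings are placeholders; the test choice is an assumption.
from scipy.stats import wilcoxon

physician_empathy = [8, 7, 9, 8, 6, 9, 7, 8, 9, 7]
chatgpt_empathy   = [6, 5, 7, 6, 5, 6, 6, 5, 7, 6]

stat, p = wilcoxon(physician_empathy, chatgpt_empathy)
print(f"Wilcoxon signed-rank: W = {stat}, p = {p:.4f}")
```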


Subject(s)
Dermatology, Electronic Health Records, Natural Language Processing, Humans, Artificial Intelligence, Patient Portals, Physician-Patient Relations, Physicians/psychology
7.
Am J Emerg Med; 79: 161-166, 2024 May.
Article in English | MEDLINE | ID: mdl-38447503

ABSTRACT

BACKGROUND AND AIMS: Artificial intelligence (AI) models like GPT-3.5 and GPT-4 have shown promise across various domains but remain underexplored in healthcare. Emergency departments (EDs) rely on established scoring systems, such as the NIHSS and HEART score, to guide clinical decision-making. This study aims to evaluate the proficiency of GPT-3.5 and GPT-4 against experienced ED physicians in calculating five commonly used medical scores. METHODS: This retrospective study analyzed data from 150 patients who visited the ED over one week. Both AI models and two human physicians were tasked with calculating scores for the NIH Stroke Scale, Canadian Syncope Risk Score, Alvarado Score for Acute Appendicitis, Canadian CT Head Rule, and HEART Score. Cohen's kappa statistic and AUC values were used to assess inter-rater agreement and predictive performance, respectively. RESULTS: The highest level of agreement was observed between the human physicians (kappa = 0.681), while GPT-4 also showed moderate to substantial agreement with them (kappa values of 0.473 and 0.576). GPT-3.5 had the lowest agreement with the human scorers. Human physicians achieved a higher ROC-AUC on 3 of the 5 scores, although none of the differences were statistically significant; these results point to better predictive performance for human expertise than for currently available automated systems on this task. CONCLUSIONS: While AI models demonstrated some level of concordance with human expertise, they fell short in emulating the complex clinical judgments that physicians make. The study suggests that current AI models may serve as supplementary tools but are not ready to replace human expertise in high-stakes settings like the ED. Further research is needed to explore the capabilities and limitations of AI in emergency medicine.
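The agreement metric used throughout, Cohen's kappa between two scorers of the same cases, is straightforward to reproduce. A minimal sketch follows, assuming scikit-learn; the risk categories shown are invented examples, not study data.

```python
# Sketch: Cohen's kappa between a physician and GPT-4 assigning a risk
# category (e.g., from the HEART score) to the same patients. The
# categories shown are invented examples, not study data.
from sklearn.metrics import cohen_kappa_score

physician = ["low", "moderate", "high", "low", "moderate",
             "low", "high", "moderate", "low", "low"]
gpt4      = ["low", "moderate", "moderate", "low", "high",
             "low", "high", "moderate", "low", "moderate"]

kappa = cohen_kappa_score(physician, gpt4)
print(f"Cohen's kappa (physician vs. GPT-4): {kappa:.3f}")
```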


Subject(s)
Artificial Intelligence, Physicians, Humans, Canada, Retrospective Studies, Emergency Service, Hospital
8.
Arthroscopy; 40(7): 2080-2082, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38484923

ABSTRACT

ChatGPT is designed to provide accurate and reliable information to the best of its abilities based on the data input and the knowledge available. Thus, ChatGPT is being studied as a patient information tool. This artificial intelligence (AI) tool has been shown to frequently provide technically correct information, but with limitations. ChatGPT provides different answers to similar questions depending on the prompts, and patients may not have the expertise in prompting ChatGPT to elicit the best answer. (Prompting large language models has been shown to be a skill that can be improved.) Of greater concern, ChatGPT fails to provide sources or references for its answers. At present, ChatGPT cannot be relied upon to address patient questions; in the future, ChatGPT will improve. Today, AI requires physician expertise to interpret AI answers for patients.


Subject(s)
Artificial Intelligence, Humans, Patient Education as Topic, Clinical Competence, Surveys and Questionnaires
9.
Hum Resour Health; 21(1): 79, 2023 Oct 6.
Article in English | MEDLINE | ID: mdl-37803342

ABSTRACT

Health workforce planning has become a significant global problem, considering estimates of an 18 million healthcare provider shortfall by 2030. There are two mechanisms to address healthcare worker shortages: (1) domestic education of those professions and (2) integration of internationally educated health professionals. Integration of internationally educated health professionals into the Canadian healthcare system requires: (1) reductions in systemic and administrative barriers and (2) development, testing, and implementation of credential equivalency recognition systems. The goal of this scoping review was to identify systems that are employed to determine credential equivalency, with a focus on Canada. The scoping review was carried out by employing: (1) a systematic literature search and (2) a website and grey literature Google search of professional governing bodies from a selection of medical/allied healthcare professions, as well as other non-medical professions, such as law, engineering, and accounting. Seven databases were searched to identify relevant sources: MEDLINE, CINAHL Plus with Full Text, PsycINFO, SPORTDiscus, Academic Search Complete, Business Source Complete, and SCOPUS. The search strategy combined keywords, text terms, and medical subject headings (MeSH) and was carried out with the help of a health sciences librarian. Seven articles were included in the final manuscript review, from the following professions: nursing, psychology, engineering, pharmacy, and multiple health professions. Twenty-four health-related professional governing body websites were hand-searched to determine systems to evaluate international equivalency. Many systems were employed to determine equivalency, but none were automated or employed machine learning or artificial intelligence to guide the evaluation process.


Subject(s)
Artificial Intelligence, Health Occupations, Humans, Canada, Health Personnel, Health Workforce
10.
Hum Resour Health; 21(1): 45, 2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37312214

ABSTRACT

Artificial intelligence (AI) technologies and data science models may hold potential for enabling an understanding of global health inequities and supporting decision-making related to possible interventions. However, AI inputs should not perpetuate the biases and structural issues within our global societies that have created various health inequities. We need AI to be able to 'see' the full context of what it is meant to learn. AI trained with biased data produces biased outputs, and providing health workforce training with such outputs further contributes to the buildup of biases and structural inequities. The accelerating and intricately evolving technology and digitalization will influence the education and practice of health care workers. Before we invest in utilizing AI in health workforce training globally, it is important to make sure that multiple stakeholders from the global arena are included in the conversation to address the need for training in 'AI and the role of AI in training'. This is a daunting task for any one entity, and multi-sectoral interactions and solutions are needed. We believe that partnerships among various national, regional, and global stakeholders involved directly or indirectly with health workforce training, ranging from public health and clinical science training institutions to computer science, learning design, data science, technology companies, social science, law, and AI ethics, need to be developed in ways that enable the formation of equitable and sustainable Communities of Practice (CoP) to address the use of AI for global health workforce training. This paper lays out a framework for such a CoP.


Subject(s)
Artificial Intelligence, Health Workforce, Humans, Workforce, Educational Status, Learning
11.
Hum Resour Health; 20(1): 6, 2022 Mar 16.
Article in English | MEDLINE | ID: mdl-35292073

ABSTRACT

BACKGROUND: Despite the growth in mobile technologies (mHealth) to support Community Health Worker (CHW) supervision, the nature of mHealth-facilitated supervision remains underexplored. One strategy to support supervision at scale could be artificial intelligence (AI) modalities, including machine learning. We developed an open-access, machine learning web application (CHWsupervisor) to predictively code instant messages exchanged between CHWs based on supervisory interaction codes. We document the development and validation of the web app and report its predictive accuracy. METHODS: CHWsupervisor was developed using 2187 instant messages exchanged between CHWs and their supervisors in Uganda. The app was then validated on 1242 instant messages from a separate digital CHW supervisory network in Kenya. All messages from the training and validation data sets were manually coded by two independent human coders. The predictive performance of CHWsupervisor was determined by comparing the primary supervisory codes assigned by the web app against those assigned by the human coders and calculating observed percentage agreement and Cohen's kappa coefficients. RESULTS: Human inter-coder reliability for the primary supervisory category of messages across the training and validation datasets was 'substantial' to 'almost perfect', as suggested by observed percentage agreements of 88-95% and Cohen's kappa values of 0.7-0.91. In comparison to the human coders, the predictive accuracy of the CHWsupervisor web app was 'moderate', as suggested by observed percentage agreements of 73-78% and Cohen's kappa values of 0.51-0.56. CONCLUSIONS: Augmenting human coding is challenging because of the complexity of supervisory exchanges, which often require nuanced interpretation. Practitioners should keep a realistic understanding of the potential of machine learning approaches in mind: although these approaches hold promise, supportive supervision still requires a level of human expertise. Scaling up digital CHW supervision may therefore prove challenging. TRIAL REGISTRATION: This was not a clinical trial and was therefore not registered as such.
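The paper does not specify CHWsupervisor's model, so the following is only a generic sketch of how instant messages can be auto-coded with a supervised text classifier and validated against human coders via Cohen's kappa; the messages, code labels, and the TF-IDF-plus-logistic-regression choice are all assumptions.

```python
# Generic sketch of machine-assisted coding of supervisory messages,
# validated against a human coder with Cohen's kappa. All messages, labels,
# and the model choice are illustrative assumptions, not CHWsupervisor's
# actual implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import cohen_kappa_score
from sklearn.pipeline import make_pipeline

train_messages = [
    "Please remember to submit your weekly report",
    "Well done on completing all your home visits",
    "How do I record a referral for a sick child?",
    "We have run out of malaria test kits at my post",
    "Do not forget Friday's review meeting",
    "Great work following up on that newborn visit",
]
train_codes = ["instruction", "encouragement", "question",
               "problem_report", "instruction", "encouragement"]

coder = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
coder.fit(train_messages, train_codes)

# Compare the app's predicted codes with a human coder on held-out messages.
validation_messages = [
    "Good job on the immunization campaign",
    "Remember to submit the stock report",
]
human_codes = ["encouragement", "instruction"]
predicted = coder.predict(validation_messages)
print(predicted, cohen_kappa_score(human_codes, predicted))
```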


Subject(s)
Community Health Workers, Mobile Applications, Access to Information, Artificial Intelligence, Community Health Workers/education, Humans, Kenya, Machine Learning, Reproducibility of Results, Uganda
13.
Invest Radiol; 59(5): 404-412, 2024 May 01.
Article in English | MEDLINE | ID: mdl-37843828

ABSTRACT

PURPOSE: The aim of this study was to evaluate the impact of implementing an artificial intelligence (AI) solution for emergency radiology into clinical routine on physicians' perception and knowledge. MATERIALS AND METHODS: A prospective interventional survey was performed pre-implementation and 3 months post-implementation of an AI algorithm for fracture detection on radiographs in late 2022. Radiologists and traumatologists were asked about their knowledge and perception of AI on a 7-point Likert scale (-3, "strongly disagree"; +3, "strongly agree"). Self-generated identification codes allowed matching the same individuals pre-intervention and post-intervention and using the Wilcoxon signed-rank test for paired data. RESULTS: A total of 47/71 matched participants completed both surveys (66% follow-up rate) and were eligible for analysis (34 radiologists [72%], 13 traumatologists [28%], 15 women [32%]; mean age, 34.8 ± 7.8 years). Post-intervention, agreement increased that AI "reduced missed findings" (1.28 [pre] vs 1.94 [post], P = 0.003) and made readers "safer" (1.21 vs 1.64, P = 0.048), but not "faster" (0.98 vs 1.21, P = 0.261). Disagreement increased that AI could "replace the radiological report" (-2.04 vs -2.34, P = 0.038), and self-reported knowledge about "clinical AI," its "chances," and its "risks" rose (0.40 vs 1.00, 1.21 vs 1.70, and 0.96 vs 1.34; all P's ≤ 0.028). Radiologists used AI results more frequently than traumatologists (P < 0.001) and rated the benefits higher (all P's ≤ 0.038), whereas senior physicians were less likely to use AI or endorse its benefits (negative correlation with age, -0.35 to -0.30; all P's ≤ 0.046). CONCLUSIONS: Implementing AI for emergency radiology into clinical routine has an educative aspect and underlines the concept of AI as a "second reader" that supports rather than replaces physicians.
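The matched pre/post design reduces to a Wilcoxon signed-rank test on paired Likert responses, as sketched below with SciPy; the response values are placeholders, not study data.

```python
# Sketch: paired pre/post comparison of 7-point Likert responses
# (-3 "strongly disagree" to +3 "strongly agree") for the same matched
# participants. The responses below are illustrative placeholders.
from scipy.stats import wilcoxon

pre  = [1, 2, 0, 1, 2, 1, 1, 3, 0, 2, 1, 2]   # e.g., "AI reduces missed findings"
post = [2, 2, 1, 3, 3, 2, 1, 3, 1, 3, 2, 2]   # same individuals, post-implementation

stat, p = wilcoxon(pre, post)
print(f"Wilcoxon signed-rank: W={stat}, p={p:.3f}")
```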


Subject(s)
Physicians, Radiology, Female, Humans, Adult, Artificial Intelligence, Prospective Studies, Perception
14.
Chest; 166(1): 157-170, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38295950

ABSTRACT

BACKGROUND: Chest radiographs (CXRs) are still of crucial importance in primary diagnostics, but their interpretation poses difficulties at times. RESEARCH QUESTION: Can a convolutional neural network-based artificial intelligence (AI) system that interprets CXRs add value in an emergency unit setting? STUDY DESIGN AND METHODS: A total of 563 CXRs acquired in the emergency unit of a major university hospital were retrospectively assessed twice by three board-certified radiologists, three radiology residents, and three emergency unit-experienced nonradiology residents (NRRs). They used a two-step reading process: (1) without AI support; and (2) with AI support providing additional images with AI overlays. Suspicion of four pathologies (pleural effusion, pneumothorax, consolidations suspicious for pneumonia, and nodules) was reported on a five-point confidence scale. Confidence scores of the board-certified radiologists were converted into four binary reference standards of different sensitivities. Performance of radiology residents and NRRs without and with AI support was statistically compared by using receiver operating characteristic (ROC) curves, Youden statistics, and operating-point metrics derived from fitted ROC curves. RESULTS: NRRs significantly improved performance, sensitivity, and accuracy with AI support in all four pathologies tested. In the most sensitive reference standard (reference standard IV), NRR consensus improved the area under the ROC curve (mean, 95% CI) in the detection of the time-critical pathology pneumothorax from 0.846 (0.785-0.907) without AI support to 0.974 (0.947-1.000) with AI support (P < .001), which represented a gain of 30% in sensitivity and 2% in accuracy (while maintaining an optimized specificity). The most pronounced effect was observed in nodule detection, with NRRs with AI support improving sensitivity by 53% and accuracy by 7% (area under the ROC curve without AI support, 0.723 [0.661-0.785]; with AI support, 0.890 [0.848-0.931]; P < .001). Radiology residents had smaller, mostly nonsignificant gains in performance, sensitivity, and accuracy with AI support. INTERPRETATION: We found that in an emergency unit setting without 24/7 radiology coverage, the presented AI solution provides an excellent clinical support tool for nonradiologists, similar to a second reader, and allows for a more accurate primary diagnosis and thus earlier therapy initiation.
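The reader-performance metrics used here, area under the ROC curve and Youden-optimal operating points derived from confidence scores against a binary reference standard, can be reproduced in a few lines. A minimal sketch follows, assuming scikit-learn; all values are placeholders.

```python
# Sketch: ROC/Youden reader metrics computed from five-point confidence
# scores against a binary reference standard. All values are placeholders.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

reference  = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])        # pathology present?
confidence = np.array([1, 2, 4, 5, 2, 3, 1, 5, 4, 3]) / 5.0  # reader confidence

fpr, tpr, thresholds = roc_curve(reference, confidence)
auc = roc_auc_score(reference, confidence)
j = tpr - fpr                      # Youden's J at each operating point
best = j.argmax()
print(f"AUC = {auc:.3f}; Youden-optimal threshold = {thresholds[best]:.2f}, "
      f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")
```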


Subject(s)
Artificial Intelligence, Emergency Service, Hospital, Radiography, Thoracic, Humans, Radiography, Thoracic/methods, Retrospective Studies, Male, Female, Clinical Competence, Middle Aged, ROC Curve, Adult, Aged
15.
Soc Sci Med; 347: 116717, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38518481

ABSTRACT

The advent of AI has ushered in a new era of patient care, but with it emerges a contentious debate surrounding accountability for algorithmic medical decisions. Within this discourse, a spectrum of views prevails, ranging from placing accountability on AI solution providers to laying it squarely on the shoulders of healthcare professionals. In response to this debate, this study, grounded in the mutualistic partner choice (MPC) model of the evolution of morality, seeks to establish a configurational framework for cultivating felt accountability towards AI among healthcare professionals. This framework underscores two pivotal conditions, AI ethics enactment and trusting belief in AI, and considers the influence of organizational complexity on its implementation. Drawing on a fuzzy-set qualitative comparative analysis (fsQCA) of a sample of 401 healthcare professionals, this study reveals that (a) focusing on justice and autonomy in AI ethics enactment, along with building trusting belief in AI reliability and functionality, reinforces healthcare professionals' sense of felt accountability towards AI; (b) in high-complexity hospitals, fostering felt accountability towards AI requires establishing trust in its functionality; and (c) in low-complexity hospitals, prioritizing justice in AI ethics enactment and trust in AI reliability is essential.
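The fsQCA machinery behind these findings rests on two standard quantities, Ragin's consistency and coverage for a candidate configuration. A minimal sketch follows; the configuration named and the fuzzy membership scores are illustrative placeholders, not the study's data.

```python
# Sketch: core fsQCA sufficiency metrics (Ragin's consistency and coverage)
# for one candidate configuration, e.g. "justice-focused ethics enactment
# AND trust in AI reliability" -> felt accountability towards AI.
# The fuzzy membership scores are illustrative placeholders.
import numpy as np

justice_ethics      = np.array([0.9, 0.7, 0.2, 0.8, 0.4, 0.6])
trust_reliability   = np.array([0.8, 0.6, 0.3, 0.9, 0.2, 0.7])
felt_accountability = np.array([0.9, 0.7, 0.4, 0.8, 0.3, 0.6])

# Fuzzy-set AND is the elementwise minimum of the condition memberships.
configuration = np.minimum(justice_ethics, trust_reliability)

overlap = np.minimum(configuration, felt_accountability).sum()
consistency = overlap / configuration.sum()        # how reliably X implies Y
coverage    = overlap / felt_accountability.sum()  # how much of Y X explains
print(f"consistency = {consistency:.2f}, coverage = {coverage:.2f}")
```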


Subject(s)
Artificial Intelligence, Social Responsibility, Humans, Reproducibility of Results, Social Justice, Delivery of Health Care
16.
J R Coll Physicians Edinb; 54(1): 84-88, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38523064

ABSTRACT

Person-centered care is presently the standard healthcare model, which emphasizes shared clinical decision-making, patient autonomy, and empowerment. However, many aspects of modern-day clinical practice, such as the increased reliance on medical technologies, artificial intelligence, and teleconsultation, have significantly altered the quality of patient-physician communications. Moreover, many countries are facing an aging population with longer life expectancies but increasingly complex medical comorbidities, which, coupled with medical subspecialization and competing health systems, often leads to fragmentation of clinical care. In this article, I discuss what it truly means for a clinician to know a patient, which is, in fact, a highly intricate skill that is necessary to meet the high bar of person-centered care. I suggest that this can be achieved through the implementation of a holistic biopsychosocial model of clinical consultation at the physician level and by fostering coordination and continuity of care at the health-system level.


Subject(s)
Artificial Intelligence, Physicians, Humans, Aged, Physicians/psychology, Patient-Centered Care, Physician-Patient Relations, Clinical Decision-Making
19.
JAMA Intern Med; 184(5): 581-583, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38557971

ABSTRACT

This cross-sectional study assesses the ability of a large language model to process medical data and display clinical reasoning compared with the ability of attending physicians and residents.


Subject(s)
Artificial Intelligence, Clinical Reasoning, Humans, Physicians/psychology, Male, Female