Results 1 - 20 of 28
2.
Appl Ergon; 118: 104275, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38574594

ABSTRACT

Weaning patients from ventilation in intensive care units (ICUs) is a complex task. There is a growing desire to build decision-support tools to help clinicians during this process, especially tools employing Artificial Intelligence (AI). However, tools built for this purpose should fit within, and ideally improve, the current work environment to ensure they can successfully integrate into clinical practice. To do so, it is important to identify areas where decision-support tools may aid clinicians, and the associated design requirements for such tools. This study analysed the work context surrounding weaning from mechanical ventilation in ICU environments via cognitive task and work domain analyses, describing both the cognitive processes clinicians perform during weaning and the constraints and affordances of the work environment itself. The study identified a number of weaning-process tasks where decision-support tools may prove beneficial, and from these a set of contextual design requirements was created. This work benefits researchers interested in creating human-centred decision-support tools for mechanical ventilation that are sensitive to the wider work system.


Subject(s)
Intensive Care Units; Ventilator Weaning; Humans; Ventilator Weaning/methods; Male; Female; Adult; Respiration, Artificial; Middle Aged; Task Performance and Analysis; Decision Support Techniques; Artificial Intelligence; Decision Support Systems, Clinical
3.
Stud Health Technol Inform; 310: 374-378, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269828

ABSTRACT

Collaboration across disciplinary boundaries is vital to address the complex challenges and opportunities in Digital Health. We present findings and experiences from applying the principles of Team Science to a digital health research project called 'The Wearable Clinic'. Challenges faced included a lack of shared understanding of key terminology and concepts, and differences in publication cultures between disciplines. We also encountered more profound discrepancies relating to definitions of 'success' in a research project. We recommend that collaborative digital health research projects select a formal Team Science methodology from the outset.


Subject(s)
Digital Health; Wearable Electronic Devices; Interdisciplinary Research; Learning; Ambulatory Care Facilities
4.
BMJ Qual Saf; 33(3): 145-148, 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38050114
5.
Int J Qual Health Care; 35(4), 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37750687

ABSTRACT

In the last six years, hospitals in developed countries have been trialling command centres to improve organizational efficiency and patient care. However, the impact of these command centres has not been systematically studied. This is a retrospective population-based study. Participants were patients who visited the Accident and Emergency (A&E) Department of Bradford Royal Infirmary between 1 January 2018 and 31 August 2021. Outcomes were patient flow (measured as A&E waiting time, length of stay, and time to be seen by a clinician) and data quality (measured by the proportion of missing treatment and assessment dates and valid transitions between A&E care stages). Interrupted time-series segmented regression and process mining were used for analysis. A&E transition time from patient arrival to assessment by a clinician marginally improved during the intervention period: there were decreases of 0.9 min [95% confidence interval (CI): 0.35-1.4], 3 min (95% CI: 2.4-3.5), 9.7 min (95% CI: 8.4-11.0), and 3.1 min (95% CI: 2.7-3.5) during the 'patient flow program', 'command centre display roll-in', 'command centre activation', and 'hospital-wide training program' periods, respectively. However, the transition time from start of treatment to conclusion of consultation increased by 11.5 min (95% CI: 9.2-13.9), 12.3 min (95% CI: 8.7-15.9), 53.4 min (95% CI: 48.1-58.7), and 50.2 min (95% CI: 47.5-52.9) for the respective four post-intervention periods. Furthermore, length of stay was not significantly impacted: the change was -8.8 h (95% CI: -17.6 to 0.08), -8.9 h (95% CI: -18.6 to 0.65), -1.67 h (95% CI: -10.3 to 6.9), and -0.54 h (95% CI: -13.9 to 12.8) during the four respective post-intervention periods. A similar pattern was observed for waiting times and time to be seen by a clinician. Data quality, as measured by the proportion of records with missing dates, was generally poor (treatment date = 42.7%; clinician-seen date = 23.4%) and did not significantly improve during the intervention periods. The findings suggest that a command centre package that includes process change and software technology does not appear to have a consistent positive impact on patient flow and data quality, based on the indicators and data we used. Hospitals considering introducing a command centre should therefore not assume there will be benefits in patient flow and data quality.
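As a rough illustration of the analytical approach, the sketch below sets up a segmented interrupted time-series regression for a weekly A&E metric. The phase start weeks, outcome column and noise model are illustrative assumptions, not values taken from the study.

```python
# Minimal sketch: segmented interrupted time-series regression for a weekly
# A&E metric. Phase start weeks and the outcome are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({"t": np.arange(190)})            # ~190 weeks, Jan 2018 - Aug 2021
df["arrival_to_assessment_min"] = 60 + 0.05 * df["t"] + rng.normal(0, 5, len(df))

# 0/1 level-shift indicators plus time-since-phase slope terms for each phase
phases = {"flow_program": 52, "display_rollin": 78,
          "activation": 104, "training": 130}       # assumed start weeks
for name, start in phases.items():
    df[name] = (df["t"] >= start).astype(int)
    df[f"t_{name}"] = (df["t"] - start).clip(lower=0)

model = smf.ols(
    "arrival_to_assessment_min ~ t + " + " + ".join(f"{p} + t_{p}" for p in phases),
    data=df,
).fit()
print(model.summary())   # phase coefficients estimate level and slope changes
```

In a real analysis the residuals would typically also be checked for autocorrelation (e.g. an AR(1) error structure) before interpreting the phase coefficients.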


Subject(s)
Hospitals; State Medicine; Humans; Retrospective Studies; Referral and Consultation; United Kingdom; Emergency Service, Hospital; Length of Stay
6.
Stud Health Technol Inform; 302: 38-42, 2023 May 18.
Article in English | MEDLINE | ID: mdl-37203605

ABSTRACT

Type 2 diabetes is a life-long health condition, and as it progresses, a range of comorbidities can develop. The prevalence of diabetes has increased gradually, and it is expected that 642 million adults will be living with diabetes by 2040. Early and proper interventions for managing diabetes-related comorbidities are important. In this study, we propose a Machine Learning (ML) model for predicting the risk of developing hypertension in patients who already have Type 2 diabetes. We used the Connected Bradford dataset, comprising 1.4 million patients, as our main dataset for data analysis and model building. Our analysis found that hypertension is the most frequent comorbidity among patients with Type 2 diabetes. Since hypertension is an important predictor of clinically poor outcomes, including heart, brain, and kidney disease, making early and accurate predictions of hypertension risk for Type 2 diabetic patients is crucial. We trained Naïve Bayes (NB), Neural Network (NN), Random Forest (RF), and Support Vector Machine (SVM) models, and then ensembled them to assess the potential performance improvement. The ensemble gave the best classification performance, with an accuracy of 0.9525 and a kappa value of 0.2183. We conclude that using ML to predict the risk of developing hypertension in Type 2 diabetic patients provides a promising stepping stone for preventing the progression of Type 2 diabetes.
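To make the modelling approach concrete, here is a minimal sketch of a soft-voting ensemble of the four named classifiers, evaluated with accuracy and Cohen's kappa. The data below are synthetic placeholders; the study's actual Connected Bradford features and preprocessing are not reproduced here.

```python
# Minimal sketch: soft-voting ensemble of NB, NN, RF and SVM, reporting
# accuracy and Cohen's kappa. Data are synthetic, not from Connected Bradford.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Placeholder imbalanced dataset standing in for hypertension labels
X, y = make_classification(n_samples=2000, n_features=15, weights=[0.7],
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

ensemble = VotingClassifier(
    estimators=[("nb", GaussianNB()),
                ("nn", MLPClassifier(max_iter=500)),
                ("rf", RandomForestClassifier(n_estimators=200)),
                ("svm", SVC(probability=True))],  # probability=True enables soft voting
    voting="soft",
)
ensemble.fit(X_train, y_train)
pred = ensemble.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("kappa:   ", cohen_kappa_score(y_test, pred))
```

The gap between a high accuracy and a low kappa, as reported in the abstract, is typical for imbalanced outcomes, which is why both metrics are shown.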


Subject(s)
Diabetes Mellitus, Type 2; Hypertension; Adult; Humans; Diabetes Mellitus, Type 2/diagnosis; Diabetes Mellitus, Type 2/epidemiology; Bayes Theorem; Machine Learning; Hypertension/diagnosis; Hypertension/epidemiology; Primary Health Care; Support Vector Machine
7.
BMJ Health Care Inform; 30(1), 2023 Jan.
Article in English | MEDLINE | ID: mdl-36697032

ABSTRACT

BACKGROUND: Command centres have been piloted in some hospitals across the developed world in the last few years. Their impact on patient safety, however, has not been systematically studied. Hence, we aimed to investigate this. METHODS: This is a retrospective population-based cohort study. Participants were patients who visited Bradford Royal Infirmary Hospital and Calderdale & Huddersfield hospitals between 1 January 2018 and 31 August 2021. A five-phase, interrupted time-series, linear regression analysis was used. RESULTS: After the introduction of a Command Centre, mortality and readmissions marginally improved, but there was no statistically significant impact on postoperative sepsis. In the intervention hospital, compared with the preintervention period, mortality decreased by 1.4% (95% CI 0.8% to 1.9%), 1.5% (95% CI 0.9% to 2.1%), 1.3% (95% CI 0.7% to 1.8%) and 2.5% (95% CI 1.7% to 3.4%) during successive phases of the command centre programme, including roll-in and activation of the technology and preparatory quality improvement work. However, at the control site, compared with baseline, weekly mortality also decreased, by 2.0% (95% CI 0.9 to 3.1), 2.3% (95% CI 1.1 to 3.5), 1.3% (95% CI 0.2 to 2.4) and 3.1% (95% CI 1.4 to 4.8) for the respective intervention phases. No impact on any of the indicators was observed when only the software technology part of the Command Centre was considered. CONCLUSION: Implementation of a hospital Command Centre may have a marginal positive impact on patient safety when implemented as part of a broader hospital-wide improvement programme, including colocation of operations and clinical leads in a central location. However, improvement in patient safety indicators was also observed over a comparable period at the control site. Further evaluative research into the impact of hospital command centres on a broader range of patient safety and other outcomes is warranted.
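One way the intervention-versus-control contrast described here could be expressed is as a site-by-phase interaction model, sketched below in a difference-in-differences style. The data, phase labels and column names are synthetic illustrations, not the study's actual specification.

```python
# Minimal sketch: site x phase interaction model contrasting weekly mortality
# changes at intervention and control sites. Data and labels are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
phases = ["baseline", "rollin", "activation", "qi_work", "full"]
rows = [{"site": site, "phase": ph, "mortality_pct": rng.normal(3.0, 0.4)}
        for site in ("intervention", "control")
        for ph in phases for _ in range(40)]        # 40 synthetic weeks per cell
df = pd.DataFrame(rows)

# The site:phase interaction terms estimate how much more (or less) mortality
# changed at the intervention site than at the control site in each phase.
model = smf.ols("mortality_pct ~ C(site) * C(phase, Treatment('baseline'))",
                data=df).fit()
print(model.params.filter(like=":"))                # interaction coefficients
```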


Subject(s)
Hospitals; Patients; Humans; Interrupted Time Series Analysis; Retrospective Studies; Cohort Studies
8.
Br Paramed J; 7(1): 36-42, 2022 Jun 01.
Article in English | MEDLINE | ID: mdl-36452023

ABSTRACT

Introduction: Early recognition of out-of-hospital cardiac arrest (OHCA) by ambulance service call centre operators is important so that cardiopulmonary resuscitation can be delivered immediately, but around 25% of OHCAs are not picked up by operators. An artificial intelligence (AI) system has been developed to support call centre operators in detecting OHCA. The study aims to (1) explore ambulance service stakeholder perceptions of the safety of OHCA AI decision support in call centres, and (2) develop a clinical safety case for the OHCA AI decision-support system. Methods and analysis: The study will be undertaken within the Welsh Ambulance Service and is part research and part service evaluation. The research utilises a qualitative study design based on thematic analysis of interview data; the service evaluation consists of the development of a clinical safety case based on document analysis, analysis of the AI model and its development process, and informal interviews with the technology developer. Conclusions: AI presents many opportunities for ambulance services, but its safety assurance requirements need to be understood. The ASSIST project will continue to explore and build the body of knowledge in this area.

9.
BMJ Health Care Inform; 29(1), 2022 Jul.
Article in English | MEDLINE | ID: mdl-35851286

ABSTRACT

OBJECTIVES: Establishing confidence in the safety of Artificial Intelligence (AI)-based clinical decision support systems is important prior to clinical deployment and regulatory approval for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis. METHODS: As part of the safety assurance, we defined four clinical hazards in sepsis resuscitation based on clinical expert opinion and the existing literature. We then identified a set of unsafe scenarios, intended to limit the action space of the AI agent with the goal of reducing the likelihood of hazardous decisions. RESULTS: Using a subset of the Medical Information Mart for Intensive Care (MIMIC-III) database, we demonstrated that our previously published AI Clinician recommended fewer hazardous decisions than human clinicians in three of our four predefined clinical scenarios; the difference was not statistically significant in the fourth. We then modified the reward function to satisfy our safety constraints and trained a new AI Clinician agent. The retrained model showed enhanced safety without negatively impacting model performance. DISCUSSION: While some contextual patient information absent from the data may have pushed human clinicians to take hazardous actions, the data were curated to limit the impact of this confounder. CONCLUSION: These advances provide a use case for the systematic safety assurance of AI-based clinical systems towards the generation of explicit safety evidence, which could be replicated for other AI applications or clinical contexts and inform medical device regulatory bodies.
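As an illustration of the general technique of penalising unsafe scenarios in a reward function, the sketch below wraps a base reward so that predefined unsafe (state, action) pairs are discouraged during training. The hazard predicate and penalty size are hypothetical; the paper's actual constraints and reward design are richer than this.

```python
# Minimal sketch: adding a safety penalty to an RL reward so that actions
# matching a predefined unsafe scenario are discouraged during training.
from typing import Callable

def make_safe_reward(base_reward: Callable, is_unsafe: Callable,
                     penalty: float = 10.0) -> Callable:
    """Wrap a reward function so unsafe (state, action) pairs are penalised."""
    def safe_reward(state: dict, action: dict) -> float:
        r = base_reward(state, action)
        if is_unsafe(state, action):
            r -= penalty                 # push the agent away from the hazard
        return r
    return safe_reward

# Hypothetical hazard: a large IV fluid bolus when the patient is already
# fluid-overloaded (an example of the kind of scenario the paper formalises).
def fluid_overload_hazard(state: dict, action: dict) -> bool:
    return state.get("fluid_balance_ml", 0) > 5000 and action.get("iv_fluid_ml", 0) > 500

# Toy base reward keeping mean arterial pressure near 70 mmHg, for demonstration
base = lambda s, a: -abs(s.get("map_mmhg", 70) - 70) / 10
reward = make_safe_reward(base, fluid_overload_hazard)
print(reward({"fluid_balance_ml": 6000, "map_mmhg": 65}, {"iv_fluid_ml": 1000}))
```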


Subject(s)
Decision Support Systems, Clinical; Sepsis; Artificial Intelligence; Critical Care; Humans; Sepsis/therapy
10.
Stud Health Technol Inform; 290: 364-368, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673036

ABSTRACT

The fourth industrial revolution is based on cyber-physical systems and the connectivity of devices. It is currently unclear what the consequences for patient safety will be as existing digital health technologies become ubiquitous at an increasing pace and interact in unforeseen ways. In this paper, we describe the output from a workshop focused on identifying the patient safety challenges associated with emerging digital health technologies. We discuss six challenges identified in the workshop and present recommendations to address the patient safety concerns they pose. A key implication of considering the challenges and opportunities for Patient Safety Informatics is the interdisciplinary contribution required to study digital health technologies within their embedded context. The principles underlying our recommendations are those of proactive, systems-based approaches that relate the social, technical and regulatory facets underpinning patient safety informatics theory and practice.


Subject(s)
Medical Informatics; Patient Safety; Humans; Interdisciplinary Studies
11.
BMJ Open; 12(3): e054090, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-35232784

ABSTRACT

INTRODUCTION: This paper presents a mixed-methods study protocol that will be used to evaluate a recent implementation of a real-time, centralised hospital command centre in the UK. The command centre represents a complex intervention within a complex adaptive system. It could support better operational decision-making and facilitate the identification and mitigation of threats to patient safety. There is, however, limited research on the impact of such complex health information technology on patient safety, reliability and the operational efficiency of healthcare delivery, and this study aims to help address that gap. METHODS AND ANALYSIS: We will conduct a longitudinal mixed-methods evaluation informed by public and patient involvement and engagement. Interviews and ethnographic observations will inform iterations of quantitative analysis that will, in turn, sensitise further qualitative work. The quantitative work will take an iterative approach to identify relevant outcome measures both from the literature and, pragmatically, from datasets of routinely collected electronic health records. ETHICS AND DISSEMINATION: This protocol has been approved by the University of Leeds Engineering and Physical Sciences Research Ethics Committee (#MEEC 20-016) and the National Health Service Health Research Authority (IRAS No.: 285933). Our results will be communicated through peer-reviewed publications in international journals and conferences. We will provide ongoing feedback as part of our engagement work with local trust stakeholders.


Subject(s)
Artificial Intelligence; State Medicine; Hospitals; Humans; Patient Participation; Reproducibility of Results
12.
Philos Trans A Math Phys Eng Sci; 379(2207): 20200363, 2021 Oct 04.
Article in English | MEDLINE | ID: mdl-34398656

ABSTRACT

In recent years, several new technical methods have been developed to make AI models more transparent and interpretable. These techniques are often referred to collectively as 'AI explainability' or 'XAI' methods. This paper presents an overview of XAI methods and links them to stakeholder purposes for seeking an explanation. Because the underlying stakeholder purposes are broadly ethical in nature, we see this analysis as a contribution towards bringing together the technical and ethical dimensions of XAI. We emphasize that the use of XAI methods must be linked to explanations of the human decisions made during the development life cycle. Situated within that wider accountability framework, our analysis may offer a helpful starting point for designers, safety engineers, service providers and regulators who need to make practical judgements about which XAI methods to employ or require. This article is part of the theme issue 'Towards symbiotic autonomous systems'.
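For readers unfamiliar with the family of techniques surveyed, the sketch below shows one widely used post hoc XAI method, permutation feature importance. The model and data are placeholders; the paper itself covers a much broader range of methods and stakeholder purposes.

```python
# Minimal sketch: permutation feature importance, one example of the post hoc
# XAI methods the paper surveys. Model and data are synthetic placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure the drop in model score:
# larger drops indicate features the model relies on more heavily.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```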

13.
Artif Intell Med; 117: 102087, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34127233

ABSTRACT

Weaning from mechanical ventilation covers the process of liberating the patient from mechanical support and removing the associated endotracheal tube. The management of weaning from mechanical ventilation comprises a significant proportion of the care of critically ill intubated patients in Intensive Care Units (ICUs). Both prolonged dependence on mechanical ventilation and premature extubation expose patients to an increased risk of complications and increased healthcare costs. This work aims to develop a decision-support model using routinely recorded patient information to predict extubation readiness. To do so, we deployed Convolutional Neural Networks (CNNs) to predict the most appropriate treatment action in the next hour for a given patient state, using historical ICU data extracted from MIMIC-III. The model achieved 86% accuracy and an area under the receiver operating characteristic curve (AUC-ROC) of 0.94. We also performed feature importance analysis for the CNN model and interpreted these features using the DeepLIFT method. The results show that the CNN model makes predictions using clinically meaningful and appropriate features. Finally, we implemented counterfactual explanations for the CNN model, which can help clinicians understand what feature changes for a particular patient would lead to a desirable outcome, i.e. readiness to extubate.
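A minimal sketch of the kind of 1D convolutional classifier described here is given below. The input window, channel sizes and action space are illustrative assumptions, not the paper's published architecture.

```python
# Minimal sketch: a small 1D CNN over hourly patient observations, predicting
# the next-hour treatment action. Shapes and layer sizes are assumptions.
import torch
import torch.nn as nn

class ExtubationCNN(nn.Module):
    def __init__(self, n_features: int = 20, n_actions: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_features, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),         # pool over the time axis
        )
        self.head = nn.Linear(64, n_actions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_features, window_hours)
        return self.head(self.conv(x).squeeze(-1))

model = ExtubationCNN()
logits = model(torch.randn(8, 20, 24))       # 8 patients, 24 h of 20 features
probs = torch.softmax(logits, dim=1)         # per-action probabilities
```

Post hoc attribution methods such as DeepLIFT, mentioned in the abstract, would then be applied to a trained model of this kind to inspect which inputs drive each prediction.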


Subject(s)
Neural Networks, Computer; Respiration, Artificial; Ventilator Weaning; Critical Illness; Humans; Intensive Care Units
14.
BMJ Qual Saf; 30(12): 1047-1050, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34045304
15.
Stud Health Technol Inform; 281: 659-663, 2021 May 27.
Article in English | MEDLINE | ID: mdl-34042658

ABSTRACT

The use of conversational agents (CAs) in healthcare is an emerging field. CAs appear to be effective in accomplishing administrative tasks, e.g. providing the locations of care facilities and scheduling appointments. Modern CAs use machine learning (ML) to recognize and understand user input and to generate a response. Given the criticality of many healthcare settings, errors in ML and other components may result in CA failures and adverse effects on patients. Therefore, in-depth assurance is required before ML is deployed in critical clinical applications, e.g. management of medication doses or medical diagnosis. CA safety issues can arise from diverse causes, e.g. user interactions, environmental factors and ML errors. In this paper, we classify failures of perception (recognition and understanding) in CAs and their sources. We also present a case study of a CA used to calculate insulin doses for gestational diabetes mellitus (GDM) patients. We then correlate the identified perception failures to potential scenarios that might compromise patient safety.
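One common mitigation for the perception failures classified here is to reject low-confidence recognitions rather than act on them. The sketch below illustrates this idea; the recogniser interface, intent names and thresholds are hypothetical assumptions, not from the case study.

```python
# Minimal sketch: rejecting low-confidence intent recognitions instead of
# acting on them - one mitigation for perception failures in a safety-critical
# CA. Recogniser, intents and thresholds are illustrative assumptions.
SAFETY_CRITICAL_INTENTS = {"insulin_dose_query"}
GENERAL_THRESHOLD = 0.85      # assumed; would need empirical tuning
CRITICAL_THRESHOLD = 0.99

def dispatch(intent: str) -> str:
    return f"handling intent: {intent}"      # stub for the real handler

def handle_utterance(recognise, utterance: str) -> str:
    intent, confidence = recognise(utterance)
    if confidence < GENERAL_THRESHOLD:
        return "Sorry, I didn't catch that. Could you rephrase?"
    if intent in SAFETY_CRITICAL_INTENTS and confidence < CRITICAL_THRESHOLD:
        # escalate rather than risk an incorrect dose calculation
        return "Let me connect you with a clinician to confirm your dose."
    return dispatch(intent)

# toy recogniser for demonstration
print(handle_utterance(lambda u: ("insulin_dose_query", 0.9), "How much insulin?"))
```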


Subject(s)
Communication; Patient Safety; Delivery of Health Care; Humans; Perception
16.
J Biomed Inform; 117: 103762, 2021 May.
Article in English | MEDLINE | ID: mdl-33798716

ABSTRACT

Machine learning (ML) has the potential to bring significant clinical benefits. However, there are patient safety challenges in introducing ML into complex healthcare settings and in assuring the technology to the satisfaction of the different regulators. The work presented in this paper tackles the urgent problem of proactively assuring ML in its clinical context as a step towards enabling the safe introduction of ML into clinical practice. In particular, the paper considers the use of deep Reinforcement Learning, a type of ML, for sepsis treatment. The methodology starts with the modelling of a clinical workflow that integrates the ML model for sepsis treatment recommendations. Safety analysis is then carried out based on the clinical workflow, identifying hazards and safety requirements for the ML model. In this paper, the design of the ML model is enhanced to satisfy the safety requirements for mitigating a major clinical hazard: a sudden change of vasopressor dose. A rigorous evaluation is conducted to show how these requirements are met. A safety case is presented, providing a basis for regulators to judge the acceptability of introducing the ML model into sepsis treatment in a healthcare setting. The overall argument is broad in considering the wider patient safety considerations, but the detailed rationale and supporting evidence relate to this specific hazard. Whilst there are no agreed regulatory approaches to introducing ML into healthcare, the work presented here shows a possible direction for overcoming this barrier and exploiting the benefits of ML without compromising safety.
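Beyond enhancing the model design as the paper does, a requirement like "no sudden change of vasopressor dose" can also be checked at runtime. The sketch below clamps successive dose recommendations as one illustrative guard; the maximum step size is an assumed value, not the paper's validated limit.

```python
# Minimal sketch: enforcing a 'no sudden vasopressor change' requirement as a
# guard between the ML recommendation and the dose that is actually proposed.
MAX_STEP_MCG_KG_MIN = 0.1   # assumed cap on dose change per decision cycle

def constrain_vasopressor_dose(recommended: float, current: float) -> float:
    """Clamp the recommended dose so it differs from the current dose by at
    most MAX_STEP_MCG_KG_MIN, mitigating the sudden-change hazard."""
    low = current - MAX_STEP_MCG_KG_MIN
    high = current + MAX_STEP_MCG_KG_MIN
    return min(max(recommended, low), high)

# e.g. current dose 0.25, model suggests 0.60 -> capped at 0.35
print(constrain_vasopressor_dose(0.60, 0.25))
```

A runtime guard of this kind complements, rather than replaces, design-time mitigation: it provides a defence in depth if the model still emits an out-of-bounds recommendation.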


Subject(s)
Machine Learning; Sepsis; Delivery of Health Care; Humans; Sepsis/diagnosis; Sepsis/therapy; Workflow
18.
Stud Health Technol Inform; 270: 1367-1368, 2020 Jun 16.
Article in English | MEDLINE | ID: mdl-32570662

ABSTRACT

We discuss the preliminary safety analysis of a smartphone-based intervention for early detection of psychotic relapse. We briefly describe how we identified patient safety hazards associated with the system and how measures were defined to mitigate these hazards.


Subject(s)
Mental Disorders; Wearable Electronic Devices; Humans; Mobile Applications; Recurrence; Smartphone
19.
Bull World Health Organ; 98(4): 251-256, 2020 Apr 01.
Article in English | MEDLINE | ID: mdl-32284648

ABSTRACT

The prospect of patient harm caused by the decisions made by an artificial intelligence-based clinical tool is something to which current practices of accountability and safety worldwide have not yet adjusted. We focus on two aspects of clinical artificial intelligence used for decision-making: moral accountability for harm to patients; and safety assurance to protect patients against such harm. Artificial intelligence-based tools are challenging the standard clinical practices of assigning blame and assuring safety. Human clinicians and safety engineers have weaker control over the decisions reached by artificial intelligence systems and less knowledge and understanding of precisely how the artificial intelligence systems reach their decisions. We illustrate this analysis by applying it to an example of an artificial intelligence-based system developed for use in the treatment of sepsis. The paper ends with practical suggestions for ways forward to mitigate these concerns. We argue for a need to include artificial intelligence developers and systems safety engineers in our assessments of moral accountability for patient harm. Meanwhile, none of the actors in the model robustly fulfil the traditional conditions of moral accountability for the decisions of an artificial intelligence system. We should therefore update our conceptions of moral accountability in this context. We also need to move from a static to a dynamic model of assurance, accepting that considerations of safety are not fully resolvable during the design of the artificial intelligence system before the system has been deployed.


Subject(s)
Artificial Intelligence; Delivery of Health Care; Safety Management; Social Responsibility; Health Facilities
20.
Appl Ergon; 86: 103113, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32342897

ABSTRACT

Systems contradictions present challenges that need to be effectively managed, e.g. conflicting rules and advice, goal conflicts, and mismatches between demand and capacity. We apply the Functional Resonance Analysis Method (FRAM) to intravenous infusion practices in an intensive care unit (ICU) to explore how tensions and contradictions are managed by people. A multi-disciplinary team including individuals from nursing, medical, pharmacy, safety, IT and human factors backgrounds contributed to this analysis. A FRAM model investigation resulting in seven functional areas is described. A tabular analysis highlights significant areas of performance variability, e.g. administering medication before a prescription is issued, prioritising drugs, different degrees of double checking, and using sites showing early signs of infection for intravenous access. Our FRAM analysis has been non-normative: performance variability is not necessarily wanted or unwanted; it is merely necessary where system contradictions cannot easily be resolved and adaptive capacity is required to cope.


Subject(s)
Infusions, Intravenous/nursing; Intensive Care Units/organization & administration; Systems Analysis; Work Performance; England; Humans; Organizational Case Studies