Results 1 - 20 of 80
1.
J Cardiovasc Nurs ; 37(3): E61-E72, 2022.
Article in English | MEDLINE | ID: mdl-34238840

ABSTRACT

BACKGROUND: Adherence to secondary prevention measures among patients with coronary artery disease (CAD) affects patient prognosis, whereas patients' adherence behaviors change over time. OBJECTIVES: The aims of this study were to identify adherence trajectories to secondary prevention measures, including medication-taking and a heart-healthy lifestyle, and to estimate predictors of adherence trajectories among patients with CAD. METHODS: This longitudinal study enrolled 698 patients with CAD who received a percutaneous coronary intervention in China. Demographics, clinical characteristics, adherence to secondary prevention measures, and patient-related factors including disease knowledge, self-efficacy, and health literacy were measured during hospitalization. Adherence behaviors were followed at 1, 3, and 6 months, and 1 year after discharge. Group-based trajectory models estimated adherence trajectories, and multinomial logistic regression identified trajectory group predictors. RESULTS: Four trajectory groups were identified for medication-taking adherence: sustained adherence (39.9%), increasing and then decreasing adherence (23.1%), increasing adherence (23.4%), and nonadherence (13.6%). The 3 adherence trajectory groups for a heart-healthy lifestyle were sustained adherence (59.7%), increasing adherence (28.3%), and nonadherence (12.0%). Married patients were more likely (odds ratio [OR], 3.42; 95% confidence interval [CI], 1.56-7.52) to have sustained adherence to medication-taking. However, patients with higher disease knowledge were less likely (OR, 0.93; 95% CI, 0.87-0.99) to be adherent. Patients who were not working (OR, 2.25; 95% CI, 1.03-4.92), had higher self-efficacy (OR, 1.21; 95% CI, 1.08-1.37), or had higher health literacy (OR, 1.18; 95% CI, 1.01-1.38) were more likely to have sustained adherence to a heart-healthy lifestyle. However, patients having no coronary stents (OR, 0.36; 95% CI, 0.19-0.70) were less likely to have done so. CONCLUSIONS: Trajectories of adherence to secondary prevention measures among mainland Chinese patients with CAD are multipatterned. Healthcare providers should formulate targeted adherence support that considers the influence of disease knowledge, self-efficacy, and health literacy.
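
As a rough illustration of the two-stage analysis described above (trajectory grouping of repeated adherence measurements, then multinomial logistic regression on group membership), here is a minimal Python sketch. The data, column names, and the use of a Gaussian mixture as a stand-in for formal group-based trajectory modeling are all assumptions, not the authors' code.

```python
# Illustrative sketch only: synthetic data; a Gaussian mixture over repeated
# adherence scores stands in for group-based trajectory modeling.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 698  # cohort size from the abstract; all values below are synthetic
df = pd.DataFrame({
    "adh_1m": rng.uniform(0, 1, n), "adh_3m": rng.uniform(0, 1, n),
    "adh_6m": rng.uniform(0, 1, n), "adh_12m": rng.uniform(0, 1, n),
    "married": rng.integers(0, 2, n),
    "knowledge": rng.normal(20, 5, n),
    "self_efficacy": rng.normal(30, 6, n),
    "health_literacy": rng.normal(10, 3, n),
})

# Stage 1: group the repeated adherence measurements into trajectory-like clusters.
traj = GaussianMixture(n_components=4, random_state=0)
df["group"] = traj.fit_predict(df[["adh_1m", "adh_3m", "adh_6m", "adh_12m"]])

# Stage 2: multinomial logistic regression of baseline factors on group membership.
# (sklearn fits a softmax model; exponentiated coefficients are only a rough
# analogue of the reference-category odds ratios reported in the abstract.)
X = df[["married", "knowledge", "self_efficacy", "health_literacy"]]
clf = LogisticRegression(max_iter=1000).fit(X, df["group"])
print(pd.DataFrame(np.exp(clf.coef_), columns=X.columns).round(2))
```

Dedicated group-based trajectory software would additionally model group-specific polynomial trends over time; the mixture here only captures the general idea of grouping similar adherence patterns.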


Subject(s)
Coronary Artery Disease, Percutaneous Coronary Intervention, Coronary Artery Disease/drug therapy, Coronary Artery Disease/prevention & control, Humans, Longitudinal Studies, Medication Adherence, Secondary Prevention
2.
J Clin Monit Comput ; 36(2): 397-405, 2022 04.
Article in English | MEDLINE | ID: mdl-33558981

ABSTRACT

Big data analytics research using heterogeneous electronic health record (EHR) data requires accurate identification of disease phenotype cases and controls. Overreliance on ground truth determination based on administrative data can lead to biased and inaccurate findings. Hospital-acquired venous thromboembolism (HA-VTE) is challenging to identify due to its temporal evolution and variable EHR documentation. To establish ground truth for machine learning modeling, we compared the accuracy of HA-VTE diagnoses made by administrative coding to manual review of gold standard diagnostic test results. We performed a retrospective analysis of EHR data on 3680 adult stepdown unit patients to identify HA-VTE. International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes for VTE were identified. A total of 4544 radiology reports associated with VTE diagnostic tests were screened using terminology extraction and then manually reviewed by a clinical expert to confirm diagnosis. Of 415 cases with ICD-9-CM codes for VTE, 219 were identified with acute onset type codes. Test report review identified 158 new-onset HA-VTE cases. Only 40% of ICD-9-CM coded cases (n = 87) were confirmed by a positive diagnostic test report, leaving the majority of administratively coded cases unsubstantiated by a confirmatory diagnostic test. Additionally, 45% of diagnostic test-confirmed HA-VTE cases lacked corresponding ICD codes. ICD-9-CM coding missed diagnostic test-confirmed HA-VTE cases and inaccurately assigned cases without confirmed VTE, suggesting that dependence on administrative coding leads to inaccurate HA-VTE phenotyping. Alternative methods to develop more sensitive and specific VTE phenotype solutions portable across EHR vendor data are needed to support case-finding in big-data analytics.
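
The coding-versus-gold-standard comparison reduces to simple proportions over a 2 x 2 cross-classification. The sketch below recomputes positive predictive value and sensitivity from the counts reported in the abstract; the variable names are ours.

```python
# Recompute agreement statistics from the counts reported in the abstract.
coded_acute = 219       # cases with acute-onset VTE ICD-9-CM codes
test_confirmed = 158    # new-onset HA-VTE cases confirmed by diagnostic-test review
both = 87               # coded cases also confirmed by a positive test report

ppv = both / coded_acute             # how often an acute-onset code reflects real HA-VTE
sensitivity = both / test_confirmed  # how many confirmed cases the codes capture

print(f"PPV of ICD-9-CM coding:         {ppv:.0%}")          # ~40%
print(f"Sensitivity of ICD-9-CM coding: {sensitivity:.0%}")  # ~55%, i.e. ~45% of cases missed
```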


Subject(s)
Venous Thromboembolism, Big Data, Hospitals, Humans, Machine Learning, Retrospective Studies, Venous Thromboembolism/diagnosis
3.
Crit Care Med ; 49(3): 472-481, 2021 03 01.
Article in English | MEDLINE | ID: mdl-33555779

ABSTRACT

OBJECTIVES: To formulate new "Choosing Wisely" for Critical Care recommendations that identify best practices to avoid waste and promote value while providing critical care. DATA SOURCES: Semistructured narrative literature review and quantitative survey assessments. STUDY SELECTION: English-language publications that examined critical care practices in relation to reducing cost or waste. DATA EXTRACTION: Practices assessed to add no value to critical care were grouped by category. Taskforce assessment, modified Delphi consensus building, and quantitative survey analysis identified eight novel recommendations to avoid wasteful critical care practices. These were submitted to the Society of Critical Care Medicine membership for evaluation and ranking. DATA SYNTHESIS: Results from the quantitative Society of Critical Care Medicine membership survey identified the top-scoring five of the eight recommendations. These five highest-ranked recommendations established the Society of Critical Care Medicine's Next Five "Choosing Wisely" for Critical Care practices. CONCLUSIONS: Five new recommendations to reduce waste and enhance value in the practice of critical care address invasive devices, proactive liberation from mechanical ventilation, antibiotic stewardship, early mobilization, and providing goal-concordant care. These recommendations supplement the initial critical care recommendations from the "Choosing Wisely" campaign.


Subject(s)
Clinical Decision-Making, Critical Care/standards, Quality of Health Care/standards, Consensus, Humans, Intensive Care Units, Practice Guidelines as Topic, Practice Patterns, Physicians'/standards, Societies, Medical/standards
4.
Crit Care Med ; 48(4): 553-561, 2020 04.
Article in English | MEDLINE | ID: mdl-32205602

ABSTRACT

OBJECTIVES: In 2014, the Tele-ICU Committee of the Society of Critical Care Medicine published an article regarding the state of ICU telemedicine, one better defined today as tele-critical care. Given the rapid evolution in the field, the authors now provide an updated review. DATA SOURCES AND STUDY SELECTION: We searched PubMed and OVID for peer-reviewed literature published between 2010 and 2018 related to significant developments in tele-critical care, including its prevalence, function, activity, and technologies. Search terms included electronic ICU, tele-ICU, critical care telemedicine, and ICU telemedicine with appropriate descriptors relevant to each sub-section. Additionally, information from surveys done by the Society of Critical Care Medicine was included given the relevance to the discussion and was referenced accordingly. DATA EXTRACTION AND DATA SYNTHESIS: Tele-critical care continues to evolve in multiple domains, including organizational structure, technologies, expanded-use case scenarios, and novel applications. Insights have been gained in economic impact and human and organizational factors affecting tele-critical care delivery. Legislation and credentialing continue to significantly influence the pace of tele-critical care growth and adoption. CONCLUSIONS: Tele-critical care is an established mechanism to leverage critical care expertise to ICUs and beyond, but systematic research comparing different models, approaches, and technologies is still needed.


Subject(s)
Critical Care/organization & administration, Decision Support Systems, Clinical/organization & administration, Intensive Care Units/organization & administration, Telemedicine/organization & administration, Attitude of Health Personnel, Humans, Peer Review, Research, Remote Consultation/organization & administration, United States
5.
Crit Care ; 24(1): 661, 2020 11 25.
Article in English | MEDLINE | ID: mdl-33234161

ABSTRACT

BACKGROUND: Even brief hypotension is associated with increased morbidity and mortality. We developed a machine learning model to predict the initial hypotension event among intensive care unit (ICU) patients and designed an alert system for bedside implementation. MATERIALS AND METHODS: From the Medical Information Mart for Intensive Care III (MIMIC-III) dataset, minute-by-minute vital signs were extracted. A hypotension event was defined as at least five measurements within a 10-min period of systolic blood pressure ≤ 90 mmHg and mean arterial pressure ≤ 60 mmHg. Using time series data from 30-min overlapping time windows, a random forest (RF) classifier was used to predict the risk of hypotension every minute. Chronologically, the first half of the extracted data was used to train the model, and the second half was used to validate the trained model. The model's performance was measured with the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC). Hypotension alerts were generated from the risk score time series using a stacked RF model, and a lockout time was applied for real-life implementation. RESULTS: We identified 1307 subjects (1580 ICU stays) as the hypotension group and 1619 subjects (2279 ICU stays) as the non-hypotension group. The RF model showed an AUROC of 0.93 and 0.88 at 15 and 60 min before hypotension, respectively, and an AUPRC of 0.77 at 60 min before. Risk score trajectories revealed that 80% and > 60% of hypotension events were predicted at 15 and 60 min before the hypotension, respectively. The stacked model with a 15-min lockout produced on average 0.79 alerts/subject/hour (sensitivity 92.4%). CONCLUSION: Clinically significant hypotension events in the ICU can be predicted at least 1 h before the initial hypotension episode. With a highly sensitive and reliable practical alert system, a vast majority of future hypotension events could be captured, suggesting potential real-life utility.
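
A minimal sketch of the prediction-and-alerting pipeline the abstract describes: summary features over 30-min overlapping windows feed a random forest that emits a minute-level risk score, and alerts are issued from that score with a lockout period. The feature set, threshold, and synthetic data are assumptions; the model here only stands in for the paper's trained classifier.

```python
# Sketch: minute-level hypotension risk from overlapping 30-min windows, plus a
# lockout-based alert policy. Data layout and thresholds are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(vitals, end_idx, width=30):
    """Summary features over the 30 minutes ending at end_idx (vitals: [n_min, n_signals])."""
    w = vitals[end_idx - width:end_idx]
    return np.concatenate([w.mean(axis=0), w.std(axis=0), w[-1] - w[0]])

def risk_series(model, vitals, width=30):
    """Predict a hypotension risk score for every minute once a full window exists."""
    feats = np.array([window_features(vitals, i, width)
                      for i in range(width, len(vitals))])
    return model.predict_proba(feats)[:, 1]

def alerts_with_lockout(risk, threshold=0.6, lockout_min=15):
    """Fire an alert when risk crosses the threshold, then suppress for the lockout period."""
    alerts, last = [], -lockout_min
    for t, r in enumerate(risk):
        if r >= threshold and t - last >= lockout_min:
            alerts.append(t)
            last = t
    return alerts

# Illustrative use with synthetic data (2 h of minute-by-minute SBP/MAP/HR):
rng = np.random.default_rng(1)
vitals = rng.normal([115, 75, 80], [10, 8, 10], size=(120, 3))
X_train = rng.normal(size=(500, 9))            # 3 signals x 3 summary features
y_train = rng.integers(0, 2, 500)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print(alerts_with_lockout(risk_series(rf, vitals)))
```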


Subject(s)
Hypotension/diagnosis, Monitoring, Physiologic/standards, Precision Medicine/methods, Vital Signs/physiology, Aged, Area Under Curve, Female, Humans, Hypotension/physiopathology, Intensive Care Units/organization & administration, Intensive Care Units/statistics & numerical data, Machine Learning/standards, Machine Learning/statistics & numerical data, Male, Middle Aged, Monitoring, Physiologic/methods, Monitoring, Physiologic/statistics & numerical data, ROC Curve, Risk Assessment/methods, Risk Assessment/standards, Risk Assessment/statistics & numerical data
6.
Am J Respir Crit Care Med ; 199(8): 970-979, 2019 04 15.
Article in English | MEDLINE | ID: mdl-30352168

ABSTRACT

RATIONALE: Telemedicine is an increasingly common care delivery strategy in the ICU. However, ICU telemedicine programs vary widely in their clinical effectiveness, with some studies showing a large mortality benefit and others showing no benefit or even harm. OBJECTIVES: To identify the organizational factors associated with ICU telemedicine effectiveness. METHODS: We performed a focused ethnographic evaluation of 10 ICU telemedicine programs using site visits, interviews, and focus groups in both the facilities providing remote care and the target ICUs. Programs were selected based on their change in risk-adjusted mortality after adoption (decreased mortality, no change in mortality, and increased mortality). We used a constant comparative approach to guide data collection and analysis. MEASUREMENTS AND MAIN RESULTS: We conducted 460 hours of direct observation, 222 interviews, and 18 focus groups across six telemedicine facilities and 10 target ICUs. Data analysis revealed three domains that influence ICU telemedicine effectiveness: 1) leadership (i.e., the decisions related to the role of telemedicine, conflict resolution, and relationship building), 2) perceived value (i.e., expectations of availability and impact, staff satisfaction, and understanding of operations), and 3) organizational characteristics (i.e., staffing models, allowed involvement of the telemedicine unit, and new hire orientation). In the most effective telemedicine programs, these factors led to services that are viewed as appropriate, integrated, responsive, and consistent. CONCLUSIONS: The effectiveness of ICU telemedicine programs may be influenced by several potentially modifiable factors within the domains of leadership, perceived value, and organizational structure.


Subject(s)
Intensive Care Units, Telemedicine, Anthropology, Cultural, Attitude of Health Personnel, Focus Groups, Humans, Intensive Care Units/organization & administration, Interviews as Topic, Leadership, Program Evaluation, Telemedicine/methods, Telemedicine/organization & administration
7.
Nurs Res ; 69(5): E199-E207, 2020.
Article in English | MEDLINE | ID: mdl-32205787

ABSTRACT

BACKGROUND: Healthcare providers are concerned about adherence to provider recommendations in coronary artery disease management. Identifying patient-related factors that influence changes in adherence over time is necessary for formulating suitable intervention measures, especially among diverse populations. OBJECTIVE: To explore whether health literacy, self-efficacy, and disease knowledge predict changes in adherence over time (between baseline and 3 months) to secondary prevention recommendations for Chinese coronary artery disease patients. METHODS: A longitudinal study was performed with 662 patients following percutaneous coronary intervention in China. Self-reported data were collected at baseline during hospitalization and at a 3-month telephone follow-up. Variables included demographics, health literacy, self-efficacy, disease knowledge, and adherence to secondary prevention recommendations for medication taking and a heart-healthy lifestyle. Multinomial logistic regression identified predictors of adherence changes over time. RESULTS: Patients were categorized into three groups: sustained/declined to nonadherence between baseline and 3 months, improved to adherence, and sustained adherence. The number of patients in the sustained/declined to nonadherence group was small. Absence of stents predicted sustained/declined to nonadherence to medication and lifestyle over time. Health literacy was not associated with adherence changes over time. Higher self-efficacy scores were associated with a lower likelihood of sustained/declined to nonadherence to a healthy lifestyle over time, whereas higher disease knowledge scores were associated with a higher likelihood of sustained/declined to nonadherence to medication. CONCLUSIONS: Adherence to secondary prevention 3 months after discharge was relatively good in Chinese patients with coronary artery disease who received percutaneous coronary intervention. Absence of stents and lower self-efficacy can predict poor adherence changes, which should be considered in formulating follow-up care.


Subject(s)
Coronary Artery Disease/prevention & control, Secondary Prevention/standards, Treatment Adherence and Compliance/psychology, Aged, China, Coronary Artery Disease/psychology, Coronary Artery Disease/therapy, Female, Health Literacy, Humans, Longitudinal Studies, Male, Middle Aged, Secondary Prevention/methods, Secondary Prevention/statistics & numerical data, Self Efficacy, Time Factors, Treatment Adherence and Compliance/statistics & numerical data, Treatment Outcome
8.
J Cardiovasc Nurs ; 35(6): 550-557, 2020.
Article in English | MEDLINE | ID: mdl-31977564

ABSTRACT

BACKGROUND: The Emergency Severity Index (ESI) is a widely used tool to triage patients in emergency departments. The ESI tool is used to assess all complaints and has significant limitations for accurately triaging patients with suspected acute coronary syndrome (ACS). OBJECTIVE: We evaluated the accuracy of ESI in predicting serious outcomes in suspected ACS and aimed to assess the incremental reclassification performance if ESI is supplemented with a clinically validated tool used to risk-stratify suspected ACS. METHODS: We used existing data from an observational cohort study of patients with chest pain. We extracted ESI scores documented by triage nurses during routine medical care. Two independent reviewers adjudicated the primary outcome, incidence of 30-day major adverse cardiac events. We compared ESI with the well-established modified HEAR/T (patient History, Electrocardiogram, Age, Risk factors, but without Troponin) score. RESULTS: Our sample included 750 patients (age, 59 ± 17 years; 43% female; 40% black). A total of 145 patients (19%) experienced a major adverse cardiac event. The area under the receiver operating characteristic curve for the ESI score for predicting major adverse cardiac events was 0.656, compared with 0.796 for the modified HEAR/T score. Using the modified HEAR/T score, 181 of the 391 false positives (46%) and 16 of the 19 false negatives (84%) assigned by ESI could be reclassified correctly. CONCLUSION: The ESI score is poorly associated with serious outcomes in patients with suspected ACS. Supplementing the ESI tool with input from other validated clinical tools can greatly improve the accuracy of triage in patients with suspected ACS.
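
The abstract's comparison boils down to two discrimination statistics plus a reclassification count. The sketch below shows the general computation on synthetic data; the decision cutoffs (ESI 1-2 and HEAR/T >= 4 as "high risk") are hypothetical placeholders, not the study's rules.

```python
# Sketch: compare the discrimination of two triage scores and count how many
# ESI misclassifications a second score reclassifies correctly (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 750
mace = rng.integers(0, 2, n)         # 30-day major adverse cardiac event (1 = yes)
esi = rng.integers(1, 6, n)          # ESI level 1 (most acute) to 5
heart = rng.integers(0, 9, n)        # modified HEAR/T score, 0-8

# Lower ESI means higher acuity, so invert it for the AUC calculation.
print("ESI AUC:   ", round(roc_auc_score(mace, -esi), 3))
print("HEAR/T AUC:", round(roc_auc_score(mace, heart), 3))

# Hypothetical decision rules: ESI 1-2 = high risk; HEAR/T >= 4 = high risk.
esi_high = esi <= 2
heart_high = heart >= 4

false_pos = esi_high & (mace == 0)             # ESI called high risk, no event
false_neg = ~esi_high & (mace == 1)            # ESI called low risk, event occurred
reclass_fp = np.sum(false_pos & ~heart_high)   # HEAR/T correctly downgrades
reclass_fn = np.sum(false_neg & heart_high)    # HEAR/T correctly upgrades
print(f"False positives reclassified: {reclass_fp}/{false_pos.sum()}")
print(f"False negatives reclassified: {reclass_fn}/{false_neg.sum()}")
```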


Subject(s)
Acute Coronary Syndrome/diagnosis, Emergency Service, Hospital, Triage, Acute Coronary Syndrome/complications, Acute Coronary Syndrome/mortality, Adult, Aged, Electrocardiography, Female, Hospitalization, Humans, Male, Middle Aged, Outcome Assessment, Health Care, Predictive Value of Tests, ROC Curve, Retrospective Studies, Risk Assessment, Risk Factors, Severity of Illness Index, Survival Rate, Symptom Assessment
9.
J Clin Monit Comput ; 33(6): 973-985, 2019 Dec.
Article in English | MEDLINE | ID: mdl-30767136

ABSTRACT

Tachycardia is a strong though non-specific marker of cardiovascular stress that precedes hemodynamic instability. We designed a predictive model of tachycardia using multi-granular intensive care unit (ICU) data by creating a risk score and dynamic trajectory. A subset of clinical and numerical signals was extracted from the Multiparameter Intelligent Monitoring in Intensive Care II database. A tachycardia episode was defined as heart rate ≥ 130/min lasting for ≥ 5 min, with ≥ 10% density. Regularized logistic regression (LR) and random forest (RF) classifiers were trained to create a risk score for upcoming tachycardia. Three different risk score models were compared for the tachycardia and control (non-tachycardia) groups. Risk trajectory was generated from time windows moving away at 1-min increments from the tachycardia episode. Trajectories were computed over the 3 hours leading up to the episode for three different models. From 2809 subjects, 787 tachycardia episodes and 707 control periods were identified. Patients with tachycardia had greater vasopressor support, longer ICU stays, and higher ICU mortality than controls. In model evaluation, RF was slightly superior to LR, with accuracy ranging from 0.847 to 0.782 and area under the curve from 0.921 to 0.842. Risk trajectory analysis showed that the average risk for the tachycardia group evolved to 0.78 prior to the tachycardia episodes, while control group risks remained < 0.3. Among the three models, the internal control model demonstrated an evolving trajectory approximately 75 min before the tachycardia episode. Clinically relevant tachycardia episodes can be predicted from vital sign time series using machine learning algorithms.
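
Episode definition is the step most easily expressed in code. The sketch below applies one reading of the stated criteria (HR ≥ 130/min for ≥ 5 min with ≥ 10% density): within a candidate window, at least that fraction of samples must be at or above threshold. The density interpretation and the synthetic data are assumptions.

```python
# Sketch: flag tachycardia episodes in a minute-by-minute heart-rate series.
# Threshold, window length, and the density interpretation are assumptions.
import numpy as np

def tachycardia_episodes(hr, threshold=130, window=5, density=0.10):
    """Merge qualifying windows (>= `density` of samples at/above threshold) into episodes."""
    above = (np.asarray(hr) >= threshold).astype(float)
    qualifying = [i for i in range(len(above) - window + 1)
                  if above[i:i + window].mean() >= density]
    episodes = []
    for i in qualifying:
        if episodes and i <= episodes[-1][1]:
            episodes[-1] = (episodes[-1][0], i + window)   # extend the current episode
        else:
            episodes.append((i, i + window))               # open a new episode
    return episodes

# Synthetic example: six hours of heart rate with an injected tachycardic stretch.
rng = np.random.default_rng(3)
hr = rng.normal(95, 10, 360)
hr[200:220] = rng.normal(140, 5, 20)
print(tachycardia_episodes(hr))   # expected: one episode spanning roughly minutes 196-224
```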


Subject(s)
Cardiovascular Diseases/diagnosis, Critical Care/methods, Lung Diseases/diagnosis, Monitoring, Intraoperative/methods, Tachycardia/diagnosis, Adult, Aged, Algorithms, Area Under Curve, Data Collection, Databases, Factual, Electronic Health Records, Heart Rate, Hospital Mortality, Humans, Intensive Care Units, Logistic Models, Machine Learning, Middle Aged, ROC Curve, Regression Analysis, Reproducibility of Results, Risk, Tertiary Care Centers, Young Adult
10.
J Electrocardiol ; 51(6S): S44-S48, 2018.
Article in English | MEDLINE | ID: mdl-30077422

ABSTRACT

Research demonstrates that the majority of alarms derived from continuous bedside monitoring devices are non-actionable. This avalanche of unreliable alerts causes clinicians to experience sensory overload when attempting to sort real from false alarms, causing desensitization and alarm fatigue, which in turn leads to adverse events when true instability is neither recognized nor attended to despite the alarm. The scope of the problem of alarm fatigue is broad, and its contributing mechanisms are numerous. Current and future approaches to defining and reacting to actionable and non-actionable alarms are being developed and investigated, but challenges in impacting alarm modalities, sensitivity and specificity, and clinical activity in order to reduce alarm fatigue and adverse events remain. A multi-faceted approach involving clinicians, computer scientists, industry, and regulatory agencies is needed to battle alarm fatigue.


Subject(s)
Clinical Alarms, Patient Safety, Point-of-Care Systems, Diagnostic Errors, Electrocardiography, Equipment Failure, Humans, Sound
11.
J Clin Monit Comput ; 32(1): 117-126, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28229353

ABSTRACT

Cardiorespiratory instability (CRI) in monitored step-down unit (SDU) patients has a variety of etiologies and likely manifests in patterns of vital sign (VS) changes. We explored the use of clustering techniques to identify patterns in the initial CRI epoch (CRI1; first exceedances of VS beyond stability thresholds after SDU admission) of unstable patients, and inter-cluster differences in admission characteristics and outcomes. Continuously monitored noninvasive heart rate (HR), respiratory rate (RR), and pulse oximetry (SpO2) were sampled at 1/20 Hz. We identified CRI1 in 165 patients, employed hierarchical and k-means clustering, tested several clustering solutions, used 10-fold cross-validation to establish the best solution, and assessed inter-cluster differences in admission characteristics and outcomes. Three clusters (C) were derived: C1) normal/high HR and RR, normal SpO2 (n = 30); C2) normal HR and RR, low SpO2 (n = 103); and C3) low/normal HR, low RR and normal SpO2 (n = 32). Clusters were significantly different based on age (p < 0.001; older patients in C2), number of comorbidities (p = 0.008; more C2 patients had ≥ 2) and hospital length of stay (p = 0.006; C1 patients stayed longer). There were no between-cluster differences in SDU length of stay or mortality. Three different clusters of VS presentations for CRI1 were identified. Clusters varied by age, number of comorbidities, and hospital length of stay. Future study is needed to determine whether there are common physiologic underpinnings of VS clusters that might inform clinical decision-making when CRI first manifests.
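
A compact sketch of the clustering step on synthetic CRI1 summary features (mean HR, RR, and SpO2 per patient, which are our assumed inputs). Silhouette scores stand in for the paper's cross-validated choice among clustering solutions.

```python
# Sketch: cluster first-exceedance (CRI1) vital sign presentations with k-means.
# Feature columns are hypothetical; silhouette score is a stand-in for the
# paper's cross-validated selection of the number of clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# One row per unstable patient: mean HR, RR, SpO2 during the CRI1 epoch (synthetic).
X = rng.normal([95, 22, 93], [15, 5, 4], size=(165, 3))
X_scaled = StandardScaler().fit_transform(X)

for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X_scaled)
    print(k, round(silhouette_score(X_scaled, labels), 3))

# Refit the chosen solution (the paper settled on three clusters) and inspect centers.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)
print(np.round(km.cluster_centers_, 2))
```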


Subject(s)
Critical Care/methods, Monitoring, Physiologic/instrumentation, Signal Processing, Computer-Assisted, Vital Signs, Adult, Aged, Cluster Analysis, Cohort Studies, Comorbidity, Female, Heart Rate, Hospitalization, Humans, Male, Middle Aged, Monitoring, Physiologic/methods, Oximetry, Patient Admission, Reproducibility of Results, Respiratory Rate
12.
J Emerg Nurs ; 44(2): 132-138, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28712527

ABSTRACT

INTRODUCTION: Aneurysmal subarachnoid hemorrhage (aSAH) is frequently seen in emergency departments. Secondary injury, such as subarachnoid hemorrhage-associated myocardial injury (SAHMI), affects one third of survivors and contributes to poor outcomes. SAHMI is not attributed to ischemia from myocardial disease but can result in hypotension and arrhythmias. It is important that emergency nurses recognize which clinical presentation characteristics are predictive of SAHMI so that proper interventions can be initiated. The aim of this study was to determine whether patients who present to the emergency department with clinical aSAH are likely to develop SAHMI, as defined by troponin I ≥0.3 ng/mL. METHODS: This was a prospective descriptive study. SAHMI was defined as troponin I ≥0.3 ng/mL. Predictors included demographics and clinical characteristics, severity of injury, admission 12-lead electrocardiogram (ECG), initial emergency department vital signs, and prehospital symptoms at the time of aneurysm rupture. RESULTS: Of 449 patients, 126 (28%) had SAHMI. Patients with SAHMI were more likely to report seizures and unresponsiveness, with significantly lower Glasgow Coma Scale scores and a higher proportion of Hunt and Hess grades 3 to 5 and Fisher grades III and IV (all P < .05). Patients with SAHMI had higher atrial and ventricular rates and longer QTc intervals on initial ECG (P < .05). On multivariable logistic regression, poor Hunt and Hess grade, report of prehospital unresponsiveness, lower admission Glasgow Coma Scale score, and longer QTc interval were significantly and independently predictive of SAHMI (P < .05). DISCUSSION: Components of the clinical presentation of subarachnoid hemorrhage to the emergency department predict SAHMI. Identifying patients with SAHMI in the emergency department can be helpful in determining surveillance and care needs and informing transfer unit care.


Subject(s)
Cardiomyopathies/etiology, Cardiomyopathies/prevention & control, Emergency Nursing/methods, Emergency Service, Hospital, Subarachnoid Hemorrhage/complications, Subarachnoid Hemorrhage/physiopathology, Cardiomyopathies/physiopathology, Female, Glasgow Coma Scale, Humans, Male, Middle Aged, Prospective Studies, Severity of Illness Index
13.
Nurs Res ; 66(1): 12-19, 2017.
Article in English | MEDLINE | ID: mdl-27977564

ABSTRACT

BACKGROUND: Patients undergoing continuous vital sign monitoring (heart rate [HR], respiratory rate [RR], pulse oximetry [SpO2]) in real time display interrelated vital sign changes during situations of physiological stress. Patterns in this physiological cross-talk could portend impending cardiorespiratory instability (CRI). Vector autoregressive (VAR) modeling with Granger causality tests is one of the most flexible ways to elucidate underlying causal mechanisms in time series data. PURPOSE: The purpose of this article is to illustrate the development of patient-specific VAR models using vital sign time series data in a sample of acutely ill, monitored, step-down unit patients and determine their Granger causal dynamics prior to onset of an incident CRI. APPROACH: CRI was defined as vital signs beyond stipulated normality thresholds (HR = 40-140/minute, RR = 8-36/minute, SpO2 < 85%) and persisting for 3 minutes within a 5-minute moving window (60% of the duration of the window). A 6-hour time segment prior to onset of first CRI was chosen for time series modeling in 20 patients using a six-step procedure: (a) the uniform time series for each vital sign was assessed for stationarity, (b) appropriate lag was determined using a lag-length selection criteria, (c) the VAR model was constructed, (d) residual autocorrelation was assessed with the Lagrange Multiplier test, (e) stability of the VAR system was checked, and (f) Granger causality was evaluated in the final stable model. RESULTS: The primary cause of incident CRI was low SpO2 (60% of cases), followed by out-of-range RR (30%) and HR (10%). Granger causality testing revealed that change in RR caused change in HR (21%; i.e., RR changed before HR changed) more often than change in HR causing change in RR (15%). Similarly, changes in RR caused changes in SpO2 (15%) more often than changes in SpO2 caused changes in RR (9%). For HR and SpO2, changes in HR causing changes in SpO2 and changes in SpO2 causing changes in HR occurred with equal frequency (18%). DISCUSSION: Within this sample of acutely ill patients who experienced a CRI event, VAR modeling indicated that RR changes tend to occur before changes in HR and SpO2. These findings suggest that contextual assessment of RR changes as the earliest sign of CRI is warranted. Use of VAR modeling may be helpful in other nursing research applications based on time series data.
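
The six-step VAR/Granger workflow maps closely onto statsmodels. Below is a minimal sketch on a synthetic HR/RR/SpO2 series in which RR is constructed to lead HR and SpO2; the data, lag limits, and significance settings are illustrative assumptions, not the study's specification.

```python
# Sketch of the six-step VAR / Granger-causality workflow on synthetic vital signs.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)
n = 360  # six hours of minute-by-minute data
rr = 16 + np.cumsum(rng.normal(0, 0.2, n))
hr = 80 + 0.8 * np.roll(rr - 16, 2) + rng.normal(0, 1, n)      # HR loosely follows RR
spo2 = 97 - 0.3 * np.roll(rr - 16, 3) + rng.normal(0, 0.5, n)  # SpO2 loosely follows RR
data = pd.DataFrame({"HR": hr, "RR": rr, "SpO2": spo2})

# Step 1: stationarity; difference the series if an ADF test cannot reject a unit root.
if any(adfuller(data[c])[1] > 0.05 for c in data):
    data = data.diff().dropna()

model = VAR(data)
lag = max(1, model.select_order(maxlags=6).aic)          # Step 2: lag-length selection
results = model.fit(lag)                                 # Step 3: fit the VAR
print(results.test_whiteness(nlags=lag + 6).summary())   # Step 4: residual autocorrelation
print("VAR system stable:", results.is_stable())         # Step 5: stability check
# Step 6: Granger causality; does RR help predict HR?
print(results.test_causality("HR", ["RR"], kind="f").summary())
```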


Subject(s)
Critical Care/organization & administration, Models, Nursing, Monitoring, Physiologic/nursing, Nursing Assessment/methods, Nursing Research, Respiratory Insufficiency/diagnosis, Respiratory Insufficiency/nursing, Blood Pressure Determination/nursing, Health Status Indicators, Humans, Risk Assessment
15.
Crit Care Med ; 44(7): e456-63, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26992068

ABSTRACT

OBJECTIVE: To use machine-learning algorithms to classify alerts as real or artifact in online noninvasive vital sign data streams to reduce alarm fatigue and missed true instability. DESIGN: Observational cohort study. SETTING: Twenty-four-bed trauma step-down unit. PATIENTS: Two thousand one hundred fifty-three patients. INTERVENTION: Noninvasive vital sign monitoring data (heart rate, respiratory rate, peripheral oximetry) were recorded on all admissions at 1/20 Hz, with noninvasive blood pressure recorded less frequently, and the data were partitioned into training/validation (294 admissions; 22,980 monitoring hours) and test sets (2,057 admissions; 156,177 monitoring hours). Alerts were vital sign deviations beyond stability thresholds. A four-member expert committee annotated a subset of alerts selected by active learning (576 in the training/validation set, 397 in the test set) as real or artifact, upon which we trained machine-learning algorithms. The best model was evaluated on test set alerts to enact online alert classification over time. MEASUREMENTS AND MAIN RESULTS: The Random Forest model discriminated between real and artifact as the alerts evolved online in the test set, with area under the curve performance of 0.79 (95% CI, 0.67-0.93) for peripheral oximetry at the instant the vital sign first crossed threshold, increasing to 0.87 (95% CI, 0.71-0.95) at 3 minutes into the alerting period. Blood pressure area under the curve started at 0.77 (95% CI, 0.64-0.95) and increased to 0.87 (95% CI, 0.71-0.98), whereas respiratory rate area under the curve started at 0.85 (95% CI, 0.77-0.95) and increased to 0.97 (95% CI, 0.94-1.00). Heart rate alerts were too few for model development. CONCLUSIONS: Machine-learning models can discern clinically relevant peripheral oximetry, blood pressure, and respiratory rate alerts from artifacts in an online monitoring dataset (area under the curve > 0.87).
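
One way to reproduce the "AUC as the alert evolves" evaluation is to rescore each alert at successive minutes after threshold crossing and recompute discrimination at each step. The sketch below does this on synthetic alerts whose features are constructed to separate more as minutes accrue; the feature set, split, and model are assumptions.

```python
# Sketch: score alerts as real vs. artifact repeatedly as the alerting period
# evolves, tracking AUC at each minute since threshold crossing (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n_alerts, n_feat = 400, 6
labels = rng.integers(0, 2, n_alerts)                 # 1 = real alert, 0 = artifact

def features_at(minute):
    """Synthetic alert features: real alerts drift away from artifacts over time."""
    base = rng.normal(0, 1, (n_alerts, n_feat))
    return base + 0.4 * minute * labels[:, None]

train = rng.integers(0, 2, n_alerts).astype(bool)     # crude random train/test split
for minute in range(4):
    X = features_at(minute)
    rf = RandomForestClassifier(n_estimators=200, random_state=0)
    rf.fit(X[train], labels[train])
    auc = roc_auc_score(labels[~train], rf.predict_proba(X[~train])[:, 1])
    print(f"minute {minute}: AUC = {auc:.2f}")
```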


Subject(s)
Artifacts, Clinical Alarms/classification, Monitoring, Physiologic/methods, Supervised Machine Learning, Vital Signs, Blood Pressure Determination, Cohort Studies, Heart Rate, Humans, Oximetry, Respiratory Rate
16.
Med Care ; 54(3): 319-25, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26765148

ABSTRACT

BACKGROUND: Intensive care unit (ICU) telemedicine is an increasingly common strategy for improving the outcome of critical care, but its overall impact is uncertain. OBJECTIVES: To determine the effectiveness of ICU telemedicine in a national sample of hospitals and quantify variation in effectiveness across hospitals. RESEARCH DESIGN: We performed a multicenter retrospective case-control study using 2001-2010 Medicare claims data linked to a national survey identifying US hospitals adopting ICU telemedicine. We matched each adopting hospital (cases) to up to 3 nonadopting hospitals (controls) based on size, case-mix, and geographic proximity during the year of adoption. Using ICU admissions from 2 years before and after the adoption date, we compared outcomes between case and control hospitals using a difference-in-differences approach. RESULTS: A total of 132 adopting case hospitals were matched to 389 similar nonadopting control hospitals. The preadoption and postadoption unadjusted 90-day mortality was similar in both case hospitals (24.0% vs. 24.3%, P=0.07) and control hospitals (23.5% vs. 23.7%, P<0.01). In the difference-in-differences analysis, ICU telemedicine adoption was associated with a small relative reduction in 90-day mortality (ratio of odds ratios=0.96; 95% CI, 0.95-0.98; P<0.001). However, there was wide variation in the ICU telemedicine effect across individual hospitals (median ratio of odds ratios=1.01; interquartile range, 0.85-1.12; range, 0.45-2.54). Only 16 case hospitals (12.2%) experienced statistically significant mortality reductions postadoption. Hospitals with a significant mortality reduction were more likely to have large annual admission volumes (P<0.001) and be located in urban areas (P=0.04) compared with other hospitals. CONCLUSIONS: Although ICU telemedicine adoption resulted in a small relative overall mortality reduction, there was heterogeneity in effect across adopting hospitals, with large-volume urban hospitals experiencing the greatest mortality reductions.
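
The difference-in-differences estimate can be sketched as a logistic regression with a group-by-period interaction, whose exponentiated interaction term plays the role of the ratio of odds ratios reported above. The data below are synthetic placeholders with a small built-in effect; the actual study additionally matched hospitals and adjusted for patient characteristics.

```python
# Sketch: difference-in-differences on 90-day mortality via a logistic model with
# a telemedicine x post-adoption interaction (synthetic data, illustrative only).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n = 20000
df = pd.DataFrame({
    "telemedicine": rng.integers(0, 2, n),   # 1 = admission at an adopting hospital
    "post": rng.integers(0, 2, n),           # 1 = after the adoption date
})
logit_p = -1.15 - 0.04 * df["telemedicine"] * df["post"]   # small built-in effect
df["died_90d"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

fit = smf.logit("died_90d ~ telemedicine * post", data=df).fit(disp=False)
ratio_of_or = np.exp(fit.params["telemedicine:post"])
print(round(ratio_of_or, 3))   # values below 1 indicate a relative mortality reduction
```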


Subject(s)
Hospital Mortality/trends, Intensive Care Units/statistics & numerical data, Telemedicine/statistics & numerical data, Aged, Aged, 80 and over, Case-Control Studies, Comorbidity, Diagnosis-Related Groups, Female, Hospitals, High-Volume/statistics & numerical data, Humans, Length of Stay/statistics & numerical data, Male, Medicare/statistics & numerical data, Patient Discharge/statistics & numerical data, Residence Characteristics, Retrospective Studies, United States
17.
Crit Care ; 20: 70, 2016 Mar 16.
Article in English | MEDLINE | ID: mdl-26984263

ABSTRACT

This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2016. Other selected articles can be found online at http://www.biomedcentral.com/collections/annualupdate2016. Further information about the Annual Update in Intensive Care and Emergency Medicine is available from http://www.springer.com/series/8901.


Subject(s)
Monitoring, Physiologic/methods, Respiratory Insufficiency/diagnosis, Respiratory Insufficiency/therapy, Vital Signs/physiology, Emergency Medicine/methods, Humans, Intensive Care Units
18.
J Clin Monit Comput ; 30(6): 875-888, 2016 Dec.
Article in English | MEDLINE | ID: mdl-26438655

ABSTRACT

Huge hospital information system databases can be mined for knowledge discovery and decision support, but artifact in stored non-invasive vital sign (VS) high-frequency data streams limits their use. We used machine-learning (ML) algorithms trained on expert-labeled VS data streams to automatically classify VS alerts as real or artifact, thereby "cleaning" such data for future modeling. A total of 634 admissions to a step-down unit had recorded continuous noninvasive VS monitoring data [heart rate (HR), respiratory rate (RR), peripheral arterial oxygen saturation (SpO2) at 1/20 Hz, and noninvasive oscillometric blood pressure (BP)]. Periods when data crossed stability thresholds defined VS event epochs. Data were divided into Block 1, the ML training/cross-validation set, and Block 2, the test set. Expert clinicians annotated Block 1 events as perceived real or artifact. After feature extraction, ML algorithms were trained to create and validate models automatically classifying events as real or artifact. The models were then tested on Block 2. Block 1 yielded 812 VS events, with 214 (26%) judged by experts as artifact (RR 43%, SpO2 40%, BP 15%, HR 2%). ML algorithms applied to the Block 1 training/cross-validation set (tenfold cross-validation) gave area under the curve (AUC) scores of 0.97 for RR, 0.91 for BP, and 0.76 for SpO2. Performance when applied to Block 2 test data was an AUC of 0.94 for RR, 0.84 for BP, and 0.72 for SpO2. ML-defined algorithms applied to archived multi-signal continuous VS monitoring data allowed accurate automated classification of VS alerts as real or artifact and could support data mining for future model building.


Subject(s)
Clinical Alarms, Data Mining/methods, Heart Rate, Monitoring, Physiologic, Adult, Aged, Algorithms, Area Under Curve, Artifacts, Blood Pressure, Data Interpretation, Statistical, Decision Support Systems, Clinical, Female, Hospital Information Systems, Humans, Machine Learning, Male, Middle Aged, Oscillometry, Risk, Vital Signs
19.
J Radiol Nurs ; 34(1): 29-34, 2015 Mar 01.
Article in English | MEDLINE | ID: mdl-25821413

ABSTRACT

The purpose of this study was to calculate the event rate for inpatients in the Radiology Department (RD) who develop instability leading to calls for Medical Emergency Team assistance (MET-RD), compared with general ward (MET-W) patients. A retrospective comparison was made of MET-RD and MET-W calls in 2009 in a U.S. tertiary hospital with a well-established MET system. MET-RD and MET-W event rates were expressed as MET calls/hour/1000 admissions, adjusted for length of stay (LOS); rates were also calculated for individual RD modalities. The 31,320 hospital ward admissions generated 1,230 MET-W calls, and the 149,569 radiology admissions generated 56 MET-RD calls. When adjusted for LOS, the MET-RD event rate was 2 times higher than the MET-W rate (0.48 vs. 0.24 events/hour/1000 admissions). Event rates differed by procedure: computed tomography (CT) accounted for 38% of MET-RDs (event rate 0.89); magnetic resonance imaging (MRI) accounted for 27% (event rate 1.56). Nuclear medicine had 1% of RD admissions, but these patients accounted for 5% of MET-RD calls (event rate 1.53). Interventional radiology (IR) had 6% of RD admissions but 16% of MET-RD calls (event rate 0.61). While general x-ray comprised 63% of RD admissions, only 11% of MET-RD calls involved these patients (event rate 0.09). In conclusion, the overall MET-RD event rate was twice the MET-W event rate; CT, MRI, and IR rates were 3.7-6.5 times higher than on the wards. RD patients are at increased risk for a MET call compared with ward patients when the time at risk is considered. Increased surveillance of RD patients is warranted.
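
The LOS adjustment amounts to dividing call counts by the total hours at risk per 1,000 admissions. The sketch below uses the call and admission counts from the abstract but hypothetical hours-at-risk values, so the resulting rates are illustrative rather than the paper's 0.48 and 0.24.

```python
# Sketch: length-of-stay-adjusted MET event rates (calls per hour per 1,000 admissions).
# Call and admission counts come from the abstract; hours at risk are hypothetical.
def met_rate(calls, admissions, mean_hours_at_risk):
    """MET calls per hour at risk per 1,000 admissions."""
    return calls / (mean_hours_at_risk * admissions / 1000)

ward_rate = met_rate(calls=1230, admissions=31320, mean_hours_at_risk=160)     # multi-day stay
radiology_rate = met_rate(calls=56, admissions=149569, mean_hours_at_risk=1)   # ~1 h per visit
print(round(ward_rate, 2), round(radiology_rate, 2))
```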

20.
J Nurse Pract ; 11(7): 702-709, 2015.
Article in English | MEDLINE | ID: mdl-26273234

ABSTRACT

Nurse practitioners may manage patients with coagulopathic bleeding, which can lead to life-threatening hemorrhage. Routine plasma-based tests such as prothrombin time and activated partial thromboplastin time are inadequate for diagnosing hemorrhagic coagulopathy. Indiscriminate administration of fresh frozen plasma, platelets, or cryoprecipitate for coagulopathic states can be extremely dangerous. The qualitative analysis that thromboelastography provides can facilitate administration of the right blood product at the right time, permitting goal-directed therapy for coagulopathy and supporting patient survival.
