ABSTRACT
BACKGROUND: The presenting symptoms of COVID-19 patients are unusual compared with those of many other illnesses. Blood pressure, heart rate, and respiratory rate may stay within acceptable ranges as the disease progresses. Consequently, intermittent monitoring does not detect deterioration as it is happening. We investigated whether continuously monitoring heart rate and respiratory rate enables earlier detection of deterioration compared with intermittent monitoring, or introduces any risks. METHODS: When available, patients admitted to a COVID-19 ward received a wireless wearable sensor that continuously measured heart rate and respiratory rate. Two intensive care unit (ICU) physicians independently assessed the sensor data, indicating when an intervention might be necessary (alarms). A third ICU physician independently extracted clinical events from the electronic medical record (EMR events). The primary outcome was the number of true alarms. Secondary outcomes included the time difference between true alarms and EMR events, interrater agreement for the alarms, and the severity of EMR events that were not detected. RESULTS: In clinical practice, 48 EMR events occurred. None of the 4 ICU admissions were detected with the sensor. Of the 62 sensor events, 13 were true alarms (i.e., also EMR events). Of these, two were related to rapid response team calls. On average, true alarms were detected 39 min (SD = 113) before the EMR events. Interrater agreement was 10%. The severity of the 38 non-detected events was similar to that of the 10 detected events. CONCLUSION: Continuously monitoring heart rate and respiratory rate does not reliably detect deterioration in COVID-19 patients when assessed by ICU physicians.
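The alarm-versus-EMR comparison described above can be sketched as follows. This is a minimal illustration, not the study's actual procedure: the function name, the 4-hour matching window, and the timestamps are all invented assumptions; only the idea of pairing each sensor alarm with a later EMR event to measure lead time comes from the abstract.

```python
# Hypothetical sketch: match each sensor alarm to the nearest later EMR event
# within a window, and report the lead time of the resulting "true alarms".
from datetime import datetime, timedelta

def lead_times(alarms, emr_events, window=timedelta(hours=4)):
    """For each alarm, find the nearest later EMR event within `window`;
    return the lead times (event time minus alarm time) of true alarms."""
    leads = []
    for alarm in alarms:
        later = [e - alarm for e in emr_events
                 if timedelta(0) <= e - alarm <= window]
        if later:  # alarm confirmed by a subsequent EMR event -> true alarm
            leads.append(min(later))
    return leads

alarms = [datetime(2021, 3, 1, 8, 0), datetime(2021, 3, 1, 14, 0)]
events = [datetime(2021, 3, 1, 8, 39)]  # deterioration noted in the EMR
print(lead_times(alarms, events))  # one true alarm, 39 min of lead time
```

The second alarm has no subsequent EMR event within the window, so it would count as a false alarm under this (assumed) matching rule.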
Subject(s)
COVID-19, Respiratory Rate, Humans, Heart Rate, COVID-19/diagnosis, Physiologic Monitoring, Vital Signs/physiology
ABSTRACT
Our aim was to determine the agreement of heart rate (HR) and respiratory rate (RR) measurements by the Philips Biosensor with a reference monitor (General Electric Carescape B650) in severely obese patients during and after bariatric surgery. Additionally, sensor reliability was assessed. Ninety-four severely obese patients were monitored with both the Biosensor and the reference monitor during and after bariatric surgery. Agreement was defined as the mean absolute difference between the two monitoring devices. Bland-Altman plots and Clarke Error Grid (CEG) analysis were used to visualise differences. Sensor reliability was reflected by the amount, duration, and causes of data loss. The mean absolute difference for HR was 1.26 beats per minute (bpm) (SD 0.84) during surgery and 1.84 bpm (SD 1.22) during recovery, and never exceeded the 8 bpm limit of agreement. The mean absolute difference for RR was 1.78 breaths per minute (brpm) (SD 1.90) during surgery and 4.24 brpm (SD 2.75) during recovery. The Biosensor's RR measurements exceeded the 2 brpm limit of agreement in 58% of the compared measurements. Averaging 15 min of measurements for both devices improved agreement. CEG analysis showed that 99% of averaged RR measurements would result in adequate treatment. Data loss was limited to 4.5% of the total measurement duration for RR. No clear causes for data loss were found. The Biosensor is suitable for remote monitoring of HR, but not RR, in severely obese patients. Future research should focus on improving RR measurements, the interpretation of continuous data, and the development of smart alarm systems.
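The two agreement metrics reported above can be sketched in a few lines. The data below are invented for illustration; only the metric definitions (mean absolute difference between paired readings, and the share of pairs beyond a limit of agreement) mirror the abstract.

```python
# Minimal sketch of the agreement analysis, with synthetic paired readings.
import statistics

def mean_abs_diff(sensor, reference):
    """Mean absolute difference between paired sensor/reference readings."""
    return statistics.mean(abs(s - r) for s, r in zip(sensor, reference))

def pct_outside_limit(sensor, reference, limit):
    """Percentage of paired readings whose difference exceeds `limit`."""
    n = sum(abs(s - r) > limit for s, r in zip(sensor, reference))
    return 100 * n / len(sensor)

hr_sensor    = [72, 75, 80, 78]  # illustrative values, not study data
hr_reference = [71, 74, 82, 78]
print(mean_abs_diff(hr_sensor, hr_reference))         # 1.0 bpm
print(pct_outside_limit(hr_sensor, hr_reference, 8))  # 0.0 % beyond 8 bpm
```

Averaging each device's readings over 15-minute windows before comparing, as the study did, smooths transient disagreement and would be applied to both input lists before calling these functions.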
Subject(s)
Morbid Obesity, Wearable Electronic Devices, Heart Rate, Humans, Physiologic Monitoring/methods, Reproducibility of Results, Respiratory Rate
ABSTRACT
BACKGROUND: Continuous monitoring using wireless wearable sensors is a promising solution for use in clinical practice and in the home setting. It is important to involve nurses to ensure successful implementation. This paper aims to provide an overview of 1) factors affecting implementation of continuous monitoring using wireless wearable sensors, by evaluating nurses' experiences with its use on the nursing ward, and 2) nurses' expectations for its use in the home setting. METHODS: Semi-structured interviews were conducted with 16 nurses from three teaching hospitals in the Netherlands, covering constructs from the Consolidated Framework for Implementation Research (CFIR). A deductive approach of directed content analysis was applied. One additional factor was added using the Unified Theory of Acceptance and Use of Technology (UTAUT). Quotes and domains were rated on valence (positive, neutral, negative) and strength (strong: -2 or +2; neutral: 0; weak: -1 or +1). RESULTS: Data were collected on 27 CFIR constructs and 1 UTAUT construct. In the experience of at least 8 nurses, five constructs had a strong positive influence on implementation on the nursing ward: relative advantage (e.g., early detection of deterioration), patient needs and resources (e.g., feeling safe), networks and communications (e.g., executing tasks together), personal attributes (e.g., experience with the intervention), and implementation leaders (e.g., a project leader). Five constructs had a strong negative influence: evidence strength and quality (e.g., lack of evidence from practical experience), complexity (e.g., number of process steps), design quality and packaging (e.g., poor sensor quality), compatibility (e.g., changes in work), and facilitating conditions (e.g., Wi-Fi connection). Nurses expected continuous monitoring in the home setting to be hindered by compatibility with work processes and to be facilitated by staff's access to information. Technical facilitating conditions (e.g., interoperability) were suggested to be beneficial for further development. CONCLUSIONS: This paper provides an overview of factors influencing implementation of continuous monitoring, including their relative importance, based on nurses' experiences with use on nursing wards and their perspectives for use in the home setting. Implementation of continuous monitoring is affected by a wide range of factors. This overview may be used as a guideline for future implementations.
ABSTRACT
BACKGROUND: Telehealth interventions, that is, health care provided over a distance using information and communication technology, are suggested as a solution to rising health care costs by reducing hospital service use. However, the extent to which this is possible is unclear. OBJECTIVE: The aim of this study is to evaluate the effect of telehealth on the use of hospital services, that is, (duration of) hospitalizations, and to compare the effects between telehealth types and health conditions. METHODS: We searched PubMed, Scopus, and the Cochrane Library from inception until April 2019. Peer-reviewed randomized controlled trials (RCTs) reporting the effect of telehealth interventions on hospital service use compared with usual care were included. Risk of bias was assessed using the Cochrane Risk of Bias 2 tool and quality of evidence according to the Grading of Recommendations Assessment, Development and Evaluation guidelines. RESULTS: We included 127 RCTs in the meta-analysis. Of these RCTs, 82.7% (105/127) had a low risk of bias or some concerns overall. High-quality evidence shows that telehealth reduces the risk of all-cause or condition-related hospitalization by 18 (95% CI 0-30) and 37 (95% CI 20-60) per 1000 patients, respectively. We found high-quality evidence that telehealth leads to reductions in the mean all-cause and condition-related hospitalizations, with 50 and 110 fewer hospitalizations per 1000 patients, respectively. Overall, the all-cause hospital days decreased by 1.07 (95% CI -1.76 to -0.39) days per patient. For hospitalized patients, the mean hospital stay for condition-related hospitalizations decreased by 0.89 (95% CI -1.42 to -0.36) days. The effects were similar between telehealth types and health conditions. A trend was observed for studies with longer follow-up periods yielding larger effects. CONCLUSIONS: Small to moderate reductions in hospital service use can be achieved using telehealth. 
It should be noted that, despite the large number of included studies, uncertainties around the magnitude of effects remain, and not all effects are statistically significant.
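The pooled effects above are expressed as absolute differences per 1000 patients. The arithmetic behind that framing is simple; the baseline and intervention risks below are invented purely to illustrate how an 18-per-1000 figure arises, and are not the trial data.

```python
# Back-of-the-envelope conversion of an absolute risk difference into
# "fewer events per 1000 patients". Input risks are illustrative assumptions.
def fewer_per_1000(control_risk, intervention_risk):
    """Absolute risk difference expressed per 1000 patients."""
    return round((control_risk - intervention_risk) * 1000)

# e.g. all-cause hospitalization risk falling from 30.0% to 28.2%
print(fewer_per_1000(0.300, 0.282))  # 18 fewer hospitalizations per 1000
```

The same arithmetic reads the condition-related result: a risk difference of 0.037 corresponds to 37 fewer hospitalizations per 1000 patients.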
Subject(s)
Telemedicine, Bias, Hospitalization, Hospitals, Humans, Length of Stay
ABSTRACT
OBJECTIVE: To determine the budget impact of virtual care. METHODS: We conducted a budget impact analysis of virtual care from the perspective of a large teaching hospital in the Netherlands. Virtual care included remote monitoring of vital signs and three daily remote contacts. Net budget impact over 5 years and net costs per patient per day (costs/patient/day) were calculated for different scenarios: implementation in one ward, in two different wards, in the entire hospital, and in multiple hospitals. Sensitivity analyses included best-case and worst-case scenarios, and reducing the frequency of daily remote contacts. RESULTS: Net budget impact over 5 years was €2 090 000 for implementation in one ward, €410 000 for two wards and -€6 206 000 for the entire hospital. Costs/patient/day in the first year were €303 for implementation in one ward, €94 for two wards and €11 for the entire hospital, decreasing in subsequent years to a mean of €259 (SD=€72), €17 (SD=€10) and -€55 (SD=€44), respectively. Projecting implementation in every Dutch hospital resulted in a net budget impact over 5 years of -€445 698 500. For this scenario, costs/patient/day decreased to -€37 in the first year, and to €54 in subsequent years in the base case. CONCLUSIONS: With present cost levels, virtual care only saves money if it is deployed at sufficient scale or if it can be designed such that the active involvement of health professionals is minimised. Taking a greenfield approach, involving larger numbers of hospitals, further decreases costs compared with implementing virtual care in one hospital alone.
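The scale effect in the conclusion (savings only at sufficient scale) can be sketched with a toy budget model. All figures below (fixed costs, net cost per patient-day, patient volumes) are invented placeholders, not the study's inputs; only the structure (fixed costs amortised over more patient-days flipping the sign of the net impact) reflects the abstract.

```python
# Toy 5-year budget impact model with assumed, illustrative inputs.
def net_budget_impact(fixed_per_year, net_cost_per_patient_day,
                      patient_days_per_year, years=5):
    """Total net budget impact: yearly fixed costs plus the net cost
    (cost minus savings) accumulated over all patient-days."""
    return years * (fixed_per_year
                    + net_cost_per_patient_day * patient_days_per_year)

# One ward: fixed costs spread over few patient-days -> net cost
print(net_budget_impact(100_000, 10, 3_000))    # 650000 (costs money)
# Hospital-wide: same fixed costs over many patient-days, net savings per day
print(net_budget_impact(100_000, -5, 60_000))   # -1000000 (saves money)
```

The sign flip between the two calls mirrors the study's finding that the same intervention moves from a net cost in one ward to net savings hospital-wide.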