Results 1 - 20 of 76
1.
BMJ Open; 14(1): e071598, 2024 01 17.
Article in English | MEDLINE | ID: mdl-38233050

ABSTRACT

OBJECTIVES: To estimate the potential referral rate and cost impact at different cut-off points of a recently developed sepsis prediction model for general practitioners (GPs). DESIGN: Prospective observational study with decision tree modelling. SETTING: Four out-of-hours GP services in the Netherlands. PARTICIPANTS: 357 acutely ill adult patients assessed during home visits. PRIMARY AND SECONDARY OUTCOME MEASURES: The primary outcome is the cost per patient from a healthcare perspective in four scenarios based on different cut-off points of the sepsis prediction model for referral. Second, the number of hospital referrals is estimated for the different scenarios. The potential impact of referral of patients with sepsis on mortality and hospital admission was estimated by an expert panel. Using these study data, a decision tree with a time horizon of 1 month was built to estimate the referral rate and cost impact if the model were implemented. RESULTS: At low cut-offs of the prediction model (score 2 or 3 on a scale from 0 to 6), referral rates for patients with sepsis were higher than observed in current practice (99% and 91%, respectively, compared with 88% observed). However, referral was also substantially higher for patients who did not need hospital assessment. As a consequence, cost savings due to referral of patients with sepsis were offset by increased costs due to unnecessary referrals at all cut-offs of the prediction model. CONCLUSIONS: Guidance for referral of adult patients with suspected sepsis in the primary care setting using any cut-off point of the sepsis prediction model is not likely to save costs. The model should only be incorporated in sepsis guidelines for GPs if improvement of care can be demonstrated in an implementation study. TRIAL REGISTRATION NUMBER: Dutch Trial Register (NTR 7026).
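
The cost comparison described here boils down to weighing referral costs against the expected cost of missed sepsis at each cut-off. Below is a minimal sketch of that trade-off; the referral probabilities and cost figures are hypothetical placeholders, not the study's decision-tree parameters.

```python
# Sketch of a one-month expected-cost comparison across referral cut-offs.
# All numbers below are illustrative assumptions, not values from the study.

def expected_cost_per_patient(p_sepsis, p_referred_sepsis, p_referred_no_sepsis,
                              cost_referral, cost_missed_sepsis):
    """Expected cost per patient for one cut-off of a referral rule."""
    referred = p_sepsis * p_referred_sepsis + (1 - p_sepsis) * p_referred_no_sepsis
    missed = p_sepsis * (1 - p_referred_sepsis)
    return referred * cost_referral + missed * cost_missed_sepsis

# Hypothetical scenarios: lower cut-offs refer more sepsis patients but also
# many patients who did not need hospital assessment.
scenarios = {
    "cut-off >=2": {"p_referred_sepsis": 0.99, "p_referred_no_sepsis": 0.80},
    "cut-off >=3": {"p_referred_sepsis": 0.91, "p_referred_no_sepsis": 0.60},
    "observed":    {"p_referred_sepsis": 0.88, "p_referred_no_sepsis": 0.40},
}

for name, s in scenarios.items():
    cost = expected_cost_per_patient(
        p_sepsis=0.42,               # assumed sepsis prevalence
        cost_referral=1500.0,        # assumed cost of one hospital referral
        cost_missed_sepsis=10000.0,  # assumed downstream cost of a missed case
        **s,
    )
    print(f"{name}: expected cost per patient = {cost:.0f}")
```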


Subject(s)
General Practitioners, Sepsis, Adult, Humans, Cost-Benefit Analysis, Prospective Studies, Primary Health Care, Sepsis/diagnosis, Sepsis/therapy
2.
Clin Microbiol Infect; 29(6): 781-788, 2023 Jun.
Article in English | MEDLINE | ID: mdl-36736662

ABSTRACT

OBJECTIVES: To test whether Bacillus Calmette-Guérin (BCG) vaccination would reduce the incidence of COVID-19 and other respiratory tract infections (RTIs) in older adults with one or more comorbidities. METHODS: Community-dwelling adults aged 60 years or older with one or more underlying comorbidities and no contraindications to BCG vaccination were randomized 1:1 to BCG or placebo vaccination and followed for 6 months. The primary endpoint was the incidence of self-reported, test-confirmed COVID-19. Secondary endpoints included COVID-19 hospital admissions and clinically relevant RTIs (i.e. RTIs including but not limited to COVID-19 requiring medical intervention). COVID-19 and clinically relevant RTI episodes were adjudicated. Incidences were compared using Fine-Gray regression, accounting for competing events. RESULTS: A total of 6112 participants with a median age of 69 years (interquartile range, 65-74) and a median of 2 (interquartile range, 1-3) comorbidities were randomized to BCG (n = 3058) or placebo (n = 3054) vaccination. COVID-19 infections were reported by 129 BCG recipients compared with 115 placebo recipients [hazard ratio (HR), 1.12; 95% CI, 0.87-1.44]. COVID-19-related hospitalization occurred in 18 BCG and 21 placebo recipients (HR, 0.86; 95% CI, 0.46-1.61). During the study period, 13 BCG recipients died compared with 18 placebo recipients (HR, 0.71; 95% CI, 0.35-1.43); 11 of these deaths (35%) were COVID-19-related: six in the placebo group and five in the BCG group. Clinically relevant RTI was reported by 66 BCG and 72 placebo recipients (HR, 0.92; 95% CI, 0.66-1.28). DISCUSSION: BCG vaccination does not protect older adults with comorbidities against COVID-19, COVID-19 hospitalization, or clinically relevant RTIs.
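
As a back-of-the-envelope check on the reported effect size, a crude risk ratio with a log-scale 95% CI can be computed from the event counts given in the abstract. This ignores follow-up time and competing events, so it only approximates the Fine-Gray hazard ratio reported in the paper.

```python
import math

# Event counts taken from the abstract (COVID-19 infections).
events_bcg, n_bcg = 129, 3058
events_placebo, n_placebo = 115, 3054

risk_bcg = events_bcg / n_bcg
risk_placebo = events_placebo / n_placebo
rr = risk_bcg / risk_placebo

# Standard error of log(RR) for two independent binomial proportions.
se_log_rr = math.sqrt(1 / events_bcg - 1 / n_bcg + 1 / events_placebo - 1 / n_placebo)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"crude risk ratio = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# Roughly matches the reported HR of 1.12 (95% CI 0.87-1.44); the published
# estimate additionally accounts for follow-up time and competing events.
```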


Subject(s)
COVID-19, Humans, Aged, COVID-19/epidemiology, COVID-19/prevention & control, BCG Vaccine, Vaccination, Hospitalization, Incidence
3.
BMC Emerg Med; 22(1): 208, 2022 12 23.
Article in English | MEDLINE | ID: mdl-36550392

ABSTRACT

Accurate sepsis diagnosis is paramount for treatment decisions, especially at the emergency department (ED). To improve diagnosis, clinical decision support (CDS) tools are being developed with machine learning (ML) algorithms, using a wide range of variable groups. ML models can find patterns in Electronic Health Record (EHR) data that are unseen by the human eye. A prerequisite for a good model is the use of high-quality labels. Sepsis gold-standard labels are hard to define because of a lack of reliable diagnostic tools for sepsis at the ED. Therefore, standard clinical tools, such as clinical prediction scores (e.g. the modified early warning score and the quick sequential organ failure assessment) and claims-based methods (e.g. ICD-10), are used to generate suboptimal labels. As a consequence, training on these "silver" labels yields poorly trained models. In this study, we trained ML models for sepsis diagnosis at the ED with labels of 375 ED visits assigned by an endpoint adjudication committee (EAC) that consisted of 18 independent experts. Our objective was to evaluate which routinely measured variables show diagnostic value for sepsis. We performed univariate testing and trained multiple ML models with 95 routinely measured variables from three variable groups: demographic and vital, laboratory, and advanced haematological variables. Apart from known diagnostic variables, we identified added diagnostic value for less conventional variables such as eosinophil count and platelet distribution width. In this explorative study, we show that the use of an EAC together with ML can identify new targets for future sepsis diagnosis research.
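
A minimal sketch of the described workflow (univariate testing of routinely measured variables followed by a multivariable ML model) is shown below on synthetic data; the variable names and the model choice are assumptions for illustration, not the study's exact pipeline.

```python
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 375  # same order of magnitude as the adjudicated ED visits

# Synthetic stand-ins for routinely measured variables.
X = pd.DataFrame({
    "heart_rate": rng.normal(95, 20, n),
    "crp": rng.gamma(2.0, 40.0, n),
    "eosinophil_count": rng.gamma(1.5, 0.1, n),
    "platelet_distribution_width": rng.normal(12, 2, n),
})
y = rng.integers(0, 2, n)  # stand-in for the adjudicated sepsis label

# Univariate testing: compare each variable between label groups.
for col in X.columns:
    stat, p = mannwhitneyu(X.loc[y == 1, col], X.loc[y == 0, col])
    print(f"{col}: Mann-Whitney p = {p:.3f}")

# Multivariable model with cross-validated discrimination.
auc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                      X, y, cv=5, scoring="roc_auc")
print(f"cross-validated AUC = {auc.mean():.2f}")
```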


Subject(s)
Emergency Service, Hospital, Sepsis, Humans, Machine Learning, Algorithms, Sepsis/diagnosis, Social Group, Retrospective Studies
4.
PLoS One; 17(7): e0270858, 2022.
Article in English | MEDLINE | ID: mdl-35816504

ABSTRACT

OBJECTIVES: To evaluate the prognostic value of the coefficient of variance of axial light loss of monocytes (cv-ALL of monocytes) for adverse clinical outcomes in patients suspected of infection in the emergency department (ED). METHODS: We performed an observational, retrospective monocenter study including all medical patients ≥18 years admitted to the ED between September 2016 and June 2019 with suspected infection. Adverse clinical outcomes included 30-day mortality and ICU/MCU admission <3 days after presentation. We determined the additional value of monocyte cv-ALL and compared it with frequently used clinical prediction scores (SIRS, qSOFA, MEWS). Next, we developed a clinical model with routinely available parameters at the ED, including cv-ALL of monocytes. RESULTS: A total of 3526 patients were included. The OR for cv-ALL of monocytes alone was 2.21 (1.98-2.47) for 30-day mortality and 2.07 (1.86-2.29) for ICU/MCU admission <3 days after ED presentation. When cv-ALL of monocytes was combined with a clinical score, the prognostic accuracy increased significantly for all tested scores (SIRS, qSOFA, MEWS). The maximum AUC for a model with routinely available parameters at the ED was 0.81 to predict 30-day mortality and 0.81 for ICU/MCU admission. CONCLUSIONS: Cv-ALL of monocytes is a readily available biomarker that is useful as a prognostic marker to predict 30-day mortality. Furthermore, it can be used to improve routine prediction of adverse clinical outcomes at the ED. CLINICAL TRIAL REGISTRATION: Registered in the Dutch Trial Register (NTR) under number 6916.
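
The "clinical score plus biomarker" comparison amounts to fitting nested prediction models and comparing their discrimination. A compact sketch on synthetic data follows; the variable names (mews, cv_all_monocytes) and effect sizes are placeholders, not the study's dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
mews = rng.poisson(2.5, n).astype(float)   # clinical score stand-in
cv_all_monocytes = rng.normal(15, 4, n)    # biomarker stand-in

# Synthetic outcome loosely driven by both predictors.
logit = -5 + 0.5 * mews + 0.15 * cv_all_monocytes
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_score = mews.reshape(-1, 1)
X_combined = np.column_stack([mews, cv_all_monocytes])

for name, X in [("score only", X_score), ("score + cv-ALL", X_combined)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```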


Subject(s)
Organ Dysfunction Scores, Sepsis, Emergency Service, Hospital, Hospital Mortality, Humans, Monocytes, Prognosis, ROC Curve, Retrospective Studies
5.
Clin Microbiol Infect; 28(11): 1502.e1-1502.e5, 2022 Nov.
Article in English | MEDLINE | ID: mdl-35724869

ABSTRACT

OBJECTIVE: Detection of the intracellular bacterium Coxiella burnetii, the causative agent of chronic Q fever, is notoriously difficult. Diagnosis of and duration of antibiotic treatment for chronic Q fever are partly determined by detection of the bacterium with polymerase chain reaction (PCR). Fluorescence in situ hybridization (FISH) might be a promising technique for detecting C. burnetii in tissue samples from chronic Q fever patients, but its value in comparison with PCR is uncertain. We aimed to assess the value of FISH for detecting C. burnetii in tissue of chronic Q fever patients. METHODS: FISH and PCR were performed on tissue samples from Dutch chronic Q fever patients collected during surgery or autopsy. Sensitivity, specificity, and overall diagnostic accuracy were calculated. Additionally, data on patient and disease characteristics were collected from electronic medical records. RESULTS: In total, 49 tissue samples, mainly from vascular walls, heart valves, or placentas, obtained from 39 chronic Q fever patients, were examined by FISH and PCR. The sensitivity and specificity of FISH compared with PCR for detecting C. burnetii in tissue samples from chronic Q fever patients were 45.2% (95% confidence interval (CI), 27.3% - 64.0%) and 84.6% (95% CI, 54.6% - 98.1%), respectively. The overall diagnostic accuracy was 56.8% (95% CI, 42.2% - 72.3%). Two C. burnetii PCR-negative placentas were FISH-positive. Four FISH results (8.2%) were deemed inconclusive because of autofluorescence. CONCLUSION: Given this modest overall diagnostic accuracy, we conclude that FISH has limited value in the routine diagnostics of chronic Q fever.
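
The reported sensitivity, specificity and overall accuracy follow directly from a 2x2 table of FISH versus PCR. The sketch below uses counts reconstructed to be consistent with the reported percentages after excluding the inconclusive FISH results; they are an assumption, not figures quoted in the paper.

```python
# 2x2 table of FISH vs PCR on the conclusive samples.
# Counts are reconstructed from the reported percentages and are approximate.
tp, fn = 14, 17   # PCR-positive samples: FISH-positive / FISH-negative
tn, fp = 11, 2    # PCR-negative samples: FISH-negative / FISH-positive

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"sensitivity = {sensitivity:.1%}")   # ~45.2%
print(f"specificity = {specificity:.1%}")   # ~84.6%
print(f"accuracy    = {accuracy:.1%}")      # ~56.8%
```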


Subject(s)
Coxiella burnetii, Q Fever, Pregnancy, Female, Humans, Coxiella burnetii/genetics, Q Fever/diagnosis, Q Fever/microbiology, In Situ Hybridization, Fluorescence/methods, Heart Valves/microbiology, Anti-Bacterial Agents
6.
Pathogens; 11(5), 2022 May 09.
Article in English | MEDLINE | ID: mdl-35631080

ABSTRACT

Early recognition of sepsis is essential for improving outcomes and preventing complications such as organ failure, depression, and neurocognitive impairment. The emergency department (ED) plays a key role in the early identification of sepsis, but clinicians lack diagnostic tools. Potentially, biomarkers could assist clinicians in the ED, but no marker has yet been successfully implemented in daily practice with good clinical performance. Pancreatic stone protein (PSP) is a promising biomarker in the context of sepsis, but little is known about the diagnostic performance of PSP in the ED. We prospectively investigated the diagnostic value of PSP in such a population of patients with suspected infection. PSP was compared with currently used biomarkers, including white blood cell count (WBC) and C-reactive protein (CRP). Of the 156 patients included in this study, 74 (47.4%) were diagnosed with uncomplicated infection and 26 (16.7%) with sepsis, while 56 (35.9%) eventually had no infection. PSP was significantly higher in sepsis patients than in patients without sepsis. In multivariate regression, PSP was a significant predictor of sepsis, with an area under the curve (AUC) of 0.69. The positive and negative predictive values for this model were 100% and 84.4%, respectively. Altogether, these findings show that PSP, measured at the ED of a tertiary hospital, is associated with sepsis but lacks the diagnostic performance to be used as a single marker.
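
Positive and negative predictive values follow from the confusion matrix of a model at its chosen cut-off. The counts below are reconstructed to match the reported percentages and are an assumption, not figures from the paper; a PPV of 100% simply means the model produced no false positives at that cut-off.

```python
def predictive_values(tp, fp, tn, fn):
    """Positive and negative predictive value from confusion-matrix counts."""
    ppv = tp / (tp + fp) if (tp + fp) else float("nan")
    npv = tn / (tn + fn) if (tn + fn) else float("nan")
    return ppv, npv

# Counts chosen to be consistent with n=156, PPV 100% and NPV 84.4% (assumed).
ppv, npv = predictive_values(tp=2, fp=0, tn=130, fn=24)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")
```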

7.
Comput Biol Med; 146: 105621, 2022 07.
Article in English | MEDLINE | ID: mdl-35617725

ABSTRACT

Urinary Tract Infections (UTIs) are among the most frequently occurring infections in the hospital. Urinalysis and urine culture are the main tools used for diagnosis. Whereas urinalysis is sufficiently sensitive for detecting UTI, it has a relatively low specificity, leading to unnecessary treatment with antibiotics and the risk of increasing antibiotic resistance. We evaluated the current diagnostic process against an expert-based label for UTI as the outcome, established retrospectively using data from the Electronic Health Records. We found that the combination of urinalysis results with the Gram stain and other readily available parameters can be used effectively for predicting UTI. Based on the obtained information, we engineered a clinical decision support system (CDSS) using the reliable semi-supervised ensemble learning (RESSEL) method, and found it to be more accurate than urinalysis or urine culture for prediction of UTI. The CDSS provides clinicians with this prediction within hours of ordering a culture and thereby enables them to hold off on prematurely prescribing antibiotics for UTI while awaiting the culture results.
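
RESSEL itself is a specific published method; as a generic stand-in, the sketch below shows how a semi-supervised ensemble can be trained in scikit-learn when only part of the visits carry an expert-based UTI label (unlabelled samples are marked with -1). It illustrates the idea, not the actual CDSS.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.semi_supervised import SelfTrainingClassifier

rng = np.random.default_rng(2)
n = 1000

# Synthetic stand-ins for urinalysis, Gram stain and other routine parameters.
X = rng.normal(size=(n, 5))
y_true = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

# Pretend only 20% of visits have an expert-based UTI label; the rest are -1.
y_partial = y_true.copy()
unlabelled = rng.random(n) > 0.2
y_partial[unlabelled] = -1

model = SelfTrainingClassifier(RandomForestClassifier(n_estimators=200, random_state=0))
model.fit(X, y_partial)

# Score the held-back labels of the initially unlabelled visits.
pred = model.predict(X[unlabelled])
print("accuracy on initially unlabelled visits:",
      round((pred == y_true[unlabelled]).mean(), 2))
```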


Subject(s)
Antimicrobial Stewardship, Urinary Tract Infections, Anti-Bacterial Agents/therapeutic use, Humans, Retrospective Studies, Urinalysis/methods, Urinary Tract Infections/diagnosis, Urinary Tract Infections/drug therapy
9.
Br J Gen Pract; 72(719): e437-e445, 2022 06.
Article in English | MEDLINE | ID: mdl-35440467

ABSTRACT

BACKGROUND: Recognising patients who need immediate hospital treatment for sepsis while simultaneously limiting unnecessary referrals is challenging for GPs. AIM: To develop and validate a sepsis prediction model for adult patients in primary care. DESIGN AND SETTING: This was a prospective cohort study in four out-of-hours primary care services in the Netherlands, conducted between June 2018 and March 2020. METHOD: Adult patients who were acutely ill and received home visits were included. Nine clinical variables were selected as candidate predictors, in addition to the biomarkers C-reactive protein, procalcitonin, and lactate. The primary endpoint was sepsis within 72 hours of inclusion, as established by an expert panel. Multivariable logistic regression with backward selection was used to design an optimal model with continuous clinical variables. The added value of the biomarkers was evaluated. Subsequently, a simple model using single cut-off points for the continuous variables was developed and externally validated in two emergency department populations. RESULTS: A total of 357 patients were included with a median age of 80 years (interquartile range 71-86), of whom 151 (42%) were diagnosed with sepsis. A model based on a simple count of one point for each of six variables (aged >65 years; temperature >38°C; systolic blood pressure ≤110 mmHg; heart rate >110/min; saturation ≤95%; and altered mental status) had good discrimination and calibration (C-statistic of 0.80 [95% confidence interval = 0.75 to 0.84]; Brier score 0.175). Biomarkers did not improve the performance of the model and were therefore not included. The model was robust during external validation. CONCLUSION: Based on this study's GP out-of-hours population, a simple model can accurately predict sepsis in acutely ill adult patients using readily available clinical parameters.
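
The final model is a simple count of six yes/no criteria, which translates almost directly into code. The sketch below encodes the cut-offs exactly as listed in the abstract; the referral threshold applied to the total score is a hypothetical parameter, not part of the published model.

```python
def gp_sepsis_score(age, temp_c, sbp_mmhg, heart_rate, spo2_pct, altered_mental_status):
    """Count of the six criteria reported in the abstract (0-6)."""
    criteria = [
        age > 65,
        temp_c > 38.0,
        sbp_mmhg <= 110,
        heart_rate > 110,
        spo2_pct <= 95,
        bool(altered_mental_status),
    ]
    return sum(criteria)

# Example acutely ill patient seen during a home visit (illustrative values).
score = gp_sepsis_score(age=80, temp_c=38.6, sbp_mmhg=105,
                        heart_rate=118, spo2_pct=94, altered_mental_status=True)
print(f"score = {score} / 6")

# The cut-off used for referral (e.g. >=2 or >=3) is a policy choice evaluated
# in the companion cost study, not a fixed part of the score itself.
REFERRAL_CUTOFF = 3  # assumed for illustration
print("consider referral" if score >= REFERRAL_CUTOFF else "lower risk")
```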


Subject(s)
Models, Statistical, Sepsis, Adult, Aged, 80 and over, Biomarkers, Cohort Studies, Humans, Primary Health Care, Prognosis, Prospective Studies, Sepsis/diagnosis
10.
PLoS One; 17(4): e0267140, 2022.
Article in English | MEDLINE | ID: mdl-35436301

ABSTRACT

BACKGROUND: The ability to accurately distinguish bacterial from viral infection would help clinicians better target antimicrobial therapy during suspected lower respiratory tract infections (LRTI). Although technological developments make it feasible to rapidly generate patient-specific microbiota profiles, evidence is required to show the clinical value of using microbiota data for infection diagnosis. In this study, we investigated whether adding nasal cavity microbiota profiles to readily available clinical information could improve machine learning classifiers to distinguish bacterial from viral infection in patients with LRTI. RESULTS: Various multi-parametric Random Forests classifiers were evaluated on the clinical and microbiota data of 293 LRTI patients for their prediction accuracy in differentiating bacterial from viral infection. The most predictive variable was C-reactive protein (CRP). We observed a marginal prediction improvement when the 7 most prevalent nasal microbiota genera were added to the CRP model. In contrast, adding three clinical variables (absolute neutrophil count, consolidation on X-ray, and age group) to the CRP model significantly improved the prediction. The best model correctly predicted 85% of the 'bacterial' patients and 82% of the 'viral' patients using 13 clinical variables and 3 nasal cavity microbiota genera (Staphylococcus, Moraxella, and Streptococcus). CONCLUSIONS: We developed high-accuracy multi-parametric machine learning classifiers to differentiate bacterial from viral infections in LRTI patients of various ages. We demonstrated the predictive value of four easy-to-collect clinical variables which facilitate personalized and accurate clinical decision-making. We observed that nasal cavity microbiota correlate with the clinical variables and thus may not add significant value to diagnostic algorithms that aim to differentiate bacterial from viral infections.
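
The core analysis compares Random Forest classifiers built on nested feature sets (CRP alone, CRP plus clinical variables, plus microbiota genera). A schematic version on synthetic data is shown below; the column names and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 293  # same order of magnitude as the LRTI cohort

df = pd.DataFrame({
    "crp": rng.gamma(2.0, 50.0, n),
    "neutrophil_count": rng.normal(8, 3, n),
    "consolidation_on_xray": rng.integers(0, 2, n),
    "age_group": rng.integers(0, 4, n),
    "Staphylococcus": rng.random(n),   # relative abundances (stand-ins)
    "Moraxella": rng.random(n),
    "Streptococcus": rng.random(n),
})
y = (df["crp"] + 20 * df["consolidation_on_xray"] + rng.normal(0, 60, n) > 130).astype(int)

feature_sets = {
    "CRP only": ["crp"],
    "CRP + clinical": ["crp", "neutrophil_count", "consolidation_on_xray", "age_group"],
    "CRP + clinical + microbiota": list(df.columns),
}

for name, cols in feature_sets.items():
    auc = cross_val_score(RandomForestClassifier(n_estimators=300, random_state=0),
                          df[cols], y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: cross-validated AUC = {auc:.2f}")
```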


Subject(s)
Bacterial Infections, Microbiota, Respiratory Tract Infections, Virus Diseases, Bacterial Infections/drug therapy, C-Reactive Protein/metabolism, Humans, Nose/microbiology, Respiratory Tract Infections/drug therapy, Virus Diseases/diagnosis
11.
Int J Epidemiol; 51(5): 1481-1488, 2022 10 13.
Article in English | MEDLINE | ID: mdl-35352121

ABSTRACT

BACKGROUND: A causative role of Coxiella burnetii (the causative agent of Q fever) in the pathogenesis of B-cell non-Hodgkin lymphoma (NHL) has been suggested, although supporting studies show conflicting evidence. We assessed whether this association is present by performing a detailed analysis of the risk of mature B-cell NHL after Q fever, during and after the largest Q fever outbreak reported worldwide, in the entire Dutch population over a 16-year period. METHODS: We performed an ecological analysis. The incidence of mature B-cell NHL in the entire Dutch population from 2002 until 2017 was studied and modelled with reported acute Q fever cases as the determinant. The adjusted relative risk of NHL after acute Q fever, as the primary outcome measure, was calculated using Poisson regression. RESULTS: Between January 2002 and December 2017, 266 050 745 person-years were observed, during which 61 424 persons were diagnosed with mature B-cell NHL. In total, 4310 persons were diagnosed with acute Q fever, with the highest incidence in 2009. The adjusted relative risk of NHL after acute Q fever was 1.02 (95% CI 0.97-1.06, P = 0.49) overall, and 0.98 (95% CI 0.89-1.07, P = 0.60), 0.99 (95% CI 0.87-1.12, P = 0.85) and 0.98 (95% CI 0.88-1.08, P = 0.67) for the subgroups of diffuse large B-cell lymphoma, follicular lymphoma and B-cell chronic lymphocytic leukaemia, respectively. Modelling with lag times (1-4 years) did not change the interpretation. CONCLUSION: We found no evidence for an association between C. burnetii and NHL after studying the risk of mature B-cell NHL after a large Q fever outbreak in the Netherlands.
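
The ecological design models yearly NHL counts against reported acute Q fever cases, with person-years as an exposure offset in a Poisson regression. A minimal statsmodels sketch follows; the data frame below is synthetic and only mimics the structure of the national registry data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(4)
years = np.arange(2002, 2018)

# Synthetic yearly data mimicking the registry structure (not the real figures).
df = pd.DataFrame({
    "year": years,
    "person_years": rng.normal(16.6e6, 2e5, len(years)),
    "acute_q_fever_cases": rng.poisson(50, len(years)),
})
df.loc[df["year"].between(2007, 2010), "acute_q_fever_cases"] += rng.poisson(800, 4)
df["nhl_cases"] = rng.poisson(3800, len(years))

# Poisson regression of NHL counts on Q fever cases, offset by log person-years.
X = sm.add_constant(df[["acute_q_fever_cases"]])
model = sm.GLM(df["nhl_cases"], X,
               family=sm.families.Poisson(),
               offset=np.log(df["person_years"]))
result = model.fit()

# exp(coefficient) is interpreted as the relative risk per additional reported case.
print(np.exp(result.params["acute_q_fever_cases"]))
```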


Subject(s)
Coxiella burnetii, Lymphoma, Non-Hodgkin, Q Fever, Disease Outbreaks, Humans, Lymphoma, Non-Hodgkin/epidemiology, Q Fever/diagnosis, Q Fever/epidemiology, Risk
12.
J Clin Med; 11(3), 2022 Jan 20.
Article in English | MEDLINE | ID: mdl-35159977

ABSTRACT

BACKGROUND: The geographical similarities of the Dutch 2007-2010 Q fever outbreak and the start of the 2020 coronavirus disease 2019 (COVID-19) outbreak in the Netherlands raised questions and provided a unique opportunity to study an association between Coxiella burnetii infection and the outcome following SARS-CoV-2 infection. METHODS: We performed a retrospective cohort study in two Dutch hospitals. We assessed evidence of previous C. burnetii infection in COVID-19 patients diagnosed at the ED during the first COVID-19 wave and compared a combined outcome of in-hospital mortality and intensive care unit (ICU) admission using adjusted odds ratios (OR). RESULTS: In total, 629 patients were included with a mean age of 68.0 years. Evidence of previous C. burnetii infection was found in 117 patients (18.6%). The combined primary outcome occurred in 40.2% and 40.4% of patients with and without evidence of previous C. burnetii infection, respectively (adjusted OR 0.926, 95% CI 0.605-1.416). The adjusted ORs for the secondary outcomes in-hospital mortality, ICU admission and regular ward admission did not show an association either. CONCLUSION: No influence of previous C. burnetii infection on the risk of ICU admission and/or mortality was observed for patients with COVID-19 presenting at the ED.
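
Adjusted odds ratios of this kind come from a logistic regression of the combined outcome on previous C. burnetii infection plus confounders, with exponentiated coefficients giving the ORs. A statsmodels sketch on synthetic data follows; the variable names and adjustment set are assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 629

df = pd.DataFrame({
    "previous_q_fever": (rng.random(n) < 0.186).astype(int),  # ~18.6% seropositive
    "age": rng.normal(68, 13, n),
    "male": rng.integers(0, 2, n),
})
# Synthetic combined outcome (ICU admission or in-hospital death), roughly 40% overall.
logit = -0.6 + 0.02 * (df["age"] - 68) + 0.1 * df["male"]
df["icu_or_death"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["previous_q_fever", "age", "male"]])
fit = sm.Logit(df["icu_or_death"], X).fit(disp=False)
odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())
print(odds_ratios["previous_q_fever"], conf_int.loc["previous_q_fever"].values)
```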

13.
J Clin Med; 11(4), 2022 Feb 16.
Article in English | MEDLINE | ID: mdl-35207289

ABSTRACT

The early recognition of acute kidney injury (AKI) is essential to improve outcomes and prevent complications such as chronic kidney disease, the need for renal-replacement therapy, and an increased length of hospital stay. Increasing evidence shows that inflammation plays an important role in the pathophysiology of AKI and mortality. Several inflammatory hematological ratios can be used to measure systemic inflammation. Therefore, the association between these ratios and outcomes (AKI and mortality) in patients suspected of having an infection at the emergency department was investigated. Data from the SPACE cohort were used. Cox regression was performed to investigate the association between seven hematological ratios and outcomes. A total of 1889 patients were included, of whom 160 (8.5%) developed AKI and 102 (5.4%) died within 30 days. The Cox proportional-hazards model revealed that the neutrophil-to-lymphocyte ratio (NLR), segmented-neutrophil-to-monocyte ratio (SMR), and neutrophil-lymphocyte-platelet ratio (NLPR) are independently associated with AKI within 30 days after emergency-department presentation. Additionally, the NLR, SMR and NLPR were associated with 30-day all-cause mortality. These findings are an important step forward for the early recognition of AKI. The use of these markers might enable emergency-department physicians to recognize and treat AKI in an early phase to potentially prevent complications.
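
The inflammatory ratios in this study are simple transformations of the differential blood count. The helper below computes NLR and SMR as their names imply; the NLPR formula varies between publications, so the version here (neutrophils divided by the lymphocyte-platelet product, scaled by 100) is an assumption to be checked against the paper.

```python
def hematological_ratios(neutrophils, segmented_neutrophils, lymphocytes,
                         monocytes, platelets):
    """Counts in 10^9/L. Returns NLR, SMR and an assumed NLPR definition."""
    nlr = neutrophils / lymphocytes
    smr = segmented_neutrophils / monocytes
    # NLPR definition assumed here; verify the exact formula used in the study.
    nlpr = neutrophils * 100 / (lymphocytes * platelets)
    return {"NLR": nlr, "SMR": smr, "NLPR": nlpr}

# Example differential count for a patient with suspected infection (illustrative).
print(hematological_ratios(neutrophils=9.0, segmented_neutrophils=8.2,
                           lymphocytes=0.8, monocytes=0.7, platelets=180.0))
```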

14.
Ned Tijdschr Geneeskd; 166, 2022 11 02.
Article in Dutch | MEDLINE | ID: mdl-36633034

ABSTRACT

Acute kidney injury is very common in hospitalized patients and has been described in up to twenty percent of admissions. Although there are many causes of acute kidney injury, one of the more frequently overlooked causes is the antibiotics prescribed during these admissions. In this article we discuss the two main causes of antibiotic-induced kidney injury, illustrated by two cases: one of ciprofloxacin crystal nephropathy and one of ciprofloxacin-induced tubulointerstitial nephritis. We discuss the pathophysiology, the most commonly involved antibiotics, the diagnostic work-up, and treatment.


Subject(s)
Acute Kidney Injury, Nephritis, Interstitial, Humans, Anti-Bacterial Agents/adverse effects, Nephritis, Interstitial/chemically induced, Kidney, Acute Kidney Injury/chemically induced, Ciprofloxacin/adverse effects
15.
PLoS One; 16(12): e0260942, 2021.
Article in English | MEDLINE | ID: mdl-34879093

ABSTRACT

BACKGROUND: Acute kidney injury (AKI) is a major health problem associated with considerable mortality and morbidity. Studies on clinical outcomes and mortality of AKI in the emergency department are scarce. The aim of this study was to assess the incidence, mortality and renal outcomes after AKI in patients with suspected infection at the emergency department. METHODS: We used data from the SPACE-cohort (SePsis in the ACutely ill patients in the Emergency department), which included consecutive patients who presented to the internal medicine emergency department with suspected infection. Hazard ratios (HR) were assessed using Cox regression to investigate the association between AKI, 30-day mortality and renal function decline up to 1 year after AKI. Survival in patients with and without AKI was assessed using Kaplan-Meier analyses. RESULTS: Of the 3105 patients in the SPACE-cohort, we included 1716 patients who fulfilled the inclusion criteria. Of these patients, 10.8% had an AKI episode. Mortality was 12.4% for the AKI group and 4.2% for the non-AKI patients. The adjusted HR for all-cause mortality at 30 days in AKI patients was 2.8 (95% CI 1.7-4.8). Moreover, the cumulative incidence of renal function decline was 69.8% for AKI patients and 39.3% for non-AKI patients. Patients with an episode of AKI had a higher risk of developing renal function decline (adjusted HR 3.3, 95% CI 2.4-4.5) at one year after the initial AKI episode at the emergency department. CONCLUSION: Acute kidney injury is common in patients with suspected infection in the emergency department and is significantly associated with 30-day mortality and renal function decline one year after AKI.
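
The survival comparison between AKI and non-AKI patients can be sketched with lifelines: Kaplan-Meier estimates per group and a Cox model for the adjusted hazard ratio. The synthetic data and covariates below are placeholders for the SPACE-cohort variables, with event rates chosen only to resemble the reported 30-day mortality.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(5)
n = 1716

df = pd.DataFrame({
    "aki": (rng.random(n) < 0.108).astype(int),   # ~10.8% with an AKI episode
    "age": rng.normal(68, 15, n),
})
# Synthetic 30-day follow-up: higher daily event rate in the AKI group.
base_rate = np.where(df["aki"] == 1, 0.0045, 0.0015)
time_to_death = rng.exponential(1 / base_rate)
df["time"] = np.minimum(time_to_death, 30.0)      # censor at 30 days
df["died"] = (time_to_death <= 30.0).astype(int)

# Kaplan-Meier estimate per group.
for label, grp in df.groupby("aki"):
    kmf = KaplanMeierFitter().fit(grp["time"], grp["died"], label=f"AKI={label}")
    print(f"AKI={label}: estimated 30-day survival = {kmf.predict(30.0):.3f}")

# Cox model for the hazard ratio of AKI, adjusted for age.
cph = CoxPHFitter().fit(df[["time", "died", "aki", "age"]],
                        duration_col="time", event_col="died")
print(cph.hazard_ratios_)
```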


Subject(s)
Acute Kidney Injury/mortality, Emergency Service, Hospital/statistics & numerical data, Infections/complications, Mortality/trends, Acute Kidney Injury/epidemiology, Acute Kidney Injury/etiology, Acute Kidney Injury/pathology, Aged, Case-Control Studies, Cohort Studies, Female, Humans, Incidence, Male, Middle Aged, Netherlands/epidemiology, Prognosis, Risk Factors, Survival Rate
16.
BMJ Open; 11(11): e050268, 2021 11 10.
Article in English | MEDLINE | ID: mdl-34758991

ABSTRACT

OBJECTIVES: The COVID-19 pandemic put healthcare under pressure and increased the shortage of care. This resulted in increased awareness of code status documentation (ie, whether limitations to specific life-sustaining treatments are in place), both in the medical field and in the public media. However, it is unknown whether this increased awareness changed the prevalence and content of code status documentation for COVID-19 patients. We aim to describe differences in code status documentation between infectious patients before the pandemic and COVID-19 patients. SETTING: University Medical Centre Utrecht, a tertiary care teaching academic hospital in the Netherlands. PARTICIPANTS: A total of 1715 patients were included, 129 in the COVID-19 cohort (a cohort of COVID-19 patients admitted from March 2020 to June 2020) and 1586 in the pre-COVID-19 cohort (a cohort of patients with (suspected) infections admitted between September 2016 and September 2018). PRIMARY AND SECONDARY OUTCOME MEASURES: We described the frequency of code status documentation, the frequency of discussion of this code status with the patient and/or family, and the content of the code status. RESULTS: Frequencies of code status documentation (69.8% vs 72.7%, respectively) and discussion (75.6% vs 73.3%, respectively) were similar in both cohorts. More patients in the COVID-19 cohort than in the pre-COVID-19 cohort had any treatment limitation as opposed to full code (40% vs 25%). Within the treatment limitations, 'no intensive care admission' (81% vs 51%) and 'no intubation' (69% vs 40%) were more frequently documented in the COVID-19 cohort. A smaller difference was seen in 'other limitation' (17% vs 9%), while 'no resuscitation' (96% vs 92%) was comparable between both periods. CONCLUSION: We observed no difference in the frequency of code status documentation or discussion in COVID-19 patients compared with a pre-COVID-19 cohort. However, treatment limitations were more prevalent in patients with COVID-19, especially 'no intubation' and 'no intensive care admission'.


Subject(s)
COVID-19, Cohort Studies, Documentation, Humans, Pandemics, SARS-CoV-2
17.
J Emerg Nurs; 47(6): 860-869, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34392956

ABSTRACT

INTRODUCTION: Retrospective studies suggest that rapid initiation of treatment results in a better prognosis for patients in the emergency department. There could be a difference between the actual medication administration time and the documented time in the electronic health record. In this study, the difference between the observed medication administration time and the documentation time was investigated. Patient and nurse characteristics were also tested for associations with the observed time differences. METHODS: In this prospective study, emergency nurses were followed by observers for a total of 3 months. Patient inclusion was divided over 2 time periods. The difference between the observed medication administration time and the corresponding electronic health record documentation time was measured. The association between patient/nurse characteristics and the difference between medication administration and documentation time was tested with a Spearman correlation or biserial correlation test. RESULTS: In 34 observed patients, the median difference between administration and documentation time was 6.0 minutes (interquartile range 2.0-16.0). In 9 (26.5%) patients, the actual time of medication administration differed by more than 15 minutes from the electronic health record documentation time. Higher temperature, lower saturation, oxygen dependency, and a higher Modified Early Warning Score were all correlated with an increasing difference between administration and documentation times. DISCUSSION: A difference between administration and documentation times of medication in the emergency department may be common, especially for more acute patients. This could bias, in part, previously reported time-to-treatment measurements from retrospective research designs, which should be kept in mind when outcomes of retrospective time-to-treatment studies are evaluated.
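
The association between patient characteristics and the administration-documentation time gap was tested with Spearman correlations, which in code is a one-liner per characteristic; the small synthetic arrays below only demonstrate the call, not the study data.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(8)
n = 34  # number of observed patients

# Synthetic stand-ins: time gap in minutes and a patient characteristic (MEWS).
time_gap_min = rng.exponential(8, n)
mews = np.clip(np.round(time_gap_min / 5 + rng.normal(0, 1.5, n)), 0, 14)

rho, p = spearmanr(mews, time_gap_min)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```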


Subject(s)
Documentation, Emergency Service, Hospital, Electronic Health Records, Humans, Prospective Studies, Retrospective Studies
18.
Clin Infect Dis; 73(8): 1476-1483, 2021 10 20.
Article in English | MEDLINE | ID: mdl-34028546

ABSTRACT

BACKGROUND: Chronic Q fever usually develops within 2 years after primary infection with Coxiella burnetii. We determined the interval between acute Q fever and diagnosis of chronic infection, assessed what factors contribute to a longer interval, and evaluated the long-term follow-up. METHODS: From 2007 to 2018, patients with chronic Q fever were included from 45 participating hospitals. The interval between acute and chronic infection was calculated in patients with a known day of first symptoms and/or serological confirmation of acute Q fever. Chronic Q fever-related complications and mortality were assessed by 2 investigators based on predefined criteria. RESULTS: In total, 313 (60.3%) proven, 81 (15.6%) probable, and 125 (24.1%) possible chronic Q fever patients were identified. The date of acute Q fever was known in 200 patients: in 45 (22.5%), the interval was longer than 2 years, with the longest observed interval being 9.2 years. Patients in whom serological follow-up was performed after acute Q fever were diagnosed less often after this 2-year interval (odds ratio, 0.26; 95% confidence interval, 0.12-0.54). Chronic Q fever-related complications occurred in 216 patients (41.6%). Chronic Q fever-related mortality occurred in 83 (26.5%) of proven and 3 (3.7%) of probable chronic Q fever patients. CONCLUSIONS: Chronic Q fever is still being diagnosed, and mortality continues to occur, 8 years after a large outbreak. Intervals between acute Q fever and diagnosis of chronic infection can exceed 9 years. We urge physicians to perform microbiological testing for chronic Q fever even many years after an outbreak or an episode of acute Q fever.
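
The reported odds ratio for serological follow-up can be reproduced from a 2x2 table with the standard Woolf logit interval. The cell counts below are hypothetical, chosen only to show the calculation; the abstract reports the OR and CI, not the underlying table.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Woolf 95% CI for a 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, math.exp(math.log(or_) - z * se), math.exp(math.log(or_) + z * se)

# Hypothetical counts: rows = serological follow-up yes/no,
# columns = diagnosed >2 years after acute Q fever yes/no.
print(odds_ratio_ci(a=10, b=90, c=30, d=70))
```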


Subject(s)
Coxiella burnetii, Q Fever, Disease Outbreaks, Humans, Q Fever/diagnosis, Q Fever/epidemiology
19.
Clin Microbiol Infect; 27(9): 1273-1278, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33813120

ABSTRACT

OBJECTIVES: We assessed the prognostic value of phase I IgG titres during treatment and follow-up of chronic Q fever. METHODS: We performed a retrospective cohort study to analyse the course of phase I IgG titres in chronic Q fever. We used a multivariable time-varying Cox regression to assess our primary (first disease-related event) and secondary (therapy failure) outcomes. In a second analysis, we evaluated serological characteristics after 1 year of therapy (a fourfold decrease in phase I IgG titre, absence of phase II IgM, and reaching a phase I IgG titre of ≤1:1024) with multivariable Cox regression. RESULTS: In total, 337 patients who were treated for proven (n = 284, 84.3%) or probable (n = 53, 15.7%) chronic Q fever were included. Complications occurred in 190 (56.4%), disease-related mortality in 71 (21.1%) and therapy failure in 142 (42.1%) patients. The course of phase I IgG titres was not associated with a first disease-related event (HR 1.00, 95% CI 0.86-1.15) or therapy failure (HR 1.02, 95% CI 0.91-1.15). Similar results were found for the serological characteristics for the primary (HR 0.97, 95% CI 0.62-1.51; HR 1.12, 95% CI 0.66-1.90; HR 0.99, 95% CI 0.57-1.69, respectively) and secondary outcomes (HR 0.86, 95% CI 0.57-1.29; HR 1.37, 95% CI 0.86-2.18; HR 0.80, 95% CI 0.48-1.34, respectively). DISCUSSION: Coxiella burnetii serology does not reliably predict disease-related events or therapy failure during treatment and follow-up of chronic Q fever. Alternative markers for disease management are needed, but, for now, management should be based on clinical factors, PCR results, and imaging results.
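
A time-varying Cox model uses a long-format data set with start/stop intervals so that the phase I IgG titre can change during follow-up. A minimal lifelines sketch is below; the generated data only demonstrate the data layout and model call, not the study's cohort.

```python
import numpy as np
import pandas as pd
from lifelines import CoxTimeVaryingFitter

rng = np.random.default_rng(6)
rows = []
for pid in range(200):
    titre = rng.normal(12, 2)          # log2 phase I IgG titre at start of therapy
    t = 0.0
    for _ in range(3):                 # up to three follow-up intervals per patient
        length = rng.uniform(3, 9)
        risk = 0.05 + 0.02 * max(titre - 10, 0)   # synthetic event risk per interval
        event = int(rng.random() < risk)
        rows.append({"id": pid, "start": t, "stop": t + length,
                     "log2_igg": titre, "event": event})
        if event:
            break
        t += length
        titre -= rng.uniform(0, 1)     # titre declines during treatment

long_df = pd.DataFrame(rows)
ctv = CoxTimeVaryingFitter()
ctv.fit(long_df, id_col="id", event_col="event", start_col="start", stop_col="stop")
print(ctv.summary[["coef", "exp(coef)"]])  # exp(coef) per unit change in log2 titre
```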


Subject(s)
Antibodies, Bacterial/blood, Immunoglobulin G/blood, Q Fever, Coxiella burnetii, Follow-Up Studies, Humans, Immunoglobulin M/blood, Prognosis, Q Fever/diagnosis, Q Fever/drug therapy, Retrospective Studies
20.
BMJ Open; 11(3): e046518, 2021 03 11.
Article in English | MEDLINE | ID: mdl-33707275

ABSTRACT

OBJECTIVE: The quick Sequential Organ Failure Assessment (qSOFA) was developed as a tool to identify patients with infection at increased risk of dying from sepsis in non-intensive care unit settings, such as the emergency department (ED). An abnormal score may trigger the initiation of appropriate therapy to reduce that risk. This study assesses the risk of a treatment paradox: the effect of a strong predictor of mortality will be reduced if that predictor also acts as a trigger for initiating treatment to prevent mortality. DESIGN: Retrospective analysis of data from a large observational cohort. SETTING: ED of a tertiary medical centre in the Netherlands. PARTICIPANTS: 3178 consecutive patients with suspected infection. PRIMARY OUTCOME: To evaluate the existence of a treatment paradox by determining the influence of baseline qSOFA on treatment decisions within the first 24 hours after admission. RESULTS: 226 (7.1%) patients had a qSOFA ≥2, of whom 51 (22.6%) died within 30 days. The area under the receiver operating characteristic curve of qSOFA for 30-day mortality was 0.68 (95% CI 0.61 to 0.75). Patients with a qSOFA ≥2 had higher odds of receiving any form of intensive therapy (OR 11.4, 95% CI 7.5 to 17.1), such as aggressive fluid resuscitation (OR 8.8, 95% CI 6.6 to 11.8), fast antibiotic administration (OR 8.5, 95% CI 5.7 to 12.3) or vasopressor therapy (OR 17.3, 95% CI 11.2 to 26.8), compared with patients with qSOFA <2. CONCLUSION: In ED patients with suspected infection, a qSOFA ≥2 was associated with more intensive treatment. This could lead to inadequate prediction of 30-day mortality due to the presence of a treatment paradox. TRIAL REGISTRATION NUMBER: 6916.
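
qSOFA itself is a three-item bedside score (respiratory rate ≥22/min, systolic blood pressure ≤100 mmHg, altered mentation), so the treatment-paradox analysis starts from a simple computation like the one below; the example patient values are invented.

```python
def qsofa(respiratory_rate, systolic_bp, altered_mentation):
    """quick SOFA score (0-3) per the Sepsis-3 criteria."""
    return sum([
        respiratory_rate >= 22,
        systolic_bp <= 100,
        bool(altered_mentation),
    ])

# Example ED patient with suspected infection (values are illustrative).
score = qsofa(respiratory_rate=24, systolic_bp=95, altered_mentation=False)
print(f"qSOFA = {score}; high risk flag: {score >= 2}")

# The treatment paradox described in the abstract: the same flag that predicts
# 30-day mortality also triggers intensive therapy, which dampens its observed
# association with mortality in retrospective data.
```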


Subject(s)
Organ Dysfunction Scores, Sepsis, Emergency Service, Hospital, Hospital Mortality, Humans, Intensive Care Units, Netherlands/epidemiology, Prognosis, ROC Curve, Retrospective Studies, Sepsis/therapy