Results 1 - 20 of 123
1.
Article in English | MEDLINE | ID: mdl-38873914

ABSTRACT

OBJECTIVES: Data regarding the occurrence of complications specifically during pediatric anesthesia for endoscopic procedures are limited. By evaluating such data, factors could be identified to ensure proper staffing and preparation, minimize adverse events, and improve patient safety during flexible endoscopy. METHODS: This retrospective cohort study included children undergoing anesthesia for gastroscopy, colonoscopy, bronchoscopy, or combined endoscopic procedures over a 10-year period. The primary study aim was to evaluate the incidence of complications and identify risk factors for adverse events. RESULTS: Overall, 2064 endoscopic procedures were performed, including 1356 gastroscopies (65.7%), 93 colonoscopies (4.5%), 235 bronchoscopies (11.4%), and 380 combined procedures (18.4%). Of the 1613 patients, 151 (7.3%) exhibited an adverse event, with respiratory complications being the most common (65 [3.1%]). Combining gastrointestinal endoscopies did not increase the adverse event rate (gastroscopy: 5.5%, colonoscopy: 3.2%). Diagnostic endoscopy had a lower adverse event rate than interventional endoscopy. When bronchoscopy was combined with gastrointestinal endoscopy, the rate was similar to that of bronchoscopy alone (19.5% vs. 20.4%). Age < 5.8 years or body weight less than 20 kg, bronchoscopy, American Society of Anesthesiologists status ≥ 2 or pre-existing anesthesia-relevant diseases, and urgency of the procedure were independent risk factors for adverse events. For each additional risk factor, the risk for adverse events increased 2.1-fold [1.8-2.4]. CONCLUSIONS: This study identifies multiple factors that increase the rate of adverse events associated with anesthesia for endoscopy. Combined gastrointestinal procedures did not increase the risk for adverse events, while adding bronchoscopy to gastrointestinal endoscopy carried a risk similar to that of bronchoscopy alone.
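The reported 2.1-fold [1.8-2.4] increase per risk factor implies a multiplicative odds model. As a stdlib-only sketch of what that cumulative effect looks like (the baseline odds below are hypothetical and not taken from the study):

```python
# Illustrative only: the abstract reports a 2.1-fold increase in the odds of
# an adverse event per additional risk factor. Under a multiplicative model,
# odds for a patient with n risk factors scale as baseline_odds * 2.1**n.
# The baseline odds used here are hypothetical, not from the study.

def adverse_event_probability(baseline_odds: float, n_risk_factors: int,
                              odds_ratio_per_factor: float = 2.1) -> float:
    """Convert cumulative odds back to a probability."""
    odds = baseline_odds * odds_ratio_per_factor ** n_risk_factors
    return odds / (1 + odds)

# Example: hypothetical baseline odds of 0.02 (~2% risk with no risk factors)
for n in range(4):
    print(n, round(adverse_event_probability(0.02, n), 3))
```

The conversion back to a probability matters because multiplying odds, unlike multiplying probabilities, can never exceed 1.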

2.
BMC Anesthesiol ; 24(1): 113, 2024 Mar 23.
Article in English | MEDLINE | ID: mdl-38521898

ABSTRACT

BACKGROUND: Chronic heart failure (HF) is a common clinical condition associated with adverse outcomes in elderly patients undergoing non-cardiac surgery. This study aimed to estimate a clinically applicable NT-proBNP cut-off that predicts postoperative 30-day morbidity in a non-cardiac surgical cohort. METHODS: One hundred ninety-nine consecutive patients older than 65 years undergoing elective non-cardiac surgery with intermediate or high surgical risk were analysed. Preoperative NT-proBNP was measured, and clinical events were assessed up to postoperative day 30. The primary endpoint was the composite morbidity endpoint (CME) consisting of rehospitalisation, acute decompensated heart failure (ADHF), acute kidney injury (AKI), and infection at postoperative day 30. Secondary endpoints included perioperative fluid balance and the incidence, duration, and severity of perioperative hypotension. RESULTS: NT-proBNP of 443 pg/ml had the highest accuracy in predicting the composite endpoint; a clinical cut-off of 450 pg/ml was implemented to compare clinical endpoints. Although 35.2% of patients had NT-proBNP above the threshold, only 10.6% had a known history of HF. Event rates were significantly increased in patients with NT-proBNP > 450 pg/ml (70.7% vs. 32.4%, p < 0.001), driven by higher incidences of cardiac rehospitalisation (4.4% vs. 0%, p = 0.018), ADHF (20.1% vs. 4.0%, p < 0.001), AKI (39.8% vs. 8.3%, p < 0.001), and infection (46.3% vs. 24.4%, p < 0.01). Perioperative fluid balance and perioperative hypotension were comparable between groups. Preoperative NT-proBNP > 450 pg/ml was an independent predictor of the CME in a multivariable Cox regression model (hazard ratio 2.92 [1.72-4.94]). 
CONCLUSIONS: Patients with NT-proBNP > 450 pg/ml exhibited profoundly increased postoperative morbidity. Further studies should focus on interdisciplinary approaches to improve outcomes through integrated interventions in the perioperative period. TRIAL REGISTRATION: German Clinical Trials Register: DRKS00027871, 17/01/2022.
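Deriving a single "most accurate" biomarker cut-off, such as the 443 pg/ml reported here, is commonly done by maximising the Youden index over candidate thresholds in a ROC analysis. The abstract does not state the exact method used, so the following is a sketch of that standard approach on synthetic data:

```python
# Sketch of a Youden-index cutoff search (sensitivity + specificity - 1,
# maximised over thresholds). The NT-proBNP values and outcome labels below
# are synthetic; the method, not the numbers, is the point.

def youden_cutoff(values, labels):
    """Return the threshold maximising sensitivity + specificity - 1.

    values: biomarker measurements; labels: 1 = event, 0 = no event.
    A measurement >= threshold counts as test-positive.
    """
    best_t, best_j = None, float("-inf")
    for t in sorted(set(values)):
        tp = sum(1 for v, y in zip(values, labels) if v >= t and y == 1)
        fn = sum(1 for v, y in zip(values, labels) if v < t and y == 1)
        tn = sum(1 for v, y in zip(values, labels) if v < t and y == 0)
        fp = sum(1 for v, y in zip(values, labels) if v >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0
        spec = tn / (tn + fp) if tn + fp else 0.0
        j = sens + spec - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t

# Synthetic NT-proBNP values (pg/ml) and 30-day morbidity labels
ntprobnp = [120, 210, 300, 380, 460, 520, 700, 950, 1300, 2100]
events   = [0,   0,   0,   0,   1,   0,   1,   1,   1,    1]
print(youden_cutoff(ntprobnp, events))  # -> 460 on this synthetic data
```

In practice the statistically optimal threshold (443 pg/ml) is then rounded to a clinically memorable cut-off (450 pg/ml), exactly as the study did.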


Subjects
Acute Kidney Injury , Heart Failure , Hypotension , Humans , Aged , Biomarkers , Heart Failure/epidemiology , Brain Natriuretic Peptide , Peptide Fragments , Morbidity , Acute Kidney Injury/diagnosis , Acute Kidney Injury/epidemiology , Prognosis
3.
J Med Internet Res ; 26: e47070, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833299

ABSTRACT

BACKGROUND: The COVID-19 pandemic posed significant challenges to global health systems. Efficient public health responses required a rapid and secure collection of health data to improve the understanding of SARS-CoV-2 and examine the vaccine effectiveness (VE) and drug safety of the novel COVID-19 vaccines. OBJECTIVE: This study (COVID-19 study on vaccinated and unvaccinated subjects over 16 years; eCOV study) aims to (1) evaluate the real-world effectiveness of COVID-19 vaccines through a digital participatory surveillance tool and (2) assess the potential of self-reported data for monitoring key parameters of the COVID-19 pandemic in Germany. METHODS: Using a digital study web application, we collected self-reported data between May 1, 2021, and August 1, 2022, to assess VE, test positivity rates, COVID-19 incidence rates, and adverse events after COVID-19 vaccination. Our primary outcome measure was the VE of SARS-CoV-2 vaccines against laboratory-confirmed SARS-CoV-2 infection. The secondary outcome measures included VE against hospitalization and across different SARS-CoV-2 variants, adverse events after vaccination, and symptoms during infection. Logistic regression models adjusted for confounders were used to estimate VE 4 to 48 weeks after the primary vaccination series and after third-dose vaccination. Unvaccinated participants were compared with age- and gender-matched participants who had received 2 doses of BNT162b2 (Pfizer-BioNTech) and those who had received 3 doses of BNT162b2 and were not infected before the last vaccination. To assess the potential of self-reported digital data, the data were compared with official data from public health authorities. RESULTS: We enrolled 10,077 participants (aged ≥16 y) who contributed 44,786 tests and 5530 symptoms. 
In this young, primarily female, and digitally literate cohort, VE against infections of any severity waned from 91.2% (95% CI 70.4%-97.4%) at week 4 to 37.2% (95% CI 23.5%-48.5%) at week 48 after the second dose of BNT162b2. A third dose of BNT162b2 increased VE to 67.6% (95% CI 50.3%-78.8%) after 4 weeks. The low number of reported hospitalizations limited our ability to calculate VE against hospitalization. Adverse events after vaccination were consistent with previously published research. Seven-day incidences and test positivity rates reflected the course of the pandemic in Germany when compared with official numbers from the national infectious disease surveillance system. CONCLUSIONS: Our data indicate that COVID-19 vaccinations are safe and effective, and third-dose vaccinations partially restore protection against SARS-CoV-2 infection. The study showcased the successful use of a digital study web application for COVID-19 surveillance and continuous monitoring of VE in Germany, highlighting its potential to accelerate public health decision-making. Addressing biases in digital data collection is vital to ensure the accuracy and reliability of digital solutions as public health tools.
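VE estimates of this kind are conventionally obtained from the adjusted odds ratio of the logistic model as VE = (1 − OR) × 100%. A minimal sketch, with the odds ratios back-calculated from the reported VE values purely for illustration (the study's own ORs came from confounder-adjusted models):

```python
# Vaccine effectiveness (VE) from an (adjusted) odds ratio comparing
# vaccinated with unvaccinated participants: VE = (1 - OR) * 100%.
# The ORs below are back-calculated from the reported VE values for
# illustration only; they are not taken directly from the study.

def vaccine_effectiveness(odds_ratio: float) -> float:
    """VE in percent from an odds ratio (vaccinated vs unvaccinated)."""
    return (1 - odds_ratio) * 100

print(vaccine_effectiveness(0.088))  # ~91.2%, matching the week-4 estimate
print(vaccine_effectiveness(0.628))  # ~37.2%, matching the week-48 estimate
```

An OR of 1 yields a VE of 0%, and an OR above 1 yields a negative VE, which is why waning protection shows up as the OR drifting toward 1 over time.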


Subjects
COVID-19 Vaccines , COVID-19 , SARS-CoV-2 , Humans , Germany/epidemiology , COVID-19/prevention & control , COVID-19/epidemiology , Prospective Studies , COVID-19 Vaccines/administration & dosage , Female , Male , Middle Aged , Adult , SARS-CoV-2/immunology , Pandemics , Vaccine Efficacy/statistics & numerical data , Aged , Internet , Self Report , Young Adult , Cohort Studies , Adolescent
4.
Transfus Med Hemother ; 51(1): 12-21, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38314244

ABSTRACT

Introduction: Patients undergoing revision total hip surgery (RTHS) have a high prevalence of mild and moderate preoperative anemia, which is associated with adverse outcomes. The aim of this study was to investigate the association between perioperative allogeneic blood transfusion (ABT) and postoperative complications in patients with mild compared to moderate preoperative anemia undergoing RTHS who did not receive a diagnostic anemia workup and treatment before surgery. Methods: We included 1,765 patients treated between 2007 and 2019 at a university hospital. Patients were categorized by severity of anemia using the WHO criteria for mild, moderate, and severe anemia, based on the first Hb level of the case. Patients were grouped as having received no ABT, 1-2 units of ABT, or more than 2 units of ABT. The need for intraoperative ABT was assessed in accordance with institutional standards. The primary endpoint was the compound incidence of postoperative complications. Secondary outcomes included major/minor complications and length of hospital and ICU stay. Results: Of the 1,765 patients, 31.0% were anemic of any cause before surgery. Transfusion rates were 81% in anemic patients and 41.2% in nonanemic patients. The adjusted risk for compound postoperative complications was significantly higher in patients with moderate anemia (OR 4.88, 95% CI: 1.54-13.15, p = 0.003) but not in patients with mild anemia (OR 1.93, 95% CI: 0.85-3.94, p = 0.090). Perioperative ABT was associated with significantly higher risks for complications in nonanemic patients and showed an increased risk for complications in all anemic patients. In RTHS, perioperative ABT as a treatment for moderate preoperative anemia of any cause was associated with a negative compound effect on postoperative complications, compared to anemia or ABT alone. Discussion: ABT is associated with adverse outcomes in patients with moderate preoperative anemia before RTHS. For this reason, medical treatment of moderate preoperative anemia may be considered.

5.
BMC Health Serv Res ; 23(1): 729, 2023 Jul 05.
Article in English | MEDLINE | ID: mdl-37407989

ABSTRACT

BACKGROUND: High rates of clinical alarms in the intensive care unit can result in alarm fatigue among staff. Individualization of alarm thresholds is regarded as one measure to reduce non-actionable alarms. The aim of this study was to investigate staff's perceptions of alarm threshold individualization according to patient characteristics and disease status. METHODS: This is a cross-sectional survey study (February-July 2020). Intensive care nurses and physicians were recruited by convenience sampling. Data were collected using an online questionnaire. RESULTS: Staff view the individualization of alarm thresholds in the monitoring of vital signs as important. The extent to which alarm thresholds are adapted from the normal range varies depending on the vital sign monitored, the reason for clinical deterioration, and the professional group asked. Vital signs used for hemodynamic monitoring (heart rate and blood pressure) were most subject to alarm individualization. Staff are ambivalent regarding the integration of novel technological features into alarm management. CONCLUSIONS: All relevant stakeholders, including clinicians, hospital management, and industry, must collaborate to establish a "standard for individualization," moving away from ad hoc alarm management toward intelligent, data-driven alarm management. Making alarms meaningful and trustworthy again has the potential to mitigate alarm fatigue - a major cause of stress in clinical staff and a considerable hazard to patient safety. TRIAL REGISTRATION: The study was registered at ClinicalTrials.gov (NCT03514173) on 02/05/2018.


Subjects
Clinical Alarms , Intensive Care Units , Humans , Cross-Sectional Studies , Physiologic Monitoring , Surveys and Questionnaires
6.
J Med Internet Res ; 25: e46231, 2023 06 20.
Article in English | MEDLINE | ID: mdl-37338970

ABSTRACT

BACKGROUND: Previous studies have revealed that users of symptom checkers (SCs, apps that support self-diagnosis and self-triage) are predominantly female, are younger than average, and have higher levels of formal education. Little data are available for Germany, and no study has so far compared usage patterns with people's awareness of SCs and the perception of usefulness. OBJECTIVE: We explored the sociodemographic and individual characteristics that are associated with the awareness, usage, and perceived usefulness of SCs in the German population. METHODS: We conducted a cross-sectional online survey among 1084 German residents in July 2022 regarding personal characteristics and people's awareness and usage of SCs. Using random sampling from a commercial panel, we collected participant responses stratified by gender, state of residence, income, and age to reflect the German population. We analyzed the collected data in an exploratory manner. RESULTS: Of all respondents, 16.3% (177/1084) were aware of SCs and 6.5% (71/1084) had used them before. Those aware of SCs were younger (mean 38.8, SD 14.6 years, vs mean 48.3, SD 15.7 years), were more often female (107/177, 60.5%, vs 453/907, 49.9%), and had higher formal education levels (eg, 72/177, 40.7%, vs 238/907, 26.2%, with a university/college degree) than those unaware. The same observation applied to users compared to nonusers. This difference disappeared, however, when comparing users to nonusers who were aware of SCs. Among users, 40.8% (29/71) considered these tools useful. Those considering them useful reported higher self-efficacy (mean 4.21, SD 0.66, vs mean 3.63, SD 0.81, on a scale of 1-5) and a higher net household income (mean EUR 2591.63, SD EUR 1103.96 [mean US $2798.96, SD US $1192.28], vs mean EUR 1626.60, SD EUR 649.05 [mean US $1756.73, SD US $700.97]) than those who considered them not useful. More women (13/44, 29.5%) than men (4/26, 15.4%) considered SCs unhelpful. 
CONCLUSIONS: Concurring with studies from other countries, our findings show associations between sociodemographic characteristics and SC usage in a German sample: users were on average younger, of higher socioeconomic status, and more commonly female compared to nonusers. However, usage cannot be explained by sociodemographic differences alone. It rather seems that sociodemographics explain who is or is not aware of the technology, but those who are aware of SCs are equally likely to use them, independently of sociodemographic differences. Although in some groups (eg, people with anxiety disorder), more participants reported knowing and using SCs, they tended to perceive them as less useful. In other groups (eg, male participants), fewer respondents were aware of SCs, but those who used them perceived them to be more useful. Thus, SCs should be designed to fit specific user needs, and strategies should be developed to help reach individuals who could benefit but are not aware of SCs yet.


Subjects
Public Health , Telemedicine , Female , Humans , Male , Cross-Sectional Studies , Germany , Surveys and Questionnaires , Information Seeking Behavior
7.
J Med Internet Res ; 25: e42289, 2023 03 27.
Article in English | MEDLINE | ID: mdl-36972116

ABSTRACT

BACKGROUND: Data provenance refers to the origin, processing, and movement of data. Reliable and precise knowledge about data provenance has great potential to improve reproducibility as well as quality in biomedical research and, therefore, to foster good scientific practice. However, despite the increasing interest in data provenance technologies in the literature and their implementation in other disciplines, these technologies have not yet been widely adopted in biomedical research. OBJECTIVE: The aim of this scoping review was to provide a structured overview of the body of knowledge on provenance methods in biomedical research by systematizing articles covering data provenance technologies developed for or used in this application area; describing and comparing the functionalities as well as the design of the provenance technologies used; and identifying gaps in the literature, which could provide opportunities for future research on technologies that could receive more widespread adoption. METHODS: Following a methodological framework for scoping studies and the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines, articles were identified by searching the PubMed, IEEE Xplore, and Web of Science databases and subsequently screened for eligibility. We included original articles covering software-based provenance management for scientific research published between 2010 and 2021. A set of data items was defined along the following five axes: publication metadata, application scope, provenance aspects covered, data representation, and functionalities. The data items were extracted from the articles, stored in a charting spreadsheet, and summarized in tables and figures. RESULTS: We identified 44 original articles published between 2010 and 2021. We found that the solutions described were heterogeneous along all axes. 
We also identified relationships among motivations for the use of provenance information, feature sets (capture, storage, retrieval, visualization, and analysis), and implementation details such as the data models and technologies used. An important gap we identified is that only a few publications address the analysis of provenance data or use established provenance standards, such as PROV. CONCLUSIONS: The heterogeneity of provenance methods, models, and implementations found in the literature points to the lack of a unified understanding of provenance concepts for biomedical data. Providing a common framework, a biomedical reference, and benchmarking data sets could foster the development of more comprehensive provenance solutions.


Subjects
Biomedical Research , Humans , Metadata , PubMed , Reproducibility of Results , Software
8.
Crit Care ; 26(1): 50, 2022 02 22.
Article in English | MEDLINE | ID: mdl-35193645

ABSTRACT

BACKGROUND: Increased plasma concentrations of circulating cell-free hemoglobin (CFH) are thought to contribute to the multifactorial etiology of acute kidney injury (AKI) in critically ill patients, while the CFH scavenger haptoglobin might play a protective role. We evaluated the association of CFH and haptoglobin with AKI in patients with acute respiratory distress syndrome (ARDS) requiring venovenous extracorporeal membrane oxygenation (VV ECMO). METHODS: Patients with CFH and haptoglobin measurements before initiation of ECMO therapy were identified from a cohort of 1044 ARDS patients and grouped into three CFH concentration groups using risk stratification. The primary objective was to assess the association of CFH and haptoglobin with KDIGO stage 3 AKI. Further objectives included the identification of a target haptoglobin concentration to protect from CFH-associated AKI. MEASUREMENTS AND MAIN RESULTS: Two hundred seventy-three patients fulfilled the inclusion criteria. Of those, 154 patients (56.4%) had AKI at ECMO initiation. The incidence of AKI increased stepwise with increasing concentrations of CFH, reaching a plateau at 15 mg/dl. Compared to patients with low [< 5 mg/dl] CFH concentrations, patients with moderate [5-14 mg/dl] and high [≥ 15 mg/dl] CFH concentrations had a three- and five-fold increased risk for AKI (adjusted odds ratio [OR] moderate vs. low, 2.69 [95% CI, 1.25-5.95], P = 0.012; and OR high vs. low, 5.47 [2.00-15.9], P = 0.001). Among patients with increased CFH concentrations, haptoglobin plasma levels were lower in patients with AKI compared to patients without AKI. A haptoglobin concentration greater than 2.7 g/l in the moderate and 2.4 g/l in the high CFH group was identified as the clinical cutoff value to protect from CFH-associated AKI (sensitivity 89.5% [95% CI, 83-96] and 90.2% [80-97], respectively). 
CONCLUSIONS: In critically ill patients with ARDS requiring VV ECMO, an increased plasma concentration of CFH was identified as an independent risk factor for AKI. Among patients with increased CFH concentrations, higher plasma haptoglobin concentrations might protect from CFH-associated AKI and should be the subject of future research.


Subjects
Acute Kidney Injury , Extracorporeal Membrane Oxygenation , Respiratory Distress Syndrome , Acute Kidney Injury/etiology , Adult , Critical Illness/therapy , Haptoglobins , Hemoglobins , Humans , Respiratory Distress Syndrome/therapy , Retrospective Studies
9.
Crit Care ; 26(1): 362, 2022 11 25.
Article in English | MEDLINE | ID: mdl-36434724

ABSTRACT

BACKGROUND: Mobilisation and exercise interventions are generally safe and feasible in critically ill patients. For patients requiring catecholamines, however, doses of norepinephrine safe for mobilisation in the intensive care unit (ICU) are not defined. This study aimed to describe mobilisation practice in our hospital and identify doses of norepinephrine that allowed safe mobilisation. METHODS: We conducted a retrospective single-centre cohort study of 16 ICUs at a university hospital in Germany with patients admitted between March 2018 and November 2021. Data were collected from our patient data management system. We analysed the effect of norepinephrine on level (ICU Mobility Scale) and frequency (units per day) of mobilisation, early mobilisation (within 72 h of ICU admission), mortality, and rate of adverse events. Data were extracted from free-text mobilisation entries using supervised machine learning (support vector machine). Statistical analyses were done using (generalised) linear (mixed-effect) models, as well as chi-square tests and ANOVAs. RESULTS: A total of 12,462 patients were analysed in this study. They received a total of 59,415 mobilisation units. Of these patients, 842 (6.8%) received mobilisation under continuous norepinephrine administration. Norepinephrine administration was negatively associated with the frequency of mobilisation (adjusted difference -0.07 mobilisations per day; 95% CI - 0.09, - 0.05; p ≤ 0.001) and early mobilisation (adjusted OR 0.83; 95% CI 0.76, 0.90; p ≤ 0.001), while a higher norepinephrine dose corresponded to a lower chance of being mobilised out of bed (adjusted OR 0.01; 95% CI 0.00, 0.04; p ≤ 0.001). Mobilisation with norepinephrine did not significantly affect mortality (p > 0.1). Higher compared to lower doses of norepinephrine did not lead to a significant increase in adverse events in our practice (p > 0.1). 
We identified that mobilisation was safe with up to 0.20 µg/kg/min norepinephrine for out-of-bed (IMS ≥ 2) and 0.33 µg/kg/min for in-bed (IMS 0-1) mobilisation. CONCLUSIONS: Mobilisation under norepinephrine can be performed safely at these doses when the patient's status and safety guidelines are taken into account.
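The dose thresholds reported above lend themselves to a simple decision check. A minimal sketch, assuming a hypothetical function interface (a real bedside aid would also need the patient-status and safety-guideline checks the authors emphasise):

```python
# Sketch of a threshold check based on the study's reported safe doses:
# up to 0.20 ug/kg/min norepinephrine for out-of-bed (IMS >= 2) and
# 0.33 ug/kg/min for in-bed (IMS 0-1) mobilisation. Function name and
# interface are hypothetical, for illustration only.

OUT_OF_BED_MAX = 0.20   # ug/kg/min, applies to IMS >= 2
IN_BED_MAX = 0.33       # ug/kg/min, applies to IMS 0-1

def mobilisation_within_dose_threshold(ims_level: int, norepi_dose: float) -> bool:
    """True if the planned IMS level is within the study's dose thresholds."""
    limit = OUT_OF_BED_MAX if ims_level >= 2 else IN_BED_MAX
    return norepi_dose <= limit

print(mobilisation_within_dose_threshold(3, 0.15))  # out-of-bed at 0.15 -> True
print(mobilisation_within_dose_threshold(3, 0.25))  # out-of-bed at 0.25 -> False
print(mobilisation_within_dose_threshold(1, 0.25))  # in-bed at 0.25 -> True
```

Note that the threshold depends on the planned mobilisation level, not just the dose: the same 0.25 µg/kg/min infusion permits in-bed but not out-of-bed mobilisation under these limits.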


Subjects
Critical Illness , Norepinephrine , Humans , Critical Illness/therapy , Norepinephrine/pharmacology , Norepinephrine/therapeutic use , Retrospective Studies , Cohort Studies , Prospective Studies
10.
J Med Internet Res ; 24(5): e31810, 2022 05 10.
Article in English | MEDLINE | ID: mdl-35536633

ABSTRACT

BACKGROUND: Symptom checkers are digital tools assisting laypersons in self-assessing the urgency and potential causes of their medical complaints. They are widely used but face concerns from both patients and health care professionals, especially regarding their accuracy. A 2015 landmark study substantiated these concerns using case vignettes to demonstrate that symptom checkers commonly err in their triage assessment. OBJECTIVE: This study aims to revisit the landmark index study to investigate whether and how symptom checkers' capabilities have evolved since 2015 and how they currently compare with laypersons' stand-alone triage appraisal. METHODS: In early 2020, we searched for smartphone and web-based applications providing triage advice. We evaluated these apps on the same 45 case vignettes as the index study. Using descriptive statistics, we compared our findings with those of the index study and with publicly available data on laypersons' triage capability. RESULTS: We retrieved 22 symptom checkers providing triage advice. The median triage accuracy in 2020 (55.8%, IQR 15.1%) was close to that in 2015 (59.1%, IQR 15.5%). The apps in 2020 were less risk averse (odds 1.11:1, the ratio of overtriage errors to undertriage errors) than those in 2015 (odds 2.82:1), missing >40% of emergencies. Few apps outperformed laypersons in either deciding whether emergency care was required or whether self-care was sufficient. No apps outperformed the laypersons on both decisions. CONCLUSIONS: Triage performance of symptom checkers has, on average, not improved over the course of 5 years. It decreased in 2 use cases (advice on when emergency care is required and when no health care is needed for the moment). However, triage capability varies widely within the sample of symptom checkers. Whether it is beneficial to seek advice from symptom checkers depends on the app chosen and on the specific question to be answered. 
Future research should develop resources (eg, case vignette repositories) to audit the capabilities of symptom checkers continuously and independently and provide guidance on when and to whom they should be recommended.
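The overtriage-to-undertriage odds quoted in the abstract (1.11:1 in 2020 vs 2.82:1 in 2015) are simply the ratio of the two error counts. A small sketch on invented tallies (the counts below are hypothetical, chosen only to reproduce the reported ratios):

```python
# The risk-aversion measure used in the abstract: the ratio of overtriage
# errors (advice too urgent) to undertriage errors (advice not urgent
# enough). Error counts here are invented for illustration.

def triage_error_odds(overtriage_errors: int, undertriage_errors: int) -> float:
    """Ratio of risk-averse (overtriage) to risky (undertriage) errors."""
    return overtriage_errors / undertriage_errors

# Hypothetical tallies reproducing the reported odds
print(round(triage_error_odds(100, 90), 2))   # -> 1.11 (2020-style balance)
print(round(triage_error_odds(282, 100), 2))  # -> 2.82 (2015-style risk aversion)
```

A ratio near 1 means errors are balanced between too-cautious and too-risky advice; the drop from 2.82 to 1.11 is why the 2020 apps missed more emergencies despite similar overall accuracy.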


Subjects
Emergency Medical Services , Mobile Applications , Data Collection , Follow-Up Studies , Humans , Self Care , Triage
11.
J Med Internet Res ; 24(7): e32280, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35838765

ABSTRACT

BACKGROUND: Valuable insights into the pathophysiology and consequences of acute psychosocial stress have been gained using standardized stress induction experiments. However, most protocols are limited to laboratory settings, are labor-intensive, and cannot be scaled to larger cohorts or transferred to daily life scenarios. OBJECTIVE: We aimed to provide a scalable digital tool that enables the standardized induction and recording of acute stress responses in outside-the-laboratory settings without any experimenter contact. METHODS: On the basis of well-described stress protocols, we developed the Digital Stress Test (DST) and evaluated its feasibility and stress induction potential in a large web-based study. A total of 284 participants completed either the DST (n=103; 52/103, 50.5% women; mean age 31.34, SD 9.48 years) or an adapted control version (n=181; 96/181, 53% women; mean age 31.51, SD 11.18 years) with their smartphones via a web application. We compared their affective responses using the International Positive and Negative Affect Schedule Short Form before and after stress induction. In addition, we assessed the participants' stress-related feelings, rated on visual analogue scales, before, during, and after the procedure, and further analyzed the implemented stress-inducing elements. Finally, we compared the DST participants' stress reactivity with the results obtained in a classic stress test paradigm using data previously collected in 4 independent Trier Social Stress Test studies including 122 participants overall. RESULTS: Participants in the DST manifested significantly higher perceived stress indexes than the Control-DST participants at all measurements after the baseline (P<.001). Furthermore, the effect size of the increase in DST participants' negative affect (d=0.427) lay within the range of effect sizes for the increase in negative affect in the previously conducted Trier Social Stress Test experiments (0.281-1.015). 
CONCLUSIONS: We present evidence that a digital stress paradigm administered by smartphone can be used for standardized stress induction and multimodal data collection on a large scale. Further development of the DST prototype and a subsequent validation study including physiological markers are outlined.


Subjects
Exercise Test , Acute Traumatic Stress Disorders , Adult , Female , Humans , Male , Psychological Stress/diagnosis , Psychological Stress/psychology
12.
Infection ; 49(4): 703-714, 2021 Aug.
Article in English | MEDLINE | ID: mdl-33890243

ABSTRACT

PURPOSE: Adequate patient allocation is pivotal for optimal resource management in strained healthcare systems, and requires detailed knowledge of clinical and virological disease trajectories. The purpose of this work was to identify risk factors associated with the need for invasive mechanical ventilation (IMV), to analyse viral kinetics in patients with and without IMV, and to provide a comprehensive description of the clinical course. METHODS: A cohort of 168 hospitalised adult COVID-19 patients enrolled in a prospective observational study at a large European tertiary care centre was analysed. RESULTS: Forty-four per cent (71/161) of patients required IMV. Shorter duration of symptoms before admission (aOR 1.22 per day less, 95% CI 1.10-1.37, p < 0.01) and history of hypertension (aOR 5.55, 95% CI 2.00-16.82, p < 0.01) were associated with need for IMV. Patients on IMV had higher maximal SARS-CoV-2 concentrations, slower decline rates, and longer viral shedding than non-IMV patients (median 33 days, IQR 26-46.75, vs 18 days, IQR 16-46.75; p < 0.01). Median duration of hospitalisation was 9 days (IQR 6-15.5) for non-IMV and 49.5 days (IQR 36.8-82.5) for IMV patients. CONCLUSIONS: Our results indicate that a short duration of symptoms before admission is a risk factor for severe disease that merits further investigation, and that viral load kinetics differ in severely affected patients. Median duration of hospitalisation of IMV patients was longer than described for acute respiratory distress syndrome unrelated to COVID-19.


Subjects
COVID-19/epidemiology , COVID-19/virology , SARS-CoV-2/physiology , COVID-19/therapy , Cohort Studies , Germany/epidemiology , Hospitalization , Humans , Hypertension/complications , Kinetics , Prospective Studies , Artificial Respiration , Risk Factors , Tertiary Care Centers , Time Factors , Viral Load , Virus Shedding
13.
J Med Internet Res ; 23(11): e32264, 2021 11 03.
Article in English | MEDLINE | ID: mdl-34730547

ABSTRACT

BACKGROUND: The role of telemedicine in intensive care has been increasing steadily. Tele-intensive care unit (ICU) interventions are varied and can be used at different levels of treatment, often with direct implications for intensive care processes. Although a substantial body of primary and secondary literature has been published on the topic, there is a need to broaden the understanding of the organizational factors influencing the effectiveness of telemedical interventions in the ICU. OBJECTIVE: This scoping review aims to provide a map of existing evidence on tele-ICU interventions, focusing on the analysis of the implementation context and identifying areas for further technological research. METHODS: A research protocol outlining the method has been published in JMIR Research Protocols. This review follows the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews) guidelines. A core research team was assembled to provide feedback and discuss findings. RESULTS: A total of 3019 results were retrieved. After screening, 25 studies were included in the final analysis. We were able to characterize the context of tele-ICU studies and identify three use cases for tele-ICU interventions. The first use case is extending coverage, which describes interventions aimed at extending the availability of intensive care capabilities. The second use case is improving compliance, which includes interventions targeted at improving patient safety, intensive care best practices, and quality of care. The third use case, facilitating transfer, describes telemedicine interventions targeted toward the management of patient transfers to or from the ICU. CONCLUSIONS: The benefits of tele-ICU interventions have been well documented for centralized systems aimed at extending critical care capabilities in a community setting and improving care compliance in tertiary hospitals. 
No strong evidence has been found for a reduction of patient transfers following tele-ICU interventions. INTERNATIONAL REGISTERED REPORT IDENTIFIER (IRRID): RR2-10.2196/19695.


Subjects
Intensive Care Units , Telemedicine , Critical Care , Humans , Patient Safety
15.
J Med Internet Res ; 23(3): e24475, 2021 03 10.
Article in English | MEDLINE | ID: mdl-33688845

ABSTRACT

BACKGROUND: Symptom checkers (SCs) are tools developed to provide clinical decision support to laypersons. Apart from suggesting probable diagnoses, they commonly advise when users should seek care (triage advice). SCs have become increasingly popular despite prior studies rating their performance as mediocre. To date, it is unclear whether SCs can triage better than the laypersons who might choose to use them. OBJECTIVE: This study aims to compare triage accuracy between SCs and their potential users (ie, laypersons). METHODS: On Amazon Mechanical Turk, we recruited 91 adults from the United States who had no professional medical background. In a web-based survey, the participants evaluated 45 fictitious clinical case vignettes. Data for 15 SCs that had processed the same vignettes were obtained from a previous study. As main outcome measures, we assessed the accuracy of the triage assessments made by participants and SCs for each of the three triage levels (ie, emergency care, nonemergency care, self-care) and overall, the proportion of participants outperforming each SC in terms of accuracy, and the risk aversion of participants and SCs, measured as the proportion of cases that were overtriaged. RESULTS: The mean overall triage accuracy was similar for participants (60.9%, SD 6.8%; 95% CI 59.5%-62.3%) and SCs (58%, SD 12.8%). Most participants outperformed all but 5 SCs. On average, SCs detected emergencies more reliably (80.6%, SD 17.9%) than laypersons did (67.5%, SD 16.4%; 95% CI 64.1%-70.8%). Although both SCs and participants struggled with cases requiring self-care (the least urgent triage category), SCs wrongly classified these cases as emergencies more often (43/174, 24.7%) than laypersons did (56/1365, 4.1%). CONCLUSIONS: Most SCs had no greater triage capability than an average layperson, although the triage accuracy of the 5 best SCs was superior to that of most participants. SCs might improve early detection of emergencies but might also needlessly increase resource utilization in health care. Laypersons sometimes require support in deciding when to rely on self-care, yet it is in precisely that situation that SCs perform worst. Further research is needed to determine how best to combine the strengths of humans and SCs.
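The study's two core outcome measures, triage accuracy and overtriage (as a proxy for risk aversion), can be sketched as follows. This is an illustrative reconstruction with made-up vignette ratings, not the study's actual data or analysis code.

```python
# Sketch of triage accuracy and overtriage rate for three triage levels.
# The gold/rated example lists below are invented for illustration.

TRIAGE_LEVELS = ["emergency", "nonemergency", "self-care"]  # most to least urgent

def triage_accuracy(gold, rated):
    """Proportion of vignettes where the rated level matches the gold standard."""
    return sum(g == r for g, r in zip(gold, rated)) / len(gold)

def overtriage_rate(gold, rated):
    """Proportion of misclassified vignettes rated MORE urgent than the gold level."""
    order = {lvl: i for i, lvl in enumerate(TRIAGE_LEVELS)}  # 0 = most urgent
    errors = [(g, r) for g, r in zip(gold, rated) if g != r]
    if not errors:
        return 0.0
    return sum(order[r] < order[g] for g, r in errors) / len(errors)

gold  = ["emergency", "self-care", "nonemergency", "self-care"]
rated = ["emergency", "emergency", "nonemergency", "nonemergency"]
print(triage_accuracy(gold, rated))  # 0.5
print(overtriage_rate(gold, rated))  # 1.0 (both errors were overtriage)
```

Comparing overtriage rates between raters (as the study does for SCs vs. laypersons on self-care cases) then reduces to calling `overtriage_rate` on each rater's assessments of the same vignette set.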


Subjects
Emergency Medical Services , Triage , Adult , Benchmarking , Humans , Self Care , Surveys and Questionnaires
16.
J Med Internet Res ; 23(5): e26494, 2021 05 28.
Article in English | MEDLINE | ID: mdl-34047701

ABSTRACT

BACKGROUND: As one of the most essential technical components of the intensive care unit (ICU), continuous monitoring of patients' vital parameters has significantly improved patient safety by alerting staff through an alarm when a parameter deviates from the normal range. However, the vast number of alarms regularly overwhelms staff and may induce alarm fatigue, a condition recently exacerbated by COVID-19 that potentially endangers patients. OBJECTIVE: This study focused on providing a complete and repeatable analysis of the alarm data of an ICU's patient monitoring system. We aimed to develop do-it-yourself (DIY) instructions for technically versed ICU staff to analyze their monitoring data themselves, an essential element for developing efficient and effective alarm optimization strategies. METHODS: This observational study was conducted using alarm log data extracted from the patient monitoring system of a 21-bed surgical ICU in 2019. DIY instructions were iteratively developed in informal interdisciplinary team meetings. The data analysis was grounded in a framework consisting of 5 dimensions, each with specific metrics: alarm load (eg, alarms per bed per day, alarm flood conditions, alarms per device and per criticality), avoidable alarms (eg, the number of technical alarms), responsiveness and alarm handling (eg, alarm duration), sensing (eg, usage of the alarm pause function), and exposure (eg, alarms per room type). Results were visualized using the R package ggplot2 to provide detailed insights into the ICU's alarm situation. RESULTS: We developed 6 DIY instructions that should be followed iteratively, step by step. Alarm load metrics should be (re)defined before alarm log data are collected and analyzed. Intuitive visualizations of the alarm metrics should be created next and presented to staff to help identify patterns in the alarm data for designing and implementing effective alarm management interventions. We provide the script we used for data preparation and an R Markdown file to create comprehensive alarm reports. The alarm load in the respective ICU averaged 152.5 (SD 42.2) alarms per bed per day, with on average 69.55 (SD 31.12) alarm flood conditions per day; both occurred mostly during morning shifts. Most alarms were issued by the ventilator, invasive blood pressure device, and electrocardiogram (ie, high and low blood pressure, high respiratory rate, low heart rate). Exposure to alarms was 26% higher in single rooms (mean 172.9 vs 137.2 alarms per bed per day). CONCLUSIONS: Analyzing ICU alarm log data provides valuable insights into the current alarm situation. Our results call for alarm management interventions that effectively reduce the number of alarms in order to ensure patient safety and ICU staff's work satisfaction. We hope our DIY instructions encourage others to follow suit in analyzing and publishing their ICU alarm data.
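Two of the alarm-load metrics named above can be sketched in a few lines. The study itself used R; this Python sketch only illustrates the idea, and the flood rule used here (more than 10 alarms within a rolling 10-minute window) is an assumed illustrative threshold, not necessarily the paper's exact definition.

```python
# Sketch of two alarm-load metrics: alarms per bed per day, and detection
# of an alarm flood condition in a per-bed alarm timestamp series.
from datetime import datetime, timedelta

def alarms_per_bed_per_day(n_alarms, n_beds, n_days):
    """Average alarm load normalized by bed count and observation period."""
    return n_alarms / (n_beds * n_days)

def has_alarm_flood(timestamps, threshold=10, window=timedelta(minutes=10)):
    """True if any rolling window contains more than `threshold` alarms.
    Assumed flood definition for illustration only."""
    ts = sorted(timestamps)
    return any(
        sum(1 for u in ts[i:] if u - t <= window) > threshold
        for i, t in enumerate(ts)
    )

# Hypothetical example: a 21-bed ICU logging 67,000 alarms over 21 days.
print(round(alarms_per_bed_per_day(67_000, 21, 21), 1))  # 151.9

base = datetime(2019, 1, 1, 8, 0)
burst = [base + timedelta(seconds=30 * k) for k in range(12)]  # 12 alarms in 6 min
print(has_alarm_flood(burst))  # True
```

On a real alarm log, `timestamps` would be grouped per bed before flood detection, and the same log would feed the per-device and per-criticality breakdowns mentioned in the framework.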


Assuntos
COVID-19/diagnóstico , COVID-19/fisiopatologia , Alarmes Clínicos/estatística & dados numéricos , Unidades de Terapia Intensiva , Monitorização Fisiológica/métodos , Recursos Humanos em Hospital/educação , Humanos , Monitorização Fisiológica/instrumentação , Segurança do Paciente , Linguagens de Programação
17.
J Med Internet Res ; 23(2): e25283, 2021 02 08.
Article in English | MEDLINE | ID: mdl-33497350

ABSTRACT

BACKGROUND: The COVID-19 outbreak has affected the lives of millions of people by causing a dramatic impact on many health care systems and the global economy. This devastating pandemic has brought together communities across the globe to work on this issue in an unprecedented manner. OBJECTIVE: This case study describes the steps and methods employed in conducting a remote online health hackathon centered on challenges posed by the COVID-19 pandemic. It aims to deliver a clear implementation road map for other organizations to follow. METHODS: This 4-day hackathon was conducted in April 2020, based on six COVID-19-related challenges defined by frontline clinicians and researchers from various disciplines. An online survey was structured to assess (1) individual experience satisfaction, (2) level of interprofessional skills exchange, (3) maturity of the projects realized, and (4) overall quality of the event. At the end of the event, participants were invited to take part in an online survey with 17 (+5 optional) items, including multiple-choice and open-ended questions that assessed their experience regarding the remote nature of the event and their individual project, interprofessional skills exchange, and their confidence in working on a digital health project before and after the hackathon. Mentors, who guided the participants through the event, also provided feedback to the organizers through an online survey. RESULTS: A total of 48 participants and 52 mentors based in 8 different countries participated and developed 14 projects. A total of 75 mentorship video sessions were held. Participants reported increased confidence in starting a digital health venture or a research project after successfully participating in the hackathon, and stated that they were likely to continue working on their projects. Of the participants who provided feedback, 60% (n=18) would not have started their project without this particular hackathon and indicated that the hackathon encouraged and enabled them to progress faster, for example, by building interdisciplinary teams, gaining new insights and feedback from their mentors, and creating a functional prototype. CONCLUSIONS: This study provides insights into how online hackathons can contribute to solving the challenges and effects of a pandemic in several regions of the world. The online format fosters team diversity, increases cross-regional collaboration, and can be executed much faster and at lower cost than in-person events. Results on the preparation, organization, and evaluation of this online hackathon are useful for other institutions and initiatives that are willing to introduce similar event formats in the fight against COVID-19.


Subjects
COVID-19/therapy , Delivery of Health Care/organization & administration , Internet , Adult , COVID-19/epidemiology , Humans , SARS-CoV-2/isolation & purification
18.
Health Info Libr J ; 38(3): 224-230, 2021 Sep.
Article in English | MEDLINE | ID: mdl-34549514

ABSTRACT

The impact of algorithms on everyday life is ever increasing. Medicine and public health are not excluded from this development: algorithms in medicine not only challenge, change, and inform research (methods) but also clinical practice. Given this development, questions arise concerning the competency of prospective physicians, that is, medical students, on algorithm-related topics. This paper, based on a master's thesis in library and information science written at Humboldt-Universität zu Berlin, gives an insight into this topic by presenting and analysing the results of a knowledge test conducted among medical students in Germany.


Assuntos
Estudantes de Medicina , Alemanha , Humanos , Alfabetização , Estudos Prospectivos
19.
Crit Care Med ; 48(4): 459-465, 2020 04.
Article in English | MEDLINE | ID: mdl-32205591

ABSTRACT

OBJECTIVE: Hyperferritinemia is frequently seen in critically ill patients. A rather rare though life-threatening condition related to severely elevated ferritin is hemophagocytic lymphohistiocytosis. We analyzed ferritin levels to differentiate hemophagocytic lymphohistiocytosis from other causes of hyperferritinemia in a mixed cohort of critically ill patients. DESIGN: Retrospective observational study. SETTING: Adult surgical, anesthesiologic, and medical ICUs of a university hospital. PATIENTS: Critical care patients (≥ 18 yr old) admitted to any of the adult ICUs at Charité - Universitätsmedizin Berlin between January 2006 and August 2018 with at least one ferritin value and hyperferritinemia (≥ 500 µg/L). INTERVENTIONS: None. MEASUREMENTS AND MAIN RESULTS: Patients were categorized into hemophagocytic lymphohistiocytosis, sepsis, septic shock, and other diagnoses; the latter were further categorized into 17 subgroups. Hemophagocytic lymphohistiocytosis diagnosis was based on Hemophagocytic Lymphohistiocytosis-2004 criteria and the HScore. Of 2,623 patients with hyperferritinemia, 40 were considered to have hemophagocytic lymphohistiocytosis (1.52%). Maximum ferritin levels were highest in hemophagocytic lymphohistiocytosis patients compared with all other disease groups (each p < 0.001). Sepsis and septic shock patients had higher maximum ferritin levels than patients with other diagnoses (each p < 0.001). A maximum ferritin value of 9,083 µg/L yielded 92.5% sensitivity and 91.9% specificity for hemophagocytic lymphohistiocytosis (area under the curve, 0.963; 95% CI, 0.949-0.978). Of all subgroups with other diagnoses, maximum ferritin levels were highest in patients with varicella-zoster virus infection, hepatitis, or malaria (median, 1,935, 1,928, and 1,587 µg/L, respectively). Maximum ferritin levels were associated with increased in-hospital mortality (odds ratio, 1.518 per log µg/L [95% CI, 1.384-1.665 per log µg/L]; p < 0.001). CONCLUSIONS: This is the largest study of patients with available ferritin values in a mixed ICU cohort. Ferritin levels in patients with hemophagocytic lymphohistiocytosis, sepsis, septic shock, and other conditions were distinctly different, with the highest levels observed in hemophagocytic lymphohistiocytosis patients. A maximum ferritin of 9,083 µg/L showed high sensitivity and specificity and may therefore contribute to improved diagnosis of hemophagocytic lymphohistiocytosis in the ICU. The inclusion of ferritin in the sepsis laboratory panel is warranted.
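The sensitivity/specificity reported for the 9,083 µg/L cutoff follows from a simple threshold rule applied to each group's maximum ferritin. This sketch uses invented ferritin values, not the study's data; only the cutoff itself comes from the abstract.

```python
# Sketch of sensitivity/specificity for a diagnostic cutoff on a biomarker:
# the rule "maximum ferritin >= cutoff predicts HLH".

def sens_spec_at_cutoff(values_positive, values_negative, cutoff):
    """Sensitivity and specificity of `value >= cutoff` as a positive test.

    values_positive: biomarker values of patients WITH the disease.
    values_negative: biomarker values of patients WITHOUT the disease.
    """
    tp = sum(v >= cutoff for v in values_positive)   # true positives
    tn = sum(v < cutoff for v in values_negative)    # true negatives
    return tp / len(values_positive), tn / len(values_negative)

# Hypothetical maximum ferritin values (µg/L), for illustration only.
hlh_ferritin   = [12000, 9500, 25000, 8000]   # patients with HLH
other_ferritin = [600, 2000, 9500, 1500]      # sepsis/other hyperferritinemia
print(sens_spec_at_cutoff(hlh_ferritin, other_ferritin, 9083))  # (0.75, 0.75)
```

Sweeping the cutoff over all observed values and plotting sensitivity against 1 - specificity yields the ROC curve whose area (0.963 in the study) summarizes discrimination.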


Assuntos
Estado Terminal/epidemiologia , Ferritinas/sangue , Hiperferritinemia/diagnóstico , Linfo-Histiocitose Hemofagocítica/diagnóstico , Sepse/diagnóstico , Adulto , Fatores Etários , Biomarcadores/sangue , Feminino , Alemanha , Humanos , Hiperferritinemia/sangue , Hiperferritinemia/epidemiologia , Unidades de Terapia Intensiva , Linfo-Histiocitose Hemofagocítica/sangue , Linfo-Histiocitose Hemofagocítica/epidemiologia , Masculino , Pessoa de Meia-Idade , Estudos Retrospectivos , Sepse/sangue , Sepse/epidemiologia , Adulto Jovem
20.
Crit Care ; 24(1): 244, 2020 05 24.
Article in English | MEDLINE | ID: mdl-32448380

ABSTRACT

BACKGROUND: Hemophagocytic lymphohistiocytosis (HLH) is a rare though often fatal hyperinflammatory syndrome mimicking sepsis in the critically ill. Diagnosis relies on the HLH-2004 criteria and the HScore, which were developed in pediatric and in adult non-critically ill patients, respectively. We therefore aimed to determine the sensitivity and specificity of the HLH-2004 criteria and HScore in a cohort of adult critically ill patients. METHODS: In this further analysis of a retrospective observational study, patients ≥ 18 years admitted to at least one adult ICU at Charité - Universitätsmedizin Berlin between January 2006 and August 2018 with hyperferritinemia of ≥ 500 µg/L were included. Patients' charts were reviewed for clinically diagnosed or suspected HLH. Receiver operating characteristic (ROC) analysis was performed to determine prediction accuracy. RESULTS: In total, 2623 patients with hyperferritinemia were included, of whom 40 had HLH. We found the best prediction accuracy of HLH diagnosis for a cutoff of 4 fulfilled HLH-2004 criteria (95.0% sensitivity and 93.6% specificity) and an HScore cutoff of 168 (100% sensitivity and 94.1% specificity). By adjusting the HLH-2004 criteria cutoffs for hyperferritinemia to 3000 µg/L and for fever to 38.2 °C, sensitivity and specificity increased to 97.5% and 96.1%, respectively. Both a higher number of fulfilled HLH-2004 criteria [OR 1.513 (95% CI 1.372-1.667); p < 0.001] and a higher HScore [OR 1.011 (95% CI 1.009-1.013); p < 0.001] were significantly associated with in-hospital mortality. CONCLUSIONS: An HScore cutoff of 168 revealed a sensitivity of 100% and a specificity of 94.1%, thereby providing slightly superior diagnostic accuracy compared with the HLH-2004 criteria. Both the HLH-2004 criteria and the HScore proved to be of good diagnostic accuracy and consequently might be used for HLH diagnosis in critically ill patients. CLINICAL TRIAL REGISTRATION: The study was registered with www.ClinicalTrials.gov (NCT02854943) on August 1, 2016.
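The "number of fulfilled HLH-2004 criteria" with the adjusted cutoffs can be sketched as counting how many per-patient rules hold. The criteria subset and patient record below are simplified assumptions for illustration; only the ferritin (3000 µg/L) and fever (38.2 °C) cutoffs come from the abstract, and this is not the complete HLH-2004 definition.

```python
# Sketch of counting fulfilled HLH-2004-style criteria with the adjusted
# cutoffs mentioned above. Criteria subset and field names are illustrative.

ADJUSTED_CRITERIA = {
    "fever":             lambda p: p["temp_max_c"] >= 38.2,     # adjusted cutoff
    "hyperferritinemia": lambda p: p["ferritin_ug_l"] >= 3000,  # adjusted cutoff
    "splenomegaly":      lambda p: p["splenomegaly"],
    "cytopenias":        lambda p: p["cytopenia_lineages"] >= 2,
    "hemophagocytosis":  lambda p: p["hemophagocytosis"],
}

def fulfilled_criteria(patient):
    """Number of criteria the patient record satisfies."""
    return sum(rule(patient) for rule in ADJUSTED_CRITERIA.values())

# Hypothetical patient record for illustration only.
patient = {
    "temp_max_c": 39.1,
    "ferritin_ug_l": 8200,
    "splenomegaly": True,
    "cytopenia_lineages": 2,
    "hemophagocytosis": False,
}
print(fulfilled_criteria(patient))  # 4
```

With a diagnostic cutoff of 4 fulfilled criteria (as in the study), this hypothetical patient would screen positive; comparing the count against different cutoffs across a cohort is what drives the ROC analysis described above.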


Subjects
Diagnostic Techniques and Procedures/standards , Hemophagocytic Lymphohistiocytosis/diagnosis , Adult , Berlin/epidemiology , Critical Illness/mortality , Female , Ferritins/analysis , Ferritins/blood , Humans , Hyperferritinemia/diagnosis , Logistic Models , Hemophagocytic Lymphohistiocytosis/classification , Hemophagocytic Lymphohistiocytosis/epidemiology , Male , Middle Aged , ROC Curve , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity