1.
Article En | MEDLINE | ID: mdl-35897478

(1) Background: The effects of lockdown repetition on work-related stress, expressed through Effort-Reward Imbalance (ERI), during the COVID-19 pandemic are poorly documented. We investigated the effect of repeated lockdowns on ERI in French workers, its differences across occupations, and the change in its influencing factors over time. (2) Methods: Participants were included in a prospective cross-sectional observational study from 30 March 2020 to 28 May 2021. The primary outcome was the ERI score (visual analog scale). The ERI score of the population was examined via Generalized Estimating Equations. For each period, the factors influencing ERI were studied by multivariate linear regression. (3) Results: Among 8121 participants, the ERI score decreased during the first 2 lockdowns (53.2 ± 0.3, p < 0.001; 50.5 ± 0.7, p < 0.001) and after lockdown 2 (54.8 ± 0.8, p = 0.004) compared with the pre-pandemic period (59 ± 0.4). ERI was higher in medical than in paramedical professionals in the pre-pandemic period and the first 2 lockdowns. Higher workloads were associated with better ERI scores. (4) Conclusions: In a large French sample, Effort-Reward Imbalance worsened during the COVID-19 pandemic until the end of the 2nd lockdown. Paramedical professionals experienced a higher burden of stress than medical professionals.
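For illustration, the per-period summary of visual-analog ERI scores described in this abstract reduces to a mean ± standard-error computation for each period (the study itself used Generalized Estimating Equations to handle repeated measures). A minimal sketch on hypothetical scores, not the study's data:

```python
from math import sqrt

def mean_sem(scores):
    """Return (mean, standard error of the mean) for a list of ERI scores."""
    n = len(scores)
    m = sum(scores) / n
    var = sum((s - m) ** 2 for s in scores) / (n - 1)  # sample variance
    return m, sqrt(var / n)

# Hypothetical visual-analog ERI scores (0-100) for two periods
pre_pandemic = [62, 55, 60, 58, 61, 57]
lockdown_1 = [50, 48, 55, 52, 49, 53]

m0, se0 = mean_sem(pre_pandemic)
m1, se1 = mean_sem(lockdown_1)
print(f"pre-pandemic: {m0:.1f} ± {se0:.1f}")
print(f"lockdown 1:   {m1:.1f} ± {se1:.1f}")
```

A full reanalysis would model the within-participant correlation across periods rather than treating the periods as independent samples.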


COVID-19 , Pandemics , COVID-19/epidemiology , Communicable Disease Control , Cross-Sectional Studies , France/epidemiology , Humans , Job Satisfaction , Prospective Studies , Reward , Stress, Psychological/epidemiology , Surveys and Questionnaires , Workload
2.
Simul Healthc ; 17(1): 42-48, 2022 Feb 01.
Article En | MEDLINE | ID: mdl-35104829

INTRODUCTION: Avoiding coronavirus disease 2019 (COVID-19) work-related infection in frontline healthcare workers is a major challenge. A massive training program was launched in our university hospital for anesthesia/intensive care unit and operating room staff, aiming to upskill 2249 healthcare workers for the management of COVID-19 patients. We hypothesized that such massive training was feasible in a 2-week time frame and efficient in avoiding sick leave. METHODS: We performed a retrospective observational study. Training focused on personal protective equipment donning/doffing and airway management in a COVID-19 simulated patient. The educational models used were in situ procedural and immersive simulation, peer-teaching, and rapid cycle deliberate practice. Self-learning organization principles were used for trainers' management. Ordinary sick-leave volumes, in full-time equivalents, for March and April 2020 were compared with the same period in 2017, 2018, and 2019. RESULTS: A total of 1668 healthcare workers were trained (74.2% of the target population) in 99 training sessions over 11 days. The median number of learners per session was 16 (interquartile range = 9-25). In the first 5 days, the median number of people trained per weekday was 311 (interquartile range = 124-385). Sick leave did not increase in March to April 2020 compared with the same period in the 3 preceding years. CONCLUSIONS: Massive training for COVID-19 patient management in frontline healthcare workers is feasible in a very short time and efficient in limiting the rate of sick leave. This experience could be used in anticipation of new COVID-19 waves or for rapidly preparing hospital staff for an unexpected major health crisis.


COVID-19 , Humans , Pandemics , Personnel, Hospital , SARS-CoV-2 , Sick Leave
3.
Front Psychiatry ; 12: 689634, 2021.
Article En | MEDLINE | ID: mdl-34858218

Introduction: COVID-19 lockdown measures have been sources of both potential stress and possible psychological and addiction complications. A lack of activity and isolation during lockdown are among the factors thought to be behind the growth in the use of psychoactive substances and worsening addictive behaviors. Previous studies on the pandemic have attested to an increase in alcohol consumption during lockdowns. Likewise, data suggest there has also been a rise in the use of cannabis, although it is unclear how this is affected by external factors. Our study used quantitative data collected from an international population to evaluate changes in cannabis consumption during the lockdown period between March and October 2020. We also compared users and non-users of the drug in relation to: (1) socio-demographic differences, (2) emotional experiences, and (3) the information available and the degree of approval of lockdown measures. Methods: An online self-report questionnaire concerning the lockdown was widely disseminated around the globe. Data were collected on sociodemographics and on how the rules imposed had influenced the use of cannabis, concerns about health, the economic impact of the measures and the approach taken by government(s). Results: One hundred and eighty-two respondents consumed cannabis before the lockdown vs. 199 thereafter. Mean cannabis consumption fell from 13 joints per week pre-lockdown to 9.75 afterwards (p < 0.001). Forty-nine respondents stopped using cannabis altogether and 66 reported starting to use it. The cannabis users were: less satisfied with government measures; less worried about their health; more concerned about the impact of COVID-19 on the economy and their career; and more frightened of becoming infected in public areas.
The risk factors for cannabis use were: age (OR = 0.96); concern for physical health (OR = 0.98); tobacco (OR = 1.1) and alcohol consumption during lockdown (OR = 1.1); the pre-lockdown anger level (OR = 1.01); and feelings of boredom during the restrictions (OR = 1.1). Conclusion: In a specific sub-population, the COVID-19 lockdown brought about either an end to the consumption of cannabis or new use of the drug. The main risk factors for cannabis use were: a lower age, co-addictions and high levels of emotion.

4.
PLoS One ; 16(10): e0257840, 2021.
Article En | MEDLINE | ID: mdl-34614016

INTRODUCTION: The COVID-19 pandemic has initiated an upheaval in society and has been the cause of considerable stress during this period. Healthcare professionals have been on the front line during this health crisis, particularly paramedical staff. The aim of this study was to assess the stress levels of healthcare workers during the first wave of the pandemic. MATERIALS AND METHODS: The COVISTRESS international study is a questionnaire disseminated online, collecting demographic and stress-related data across the globe during the pandemic. Stress levels were evaluated using a non-calibrated visual analog scale, from 0 (no stress) to 100 (maximal stress). RESULTS: Among the 13,537 individuals from 44 countries who completed the survey from January to June 2020, we included 10,051 workers (including 1379 healthcare workers, 631 medical doctors and 748 paramedical staff). The stress levels during the first wave of the pandemic were 57.8 ± 33 in the whole cohort, 65.3 ± 29.1 in medical doctors, and 73.6 ± 27.7 in paramedical staff. Healthcare professionals, and especially paramedical staff, had the highest levels of stress (p < 0.001 vs non-healthcare workers). Across all occupational categories, women had systematically significantly higher levels of work-related stress than men (p < 0.001). There was a negative correlation between age and stress level (r = -0.098, p < 0.001). Healthcare professionals demonstrated an increased risk of very-high stress levels (>80) compared to other workers (OR = 2.13, 95% CI 1.87-2.41). Paramedical staff's risk of very-high levels of stress was higher than doctors' (1.88, 1.50-2.34). The risk of high levels of stress was also increased in women (1.83, 1.61-2.09; p < 0.001 vs. men) and in people aged <50 (1.45, 1.26-1.66; p < 0.001 vs. aged >50). CONCLUSIONS: The first wave of the pandemic was a major stressful event for healthcare workers, especially paramedical staff.
Women were the most at risk, while age was a protective factor.
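The odds ratios quoted above (e.g. OR = 2.13, 95% CI 1.87-2.41) are standard 2×2-table arithmetic. A minimal sketch of the Woolf (log-odds) confidence-interval method, using made-up cell counts since the abstract does not report the raw table:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI (Woolf method) for a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts: very-high stress (>80) in healthcare vs other workers
or_, lo, hi = odds_ratio_ci(500, 879, 2000, 7500)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The study's multivariable ORs would additionally be adjusted for covariates via logistic regression; this sketch shows only the crude calculation.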


COVID-19/epidemiology , Health Personnel/psychology , Internationality , Pandemics/statistics & numerical data , Stress, Psychological/epidemiology , Surveys and Questionnaires , Adult , Female , Humans , Male , Middle Aged , Young Adult
5.
Nurse Educ Today ; 99: 104792, 2021 Apr.
Article En | MEDLINE | ID: mdl-33578004

BACKGROUND: Simulation is a pedagogical method known to generate stress, which may be influenced by previous stressful experiences. OBJECTIVES: The purpose of this study was to determine the impact of previous experience with a clinical critical event on the stress experienced by nursing students during a simulation session of critical events, and on the stress experienced during clinical critical events subsequent to the training. DESIGN: Observational case-control study. SETTINGS: Four critical event scenarios were created using full-scale simulation. PARTICIPANTS: Two hundred and fifteen fourth-semester undergraduate nursing students. The control group (n = 112) consisted of learners who had not previously experienced a critical event. The prior exposure group (n = 103) consisted of learners who had experienced a critical event before the course. METHODS: Stress levels were assessed using the self-report stress numerical rating scale-11. RESULTS: There was no significant difference in the level of stress between the prior exposure group and the control group before, during or expected after the simulation session. A significant decrease in stress was observed in both groups from before the course to during the session (p < 0.05) and expected after the session (p < 0.05). There was no significant difference between the expected post-session stress level and the stress levels reported four months after the training (p = 0.966). At four months, there was no significant difference in stress levels between the groups (p = 0.212). CONCLUSIONS: Prior experience of a clinical critical event did not influence students' reported stress levels during the simulation session. Conversely, simulation-based training of critical situations appears to reduce the level of self-assessed stress during critical events in clinical practice after the training.


Education, Nursing, Baccalaureate , Simulation Training , Students, Nursing , Case-Control Studies , Clinical Competence , Humans
6.
Eur J Radiol ; 130: 109132, 2020 Sep.
Article En | MEDLINE | ID: mdl-32619753

PURPOSE: The 4-point score is the cornerstone of brain death (BD) confirmation using computed tomography angiography (CTA). We hypothesized that considering the superior petrosal veins (SPVs) may improve CTA diagnostic performance in the BD setting. We aimed to compare the diagnostic performance of three revised CTA scores including SPVs with that of the 4-point score in the confirmation of BD. METHODS: In this retrospective study, 69 consecutive adult patients admitted to a French university hospital who met clinical brain death criteria and received at least one CTA were included. CTA images were reviewed by two blinded neuroradiologists. A first analysis compared the 4-point score, considered as the reference, with three non-opacification scores: a "Toulouse score" including SPVs and middle cerebral arteries, a "venous score" including SPVs and internal cerebral veins, and a "7-score" including all these vessels and the basilar artery. Psychometric properties, observer agreement and misclassification rates were assessed. A second analysis considered clinical examination as the reference. RESULTS: Brain death was confirmed by the 4-point score in 59 cases (89.4%). When compared with the 4-point score, the Toulouse score displayed a 100% positive predictive value, substantial observer agreement (0.77 [0.53; 1]) and the lowest misclassification rate (3.03%). Results were similar in the craniectomy subgroup. The Toulouse score was the only revised test that combined a sensitivity close to that of the 4-point score (86.4% [75.7; 93.6] vs. 89.4% [79.4; 95.6], p-value < 0.001) with substantial observer agreement. CONCLUSIONS: A score including SPVs and middle cerebral arteries is a valid method for BD confirmation using CTA, even in patients who have undergone craniectomy.
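The metrics used to compare the revised scores against the 4-point reference (sensitivity, positive predictive value, chance-corrected agreement) are simple confusion-matrix computations. A minimal sketch with hypothetical counts, not the study's actual data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity and positive predictive value from a confusion matrix
    (tp/fp/fn/tn relative to the reference test)."""
    sens = tp / (tp + fn)
    ppv = tp / (tp + fp)
    return sens, ppv

def cohens_kappa(tp, fp, fn, tn):
    """Cohen's kappa: chance-corrected agreement between two tests."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n  # observed agreement
    # expected agreement from the marginal positive/negative rates
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1 - pe)

# Hypothetical 2x2 table: revised score vs 4-point reference (n = 66)
sens, ppv = diagnostic_metrics(51, 0, 8, 7)
print(f"sensitivity = {sens:.1%}, PPV = {ppv:.1%}")
print(f"kappa = {cohens_kappa(51, 0, 8, 7):.2f}")
```

With these made-up counts the sensitivity and PPV happen to mirror the reported magnitudes; the kappa does not, which is expected since the true cell counts are not given in the abstract.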


Brain Death/diagnostic imaging , Cerebral Angiography/methods , Cerebral Arteries/diagnostic imaging , Cerebral Veins/diagnostic imaging , Computed Tomography Angiography/methods , Adult , Aged , Female , France , Humans , Male , Middle Aged , Prospective Studies , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity
7.
J Clin Anesth ; 64: 109811, 2020 Apr 19.
Article En | MEDLINE | ID: mdl-32320919

STUDY OBJECTIVE: To assess the incidence and predictive factors of awake craniotomy complications. DESIGN: Retrospective cohort study. SETTING: Operating room and post-anesthesia care unit. PATIENTS: 162 patients who underwent 188 awake craniotomy procedures for brain tumor, ASA I to III, with monitored anesthesia care. MEASUREMENTS: We classified procedures into 3 groups: major event, minor event, and no event. Major events were defined as respiratory failure requiring face mask or invasive ventilation; hemodynamic instability treated with vasoactive drugs or bradycardia treated with atropine; bleeding >500 ml; transfusion; gaseous embolism; cardiac arrest; seizure; cerebral edema; or any event leading to interruption of cerebral mapping. A minor event was defined as any complication not classified as major. Multivariate logistic regression, adjusted for age and ASA score, was used to determine predictive factors of major complications. MAIN RESULTS: Forty-five procedures (24%) were classified in the major event group, 126 (67%) in the minor event group, and 17 (9%) in the no event group. Seizure was the main complication (n = 13). Asthma (odds ratio: 10.85 [1.34; 235.6]), remifentanil infusion (odds ratio: 2.97 [1.08; 9.85]) and length of the operation after brain mapping (odds ratio per additional minute: 1.01 [1.01; 1.03]) were associated with major events. CONCLUSIONS: A medical history of asthma, remifentanil infusion and a long duration of neurosurgery after cortical mapping appear to be risk factors for major complications during awake craniotomy.

8.
Emergencias (Sant Vicenç dels Horts) ; 32(2): 111-117, abr. 2020. ilus, tab
Article Es | IBECS | ID: ibc-188159



Objective: Interprofessional simulation (IPS) training is an effective way to learn crisis resource management. The type of debriefing used in IPS training may affect participants' performance and their level of psychological safety. We aimed to assess and compare performance after standard collective debriefing versus a combination of individual and collective debriefing ("combined" approach). Methods: Randomized, controlled multicenter trial. IPS sessions were randomized to have either standard or combined debriefing. Each team's performance in the IPS session was assessed with the Team Emergency Assessment Measure. The participants assessed the debriefing quality with the Debriefing Assessment for Simulation in Healthcare. Results: Forty IPS sessions were randomized, and 30 were analyzed, 15 using standard collective debriefing and 15 the combined individual–collective method. Teams' performance improved with both types of debriefing, based on pre-post testing (P<.01), and there were no significant differences in overall performance scores between the 2 types of debriefing (P=.64). However, the combined approach was associated with higher scores for leadership skills (P<.05) and psychological safety, and the participants' learning experience was better (P<.05). Conclusions: During IPS courses on crisis resource management, debriefing improves participants' performance, but similar overall results can be obtained with both debriefing methods. Combined debriefing might be more effective for improving participants' leadership skills and psychological safety and also provide a better learning experience


Humans , Human Resources in Disasters , Leadership , Prospective Studies , Analysis of Variance
9.
Emergencias ; 32(2): 111-117, 2020.
Article En, Es | MEDLINE | ID: mdl-32125110

OBJECTIVES: Interprofessional simulation (IPS) training is an effective way to learn crisis resource management. The type of debriefing used in IPS training may affect participants' performance and their level of psychological safety. We aimed to assess and compare performance after standard collective debriefing versus a combination of individual and collective debriefing ("combined" approach). MATERIAL AND METHODS: Randomized, controlled multicenter trial. IPS sessions were randomized to have either standard or combined debriefing. Each team's performance in the IPS session was assessed with the Team Emergency Assessment Measure. The participants assessed the debriefing quality with the Debriefing Assessment for Simulation in Healthcare. RESULTS: Forty IPS sessions were randomized, and 30 were analyzed, 15 using standard collective debriefing and 15 the combined individual-collective method. Teams' performance improved with both types of debriefing, based on pre-post testing (P<.01), and there were no significant differences in overall performance scores between the 2 types of debriefing (P=.64). However, the combined approach was associated with higher scores for leadership skills (P<.05) and psychological safety, and the participants' learning experience was better (P<.05). CONCLUSION: During IPS courses on crisis resource management, debriefing improves participants' performance, but similar overall results can be obtained with both debriefing methods. Combined debriefing might be more effective for improving participants' leadership skills and psychological safety and also provide a better learning experience.




Crew Resource Management, Healthcare , Simulation Training , Humans , Leadership , Learning
10.
Presse Med ; 48(7-8 Pt 1): 780-787, 2019.
Article Fr | MEDLINE | ID: mdl-31383383

Interprofessional simulation-based education is effective for learning non-technical critical care skills and strengthening interprofessional team collaboration to optimize quality of care and patient outcomes. Implementation of interprofessional simulation sessions in initial and continuing education is facilitated by a team of "champions" from each discipline/profession to ensure educational quality and logistics. Interprofessional simulation training must be integrated into a broader interprofessional curriculum supported by managers, administrators and clinical colleagues from the different professional programs. When conducting interprofessional simulation training, it is essential to account for sociological factors (hierarchy, power, authority, interprofessional conflicts, gender, access to information, professional identity) both in scenario design and in debriefing. Teamwork assessment tools in interprofessional simulation training may be used to guide debriefing. The interprofessional simulation setting (in situ or in a simulation centre) should be chosen according to the learning objectives and logistics.


Critical Care/methods , Education, Medical/methods , Interprofessional Relations , Patient Care Team , Simulation Training , Clinical Competence , Critical Care/standards , Curriculum/standards , Education, Medical/standards , Educational Measurement/methods , Humans , Implementation Science , Patient Care Team/organization & administration , Patient Care Team/standards , Simulation Training/methods , Simulation Training/organization & administration , Simulation Training/standards
12.
Eur J Anaesthesiol ; 35(7): 511-518, 2018 07.
Article En | MEDLINE | ID: mdl-29419564

BACKGROUND: Knowledge of the factors associated with the decision to withdraw or withhold life support (WWLS) in brain-injured patients is limited. However, most deaths in these patients may involve such a decision. OBJECTIVES: To identify factors associated with the decision to WWLS in brain-injured patients requiring mechanical ventilation who survive the first 24 h in the ICU, and to analyse the outcomes and time to death. DESIGN: A retrospective observational multicentre study. SETTINGS: Twenty French ICUs in 18 university hospitals. PATIENTS: A total of 793 mechanically ventilated brain-injured adult patients. INTERVENTIONS: None. MAIN OUTCOME MEASURES: Decision to WWLS within 3 months of ICU admission, and death or Glasgow Outcome Scale (GOS) score at day 90. RESULTS: A decision to WWLS was made in 171 patients (22%), of whom 89% were dead at day 90. Of the 247 deaths recorded at day 90, 153 (62%) occurred after a decision to WWLS. The median time between admission and death was 10 (5 to 20) days when a decision to WWLS was made vs. 10 (5 to 26) days when no end-of-life decision was made (P = 0.924). Among the 18 patients with a decision to WWLS who were still alive at day 90, three patients (2%) had a GOS score of 2, nine patients (5%) had a GOS score of 3 and five patients (3%) a GOS score of 4. Older age, presence of one nonreactive and dilated pupil, Glasgow Coma Scale less than 7, barbiturate use, acute respiratory distress syndrome and worsening lesions on computed tomography scans were each independently associated with decisions to WWLS. CONCLUSION: Using a nationwide cohort of brain-injured patients, we observed a high proportion of deaths associated with an end-of-life decision. Older age and several disease severity factors were associated with the decision to WWLS.


Brain Injuries/therapy , Clinical Decision-Making/methods , Life Support Care/methods , Life Support Care/trends , Ventilators, Mechanical/trends , Withholding Treatment/trends , Adult , Aged , Brain Injuries/diagnosis , Female , Humans , Intensive Care Units/trends , Male , Middle Aged , Prospective Studies , Respiration, Artificial/methods , Respiration, Artificial/trends , Retrospective Studies , Treatment Outcome
13.
Anaesth Crit Care Pain Med ; 36(6): 403-406, 2017 Dec.
Article En | MEDLINE | ID: mdl-28648752

INTRODUCTION: The use of high-fidelity simulators in medicine can improve knowledge, behaviour and practice but may be associated with significant stress. Our objective was to measure the physiological and psychological self-assessed intensity of stress before and after a planned simulation training session among third-year anaesthesia and critical care residents. METHODS: A convenience sample of 27 residents participating in a simulation training course was studied. Stress was evaluated by self-assessment using a numerical scale and by salivary amylase concentration before and after the session. Technical and non-technical performance (using the Aberdeen Anaesthetists' Non-Technical Skills scale) was assessed through videotape analysis. RESULTS: The median stress score was 5 (2-8) before and 7 (2-10) after the simulation session (P < 0.001). For 48% of the residents studied, the stress score after the session was greater than or equal to 8/10. Salivary amylase concentration increased significantly after the session compared with before (1,250,440 ± 1,216,667 vs. 727,260 ± 603,787 IU/L, P = 0.008). There was no significant correlation between stress parameters and non-technical performance. DISCUSSION: Simulation-induced stress, as measured by self-assessment and a biological parameter, is high before the session and increases significantly during the course. While this stress did not seem to impact performance negatively, it should be taken into account.


Anesthesiology/education , Critical Care , High Fidelity Simulation Training/methods , Internship and Residency/statistics & numerical data , Self-Assessment , Stress, Physiological , Stress, Psychological/psychology , Adult , Amylases/analysis , Amylases/metabolism , Clinical Competence , Female , Humans , Male , Saliva/enzymology , Stress, Psychological/etiology
14.
Int Arch Occup Environ Health ; 90(6): 467-480, 2017 Aug.
Article En | MEDLINE | ID: mdl-28271382

PURPOSE: To compare tachycardia and cardiac strain between 24-hour shifts (24hS) and 14-hour night shifts (14hS) in emergency physicians (EPs), and to investigate key factors influencing tachycardia and cardiac strain. METHODS: We monitored heart rate (HR) with a Holter ECG in a shift-randomized trial comparing a 24hS, a 14hS, and a control day, in a potential pool of 19 EPs. We also measured 24-h HR on the third day (D3) after both shifts. We measured perceived stress with a visual analog scale and recorded the number of life-and-death emergencies. RESULTS: The 17 EPs completing the whole protocol reached maximal HR (180.9 ± 6.9 bpm) during both shifts. Minutes of tachycardia >100 bpm were higher in 24hS (208.3 ± 63.8) than on any other day (14hS: 142.3 ± 36.9; D3/14hS: 64.8 ± 31.4; D3/24hS: 57.6 ± 19.1; control day: 39.2 ± 11.6 min, p < .05). Shifts induced a cardiac strain twice as high as on days not involving patient contact. Each life-and-death emergency added 26 min of tachycardia ≥100 bpm (p < .001), 7 min ≥110 bpm (p < .001), 2 min ≥120 bpm (p < .001) and 19 min of cardiac strain ≥30% (p = .014). Stress was associated with greater durations of tachycardia ≥100, 110 and 120 bpm, and of cardiac strain ≥30% (p < .001). CONCLUSION: We demonstrated several occurrences of maximal HR during shifts, combined with high cardiac strain. Durations of tachycardia were highest in 24hS and lasted several hours. Such values are comparable to those of workers exposed to highly physically demanding tasks or heat. We therefore suggest that EPs limit their exposure to 24hS. We further demonstrated the benefits of HR monitoring for identifying stressful events. ClinicalTrials.gov identifier: NCT01874704.
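The tachycardia-duration outcome in this study reduces to counting minutes above an HR threshold in a Holter trace. A minimal sketch on a short hypothetical minute-by-minute series:

```python
def minutes_above(hr_per_min, threshold):
    """Count minutes with heart rate at or above a threshold (bpm),
    given one averaged HR sample per minute from a Holter recording."""
    return sum(1 for hr in hr_per_min if hr >= threshold)

# Hypothetical minute-by-minute HR trace for a fragment of a shift
trace = [72, 95, 101, 110, 123, 99, 104, 118, 88, 130]
for t in (100, 110, 120):
    print(f">= {t} bpm: {minutes_above(trace, t)} min")
```

A real Holter export samples far more densely (beat-to-beat RR intervals), so a preprocessing step averaging HR per minute would precede this count.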


Emergency Medicine , Occupational Exposure/adverse effects , Physicians/psychology , Shift Work Schedule/adverse effects , Stress, Psychological/complications , Tachycardia/psychology , Adult , Body Mass Index , Female , France , Heart Rate , Humans , Male , Middle Aged , Monitoring, Ambulatory , Multivariate Analysis , Risk Factors , Sleep Disorders, Circadian Rhythm , Stress, Physiological , Surveys and Questionnaires , Visual Analog Scale , Work Schedule Tolerance/physiology
15.
Minerva Anestesiol ; 82(11): 1180-1188, 2016 11.
Article En | MEDLINE | ID: mdl-27625121

BACKGROUND: In several countries, computed tomography angiography (CTA) is used to confirm brain death (BD). A six-hour interval is recommended between clinical diagnosis and CTA acquisition, despite the lack of strong evidence to support this interval. The aim of this study was to determine the optimal timing for CTA in the confirmation of BD. METHODS: This retrospective observational study enrolled all adult patients admitted between January 2009 and December 2013 to the intensive care units of a French university hospital with clinically diagnosed BD and at least one CTA performed as a confirmatory test. The CTAs were classified as conclusive (i.e. confirming BD) or inconclusive (i.e. showing persistent brain circulation). RESULTS: One hundred and four patients (sex ratio M/F 1.8; age 55 years [41-64]) underwent 117 CTAs. CTA confirmed cerebral circulatory arrest in 94 cases, yielding a sensitivity of 80%. Inconclusive CTAs were performed earlier than conclusive ones (2 hours [1-3] vs. 4 hours [2-9], P = 0.03) and were associated with decompressive craniectomy (5 cases [23%] vs. 6 cases [7%], P = 0.05) and with failure to complete a full neurological examination (5 cases [23%] vs. 4 cases [5%], P = 0.02). Six hours after clinical diagnosis of BD, the proportion of conclusive CTAs was only 51%, with a progressive increase over time and more than 80% of CTAs conclusive after 12 hours. CONCLUSIONS: A 12-hour interval might be appropriate in order to limit the risk of inconclusive CTAs.
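The key result here, the proportion of conclusive CTAs as a function of delay after clinical diagnosis, is a filtered proportion. A minimal sketch with hypothetical delays and outcomes (the study's per-patient data are not in the abstract):

```python
def conclusive_rate(delays_hours, conclusive_flags, cutoff):
    """Proportion of conclusive CTAs among those acquired at or after a
    given delay (hours) from clinical brain-death diagnosis."""
    pairs = [(d, c) for d, c in zip(delays_hours, conclusive_flags) if d >= cutoff]
    if not pairs:
        return None  # no CTA beyond this cutoff
    return sum(c for _, c in pairs) / len(pairs)

# Hypothetical delays (h) and outcomes (1 = conclusive, 0 = inconclusive)
delays = [1, 2, 3, 5, 7, 9, 13, 14, 20]
flags = [0, 0, 1, 0, 0, 1, 1, 1, 1]
print(conclusive_rate(delays, flags, 6))   # CTAs done >= 6 h after diagnosis
print(conclusive_rate(delays, flags, 12))  # CTAs done >= 12 h after diagnosis
```

Sweeping the cutoff over a grid of hours would reproduce the kind of timing curve the authors used to argue for a 12-hour interval.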


Brain Death/diagnostic imaging , Computed Tomography Angiography , Adult , Brain Death/diagnosis , Cerebral Angiography , Female , Humans , Male , Middle Aged , Retrospective Studies , Time Factors , Tomography, X-Ray Computed
16.
Heart Lung ; 45(5): 406-8, 2016.
Article En | MEDLINE | ID: mdl-27402629

BACKGROUND: Takotsubo cardiomyopathy can occur at the early phase of severe acute brain injuries. In the case of cardiac output decrease or shock, the optimal treatment is still a matter of debate. Due to massive stress hormone release, the infusion of catecholamines may have limited effects and may even aggravate cardiac failure. Other inotropic agents may be an option. Levosimendan has been shown to have potential beneficial effects in this setting, although milrinone has not been studied. METHODS: We report a case of a young female presenting with inverted Takotsubo cardiomyopathy syndrome after severe traumatic brain injury. RESULTS: Due to hemodynamic instability and increasing levels of infused norepinephrine, dobutamine infusion was begun but rapidly stopped due to tachyarrhythmia. Milrinone infusion stabilized the patient's hemodynamic status and improved cardiac output without deleterious effects. CONCLUSION: Milrinone could be a good alternative when inotropes are required in Takotsubo cardiomyopathy and when dobutamine infusion is associated with tachyarrhythmia.


Brain Injuries, Traumatic/complications , Dobutamine/therapeutic use , Milrinone/administration & dosage , Takotsubo Cardiomyopathy/therapy , Adolescent , Brain Injuries, Traumatic/diagnosis , Cardiotonic Agents/administration & dosage , Female , Hemodynamics/drug effects , Humans , Takotsubo Cardiomyopathy/etiology , Takotsubo Cardiomyopathy/physiopathology , Trauma Severity Indices , Treatment Failure
17.
Acta Neurochir Suppl ; 122: 37-40, 2016.
Article En | MEDLINE | ID: mdl-27165873

In pathophysiology and clinical practice, the intracranial pressure (ICP) profiles in the supratentorial and infratentorial compartments are unclear. We know that the pressure within the skull is unevenly distributed, with demonstrated ICP gradients. We recorded and characterised the supra- and infratentorial ICP patterns to understand what drives the transtentorial ICP gradient. A 70-year-old man was operated on for acute cerebellar infarction. One supratentorial probe and one cerebellar probe were implanted. Both signals were recorded concurrently and analysed offline. We calculated mean ICP, ICP pulse amplitude, respiratory waves, slow waves and the RAP index of the supra- and infratentorial ICP signals. We then measured the transtentorial difference and performed correlation analysis for every index. Mean supratentorial ICP was 8.5 mmHg lower than infratentorial ICP, but the difference lessened at higher values. The two signals across the tentorium were closely correlated. Supra- and infratentorial pulse amplitude, respiratory waves and slow waves also showed a high degree of correlation, as did the compensatory reserve index (RAP). In this case report, we demonstrate that mean ICP is higher in the posterior fossa, with a strong correlation across the tentorium. All other ICP-derived parameters display a symmetrical profile.


Brain Infarction/physiopathology , Cerebellar Diseases/physiopathology , Intracranial Pressure/physiology , Aged , Brain Infarction/surgery , Cerebellar Diseases/surgery , Humans , Male , Monitoring, Physiologic , Spinal Cord
...