1.
Neurol Clin Pract ; 14(1): e200225, 2024 Feb.
Article En | MEDLINE | ID: mdl-38173542

Background and Objectives: Patterns of electrical activity in the brain (EEG) during sleep are sensitive to various health conditions, even at subclinical stages. The objective of this study was to estimate the sleep EEG-predicted incidence of future neurologic, cardiovascular, psychiatric, and mortality outcomes. Methods: This is a retrospective cohort study with 2 data sets. The Massachusetts General Hospital (MGH) sleep data set is a clinic-based cohort, used for model development. The Sleep Heart Health Study (SHHS) is a community-based cohort, used as the external validation cohort. The exposure was good, average, or poor sleep, defined by quartiles of sleep EEG-predicted risk. The outcomes included ischemic stroke, intracranial hemorrhage, mild cognitive impairment, dementia, atrial fibrillation, myocardial infarction, type 2 diabetes, hypertension, bipolar disorder, depression, and mortality. Diagnoses were based on diagnosis codes, brain imaging reports, medications, cognitive scores, and hospital records. We used the Cox survival model with death as the competing risk. Results: There were 8673 participants from MGH and 5650 from SHHS. For all outcomes, the model-predicted 10-year risk was within the 95% confidence interval of the ground truth, indicating good prediction performance. When comparing participants with poor, average, and good sleep, all 10-year risk ratios except that for atrial fibrillation were significant. The model-predicted 10-year risk ratio closely matched the observed event rate in the external validation cohort. Discussion: The incidence of health outcomes can be predicted by brain activity during sleep. The findings strengthen the concept of sleep as an accessible biological window into unfavorable brain and general health outcomes.
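The abstract's exposure definition (sleep-quality groups from quartiles of EEG-predicted risk) is easy to illustrate. Below is a minimal sketch on synthetic data, with hypothetical variable names; the paper itself uses a Cox survival model with death as a competing risk, which this crude 10-year event-rate comparison does not reproduce.

```python
import numpy as np

def sleep_quality_groups(predicted_risk):
    """Assign exposure groups by quartiles of model-predicted risk:
    bottom quartile = good sleep, top quartile = poor, rest = average."""
    q1, q3 = np.percentile(predicted_risk, [25, 75])
    return np.where(predicted_risk <= q1, "good",
           np.where(predicted_risk >= q3, "poor", "average"))

def ten_year_risk_ratio(groups, event, followup_years):
    """Crude 10-year event-rate ratio, poor vs good sleep.
    (A naive rate comparison for illustration only; it ignores
    censoring and the competing risk of death.)"""
    ten_yr_event = event & (followup_years <= 10)
    rate = {g: ten_yr_event[groups == g].mean() for g in ("good", "poor")}
    return rate["poor"] / rate["good"]

# toy synthetic cohort: higher predicted risk -> more events
rng = np.random.default_rng(0)
risk = rng.uniform(0, 1, 2000)
event = rng.uniform(0, 1, 2000) < 0.05 + 0.3 * risk
followup = rng.uniform(0, 15, 2000)
groups = sleep_quality_groups(risk)
print(round(ten_year_risk_ratio(groups, event, followup), 2))
```

In a well-calibrated model the poor-sleep group should show a materially higher observed event rate than the good-sleep group, which is the pattern the external validation cohort confirmed.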

2.
J Stroke Cerebrovasc Dis ; 32(9): 107249, 2023 Sep.
Article En | MEDLINE | ID: mdl-37536017

OBJECTIVES: Patients hospitalized with stroke develop delirium at higher rates than general hospitalized patients. While several medications are associated with delirium once present, it is unknown whether early medication exposures are associated with subsequent delirium in patients with stroke. Additionally, it is unknown whether delirium identification is associated with changes in the prescription of these medications. MATERIALS AND METHODS: We conducted a retrospective cohort study of patients admitted to a comprehensive stroke center who were assessed for delirium by trained nursing staff during clinical care. We analyzed exposures to multiple medication classes in the first 48 hours of admission and compared them between patients who developed delirium >48 hours after admission and those who never developed delirium. Statistical analysis was performed using univariate testing. Multivariable logistic regression was then used to evaluate the univariately significant medications while controlling for clinical confounders. RESULTS: In total, 1671 unique patients were included in the cohort, of whom 464 (27.8%) developed delirium >48 hours after admission. Delirium was associated with prior exposure to antipsychotics, sedatives, opiates, and antimicrobials. Antipsychotics, sedatives, and antimicrobials remained significantly associated with delirium even after accounting for several clinical covariates. Use of these medications decreased in the 48 hours following delirium identification, except for atypical antipsychotics, whose use increased. Other medication classes, such as steroids, benzodiazepines, and sleep aids, were not initially associated with subsequent delirium, but prescription patterns still changed after delirium identification. CONCLUSIONS: Early exposure to multiple medication classes is associated with the subsequent development of delirium in patients with stroke. Additionally, prescription patterns changed following delirium identification, suggesting that some of the associated medication classes may represent modifiable targets for future delirium prevention strategies, although further study is needed.
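The two-stage analysis described above (univariate screen, then multivariable logistic regression controlling for clinical confounders) can be sketched as follows on synthetic data. The medication names, effect sizes, and the age confounder are illustrative assumptions, not values from the study.

```python
import numpy as np
from scipy.stats import fisher_exact
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 800
# hypothetical exposures in the first 48 hours and a delirium label
sedatives = rng.integers(0, 2, n)
steroids = rng.integers(0, 2, n)
age = rng.normal(70, 10, n)
logit = -2 + 1.2 * sedatives + 0.03 * (age - 70)   # sedatives truly raise risk
delirium = rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))

def univariate_p(exposure, outcome):
    """2x2 Fisher exact test of exposure vs outcome."""
    table = [[np.sum((exposure == 1) & (outcome == 1)),
              np.sum((exposure == 1) & (outcome == 0))],
             [np.sum((exposure == 0) & (outcome == 1)),
              np.sum((exposure == 0) & (outcome == 0))]]
    return fisher_exact(table)[1]

# stage 1: univariate screen
screened = [name for name, x in [("sedatives", sedatives), ("steroids", steroids)]
            if univariate_p(x, delirium) < 0.05]

# stage 2: multivariable model with a clinical confounder (standardized age)
X = np.column_stack([sedatives, (age - age.mean()) / age.std()])
model = LogisticRegression().fit(X, delirium)
adjusted_or = np.exp(model.coef_[0][0])
print(screened, round(adjusted_or, 2))
```

The adjusted odds ratio for the screened medication stays above 1 after controlling for the confounder, mirroring the abstract's finding that some associations survive covariate adjustment.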


Antipsychotic Agents; Delirium; Stroke; Humans; Antipsychotic Agents/adverse effects; Retrospective Studies; Delirium/chemically induced; Delirium/diagnosis; Risk Factors; Stroke/diagnosis; Stroke/drug therapy; Stroke/complications; Hypnotics and Sedatives/therapeutic use; Hospitals
3.
Clin Neurophysiol ; 143: 97-106, 2022 Nov.
Article En | MEDLINE | ID: mdl-36182752

OBJECTIVE: Delayed cerebral ischemia (DCI) is a leading complication of aneurysmal subarachnoid hemorrhage (SAH), and electroencephalography (EEG) is increasingly used to evaluate DCI risk. Our goal was to develop an automated DCI prediction algorithm integrating multiple EEG features over time. METHODS: We assessed 113 moderate to severe grade SAH patients to develop a machine learning model that predicts DCI risk using multiple EEG features. RESULTS: Multiple EEG features discriminated between DCI and non-DCI patients when aligned either to SAH time or to DCI onset. DCI and non-DCI patients had significant differences in alpha-delta ratio (0.08 vs 0.05, p < 0.05), percent alpha variability (0.06 vs 0.04, p < 0.05), Shannon entropy (p < 0.05), and epileptiform discharge burden (205 vs 91 discharges per hour, p < 0.05) based on whole-brain and vascular territory averaging. Our model improves prediction by emphasizing the most informative features at a given time, achieving an area under the receiver operating characteristic curve of 0.73 by day 5 after SAH and good calibration between 48 and 72 hours (calibration error 0.13). CONCLUSIONS: Our proposed model obtains good performance in DCI prediction. SIGNIFICANCE: We leverage machine learning to enable rapid, automated, multi-feature EEG assessment, which has the potential to increase the utility of EEG for DCI prediction.
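The kind of evaluation reported here (discriminating DCI from non-DCI patients using EEG-derived features, summarized as an AUC) can be sketched as below. The feature distributions are synthetic, loosely shaped around the group means quoted in the abstract, and the single logistic model is a stand-in for the paper's time-resolved machine learning model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 113  # cohort size in the paper; the feature values below are synthetic
dci = rng.integers(0, 2, n)
# hypothetical per-patient EEG summaries: alpha-delta ratio, ED burden/hour
adr = rng.normal(0.05, 0.02, n) + 0.03 * dci
ed_burden = rng.normal(91, 40, n) + 114 * dci

# standardize the burden feature so both inputs are on comparable scales
X = np.column_stack([adr, (ed_burden - ed_burden.mean()) / ed_burden.std()])
probs = LogisticRegression().fit(X, dci).predict_proba(X)[:, 1]
print(round(roc_auc_score(dci, probs), 2))
```

With separations of this size the in-sample AUC lands well above chance, which is the sense in which combined EEG features "discriminate" between the groups; the paper's reported 0.73 is an out-of-time figure and therefore more conservative.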


Brain Ischemia; Subarachnoid Hemorrhage; Brain; Brain Ischemia/complications; Brain Ischemia/etiology; Cerebral Infarction; Electroencephalography/adverse effects; Humans; Subarachnoid Hemorrhage/complications; Subarachnoid Hemorrhage/diagnosis
4.
Neurology ; 98(5): e459-e469, 2022 Feb 1.
Article En | MEDLINE | ID: mdl-34845057

BACKGROUND AND OBJECTIVES: Delayed cerebral ischemia (DCI) is the leading complication of subarachnoid hemorrhage (SAH). Because DCI was traditionally thought to be caused by large vessel vasospasm, transcranial Doppler ultrasounds (TCDs) have been the standard of care. Continuous EEG has emerged as a promising complementary monitoring modality that predicts increased DCI risk. Our objective was to determine whether combining EEG and TCD data improves prediction of DCI after SAH. We hypothesized that integrating these diagnostic modalities improves DCI prediction. METHODS: We retrospectively assessed patients with moderate to severe SAH (2011-2015; Fisher 3-4 or Hunt-Hess 4-5) who had both prospective TCD and EEG acquisition during hospitalization. Middle cerebral artery (MCA) peak systolic velocities (PSVs) and the presence or absence of epileptiform abnormalities (EAs), defined as seizures, epileptiform discharges, and rhythmic/periodic activity, were recorded daily. Logistic regressions were used to identify significant covariates of EAs and TCD to predict DCI. Group-based trajectory modeling (GBTM) was used to account for changes over time by identifying distinct group trajectories of MCA PSV and EAs associated with DCI risk. RESULTS: We assessed 107 patients; DCI developed in 56 (51.9%). Univariate predictors of DCI were the presence of high MCA velocity (PSV ≥200 cm/s; sensitivity 27%, specificity 89%) and of EAs (sensitivity 66%, specificity 62%) on or before day 3. Two univariate GBTM trajectories of EAs predicted DCI (sensitivity 64%, specificity 62.75%). Logistic regression and GBTM models using both TCD and EEG monitoring performed better than either modality alone. The best logistic regression and GBTM models used both TCD and EEG data, Hunt-Hess score at admission, and aneurysm treatment as predictors of DCI (logistic regression: sensitivity 90%, specificity 70%; GBTM: sensitivity 89%, specificity 67%). DISCUSSION: EEG and TCD biomarkers combined provide the best prediction of DCI. The conjunction of clinical variables with the timing of EAs and high MCA velocities improved model performance. These results suggest that TCD and cEEG are promising complementary monitoring modalities for DCI prediction. Our model has the potential to serve as a decision support tool in SAH management. CLASSIFICATION OF EVIDENCE: This study provides Class II evidence that combined TCD and EEG monitoring can identify delayed cerebral ischemia after SAH.
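The gain from combining modalities can be illustrated with a toy sensitivity/specificity calculation for TCD alone, EEG alone, and a simple either-marker rule. The marker prevalences below are loosely inspired by the reported univariate figures, but the data are synthetic and the either-marker rule is far cruder than the paper's logistic regression and GBTM models.

```python
import numpy as np

def sens_spec(pred, truth):
    """Sensitivity and specificity of a binary predictor."""
    tp = np.sum(pred & truth); fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth); fp = np.sum(pred & ~truth)
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(3)
n = 107  # cohort size in the paper; the markers below are synthetic
dci = rng.uniform(0, 1, n) < 0.52
# markers more common in DCI patients, roughly echoing the univariate figures
high_mca = dci & (rng.uniform(0, 1, n) < 0.3) | ~dci & (rng.uniform(0, 1, n) < 0.1)
eas = dci & (rng.uniform(0, 1, n) < 0.66) | ~dci & (rng.uniform(0, 1, n) < 0.38)

# combining modalities: flag DCI risk if either marker is present
combined = high_mca | eas
for name, p in [("TCD", high_mca), ("EEG", eas), ("combined", combined)]:
    se, sp = sens_spec(p, dci)
    print(f"{name}: sensitivity {se:.2f}, specificity {sp:.2f}")
```

An either-marker rule can only raise sensitivity (at some cost in specificity), which is one intuition for why the combined models outperform either modality alone; the paper's models recover specificity by weighting the inputs and adding clinical covariates.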


Brain Ischemia; Subarachnoid Hemorrhage; Vasospasm, Intracranial; Brain Ischemia/diagnostic imaging; Brain Ischemia/etiology; Electroencephalography/methods; Humans; Prospective Studies; Retrospective Studies; Subarachnoid Hemorrhage/complications; Subarachnoid Hemorrhage/diagnostic imaging; Ultrasonography, Doppler, Transcranial; Vasospasm, Intracranial/complications; Vasospasm, Intracranial/etiology
5.
Clin Neurophysiol ; 141: 139-146, 2022 Sep.
Article En | MEDLINE | ID: mdl-33812771

OBJECTIVE: To investigate whether epileptiform discharge burden can identify those at risk for delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH). METHODS: Retrospective analysis of 113 moderate to severe grade SAH patients who had continuous EEG (cEEG) recordings during their hospitalization. We calculated the burden of epileptiform discharges (ED), measured as the number of EDs per hour. RESULTS: Many SAH patients had an increase in ED burden during the first 3-10 days after rupture, the major risk period for DCI. However, those who developed DCI had a significantly higher hourly burden from days 3.5-6 after SAH than those who did not. ED burden was higher in DCI patients when assessed relative to the onset of DCI (area under the receiver operating characteristic curve 0.72). Finally, specific trends of ED burden over time, assessed by group-based trajectory analysis, also helped stratify DCI risk. CONCLUSIONS: These results suggest that ED burden is a useful parameter for identifying those at higher risk of developing DCI after SAH. The higher burden rate associated with DCI supports the theory that a metabolic supply-demand mismatch contributes to this complication. SIGNIFICANCE: ED burden is a novel biomarker for predicting those at high risk of DCI.
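Computing ED burden as discharges per hour is straightforward once a discharge detector has produced timestamps. A minimal sketch, assuming detection times in seconds from the start of the cEEG record (the paper's detector and any artifact handling are not reproduced here):

```python
import numpy as np

def hourly_ed_burden(discharge_times_s, record_hours):
    """Epileptiform-discharge burden: discharges per hour, binned hourly.
    discharge_times_s: detection times in seconds from recording start."""
    edges = np.arange(0, record_hours + 1)  # 1-hour bin edges
    counts, _ = np.histogram(np.asarray(discharge_times_s) / 3600.0, bins=edges)
    return counts

# toy example: a 6-hour record with discharges clustered in hours 2-3
times = [7300, 7400, 7500, 9100, 9200, 12000]
print(hourly_ed_burden(times, 6))  # [0 0 5 1 0 0]
```

Trajectories of this hourly series over days 3-10 after rupture are exactly the kind of input the group-based trajectory analysis stratifies.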


Brain Ischemia; Subarachnoid Hemorrhage; Brain Ischemia/diagnosis; Brain Ischemia/epidemiology; Brain Ischemia/etiology; Cerebral Infarction; Humans; Periodicity; Retrospective Studies; Subarachnoid Hemorrhage/complications
6.
Ann Neurol ; 90(2): 300-311, 2021 Aug.
Article En | MEDLINE | ID: mdl-34231244

OBJECTIVE: This study was undertaken to determine the dose-response relation between epileptiform activity burden and outcomes in acutely ill patients. METHODS: We performed a single-center retrospective analysis of 1,967 neurologic, medical, and surgical patients who underwent >16 hours of continuous electroencephalography (EEG) between 2011 and 2017. We developed an artificial intelligence algorithm to annotate 11.02 terabytes of EEG and quantify epileptiform activity burden within 72 hours of recording. We evaluated burden (1) in the first 24 hours of recording, (2) in the 12-hour epoch with the highest burden (peak burden), and (3) cumulatively through the first 72 hours of monitoring. Machine learning was applied to estimate the effect of epileptiform burden on outcome. The outcome measure was the modified Rankin Scale at discharge, dichotomized as good (0-4) versus poor (5-6). RESULTS: Peak epileptiform burden was independently associated with poor outcomes (p < 0.0001). Other independent associations included age, Acute Physiology and Chronic Health Evaluation II score, seizure on presentation, and diagnosis of hypoxic-ischemic encephalopathy. Model calibration error was calculated across 3 strata based on the time interval between the last EEG measurement (up to 72 hours of monitoring) and discharge: (1) <5 days between last measurement and discharge, 0.0941 (95% confidence interval [CI] = 0.0706-0.1191); (2) 5 to 10 days, 0.0946 (95% CI = 0.0631-0.1290); (3) >10 days, 0.0998 (95% CI = 0.0698-0.1335). After adjusting for covariates, an increase in peak epileptiform activity burden from 0 to 100% increased the probability of poor outcome by 35%. INTERPRETATION: Automated measurement of peak epileptiform activity burden affords a convenient, consistent, and quantifiable target for future multicenter randomized trials investigating whether suppressing epileptiform activity improves outcomes.
ANN NEUROL 2021;90:300-311.
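Calibration error, reported per stratum above, summarizes how far predicted probabilities sit from observed outcome rates. One plausible binned definition (the abstract does not spell out the exact formula used) can be written as:

```python
import numpy as np

def calibration_error(probs, outcomes, n_bins=10):
    """Mean absolute gap between predicted probability and observed
    event rate across equal-width probability bins (non-empty bins
    weighted equally). One common definition; others weight bins
    by their sample counts."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    edges = np.linspace(0, 1, n_bins + 1)
    gaps = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            gaps.append(abs(probs[mask].mean() - outcomes[mask].mean()))
    return float(np.mean(gaps))

# perfectly calibrated toy predictions should score near zero
rng = np.random.default_rng(4)
p = rng.uniform(0, 1, 20000)
y = rng.uniform(0, 1, 20000) < p
print(round(calibration_error(p, y), 3))
```

A badly miscalibrated model (e.g., one that predicts 0.9 for events that never occur) scores close to the size of its bias, so values near 0.09-0.10, as in the strata above, indicate predictions that track observed rates fairly closely.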


Artificial Intelligence; Cost of Illness; Seizures/diagnosis; Seizures/physiopathology; Aged; Cohort Studies; Electroencephalography/methods; Female; Humans; Male; Middle Aged; Retrospective Studies; Treatment Outcome
7.
Neurocrit Care ; 32(3): 697-706, 2020 Jun.
Article En | MEDLINE | ID: mdl-32246435

BACKGROUND/OBJECTIVES: Clinical seizures following acute ischemic stroke (AIS) appear to contribute to worse neurologic outcomes. However, the effect of electrographic epileptiform abnormalities (EAs) more broadly is less clear. Here, we evaluate the impact of EAs, including electrographic seizures and periodic and rhythmic patterns, on outcomes in patients with AIS. METHODS: This is a retrospective study of all patients with AIS aged ≥ 18 years who underwent at least 18 h of continuous electroencephalogram (EEG) monitoring at a single center between 2012 and 2017. EAs were classified according to American Clinical Neurophysiology Society (ACNS) nomenclature and included seizures and periodic and rhythmic patterns. EA burden for each 24-h epoch was defined using the following cutoffs: EA presence, maximum daily burden < 10% versus > 10%, maximum daily burden < 50% versus > 50%, and maximum daily burden using categories from ACNS nomenclature ("rare" < 1%; "occasional" 1-9%; "frequent" 10-49%; "abundant" 50-89%; "continuous" > 90%). Maximum EA frequency for each epoch was dichotomized into ≥ 1.5 Hz versus < 1.5 Hz. Poor neurologic outcome was defined as a modified Rankin Scale score of 4-6 (vs. 0-3 as good outcome) at hospital discharge. RESULTS: One hundred and forty-three patients met study inclusion criteria. Sixty-seven patients (46.9%) had EAs. One hundred and twenty-four patients (86.7%) had poor outcome. On univariate analysis, the presence of EAs (OR 3.87 [1.27-11.71], p = 0.024) and maximum daily burden > 10% (OR 12.34 [2.34-210], p = 0.001) and > 50% (OR 8.26 [1.34-122], p = 0.035) were associated with worse outcomes. 
On multivariate analysis, after adjusting for clinical covariates (age, gender, NIHSS, APACHE II, stroke location, stroke treatment, hemorrhagic transformation, Charlson comorbidity index, history of epilepsy), EA presence (OR 5.78 [1.36-24.56], p = 0.017), maximum daily burden > 10% (OR 23.69 [2.43-230.7], p = 0.006), and maximum daily burden > 50% (OR 9.34 [1.01-86.72], p = 0.049) were associated with worse outcomes. After adjusting for covariates, we also found a dose-dependent association between increasing EA burden and increasing probability of poor outcomes (OR 1.89 [1.18-3.03], p = 0.009). We did not find an independent association between EA frequency and outcomes (OR 4.43 [0.98-20.03], p = 0.053). However, the combined effect of increasing EA burden and frequency ≥ 1.5 Hz (EA burden * frequency) was significantly associated with worse outcomes (OR 1.64 [1.03-2.63], p = 0.039). CONCLUSIONS: Electrographic seizures and periodic and rhythmic patterns in patients with AIS are associated with worse outcomes in a dose-dependent manner. Future studies are needed to assess whether treatment of this EEG activity can improve outcomes.
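The ACNS-style burden categories quoted in the abstract map cleanly to a lookup function. A small sketch using the cutoffs as listed ("rare" < 1%, "occasional" 1-9%, "frequent" 10-49%, "abundant" 50-89%, "continuous" > 90%):

```python
def acns_burden_category(burden_pct):
    """Map a daily EA burden percentage to the ACNS-style categories
    used in the study (cutoffs as listed in the abstract)."""
    if burden_pct < 1:
        return "rare"
    if burden_pct < 10:
        return "occasional"
    if burden_pct < 50:
        return "frequent"
    if burden_pct < 90:
        return "abundant"
    return "continuous"

print([acns_burden_category(b) for b in (0.5, 5, 25, 60, 95)])
# ['rare', 'occasional', 'frequent', 'abundant', 'continuous']
```

Dichotomizations like "> 10%" and "> 50%" in the analysis are simply coarser groupings of the same percentage scale.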


Brain/physiopathology; Ischemic Stroke/physiopathology; Seizures/physiopathology; Aged; Electroencephalography; Female; Functional Status; Humans; Ischemic Stroke/therapy; Male; Middle Aged; Prognosis; Retrospective Studies; Thrombectomy; Thrombolytic Therapy
8.
Neurocrit Care ; 33(2): 565-574, 2020 Oct.
Article En | MEDLINE | ID: mdl-32096120

BACKGROUND: Burst suppression in mechanically ventilated intensive care unit (ICU) patients is associated with increased mortality. However, the relative contributions of propofol use and critical illness itself to burst suppression; of burst suppression, propofol, and critical illness to mortality; and whether preventing burst suppression might reduce mortality, have not been quantified. METHODS: The dataset contains 471 adults from seven ICUs, after excluding anoxic encephalopathy due to cardiac arrest or intentional burst suppression for therapeutic reasons. We used multiple prediction and causal inference methods to estimate the effects connecting burst suppression, propofol, critical illness, and in-hospital mortality in an observational retrospective study. We also estimated the effects mediated by burst suppression. Sensitivity analysis was used to assess for unmeasured confounding. RESULTS: A "counterfactual" randomized controlled trial (cRCT) assigning patients to mild versus severe illness is expected to show a difference in burst suppression burden of 39%, 95% CI [8-66]%, and in mortality of 35% [29-41]%. Assigning patients to maximal (100%) burst suppression burden is expected to increase mortality by 12% [7-17]% compared to 0% burden. Burst suppression mediates 10% [2-21]% of the effect of critical illness on mortality. A high cumulative propofol dose (1316 mg/kg) is expected to increase burst suppression burden by 6% [0.8-12]% compared to a low dose (284 mg/kg). Propofol exposure has no significant direct effect on mortality; its effect is entirely mediated through burst suppression. CONCLUSIONS: Our analysis clarifies how important factors contribute to mortality in ICU patients. Burst suppression appears to contribute to mortality but is primarily an effect of critical illness rather than of iatrogenic propofol use.
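The notion of a mediated effect (critical illness acting on mortality partly through burst suppression) can be illustrated with a toy difference-in-coefficients calculation on simulated data. The structural coefficients below are invented for illustration, and this linear-probability sketch is far simpler than the causal inference methods the paper actually uses.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 50_000
# toy structural model: illness raises burst-suppression burden,
# and both illness and burden raise mortality (made-up coefficients)
illness = rng.integers(0, 2, n).astype(float)
bs_burden = np.clip(0.39 * illness + rng.normal(0, 0.1, n), 0, 1)
mortality = (rng.uniform(0, 1, n) <
             0.05 + 0.25 * illness + 0.12 * bs_burden).astype(float)

def ols_coef(y, X):
    """Least-squares coefficients with an intercept column prepended."""
    A = np.column_stack([np.ones(len(y))] + list(X))
    return np.linalg.lstsq(A, y, rcond=None)[0]

total = ols_coef(mortality, [illness])[1]              # illness -> mortality
direct = ols_coef(mortality, [illness, bs_burden])[1]  # adjusting for burden
prop_mediated = (total - direct) / total
print(round(prop_mediated, 2))
```

The proportion mediated recovered here is in the same low range as the paper's 10% [2-21]% estimate: most of the illness-mortality effect bypasses burst suppression, which is the abstract's central point.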


Critical Illness; Propofol; Adult; Critical Care; Humans; Intensive Care Units; Propofol/adverse effects; Respiration, Artificial; Retrospective Studies
9.
Infect Control Hosp Epidemiol ; 39(7): 826-833, 2018 Jul.
Article En | MEDLINE | ID: mdl-29769151

OBJECTIVE: To validate a system to detect ventilator-associated events (VAEs) autonomously and in real time. DESIGN: Retrospective review of ventilated patients using a secure informatics platform to identify VAEs (ie, automated surveillance) compared to surveillance by infection control (IC) staff (ie, manual surveillance), including development and validation cohorts. SETTING: The Massachusetts General Hospital, a tertiary-care academic health center, during January-March 2015 (development cohort) and January-March 2016 (validation cohort). PATIENTS: Ventilated patients in 4 intensive care units. METHODS: The automated process included (1) analysis of physiologic data to detect increases in positive end-expiratory pressure (PEEP) and fraction of inspired oxygen (FiO2); (2) querying the electronic health record (EHR) for leukopenia or leukocytosis and antibiotic initiation data; and (3) retrieval and interpretation of microbiology reports. The cohorts were evaluated as follows: (1) manual surveillance by IC staff with independent chart review; (2) automated surveillance detection of ventilator-associated condition (VAC), infection-related ventilator-associated complication (IVAC), and possible VAP (PVAP); (3) senior IC staff adjudicated manual surveillance-automated surveillance discordance. Outcomes included sensitivity, specificity, positive predictive value (PPV), and manual surveillance detection errors. Errors detected during the development cohort resulted in algorithm updates applied to the validation cohort. RESULTS: In the development cohort, there were 1,325 admissions, 479 ventilated patients, 2,539 ventilator days, and 47 VAEs. In the validation cohort, there were 1,234 admissions, 431 ventilated patients, 2,604 ventilator days, and 56 VAEs. With manual surveillance, in the development cohort, sensitivity was 40%, specificity was 98%, and PPV was 70%. In the validation cohort, sensitivity was 71%, specificity was 98%, and PPV was 87%. With automated surveillance, in the development cohort, sensitivity was 100%, specificity was 100%, and PPV was 100%. In the validation cohort, sensitivity was 85%, specificity was 99%, and PPV was 100%. Manual surveillance detection errors included missed detections, misclassifications, and false detections. CONCLUSIONS: Manual surveillance is vulnerable to human error. Automated surveillance is more accurate and more efficient for VAE surveillance. Infect Control Hosp Epidemiol 2018;826-833.
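The physiologic-data step (detecting a sustained PEEP or FiO2 increase after a period of stability) can be sketched as a simplified ventilator-associated condition (VAC) detector. The thresholds follow the general CDC/NHSN VAE pattern (>= 3 cmH2O PEEP or >= 0.20 FiO2 rise sustained >= 2 days after >= 2 stable days), but this is an illustrative simplification, not the hospital's validated algorithm.

```python
def detect_vac(daily_min_peep, daily_min_fio2):
    """Flag a VAC: after >= 2 days of stable or decreasing daily minimum
    PEEP and FiO2, a rise of >= 3 cmH2O PEEP or >= 0.20 FiO2 relative to
    the last baseline day, sustained for >= 2 days. Returns the 0-based
    day index of onset, or None. (Simplified sketch; the NHSN definition
    has additional details such as minimum ventilation duration.)"""
    n = len(daily_min_peep)
    for d in range(2, n - 1):
        stable = all(daily_min_peep[i] <= daily_min_peep[d - 2] and
                     daily_min_fio2[i] <= daily_min_fio2[d - 2]
                     for i in (d - 2, d - 1))
        peep_rise = all(daily_min_peep[i] - daily_min_peep[d - 1] >= 3
                        for i in (d, d + 1))
        fio2_rise = all(daily_min_fio2[i] - daily_min_fio2[d - 1] >= 0.20
                        for i in (d, d + 1))
        if stable and (peep_rise or fio2_rise):
            return d
    return None

# stable on PEEP 5 for two days, then a sustained jump to 10 on day 2
peep = [5, 5, 10, 10, 10]
fio2 = [0.40, 0.40, 0.40, 0.40, 0.40]
print(detect_vac(peep, fio2))  # 2
```

Layering the EHR white-count/antibiotic query (for IVAC) and microbiology retrieval (for PVAP) on top of a deterministic rule like this is what lets the automated pipeline run continuously without chart review.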


Bias; Cross Infection/epidemiology; Sentinel Surveillance; Ventilator-Induced Lung Injury/epidemiology; Ventilators, Mechanical/adverse effects; Academic Medical Centers; Aged; Aged, 80 and over; Algorithms; Cohort Studies; Electronic Health Records; Female; Humans; Infection Control Practitioners; Intensive Care Units; Male; Massachusetts/epidemiology; Middle Aged; Retrospective Studies; Software