1.
Pediatr Res ; 2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38365874

ABSTRACT

BACKGROUND: Mortality and intraventricular hemorrhage (IVH) are common adverse outcomes in preterm infants and are challenging to predict clinically. Sample entropy (SE), a measure of heart rate variability (HRV), has shown predictive power for sepsis and other morbidities in neonates. We evaluated associations between SE and mortality and IVH in the first week of life. METHODS: Participants were 389 infants born before 32 weeks of gestation for whom bedside monitor data were available. A total of 29 infants had IVH grade 3 or 4 and 31 infants died within 2 weeks of life. SE was calculated with the PhysioNet open-source benchmark. Logistic regressions assessed associations between SE and IVH and/or mortality with and without common clinical covariates over various hour-of-life (HOL) censor points. RESULTS: Lower SE was associated with mortality by 4 HOL, but higher SE was very strongly associated with IVH and mortality at 24-96 HOL. Bootstrap testing confirmed SE significantly improved prediction using clinical variables at 96 HOL. CONCLUSION: SE is a significant predictor of IVH and mortality in premature infants. Given that IVH typically occurs in the first 24-72 HOL, affected infants may initially have low SE followed by a sustained period of high SE. IMPACT: SE correlates with IVH and mortality in preterm infants early in life. SE combined with clinical factors yielded ROC AUCs well above 0.8 and significantly outperformed the clinical model at 96 h of life. Previous studies had not shown predictive power over clinical models. This is the first study using the PhysioNet Cardiovascular Toolbox benchmark in young infants. Relative to the generally accepted timing of IVH in premature infants, we saw lower SE before or around the time of hemorrhage and a sustained period of higher SE after. Higher SE after acute events has not been reported previously.
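The abstract does not spell out the sample entropy computation; a minimal sketch of the standard SampEn(m, r) definition follows (an illustration only, not the PhysioNet benchmark implementation used in the study):

```python
import math

def sample_entropy(x, m=2, r_frac=0.2):
    """Sample entropy SampEn(m, r) of a 1-D series.

    Counts template pairs of length m (B) and length m + 1 (A) whose
    Chebyshev distance is within tolerance r, then returns -ln(A / B).
    Lower values indicate a more regular (less complex) signal.
    """
    n = len(x)
    mean = sum(x) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in x) / n)
    r = r_frac * sd  # tolerance as a fraction of the standard deviation

    def count_pairs(length):
        # Same number of templates (n - m) for both lengths, per the
        # standard SampEn definition; self-matches are excluded.
        count = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b = count_pairs(m)
    a = count_pairs(m + 1)
    if a == 0 or b == 0:
        return float("inf")  # undefined when no template pairs match
    return -math.log(a / b)
```

A perfectly periodic RR series scores 0, while an irregular series scores higher, which is the sense in which "lower SE" in the abstract means a more ordered heart rhythm.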

2.
Sensors (Basel) ; 23(7)2023 Apr 02.
Article in English | MEDLINE | ID: mdl-37050750

ABSTRACT

The continuous monitoring of arterial blood pressure (BP) is vital for assessing and treating cardiovascular instability in a sick infant. Currently, invasive catheters are inserted into an artery to monitor critically ill infants. Catheterization requires skill, is time-consuming, is prone to complications, and is often painful. Herein, we report on the feasibility and accuracy of a non-invasive, wearable device that is easy to place and operate and continuously monitors BP without the need for external calibration. The device uses capacitive sensors to acquire pulse waveform measurements from the wrist and/or foot of preterm and term infants. Systolic, diastolic, and mean arterial pressures are inferred from the recorded pulse waveform data using algorithms trained with artificial neural network (ANN) techniques. The sensor-derived, continuous, non-invasive BP data were compared with corresponding invasive arterial line (IAL) data from 81 infants with a wide variety of pathologies to conclude that the inferred BP values meet FDA-level accuracy requirements for these critically ill, yet normotensive, term and preterm infants.


Subject(s)
Blood Pressure Determination , Infant, Premature , Infant , Humans , Infant, Newborn , Blood Pressure/physiology , Blood Pressure Determination/methods , Arterial Pressure , Wrist
3.
Am Heart J ; 200: 1-10, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29898835

ABSTRACT

BACKGROUND: Automated measurements of electrocardiographic (ECG) intervals by current-generation digital electrocardiographs are critical to computer-based ECG diagnostic statements, to serial comparison of ECGs, and to epidemiological studies of ECG findings in populations. A previous study demonstrated generally small but often significant systematic differences among 4 algorithms widely used for automated ECG in the United States and that measurement differences could be related to the degree of abnormality of the underlying tracing. Since that publication, some algorithms have been adjusted, whereas other large manufacturers of automated ECGs have asked to participate in an extension of this comparison. METHODS: Seven widely used automated algorithms for computer-based interpretation participated in this blinded study of 800 digitized ECGs provided by the Cardiac Safety Research Consortium. All tracings were different from the study of 4 algorithms reported in 2014, and the selected population was heavily weighted toward groups with known effects on the QT interval: included were 200 normal subjects, 200 normal subjects receiving moxifloxacin as part of an active control arm of thorough QT studies, 200 subjects with genetically proved long QT syndrome type 1 (LQT1), and 200 subjects with genetically proved long QT syndrome type 2 (LQT2). RESULTS: For the entire population of 800 subjects, pairwise differences between algorithms for each mean interval value were clinically small, even where statistically significant, ranging from 0.2 to 3.6 milliseconds for the PR interval, 0.1 to 8.1 milliseconds for QRS duration, and 0.1 to 9.3 milliseconds for the QT interval. The mean value of all paired differences among algorithms was higher in the long QT groups than in normals for both QRS duration and QT intervals. Differences in mean QRS duration ranged from 0.2 to 13.3 milliseconds in the LQT1 subjects and from 0.2 to 11.0 milliseconds in the LQT2 subjects.
Differences in measured QT duration (not corrected for heart rate) ranged from 0.2 to 10.5 milliseconds in the LQT1 subjects and from 0.9 to 12.8 milliseconds in the LQT2 subjects. CONCLUSIONS: Among current-generation computer-based electrocardiographs, clinically small but statistically significant differences exist between ECG interval measurements by individual algorithms. Measurement differences between algorithms for QRS duration and for QT interval are larger in long QT interval subjects than in normal subjects. Investigators comparing population study norms should be aware of small systematic differences in interval measurements due to different algorithm methodologies, within-individual interval measurement comparisons should use comparable methods, and further attempts to harmonize interval measurement methodologies are warranted.
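The "pairwise differences between algorithms" statistic described above can be sketched as follows, using hypothetical per-ECG interval values; the difference of two algorithms' means equals the mean of their per-ECG paired differences:

```python
from itertools import combinations

def pairwise_mean_differences(measurements):
    """Mean paired difference (in ms) between every pair of algorithms.

    `measurements` maps an algorithm name to its list of interval
    values, aligned so that index i of every list refers to the same
    ECG tracing.
    """
    out = {}
    for a, b in combinations(sorted(measurements), 2):
        diffs = [x - y for x, y in zip(measurements[a], measurements[b])]
        out[(a, b)] = sum(diffs) / len(diffs)
    return out
```

Running this separately within each subgroup (normal, moxifloxacin, LQT1, LQT2) would reproduce the kind of per-group comparison reported in the RESULTS section.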


Subject(s)
Algorithms , Electrocardiography , Long QT Syndrome/diagnosis , Romano-Ward Syndrome/diagnosis , Adult , Dimensional Measurement Accuracy , Electrocardiography/methods , Electrocardiography/standards , Female , Heart Conduction System/diagnostic imaging , Humans , Male , Outcome Assessment, Health Care , Random Allocation , Signal Processing, Computer-Assisted
4.
J Electrocardiol ; 50(6): 841-846, 2017.
Article in English | MEDLINE | ID: mdl-28918214

ABSTRACT

BACKGROUND: The feasibility of using photoplethysmography (PPG) for estimating heart rate variability (HRV) has been the subject of many recent studies with contradictory results. Accurate measurement of cardiac cycles is more challenging in PPG than in ECG due to its inherent characteristics. METHODS: We developed a PPG-only algorithm by computing a robust set of medians of the interbeat intervals between adjacent peaks, upslopes, and troughs. Abnormal intervals are detected and excluded by applying our criteria. RESULTS: We tested our algorithm on a large database from high-risk ICU patients containing arrhythmias and significant amounts of artifact. The average difference between PPG-based and ECG-based parameters is <1% for pNN50, <1 bpm for meanHR, <1 ms for SDNN, <3 ms for meanNN, and <4 ms for SDSD and RMSSD. CONCLUSIONS: Our performance testing shows that the pulse rate variability (PRV) parameters are comparable to the HRV parameters from simultaneous ECG recordings.
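The time-domain parameters compared in this study (meanNN, SDNN, RMSSD, SDSD, pNN50, meanHR) have standard definitions; a sketch of those definitions (not the paper's PPG peak-detection algorithm itself) is:

```python
import math

def hrv_time_domain(nn_ms):
    """Standard time-domain HRV statistics from NN intervals in ms.

    Returns the parameters compared in the study: meanNN, SDNN,
    RMSSD, SDSD, pNN50 (%), and meanHR (bpm).
    """
    n = len(nn_ms)
    mean_nn = sum(nn_ms) / n
    sdnn = math.sqrt(sum((v - mean_nn) ** 2 for v in nn_ms) / (n - 1))
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    mean_d = sum(diffs) / len(diffs)
    sdsd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (len(diffs) - 1))
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    mean_hr = 60000.0 / mean_nn  # beats per minute
    return {"meanNN": mean_nn, "SDNN": sdnn, "RMSSD": rmssd,
            "SDSD": sdsd, "pNN50": pnn50, "meanHR": mean_hr}
```

Feeding PPG-derived pulse intervals and ECG-derived NN intervals through the same statistics is how the "<1 ms" style agreement figures above are obtained.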


Subject(s)
Algorithms , Arrhythmias, Cardiac/physiopathology , Heart Rate Determination/methods , Heart Rate/physiology , Photoplethysmography/methods , Artifacts , Electrocardiography/methods , Humans , Intensive Care Units , Sensitivity and Specificity , Signal Processing, Computer-Assisted
5.
Am Heart J ; 167(2): 150-159.e1, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24439975

ABSTRACT

BACKGROUND AND PURPOSE: Automated measurements of electrocardiographic (ECG) intervals are widely used by clinicians for individual patient diagnosis and by investigators in population studies. We examined whether clinically significant systematic differences exist in ECG intervals measured by current generation digital electrocardiographs from different manufacturers and whether differences, if present, are dependent on the degree of abnormality of the selected ECGs. METHODS: Measurements of RR interval, PR interval, QRS duration, and QT interval were made blindly by 4 major manufacturers of digital electrocardiographs used in the United States from 600 XML files of ECG tracings stored in the US FDA ECG warehouse and released for the purpose of this study by the Cardiac Safety Research Consortium. Included were 3 groups based on expected QT interval and degree of repolarization abnormality, comprising 200 ECGs each from (1) placebo or baseline study period in normal subjects during thorough QT studies, (2) peak moxifloxacin effect in otherwise normal subjects during thorough QT studies, and (3) patients with genotyped variants of congenital long QT syndrome (LQTS). RESULTS: Differences of means between manufacturers were generally small in the normal and moxifloxacin subjects, but in the LQTS patients, differences of means ranged from 2.0 to 14.0 ms for QRS duration and from 0.8 to 18.1 ms for the QT interval. Mean absolute differences between algorithms were similar for QRS duration and QT intervals in the normal and in the moxifloxacin subjects (mean ≤6 ms) but were significantly larger in patients with LQTS. CONCLUSIONS: Small but statistically significant group differences in mean interval and duration measurements and means of individual absolute differences exist among automated algorithms of widely used, current generation digital electrocardiographs. 
Measurement differences, including QRS duration and the QT interval, are greatest for the most abnormal ECGs.


Subject(s)
Algorithms , Electrocardiography/instrumentation , Heart Conduction System/physiology , Heart Rate/physiology , Signal Processing, Computer-Assisted , Adult , Equipment Design , Female , Humans , Male , Middle Aged , Reproducibility of Results
6.
J Electrocardiol ; 47(6): 798-803, 2014.
Article in English | MEDLINE | ID: mdl-25172189

ABSTRACT

Defibrillation is often required to terminate a ventricular fibrillation or fast ventricular tachycardia rhythm and resume a perfusing rhythm in sudden cardiac arrest patients. Automated external defibrillators rely on automatic ECG analysis algorithms to detect the presence of shockable rhythms before advising the rescuer to deliver a shock. For a reliable rhythm analysis, chest compressions must be interrupted to prevent corruption of the ECG waveform by the artifact induced by the mechanical activity of compressions. However, these hands-off intervals adversely affect the success of treatment. To minimize the hands-off intervals and increase the chance of successful resuscitation, we developed a method that requires interrupting the compressions only if the underlying ECG rhythm cannot be accurately determined during chest compressions. With this method, only a small percentage of cases need compression interruption, so a significant reduction in hands-off time is achieved. Our algorithm comprises a novel filtering technique for the ECG and thoracic impedance waveforms, and an innovative method to combine analyses from both filtered and unfiltered data. Requiring compression interruption in only 14% of cases, our algorithm achieved a sensitivity of 92% and a specificity of 99%.
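The decision flow described above (analyze during compressions; pause only when the analysis is inconclusive) can be sketched as a simple three-way rule. The probability interface and thresholds here are hypothetical, not the published algorithm:

```python
def rhythm_advice(shock_probability, lo=0.1, hi=0.9):
    """Three-way advisory decision sketched from the description.

    When the analysis of the filtered, compression-corrupted ECG is
    confident either way, advise directly; only in the uncertain
    middle band is a hands-off pause requested for clean reanalysis.
    Thresholds lo/hi are illustrative assumptions.
    """
    if shock_probability >= hi:
        return "advise shock"
    if shock_probability <= lo:
        return "no shock advised"
    return "interrupt compressions and reanalyze"
```

Under this scheme the hands-off branch is taken only for the ambiguous minority of cases, which is the mechanism behind the reported 14% interruption rate.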


Subject(s)
Artifacts , Death, Sudden, Cardiac/prevention & control , Electric Countershock/methods , Heart Massage/methods , Therapy, Computer-Assisted/methods , Ventricular Fibrillation/prevention & control , Algorithms , Clinical Alarms , Combined Modality Therapy/methods , Defibrillators , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Humans , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Treatment Outcome , Ventricular Fibrillation/diagnosis
7.
J Electrocardiol ; 47(6): 819-25, 2014.
Article in English | MEDLINE | ID: mdl-25194875

ABSTRACT

BACKGROUND: Respiration rate (RR) is a critical vital sign that can be monitored to detect acute changes in patient condition (e.g., apnea) and potentially provide an early warning of impending life-threatening deterioration. Monitoring respiration signals is also critical for detecting sleep-disordered breathing such as sleep apnea. Additionally, analyzing a respiration signal can enhance the quality of medical images by gating image acquisition to the same phase of the patient's respiratory cycle. Although many methods exist for measuring respiration, in this review we focus on three ECG-derived respiration techniques we developed to obtain respiration from an ECG signal. METHODS: The first step in all three techniques is to analyze the ECG to detect beat locations and classify them. 1) The EDR method is based on analyzing the heart-axis shift due to respiration. In our method, one respiration waveform value is calculated for each normal QRS complex by measuring the peak-to-QRS-trough amplitude. Compared to other similar EDR techniques, this method does not require removal of baseline wander from the ECG signal. 2) The RSA method uses instantaneous heart rate variability to derive a respiratory signal. It is based on the observed respiratory sinus arrhythmia governed by baroreflex sensitivity. 3) Our EMGDR method for computing a respiratory waveform uses measurement of electromyogram (EMG) activity created by the respiratory effort of the intercostal muscles and diaphragm. The ECG signal is high-pass filtered and processed to reduce ECG components and accentuate the EMG signal before applying RMS and smoothing. RESULTS: Over the last five years, we have performed six studies using the above methods: 1) In 1907 sleep lab patients with >1.5M 30-second epochs, EDR achieved an apnea detection accuracy of 79%. 2) In 24 adult polysomnograms, use of EDR and chest belts for RR computation was compared to airflow RR; mean RR error was EDR: 1.8±2.7 and belts: 0.8±2.1.
3) During cardiac MRI, a comparison of EMGDR breath locations to the reference abdominal belt signal yielded sensitivity/PPV of 94/95%. 4) Another comparison study for breath detection during MRI yielded sensitivity/PPV pairs of EDR: 99/97, RSA: 79/78, and EMGDR: 89/86%. 5) We tested EMGDR performance in the presence of simulated respiratory disease using CPAP to produce PEEP. For 10 patients, no false breath waveforms were generated with mild PEEP, but they appeared in 2 subjects at high PEEP. 6) A patient monitoring study compared RR computation from EDR to impedance-derived RR, and showed that EDR provides a near equivalent RR measurement with reduced hardware circuitry requirements.
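The per-beat constructions behind the EDR and RSA methods can be sketched directly from the description. Beat detection itself is assumed done; the inputs (sample indices of R peaks and QRS troughs) are hypothetical interfaces:

```python
def edr_samples(ecg, r_peaks, qrs_troughs):
    """EDR: one respiration-waveform value per normal beat, the
    R-peak-to-QRS-trough amplitude. Because each value is a
    within-beat difference, slow baseline wander largely cancels,
    matching the claim that no baseline removal is needed."""
    return [ecg[r] - ecg[t] for r, t in zip(r_peaks, qrs_troughs)]

def rsa_samples(r_peaks, fs):
    """RSA: instantaneous heart rate (bpm) from successive R-peak
    sample indices; the respiratory oscillation in this series is
    the derived respiration signal."""
    return [60.0 * fs / (b - a) for a, b in zip(r_peaks, r_peaks[1:])]
```

Both series are sampled once per beat, so in practice they would be interpolated to a uniform rate before breath detection or spectral analysis.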


Subject(s)
Algorithms , Apnea/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Electromyography/methods , Respiratory Rate , Humans , Pattern Recognition, Automated/methods , Polysomnography/methods , Reproducibility of Results , Sensitivity and Specificity
8.
J Electrocardiol ; 46(6): 473-9, 2013.
Article in English | MEDLINE | ID: mdl-23871657

ABSTRACT

Although the importance of quality cardiopulmonary resuscitation (CPR) and its link to survival is still emphasized, there has been recent debate about the balance between CPR and defibrillation, particularly for long response times. Defibrillation shocks for ventricular fibrillation (VF) of recently perfused hearts have high success for the return of spontaneous circulation (ROSC), but hearts with depleted adenosine triphosphate (ATP) stores have low recovery rates. Since quality CPR has been shown both to slow the degradation process and to restore cardiac viability, a measurement of patient condition to optimize the timing of defibrillation shocks may improve outcomes compared to time-based protocols. Researchers have proposed numerous predictive features of VF and shockable ventricular tachycardia (VT) which can be computed from the electrocardiogram (ECG) signal to distinguish between the rhythms which convert to spontaneous circulation and those which do not. We examined the shock-success prediction performance of thirteen of these features on a single evaluation database comprising the recordings of 116 out-of-hospital cardiac arrest patients, which were collected for a separate study using defibrillators in ambulances and medical centers in 4 European regions and the US between March 2002 and September 2004. A total of 469 shocks preceded by VF or shockable VT rhythm episodes were identified in the recordings. Based on the experts' annotation of the post-shock rhythm, the shocks were categorized as resulting in either a pulsatile (ROSC) or non-pulsatile (no-ROSC) rhythm. The features were calculated on a 4-second ECG segment prior to shock delivery. The features examined were: Mean Amplitude, Average Peak-Peak Amplitude, Amplitude Range, Amplitude Spectrum Analysis (AMSA), Peak Frequency, Centroid Frequency, Spectral Flatness Measure (SFM), Energy, Max Power, Centroid Power, Power Spectrum Analysis (PSA), Mean Slope, and Median Slope.
Statistical hypothesis tests (two-tailed t-test and Wilcoxon with 5% significance level) were applied to determine if the means and medians of these features were significantly different between the ROSC and no-ROSC groups. The ROC curve was computed for each feature, and Area Under the Curve (AUC) was calculated. Specificity (Sp) with Sensitivity (Se) held at 90% as well as Se with Sp held at 90% was also computed. All features showed statistically different mean and median values between the ROSC and no-ROSC groups with all p-values less than 0.0001. The AUC was >76% for all features. For Sp = 90%, the Se range was 33-45%; for Se = 90%, the Sp range was 49-63%. The features showed good shock-success prediction performance. We believe that a defibrillator employing a clinical decision tool based on these features has the potential to improve overall survival from cardiac arrest.
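A few of the listed pre-shock features can be sketched on a short ECG segment. The 2-48 Hz AMSA band here is an assumption (band limits vary between publications), and a real implementation would use an FFT rather than this naive DFT:

```python
import math

def vf_shock_features(seg, fs):
    """Mean Amplitude, Amplitude Range, Mean Slope, and AMSA
    (spectral magnitude weighted by frequency) for one ECG segment
    sampled at fs Hz. Illustrative definitions only."""
    n = len(seg)
    mean_amplitude = sum(abs(v) for v in seg) / n
    amplitude_range = max(seg) - min(seg)
    # Mean absolute first difference, scaled to amplitude units per second
    mean_slope = sum(abs(b - a) for a, b in zip(seg, seg[1:])) * fs / (n - 1)
    amsa = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if 2.0 <= f <= 48.0:
            re = sum(seg[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(seg[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            amsa += math.hypot(re, im) * f
    return {"mean_amplitude": mean_amplitude, "amplitude_range": amplitude_range,
            "mean_slope": mean_slope, "AMSA": amsa}
```

Each feature is then thresholded (or fed to a classifier) to predict ROSC versus no-ROSC, which is what the AUC/sensitivity/specificity figures above quantify.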


Subject(s)
Defibrillators, Implantable/statistics & numerical data , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Heart Arrest/mortality , Heart Arrest/prevention & control , Outcome Assessment, Health Care/methods , Survival Analysis , Algorithms , Diagnosis, Computer-Assisted/statistics & numerical data , Electrocardiography/statistics & numerical data , Heart Arrest/diagnosis , Humans , Outcome Assessment, Health Care/statistics & numerical data , Prevalence , Prognosis , Reproducibility of Results , Risk Factors , Sensitivity and Specificity , Treatment Outcome
9.
J Electrocardiol ; 46(6): 528-34, 2013.
Article in English | MEDLINE | ID: mdl-23948522

ABSTRACT

BACKGROUND: ECG detection of ST-segment elevation myocardial infarction (STEMI) in the presence of left bundle-branch block (LBBB) is challenging due to the ST deviation caused by the altered conduction. The purpose of this study was to introduce a new algorithm for STEMI detection in LBBB and compare its performance to three existing algorithms. METHODS: Data for the study group (143 with acute MI and 239 controls) come from multiple sources. ECGs were selected by computer interpretation of LBBB. The acute MI reference was hospital discharge diagnosis. Automated measurements came from the Philips DXL algorithm. Three existing algorithms were compared: (1) Sgarbossa criteria, (2) Selvester 10% RS criteria, and (3) Smith 25% S-wave criteria. The new algorithm uses an ST threshold based on QRS area. All algorithms share the concordant ST elevation and anterior ST depression criteria from the Sgarbossa score. The difference is in the threshold for discordant ST elevation. The Sgarbossa, Selvester, Smith, and Philips discordant ST elevation criteria are (1) ST elevation ≥ 500 µV, (2) ST elevation ≥ 10% of |S|-|R| plus STEMI limits, (3) ST elevation ≥ 25% of the S-wave amplitude, and (4) ST elevation ≥ 100 µV + 1050 µV/Ash * QRS area. The Smith S-wave and Philips QRS area criteria were tested using both single-lead and 2-lead requirements. Algorithm performance was measured by sensitivity, specificity, and positive likelihood ratio (LR+). RESULTS: Algorithm performance can be organized into bands of similar sensitivity and specificity, ranging from Sgarbossa score ≥ 3 with the lowest sensitivity and highest specificity, 13.3% and 97.9%, to the Selvester 10% rule with the highest sensitivity and lower specificity of 30.1% and 93.2%. The Smith S-wave and Philips QRS area algorithms were in the middle band with sensitivity and specificity of (20.3%, 94.9%) and (23.8%, 95.8%), respectively.
CONCLUSION: As can be seen from the difference between Sgarbossa score ≥ 3 and the other algorithms for STEMI in LBBB, a discordant ST elevation criterion improves the sensitivity of detection but also results in a drop in specificity. For applications of automated STEMI detection that require higher sensitivity, the Selvester algorithm is better. For applications that require a low false positive rate, such as relying on the algorithm for pre-hospital activation of the cardiac catheterization laboratory for urgent PCI, it may be better to use the 2-lead Philips QRS area or Smith 25% S-wave algorithm.
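The "Sgarbossa score ≥ 3" rule referenced above uses the classic weighted criteria (5, 3, and 2 points), which can be sketched as follows; the microvolt inputs are assumed to be the maximum relevant deviation per criterion, and lead selection is omitted:

```python
def sgarbossa_score(concordant_ste_uv, anterior_std_uv, discordant_ste_uv):
    """Classic Sgarbossa score for suspected MI in LBBB.

    Concordant ST elevation >= 100 uV (1 mm) scores 5 points,
    ST depression >= 100 uV in V1-V3 scores 3 points, and
    discordant ST elevation >= 500 uV (5 mm) scores 2 points.
    A total of >= 3 is the positivity cutoff used in the study.
    """
    score = 0
    if concordant_ste_uv >= 100:
        score += 5
    if anterior_std_uv >= 100:
        score += 3
    if discordant_ste_uv >= 500:
        score += 2
    return score
```

Note that the discordant-elevation criterion alone (2 points) cannot reach the ≥ 3 cutoff, which is why replacing the fixed 500 µV discordant threshold with a proportional one (Selvester, Smith, Philips) raises sensitivity at a specificity cost.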


Subject(s)
Algorithms , Bundle-Branch Block/complications , Bundle-Branch Block/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Myocardial Infarction/complications , Myocardial Infarction/diagnosis , Diagnosis, Differential , Female , Humans , Male , Reproducibility of Results , Sensitivity and Specificity
10.
Crit Care Med ; 40(2): 394-9, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22001585

ABSTRACT

OBJECTIVE: To test the potential value of more frequent QT interval measurement in hospitalized patients. DESIGN: We performed a prospective, observational study. SETTING: All adult intensive care unit and progressive care unit beds of a university medical center. PATIENTS: All patients admitted to one of six critical care units over a 2-month period were included in analyses. INTERVENTIONS: All critical care beds (n = 154) were upgraded to a continuous QT monitoring system (Philips Healthcare). MEASUREMENTS AND MAIN RESULTS: QT data were extracted from the bedside monitors for offline analysis. A corrected QT interval >500 msecs was considered prolonged. Episodes of QT prolongation were manually over-read. Electrocardiogram data (67,648 hrs, mean 65 hrs/patient) were obtained. QT prolongation was present in 24%. There were 16 cardiac arrests, with one resulting from Torsade de Pointes (6%). Predictors of QT prolongation were female sex, QT-prolonging drugs, hypokalemia, hypocalcemia, hyperglycemia, high creatinine, history of stroke, and hypothyroidism. Patients with QT prolongation had longer hospitalization (276 hrs vs. 132 hrs, p < .0005) and had three times the odds for all-cause in-hospital mortality compared to patients without QT prolongation (odds ratio 2.99, 95% confidence interval 1.1-8.1). CONCLUSIONS: We find QT prolongation to be common (24%), with Torsade de Pointes representing 6% of in-hospital cardiac arrests. Predictors of QT prolongation in the acutely ill population are similar to those previously identified in ambulatory populations. Acutely ill patients with QT prolongation have longer lengths of hospitalization and nearly three times the odds for mortality than those without QT prolongation.


Subject(s)
Intensive Care Units , Long QT Syndrome/diagnosis , Long QT Syndrome/epidemiology , Monitoring, Physiologic/methods , Torsades de Pointes/diagnosis , Torsades de Pointes/epidemiology , Academic Medical Centers , Adult , Cause of Death , Cohort Studies , Confidence Intervals , Critical Care/methods , Critical Illness/mortality , Critical Illness/therapy , Electrocardiography/methods , Female , Heart Arrest/mortality , Hospital Mortality , Humans , Long QT Syndrome/therapy , Male , Odds Ratio , Point-of-Care Systems , Predictive Value of Tests , Prevalence , Prognosis , Prospective Studies , Risk Assessment , Severity of Illness Index , Survival Rate , Torsades de Pointes/therapy
11.
J Electrocardiol ; 45(6): 561-5, 2012.
Article in English | MEDLINE | ID: mdl-22995382

ABSTRACT

BACKGROUND: Interpretation of a patient's 12-lead ECG frequently involves comparison to a previously recorded ECG. Automated serial ECG comparison can be helpful not only to note significant ECG changes but also to improve the single-ECG interpretation. Corrections from the previous ECG are carried forward by the serial comparison algorithm when measurements do not change significantly. METHODS: A sample of patients from three hospitals was collected with two or more 12-lead ECGs from each patient. There were 233 serial comparisons from 143 patients; 41% of patients had two ECGs and 59% had more than two. ECGs were taken from a difficult population as measured by ECG abnormalities: 197/233 abnormal, 11/233 borderline, 14/233 otherwise normal, and 11/233 normal. ECGs were processed with the Philips DXL algorithm and then, in time order for each patient, with the Philips serial comparison algorithm. To measure the accuracy of interpretation and serial change, an expert cardiologist corrected the ECGs in stages. The first ECG was corrected and used as the reference for the second ECG. The second ECG was then corrected and used as the reference for the third ECG, and so on. At each stage, the serial comparison algorithm compared an unedited ECG to an earlier edited ECG. Interpretation accuracy was measured by comparing the algorithm to the cardiologist on a statement-by-statement basis. The effect of serial comparison was measured by the sum of interpretive statement mismatches between the algorithm and cardiologist. Statement mismatches were measured in two ways: (1) exact match and (2) match within the same diagnostic category. RESULTS: The cardiologist used 910 statements over 233 ECGs, for an average of 3.9 statements per ECG and a mode of 4 statements. When automated serial comparison was used, the total number of exact statement mismatches decreased by 29% and the total number of same-category statement mismatches decreased by 47%.
CONCLUSION: Automated serial comparison improves interpretation accuracy in addition to its main role of noting differences between ECGs.
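The two mismatch measures described in the METHODS section can be sketched as set differences. The statement-to-category mapping here is a hypothetical stand-in (the actual statement libraries are proprietary):

```python
def statement_mismatches(algo_statements, cardiologist_statements, category):
    """Count interpretation mismatches two ways, as in the study.

    Exact: symmetric difference of the literal statement sets.
    Same-category: symmetric difference after mapping each statement
    to its diagnostic category via the `category` dict.
    """
    exact = len(set(algo_statements) ^ set(cardiologist_statements))
    algo_cats = {category[s] for s in algo_statements}
    card_cats = {category[s] for s in cardiologist_statements}
    same_category = len(algo_cats ^ card_cats)
    return exact, same_category
```

The category-level count forgives wording differences within one diagnosis, which is why the reported same-category improvement (47%) exceeds the exact-match improvement (29%).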


Subject(s)
Algorithms , Arrhythmias, Cardiac/diagnosis , Artificial Intelligence , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Pattern Recognition, Automated/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
12.
J Electrocardiol ; 43(6): 572-6, 2010.
Article in English | MEDLINE | ID: mdl-21040827

ABSTRACT

UNLABELLED: A recent Scientific Statement from the American Heart Association (AHA) recommends that hospital patients should receive QT interval monitoring if certain conditions are present: QT-prolonging drug administration or admission for drug overdose, electrolyte disturbances (K, Mg), and bradycardia. No studies have quantified the proportion of critical care patients that meet the AHA's indications for QT interval monitoring. This is a prospective study of 1039 critical care patients to determine the proportion of patients that meet the AHA's indications for QT interval monitoring. A secondary aim is to evaluate the predictive value of the AHA's indications in identifying patients who actually develop QT interval prolongation. METHODS: Continuous QT interval monitoring software was installed in all monitored beds (n = 154) across 5 critical care units. This system uses outlier rejection and median filtering in all available leads to construct a root-mean-square wave from which the QT measurement is made. The Fridericia formula was used for heart rate correction. A QT interval greater than 500 milliseconds for 15 minutes or longer was considered prolonged for analyses. To minimize false positives, all episodes of QT prolongation were manually over-read. Clinical data were abstracted from the medical record. RESULTS: Overall, 69% of patients had 1 or more AHA indications for QT interval monitoring. More women (74%) had indications than men (64%, P = .001). One quarter (24%) had QT interval prolongation (>500 ms for ≥15 minutes). The odds for QT interval prolongation increased with the number of AHA indications present: 1 indication, odds ratio (OR) = 3.2 (2.1-5.0); 2 indications, OR = 7.3 (4.6-11.7); and 3 or more indications, OR = 9.2 (4.8-17.4). The positive predictive value of the AHA indications for QT interval prolongation was 31.2%; the negative predictive value was 91.3%. CONCLUSION: Most critically ill patients (69%) have AHA indications for QT interval monitoring.
One quarter of critically ill patients (24%) developed QT interval prolongation. The AHA indications for QT interval monitoring successfully captured the majority of critically ill patients developing QT interval prolongation.
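The Fridericia rate correction named in the METHODS section is a standard formula and can be written directly:

```python
def qtc_fridericia(qt_ms, rr_ms):
    """Fridericia-corrected QT: QTc = QT / cbrt(RR in seconds),
    with QT and RR in milliseconds. The study's prolongation
    criterion was QTc > 500 ms sustained for 15 minutes or longer."""
    return qt_ms / (rr_ms / 1000.0) ** (1.0 / 3.0)
```

At 60 bpm (RR = 1000 ms) the correction is the identity; at faster rates the same raw QT yields a larger QTc, which is what makes a fixed 500 ms cutoff usable across heart rates.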


Subject(s)
Critical Care/statistics & numerical data , Electrocardiography/statistics & numerical data , Long QT Syndrome/diagnosis , Long QT Syndrome/epidemiology , Monitoring, Physiologic/statistics & numerical data , California , Female , Humans , Incidence , Male , Middle Aged , Patient Selection , Pilot Projects , Risk Assessment , Risk Factors
13.
Ann Noninvasive Electrocardiol ; 14 Suppl 1: S3-8, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19143739

ABSTRACT

BACKGROUND: Commonly used techniques for QT measurement that identify the T wave end using amplitude thresholds or the tangent method are sensitive to baseline drift and to variations in terminal T wave shape. Such QT measurement techniques commonly underestimate or overestimate the "true" QT interval. METHODS: To find the end of the T wave, the new Philips QT interval measurement algorithms use the distance from an ancillary line drawn from the peak of the T wave to a point beyond the expected inflection point at the end of the T wave. We have adapted and optimized modifications of this basic approach for use in three different ECG application areas: resting diagnostic, ambulatory Holter, and in-hospital patient monitoring. The Philips DXL resting diagnostic algorithm uses an alpha-trimming technique and a measure of central tendency to determine the median QT value of the eight most reliable leads. In ambulatory Holter ECG analysis, generally only two or three channels are available, so QT is measured on a root-mean-square vector magnitude signal. Finally, QT measurement in the real-time in-hospital application is among the most challenging areas of QT measurement. The Philips real-time QT interval measurement algorithm employs features from both the Philips DXL 12-lead and ambulatory Holter QT algorithms with further enhancements. RESULTS: The diagnostic 12-lead algorithm has been tested against the gold-standard measurement database established by the CSE group, with results surpassing the industrial ECG measurement accuracy standards. Holter and monitoring algorithm performance on the PhysioNet QT database was shown to be similar to manual measurements by two cardiologists. CONCLUSION: The three variations of the QT measurement algorithm we developed are suitable for diagnostic 12-lead, Holter, and patient monitoring applications.
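The line-distance idea described in the METHODS section can be sketched as follows: draw a line from the T peak to a point beyond the expected T end, then take the sample farthest from that line. This is a simplified, hypothetical version; the commercial algorithms add further refinements:

```python
import math

def t_end_by_line_distance(sig, t_peak, window_end):
    """T-wave end estimate via maximum perpendicular distance.

    `t_peak` and `window_end` are sample indices; the latter must
    lie safely beyond the expected end of the T wave.
    """
    x1, y1 = t_peak, sig[t_peak]
    x2, y2 = window_end, sig[window_end]
    best_i, best_d = t_peak, -1.0
    for i in range(t_peak, window_end + 1):
        # Perpendicular distance from (i, sig[i]) to the line (x1,y1)-(x2,y2)
        num = abs((y2 - y1) * i - (x2 - x1) * sig[i] + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1)
        best_d, best_i = max((best_d, best_i), (num / den, i))
    return best_i
```

Unlike a fixed amplitude threshold, the distance to a chord spanning the T descent changes little under baseline drift, which is the robustness claim made above.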


Subject(s)
Electrocardiography/methods , Algorithms , Electrocardiography/standards , Electrocardiography, Ambulatory , Heart Rate , Humans , Monitoring, Physiologic , Rest , Signal Processing, Computer-Assisted
14.
J Electrocardiol ; 42(6): 522-6, 2009.
Article in English | MEDLINE | ID: mdl-19608194

ABSTRACT

Electrocardiographic (ECG) monitoring plays an important role in the management of patients with atrial fibrillation (AF). An automated real-time AF detection algorithm is an integral part of ECG monitoring during AF therapy: ECG monitoring before and after antiarrhythmic drug therapy and surgical procedures is needed to confirm the success of AF therapy. This article reports our experience in developing a real-time AF monitoring algorithm and techniques to eliminate false-positive AF alarms. We start with an algorithm based on R-R intervals that uses a Markov modeling approach to calculate an R-R Markov score. This score reflects the relative likelihood of observing a sequence of R-R intervals within AF episodes versus outside them. The AF algorithm is then enhanced with atrial activity analysis: P-R interval variability and a P-wave morphology similarity measure are used in addition to the R-R Markov score in classification. A hysteresis counter eliminates short AF segments to reduce false AF alarms, for better suitability in a monitoring environment. A large ambulatory Holter database (n = 633) was used for algorithm development, and the publicly available MIT-BIH AF database (n = 23) was used for validation; this validation database allowed us to compare our algorithm's performance with previously published algorithms. Although R-R irregularity is the main characteristic and strongest discriminator of AF rhythm, by adding atrial activity analysis and techniques to eliminate very short AF episodes, we achieved 92% sensitivity and 97% positive predictive value in detecting AF episodes, and 93% sensitivity and 98% positive predictive value in quantifying AF segment duration.
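The R-R Markov score described above can be sketched as a log-likelihood ratio over quantized R-R interval transitions. The two-state model, the 0.8-second bin threshold, and both transition matrices below are invented for illustration; the actual algorithm's states and probabilities are trained on the Holter database.

```python
import numpy as np

def markov_score(rr, trans_af, trans_naf, bins):
    """Log-likelihood ratio of an R-R interval sequence under an AF
    transition model versus a non-AF model (illustrative sketch)."""
    states = np.digitize(rr, bins)          # quantize intervals into bins
    score = 0.0
    for a, b in zip(states[:-1], states[1:]):
        score += np.log(trans_af[a, b]) - np.log(trans_naf[a, b])
    return score

# toy 2-state model: state 0 = short interval (< 0.8 s), state 1 = long
bins = [0.8]
trans_af = np.array([[0.5, 0.5], [0.5, 0.5]])   # AF: transitions equally likely
trans_naf = np.array([[0.9, 0.1], [0.1, 0.9]])  # sinus rhythm: state persists

irregular = [0.6, 1.0, 0.5, 1.1, 0.6, 1.0]      # alternating short/long R-R
regular = [0.85, 0.86, 0.84, 0.85, 0.86, 0.85]  # steady R-R intervals
```

A positive score (irregular sequence) favors AF; a negative score (regular sequence) favors non-AF, matching the article's use of the score as the primary discriminator before atrial activity analysis is added.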


Subject(s)
Algorithms , Atrial Fibrillation/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography, Ambulatory/methods , Software , Computer Systems , Humans , Reproducibility of Results , Sensitivity and Specificity , Software Design
15.
J Electrocardiol ; 41(1): 8-14, 2008.
Article in English | MEDLINE | ID: mdl-18191652

ABSTRACT

The details of digital recording and computer processing of the 12-lead electrocardiogram (ECG) remain a source of confusion for many health care professionals. A better understanding of the design and performance tradeoffs inherent in electrocardiograph design can lead to better-quality ECG recordings and better ECG interpretation. This paper serves as a tutorial, from an engineering point of view, for those who are new to the field of ECG and for clinicians who want a better understanding of the engineering tradeoffs involved. Problems arise when the benefits of various electrocardiograph features are widely understood but their costs and tradeoffs are not. An electrocardiograph is divided into 2 main components: the patient module, for ECG signal acquisition, and the main unit for ECG processing, which holds the main processor, fast printer, and display. The low-level ECG signal from the body is amplified and converted to a digital signal for further computer processing. The ECG is then processed for display by user-selectable filters that reduce various artifacts. A high-pass filter attenuates very low frequency baseline sway or wander, a low-pass filter attenuates high-frequency muscle artifact, and a notch filter attenuates interference from alternating current power. Although the target artifact is reduced in each case, the ECG signal is also distorted slightly by the applied filter: the low-pass filter attenuates high-frequency components of the ECG such as sharp R waves, and the high-pass filter can cause ST-segment distortion, for instance. Good skin preparation and electrode placement reduce artifacts at the source and can eliminate the need for routine use of these filters.
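The three display filters just described can be sketched with standard IIR designs. The cutoff values (0.5 Hz high-pass, 40 Hz low-pass, 60 Hz notch) and the synthetic test signal are illustrative assumptions, not the settings of any particular electrocardiograph.

```python
import numpy as np
from scipy.signal import butter, filtfilt, iirnotch

fs = 500  # sampling rate in Hz (assumed)

def clean_ecg(ecg, fs=fs, powerline=60.0):
    """Apply the three display filters described above (illustrative
    cutoffs; real devices choose them per display mode)."""
    # high-pass ~0.5 Hz: removes baseline wander, may distort the ST segment
    b, a = butter(2, 0.5 / (fs / 2), btype="highpass")
    ecg = filtfilt(b, a, ecg)          # zero-phase to avoid phase distortion
    # low-pass ~40 Hz: attenuates muscle artifact (and sharp R waves)
    b, a = butter(4, 40.0 / (fs / 2), btype="lowpass")
    ecg = filtfilt(b, a, ecg)
    # narrow notch at the power-line frequency
    b, a = iirnotch(powerline, Q=30.0, fs=fs)
    return filtfilt(b, a, ecg)

t = np.arange(0, 2, 1 / fs)
clean = np.sin(2 * np.pi * 5 * t)                  # stand-in "ECG" component
noisy = clean + 0.5 * np.sin(2 * np.pi * 60 * t) + 0.3 * t  # hum + drift
filtered = clean_ecg(noisy)
```

Running this removes most of the 60 Hz hum and the baseline drift while passing the 5 Hz component nearly unchanged, illustrating the tradeoff the tutorial makes explicit: each filter reduces its target artifact at the cost of slightly distorting the ECG itself.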


Subject(s)
Diagnosis, Computer-Assisted/instrumentation , Diagnosis, Computer-Assisted/methods , Electrocardiography/instrumentation , Electrocardiography/methods , Electronics, Medical , Signal Processing, Computer-Assisted/instrumentation , Equipment Design
16.
J Electrocardiol ; 41(6): 466-73, 2008.
Article in English | MEDLINE | ID: mdl-18954606

ABSTRACT

Reduced-lead electrocardiographic systems are currently a widely accepted medical technology used in a number of applications. They provide increased patient comfort and superior performance in arrhythmia and ST monitoring. These systems have unique and compelling advantages over traditional multichannel monitoring lead systems. However, the design and development of reduced-lead systems create numerous technical challenges. This article summarizes the major technical challenges commonly encountered in lead reconstruction for reduced-lead systems. We discuss the effects of basis-lead and target-lead selection, the differences between interpolated and extrapolated leads, the database dependency of the coefficients, and approaches to quantitative performance evaluation, and we provide a comparison of different lead systems. In conclusion, existing reduced-lead systems differ significantly in their trade-offs from the technical, practical, and clinical points of view. Understanding the technical limitations, strengths, and trade-offs of these reduced-lead systems will hopefully guide future research.
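Reconstruction coefficients for a reduced-lead system are typically fit by least squares on a training database, which is exactly why the article flags their database dependency. A minimal sketch, with synthetic data standing in for recorded leads (the lead names in the comments are assumptions):

```python
import numpy as np

def fit_reconstruction(basis, target):
    """Least-squares coefficients mapping basis leads to a target lead.
    basis: (n_samples, n_basis_leads); target: (n_samples,).
    Illustrative only; real systems fit on large multi-patient databases."""
    coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return coef

rng = np.random.default_rng(0)
basis = rng.normal(size=(1000, 2))       # e.g. recorded leads such as V2, V5
true_coef = np.array([0.7, -0.3])
target = basis @ true_coef               # e.g. a target lead to be derived
coef = fit_reconstruction(basis, target)
reconstructed = basis @ coef
```

Coefficients fit on one population transfer imperfectly to another; quantitative evaluation (e.g., residual error between recorded and reconstructed leads on held-out data) is what the article's comparison of lead systems rests on.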


Subject(s)
Electrocardiography/instrumentation , Electrocardiography/trends , Electrodes/trends , Forecasting , Internationality , Reproducibility of Results , Sensitivity and Specificity
17.
J Electrocardiol ; 41(6): 546-52, 2008.
Article in English | MEDLINE | ID: mdl-18817921

ABSTRACT

A 12-lead electrocardiogram (ECG) reconstructed from a reduced subset of leads is desirable in continuous arrhythmia and ST monitoring for fewer tangled wires and increased patient comfort. However, the impact of reconstructed 12-lead ECGs on clinical ECG diagnosis has not been studied thoroughly. This study compares recorded and reconstructed 12-lead diagnostic ECG interpretation for 2 commonly used configurations: reconstruct precordial leads V(2), V(3), V(5), and V(6) from V(1) and V(4), or reconstruct V(1), V(3), V(4), and V(6) from V(2) and V(5). Limb leads are recorded in both configurations. A total of 1785 ECGs were randomly selected from a large database of 50,000 ECGs consecutively collected from 2 teaching hospitals; ECGs with extreme artifact and paced rhythm were excluded. Manual ECG annotations by 2 cardiologists were categorized and used in testing. The Philips resting 12-lead ECG algorithm was used to generate computer measurements and interpretations for comparison. Results were compared for both arrhythmia and morphology categories with high-prevalence interpretations, including atrial fibrillation, anterior myocardial infarct, right bundle-branch block, left bundle-branch block, left atrial enlargement, and left ventricular hypertrophy. Sensitivity and specificity were calculated for each reconstruction configuration in these arrhythmia and morphology categories. Compared with recorded 12-lead ECGs, the V(2),V(5) configuration shows weakness in interpretations where V(1) is important, such as atrial arrhythmia, atrial enlargement, and bundle-branch blocks. The V(1),V(4) configuration shows decreased sensitivity in the detection of anterior myocardial infarct, left bundle-branch block (LBBB), and left ventricular hypertrophy (LVH). In conclusion, reconstructed precordial leads are not equivalent to recorded leads for clinical ECG diagnosis, especially in ECGs presenting rhythm and morphology abnormalities. In addition, significant reductions in interpretation accuracy were not strongly correlated with waveform differences between reconstructed and recorded 12-lead ECGs.


Subject(s)
Arrhythmias, Cardiac/diagnosis , Diagnostic Errors/prevention & control , Electrocardiography/instrumentation , Electrocardiography/methods , Humans , Reproducibility of Results , Sensitivity and Specificity
18.
J Electrocardiol ; 40(6 Suppl): S103-10, 2007.
Article in English | MEDLINE | ID: mdl-17993306

ABSTRACT

QT surveillance of neonatal patients, and especially premature infants, may be important because of potential concomitant exposure to QT-prolonging medications and because of the possibility of hereditary QT prolongation (long-QT syndrome), which is implicated in the pathogenesis of approximately 10% of sudden infant death syndrome cases. In-hospital automated continuous QT interval monitoring for neonatal and pediatric patients may be beneficial but is difficult because of high heart rates; inverted, biphasic, or low-amplitude T waves; noisy signals; and the limited number of electrocardiogram (ECG) leads available. Based on our previous work on an automated adult QT interval monitoring algorithm, we further enhanced and expanded the algorithm for the neonatal and pediatric patient population. This article presents results from an evaluation of the new algorithm in neonatal patients. Neonatal monitoring ECGs (n = 66; admission age range, birth to 2 weeks) were collected from the neonatal intensive care units of 2 major teaching hospitals in the United States. Each digital recording was at least 10 minutes long, with a sampling rate of 500 samples per second. Special handling of high heart rates was implemented, and threshold values were adjusted specifically for neonatal ECGs. The ECGs studied were divided into a development/training data set (TRN), with 24 recordings from hospital 1, and a testing data set (TST), with 42 recordings from hospital 1 (n = 16) and hospital 2 (n = 26). Each ECG recording was manually annotated for QT interval over a 15-second period by 2 cardiologists. Mean and standard deviation of the difference (algorithm minus cardiologist), regression slope, and correlation coefficient were used to describe algorithm accuracy. Considering the technical problems due to noisy recordings, a high fraction (approximately 80%) of the ECGs studied were measurable by the algorithm. Mean and standard deviation of the error were both low (TRN = -3 +/- 8 milliseconds; TST = 1 +/- 20 milliseconds); the regression slopes (TRN = 0.94; TST = 0.83) and correlation coefficients (TRN = 0.96; TST = 0.85) (P < .0001) were fairly high. Performance on the TST was similar to that on the TRN with the exception of 2 cases. These results confirm that automated continuous QT interval monitoring in the neonatal intensive care setting is feasible and accurate and may lead to earlier recognition of the "vulnerable" infant.
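The accuracy summaries used above (mean and SD of the algorithm-minus-cardiologist difference, regression slope, and correlation coefficient) are straightforward to compute. A sketch with invented toy values, not the study's data:

```python
import numpy as np

def agreement_stats(algo_ms, manual_ms):
    """Mean and SD of the difference (algorithm minus reference),
    regression slope of algorithm on reference, and Pearson r."""
    diff = algo_ms - manual_ms
    slope = np.polyfit(manual_ms, algo_ms, 1)[0]
    r = np.corrcoef(manual_ms, algo_ms)[0, 1]
    return diff.mean(), diff.std(), slope, r

manual = np.array([300.0, 320.0, 340.0, 360.0])  # cardiologist QT, ms (toy)
algo = np.array([298.0, 322.0, 338.0, 362.0])    # hypothetical algorithm QT
mean_err, sd_err, slope, r = agreement_stats(algo, manual)
```

A slope near 1 and a small mean error indicate agreement without systematic bias; the study's TST slope of 0.83 with near-zero mean error suggests mild compression of extreme values rather than a constant offset.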


Subject(s)
Algorithms , Critical Care/methods , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Long QT Syndrome/diagnosis , Humans , Infant, Newborn , Reproducibility of Results , Sensitivity and Specificity
19.
Am J Cardiol ; 98(1): 88-92, 2006 Jul 01.
Article in English | MEDLINE | ID: mdl-16784927

ABSTRACT

QT-interval measurements have clinical importance for the electrocardiographic recognition of congenital and acquired heart disease and as markers of arrhythmogenic risk during drug therapy, but software algorithms for the automated measurement of electrocardiographic durations differ among manufacturers and evolve within manufacturers. To compare automated QT-interval measurements, simultaneous paired electrocardiograms were obtained in 218 subjects using digital recorders from the 2 major manufacturers of electrocardiographs used in the United States and analyzed by 2 currently used versions of each manufacturer's software. The 4 automated QT and QTc durations were examined by repeated-measures analysis of variance with post hoc testing. Significantly larger automated QT-interval measurements were found with the most recent software of each manufacturer (12- to 24-ms mean differences from earlier algorithms). Systematic differences in QT measurements between manufacturers were significant for the earlier algorithms (11-ms mean difference) but not for the most recent software (1.3-ms mean difference). Similar relations were found for the rate-corrected QTc, with large mean differences between earlier and later algorithms (15 to 26 ms). Although there was a <2-ms mean difference between the most recent automated QTc measurements of the 2 manufacturers, the SD of the difference was 12 ms. In conclusion, reference values for automated electrocardiographic intervals and serial QT measurements vary among electrocardiographs and analysis software. Technically based differences in automated QT and QTc measurements must be considered when these intervals are used as markers of heart disease, prognosis, or arrhythmogenic risk.
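For context, automated QTc values depend not only on QT measurement but also on the rate-correction formula applied. Two common corrections, shown for illustration only (the article does not state which corrections the manufacturers use):

```python
def qtc_bazett(qt_ms, rr_s):
    """Bazett correction: QTc = QT / sqrt(RR), with RR in seconds."""
    return qt_ms / rr_s ** 0.5

def qtc_fridericia(qt_ms, rr_s):
    """Fridericia correction: QTc = QT / RR**(1/3)."""
    return qt_ms / rr_s ** (1 / 3)

# a 400-ms QT at 75 beats/min (RR = 0.8 s)
bazett = qtc_bazett(400.0, 0.8)          # ~447 ms
fridericia = qtc_fridericia(400.0, 0.8)  # ~431 ms
```

The two formulas differ by ~16 ms at this heart rate, the same order as the 12- to 24-ms between-software differences reported above, which is why serial QT comparisons should hold both the measurement software and the correction formula fixed.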


Subject(s)
Algorithms , Electrocardiography/instrumentation , Heart Rate/physiology , Evaluation Studies as Topic , Humans , Reaction Time , Regression Analysis
20.
J Electrocardiol ; 39(4 Suppl): S123-7, 2006 Oct.
Article in English | MEDLINE | ID: mdl-16920145

ABSTRACT

QT interval measurement in the patient monitoring environment is receiving much interest because of the potential for proarrhythmic effects from both cardiac and noncardiac drugs. The American Heart Association and American Association of Critical-Care Nurses practice standards for ECG monitoring in hospital settings now recommend frequent monitoring of the QT interval when patients are started on a potentially proarrhythmic drug. We developed an algorithm to continuously measure the QT interval in real time in the patient monitoring setting; this study reports our experience in developing and testing it. Compared with the environment of resting ECG analysis, real-time ECG monitoring poses a number of challenges: significantly greater amounts of muscle and motion artifact, increased baseline wander, a varied number and location of ECG leads, and the need for trending and for alarm generation when QT interval prolongation is detected. We used several techniques to address these challenges. In contiguous 15-second time windows, we average the signal of tightly clustered normal beats detected by a real-time arrhythmia monitoring algorithm to minimize the impact of artifact. Baseline wander is reduced by zero-phase high-pass filtering and subtraction of isoelectric points, determined as median signal values in a localized region. We compute a root-mean-squared ECG waveform from all available leads and use a novel technique to measure the QT interval. We tested this algorithm against standard and proprietary ECG databases; our real-time QT interval measurement algorithm proved stable, accurate, and able to track changing QT values.
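The root-mean-squared waveform described above combines all available leads into a single channel for measurement. A minimal sketch (the two-lead toy array is an assumption):

```python
import numpy as np

def rms_waveform(leads):
    """Root-mean-squared ECG waveform across leads.
    leads: array of shape (n_leads, n_samples)."""
    return np.sqrt(np.mean(leads ** 2, axis=0))

# toy two-lead, two-sample example
leads = np.array([[3.0, 0.0],
                  [4.0, 0.0]])
rms = rms_waveform(leads)
```

Because the RMS combination is insensitive to the polarity and number of individual leads, it suits a monitoring environment where lead count and placement vary from patient to patient.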


Subject(s)
Algorithms , Arrhythmias, Cardiac/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Monitoring, Physiologic/methods , Computer Systems , Humans , Long QT Syndrome/diagnosis , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity