Results 1 - 20 of 32
1.
J Electrocardiol; 74: 104-108, 2022.
Article in English | MEDLINE | ID: mdl-36095923

ABSTRACT

BACKGROUND: The standard 12-lead ECG is used for diagnosis and risk stratification in patients with suspected acute coronary syndrome (ACS). Artifacts have a significant impact on measurement quality, which in turn affects diagnostic decisions. We used a signal quality indicator (SQI) to identify the ECG segments with lower artifact levels, which we hypothesized would improve ST measurements. METHODS: The Staff III 12-lead ECG database was used, taking the ECG segments before balloon inflation (n = 185). SQI scores were calculated per second, and for each minute of recording the 10-s ECG segment with the least noise and artifact (Clean10) was identified. The first 10 s of each minute (First10) served as a reference. The Philips DXL™ algorithm was used to measure ST levels at the J-point and at +20 ms, +40 ms, +60 ms, and +80 ms after the J-point. Standard deviations (SDs) of the ST measurements were calculated for the Clean10 and the First10 segments across each of the 185 ECG records, and the resulting SDs were compared using the Wilcoxon signed rank test. RESULTS: The results indicated that 1) the SDs for the Clean10 segments were lower than those for the First10 segments; 2) the SDs for J+20 ms and J+40 ms were the lowest among the 5 measuring points, although a similar improvement of Clean10 over First10 was observed for J+60 ms and J+80 ms; and 3) the improvement at the J-point was not as large as at the other ST measurement points. CONCLUSIONS: The SQI is demonstrated to be an efficient tool for identifying the ECG segments with lower artifact levels, producing more consistent and reliable ST measurements. The measurements at J+20 ms showed the highest consistency among the five studied measuring points.
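As an illustrative sketch (not the paper's implementation; the SQI computation itself is omitted and the per-second scores are assumed given), the Clean10 selection step can be expressed as:

```python
# Sketch: pick the cleanest contiguous 10-s window from per-second SQI
# scores (higher score = cleaner signal). Names are illustrative.

def cleanest_window(sqi_per_second, window=10):
    """Return (start_second, mean_sqi) of the best contiguous window."""
    if len(sqi_per_second) < window:
        raise ValueError("recording shorter than window")
    best_start, best_score = 0, float("-inf")
    for start in range(len(sqi_per_second) - window + 1):
        score = sum(sqi_per_second[start:start + window]) / window
        if score > best_score:
            best_start, best_score = start, score
    return best_start, best_score

# One minute of scores with a clean stretch in seconds 30-39:
scores = [0.4] * 30 + [0.95] * 10 + [0.5] * 20
start, mean_sqi = cleanest_window(scores)  # start == 30
```

The First10 reference is then simply the window at `start == 0` of each minute.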


Subject(s)
Acute Coronary Syndrome; Humans; Acute Coronary Syndrome/diagnosis; Electrocardiography; Reproducibility of Results
2.
J Electrocardiol; 69: 6-14, 2021.
Article in English | MEDLINE | ID: mdl-34474312

ABSTRACT

This paper proposes a two-dimensional (2D) bidirectional long short-term memory generative adversarial network (GAN) to produce synthetic standard 12-lead ECGs corresponding to four types of signals: left ventricular hypertrophy (LVH), left bundle branch block (LBBB), acute myocardial infarction (ACUTMI), and Normal. It uses a fully automatic end-to-end process to generate and verify the synthetic ECGs that requires no visual inspection. The proposed model produces synthetic standard 12-lead ECG signals with success rates of 98% for LVH, 93% for LBBB, 79% for ACUTMI, and 59% for Normal. Statistical evaluation of the data confirms that the synthetic ECGs are neither biased towards nor overfitted to the training ECGs, and span a wide range of morphological features. This study demonstrates that it is feasible to use a 2D GAN to produce standard 12-lead ECGs suitable for artificially augmenting a diverse database of real ECGs, thus offering a possible solution to the demand for extensive ECG datasets.
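The abstract does not specify how overfitting to the training ECGs was ruled out; as a hypothetical sketch of one simple memorization check (nearest-training-beat distance), assuming beats are fixed-length sample vectors:

```python
# Hypothetical sketch, not the paper's method: flag synthetic beats that
# lie suspiciously close to some training beat.

def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def memorized_fraction(synthetic_beats, training_beats, tol):
    """Fraction of synthetic beats within `tol` (squared distance) of
    their nearest training beat; a high value suggests memorization."""
    close = sum(
        1 for s in synthetic_beats
        if min(squared_distance(s, t) for t in training_beats) <= tol
    )
    return close / len(synthetic_beats)

training = [[0.0, 1.0, 0.0], [0.0, 0.5, 0.0]]
synthetic = [[0.0, 1.0, 0.01], [0.3, 0.3, 0.3]]
frac = memorized_fraction(synthetic, training, tol=0.05)
```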


Subject(s)
Electrocardiography; Myocardial Infarction; Bundle-Branch Block; Databases, Factual; Humans; Hypertrophy, Left Ventricular/diagnosis
3.
J Electrocardiol; 69S: 12-22, 2021.
Article in English | MEDLINE | ID: mdl-34579960

ABSTRACT

BACKGROUND: Not every lead contributes equally to the interpretation of an ECG. For some abnormalities, the importance of each lead is not clear from either cardiac electrophysiology or experience. It is therefore useful to develop an algorithm that quantifies lead importance in ECG reading, that is, determines how much weight to give the evidence from each individual lead when interpreting an ECG. METHODS: One representative beat per lead was constructed for each ECG in a database. An algorithm was developed to find the top K (K = 1, 5, 10, 20, 50, 100) ECGs in the database with the most similar morphology to the query ECG, independently for each lead. For each lead, the query ECG was interpreted by weighted-average voting over the most similar ECGs, applying a variety of thresholds. For each category of abnormality, we found the threshold that maximized the median F1 score (of sensitivity and positive predictive value) across all ECG leads. Finally, the F1 score of each lead at this chosen threshold was defined as that lead's importance value. RESULTS: Eighteen morphology-based categories of abnormality were investigated in two databases. For most categories, the lead importance confirmed what expert ECG readers already know, but it also revealed new insights. For example, lead aVR appeared among the top 6 most important leads in 11 and 12 of the abnormality categories in the two databases, respectively, and ranked first among the 12 leads when summarizing over all categories. CONCLUSIONS: Lead importance information may be useful for selecting only the most important leads to screen for a specific abnormality, for example with wearable patches.
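A minimal sketch of the per-lead similarity search with weighted-average voting, assuming representative beats are fixed-length sample vectors and inverse-distance weights (the paper's actual distance measure and weighting scheme are not specified in this abstract):

```python
def squared_distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def weighted_vote(query_beat, labeled_beats, k=3, threshold=0.5):
    """Interpret a query beat from its k most similar labeled beats.
    labeled_beats: list of (beat, label) pairs, label in {0, 1}."""
    nearest = sorted(labeled_beats,
                     key=lambda item: squared_distance(query_beat, item[0]))[:k]
    # Inverse-distance weighting; epsilon guards against exact matches.
    weights = [1.0 / (squared_distance(query_beat, b) + 1e-9)
               for b, _ in nearest]
    score = sum(w * lab for w, (_, lab) in zip(weights, nearest)) / sum(weights)
    return 1 if score >= threshold else 0

# Toy beats: two positives peaked in the middle, two negatives at the start.
labeled = [([0.0, 1.0, 0.0], 1), ([0.0, 0.9, 0.0], 1),
           ([1.0, 0.0, 0.0], 0), ([0.9, 0.0, 0.0], 0)]
```

Sweeping `threshold` and scoring each lead's F1 then yields the per-lead importance values described above.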


Subject(s)
Big Data; Electrocardiography; Algorithms; Databases, Factual; Humans; Predictive Value of Tests
4.
J Electrocardiol; 69S: 75-78, 2021.
Article in English | MEDLINE | ID: mdl-34544590

ABSTRACT

Many studies that rely on manual ECG interpretation as a reference use multiple expert interpreters and a method to resolve differences between them, reflecting the fact that experts sometimes apply different criteria. The aim of this study was to show the effect of manual ECG interpretation style on the training of automated ECG interpretation. METHODS: The effect of interpretation style, or of differing ECG criteria, on algorithm training was shown by careful analysis of the changes in algorithm performance when the algorithm was trained on one database and tested on another. Morphology-related ECG interpretation was summarized in eleven abnormalities, such as left bundle branch block (LBBB) and old anterior myocardial infarction (MI). Each of the two databases used in the study had a reference interpretation mapped to these eleven abnormalities. F1 algorithm performance scores across abnormalities were compared for four cases: first, the algorithm was trained and tested on a random split of database A, then trained on the training set of database A and tested on a randomly chosen test set of database B. These two cases were then repeated with the databases reversed: train and test on database B, then train on database B and test on the test set of database A. RESULTS: F1 scores across abnormalities were generally higher when training and testing on the same database. F1 scores were high for bundle branch blocks (BBB) regardless of the training and testing database combination. The old anterior MI F1 score dropped for one cross-database comparison but not the other, suggesting a difference in manual interpretation. CONCLUSION: For some abnormalities, human experts appear to have used different criteria for ECG interpretation, as evidenced by the difference between cross-database and within-database performance. Bundle branch blocks appear to be interpreted consistently.
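The F1 score referred to here is the harmonic mean of sensitivity and positive predictive value; a minimal sketch, with the four train/test cases listed for reference:

```python
def f1(sensitivity, ppv):
    """Harmonic mean of sensitivity and positive predictive value."""
    return 2 * sensitivity * ppv / (sensitivity + ppv)

# The four (train_db, test_db) evaluation cases compared in the study:
cases = [("A", "A"), ("A", "B"), ("B", "B"), ("B", "A")]
```

A drop in `f1` only for the cross-database cases ("A", "B") or ("B", "A") is the signature of differing manual interpretation criteria between the two databases.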


Subject(s)
Myocardial Infarction; Reading; Arrhythmias, Cardiac; Bundle-Branch Block; Electrocardiography; Humans
5.
J Electrocardiol; 69: 60-64, 2021.
Article in English | MEDLINE | ID: mdl-34571467

ABSTRACT

BACKGROUND: Early and correct diagnosis of ST-segment elevation myocardial infarction (STEMI) is crucial for providing timely reperfusion therapy. Patients with ischemic symptoms presenting with ST-segment elevation on the electrocardiogram (ECG) are preferably transported directly to a catheterization laboratory (Cath-lab) for primary percutaneous coronary intervention (PPCI). However, the ECG often contains confounding factors that make the STEMI diagnosis challenging, leading to false-positive Cath-lab activation. The objective of this study was to test the performance of a standard automated algorithm against an additional high-specificity setting developed to reduce false-positive STEMI calls. METHODS: We included consecutive patients with an available digital prehospital ECG triaged directly to the Cath-lab for acute coronary angiography between 2009 and 2012. An adjudicated discharge diagnosis of STEMI or no myocardial infarction (no-MI) was assigned to each patient. The new automatic algorithm contains a feature to reduce false-positive STEMI interpretation. STEMI performance with the standard setting (STD) and the high-specificity setting (HiSpec) was tested retrospectively against the adjudicated discharge diagnosis. RESULTS: In total, 2256 patients with an available digital prehospital ECG (mean age 63 ± 13 years, 71% male) were included in the analysis. A discharge diagnosis of STEMI was assigned in 1885 (84%) patients. The STD identified 165 true-negative and 1457 true-positive cases (206 false positives and 428 false negatives), giving 77.3%, 44.5%, 87.6%, and 27.8% for sensitivity, specificity, PPV, and NPV, respectively. The HiSpec identified 191 true-negative and 1316 true-positive cases (180 false positives and 569 false negatives), giving 69.8%, 51.5%, 88.0%, and 25.1%, respectively. From STD to HiSpec, false-positive cases were reduced by 26 (12.6%), but false-negative results increased by 33%.
CONCLUSIONS: Implementing an automated ECG algorithm with a high-specificity setting reduced the number of false-positive STEMI cases. However, the predictive values for both positive and negative STEMI identification were moderate in this highly selected STEMI population. Finally, owing to the reduced sensitivity and increased false negatives, a negative AMI statement should not be based solely on the automated ECG interpretation.
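The reported percentages follow directly from the true/false positive/negative counts; as a sketch, they can be reproduced with:

```python
def diagnostic_metrics(tp, tn, fp, fn):
    """Standard screening metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),  # positive predictive value
        "npv": tn / (tn + fn),  # negative predictive value
    }

# Counts taken from the RESULTS section above:
std = diagnostic_metrics(tp=1457, tn=165, fp=206, fn=428)
hispec = diagnostic_metrics(tp=1316, tn=191, fp=180, fn=569)
```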


Subject(s)
Acute Coronary Syndrome; Emergency Medical Services; ST Elevation Myocardial Infarction; Acute Coronary Syndrome/diagnosis; Aged; Algorithms; Electrocardiography; Humans; Male; Middle Aged; Retrospective Studies; ST Elevation Myocardial Infarction/diagnosis
6.
J Electrocardiol; 57S: S70-S74, 2019.
Article in English | MEDLINE | ID: mdl-31416598

ABSTRACT

Due to its simplicity and low cost, analyzing the electrocardiogram (ECG) is the most common technique for detecting cardiac arrhythmia. The massive amount of ECG data collected every day, at home and in the hospital, may preclude complete review by human operators or technicians. Therefore, several methods have been proposed for either fully automatic arrhythmia detection or event selection for further verification by human experts. Traditional machine learning approaches have made significant progress in recent years. However, those methods rely on hand-crafted feature extraction, which requires in-depth domain knowledge and preprocessing of the signal (e.g., beat detection). This, together with the high variability in wave morphology among patients and the presence of noise, makes it challenging for computerized interpretation to achieve high accuracy. Recent advances in deep learning make it possible to perform automatic high-level feature extraction and classification, and deep learning approaches have therefore gained interest in arrhythmia detection. In this work, we review recent deep learning methods for automatic arrhythmia detection. We summarize the existing literature from five aspects: utilized dataset, application, type of input data, model architecture, and performance evaluation. We also report the limitations of the reviewed papers and potential future opportunities.


Subject(s)
Arrhythmias, Cardiac; Deep Learning; Arrhythmias, Cardiac/diagnosis; Cardiac Conduction System Disease; Electrocardiography; Humans; Machine Learning
7.
J Electrocardiol; 57S: S79-S85, 2019.
Article in English | MEDLINE | ID: mdl-31519393

ABSTRACT

BACKGROUND: Automated ECG interpretation is most often a rule-based expert system, though experts may disagree on the exact ECG criteria. One method to automate ECG analysis while indirectly drawing on varied sets of expert rules is to base the automated interpretation on similar ECGs that already have a physician interpretation. The aim of this study was to develop and test an ECG interpretation algorithm based on such similar ECGs. METHODS: The study database consists of approximately 146,000 sequential 12-lead, 10-s ECGs collected over three years from a single hospital. All patient ECGs were included; computer interpretations were corrected by physicians as part of standard care. The algorithm developed here consists of an ECG similarity search along with a method for estimating the interpretation from a small set of similar ECGs. A second level of differential diagnosis separated ECG categories with substantial similarity, such as LVH and LBBB. Interpretation performance was tested by ROC analysis, including sensitivity (SE), specificity (SP), positive predictive value (PPV), and area under the ROC curve (AUC). RESULTS: LBBB was the category with the best interpretation performance, with an AUC of 0.981, while RBBB, LAFB, and ventricular paced rhythm also had AUCs of 0.95 or above. AUCs were 0.9 or above for the ischemic repolarization abnormality, LVH, old anterior MI, and early repolarization categories. All other morphology categories had an AUC over 0.8. CONCLUSION: ECG interpretation by similarity analysis provides adequate performance on an unselected database using only strategies to weight the interpretations of the similar ECGs. Although this algorithm may not be ready to replace rule-based computer ECG analysis, it may be a useful adjunct recommender.
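A minimal sketch of computing AUC via the rank (Mann-Whitney) statistic, one standard way to obtain values like those reported above (not necessarily the exact method used in the paper):

```python
def auc(positive_scores, negative_scores):
    """Area under the ROC curve as the probability that a randomly
    chosen positive outscores a randomly chosen negative (ties count 1/2)."""
    wins = 0.0
    for p in positive_scores:
        for n in negative_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(positive_scores) * len(negative_scores))
```

Perfect separation gives 1.0, chance-level scoring gives 0.5.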


Subject(s)
Electrocardiography; Myocardial Infarction; Algorithms; Humans; Myocardial Infarction/diagnosis; Reproducibility of Results; Sensitivity and Specificity
8.
Am Heart J; 200: 1-10, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29898835

ABSTRACT

BACKGROUND: Automated measurements of electrocardiographic (ECG) intervals by current-generation digital electrocardiographs are critical to computer-based ECG diagnostic statements, to serial comparison of ECGs, and to epidemiological studies of ECG findings in populations. A previous study demonstrated generally small but often significant systematic differences among 4 algorithms widely used for automated ECG measurement in the United States, and showed that measurement differences could be related to the degree of abnormality of the underlying tracing. Since that publication, some algorithms have been adjusted, and other large manufacturers of automated ECGs have asked to participate in an extension of this comparison. METHODS: Seven widely used automated algorithms for computer-based interpretation participated in this blinded study of 800 digitized ECGs provided by the Cardiac Safety Research Consortium. All tracings were different from those in the study of 4 algorithms reported in 2014, and the selected population was heavily weighted toward groups with known effects on the QT interval: 200 normal subjects, 200 normal subjects receiving moxifloxacin as part of an active control arm of thorough QT studies, 200 subjects with genetically proved long QT syndrome type 1 (LQT1), and 200 subjects with genetically proved long QT syndrome type 2 (LQT2). RESULTS: For the entire population of 800 subjects, pairwise differences between algorithms for each mean interval value were clinically small, even where statistically significant, ranging from 0.2 to 3.6 milliseconds for the PR interval, 0.1 to 8.1 milliseconds for QRS duration, and 0.1 to 9.3 milliseconds for the QT interval. The mean value of all paired differences among algorithms was higher in the long QT groups than in normal subjects for both QRS duration and the QT interval. Differences in mean QRS duration ranged from 0.2 to 13.3 milliseconds in the LQT1 subjects and from 0.2 to 11.0 milliseconds in the LQT2 subjects. Differences in measured QT duration (not corrected for heart rate) ranged from 0.2 to 10.5 milliseconds in the LQT1 subjects and from 0.9 to 12.8 milliseconds in the LQT2 subjects. CONCLUSIONS: Among current-generation computer-based electrocardiographs, clinically small but statistically significant differences exist between ECG interval measurements by individual algorithms. Measurement differences between algorithms for QRS duration and the QT interval are larger in long-QT subjects than in normal subjects. Investigators comparing population study norms should be aware of small systematic differences in interval measurements due to different algorithm methodologies, within-individual interval comparisons should use comparable methods, and further attempts to harmonize interval measurement methodologies are warranted.
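A minimal sketch of how pairwise mean interval differences between algorithms can be computed, assuming each algorithm's measurements are aligned per ECG (algorithm names and values here are illustrative only):

```python
from itertools import combinations

def pairwise_mean_differences(measurements):
    """measurements: dict mapping algorithm name -> per-ECG interval
    values in ms (same ECG order for every algorithm)."""
    result = {}
    for a, b in combinations(sorted(measurements), 2):
        diffs = [x - y for x, y in zip(measurements[a], measurements[b])]
        result[(a, b)] = sum(diffs) / len(diffs)
    return result

# Toy QT measurements (ms) for two ECGs from three hypothetical algorithms:
qt = {"alg1": [400.0, 410.0], "alg2": [403.0, 411.0], "alg3": [398.0, 412.0]}
deltas = pairwise_mean_differences(qt)
```

Mean absolute differences (replacing `x - y` with `abs(x - y)`) give the second statistic reported above.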


Subject(s)
Algorithms; Electrocardiography; Long QT Syndrome/diagnosis; Romano-Ward Syndrome/diagnosis; Adult; Dimensional Measurement Accuracy; Electrocardiography/methods; Electrocardiography/standards; Female; Heart Conduction System/diagnostic imaging; Humans; Male; Outcome Assessment, Health Care; Random Allocation; Signal Processing, Computer-Assisted
9.
J Electrocardiol; 51(6S): S18-S21, 2018.
Article in English | MEDLINE | ID: mdl-30122456

ABSTRACT

The development of new technologies such as wearables that record high-quality single-channel ECGs provides an opportunity for ECG screening in a larger population, especially for atrial fibrillation. The main goal of this study was to develop an automatic classification algorithm for normal sinus rhythm (NSR), atrial fibrillation (AF), other rhythms (O), and noise from a single-channel short ECG segment (9-60 s). For this purpose, we combined a signal quality index (SQI) algorithm, to assess noisy instances, with densely connected convolutional neural networks trained to classify ECG recordings. Two convolutional neural network (CNN) models (a main model that accepts 15-s ECG segments and a secondary model that processes shorter 9-s segments) were trained on the training dataset. If a recording is determined by the SQI to be of low quality, it is immediately classified as noisy; otherwise, it is transformed to a time-frequency representation and classified by the CNN as NSR, AF, O, or noise. On the 2017 PhysioNet/Computing in Cardiology Challenge test dataset, the method achieved an overall F1 score of 0.82 (F1 for NSR, AF, and O of 0.91, 0.83, and 0.72, respectively). Compared with the 80 challenge entries, this was the third-best overall score on the evaluation dataset.
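A sketch of the described triage logic (SQI gate first, then routing by segment length); the SQI function and CNN models are stand-in parameters here, not the trained models:

```python
def classify_recording(ecg, fs, sqi, main_model, short_model, sqi_min=0.5):
    """Triage sketch: reject low-quality recordings, then route by length.
    `sqi`, `main_model`, and `short_model` are callables supplied by the
    caller; `sqi_min` is an illustrative threshold."""
    if sqi(ecg) < sqi_min:
        return "noise"                      # SQI gate: noisy input
    duration = len(ecg) / fs
    if duration >= 15:
        return main_model(ecg[: int(15 * fs)])   # main 15-s model
    if duration >= 9:
        return short_model(ecg[: int(9 * fs)])   # secondary 9-s model
    return "too short"
```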


Subject(s)
Algorithms; Atrial Fibrillation/diagnosis; Electrocardiography/methods; Neural Networks, Computer; Humans; Signal Processing, Computer-Assisted
10.
J Electrocardiol; 50(5): 615-619, 2017.
Article in English | MEDLINE | ID: mdl-28476433

ABSTRACT

A large number of ST-elevation notifications are generated by cardiac monitoring systems, but only a fraction of them relate to the critical condition known as ST-segment elevation myocardial infarction (STEMI), in which blockage of a coronary artery causes ST-segment elevation. Confounders such as acute pericarditis and benign early repolarization create electrocardiographic patterns that mimic STEMI but usually do not benefit from a real-time notification. A STEMI screening algorithm that recognizes these confounders, using the ST-segment variation analysis capabilities of diagnostic ECG algorithms, helps avoid triggering non-actionable ST-elevation notifications. However, diagnostic algorithms are generally designed to analyze short ECG snapshots collected at rest under low-noise conditions, and hence are susceptible to the high levels of noise common in a monitoring environment. We developed a STEMI screening algorithm that performs real-time signal quality evaluation on the ECG waveform to select segments of sufficiently high quality for subsequent analysis by a diagnostic ECG algorithm. The STEMI notifications generated by this multi-stage screening algorithm are significantly fewer than the ST-elevation notifications generated by a continuous ST monitoring strategy.
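A hypothetical sketch of quality-gated notification logic in this spirit; the `min_consecutive` persistence rule is an illustrative assumption, not taken from the paper:

```python
def stemi_notifications(segments, is_high_quality, diagnoses_stemi,
                        min_consecutive=3):
    """Sketch: emit a notification only after `min_consecutive` consecutive
    high-quality segments are flagged; noisy segments are skipped so they
    never trigger (or reset) an alarm."""
    streak = 0
    for segment in segments:
        if not is_high_quality(segment):
            continue                    # quality gate: ignore noisy data
        streak = streak + 1 if diagnoses_stemi(segment) else 0
        if streak == min_consecutive:
            yield "ST-elevation notification"
            streak = 0
```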


Subject(s)
Acute Coronary Syndrome/diagnosis; Algorithms; Electrocardiography, Ambulatory; ST Elevation Myocardial Infarction/diagnosis; Diagnosis, Differential; Female; Humans; Male
11.
J Electrocardiol; 50(6): 841-846, 2017.
Article in English | MEDLINE | ID: mdl-28918214

ABSTRACT

BACKGROUND: The feasibility of using photoplethysmography (PPG) to estimate heart rate variability (HRV) has been the subject of many recent studies with contradictory results. Accurate measurement of cardiac cycles is more challenging in PPG than in ECG due to the signal's inherent characteristics. METHODS: We developed a PPG-only algorithm that computes a robust set of medians of the interbeat intervals between adjacent peaks, upslopes, and troughs. Abnormal intervals are detected and excluded by applying our criteria. RESULTS: We tested the algorithm on a large database from high-risk ICU patients containing arrhythmias and significant amounts of artifact. The average difference between PPG-based and ECG-based parameters is <1% for pNN50, <1 bpm for meanHR, <1 ms for SDNN, <3 ms for meanNN, and <4 ms for SDSD and RMSSD. CONCLUSIONS: Our performance testing shows that the pulse rate variability (PRV) parameters are comparable to the HRV parameters from simultaneous ECG recordings.
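A minimal sketch of the median-fusion idea and the standard time-domain parameters named above, assuming NN intervals in milliseconds (the paper's abnormal-interval exclusion criteria are omitted):

```python
from statistics import mean, median, pstdev

def fused_interval(peak_ms, upslope_ms, trough_ms):
    """Robust per-beat interval: median of the three fiducial estimates."""
    return median([peak_ms, upslope_ms, trough_ms])

def hrv_parameters(nn_ms):
    """Standard time-domain variability parameters from NN intervals (ms)."""
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    return {
        "meanNN": mean(nn_ms),
        "meanHR": 60000.0 / mean(nn_ms),          # beats per minute
        "SDNN": pstdev(nn_ms),
        "SDSD": pstdev(diffs),
        "RMSSD": mean(d * d for d in diffs) ** 0.5,
        "pNN50": 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs),
    }
```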


Subject(s)
Algorithms; Arrhythmias, Cardiac/physiopathology; Heart Rate Determination/methods; Heart Rate/physiology; Photoplethysmography/methods; Artifacts; Electrocardiography/methods; Humans; Intensive Care Units; Sensitivity and Specificity; Signal Processing, Computer-Assisted
12.
J Electrocardiol; 50(6): 769-775, 2017.
Article in English | MEDLINE | ID: mdl-29021091

ABSTRACT

Interest in the effects of drugs on the heart rate-corrected JTpeak (JTpc) interval of the body-surface ECG has spawned an increasing number of scientific investigations in regulatory science, most specifically in the context of the Comprehensive in vitro Proarrhythmia Assay (CiPA) initiative. We conducted a novel initiative to evaluate the role of automatic JTpc measurement technologies by comparing their ability to distinguish multi-channel from single-channel blocking drugs. A set of 5232 ECGs was shared by the FDA (through the Telemetric and Holter ECG Warehouse) with 3 ECG device companies (AMPS, Mortara, and Philips). We evaluated the differences in drug-concentration effects on these measurements between the commercial and FDA technologies. We describe the drug-induced, placebo-corrected changes from baseline for dofetilide, quinidine, ranolazine, and verapamil, and discuss the differences across technologies. The results revealed only small differences between the measurement technologies evaluated in this study. The study also confirms that, in this dataset, the JTpc interval distinguishes between multi-channel and single-channel (hERG) blocking drugs when evaluating the effects of dofetilide, quinidine, ranolazine, and verapamil. However, for quinidine and dofetilide, we observed poor consistency across technologies owing to the lack of a standard definition for the location of the T-wave peak (T-apex) when the T-wave morphology is abnormal.


Subject(s)
Algorithms; Biomarkers/analysis; Electrocardiography, Ambulatory/methods; Heart Conduction System/drug effects; Ion Channels/drug effects; Long QT Syndrome/chemically induced; Potassium Channel Blockers/pharmacology; Sodium Channel Blockers/pharmacology; Torsades de Pointes/chemically induced; Adolescent; Adult; Healthy Volunteers; Humans; Phenethylamines/pharmacology; Quinidine/pharmacology; Ranolazine/pharmacology; Sulfonamides/pharmacology; Verapamil/pharmacology
13.
J Electrocardiol; 49(1): 55-9, 2016.
Article in English | MEDLINE | ID: mdl-26607407

ABSTRACT

In this work we studied a computer-aided approach that uses QRS slopes as unconventional ECG features to identify exercise-induced ischemia during exercise stress testing, and demonstrated performance comparable to experts' manual analysis using standard criteria based on ST-segment depression. We evaluated the algorithm on a database of 927 patients undergoing exercise stress tests with simultaneously collected ECG recordings and SPECT results. High-resolution 12-lead ECG recordings were collected continuously throughout the rest, exercise, and recovery phases. Patients were classified into three categories (moderate/severe ischemia, mild ischemia, and normal) according to the difference in the sum of individual segment scores between the rest and stress SPECT images. The Philips DXL 16-lead diagnostic algorithm was run on all 10-s segments of the 12-lead ECG recordings for each patient to acquire the representative beats, the ECG fiducial points of those beats, and other ECG parameters. The QRS slopes were extracted for each lead from the averaged representative beats, and the leads with the highest classification power were selected. We employed linear discriminant analysis and measured performance using 10-fold cross-validation. The comparable performance of this method to conventional ST-segment analysis demonstrates the classification power of QRS slopes as unconventional ECG parameters that can contribute to improved identification of exercise-induced ischemia.
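As a simplified stand-in for the linear discriminant analysis used in the study, a nearest-centroid linear classifier over (hypothetical) QRS-slope feature vectors can illustrate the classification step:

```python
# Nearest-centroid classification: a simplified linear stand-in for LDA,
# shown here with made-up 2-D slope features.

def train_centroids(feature_vectors, labels):
    """Per-class mean feature vectors."""
    sums, counts = {}, {}
    for x, y in zip(feature_vectors, labels):
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums.get(y, [0.0] * len(x)), x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(x, centroids):
    def sqdist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda y: sqdist(x, centroids[y]))
```

In the study itself, this step would be wrapped in 10-fold cross-validation over the 927 patients.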


Subject(s)
Algorithms; Diagnosis, Computer-Assisted; Electrocardiography; Exercise Test/methods; Ischemia/diagnosis; Myocardial Ischemia/diagnosis; Humans; Pattern Recognition, Automated; Reproducibility of Results; Sensitivity and Specificity
14.
Am Heart J; 167(2): 150-159.e1, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24439975

ABSTRACT

BACKGROUND AND PURPOSE: Automated measurements of electrocardiographic (ECG) intervals are widely used by clinicians for individual patient diagnosis and by investigators in population studies. We examined whether clinically significant systematic differences exist in ECG intervals measured by current-generation digital electrocardiographs from different manufacturers and whether any differences depend on the degree of abnormality of the selected ECGs. METHODS: Measurements of the RR interval, PR interval, QRS duration, and QT interval were made blindly by 4 major manufacturers of digital electrocardiographs used in the United States from 600 XML files of ECG tracings stored in the US FDA ECG warehouse and released for this study by the Cardiac Safety Research Consortium. Three groups were included based on expected QT interval and degree of repolarization abnormality, comprising 200 ECGs each from (1) the placebo or baseline study period in normal subjects during thorough QT studies, (2) peak moxifloxacin effect in otherwise normal subjects during thorough QT studies, and (3) patients with genotyped variants of congenital long QT syndrome (LQTS). RESULTS: Differences of means between manufacturers were generally small in the normal and moxifloxacin subjects, but in the LQTS patients, differences of means ranged from 2.0 to 14.0 ms for QRS duration and from 0.8 to 18.1 ms for the QT interval. Mean absolute differences between algorithms were similar for QRS duration and QT intervals in the normal and moxifloxacin subjects (mean ≤6 ms) but were significantly larger in patients with LQTS. CONCLUSIONS: Small but statistically significant group differences in mean interval and duration measurements, and in means of individual absolute differences, exist among the automated algorithms of widely used, current-generation digital electrocardiographs. Measurement differences, including those in QRS duration and the QT interval, are greatest for the most abnormal ECGs.


Subject(s)
Algorithms; Electrocardiography/instrumentation; Heart Conduction System/physiology; Heart Rate/physiology; Signal Processing, Computer-Assisted; Adult; Equipment Design; Female; Humans; Male; Middle Aged; Reproducibility of Results
15.
J Electrocardiol; 47(6): 890-4, 2014.
Article in English | MEDLINE | ID: mdl-25194873

ABSTRACT

BACKGROUND: Pre-hospital 12-lead ECG interpretation is important because pre-hospital activation of the coronary catheterization laboratory reduces ST-segment elevation myocardial infarction (STEMI) discovery-to-treatment time. In addition, some ECG features indicate higher risk in STEMI, such as a proximal left anterior descending (LAD) culprit lesion location. The challenging nature of the pre-hospital environment can lead to noisier ECGs, which make automated STEMI detection difficult. We describe an automated system that classifies lesion location as proximal LAD, LAD, right coronary artery (RCA), or left circumflex (LCx), and test its performance on pre-hospital 12-lead ECGs. METHODS: The overall classifier was built from three linked classifiers: the first separates LAD from non-LAD (RCA or LCx), the second separates RCA from LCx, and the third separates proximal from non-proximal LAD. The proximal LAD classifier was designed for high specificity because its output may be used in the decision to modify treatment. The LCx classifier was designed for high specificity because the RCA is dominant in most people. The system was trained on a set of emergency department ECGs (n=181) and tested on a set of pre-hospital ECGs (n=80). Both sets were based on sequential samples starting with symptoms suggesting acute coronary syndromes. Culprit lesion location was determined from coronary catheterization laboratory reports. Inclusion criteria were a computer interpretation of STEMI and a culprit lesion with 70% or more narrowing. Algorithm accuracy was measured on the test set by sensitivity (SE), specificity (SP), and positive predictive value (PPV). RESULTS: SE, SP, and PPV were 50%, 100%, and 100%, respectively, for proximal LAD lesion location; 90%, 100%, and 100% for all LAD; 98%, 72%, and 78% for RCA; and 50%, 98%, and 90% for LCx. Specificity and PPV were high for proximal LAD, LAD, and LCx, and lower for RCA by design, since the RCA-LCx tradeoff favors high specificity for LCx. CONCLUSION: Although our test database is not large, the algorithm's performance suggests culprit lesion location can be reliably determined from the pre-hospital ECG. Further research is needed, however, to evaluate the impact of automated culprit lesion location on patient treatment and outcomes.
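The three linked classifiers can be sketched as a simple cascade; the classifier functions themselves are hypothetical stand-ins supplied by the caller:

```python
def locate_culprit(features, is_lad, is_proximal_lad, is_lcx):
    """Three linked binary decisions, applied as in the study design:
    LAD vs non-LAD first, then proximal vs non-proximal LAD on the LAD
    branch, or RCA vs LCx on the other branch."""
    if is_lad(features):
        return "proximal LAD" if is_proximal_lad(features) else "LAD"
    return "LCx" if is_lcx(features) else "RCA"
```

Tuning `is_proximal_lad` and `is_lcx` for high specificity reproduces the design choices described in the METHODS.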


Subject(s)
Algorithms; Coronary Artery Disease/diagnosis; Diagnosis, Computer-Assisted/methods; Electrocardiography/methods; Emergency Medical Services/methods; Myocardial Infarction/diagnosis; Aged; Coronary Artery Disease/complications; Female; Humans; Male; Myocardial Infarction/etiology; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity
16.
J Electrocardiol; 47(6): 798-803, 2014.
Article in English | MEDLINE | ID: mdl-25172189

ABSTRACT

Defibrillation is often required to terminate a ventricular fibrillation or fast ventricular tachycardia rhythm and restore a perfusing rhythm in sudden cardiac arrest patients. Automated external defibrillators rely on automatic ECG analysis algorithms to detect the presence of shockable rhythms before advising the rescuer to deliver a shock. For reliable rhythm analysis, chest compressions must be interrupted to prevent corruption of the ECG waveform by the artifact induced by the mechanical activity of the compressions. However, these hands-off intervals adversely affect the success of treatment. To minimize hands-off intervals and increase the chance of successful resuscitation, we developed a method that requires interrupting compressions only if the underlying ECG rhythm cannot be accurately determined during chest compressions. With this method only a small percentage of cases need compression interruption, achieving a significant reduction in hands-off time. Our algorithm comprises a novel filtering technique for the ECG and thoracic impedance waveforms, and an innovative method to combine analyses of the filtered and unfiltered data. Requiring compression interruption in only 14% of cases, the algorithm achieved a sensitivity of 92% and a specificity of 99%.
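A sketch of one way to combine the filtered and unfiltered analyses; the three-way outcome labels are illustrative assumptions, not the paper's actual decision logic:

```python
def advise(filtered_analysis, unfiltered_analysis):
    """Combine rhythm analyses of compression-filtered and unfiltered ECG.
    Each argument is 'shock', 'no_shock', or 'uncertain' (assumed labels).
    Compressions are interrupted only when the analyses disagree or are
    uncertain, so most cases are decided without a hands-off interval."""
    if (filtered_analysis == unfiltered_analysis
            and filtered_analysis != "uncertain"):
        return filtered_analysis
    return "pause_compressions_and_reanalyze"
```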


Subject(s)
Artifacts , Death, Sudden, Cardiac/prevention & control , Electric Countershock/methods , Heart Massage/methods , Therapy, Computer-Assisted/methods , Ventricular Fibrillation/prevention & control , Algorithms , Clinical Alarms , Combined Modality Therapy/methods , Defibrillators , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Humans , Reproducibility of Results , Retrospective Studies , Sensitivity and Specificity , Treatment Outcome , Ventricular Fibrillation/diagnosis
17.
J Electrocardiol ; 47(6): 781-7, 2014.
Article in English | MEDLINE | ID: mdl-25200900

ABSTRACT

BACKGROUND: ECG cable interchange can generate erroneous diagnoses. Because the prevalence of interchange is low, algorithms detecting ECG cable interchange require high specificity to maintain a low total false positive rate. In this study, we propose and evaluate an improved algorithm for automatic detection and classification of ECG cable interchange. METHOD: The algorithm was developed using both ECG morphology and redundancy information. ECG morphology features included QRS-T and P-wave amplitude, frontal axis, and clockwise vector loop rotation. The redundancy features were derived from the EASI™ lead system transformation. Classification was implemented using a linear support vector machine. The development database came from multiple sources, comprising both normal subjects and cardiac patients. An independent database was used to test algorithm performance. Common cable interchanges were simulated by swapping either limb cables or precordial cables. RESULTS: For the whole validation database, the overall sensitivity and specificity for detecting precordial cable interchange were 56.5% and 99.9%, and the sensitivity and specificity for detecting limb cable interchange (excluding left arm-left leg interchange) were 93.8% and 99.9%. Defining precordial cable interchange or limb cable interchange as a single positive event, the total false positive rate was 0.7%. When the algorithm was configured for higher sensitivity, the sensitivity for detecting precordial cable interchange increased to 74.6% and the total false positive rate increased to 2.7%, while the sensitivity for detecting limb cable interchange was maintained at 93.8%. The low total false positive rate was maintained at 0.6% for the more abnormal subset of the validation database, which included only hypertrophy and infarction patients.
CONCLUSION: The proposed algorithm can detect and classify ECG cable interchanges with high specificity and a low total false positive rate, at the cost of decreased sensitivity for certain precordial cable interchanges. The algorithm can also be configured for higher sensitivity in applications where lower specificity can be tolerated.
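At decision time, a linear support vector machine reduces to a weighted sum of the feature vector plus a bias, and the operating point can be shifted toward specificity by thresholding the margin. A minimal sketch of that decision step (the feature layout, weights, and threshold are illustrative, not the trained model from the study):

```python
import numpy as np

def detect_interchange(features, weights, bias, margin_thr=0.0):
    """Linear SVM decision function: the sign of w.x + b flags a suspected
    cable interchange. Raising margin_thr trades sensitivity for the higher
    specificity needed when interchange prevalence is low."""
    margin = float(np.dot(weights, features) + bias)
    return margin > margin_thr

# Illustrative feature vector mixing the two feature families named in the
# abstract, e.g. [P-wave amplitude, QRS-T amplitude, frontal axis,
# EASI-transform redundancy residual] - layout is hypothetical.
```

A separate decision function per interchange class (limb vs. precordial) would allow the classification, not just detection, described in the abstract.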


Subject(s)
Algorithms , Diagnosis, Computer-Assisted/methods , Electrocardiography/instrumentation , Electrocardiography/methods , Electrodes , Pattern Recognition, Automated/methods , Artificial Intelligence , Humans , Reproducibility of Results , Sensitivity and Specificity
18.
J Electrocardiol ; 47(6): 819-25, 2014.
Article in English | MEDLINE | ID: mdl-25194875

ABSTRACT

BACKGROUND: Respiration rate (RR) is a critical vital sign that can be monitored to detect acute changes in patient condition (e.g., apnea) and potentially provide an early warning of impending life-threatening deterioration. Monitoring respiration signals is also critical for detecting sleep-disordered breathing such as sleep apnea. Additionally, analyzing a respiration signal can enhance the quality of medical images by gating image acquisition to the same phase of the patient's respiratory cycle. Although many methods exist for measuring respiration, in this review we focus on three ECG-derived respiration techniques we developed to obtain respiration from an ECG signal. METHODS: The first step in all three techniques is to analyze the ECG to detect beat locations and classify them. 1) The EDR method is based on analyzing the heart-axis shift due to respiration. In our method, one respiration waveform value is calculated for each normal QRS complex by measuring the peak-to-trough amplitude of the QRS. Unlike other similar EDR techniques, this method does not require removal of baseline wander from the ECG signal. 2) The RSA method uses instantaneous heart rate variability to derive a respiratory signal. It is based on the observed respiratory sinus arrhythmia governed by baroreflex sensitivity. 3) Our EMGDR method for computing a respiratory waveform measures the electromyogram (EMG) activity created by the respiratory effort of the intercostal muscles and diaphragm. The ECG signal is high-pass filtered and processed to reduce ECG components and accentuate the EMG signal before applying RMS and smoothing. RESULTS: Over the last five years, we have performed six studies using the above methods: 1) In 1907 sleep lab patients with >1.5M 30-second epochs, EDR achieved an apnea detection accuracy of 79%. 2) In 24 adult polysomnograms, RR computation from EDR and from chest belts was compared to airflow RR; the mean RR error was 1.8±2.7 for EDR and 0.8±2.1 for belts.
3) During cardiac MRI, a comparison of EMGDR breath locations to the reference abdominal belt signal yielded sensitivity/PPV of 94/95%. 4) Another comparison study for breath detection during MRI yielded sensitivity/PPV pairs of EDR: 99/97, RSA: 79/78, and EMGDR: 89/86%. 5) We tested EMGDR performance in the presence of simulated respiratory disease, using CPAP to produce PEEP. In 10 patients, no false breath waveforms were generated at mild PEEP, but they appeared in 2 subjects at high PEEP. 6) A patient monitoring study compared RR computation from EDR to impedance-derived RR and showed that EDR provides a near-equivalent RR measurement with reduced hardware circuitry requirements.
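The per-beat EDR computation described in method 1 can be sketched as follows. The QRS search window width and the synthetic usage data are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def edr_waveform(ecg, r_peaks, half_window=5):
    """One respiration sample per detected normal beat: the peak-to-trough
    amplitude within a short window around the QRS. Because baseline wander
    shifts peak and trough together, it cancels in the difference, so no
    explicit baseline removal is needed (as in the described EDR method)."""
    edr = []
    for r in r_peaks:
        lo = max(0, r - half_window)
        hi = min(len(ecg), r + half_window + 1)
        seg = ecg[lo:hi]
        edr.append(float(seg.max() - seg.min()))
    return np.array(edr)
```

The resulting one-sample-per-beat series is unevenly sampled in time; interpolating it to a uniform rate would typically precede spectral estimation of RR.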


Subject(s)
Algorithms , Apnea/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Electromyography/methods , Respiratory Rate , Humans , Pattern Recognition, Automated/methods , Polysomnography/methods , Reproducibility of Results , Sensitivity and Specificity
19.
J Electrocardiol ; 46(6): 473-9, 2013.
Article in English | MEDLINE | ID: mdl-23871657

ABSTRACT

Although the importance of quality cardiopulmonary resuscitation (CPR) and its link to survival is still emphasized, there has been recent debate about the balance between CPR and defibrillation, particularly for long response times. Defibrillation shocks for ventricular fibrillation (VF) of recently perfused hearts have high success rates for return of spontaneous circulation (ROSC), but hearts with depleted adenosine triphosphate (ATP) stores have low recovery rates. Since quality CPR has been shown both to slow the degradation process and to restore cardiac viability, a measurement of patient condition that optimizes the timing of defibrillation shocks may improve outcomes compared to time-based protocols. Researchers have proposed numerous predictive features of VF and shockable ventricular tachycardia (VT) which can be computed from the electrocardiogram (ECG) signal to distinguish between the rhythms that convert to spontaneous circulation and those that do not. We evaluated the shock-success prediction performance of thirteen of these features on a single evaluation database comprising recordings from 116 out-of-hospital cardiac arrest patients, collected for a separate study using defibrillators in ambulances and medical centers in 4 European regions and the US between March 2002 and September 2004. A total of 469 shocks preceded by VF or shockable VT rhythm episodes were identified in the recordings. Based on the experts' annotation of the post-shock rhythm, the shocks were categorized as resulting in either a pulsatile (ROSC) or non-pulsatile (no-ROSC) rhythm. The features were calculated on a 4-second ECG segment prior to shock delivery. The features examined were: Mean Amplitude, Average Peak-Peak Amplitude, Amplitude Range, Amplitude Spectrum Analysis (AMSA), Peak Frequency, Centroid Frequency, Spectral Flatness Measure (SFM), Energy, Max Power, Centroid Power, Power Spectrum Analysis (PSA), Mean Slope, and Median Slope.
Statistical hypothesis tests (two-tailed t-test and Wilcoxon test at the 5% significance level) were applied to determine whether the means and medians of these features differed significantly between the ROSC and no-ROSC groups. The ROC curve was computed for each feature, and the Area Under the Curve (AUC) was calculated. Specificity (Sp) with Sensitivity (Se) held at 90%, as well as Se with Sp held at 90%, was also computed. All features showed statistically different mean and median values between the ROSC and no-ROSC groups, with all p-values less than 0.0001. The AUC was >76% for all features. For Sp = 90%, the Se range was 33-45%; for Se = 90%, the Sp range was 49-63%. The features showed good shock-success prediction performance. We believe that a defibrillator employing a clinical decision tool based on these features has the potential to improve overall survival from cardiac arrest.
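As an illustration, two of the listed features, AMSA and Mean Slope, can be computed from a pre-shock ECG segment roughly as below. Frequency band and scaling conventions vary across publications, so this is a sketch under assumed conventions, not the study's exact implementation:

```python
import numpy as np

def amsa(ecg, fs, f_lo=2.0, f_hi=48.0):
    """Amplitude Spectrum Area: spectral amplitude weighted by frequency,
    summed over an assumed physiological VF band (f_lo..f_hi Hz)."""
    spec = np.abs(np.fft.rfft(ecg))
    freqs = np.fft.rfftfreq(len(ecg), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return float(np.sum(spec[band] * freqs[band]))

def mean_slope(ecg, fs):
    """Mean absolute first difference of the waveform, scaled to per-second."""
    return float(np.mean(np.abs(np.diff(ecg))) * fs)
```

Both features rise with coarser, higher-amplitude VF, which is why they correlate with shock success: the frequency weighting in AMSA makes a 10 Hz component contribute more than a 4 Hz component of equal amplitude.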


Subject(s)
Defibrillators, Implantable/statistics & numerical data , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Heart Arrest/mortality , Heart Arrest/prevention & control , Outcome Assessment, Health Care/methods , Survival Analysis , Algorithms , Diagnosis, Computer-Assisted/statistics & numerical data , Electrocardiography/statistics & numerical data , Heart Arrest/diagnosis , Humans , Outcome Assessment, Health Care/statistics & numerical data , Prevalence , Prognosis , Reproducibility of Results , Risk Factors , Sensitivity and Specificity , Treatment Outcome
20.
J Electrocardiol ; 46(6): 528-34, 2013.
Article in English | MEDLINE | ID: mdl-23948522

ABSTRACT

BACKGROUND: ECG detection of ST-segment elevation myocardial infarction (STEMI) in the presence of left bundle-branch block (LBBB) is challenging due to the ST deviation caused by the altered conduction. The purpose of this study was to introduce a new algorithm for STEMI detection in LBBB and compare its performance to three existing algorithms. METHODS: Source data for the study group (143 with acute MI and 239 controls) come from multiple sources. ECGs were selected by computer interpretation of LBBB. The acute MI reference was the hospital discharge diagnosis. Automated measurements came from the Philips DXL algorithm. Three existing algorithms were compared: (1) the Sgarbossa criteria, (2) the Selvester 10% RS criteria, and (3) the Smith 25% S-wave criteria. The new algorithm uses an ST threshold based on QRS area. All algorithms share the concordant ST elevation and anterior ST depression criteria from the Sgarbossa score; they differ in the threshold for discordant ST elevation. The Sgarbossa, Selvester, Smith, and Philips discordant ST elevation criteria are (1) ST elevation ≥ 500 µV, (2) ST elevation ≥ 10% of |S|-|R| plus STEMI limits, (3) ST elevation ≥ 25% of the S-wave amplitude, and (4) ST elevation ≥ 100 µV + 1050 µV/Ash * QRS area. The Smith S-wave and Philips QRS area criteria were tested using both a single-lead and a 2-lead requirement. Algorithm performance was measured by sensitivity, specificity, and positive likelihood ratio (LR+). RESULTS: Algorithm performance can be organized into bands of similar sensitivity and specificity, ranging from Sgarbossa score ≥ 3 with the lowest sensitivity and highest specificity (13.3% and 97.9%) to the Selvester 10% rule with the highest sensitivity and lower specificity (30.1% and 93.2%). The Smith S-wave and Philips QRS area algorithms fell in the middle band, with sensitivity and specificity of (20.3%, 94.9%) and (23.8%, 95.8%), respectively.
CONCLUSION: As the difference between Sgarbossa score ≥ 3 and the other algorithms for STEMI in LBBB shows, a discordant ST elevation criterion improves detection sensitivity but also reduces specificity. For applications of automated STEMI detection that require higher sensitivity, the Selvester algorithm is better. For applications that require a low false positive rate, such as relying on the algorithm for pre-hospital activation of the cardiac catheterization laboratory for urgent PCI, the 2-lead Philips QRS area or Smith 25% S-wave algorithm may be preferable.
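For illustration, the Sgarbossa and Smith discordant-ST thresholds stated above reduce to simple comparisons on per-lead measurements. This is a sketch of the thresholds only; the Selvester and Philips rules require additional quantities (STEMI limits, QRS area), and a full algorithm would also apply the shared concordant-ST and anterior ST-depression criteria and any 2-lead requirement:

```python
def sgarbossa_discordant(st_uv):
    """Fixed threshold: discordant ST elevation >= 500 uV."""
    return st_uv >= 500.0

def smith_discordant(st_uv, s_amp_uv):
    """Proportional threshold: discordant ST elevation >= 25% of the
    S-wave amplitude (|S|) in the same lead."""
    return s_amp_uv > 0 and st_uv >= 0.25 * s_amp_uv
```

The contrast is visible in a deep-S lead: 300 µV of discordant ST elevation fails the fixed 500 µV Sgarbossa threshold but meets the Smith criterion when the S wave is 1000 µV, which is consistent with the sensitivity/specificity bands reported in the results.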


Subject(s)
Algorithms , Bundle-Branch Block/complications , Bundle-Branch Block/diagnosis , Diagnosis, Computer-Assisted/methods , Electrocardiography/methods , Myocardial Infarction/complications , Myocardial Infarction/diagnosis , Diagnosis, Differential , Female , Humans , Male , Reproducibility of Results , Sensitivity and Specificity