ABSTRACT
BACKGROUND: Mobile Cardiac Outpatient Telemetry (MCOT) can be used to screen high-risk patients for atrial fibrillation (AF). These devices rely primarily on algorithmic detection of AF events, which are then stored and transmitted to a clinician for review. It is critical that the positive predictive value (PPV) of MCOT-detected AF is high, and this often leads to reduced sensitivity, as device manufacturers try to limit false positives. OBJECTIVE: The purpose of this study was to design a two-stage classifier using artificial intelligence (AI) to improve the PPV of MCOT-detected atrial fibrillation episodes whilst maintaining high levels of detection sensitivity. METHODS: A low-complexity, RR-interval-based AF classifier was paired with a deep convolutional neural network (DCNN) to create a two-stage classifier. The DCNN was limited in size to allow it to be embedded on MCOT devices. The DCNN was trained on 491,727 ECGs from a proprietary database and contained 128,612 parameters requiring only 158 KB of storage. The performance of the two-stage classifier was then assessed using publicly available datasets. RESULTS: The sensitivity of AF detected by the low-complexity classifier was high across all datasets (>93%); however, the PPV was poor (<76%). Subsequent analysis by the DCNN increased episode PPV substantially across all datasets (>11%), with only a minor loss in sensitivity (<5%). This increase in PPV was due to a decrease in the number of false positive detections. Further analysis showed that DCNN processing was only required on around half of the analysis windows, offering a significant computational saving over using the DCNN as a one-stage classifier. CONCLUSION: DCNNs can be combined with existing MCOT classifiers to increase the PPV of detected AF episodes. This reduces the review burden for physicians and can be achieved with only a modest decrease in sensitivity.
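The two-stage arrangement described above can be illustrated with a minimal sketch. The RMSSD-based irregularity measure, the screening threshold and the `dcnn_predict` callable are illustrative assumptions, not the proprietary MCOT classifier or the embedded DCNN.

```python
import numpy as np

def rr_screen(rr_intervals, threshold=0.1):
    """Stage 1: low-complexity RR-interval irregularity screen.

    Flags a window as suspected AF when the normalised variability of
    successive R-R intervals exceeds a tunable threshold.
    """
    rr = np.asarray(rr_intervals, dtype=float)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return (rmssd / np.mean(rr)) > threshold

def two_stage_af_detector(ecg_window, rr_intervals, dcnn_predict):
    """Stage 2 (the DCNN) runs only when stage 1 raises a suspicion,
    which is why roughly half of the analysis windows never reach it."""
    if not rr_screen(rr_intervals):
        return False                       # clear negative, DCNN skipped
    return dcnn_predict(ecg_window) > 0.5  # DCNN confirms or rejects the episode
```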
Subject(s)
Atrial Fibrillation , Deep Learning , Humans , Atrial Fibrillation/diagnosis , Electrocardiography , Artificial Intelligence , Neural Networks, Computer
ABSTRACT
Impedance cardiography (ICG) is a low-cost, non-invasive technique that enables the clinical assessment of haemodynamic parameters, such as cardiac output and stroke volume (SV). Conventional ICG recordings are taken from the patient's thorax. However, access to ICG vital signs from the upper-arm brachial artery (as an associated surrogate) can enable user-convenient wearable armband sensor devices to provide an attractive option for gathering ICG trend-based indicators of general health, which offers particular advantages in ambulatory long-term monitoring settings. This study considered upper-arm ICG (Arm-ICG) and control Thorax-ICG recordings from 15 healthy subject cases. A prefiltering stage included a third-order Savitzky-Golay finite impulse response (FIR) filter, which was applied to the raw ICG signals. Then, a multi-stage wavelet-based denoising strategy on a beat-by-beat (BbyB) basis, supported by a recursive signal-averaging optimal thresholding adaptation algorithm for Arm-ICG signals, was investigated for robust signal quality enhancement. The performance of the BbyB ICG denoising was evaluated for each case using a 700 ms frame centred on the heartbeat ICG pulse. This frame was extracted from a 600-beat ensemble signal-averaged ICG and was used as the noiseless signal reference vector (gold standard frame). Furthermore, in each subject case, enhanced Arm-ICG and Thorax-ICG beats with a correlation above 0.95 with the noiseless reference vector enabled analysis of the beat inclusion rate (BIR%), which averaged 80.9% for Arm-ICG and 100% for Thorax-ICG, and of the BbyB accuracy and precision of the ICG waveform feature metrics A, B, C and VET, yielding respective error rates (ER%) of 0.83%, 11.1%, 3.99% and 5.2% for Arm-ICG, and 0.41%, 3.82%, 1.66% and 1.25% for Thorax-ICG. Hence, the functional relationship between ICG metrics within and between the arm and thorax recording modes could be characterised and the linear regression (Arm-ICG vs. Thorax-ICG) trends could be analysed. Overall, it was found in this study that recursive averaging with a 36-beat buffer size was the best Arm-ICG BbyB denoising process, with an average error rate of less than 3.3% across the Arm-ICG time metrics. It was also found that arm SV versus thorax SV had a linear regression coefficient of determination (R2) of 0.84.
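A minimal sketch of this kind of beat-by-beat denoising pipeline is given below (Savitzky-Golay prefiltering followed by wavelet soft-thresholding). The wavelet family, decomposition level, window length and universal threshold are illustrative assumptions; the study's recursive signal-averaging threshold adaptation is not reproduced here.

```python
import numpy as np
import pywt
from scipy.signal import savgol_filter

def denoise_icg_beat(beat, wavelet="db4", level=4):
    """Illustrative beat-by-beat ICG denoising: SG prefilter + wavelet
    soft thresholding, with the noise level estimated from the finest
    detail coefficients (universal threshold)."""
    # Stage 1: third-order Savitzky-Golay FIR smoothing (15-sample window)
    pre = savgol_filter(np.asarray(beat, dtype=float),
                        window_length=15, polyorder=3)
    # Stage 2: wavelet decomposition and soft thresholding of detail bands
    coeffs = pywt.wavedec(pre, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(pre)))         # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(pre)]
```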
Subject(s)
Cardiography, Impedance , Hemodynamics , Humans , Cardiac Output/physiology , Stroke Volume/physiology , Cardiography, Impedance/methods , Hemodynamics/physiology , Monitoring, Ambulatory
ABSTRACT
In this commentary paper, we discuss the use of the electrocardiogram to help clinicians make diagnostic and patient referral decisions in acute care settings. The paper discusses the factors that are likely to contribute to the variability and noise in the clinical decision-making process for catheterization lab activation. These factors include variable competence in reading ECGs, intra-/inter-rater reliability, the lack of standard ECG training, the various ECG machine and filter settings, cognitive biases (such as automation bias, the tendency to agree with the computer-aided or AI diagnosis), the order in which information is received, tiredness or decision fatigue, as well as ECG artefacts such as signal noise or lead misplacement. We also discuss potential research questions and tools that could be used to mitigate this 'noise' and improve the quality of ECG-based decision making.
Subject(s)
Diagnosis, Computer-Assisted , Electrocardiography , Clinical Decision-Making , Decision Making , Humans , Reproducibility of Results
ABSTRACT
Deep Convolutional Neural Networks (DCNNs) have been shown to provide improved performance over traditional heuristic algorithms for the detection of arrhythmias from ambulatory ECG recordings. However, these DCNNs have primarily been trained and tested on device-specific databases with standardised electrode positions and uniform sampling frequencies. This work explores the possibility of training a DCNN for Atrial Fibrillation (AF) detection on a database of single-lead ECG rhythm strips extracted from resting 12-lead ECGs. We then test the performance of the DCNN on recordings from ambulatory ECG devices with different recording leads and sampling frequencies. We developed an extensive proprietary resting 12-lead ECG dataset of 549,211 patients. This dataset was randomly split into a training set of 494,289 patients and a testing set of the remaining 54,922 patients. We trained a 34-layer convolutional DCNN to detect AF and other arrhythmias on this dataset. The DCNN was then validated on two PhysioNet databases commonly used to benchmark automated ECG algorithms: (1) the MIT-BIH Arrhythmia Database and (2) the MIT-BIH Atrial Fibrillation Database. Validation was performed following the EC57 guidelines, with performance assessed by gross episode and duration sensitivity and positive predictive value (PPV). Finally, validation was also performed on a selection of rhythm strips from an ambulatory ECG patch that a committee of board-certified cardiologists annotated. On MIT-BIH, the DCNN achieved a sensitivity of 100% and a PPV of 84% in detecting episodes of AF, and 100% sensitivity and 94% PPV in quantifying AF episode duration. On AFDB, the DCNN achieved a sensitivity of 94% and a PPV of 98% in detecting episodes of AF, and 98% sensitivity and 100% PPV in quantifying AF episode duration. On the patch database, the DCNN demonstrated performance that was closely comparable to that of a cardiologist. The results indicate that DCNN models can learn features that generalize between resting 12-lead and ambulatory ECG recordings, allowing DCNNs to be device-agnostic for detecting arrhythmias from single-lead ECG recordings and enabling a range of clinical applications.
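As a simplified illustration of the duration-based statistics referred to above (not the full EC57 episode-matching procedure), duration sensitivity and PPV can be computed from per-sample boolean AF labels as follows; the variable names are assumptions for illustration.

```python
import numpy as np

def duration_sensitivity_ppv(true_af, pred_af):
    """Duration sensitivity: fraction of true AF time that was detected.
    Duration PPV: fraction of detected AF time that was truly AF."""
    true_af = np.asarray(true_af, dtype=bool)
    pred_af = np.asarray(pred_af, dtype=bool)
    overlap = np.sum(true_af & pred_af)
    sensitivity = overlap / max(np.sum(true_af), 1)
    ppv = overlap / max(np.sum(pred_af), 1)
    return sensitivity, ppv
```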
Subject(s)
Atrial Fibrillation , Humans , Atrial Fibrillation/diagnosis , Electrocardiography , Rest
ABSTRACT
Sudden cardiac death (SCD) risk can be reduced by early detection of short-lived and transient cardiac arrhythmias using long-term electrocardiographic (ECG) monitoring. Early detection of ventricular arrhythmias can reduce the risk of SCD by allowing appropriate interventions. Long-term continuous ECG monitoring using a non-invasive armband-based wearable device is an appealing solution for detecting early heart rhythm abnormalities. However, there is limited understanding of how many bipolar ECG electrode pairs are needed and of their best axial orientation around the left mid-upper arm circumference (MUAC) for such devices. This study addresses the question of the best axial orientation of bipolar ECG electrode pairs around the left MUAC in non-invasive armband-based wearable devices for the early detection of heart rhythm abnormalities. A total of 18 subjects with closely matched BMI values from the WASTCArD arm-ECG database were selected to assess arm-ECG bipolar lead quality using proposed metrics of relative (normalised) signal strength, arm-ECG detection performance for the main ECG waveform component (QRS) and heart-rate variability (HRV) in six derived bipolar arm ECG-lead sensor pairs around the armband circumference, with regularly spaced axis orientations (at 30° steps). The analysis revealed that the angular range from -30° to +30° of arm-lead sensor pair axis orientation around the arm, including the 0° axis (which is co-planar with the chest plane), provided the best orientation on the arm for reasonably good QRS detection, presenting the highest median sensitivity (Se) of 93.3%, median precision (PPV) of 99.6%, HRV RMS correlation (p) of 0.97 and coefficient of determination (R2) of 0.95 against HRV gold-standard values measured in the standard Lead-I ECG.
Subject(s)
Arm , Wearable Electronic Devices , Arrhythmias, Cardiac/diagnosis , Electrocardiography , Electrodes , Humans
ABSTRACT
Inertial sensors are widely used in human motion monitoring. Orientation and position are the two most widely used measurements for motion monitoring. Tracking with multiple inertial sensors is based on kinematic modelling, which achieves a good level of accuracy when biomechanical constraints are applied. More recently, there has been growing interest in tracking motion with a single inertial sensor to simplify the measurement system. The dead reckoning method is commonly used for estimating position from inertial sensors. However, significant errors are generated after applying the dead reckoning method because of the presence of sensor offsets and drift. These errors limit the feasibility of monitoring upper limb motion via a single inertial sensing system. In this paper, error correction methods are evaluated to investigate the feasibility of using a single sensor to track the movement of one upper limb segment. These include zero velocity update, wavelet analysis and high-pass filtering. The experiments were carried out using the nine-hole peg test. The results show that zero velocity update is the most effective method for correcting the drift from dead reckoning-based position tracking. If this method is used, then the use of a single inertial sensor to track the movement of a single limb segment is feasible.
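The zero velocity update idea can be sketched in a few lines; this single-axis example with a simple stillness threshold is an assumption for illustration, and the stationarity detection used in the study may differ.

```python
import numpy as np

def track_position_zupt(acc, fs, still_threshold=0.05):
    """Dead reckoning with a zero-velocity update (ZUPT).

    acc: gravity-compensated acceleration along one axis (m/s^2)
    fs:  sampling frequency (Hz)
    When the sensor is judged stationary (|acc| below a threshold), the
    integrated velocity is reset to zero, removing most of the drift that
    plain double integration accumulates."""
    acc = np.asarray(acc, dtype=float)
    dt = 1.0 / fs
    vel = np.zeros_like(acc)
    pos = np.zeros_like(acc)
    for i in range(1, len(acc)):
        vel[i] = vel[i - 1] + acc[i] * dt
        if abs(acc[i]) < still_threshold:   # stationary: apply the ZUPT
            vel[i] = 0.0
        pos[i] = pos[i - 1] + vel[i] * dt
    return pos
```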
Subject(s)
Movement , Upper Extremity , Humans , Motion , Biomechanical Phenomena
ABSTRACT
PURPOSE: Congenital heart disease (CHD) is the most common live birth defect, and a proportion of these patients have chronic hypoxia. Chronic hypoxia leads to secondary erythrocytosis, resulting in microvascular dysfunction and increased thrombosis risk. The conjunctival microcirculation is easily accessible for imaging and quantitative assessment. It has not previously been studied in adult CHD patients with cyanosis (CCHD). METHODS: We assessed the conjunctival microcirculation and compared CCHD patients and matched healthy controls to determine whether there were differences in measured microcirculatory parameters. We acquired images using an iPhone 6s and a slit-lamp biomicroscope. Parameters measured included diameter, axial velocity, wall shear rate and blood volume flow. The axial velocity was estimated by applying the 1D + T continuous wavelet transform (CWT). Results are for all vessels, as they were not sub-classified into arterioles or venules. RESULTS: 11 CCHD patients and 14 healthy controls were recruited to the study. CCHD patients were markedly more hypoxic than the healthy controls (84% vs 98%, p = 0.001). A total of 736 vessels (292 vs 444) were suitable for analysis. Mean microvessel diameter (D) did not significantly differ between the CCHD patients and controls (20.4 ± 2.7 µm vs 20.2 ± 2.6 µm, p = 0.86). Axial velocity (Va) was lower in the CCHD patients (0.47 ± 0.06 mm/s vs 0.53 ± 0.05 mm/s, p = 0.03). Blood volume flow (Q) was lower for CCHD patients (121 ± 30 pl/s vs 145 ± 50 pl/s, p = 0.65), with the greatest differences observed in vessels >22 µm in diameter (216 ± 121 pl/s vs 258 ± 154 pl/s, p = 0.001). Wall shear rate (WSR) was significantly lower for the CCHD group (153 ± 27 s-1 vs 174 ± 22 s-1, p = 0.04). CONCLUSIONS: This iPhone and slit-lamp combination assessment of conjunctival vessels found lower axial velocity, lower wall shear rate and, in the largest vessel group, lower blood volume flow in chronically hypoxic patients with congenital heart disease. With further study, this assessment method may have utility in the evaluation of patients with chronic hypoxia.
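The reported haemodynamic quantities are related through commonly used relations; the sketch below assumes a parabolic velocity profile (cross-sectional mean velocity ≈ axial velocity / 1.6) and Poiseuille flow, which may differ from the exact expressions used in the study.

```python
import math

def conjunctival_haemodynamics(diameter_um, axial_velocity_mm_s):
    """Derive blood volume flow (pl/s) and wall shear rate (s^-1) from
    vessel diameter (um) and centre-line axial velocity (mm/s)."""
    vm_um_s = (axial_velocity_mm_s / 1.6) * 1000.0   # mean velocity, um/s
    area_um2 = math.pi * (diameter_um / 2.0) ** 2    # lumen cross-section, um^2
    flow_pl_s = vm_um_s * area_um2 / 1000.0          # 1 pl/s = 1000 um^3/s
    wall_shear_rate = 8.0 * vm_um_s / diameter_um    # Poiseuille approximation
    return flow_pl_s, wall_shear_rate
```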
Subject(s)
Conjunctiva/blood supply , Cyanosis/diagnosis , Heart Defects, Congenital/diagnosis , Microcirculation , Slit Lamp Microscopy , Adult , Blood Flow Velocity , Case-Control Studies , Cyanosis/etiology , Cyanosis/physiopathology , Female , Heart Defects, Congenital/complications , Heart Defects, Congenital/physiopathology , Humans , Male , Middle Aged , Predictive Value of Tests , Regional Blood Flow , Slit Lamp , Slit Lamp Microscopy/instrumentation , Smartphone , Stress, Mechanical , Young Adult
ABSTRACT
Compartment-based infectious disease models that treat the transmission rate (or contact rate) as a constant during the course of an epidemic can be limited in how effectively they capture the dynamics of infectious disease. This study proposed a novel approach based on a dynamic, time-varying transmission rate with a control rate governing the speed of disease spread, which may be associated with information related to infectious disease interventions. Integrating multiple sources of data with disease modelling has the potential to improve modelling performance. Taking the global mobility trend for vehicle driving available via Apple Maps as an example, this study explored different ways of processing the mobility trend data and investigated their relationship with the control rate. The proposed method was evaluated on COVID-19 data from six European countries. The results suggest that the proposed model with a dynamic transmission rate improved model fitting and forecasting performance during the early stage of the pandemic. A positive correlation was found between the average daily change of the mobility trend and the control rate. The results encourage further work on incorporating multiple data sources into infectious disease modelling.
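The dynamic transmission rate idea can be sketched with a discrete-time SIR model whose transmission rate is a function of time; the exponential decay scaled by a control rate shown here is only one illustrative coupling, not the exact model or mobility-trend linkage used in the study.

```python
import numpy as np

def simulate_sir(S0, I0, R0, beta_t, gamma, days):
    """Discrete-time SIR model with a time-varying transmission rate beta_t(t)."""
    N = S0 + I0 + R0
    S, I, R = [float(S0)], [float(I0)], [float(R0)]
    for t in range(1, days):
        new_inf = beta_t(t) * S[-1] * I[-1] / N   # new infections on day t
        new_rec = gamma * I[-1]                   # new recoveries on day t
        S.append(S[-1] - new_inf)
        I.append(I[-1] + new_inf - new_rec)
        R.append(R[-1] + new_rec)
    return np.array(S), np.array(I), np.array(R)

# Example coupling: transmission decays from beta0 at a control rate c,
# where c could in turn be informed by a processed mobility-trend signal.
beta0, c = 0.4, 0.03
S, I, R = simulate_sir(1e6, 100, 0, lambda t: beta0 * np.exp(-c * t), gamma=0.1, days=120)
```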
Subject(s)
COVID-19 , Malus , Forecasting , Humans , Pandemics , SARS-CoV-2
ABSTRACT
This paper provides a brief description of how computer programs are used to automatically interpret electrocardiograms (ECGs), and also provides a discussion regarding new opportunities. The algorithms that are typically used today in hospitals are knowledge-engineered: a computer programmer manually writes computer code and logical statements which are then used to deduce a possible diagnosis. The computer programmer's code represents the criteria and knowledge used by clinicians when reading ECGs. This is in contrast to supervised machine learning (ML) approaches, which use large, labelled ECG datasets to induce their own 'rules' for automatically classifying ECGs. Although there are many ML techniques, deep neural networks are being increasingly explored as ECG classification algorithms when trained on large ECG datasets. Whilst this paper presents some of the pros and cons of each of these approaches, perhaps there are opportunities to develop hybridised algorithms that combine both knowledge-driven and data-driven techniques. In this paper, it is pointed out that open ECG data can dramatically influence what international ECG ML researchers focus on and that, ideally, open datasets could align with real-world clinical challenges. In addition, some of the pitfalls and opportunities for ML with ECGs are outlined. A potential opportunity for the ECG community is to provide guidelines to researchers to help guide ECG ML practices. For example, whilst general ML guidelines exist, there is perhaps a need to recommend approaches for 'stress testing' and evaluating ML algorithms for ECG analysis, e.g. testing the algorithm with noisy ECGs and ECGs acquired using common lead and electrode misplacements. This paper provides a primer on ECG ML and discusses some of the key challenges and opportunities.
Subject(s)
Algorithms , Electrocardiography , Exercise Test , Humans , Machine Learning , Neural Networks, Computer
ABSTRACT
Automated interpretation of the 12-lead ECG has remained an underpinning interest throughout decades of research that has seen a diversity of computing applications in cardiology. The application of computers in cardiology began in the 1960s, with early research focusing on the conversion of analogue ECG signals (voltages) to digital samples. Alongside this, software techniques that automated the extraction of wave measurements and provided basic diagnostic statements began to emerge. In the years since then there have been many significant milestones, which include the widespread commercialisation of 12-lead ECG interpretation software, associated clinical utility and the development of the related regulatory frameworks to promote standardised development. In the past few years, the research community has seen a significant rejuvenation in the development of ECG interpretation programs. This is evident in the research literature, where a large number of studies have emerged tackling a variety of automated ECG interpretation problems. This is largely due to two factors: the technical advances, both software and hardware, that have facilitated the broad adoption of modern artificial intelligence (AI) techniques, and the increasing availability of large datasets that support modern AI approaches. In this article we provide a very high-level overview of the operation of, and approach to the development of, early 12-lead ECG interpretation programs, and we contrast this with the approaches now seen in emerging AI systems. Our overview is mainly focused on highlighting differences in how input data are handled prior to generation of the diagnostic statement.
Subject(s)
Cardiology , Deep Learning , Algorithms , Artificial Intelligence , Electrocardiography , Humans
ABSTRACT
INTRODUCTION: Electrode misplacement and interchange errors are known problems when recording the 12-lead electrocardiogram (ECG). Automatic detection of these errors could play an important role in improving clinical decision making and outcomes in cardiac care. The objectives of this systematic review and meta-analysis were to 1) study the impact of electrode misplacement on ECG signals and ECG interpretation, 2) determine the most challenging electrode misplacements to detect using machine learning (ML), 3) analyse the performance of ML algorithms that detect electrode misplacement or interchange in terms of sensitivity and specificity, and 4) identify the most commonly used ML technique for detecting electrode misplacement/interchange. This review analysed the current literature regarding electrode misplacement/interchange recognition accuracy using machine learning techniques. METHOD: A search of three online databases, including IEEE, PubMed and ScienceDirect, identified 228 articles, while 3 articles were included from additional sources from co-authors. According to the eligibility criteria, 14 articles were selected. The selected articles were considered for qualitative analysis and meta-analysis. RESULTS: The articles showed the effect of lead interchange on ECG morphology and, as a consequence, on patient diagnoses. Statistical analysis of the included articles found that machine learning performance is high in detecting electrode misplacement/interchange, except for left arm/left leg interchange. CONCLUSION: This review emphasises the importance of electrode misplacement detection in ECG diagnosis and its effects on decision making. Machine learning shows promise in detecting lead misplacement/interchange and highlights an opportunity for developing and operationalising deep learning algorithms, such as convolutional neural networks (CNNs), to detect electrode misplacement/interchange.
Subject(s)
Electrocardiography , Machine Learning , Algorithms , Electrodes , Humans , Neural Networks, Computer
ABSTRACT
PURPOSE: The conjunctival microcirculation is a readily-accessible vascular bed for quantitative haemodynamic assessment and has been studied previously using a digital charge-coupled device (CCD). Smartphone video imaging of the conjunctiva, and haemodynamic parameter quantification, represents a novel approach. We report the feasibility of smartphone video acquisition and subsequent haemodynamic measure quantification via semi-automated means. METHODS: Using an Apple iPhone 6s and a Topcon SL-D4 slit-lamp biomicroscope, we obtained videos of the conjunctival microcirculation in 4 fields of view per patient, for 17 low cardiovascular risk patients. After image registration and processing, we quantified the diameter, mean axial velocity, mean blood volume flow, and wall shear rate for each vessel studied. Vessels were grouped into quartiles based on their diameter, i.e. group 1 (<11 µm), 2 (11-16 µm), 3 (16-22 µm) and 4 (>22 µm). RESULTS: From the 17 healthy controls (mean QRISK3 6.6%), we obtained quantifiable haemodynamics from 626 vessel segments. The mean diameter of microvessels, across all sites, was 21.1 µm (range 5.8-58 µm). Mean axial velocity was 0.50 mm/s (range 0.11-1 mm/s) and there was a modestly positive correlation (r = 0.322) seen with increasing diameter, best appreciated when comparing group 4 to the remaining groups (p < .0001). Blood volume flow (mean 145.61 pl/s, range 7.05-1178.81 pl/s) was strongly correlated with increasing diameter (r = 0.943, p < .0001) and wall shear rate (mean 157.31 s-1, range 37.37-841.66 s-1) negatively correlated with increasing diameter (r = -0.703, p < .0001). CONCLUSIONS: We, for the first time, report the successful assessment and quantification of the conjunctival microcirculatory haemodynamics using a smartphone-based system.
Subject(s)
Cardiovascular Diseases/diagnosis , Conjunctiva/blood supply , Diagnostic Techniques, Ophthalmological/instrumentation , Hemodynamics , Microcirculation , Slit Lamp , Smartphone , Adult , Blood Flow Velocity , Cardiovascular Diseases/physiopathology , Case-Control Studies , Feasibility Studies , Female , Hemorheology , Humans , Image Interpretation, Computer-Assisted , Male , Middle Aged , Mobile Applications , Models, Cardiovascular , Predictive Value of Tests , Regional Blood Flow
ABSTRACT
BACKGROUND: Body surface potential mapping (BSPM) provides additional electrophysiological information that can be useful for the detection of cardiac diseases. Moreover, BSPMs are currently utilized in electrocardiographic imaging (ECGI) systems within clinical practice. Missing information due to noisy recordings or poor electrode contact is inevitable. In this study, we present an interpolation method that combines Laplacian minimization and principal component analysis (PCA) techniques for interpolating this missing information. METHOD: The dataset used consisted of 117-lead BSPMs recorded from 744 subjects (a training set of 384 subjects and a test set of 360). This dataset is a mixture of normal, old myocardial infarction, and left ventricular hypertrophy subjects. The missing data were simulated by ignoring data recorded from 7 regions: the first region represents three rows of five electrodes on the anterior torso surface (a high potential gradient region), and the other six regions were realistic patterns drawn from clinical data that represent the most likely regions of broken electrodes. Three interpolation methods, including PCA-based interpolation, Laplacian interpolation, and a hybrid Laplacian-PCA interpolation method, were used to interpolate the missing data from the remaining electrodes. In the simulated region of missing data, the potentials calculated by each interpolation method were compared with the measured potentials using relative error (RE) and correlation coefficient (CC) over time. In the hybrid Laplacian-PCA interpolation method, the missing data are first interpolated using Laplacian interpolation; the resulting BSPM of 117 potentials is then multiplied by the (117 × 117) coefficient matrix calculated from the training set to obtain the principal components. Out of 117 principal components (PCs), the first 15 PCs were utilized for the second stage of interpolation, as this choice gave the best interpolation performance. RESULTS: The differences in the median relative error (RE) between the Laplacian and hybrid methods ranged from 0.01 to 0.35 (p < 0.001), while the differences in the median correlation between them ranged from 0.0006 to 0.034 (p < 0.001). The PCA-based interpolation method performed poorly, especially in scenarios where the number of missing electrodes was 12 or higher, producing a large region of missing data. The median RE for the PCA method was between 0.05 and 0.6 lower than that for the hybrid method (p < 0.001), while the median correlation was between 0.0002 and 0.26 lower than the figure for the hybrid method (p < 0.001). CONCLUSION: Comparison of the three interpolation methods (Laplacian, PCA, hybrid) in reconstructing missing data in BSPM showed that the hybrid method was always better than the other methods in all scenarios, whether the number of missing electrodes was high or low, and irrespective of the location of the missing electrodes.
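A compact sketch of the hybrid interpolation stage is given below. The variable names, the `laplacian_interpolate` callable and the exact projection details are assumptions for illustration; the study's implementation may differ.

```python
import numpy as np

def hybrid_laplacian_pca_interpolate(bspm, missing_idx, mean_train, pcs,
                                     laplacian_interpolate, n_pc=15):
    """Two-stage (Laplacian then PCA) interpolation of missing BSPM leads.

    bspm:       117 electrode potentials at one time instant, unreliable at missing_idx
    mean_train: mean training-set potential vector, shape (117,)
    pcs:        PCA basis learned from the training set, shape (117, 117)
    laplacian_interpolate: callable performing the stage-1 surface interpolation
    """
    # Stage 1: fill the missing leads using Laplacian (surface-smoothness) interpolation
    filled = np.asarray(laplacian_interpolate(bspm, missing_idx), dtype=float)
    # Stage 2: project the completed map onto the first n_pc principal components
    scores = (filled - mean_train) @ pcs[:, :n_pc]
    recon = mean_train + scores @ pcs[:, :n_pc].T     # low-rank reconstruction
    # Keep measured leads as they are; replace only the missing ones
    filled[missing_idx] = recon[missing_idx]
    return filled
```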
Subject(s)
Body Surface Potential Mapping , Electrocardiography , Myocardial Infarction , Electrodes , Humans , Hypertrophy, Left Ventricular , Myocardial Infarction/diagnosis
ABSTRACT
BACKGROUND: Electrocardiogram (ECG) lead misplacement can adversely affect ECG diagnosis and subsequent clinical decisions. V1 and V2 are commonly placed superior to their correct position. The aim of the current study was to use machine learning approaches to detect V1 and V2 lead misplacement in order to enhance ECG data quality. METHOD: ECGs for 453 patients (normal n = 151, Left Ventricular Hypertrophy (LVH) n = 151, Myocardial Infarction n = 151) were extracted from body surface potential maps. These were used to extract both the correctly and incorrectly placed V1 and V2 leads. The prevalence of correct and incorrect leads was 50% each. Sixteen features were extracted in three different domains: time-based, statistical and time-frequency features using a wavelet transform. A hybrid feature selection approach was applied to select an optimal set of features. To ensure optimal model selection, five classifiers were used and compared. The aforementioned feature selection approach and classifiers were applied to V1 and V2 misplacement in three different positions: the first, second and third intercostal spaces (ICS). RESULTS: The accuracy for V1 misplacement detection was 93.9%, 89.3% and 72.8% in the first, second and third ICS, respectively. For V2, the accuracy was 93.6%, 86.6% and 68.1% in the first, second and third ICS, respectively. There is a noticeable decline in accuracy when detecting misplacement in the third ICS, which is expected.
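The feature-then-classifier pipeline described above can be sketched as follows. The classifier set and hyper-parameters shown are illustrative assumptions (the five classifiers compared in the study are not named here), and `X`/`y` stand for the extracted feature matrix and the correct/misplaced labels.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def compare_misplacement_classifiers(X, y, cv=5):
    """Cross-validated accuracy of candidate classifiers for detecting
    V1/V2 misplacement from time, statistical and wavelet features."""
    models = {
        "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
        "svm_rbf": SVC(kernel="rbf"),
        "knn": KNeighborsClassifier(n_neighbors=5),
    }
    return {name: cross_val_score(m, X, y, cv=cv).mean() for name, m in models.items()}
```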
Subject(s)
Electrocardiography , Myocardial Infarction , Electrodes , Humans , Machine Learning , Thorax
ABSTRACT
INTRODUCTION: Interpretation of the 12-lead Electrocardiogram (ECG) is normally assisted with an automated diagnosis (AD), which can facilitate an 'automation bias' whereby interpreters become anchored to the suggestion. In this paper, we studied 1) the effect of an incorrect AD on interpretation accuracy and interpreter confidence (a proxy for uncertainty), and 2) whether confidence and other interpreter features can predict interpretation accuracy using machine learning. METHODS: This study analysed 9000 ECG interpretations from cardiology and non-cardiology fellows (CFs and non-CFs). One third of the ECGs involved no ADs, one third were presented with ADs (half of them incorrect) and one third had multiple ADs. Interpretations were scored and interpreter confidence was recorded for each interpretation and subsequently standardised using sigma scaling. Spearman coefficients were used for correlation analysis and C5.0 decision trees were used for predicting interpretation accuracy from basic interpreter features such as confidence, age, experience and designation. RESULTS: Interpretation accuracies achieved by CFs and non-CFs dropped by 43.20% and 58.95% respectively when an incorrect AD was presented (p < 0.001). The overall correlation between scaled confidence and interpretation accuracy was higher amongst CFs. However, the correlation between confidence and interpretation accuracy decreased for both groups when an incorrect AD was presented. We found that an incorrect AD disturbs the reliability of interpreter confidence in predicting accuracy. An incorrect AD has a greater effect on the confidence of non-CFs (although this is not statistically significant, it is close to the threshold, p = 0.065). The best C5.0 decision tree achieved an accuracy rate of 64.67% (p < 0.001); however, this is only 6.56% greater than the no-information rate. CONCLUSION: Incorrect ADs reduce the interpreter's diagnostic accuracy, indicating an automation bias. Non-CFs tend to agree more with the ADs than CFs, hence less expert physicians are more affected by automation bias. Incorrect ADs reduce the interpreter's confidence and also reduce the predictive power of confidence for predicting accuracy (even more so for non-CFs). Whilst a statistically significant model was developed, it is difficult to predict interpretation accuracy using machine learning on basic features such as interpreter confidence, age, reader experience and designation.
Subject(s)
Arrhythmias, Cardiac/diagnosis , Automation , Clinical Competence , Diagnostic Errors/statistics & numerical data , Electrocardiography , Bias , Decision Trees , Humans , Observer Variation , Uncertainty
ABSTRACT
BACKGROUND: In clinical practice, data archiving of resting 12-lead electrocardiograms (ECGs) is mainly achieved by storing a PDF report in the hospital electronic health record (EHR). When available, digital ECG source data (raw samples) are only retained within the ECG management system. OBJECTIVE: The widespread availability of ECG source data would undoubtedly permit subsequent analysis and facilitate longitudinal studies, with both scientific and diagnostic benefits. METHODS & RESULTS: PDF-ECG is a hybrid archival format which allows both the standard graphical report of an ECG and its source data (waveforms) to be stored in the same file. Using PDF-ECG as a model to address the challenge of ECG data portability, long-term archiving and documentation, a real-world proof-of-concept test was conducted in a hospital in northern Italy. A set of volunteers undertook a basic ECG using routine hospital equipment and the source data were captured. Using dedicated web services, PDF-ECG documents were then generated and seamlessly uploaded into the hospital EHR, replacing the standard PDF reports automatically generated at the time of acquisition. Finally, the PDF-ECG files could be successfully retrieved and re-analyzed. CONCLUSION: Adding PDF-ECG to an existing EHR had a minimal impact on the hospital's workflow, while preserving the ECG digital data.
Subject(s)
Electrocardiography , Electronic Health Records , Information Storage and Retrieval/methods , Humans , Software , Systems Integration , Workflow
ABSTRACT
BACKGROUND: The 12-lead Electrocardiogram (ECG) has been used to detect cardiac abnormalities in the same format for more than 70 years. However, due to the complex nature of 12-lead ECG interpretation, there is a significant cognitive workload required from the interpreter. This complexity in ECG interpretation often leads to errors in diagnosis and subsequent treatment. We have previously reported on the development of an ECG interpretation support system designed to augment the human interpretation process. This computerised decision support system has been named 'Interactive Progressive based Interpretation' (IPI). In this study, a decision support algorithm was built into the IPI system to suggest potential diagnoses based on the interpreter's annotations of the 12-lead ECG. We hypothesise that semi-automatic interpretation using a digital assistant can be an optimal man-machine model for ECG interpretation. OBJECTIVES: To improve interpretation accuracy and reduce missed co-abnormalities. METHODS: The Differential Diagnoses Algorithm (DDA) was developed using web technologies, where diagnostic ECG criteria are defined in an open storage format, JavaScript Object Notation (JSON), which is queried using a rule-based reasoning algorithm to suggest diagnoses. To test our hypothesis, a counterbalanced trial was designed in which subjects interpreted ECGs using the conventional approach and using the IPI+DDA approach. RESULTS: A total of 375 interpretations were collected. The IPI+DDA approach was shown to improve diagnostic accuracy by 8.7% (although not statistically significant, p = 0.1852), and the IPI+DDA suggested the correct interpretation more often than the human interpreter in 7/10 cases (with varying statistical significance). Human interpretation accuracy increased to 70% when seven suggestions were generated. CONCLUSION: Although the results were not found to be statistically significant, we found that 1) our decision support tool increased the number of correct interpretations, 2) the DDA algorithm suggested the correct interpretation more often than humans, and 3) as many as 7 computerised diagnostic suggestions augmented human decision making in ECG interpretation. Statistical significance may be achieved by expanding the sample size.
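The flavour of the DDA's rule-based matching against JSON-encoded criteria can be sketched as follows; the criteria schema, field names and rules shown are hypothetical illustrations, not the actual DDA definitions (which were implemented in JavaScript).

```python
import json

# Hypothetical shape of JSON-encoded diagnostic criteria (not the actual DDA schema)
CRITERIA_JSON = """
[
  {"diagnosis": "Atrial fibrillation",
   "requires": {"rhythm": "irregularly irregular", "p_waves": "absent"}},
  {"diagnosis": "First-degree AV block",
   "requires": {"pr_interval_ms_gt": 200}}
]
"""

def suggest_diagnoses(annotations, criteria_json=CRITERIA_JSON):
    """Suggest a diagnosis when every required key/value in a rule is
    satisfied by the interpreter's annotations."""
    suggestions = []
    for rule in json.loads(criteria_json):
        satisfied = all(
            annotations.get("pr_interval_ms", 0) > value if key == "pr_interval_ms_gt"
            else annotations.get(key) == value
            for key, value in rule["requires"].items()
        )
        if satisfied:
            suggestions.append(rule["diagnosis"])
    return suggestions

# Example: annotations entered via the IPI interface (hypothetical)
print(suggest_diagnoses({"rhythm": "irregularly irregular", "p_waves": "absent"}))
```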
Subject(s)
Algorithms , Decision Support Systems, Clinical , Diagnostic Errors/prevention & control , Electrocardiography , Clinical Competence , Diagnosis, Differential , Humans , Man-Machine Systems , Software
ABSTRACT
INTRODUCTION: The 12-lead Electrocardiogram (ECG) presents a plethora of information and demands extensive knowledge and a high cognitive workload to interpret. Whilst the ECG is an important clinical tool, it is frequently interpreted incorrectly. Even expert clinicians are known to impulsively provide a diagnosis based on their first impression and often miss co-abnormalities. Given that it is widely reported that there is a lack of competency in ECG interpretation, it is imperative to optimise the interpretation process. The ECG interpretation process remains predominantly paper based, and whilst computer algorithms are used to assist interpreters by providing printed computerised diagnoses, there is a lack of interactive human-computer interfaces to guide and assist the interpreter. METHODS: An interactive computing system was developed to guide the decision-making process of a clinician when interpreting the ECG. The system decomposes the interpretation process into a series of interactive sub-tasks and encourages the clinician to systematically interpret the ECG. We have named this model 'Interactive Progressive based Interpretation' (IPI), as the user cannot 'progress' unless they complete each sub-task. Using this model, the ECG is segmented into five parts and presented over five user interfaces (1: rhythm interpretation, 2: interpretation of the P-wave morphology, 3: limb lead interpretation, 4: QRS morphology interpretation with chest lead and rhythm strip presentation, and 5: final review of the 12-lead ECG). The IPI model was implemented using emerging web technologies (i.e. HTML5, CSS3, AJAX, PHP and MySQL). It was hypothesised that this system would reduce the number of interpretation errors and increase diagnostic accuracy in ECG interpreters. To test this, we compared the diagnostic accuracy of clinicians when they used the standard approach (control cohort) with clinicians who interpreted the same ECGs using the IPI approach (IPI cohort). RESULTS: For the control cohort, the mean ECG interpretation accuracy was 45.45% (SD = 18.1%; CI = 42.07, 48.83). The mean ECG interpretation accuracy for the IPI cohort was 58.85% (SD = 42.4%; CI = 49.12, 68.58), indicating a positive mean difference of 13.4% (CI = 4.45, 22.35). An N-1 chi-square test of independence indicated a 92% chance that the IPI cohort would have a higher accuracy rate. Interpreter self-rated confidence also increased between cohorts, from a mean of 4.9/10 in the control cohort to 6.8/10 in the IPI cohort (p = 0.06). Whilst the IPI cohort had greater diagnostic accuracy, the duration of ECG interpretation was six times longer than in the control cohort. CONCLUSIONS: We have developed a system that segments and presents the ECG across five graphical user interfaces. Results indicate that this approach improves diagnostic accuracy, but at the expense of time, which is a valuable resource in medical practice.
Subject(s)
Algorithms , Clinical Decision-Making , Electrocardiography , Heart Diseases/diagnosis , User-Computer Interface , Humans
ABSTRACT
INTRODUCTION: The aim of this study is to present and evaluate the integration of a low-resource JavaScript-based ECG training interface (CrowdLabel) and a standardised curriculum for self-guided tuition in ECG interpretation. METHODS: Participants practised interpreting ECGs weekly using the CrowdLabel interface to support their learning of the traditional didactically taught course material during a 6-week training period. To determine competency, students were tested during week 7. RESULTS: A total of 245 unique ECG cases were submitted by each student. Accuracy scores during the training period ranged from 0-59.5% (median = 33.3%). Conversely, accuracy scores during the test ranged from 30-70% (median = 37.5%) (p < 0.05). There was no correlation between the number of ECGs a student interpreted during the training period and the marks they obtained. CONCLUSIONS: CrowdLabel is shown to be a readily accessible dedicated learning platform to support ECG interpretation competency.
Subject(s)
Cardiology/education , Computer-Assisted Instruction/methods , Educational Measurement/statistics & numerical data , Electrocardiography/statistics & numerical data , Internet/statistics & numerical data , Software , Teaching , Cardiology/statistics & numerical data , Curriculum , Diagnosis, Computer-Assisted/statistics & numerical data , Educational Status , Electrocardiography/methods , Female , Humans , Male , Online Systems , United Kingdom , Young Adult
ABSTRACT
Automated detection of AF from the electrocardiogram (ECG) still remains a challenge. In this study, we investigated two multivariate classification techniques, Random Forests (RF) and k-nearest neighbor (k-nn), for improved automated detection of AF from the ECG. We compiled a new database from ECG data taken from existing sources. R-R intervals were then analysed using four previously described R-R irregularity measurements: (1) the coefficient of sample entropy (CoSEn), (2) the coefficient of variance (CV), (3) the root mean square of the successive differences (RMSSD), and (4) the median absolute deviation (MAD). Using the outputs from all four R-R irregularity measurements, RF and k-nn models were trained. RF classification improved AF detection over CoSEn, with overall specificity rising from 80.1% to 98.3% and positive predictive value from 51.8% to 92.1%, at the cost of a reduction in sensitivity from 97.6% to 92.8%. k-nn also improved specificity and PPV over CoSEn; however, the sensitivity of this approach was considerably reduced (68.0%).
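Three of the four R-R irregularity measures have simple closed forms and can be sketched directly; CoSEn is omitted here for brevity, and the commented training snippet assumes `windows` and `labels` arrays that are not defined in the study text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rr_irregularity_features(rr):
    """CV, RMSSD and MAD of an R-R interval sequence (CoSEn omitted)."""
    rr = np.asarray(rr, dtype=float)
    cv = np.std(rr) / np.mean(rr)                  # coefficient of variance
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))     # RMS of successive differences
    mad = np.median(np.abs(rr - np.median(rr)))    # median absolute deviation
    return [cv, rmssd, mad]

# Hypothetical usage: windows is a list of R-R interval arrays, labels 1 = AF, 0 = non-AF
# X = np.array([rr_irregularity_features(w) for w in windows])
# clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
```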