ABSTRACT
In cardiac cine imaging, acquiring high-quality data is challenging and time-consuming due to the artifacts generated by the heart's continuous movement. Volumetric, fully isotropic data acquisition with high temporal resolution is, to date, intractable due to MR physics constraints. To assess whole-heart movement under minimal acquisition time, we propose a deep learning model that reconstructs the volumetric shape of multiple cardiac chambers from a limited number of input slices while simultaneously optimizing the slice acquisition orientation for this task. We mimic the current clinical protocols for cardiac imaging and compare the shape reconstruction quality of standard clinical views and optimized views. In our experiments, we show that the jointly trained model achieves accurate high-resolution multi-chamber shape reconstruction with errors of <13 mm HD95 and Dice scores of >80%, indicating its effectiveness in both simulated cardiac cine MRI and clinical cardiac MRI with a wide range of pathological shape variations.
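The abstract reports shape reconstruction quality with two standard segmentation metrics, the Dice score and the 95th-percentile Hausdorff distance (HD95). As a point of reference, here is a minimal numpy sketch of both, assuming a binary mask pair for Dice and surface point sets in millimetres for HD95; this is not the authors' evaluation code.

```python
import numpy as np

def dice_score(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(pts_a, pts_b):
    """95th-percentile symmetric Hausdorff distance between two surface
    point sets (N x 3 arrays of coordinates, e.g. in mm)."""
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point of A to its nearest point of B
    d_ba = d.min(axis=0)  # each point of B to its nearest point of A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
```

Taking the 95th percentile instead of the maximum makes the distance robust to a few outlier surface points, which is why HD95 is preferred over the plain Hausdorff distance for anatomical shapes.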
Subjects
Cardiac Surgical Procedures, Deep Learning, Cardiac Volume, Heart/diagnostic imaging, Artifacts
ABSTRACT
OBJECTIVES: To qualitatively and quantitatively compare a single breath-hold fast half-Fourier single-shot turbo spin echo sequence with deep learning reconstruction (DL HASTE) with the T2-weighted BLADE sequence for liver MRI at 3 T. METHODS: From December 2020 to January 2021, patients undergoing liver MRI were prospectively included. For qualitative analysis, sequence quality, presence of artifacts, conspicuity, and presumed nature of the smallest lesion were assessed using the chi-squared and McNemar tests. For quantitative analysis, number of liver lesions, size of the smallest lesion, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) in both sequences were assessed using the paired Wilcoxon signed-rank test. Intraclass correlation coefficients (ICCs) and kappa coefficients were used to assess agreement between the two readers. RESULTS: One hundred and twelve patients were evaluated. Overall image quality (p = .006), artifacts (p < .001), and conspicuity of the smallest lesion (p = .001) were significantly better for the DL HASTE sequence than for the T2-weighted BLADE sequence. Significantly more liver lesions were detected with the DL HASTE sequence (356 lesions) than with the T2-weighted BLADE sequence (320 lesions; p < .001). CNR was significantly higher for the DL HASTE sequence (p < .001). SNR was higher for the T2-weighted BLADE sequence (p < .001). Interreader agreement was moderate to excellent depending on the sequence. Of the 41 supernumerary lesions visible only on the DL HASTE sequence, 38 (93%) were true-positives. CONCLUSION: The DL HASTE sequence improves image quality and contrast and reduces artifacts, allowing the detection of more liver lesions than the T2-weighted BLADE sequence. CLINICAL RELEVANCE STATEMENT: The DL HASTE sequence is superior to the T2-weighted BLADE sequence for the detection of focal liver lesions and can be used in daily practice as a standard sequence.
KEY POINTS:
• The half-Fourier acquisition single-shot turbo spin echo sequence with deep learning reconstruction (DL HASTE sequence) has better overall image quality, reduced artifacts (particularly motion artifacts), and improved contrast, allowing the detection of more liver lesions than with the T2-weighted BLADE sequence.
• The acquisition time of the DL HASTE sequence is at least eight times faster (21 s) than that of the T2-weighted BLADE sequence (3-5 min).
• The DL HASTE sequence could replace the conventional T2-weighted BLADE sequence to meet the growing indication for hepatic MRI in clinical practice, given its diagnostic and time-saving performance.
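The abstract reports SNR and CNR without giving the exact formulas, which are ROI-based in most liver MRI studies. The sketch below uses one common convention (mean ROI signal, or ROI contrast, divided by the standard deviation of a background-noise ROI); the study's actual definitions may differ.

```python
import numpy as np

def snr(roi_signal, roi_noise):
    """Signal-to-noise ratio: mean signal intensity in a tissue ROI
    over the standard deviation of a background-noise ROI."""
    return np.mean(roi_signal) / np.std(roi_noise)

def cnr(roi_lesion, roi_liver, roi_noise):
    """Contrast-to-noise ratio between a lesion ROI and adjacent liver
    parenchyma, normalized by the background-noise standard deviation."""
    return abs(np.mean(roi_lesion) - np.mean(roi_liver)) / np.std(roi_noise)
```

With such definitions, a sequence can have lower SNR but higher CNR, exactly the pattern reported here for DL HASTE versus BLADE.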
Subjects
Deep Learning, Liver Neoplasms, Humans, Liver Neoplasms/diagnostic imaging, Prospective Studies, Magnetic Resonance Imaging/methods, Artifacts
ABSTRACT
Focal bone lesions are frequent, and management greatly depends on the characteristics of their images. After briefly discussing the required work-up, we analyze the most relevant imaging signs for assessing potential aggressiveness. We also describe the imaging aspects of the various types of lesion matrices and their clinical implications.
Subjects
Bone Diseases, Cartilage Diseases, Humans
ABSTRACT
Providing reliable detection of QRS complexes is key in automated analyses of electrocardiograms (ECG). Accurate and timely R-peak detections provide a basis for ECG-based diagnoses and for synchronizing radiologic, electrophysiologic, or other medical devices. Compared with classical algorithms, deep learning (DL) architectures have demonstrated superior accuracy and high generalization capacity; furthermore, they can be embedded on edge devices for real-time inference. 3D vectorcardiograms (VCG) provide a unifying framework for detecting R-peaks regardless of the acquisition strategy or number of ECG leads. In this article, a DL architecture is shown to provide enhanced precision when trained and applied on 3D VCG, with no pre-processing or post-processing steps. Experiments were conducted on four different public databases. Using the proposed approach, high F1-scores of 99.80% and 99.64% were achieved in leave-one-out cross-validation and cross-database validation protocols, respectively. False detections, measured by a precision of 99.88% or more, were significantly reduced compared with recent state-of-the-art methods tested on the same databases, without penalty in the number of missed peaks, measured by a recall of 99.39% or more. This approach can enable new applications for devices where precision, or positive predictive value, is essential, for instance in cardiac magnetic resonance imaging.
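What makes the 3D VCG lead-agnostic is its spatial magnitude, which is invariant to how many leads were projected into the 3D space. The sketch below computes that magnitude and runs a naive threshold-and-refractory peak picker on it as a classical baseline; this is emphatically not the paper's DL detector, and the threshold and refractory values are illustrative.

```python
import numpy as np

def vcg_magnitude(vcg):
    """Spatial magnitude of a 3D VCG (T x 3 array) - independent of the
    original lead configuration."""
    return np.linalg.norm(vcg, axis=1)

def naive_r_peaks(mag, fs, thresh_frac=0.6, refractory_s=0.2):
    """Threshold-based peak picking on the VCG magnitude (classical
    baseline only; illustrative parameter values)."""
    thresh = thresh_frac * mag.max()
    refractory = int(refractory_s * fs)
    peaks, last = [], -refractory
    for i in range(1, len(mag) - 1):
        if (mag[i] >= thresh and mag[i] >= mag[i - 1]
                and mag[i] > mag[i + 1] and i - last >= refractory):
            peaks.append(i)
            last = i
    return np.array(peaks)
```

The refractory period encodes the physiological lower bound on RR intervals, the same prior a learned detector has to acquire from data.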
Subjects
Deep Learning, Electrocardiography, Heart, Algorithms, Factual Databases
ABSTRACT
Recently, deep learning (DL) models have been increasingly adopted for automatic analyses of medical data, including electrocardiograms (ECGs). Large available ECG datasets are generally of high quality but often lack the specific distortions that would be helpful for making DL-based algorithms more robust. Synthetic ECG datasets could overcome this limitation. A generative adversarial network (GAN) was used to synthesize realistic 3D magnetohydrodynamic (MHD) distortion templates, as observed during magnetic resonance imaging (MRI), which were then added to available ECG recordings to produce an augmented dataset. Similarity metrics, as well as the accuracy of a DL-based R-peak detector trained with and without data augmentation, were used to evaluate the effectiveness of the synthesized data. Three-dimensional MHD distortions produced by the proposed GAN were similar to the measured ones used as input. The precision of a DL-based R-peak detector, tested on actual unseen data, was significantly enhanced by data augmentation, and its recall was higher when trained with augmented data. Using synthesized MHD-distorted ECGs thus significantly improves the accuracy of a DL-based R-peak detector, with good generalization capacity, providing a simple and effective alternative to collecting new patient data. DL-based algorithms for ECG analyses can suffer from bias or gaps in training datasets; using a GAN to synthesize new data, together with metrics to evaluate its quality, can overcome this data scarcity.
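The augmentation step itself is additive: a synthesized MHD template is superimposed on a clean ECG segment. The GAN that produces the templates is out of scope here; the sketch below only illustrates the additive step, with toy signals and a hypothetical random scaling that stands in for template variability.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment_with_mhd(ecg, mhd_template, scale_range=(0.5, 1.5)):
    """Add a randomly scaled MHD distortion template to a clean ECG
    segment. In the paper the templates are GAN-synthesized; here the
    template and the scaling strategy are illustrative assumptions."""
    scale = rng.uniform(*scale_range)
    return ecg + scale * mhd_template

ecg = np.sin(np.linspace(0, 4 * np.pi, 400))        # toy clean ECG
mhd = 0.3 * np.sin(np.linspace(0, 2 * np.pi, 400))  # toy distortion template
augmented = augment_with_mhd(ecg, mhd)
```

Because the clean ECG's R-peak annotations are unchanged by the addition, the augmented segment comes with free ground-truth labels, which is what makes this strategy attractive for detector training.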
Subjects
Electrocardiography, Heart, Humans, Algorithms, Benchmarking, Magnetic Resonance Imaging
ABSTRACT
PURPOSE: Numerous MRI applications require data from external devices. Such devices are often independent of the MRI system, so synchronizing these data with the MRI data is often tedious and limited to offline use. In this work, a hardware and software system is proposed for acquiring data from external devices during MR imaging, for use online (in real-time) or offline. METHODS: The hardware includes a set of external devices - electrocardiography (ECG) devices, respiration sensors, microphone, electronics of the MR system etc. - using various channels for data transmission (analog, digital, optical fibers), all connected to a server through a universal serial bus (USB) hub. The software is based on a flexible client-server architecture, allowing real-time processing pipelines to be configured and executed. Communication protocols and data formats are proposed, in particular for transferring the external device data to an open-source reconstruction software (Gadgetron), for online image reconstruction using external physiological data. The system performance is evaluated in terms of accuracy of the recorded signals and delays involved in the real-time processing tasks. Its flexibility is shown with various applications. RESULTS: The real-time system had low delays and jitters (on the order of 1 ms). Example MRI applications using external devices included: prospectively gated cardiac cine imaging, multi-modal acquisition of the vocal tract (image, sound, and respiration) and online image reconstruction with nonrigid motion correction. CONCLUSION: The performance of the system and its versatile architecture make it suitable for a wide range of MRI applications requiring online or offline use of external device data.
Subjects
Magnetic Resonance Imaging, Software, Computer Systems, Humans, Magnetic Resonance Imaging/methods, Motion (Physics), Respiration
ABSTRACT
PURPOSE: Current electrocardiography (ECG) devices in MRI use non-conventional electrode placement, have a narrow bandwidth, and suffer from signal distortions including magnetohydrodynamic (MHD) effects and gradient-induced artifacts. In this work a system is proposed to obtain a high-quality 12-lead ECG. METHODS: A network of N electrically independent MR-compatible ECG sensors was developed (N = 4 in this study). Each sensor uses a safe technology - short cables, preamplification/digitization close to the patient, and optical transmission - and provides three bipolar voltage leads. A matrix combination is applied to reconstruct a 12-lead ECG from the raw network signals. A subject-specific calibration is performed to identify the matrix coefficients, maximizing the similarity with a true 12-lead ECG acquired with a conventional 12-lead device outside the scan room. The sensor network was subjected to radiofrequency heating phantom tests at 3T. It was then tested in four subjects, both at 1.5T and 3T. RESULTS: Radiofrequency heating at 3T was within the MR-compatibility standards. The reconstructed 12-lead ECG showed minimal MHD artifacts, and its morphology compared well with that of the true 12-lead ECG, as measured by correlation coefficients above 93% at 1.5T and above 84% at 3T for the QRS complex shape during steady-state free precession (SSFP) imaging. CONCLUSION: High-quality 12-lead ECG can be reconstructed by the proposed sensor network at 1.5T and 3T with reduced MHD artifacts compared to previous systems. The system might help improve patient monitoring and triggering and might also be of interest for interventional MRI and advanced cardiac MR applications.
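The subject-specific calibration amounts to fitting a fixed mixing matrix that maps the raw network leads onto a reference 12-lead ECG. The abstract does not state the fitting criterion; a least-squares fit is one plausible realization, sketched below for generic lead counts.

```python
import numpy as np

def calibrate(network_leads, ref_12lead):
    """Least-squares fit of the mixing matrix M such that
    M @ network_leads approximates the reference 12-lead ECG.
    network_leads: (L, T) raw sensor-network leads recorded
    simultaneously with ref_12lead: (12, T). Least squares is an
    assumption; the paper only states that similarity is maximized."""
    # Solve ref.T ~ network.T @ M.T column-by-column via lstsq.
    M_T, *_ = np.linalg.lstsq(network_leads.T, ref_12lead.T, rcond=None)
    return M_T.T

def reconstruct(M, network_leads):
    """Apply the calibrated matrix to new raw network signals."""
    return M @ network_leads
```

Once M is identified outside the scanner (or from a brief simultaneous recording), reconstruction inside the bore is a single matrix product per sample, cheap enough for real-time monitoring.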
Subjects
Cardiac-Gated Imaging Techniques/instrumentation, Electrocardiography/instrumentation, Magnetic Resonance Imaging/instrumentation, Adult, Artifacts, Equipment Design, Female, Healthy Volunteers, Hot Temperature, Humans, Male, Middle Aged, Monte Carlo Method, Imaging Phantoms
ABSTRACT
OBJECTIVE: To assess whether noninvasive fetal electrocardiography (NI-FECG) enables the diagnosis of fetal arrhythmias. METHODS: A total of 500 echocardiography and NI-FECG recordings were collected from pregnant women during a routine medical visit in this multicenter study. All the cases with fetal arrhythmias (n = 12) and a matching number of controls (n = 14) were used. Two perinatal cardiologists analyzed the extracted NI-FECG while blinded to the echocardiography. The NI-FECG-based diagnosis was compared with the reference fetal echocardiography diagnosis. RESULTS: NI-FECG and fetal echocardiography agreed in all cases (Ac = 100%) on the presence or absence of an arrhythmia. However, in one case, the type of arrhythmia identified by the NI-FECG was incorrect because the low resolution of the extracted fetal P-wave prevented resolving the mechanism (2:1 atrioventricular conduction) of the atrial tachycardia. CONCLUSION: It is possible to diagnose fetal arrhythmias using the NI-FECG technique. However, this study shows that improved P-wave reconstruction algorithms are critical to systematically resolve the mechanisms underlying the arrhythmias. The development of an NI-FECG Holter device would offer new opportunities for fetal diagnosis and remote monitoring of problematic pregnancies because of its low cost, noninvasiveness, portability, and minimal setup requirements.
Subjects
Cardiac Arrhythmias/diagnosis, Electrocardiography, Fetal Diseases/diagnosis, Fetal Heart, Female, Humans, Pregnancy
ABSTRACT
BACKGROUND: Heart rate variability (HRV) has emerged as a predictor of later cardiac risk. This study tested whether pregnancy complications that may have long-term offspring cardiac sequelae are associated with differences in HRV at birth, and whether these HRV differences identify abnormal cardiovascular development in the postnatal period. METHODS: Ninety-eight sleeping neonates had 5-min electrocardiogram recordings at birth. Standard time- and frequency-domain parameters were calculated and related to cardiovascular measures at birth and 3 months of age. RESULTS: Increasing prematurity, but not maternal hypertension or growth restriction, was associated with decreased HRV at birth, as demonstrated by a lower root mean square of successive differences between adjacent NN intervals (rMSSD) and lower low-frequency (LF) and high-frequency (HF) power with decreasing gestational age (p < 0.001, p = 0.009, and p = 0.007, respectively). We also demonstrated a relative imbalance between sympathetic and parasympathetic tone in preterm infants compared with term infants. However, differences in autonomic function did not predict cardiovascular measures at either time point. CONCLUSIONS: Altered cardiac autonomic function at birth relates to prematurity rather than other pregnancy complications and does not predict cardiovascular developmental patterns during the first 3 months after birth. Long-term studies will be needed to understand the relevance to cardiovascular risk.
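The time-domain HRV parameters mentioned above have standard closed-form definitions; a minimal sketch of rMSSD and SDNN over a series of NN intervals follows. The frequency-domain measures (LF, HF power) additionally require a power spectral density estimate and are omitted here.

```python
import numpy as np

def rmssd(nn_ms):
    """Root mean square of successive differences between adjacent
    NN intervals, in milliseconds."""
    diffs = np.diff(np.asarray(nn_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

def sdnn(nn_ms):
    """Standard deviation of the NN intervals, in milliseconds."""
    return float(np.std(np.asarray(nn_ms, dtype=float)))
```

rMSSD is dominated by beat-to-beat (mainly parasympathetic) variability, which is why a lower rMSSD with decreasing gestational age is read as reduced vagal tone.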
Subjects
Autonomic Nervous System/growth & development, Cardiovascular System/growth & development, Heart Rate/physiology, Pregnancy Complications, Adult, Cardiac Arrhythmias/physiopathology, Electrocardiography, Female, Gestational Age, Heart, Humans, Newborn Infant, Male, Multivariate Analysis, Parturition, Pregnancy, Regression Analysis
ABSTRACT
Both biomedical research and clinical practice rely on complex datasets for the physiological and genetic characterization of human hearts in health and disease. Given the complexity and variety of approaches and recordings, there is now growing recognition of the need to embed computational methods in cardiovascular medicine and science for analysis, integration and prediction. This paper describes a Workshop on Computational Cardiovascular Science that created an international, interdisciplinary and inter-sectorial forum to define the next steps for a human-based approach to disease supported by computational methodologies. The main ideas highlighted were (i) a shift towards human-based methodologies, spurred by advances in new in silico, in vivo, in vitro, and ex vivo techniques and the increasing acknowledgement of the limitations of animal models. (ii) Computational approaches complement, expand, bridge, and integrate in vitro, in vivo, and ex vivo experimental and clinical data and methods, and as such they are an integral part of human-based methodologies in pharmacology and medicine. (iii) The effective implementation of multi- and interdisciplinary approaches, teams, and training combining and integrating computational methods with experimental and clinical approaches across academia, industry, and healthcare settings is a priority. (iv) The human-based cross-disciplinary approach requires experts in specific methodologies and domains, who also have the capacity to communicate and collaborate across disciplines and cross-sector environments. (v) This new translational domain for human-based cardiology and pharmacology requires new partnerships supported financially and institutionally across sectors. Institutional, organizational, and social barriers must be identified, understood and overcome in each specific setting.
Subjects
Cardiology/methods, Cardiovascular Agents/therapeutic use, Heart Diseases, Pharmacology/methods, Translational Biomedical Research/methods, Animals, Biomarkers/metabolism, Cardiac Imaging Techniques, Cardiotoxicity, Cardiovascular Agents/adverse effects, Cooperative Behavior, Diffusion of Innovation, Cardiac Electrophysiologic Techniques, Heart Diseases/diagnostic imaging, Heart Diseases/drug therapy, Heart Diseases/metabolism, Heart Diseases/physiopathology, Humans, Interdisciplinary Communication, Cardiovascular Models, Patient-Specific Computational Modeling, Predictive Value of Tests, Prognosis, Public-Private Partnerships
ABSTRACT
Atrial fibrillation (AF) is the most common cardiac arrhythmia, but it is currently under-diagnosed since it can be asymptomatic. Early detection of AF could be highly beneficial for the prevention of stroke, a major risk associated with AF, with a five-fold increase in incidence. mHealth applications have recently been proposed for early screening of paroxysmal AF. Several automatic AF detection methods have been suggested, mostly based on features extracted from the RR interval time-series, which are more robust to ambulatory noise than P-wave-based algorithms. The RR interval features capture the irregularity and unpredictability of the rhythm due to the chaotic electrical conduction through the AV node. This approach has proven accurate on openly available databases. However, current techniques are limited by their assumption of almost perfect R-peak detection, and RR time-series features are usually estimated from manual annotations. Analysis of the huge amount of data an mHealth application may create has to be automated, robust to noise, and should incorporate a confidence index based on an estimate of the signal quality. In this study, we present an in-depth analysis of the performance of AF detection algorithms as a function of noise and QRS detection performance. We show a linear decrease of AF detection accuracy with respect to the SNR. Finally, we demonstrate how an automatic signal quality index (SQI) can ensure a given level of performance in AF detection: more than 95% AF detection accuracy when analyzing segments with a median SQI over 0.8.
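To make the two ideas concrete, here is a sketch of one simple RR-irregularity feature (the coefficient of variation, illustrative of the family used by RR-based AF detectors, not any specific published detector) and of SQI-based gating with the 0.8 threshold quoted in the abstract.

```python
import numpy as np

def rr_irregularity(rr_s):
    """Coefficient of variation of RR intervals (seconds) - one simple
    irregularity feature; real AF detectors combine several such
    statistics (entropy, Poincare descriptors, etc.)."""
    rr = np.asarray(rr_s, dtype=float)
    return float(np.std(rr) / np.mean(rr))

def gate_by_sqi(features, sqi, sqi_min=0.8):
    """Keep only the feature vectors of segments whose signal quality
    index reaches the threshold, discarding unreliable segments."""
    keep = np.asarray(sqi) >= sqi_min
    return [f for f, k in zip(features, keep) if k]
```

Gating trades coverage for reliability: segments below the SQI threshold are simply not classified, which is how a target accuracy level can be guaranteed on the segments that are.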
Subjects
Algorithms, Artifacts, Atrial Fibrillation/diagnosis, Computer-Assisted Diagnosis/methods, Electrocardiography/methods, Heart Rate, Atrial Fibrillation/physiopathology, Early Diagnosis, Humans, Reproducibility of Results, Sensitivity and Specificity, Signal-to-Noise Ratio
ABSTRACT
PURPOSE: High-fidelity 12-lead electrocardiogram (ECG) is important for physiological monitoring of patients during MR-guided intervention and cardiac MRI. Issues in obtaining noncorrupted ECGs inside MRI include a superimposed magnetohydrodynamic voltage, gradient-switching-induced voltages, and radiofrequency heating; these problems increase with magnetic field strength. The aim of this study is to develop and clinically validate a 1.5T MRI-conditional 12-lead ECG system. METHODS: The system was constructed with transmission lines to reduce radiofrequency induction and switching circuits to remove induced voltages. Adaptive filters, trained by 12-lead measurements outside MRI and in two orientations inside MRI, were used to remove the magnetohydrodynamic voltage. The system was tested on 10 volunteers (one exercising) and four arrhythmia patients. RESULTS: Switching circuits removed most imaging-induced voltages (residual noise <3% of the R-wave). Magnetohydrodynamic voltage removal provided intra-MRI ECGs that varied by <3.8% from those outside the MRI, preserving the true S-wave to T-wave segment. In patients with premature ventricular contractions (PVCs), clean ECGs separated PVC and sinus rhythm beats. Measured heating was <1.5°C. The system reliably acquired multiphase (steady-state free precession) wall-motion-cine and phase-contrast-cine scans, including subjects in whom 4-lead gating failed. The system required a minimum repetition time of 4 ms to allow robust ECG processing. CONCLUSION: High-fidelity intra-MRI 12-lead ECG is possible.
Subjects
Atrial Fibrillation/surgery, Cardiac-Gated Imaging Techniques/instrumentation, Electrocardiography/instrumentation, Interventional Magnetic Resonance Imaging/instrumentation, Computer-Assisted Surgery/instrumentation, Aged, Animals, Atrial Fibrillation/diagnosis, Cardiovascular Surgical Procedures/instrumentation, Electrodes, Equipment Design, Equipment Failure Analysis, Female, Humans, Male, Middle Aged, Reproducibility of Results, Sensitivity and Specificity, Swine, Treatment Outcome
ABSTRACT
Electrocardiogram (ECG) is acquired during Magnetic Resonance Imaging (MRI) to monitor patients and synchronize image acquisition with the heart motion. ECG signals are highly distorted during MRI due to the complex electromagnetic environment, so automated ECG analysis is complicated in this context and there is no reference technique in MRI to classify pathological heartbeats. Imaging arrhythmic patients in MRI is hence difficult. Deep learning based heartbeat classifiers have been suggested but require large databases, whereas existing annotated sets of ECGs acquired in MRI are very small. We propose a Siamese network to leverage a large database of unannotated ECGs recorded outside MRI. This was used to develop an efficient representation of ECG signals, which in turn served to build a heartbeat classifier. We extensively tested several data augmentations and self-supervised learning (SSL) techniques and assessed the generalization of the obtained classifier to ECG signals acquired in MRI. The augmentations included random noise and a model simulating MRI-specific artefacts. SSL pretraining improved the generalizability of heartbeat classifiers in MRI (F1 = 0.75) compared with deep learning not relying on SSL (F1 = 0.46) and another classical machine learning approach (F1 = 0.40). These promising results indicate that SSL techniques can learn efficient ECG signal representations and are useful for the development of deep learning models even when only scarce annotated medical data are available.
Subjects
Electrocardiography, Heart Rate, Magnetic Resonance Imaging, Computer-Assisted Signal Processing, Supervised Machine Learning, Humans, Electrocardiography/methods, Magnetic Resonance Imaging/methods, Heart Rate/physiology, Adult, Male, Female, Middle Aged, Young Adult, Algorithms, Deep Learning
ABSTRACT
This study assesses the feasibility of using a sample-efficient model to investigate radiomics changes over time for predicting progression-free survival in rare diseases. Eighteen high-grade glioma patients underwent two L-3,4-dihydroxy-6-[18F]-fluoro-phenylalanine positron emission tomography (PET) dynamic scans: the first during treatment and the second at temozolomide chemotherapy discontinuation. Radiomics features from static/dynamic parametric images, alongside conventional features, were extracted. After excluding highly correlated features, 16 different models were trained by combining various feature selection methods and time-to-event survival algorithms. Performance was assessed using cross-validation. To evaluate model robustness, an additional dataset including 35 patients with a single PET scan at therapy discontinuation was used. Model performance was compared with a strategy extracting informative features from the set of 35 patients and applying them to the 18 patients with 2 PET scans. Delta-absolute radiomics achieved the highest performance when the pipeline was directly applied to the 18-patient subset (support vector machine (SVM) and recursive feature elimination (RFE): C-index = 0.783 [0.744-0.818]). This result remained consistent when transferring informative features from 35 patients (SVM + RFE: C-index = 0.751 [0.716-0.784], p = 0.06). In addition, it significantly outperformed delta-absolute conventional (C-index = 0.584 [0.548-0.620], p < 0.001) and single-time-point radiomics features (C-index = 0.546 [0.512-0.580], p < 0.001), highlighting the considerable potential of delta radiomics in rare cancer cohorts.
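Two of the building blocks quoted above have simple formulations: the delta-absolute feature (raw change between the two PET time points, one common convention; the paper's exact definition may differ) and Harrell's concordance index used to score the survival models. A minimal sketch of both, with a simplified handling of censoring and ties:

```python
import numpy as np

def delta_absolute(feat_t1, feat_t2):
    """Delta-absolute radiomics feature: raw change of each feature
    between the first and second scan."""
    return np.asarray(feat_t2, dtype=float) - np.asarray(feat_t1, dtype=float)

def c_index(risk, time, event):
    """Harrell's concordance index: the fraction of comparable patient
    pairs in which the patient with the higher predicted risk progressed
    first. event[i] = 1 if progression was observed (not censored)."""
    concordant, comparable = 0.0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i]:  # i progressed first
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random risk ordering, so the reported 0.783 for delta-absolute radiomics versus 0.546 for single-time-point features is a substantial gap.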
Subjects
Glioma, Radiomics, Humans, Progression-Free Survival, Glioma/diagnostic imaging, Positron-Emission Tomography, Retrospective Studies
ABSTRACT
Drug safety trials require substantial ECG labelling, such as the QT interval measurements performed in thorough QT studies; QT prolongation is a biomarker of proarrhythmic risk. The traditional method of manually measuring the QT interval is time-consuming and error-prone. Studies have demonstrated the potential of deep learning (DL)-based methods to automate this task, but expert validation of these computerized measurements remains of paramount importance, particularly for abnormal ECG recordings. In this paper, we propose a highly automated framework that combines such a DL-based QT estimator with human expertise. The framework consists of 3 key components: (1) automated QT measurement with uncertainty quantification; (2) expert review of a few DL-based measurements, mostly those with high model uncertainty; and (3) recalibration of the unreviewed measurements based on the expert-validated data. We assess its effectiveness on 3 drug safety trials and show that it can significantly reduce the effort required for ECG labelling - in our experiments only 10% of the data were reviewed per trial - while maintaining high levels of QT accuracy. Our study thus demonstrates the possibility of productive human-machine collaboration in ECG analysis without any compromise on the reliability of subsequent clinical interpretations.
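Components (1)-(3) above can be sketched in a few lines. The abstract does not specify the recalibration model, so the linear fit below is an assumption, as is the top-10% uncertainty selection rule (the abstract only says roughly 10% of measurements, mostly high-uncertainty ones, were reviewed).

```python
import numpy as np

def select_for_review(uncertainty, frac=0.10):
    """Indices of the most uncertain automatic measurements (here the
    top 10%; illustrative selection rule)."""
    u = np.asarray(uncertainty, dtype=float)
    k = max(1, int(round(frac * len(u))))
    return np.argsort(u)[::-1][:k]

def recalibrate(reviewed_auto, reviewed_expert, unreviewed_auto):
    """Linear recalibration of unreviewed automatic QT values using the
    expert-corrected subset (one plausible realization, assumed here)."""
    a, b = np.polyfit(reviewed_auto, reviewed_expert, deg=1)
    return a * np.asarray(unreviewed_auto, dtype=float) + b
```

The appeal of this split is that expert time is spent where the model is least sure, while the systematic component of the model's error is corrected everywhere via the fitted mapping.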
Subjects
Electrocardiography, Humans, Electrocardiography/methods, Deep Learning, Computer-Assisted Signal Processing, Long QT Syndrome, Drug-Related Side Effects and Adverse Reactions/prevention & control, Clinical Trials as Topic
ABSTRACT
Aims: Ventricular tachycardia (VT) is a dangerous cardiac arrhythmia that can lead to sudden cardiac death. Early detection and management of VT are thus of high clinical importance. We hypothesize that it is possible to identify patients with VT during sinus rhythm by leveraging a continuous 24 h Holter electrocardiogram and artificial intelligence. Methods and results: We analysed a retrospective Holter dataset from the Rambam Health Care Campus, Haifa, Israel, which included 1773 Holter recordings from 1570 non-VT patients and 52 recordings from 49 VT patients. Morphological and heart rate variability features were engineered from the raw electrocardiogram signal and fed, together with demographic features, to a data-driven model for the task of classifying a patient as either VT or non-VT. The model obtained an area under the receiver operating characteristic curve of 0.76 ± 0.07. Feature importance suggested that the proportion of premature ventricular beats and beat-to-beat interval variability were discriminative of VT, whereas demographic features were not. Conclusion: This original study demonstrates the feasibility of VT identification from sinus rhythm in Holter recordings.
ABSTRACT
BACKGROUND: In Cardiovascular Magnetic Resonance (CMR), the synchronization of image acquisition with heart motion is performed in clinical practice by processing the electrocardiogram (ECG). The ECG-based synchronization is well established for MR scanners with magnetic fields up to 3 T. However, this technique is prone to errors in ultra high field environments, e.g. in 7 T MR scanners as used in research applications. The high magnetic fields cause severe magnetohydrodynamic (MHD) effects which disturb the ECG signal. Image synchronization is thus less reliable and yields artefacts in CMR images. METHODS: A strategy based on Independent Component Analysis (ICA) was pursued in this work to enhance the ECG contribution and attenuate the MHD effect. ICA was applied to 12-lead ECG signals recorded inside a 7 T MR scanner. An automatic source identification procedure was proposed to identify an independent component (IC) dominated by the ECG signal. The identified IC was then used for detecting the R-peaks. The presented ICA-based method was compared to other R-peak detection methods using 1) the raw ECG signal, 2) the raw vectorcardiogram (VCG), 3) the state-of-the-art gating technique based on the VCG, 4) an updated version of the VCG-based approach and 5) the ICA of the VCG. RESULTS: ECG signals from eight volunteers were recorded inside the MR scanner. Recordings with an overall length of 87 min accounting for 5457 QRS complexes were available for the analysis. The records were divided into a training and a test dataset. In terms of R-peak detection within the test dataset, the proposed ICA-based algorithm achieved a detection performance with an average sensitivity (Se) of 99.2%, a positive predictive value (+P) of 99.1%, with an average trigger delay and jitter of 5.8 ms and 5.0 ms, respectively. 
Long-term stability of the demixing matrix was shown based on two measurements of the same subject separated by one year, for which an average detection performance of Se = 99.4% and +P = 99.7% was achieved. Compared to the state-of-the-art VCG-based gating technique at 7 T, the proposed method increased the sensitivity and positive predictive value within the test dataset by 27.1% and 42.7%, respectively. CONCLUSIONS: The presented ICA-based method allows the estimation and identification of an IC dominated by the ECG signal. R-peak detection based on this IC outperforms the state-of-the-art VCG-based technique in a 7 T MR scanner environment.
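The abstract does not detail the automatic source identification procedure. One plausible criterion, sketched below purely for illustration, is to pick the independent component with the highest excess kurtosis, since a QRS-dominated component is strongly super-Gaussian (mostly flat with sharp spikes) while MHD and baseline components are smoother; the paper's actual selection rule may differ.

```python
import numpy as np

def excess_kurtosis(x):
    """Excess kurtosis of a 1D signal (0 for a Gaussian; large and
    positive for spiky, QRS-like components)."""
    z = (x - np.mean(x)) / np.std(x)
    return float(np.mean(z ** 4) - 3.0)

def pick_ecg_component(components):
    """Select the independent component most likely dominated by the
    ECG. `components` is an (n_ics, T) array of IC time courses;
    kurtosis scoring is an assumed criterion, not necessarily the
    identification procedure used in the paper."""
    scores = [excess_kurtosis(c) for c in components]
    return int(np.argmax(scores))
```

Once the ECG-dominated component is identified, standard R-peak detection can run on that single time course, which is what the reported Se/+P figures evaluate.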
Subjects
Cardiac-Gated Imaging Techniques/methods, Electrocardiography, Heart Rate, Magnetic Resonance Imaging, Myocardial Contraction, Computer-Assisted Signal Processing, Adult, Algorithms, Artifacts, Biomechanical Phenomena, Female, Healthy Volunteers, Humans, Male, Cardiovascular Models, Statistical Models, Predictive Value of Tests, Reproducibility of Results, Time Factors, Young Adult
ABSTRACT
Gastrointestinal (GI) potential mapping could be useful for evaluating GI motility disorders, which are found in inflammatory bowel diseases such as Crohn's disease, as well as in GI functional disorders. GI potential mapping data originate from a mixture of several GI electrophysiological sources (termed ExG) and other noise sources, including the electrocardiogram (ECG) and respiration. Denoising and/or source separation techniques are therefore required; with real measurements, however, no ground truth is available. In this paper we propose a framework for the simulation of body surface GI potential mapping data. The framework is an electrostatic model, based on the fecgsyn toolbox, using dipoles as electrical sources for the heart, stomach, small bowel, and colon, and an array of surface electrodes. It is shown to generate realistic ExG waveforms, which are then used to compare several ECG and respiration cancellation techniques based on fast independent component analysis (FastICA) and pseudo-periodic component analysis (PiCA). The best performance was obtained with PiCA, with a median root mean squared error of 0.005.
Subjects
Algorithms , Pica , Humans , Computer Simulation , Small Intestine , Electrodes
ABSTRACT
Rate-corrected QT interval (QTc) prolongation has been suggested as a biomarker for the risk of drug-induced torsades de pointes, and is therefore monitored during clinical trials for the assessment of drug safety. Manual QT measurements by expert ECG analysts are expensive, laborious, and prone to errors. Wavelet-based delineators and other automatic methods do not generalize well to different T-wave morphologies and may require laborious tuning. Our study investigates the robustness of convolutional neural networks (CNNs) for QT measurement. We trained 3 CNN-based deep learning models on a private ECG database with human expert-annotated QT intervals. Among these models, we propose a U-Net model, an architecture widely used for segmentation tasks, to build a novel clinically useful QT estimator that includes QT delineation for better interpretability. We tested the 3 models on four external databases, including a clinical trial investigating four drugs. Our results show that the deep learning models are in stronger agreement with the experts than the state-of-the-art wavelet-based algorithm: the deep learning models yielded up to 71% accurate QT measurements (absolute difference between manual and automatic QT below 15 ms), whereas the wavelet-based algorithm achieved only 52%. For the 2 studies of drugs with little to no QT-prolonging effect, a mean absolute difference of 6 ms (std = 5 ms) was obtained between the manual and deep learning methods. For the other 2 drugs, which had a more significant effect on the volunteers, a mean difference of up to 17 ms (std = 17 ms) was obtained. The proposed models are therefore promising for automated QT measurements during clinical trials. They can analyze various ECG morphologies from a diverse range of individuals, although some QT-prolonged ECGs remain challenging.
The U-Net model is particularly interesting for our application: by providing QRS onset and T-wave offset positions that are consistent with the estimated QT intervals, it facilitates the expert review of automatic QT measurements that regulatory bodies still require.
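Once a segmentation network has labelled each ECG sample, the delineation-based QT estimate described above reduces to simple arithmetic: QT = (T-wave offset − QRS onset) / fs. A minimal sketch, assuming a hypothetical per-sample class mask (the class numbering and function name are assumptions, not the paper's):

```python
import numpy as np

# Toy per-sample label mask, as a hypothetical segmentation network might output:
# 0 = background, 1 = QRS complex, 2 = T wave (class scheme is illustrative)
fs = 500                           # 500 Hz sampling, i.e. 2 ms per sample
mask = np.zeros(600, dtype=int)
mask[100:150] = 1                  # QRS from sample 100 to 149
mask[300:400] = 2                  # T wave ending at sample 399

def qt_from_mask(mask, fs):
    """QT interval (ms) = T-wave offset minus QRS onset."""
    qrs_onset = np.flatnonzero(mask == 1)[0]
    t_offset = np.flatnonzero(mask == 2)[-1] + 1
    return (t_offset - qrs_onset) * 1000.0 / fs

qt_ms = qt_from_mask(mask, fs)     # (400 - 100) samples at 2 ms each = 600 ms
```

Because the QRS onset and T-wave offset are explicit, an expert reviewer can verify each automatic QT value directly on the trace, which is the interpretability benefit noted above.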
Subjects
Electrocardiography , Long QT Syndrome , Humans , Electrocardiography/methods , Long QT Syndrome/chemically induced , Long QT Syndrome/diagnosis , Neural Networks (Computer)
ABSTRACT
To drive health innovation that meets the needs of all and to democratize healthcare, there is a need to assess the generalization performance of deep learning (DL) algorithms across various distribution shifts to ensure that these algorithms are robust. This retrospective study is, to the best of our knowledge, an original attempt to develop and assess the generalization performance of a DL model for AF event detection from long-term beat-to-beat intervals across geography, ages, and sexes. The new recurrent DL model, denoted ArNet2, is developed on a large retrospective dataset of 2,147 patients totaling 51,386 h of continuous electrocardiogram (ECG) recordings. The model's generalization is evaluated on manually annotated test sets from four centers (USA, Israel, Japan, and China) totaling 402 patients. The model is further validated on a retrospective dataset of 1,825 consecutive Holter recordings from Israel. The model outperforms benchmark state-of-the-art models and generalizes well across geography, ages, and sexes. For the event detection task, ArNet2's performance was higher for females than for males and higher for young adults (under 61 years old) than for other age groups, and it varied across geography: ArNet2 performed better on the test sets from the USA and China. The main factor explaining these variations is impaired performance in groups with a higher prevalence of atrial flutter (AFL). Our findings on the relative performance of ArNet2 across groups may have clinical implications for the choice of the preferred AF examination method relative to the group of interest.
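ArNet2's input representation, beat-to-beat (RR) intervals, can be illustrated with a classical irregularity feature such as RMSSD, which already separates an "irregularly irregular" AF-like rhythm from regular sinus rhythm. This sketch is purely illustrative of why RR intervals carry the AF signal; it is not the ArNet2 architecture, and all names and thresholds are assumptions:

```python
import numpy as np

def rr_irregularity(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences (ms),
    a classical surrogate for the beat-to-beat irregularity that AF produces."""
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

rng = np.random.default_rng(42)
sinus_rr = 800 + 20 * rng.standard_normal(200)    # regular rhythm, ~75 bpm
af_rr = 800 + 150 * rng.standard_normal(200)      # irregularly irregular rhythm

sinus_score = rr_irregularity(sinus_rr)
af_score = rr_irregularity(af_rr)
```

A recurrent model such as ArNet2 learns richer temporal patterns from these interval sequences than a single hand-crafted statistic, which is what allows it to distinguish AF from confounders such as ectopy, although, as the abstract notes, atrial flutter remains difficult because it can be far more regular than AF.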