ABSTRACT
Background: Recent studies have shown that epicardial adipose tissue (EAT) is an independent prognostic marker for atrial fibrillation (AF) and influences myocardial function. In computed tomography (CT), EAT volume (EATv) and density (EATd) are parameters often used to quantify EAT. While increased EATv has been found to correlate with the prevalence and recurrence of AF after ablation therapy, higher EATd correlates with inflammation due to arrested lipid maturation and with a high risk of plaque presence and plaque progression. Automating the quantification task reduces the inter-observer variability of manual quantification, yielding highly reproducible and less time-consuming analyses. Our objective is to develop a fully automated quantification of EATv and EATd using a deep learning (DL) framework. Methods: We propose a framework consisting of DL image classification and segmentation models that first selects the images containing EAT from all the CT images acquired for a patient, and then segments the EAT from the selected images. EATv and EATd are estimated using the segmentation masks to define the region of interest. For our experiments, a 300-patient dataset was divided into two subsets of 150 patients each: Dataset 1 (41,979 CT slices) for training the DL models, and Dataset 2 (36,428 CT slices) for evaluating the quantification of EATv and EATd. Results: The classification model achieved 98% for precision, recall, and F1 score, and the segmentation model achieved mean (±std.) and median Dice similarity coefficient scores of 0.844 (±0.19) and 0.84, respectively.
Using the evaluation set (Dataset 2), our approach achieved a Pearson correlation coefficient of 0.971 (R² = 0.943) between the label and predicted EATv, and a correlation coefficient of 0.972 (R² = 0.945) between the label and predicted EATd. Conclusions: The proposed framework provides a fast and robust strategy for accurate EAT segmentation and for volume (EATv) and attenuation (EATd) quantification. It will be useful to clinicians and other practitioners for carrying out reproducible EAT quantification at the patient level or for large cohorts and high-throughput projects.
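The two headline metrics above, the Dice similarity coefficient for segmentation and the Pearson correlation between label and predicted quantities, can be computed directly. The following is an illustrative sketch (not the authors' code); the toy masks and volume values are assumptions standing in for real CT data:

```python
import numpy as np

def dice_coefficient(pred, label):
    """Dice similarity coefficient between two binary masks."""
    pred, label = pred.astype(bool), label.astype(bool)
    denom = pred.sum() + label.sum()
    return 2.0 * np.logical_and(pred, label).sum() / denom if denom else 1.0

def pearson_r(x, y):
    """Pearson correlation between predicted and label quantities."""
    return float(np.corrcoef(x, y)[0, 1])

# Toy 4x4 masks: 4-pixel prediction vs 6-pixel label, 4 pixels overlapping.
pred = np.zeros((4, 4), dtype=int); pred[1:3, 1:3] = 1
label = np.zeros((4, 4), dtype=int); label[1:3, 1:4] = 1
dsc = dice_coefficient(pred, label)  # 2*4 / (4 + 6) = 0.8

# Toy per-patient label vs predicted volumes.
r = pearson_r(np.array([50.0, 80.0, 120.0]), np.array([52.0, 78.0, 119.0]))
r2 = r ** 2
```

In practice the masks would come from the segmentation model's output and the volume pairs from the per-patient EATv estimates; R² is then simply the square of the Pearson coefficient, as reported in the abstract.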
ABSTRACT
Myocarditis is a cardiovascular disease characterised by inflammation of the heart muscle, which can lead to heart failure. Its heterogeneity in mode of presentation, underlying aetiology, and clinical outcome, together with its impact on a wide range of age groups, creates diagnostic challenges. Cardiovascular magnetic resonance (CMR) is the preferred imaging modality in the diagnostic work-up of those with acute myocarditis, and there is a need for systematic analytical approaches to improve diagnosis. Artificial intelligence (AI) and machine learning (ML) are increasingly used in CMR and have been shown to match human diagnostic performance in multiple disease categories. In this review article, we describe the role of CMR in the diagnosis of acute myocarditis and then review the literature on applications of AI and ML to this diagnosis. Only a few papers were identified, with limitations in case and control sample sizes, a lack of detail regarding cohort characteristics, and an absence of relevant cardiovascular disease controls. Furthermore, the CMR datasets often did not include contemporary tissue characterisation parameters such as T1 and T2 mapping techniques, which are central to the diagnosis of acute myocarditis. Future work may include the use of explainability tools to enhance confidence in and understanding of machine learning models, together with large, better characterised cohorts and clinical context, to improve the diagnosis of acute myocarditis.
ABSTRACT
BACKGROUND: New-onset atrial fibrillation (NOAF) occurs in 5% to 15% of patients who undergo transfemoral transcatheter aortic valve replacement (TAVR). Cardiac imaging has been underutilized to predict NOAF following TAVR. OBJECTIVES: The objective of this analysis was to compare and assess standard, manual echocardiographic and cardiac computed tomography (cCT) measurements, as well as machine learning-derived cCT measurements of left atrial volume index and epicardial adipose tissue, as risk factors for NOAF following TAVR. METHODS: The study included 1,385 patients undergoing elective, transfemoral TAVR for severe, symptomatic aortic stenosis. Each patient had standard and machine learning-derived measurements of left atrial volume and epicardial adipose tissue from cCT. The outcome of interest was NOAF within 30 days following TAVR. We used a 2-step statistical model: random forest for variable importance ranking, followed by multivariable logistic regression for the predictors of highest importance. Model discrimination was assessed using the C-statistic to compare the performance of the models with and without imaging. RESULTS: Forty-seven (5.0%) of 935 patients without pre-existing atrial fibrillation (AF) experienced NOAF. Patients with pre-existing AF had the largest left atrial volume index at 76.3 ± 28.6 cm³/m², followed by NOAF at 68.1 ± 26.6 cm³/m² and then no AF at 57.0 ± 21.7 cm³/m² (P < 0.001). Multivariable regression identified the following risk factors associated with NOAF: left atrial volume index ≥76 cm³/m² (OR: 2.538 [95% CI: 1.165-5.531]; P = 0.0191), body mass index <22 kg/m² (OR: 4.064 [95% CI: 1.500-11.008]; P = 0.0058), epicardial adipose tissue volume (OR: 1.007 [95% CI: 1.000-1.014]; P = 0.043), aortic annulus area ≥659 mm² (OR: 6.621 [95% CI: 1.849-23.708]; P = 0.004), and sinotubular junction diameter ≥35 mm (OR: 3.891 [95% CI: 1.040-14.552]; P = 0.0435).
The C-statistic of the model was 0.737, compared with 0.646 in a model that excluded imaging variables. CONCLUSIONS: Underlying cardiac structural differences derived from cardiac imaging may be useful in predicting NOAF following transfemoral TAVR, independent of other clinical risk factors.
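The 2-step model described above (random forest for variable importance ranking, then logistic regression on the top-ranked predictors, evaluated with the C-statistic) can be sketched as follows. This is an illustrative sketch only: the synthetic feature table, number of retained predictors, and hyperparameters are assumptions, not the study's data or settings. For a binary outcome, the C-statistic equals the area under the ROC curve:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the clinical/imaging feature table.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: random forest ranks variables by importance.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:5]  # 5 most important

# Step 2: logistic regression on the top-ranked predictors only.
logit = LogisticRegression(max_iter=1000).fit(X_tr[:, top], y_tr)

# C-statistic (area under the ROC curve) on held-out data.
c_stat = roc_auc_score(y_te, logit.predict_proba(X_te[:, top])[:, 1])
```

Comparing `c_stat` between models fit with and without the imaging columns mirrors the 0.737-versus-0.646 comparison reported above.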
Subjects
Aortic Valve Stenosis , Atrial Fibrillation , Machine Learning , Transcatheter Aortic Valve Replacement , Humans , Transcatheter Aortic Valve Replacement/adverse effects , Atrial Fibrillation/surgery , Atrial Fibrillation/diagnostic imaging , Female , Male , Aged , Aged, 80 and over , Aortic Valve Stenosis/surgery , Aortic Valve Stenosis/diagnostic imaging , Risk Factors , Echocardiography , Tomography, X-Ray Computed , Heart Atria/diagnostic imaging , Heart Atria/anatomy & histology , Postoperative Complications/diagnostic imaging , Postoperative Complications/epidemiology
ABSTRACT
AIMS: Left ventricular systolic dysfunction (LVSD) is a heterogeneous condition with several factors influencing prognosis. Better phenotyping of asymptomatic individuals can inform preventative strategies. This study aims to explore the clinical phenotypes of LVSD in initially asymptomatic subjects and their association with clinical outcomes and cardiovascular abnormalities through multi-dimensional data clustering. METHODS AND RESULTS: Clustering analysis was performed on 60 clinically available variables from 1563 UK Biobank participants without pre-existing heart failure (HF) and with left ventricular ejection fraction (LVEF) < 50% on cardiovascular magnetic resonance (CMR) assessment. The risks of developing HF, other cardiovascular events, death, and a composite of major adverse cardiovascular events (MACE) associated with the clusters were investigated. Cardiovascular imaging characteristics not included in the clustering analysis were also evaluated. Three distinct clusters were identified, differing considerably in lifestyle habits, cardiovascular risk factors, electrocardiographic parameters, and cardiometabolic profiles. A stepwise increase in risk profile was observed from Cluster 1 to Cluster 3, independent of traditional risk factors and LVEF. Compared with Cluster 1, the lowest risk subset, the risk of MACE ranged from 1.42 [95% confidence interval (CI): 1.03-1.96; P < 0.05] for Cluster 2 to 1.72 (95% CI: 1.36-2.35; P < 0.001) for Cluster 3. Cluster 3, the highest risk profile, had adverse cardiovascular imaging features, with the greatest LV remodelling, myocardial dysfunction, and decrease in arterial compliance. CONCLUSIONS: Clustering of clinical variables identified three distinct risk profiles and clinical trajectories of LVSD amongst initially asymptomatic subjects. Improved characterization may facilitate tailored interventions based on the LVSD sub-type and improve clinical outcomes.
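The clustering step described above can be sketched minimally. The abstract does not state which algorithm was used, so k-means on standardized variables is an assumption here, and the synthetic three-group matrix merely stands in for the 60 clinical variables:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Synthetic stand-in for 60 clinical variables: three latent risk groups
# whose variable means are shifted relative to one another.
X = np.vstack([rng.normal(loc=m, size=(100, 60)) for m in (-1.0, 0.0, 1.0)])

# Standardize each variable, then partition participants into three clusters.
X_scaled = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_scaled)
sizes = np.bincount(labels)  # participants per cluster
```

In the study's workflow, the resulting cluster labels would then feed into survival models to estimate the per-cluster MACE hazards quoted above.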
Subjects
Heart Failure , Ventricular Dysfunction, Left , Humans , Ventricular Function, Left , Stroke Volume , Risk Factors , Prognosis , Risk Assessment
ABSTRACT
Background: Traditional risk scores for recurrent atrial fibrillation (AF) following catheter ablation utilize readily available clinical and echocardiographic variables yet have limited discriminatory capacity. Use of data from cardiac imaging and deep learning may help improve the accuracy of prediction of recurrent AF after ablation. Methods: We evaluated patients with symptomatic, drug-refractory AF undergoing catheter ablation. All patients underwent pre-ablation cardiac computed tomography (cCT). Left atrial volume index (LAVi) was computed using a deep-learning algorithm. In a two-step analysis, random survival forest (RSF) was used to generate prognostic models with the variables of highest importance, followed by Cox proportional hazards regression analysis of the selected variables. Events of interest included early and late recurrence. Results: Among 653 patients undergoing AF ablation, the most important factors associated with late recurrence by RSF analysis at 24 (±18) months of follow-up included LAVi and early recurrence. In total, 5 covariates were identified as independent predictors of late recurrence: LAVi (HR per mL/m² 1.01 [1.01-1.02]; p < .001), early recurrence (HR 2.42 [1.90-3.09]; p < .001), statin use (HR 1.38 [1.09-1.75]; p = .007), beta-blocker use (HR 1.29 [1.01-1.65]; p = .043), and adjunctive cavotricuspid isthmus ablation (HR 0.74 [0.57-0.96]; p = .02). Survival analysis demonstrated that patients with both LAVi >66.7 mL/m² and early recurrence had the highest risk of late recurrence compared with those with LAVi <66.7 mL/m² and no early recurrence (HR 4.52 [3.36-6.08], p < .001). Conclusions: Machine learning-derived, full volumetric LAVi from cCT is the most important pre-procedural risk factor for late AF recurrence following catheter ablation. The combination of increased LAVi and early recurrence confers a more than four-fold increased risk of late recurrence.
ABSTRACT
Cardiovascular magnetic resonance (CMR) is an important cardiac imaging tool for assessing the prognostic extent of myocardial injury after myocardial infarction (MI). Within the context of clinical trials, CMR is also useful for assessing the efficacy of potential cardioprotective therapies in reducing MI size and preventing adverse left ventricular (LV) remodelling in reperfused MI. However, manual contouring and analysis can be time-consuming, with inter-observer and intra-observer variability that can in turn reduce the accuracy and precision of analysis. There is thus a need to automate CMR scan analysis in MI patients to save time and increase accuracy, reproducibility, and precision. In this regard, automated image analysis techniques based on artificial intelligence (AI), developed with machine learning (ML) and more specifically deep learning (DL) strategies, can enable efficient, robust, accurate, and clinician-friendly tools to be built, improving both clinician productivity and the quality of patient care. In this review, we discuss basic concepts of ML in CMR, important prognostic CMR imaging biomarkers in MI, and the utility of current ML applications in their analysis as assessed in research studies. We highlight potential barriers to the mainstream implementation of these automated strategies and discuss related governance and quality control issues. Lastly, we discuss the future role of ML applications in clinical trials and the need for global collaboration in growing this field.
Subjects
Artificial Intelligence , Myocardial Infarction , Humans , Reproducibility of Results , Myocardial Infarction/diagnostic imaging , Myocardial Infarction/therapy , Magnetic Resonance Imaging/methods , Ventricular Remodeling
ABSTRACT
Objectives: Currently, administering contrast agents is necessary for accurately visualizing and quantifying the presence, location, and extent of myocardial infarction (MI) with cardiac magnetic resonance (CMR). In this study, our objective is to investigate and analyze pre- and post-contrast CMR images with the goal of predicting post-contrast information from pre-contrast information only. We propose methods and identify challenges. Methods: The study population consists of 272 retrospectively selected CMR studies with diagnoses of MI (n = 108) and healthy controls (n = 164). We describe a pipeline for pre-processing this dataset for analysis. After data feature engineering, 722 cine short-axis (SAX) image and segmentation mask pairs were used for experimentation, comprising 506, 108, and 108 pairs for the training, validation, and testing sets, respectively. We use deep learning (DL) segmentation (UNet) and classification (ResNet50) models to discover the extent and location of the scar and to classify pre-contrast cine SAX image frames as ischemic cases or healthy cases (i.e., cases with no regional myocardial scar). We then capture complex data patterns representing subtle signal and functional changes in the cine SAX images due to MI using optical flow, the rate of change of myocardial area, and radiomics data. We apply this dataset to explore two supervised learning methods, support vector machines (SVM) and decision trees (DT), to develop predictive models for classifying pre-contrast cine SAX images as MI or healthy. Results: Overall, for the UNet segmentation model, the performance based on the mean Dice score for the test set (n = 108) is 0.75 (±0.20) for the endocardium, 0.51 (±0.21) for the epicardium, and 0.20 (±0.17) for the scar.
For the classification task, accuracy, F1, and precision scores of 0.68, 0.69, and 0.64, respectively, were achieved with the SVM model, and scores of 0.62, 0.63, and 0.72 with the DT model. Conclusion: We have presented some promising approaches involving DL, SVM, and DT methods in an attempt to accurately predict contrast information from non-contrast images. While our initial results are modest for this challenging task, this area of research still poses several open problems.
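The SVM-versus-DT comparison above can be sketched as follows. This is an illustrative sketch, not the study's pipeline: the synthetic feature matrix stands in for the engineered optical-flow, myocardial-area, and radiomics features, and the default model settings are assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, f1_score, precision_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the engineered cine-SAX features
# (optical flow, rate of change of myocardial area, radiomics).
X, y = make_classification(n_samples=500, n_features=30, n_informative=8,
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Fit both classifiers and collect the three metrics reported in the abstract.
scores = {}
for name, model in [("SVM", SVC()),
                    ("DT", DecisionTreeClassifier(random_state=1))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    scores[name] = {"accuracy": accuracy_score(y_te, pred),
                    "f1": f1_score(y_te, pred),
                    "precision": precision_score(y_te, pred)}
```

Reporting the same metric triple for both models, as done above, makes the trade-off visible: one model may win on precision while the other wins on accuracy and F1.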
ABSTRACT
OBJECTIVES: Cardiac computed tomography (CCT) is a common pre-operative imaging modality for evaluating pulmonary vein anatomy and left atrial appendage thrombus in patients undergoing catheter ablation (CA) for atrial fibrillation (AF). These images also allow full volumetric left atrium (LA) measurement for recurrence risk stratification, as larger LA volume (LAV) is associated with higher recurrence rates. Our objective is to apply deep learning (DL) techniques to fully automate the computation of LAV and to assess the quality of the computed LAV values. METHODS: Using a dataset of 85,477 CCT images from 337 patients, we propose a framework that performs a combination of tasks: selecting the images containing the LA from all other images using a ResNet50 classification model, segmenting those images with a UNet image segmentation model, assessing the quality of the image segmentation, estimating LAV, and quality control (QC) assessment. RESULTS: Overall, the proposed LAV estimation framework achieved 98% (precision, recall, and F1 score) in the image classification task, 88.5% (mean Dice score) in the image segmentation task, 82% (mean Dice score) in the segmentation quality prediction task, and an R² (coefficient of determination) value of 0.968 in the volume estimation task. It correctly identified 9 out of 10 poor LAV estimations from a total of 337 patients as poor-quality estimates. CONCLUSIONS: We propose a generalizable framework consisting of DL models and computational methods for LAV estimation. The framework provides an efficient and robust strategy for QC assessment of the accuracy of DL-based image segmentation and volume estimation tasks, making high-throughput extraction of reproducible LAV measurements possible.
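Once a segmentation mask stack is available, the volume estimation step reduces to voxel counting, and the estimates can be evaluated with the coefficient of determination (R²) reported above. A hedged sketch with a toy mask follows; the voxel spacing and the volume pairs are illustrative assumptions, not values from the study:

```python
import numpy as np

def volume_ml(mask_stack, spacing_mm):
    """Volume of a binary mask stack (slices, H, W) in millilitres."""
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask_stack.sum()) * voxel_mm3 / 1000.0  # mm^3 -> mL

def r_squared(y_true, y_pred):
    """Coefficient of determination between label and predicted volumes."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy stack: 10 slices, each with a 20x20-pixel LA region;
# assumed spacing of 0.5 x 0.5 mm in-plane and 2.0 mm between slices.
mask = np.zeros((10, 64, 64), dtype=np.uint8)
mask[:, 10:30, 10:30] = 1
vol = volume_ml(mask, (0.5, 0.5, 2.0))  # 4000 voxels * 0.5 mm^3 = 2.0 mL

# R^2 between toy label and predicted volumes.
r2 = r_squared(np.array([40.0, 60.0, 90.0]), np.array([42.0, 58.0, 91.0]))
```

In a real pipeline the spacing would be read from the DICOM headers of each CCT series rather than hard-coded.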
ABSTRACT
COVID-19 has caused enormous suffering, affecting lives and causing deaths. The ease with which this type of coronavirus can spread has exposed weaknesses in many healthcare systems around the world. Since its emergence, many governments, research communities, commercial enterprises, and other institutions and stakeholders around the world have been fighting in various ways to curb the spread of the disease. Science and technology have helped in implementing many government policies directed toward mitigating the impacts of the pandemic and in diagnosing and providing care for the disease. Recent technological tools, artificial intelligence (AI) tools in particular, have also been explored to track the spread of the coronavirus, identify patients at high mortality risk, and diagnose the disease. In this paper, areas where AI techniques are being used in detection, diagnosis, epidemiological prediction, forecasting, and social control for combating COVID-19 are discussed, highlighting successful applications and underscoring issues that must be addressed to achieve significant progress in battling COVID-19 and future pandemics. Several AI systems have been developed for diagnosing COVID-19 using medical imaging modalities such as chest CT and X-ray images. These AI systems differ mainly in their choice of algorithms for image segmentation, classification, and disease diagnosis. Other AI-based systems have focused on predicting mortality rate, long-term patient hospitalization, and patient outcomes for COVID-19.
AI has huge potential in the battle against the COVID-19 pandemic, but successful practical deployments of these AI-based tools have so far been limited due to challenges such as limited data accessibility, the need for external evaluation of AI models, AI experts' limited awareness of the regulatory landscape governing the deployment of AI tools in healthcare, the need for clinicians and other experts to work with AI experts in a multidisciplinary context, and the need to address public concerns over data collection, privacy, and protection. Having a dedicated team with expertise in medical data collection, privacy, access, and sharing; using federated learning, whereby AI scientists hand over training algorithms to healthcare institutions to train models locally; and taking full advantage of biomedical data stored in biobanks can alleviate some of the problems posed by these challenges. Addressing these challenges will ultimately accelerate the translation of AI research into practical and useful solutions for combating pandemics.
ABSTRACT
Artificial intelligence (AI) using machine learning techniques will change healthcare as we know it. While healthcare AI applications currently trail popular AI applications such as personalized web-based advertising, the pace of research and deployment is picking up and about to become disruptive. Overcoming challenges such as patient and public support, transparency over the legal basis for healthcare data use, privacy preservation, technical challenges related to accessing large-scale data from healthcare systems not designed for Big Data analysis, and deployment of AI in routine clinical practice will be crucial. Cardiac imaging, like imaging of other body parts, is likely to be at the frontier for the development of applications, as pattern recognition and machine learning are significant strengths of AI with practical links to image processing. Many opportunities exist in cardiac imaging where AI will impact patients, medical staff, hospitals, commissioners, and thus the entire healthcare system. This perspective article outlines our vision for AI in cardiac imaging, with examples of potential applications, challenges, and some lessons learnt in recent years.
ABSTRACT
Patients with chronic kidney disease (CKD) have significantly increased morbidity and mortality resulting from infections and cardiovascular diseases. Since monocytes play an essential role in host immunity, this study explored their gene expression profile to identify differences in activated pathways relevant to the pathophysiology of atherosclerosis and the increased susceptibility to infections. Monocytes from CKD patients (stages 4 and 5, estimated GFR <20 ml/min/1.73 m²) and healthy donors were collected from peripheral blood. Microarray gene expression profiling was performed, and the data were interpreted with GeneSpring software and the PANTHER tool. Western blotting was performed to validate pathway members. The results demonstrated that 600 genes were differentially upregulated and 272 downregulated in the patient group. Pathways involved in the inflammatory response were highly expressed, and the Wnt/β-catenin signaling pathway was the most significantly expressed pathway in the patient group. Since this pathway has been attributed to a variety of inflammatory manifestations, the current findings may contribute to explaining dysfunctional monocytes in CKD patients. Strategies to interfere with this pathway may improve host immunity and prevent cardiovascular complications in CKD patients.
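Differentially up- and downregulated genes, as counted above, are typically identified by combining a fold-change cut-off with a per-gene statistical test. The following is a minimal, illustrative sketch on synthetic data; the thresholds, sample sizes, and spiked-in effect sizes are assumptions, not the study's GeneSpring analysis:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes = 1000
# Synthetic log2 expression matrices (genes x samples): healthy donors
# versus patients, with some genes deliberately shifted up or down.
healthy = rng.normal(8.0, 1.0, size=(n_genes, 10))
patient = rng.normal(8.0, 1.0, size=(n_genes, 10))
patient[:50] += 2.0    # spiked-in upregulated genes
patient[50:80] -= 2.0  # spiked-in downregulated genes

# Values are on a log2 scale, so a mean difference is a log2 fold change.
log2_fc = patient.mean(axis=1) - healthy.mean(axis=1)
pvals = stats.ttest_ind(patient, healthy, axis=1, equal_var=False).pvalue

# Call a gene differentially expressed if |log2 FC| > 1 and p < 0.01.
up = int(np.sum((log2_fc > 1.0) & (pvals < 0.01)))
down = int(np.sum((log2_fc < -1.0) & (pvals < 0.01)))
```

Real microarray analyses add normalization and multiple-testing correction (e.g. false discovery rate control) on top of this basic fold-change-plus-test scheme.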