Results 1 - 8 of 8
1.
Life (Basel) ; 12(6), 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35743873

ABSTRACT

An electrocardiogram (ECG) consists of distinct waveforms (the P wave, the QRS complex, and the T wave) that represent the electrical activity within the heart. The time intervals between these waves and their morphological appearance are the principal measurements used to detect cardiac abnormalities from ECG signals. The focus of this study is to classify five different types of heartbeats, namely premature ventricular contraction (PVC), left bundle branch block (LBBB), right bundle branch block (RBBB), PACE, and atrial premature contraction (APC), to identify the exact condition of the heart. Prior to classification, extensive feature-extraction experiments were performed to identify specific events in the ECG signals, such as the P wave, QRS complex, and T wave. This study proposed a fusion technique, dual event-related moving average (DERMA) combined with the fractional Fourier-transform algorithm (FrlFT), to identify normal and abnormal morphological events in ECG signals. The purpose of the DERMA fusion technique is to analyze specific areas of interest around ECG peaks to identify the desired locations, whereas the FrlFT analyzes the ECG waveform in the time-frequency plane. Furthermore, the detected highest and lowest components of the ECG signal, such as peaks, the time intervals between peaks, and other necessary parameters, were used to develop an automatic model. In the last stage of the experiment, two supervised learning models, namely a support vector machine and K-nearest neighbor, were trained to classify the cardiac condition from the ECG signals. Two datasets were used in this experiment: MIT-BIH Arrhythmia, with 48 subjects, and the newly released Shaoxing and Ningbo People's Hospital (SPNH) database, which contains over 10,000 patients. The experimental setup achieved around 99.99% accuracy, 99.96% sensitivity, and 99.9% specificity.
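
A minimal Python sketch of the kind of pipeline this abstract describes: a dual moving-average peak detector (in the spirit of DERMA), simple interval features, and the two classifiers named above. The window lengths, the synthetic signal, the feature set, and all function names are illustrative assumptions; the published DERMA/FrlFT implementation is not reproduced here.

```python
# Illustrative sketch only: dual moving-average peak detection plus
# interval-based beat classification. Window lengths and data are assumed.
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def moving_average(x, w):
    return np.convolve(x, np.ones(w) / w, mode="same")

def detect_peaks(ecg, fs, short_ms=97, long_ms=611):
    """Mark samples where a short moving average exceeds a long one,
    then keep the maximum of each marked region as a candidate R peak."""
    short = moving_average(np.abs(ecg), int(short_ms * fs / 1000))
    long_ = moving_average(np.abs(ecg), int(long_ms * fs / 1000))
    blocks = short > long_
    peaks, start = [], None
    for i, on in enumerate(blocks):
        if on and start is None:
            start = i
        elif not on and start is not None:
            peaks.append(start + int(np.argmax(ecg[start:i])))
            start = None
    return np.array(peaks)

def rr_features(peaks, fs):
    """Simple interval features per beat: RR before/after and their ratio."""
    rr = np.diff(peaks) / fs
    return np.column_stack([rr[:-1], rr[1:], rr[1:] / rr[:-1]])

# Placeholder signal and labels, only to show the flow end to end.
fs = 360
ecg = np.random.randn(fs * 60)
peaks = detect_peaks(ecg, fs)
X = rr_features(peaks, fs)
y = np.random.randint(0, 5, len(X))            # 5 beat classes in the paper
for clf in (SVC(kernel="rbf"), KNeighborsClassifier(n_neighbors=3)):
    clf.fit(X, y)
```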

2.
J Med Syst ; 42(12): 252, 2018 Nov 05.
Article in English | MEDLINE | ID: mdl-30397730

ABSTRACT

Electrocardiography (ECG) sensors play a vital role in the Internet of Medical Things, helping to monitor the electrical activity of the heart. ECG signal analysis can improve human life in many ways, from diagnosing diseases among cardiac patients to managing the lifestyles of diabetic patients. Abnormalities in heart activity lead to different cardiac diseases and arrhythmias. However, some cardiac diseases, such as myocardial infarction (MI) and atrial fibrillation (Af), require special attention due to their direct impact on human life. The classification of flattened T wave cases of MI in ECG signals, and how similar these cases are to ST-T changes in MI, remains an open issue for researchers. This article presents a novel contribution to classifying MI and Af. To this end, we propose a new approach called deep deterministic learning (DDL), which works by combining predefined heart activities with fused datasets. In this research, we used two datasets. The first dataset, Massachusetts Institute of Technology-Beth Israel Hospital, is publicly available, and we exclusively obtained the second dataset from the University of Malaya Medical Center, Kuala Lumpur, Malaysia. We first applied predefined activities to each individual dataset to recognize patterns between ST-T change and flattened T wave cases, and then used a data fusion approach to merge both datasets in a manner that delivers the most accurate pattern recognition results. The proposed DDL approach is a systematic stage-wise methodology that relies on accurate detection of R peaks in ECG signals, time-domain features of ECG signals, and fine-tuning of artificial neural networks. The empirical evaluation shows high accuracy (up to 99.97%) in pattern matching of ST-T changes and flattened T waves using the proposed DDL approach. The proposed pattern recognition approach is a significant contribution to the diagnosis of special cases of MI.


Subject(s)
Atrial Fibrillation/diagnosis, Deep Learning, Electrocardiography/methods, Image Processing, Computer-Assisted/methods, Myocardial Infarction/diagnosis, Pattern Recognition, Automated/methods, Atrial Fibrillation/pathology, Female, Humans, Internet, Male, Myocardial Infarction/pathology, Neural Networks, Computer
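
As a rough illustration of the stages named in the abstract above (R-peak detection, time-domain features, and a tuned neural network), the sketch below wires those steps together with generic tools. The peak-detection parameters, the feature set, the placeholder data, and the network size are assumptions, not the published DDL method.

```python
# Illustrative sketch only: R-peak detection, time-domain features,
# and a small neural-network classifier on placeholder data.
import numpy as np
from scipy.signal import find_peaks
from sklearn.neural_network import MLPClassifier

def beat_features(ecg, fs):
    """Time-domain features per detected beat: RR interval and peak amplitude."""
    peaks, _ = find_peaks(ecg, distance=int(0.25 * fs), prominence=0.5)
    rr = np.diff(peaks) / fs
    amp = ecg[peaks[1:]]
    return np.column_stack([rr, amp])

fs = 360
ecg = np.cumsum(np.random.randn(fs * 120)) * 0.01 + np.random.randn(fs * 120)
X = beat_features(ecg, fs)
y = np.random.randint(0, 2, len(X))            # placeholder MI-vs-Af labels

# The "fine-tuning of artificial neural networks" is reduced here to a single MLP fit.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500)
clf.fit(X, y)
```
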
3.
Cardiol Res Pract ; 2018: 2016282, 2018.
Article in English | MEDLINE | ID: mdl-29507812

ABSTRACT

Coronary artery disease (CAD) is the most dangerous heart disease and may lead to sudden cardiac death. However, CAD diagnosis is an expensive and time-consuming procedure that a patient needs to go through. The aim of our paper is to present a unique review of state-of-the-art methods up to 2017 for automatic CAD classification. The review protocol is to identify the best methods and classifiers for CAD identification. The study proposes two workflows based on two parameter sets, A and B; following a proper procedure is necessary for the future evaluation of automatic CAD diagnosis. The first two stages of the parameter-set-A workflow are preprocessing and feature extraction; the subsequent stages (feature selection and classification) are the same for both workflows. In the literature, the SVM classifier represents a promising approach for CAD classification. The main remaining limitation is extracting proper features from noninvasive signals.
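
The four-stage workflow described in this review (preprocessing, feature extraction, feature selection, classification with an SVM) maps naturally onto a scikit-learn pipeline; the sketch below shows that shape only. The synthetic data and the particular scaler and selector are assumptions, not components prescribed by the review.

```python
# Illustrative sketch of a preprocessing -> feature selection -> SVM workflow.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

X = np.random.randn(200, 30)                   # placeholder extracted features
y = np.random.randint(0, 2, 200)               # placeholder CAD / non-CAD labels

pipeline = Pipeline([
    ("scale", StandardScaler()),               # preprocessing
    ("select", SelectKBest(f_classif, k=10)),  # feature selection
    ("clf", SVC(kernel="rbf")),                # classification
])
print(cross_val_score(pipeline, X, y, cv=5).mean())
```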

4.
Med Biol Eng Comput ; 54(2-3): 385-99, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26081904

ABSTRACT

Tuberculosis is a major global health problem that has been ranked as the second leading cause of death from an infectious disease worldwide, after the human immunodeficiency virus. Diagnosis based on cultured specimens is the reference standard; however, results take weeks to obtain. Slow and insensitive diagnostic methods have hampered the global control of tuberculosis, and scientists are looking for early detection strategies, which remain the foundation of tuberculosis control. Consequently, there is a need to develop an expert system that helps medical professionals to accurately diagnose the disease. The objective of this study is to diagnose tuberculosis using a machine learning method. The artificial immune recognition system (AIRS) has been used successfully for diagnosing various diseases. However, little effort has been made to improve its classification accuracy. To increase the classification accuracy, this study introduces a new hybrid system that incorporates a real tournament selection mechanism into AIRS. This mechanism is used to control the population size of the model and to overcome the existing selection pressure. Patient epacris reports obtained from the Pasteur laboratory in northern Iran were used as the benchmark data set. The sample consisted of 175 records, of which 114 (65%) were positive for TB and the remaining 61 (35%) were negative. Classification performance was measured through tenfold cross-validation, root-mean-square error (RMSE), sensitivity, and specificity. With an accuracy of 100%, an RMSE of 0, a sensitivity of 100%, and a specificity of 100%, the proposed method was able to classify tuberculosis cases successfully. In addition, the proposed method is comparable with the top classifiers used in this research.


Subject(s)
Algorithms, Artificial Intelligence, Expert Systems, Pattern Recognition, Automated, Tuberculosis/diagnosis, Humans
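
A minimal sketch of how a real tournament selection mechanism can cap a population of memory cells, which is the role the abstract assigns to it inside AIRS. The affinity function, tournament size, and placeholder data are assumptions, not the published hybrid.

```python
# Illustrative sketch only: tournament selection reducing a memory-cell population.
import numpy as np

def affinity(cell, antigen):
    """Higher when the cell is closer to the antigen (inverse Euclidean distance)."""
    return 1.0 / (1.0 + np.linalg.norm(cell - antigen))

def tournament_select(cells, antigen, keep, tournament_size=3, rng=None):
    """Repeatedly hold small tournaments and keep the winners until the
    population is reduced to `keep` cells."""
    rng = rng or np.random.default_rng()
    survivors, pool = [], list(range(len(cells)))
    while len(survivors) < keep and pool:
        entrants = rng.choice(pool, size=min(tournament_size, len(pool)), replace=False)
        winner = max(entrants, key=lambda i: affinity(cells[i], antigen))
        survivors.append(winner)
        pool.remove(winner)
    return cells[survivors]

cells = np.random.randn(50, 4)                 # placeholder memory-cell population
antigen = np.random.randn(4)                   # placeholder training instance
reduced = tournament_select(cells, antigen, keep=20)
```
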
5.
PLoS One ; 10(12): e0144059, 2015.
Article in English | MEDLINE | ID: mdl-26658987

ABSTRACT

Similarity or distance measures are core components used by distance-based clustering algorithms to place similar data points in the same clusters, while dissimilar or distant data points are placed in different clusters. The performance of similarity measures has mostly been addressed in two- or three-dimensional spaces, beyond which, to the best of our knowledge, no empirical study has revealed the behavior of similarity measures when dealing with high-dimensional datasets. To fill this gap, a technical framework is proposed in this study to analyze, compare, and benchmark the influence of different similarity measures on the results of distance-based clustering algorithms. For reproducibility, fifteen publicly available datasets were used, so that future distance measures can be evaluated and compared with the results of the measures discussed in this work. These datasets were classified into low- and high-dimensional categories to study the performance of each measure against each category. This research should help the research community identify suitable distance measures for datasets and also facilitate the comparison and evaluation of newly proposed similarity or distance measures against traditional ones.


Subject(s)
Algorithms, Data Mining/statistics & numerical data, Datasets as Topic, Analysis of Variance, Benchmarking, Cluster Analysis
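
The comparison performed by such a framework can be illustrated in a few lines: cluster one labeled dataset under several distance measures and score each result against the known labels. The sketch below uses the iris data and average-linkage hierarchical clustering as stand-ins; the paper's fifteen datasets, algorithms, and measures are broader.

```python
# Illustrative sketch only: compare distance measures via clustering quality.
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist
from sklearn.datasets import load_iris
from sklearn.metrics import adjusted_rand_score

X, y = load_iris(return_X_y=True)
for metric in ("euclidean", "cityblock", "cosine", "chebyshev"):
    dists = pdist(X, metric=metric)            # condensed pairwise distance matrix
    labels = fcluster(linkage(dists, method="average"), t=3, criterion="maxclust")
    print(metric, round(adjusted_rand_score(y, labels), 3))
```
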
6.
Iran Red Crescent Med J ; 17(4): e24557, 2015 Apr.
Article in English | MEDLINE | ID: mdl-26023340

ABSTRACT

BACKGROUND: Tuberculosis (TB) is a major global health problem, which has been ranked as the second leading cause of death from an infectious disease worldwide. Diagnosis based on cultured specimens is the reference standard; however, results take weeks to process. Scientists are looking for early detection strategies, which remain the cornerstone of tuberculosis control. Consequently, there is a need to develop an expert system that helps medical professionals to accurately and quickly diagnose the disease. The Artificial Immune Recognition System (AIRS) has been used successfully for diagnosing various diseases. However, little effort has been made to improve its classification accuracy. OBJECTIVES: In order to increase the classification accuracy of AIRS, this study introduces a new hybrid system that incorporates a support vector machine into AIRS for diagnosing tuberculosis. PATIENTS AND METHODS: Patient epacris reports obtained from the Pasteur laboratory of Iran were used as the benchmark data set, with a sample size of 175 (114 positive samples for TB and 60 samples in the negative group). The strategy of this study was to ensure representativeness, so it was important to have an adequate number of instances for both TB and non-TB cases. Classification performance was measured through 10-fold cross-validation, Root Mean Squared Error (RMSE), sensitivity and specificity, Youden's Index, and Area Under the Curve (AUC). Statistical analysis was done using the Waikato Environment for Knowledge Analysis (WEKA), a machine learning program for Windows. RESULTS: With an accuracy of 100%, sensitivity of 100%, specificity of 100%, Youden's Index of 1, Area Under the Curve of 1, and RMSE of 0, the proposed method was able to successfully classify tuberculosis patients. CONCLUSIONS: Many studies have aimed at diagnosing tuberculosis faster and more accurately. Our results describe a model for diagnosing tuberculosis with 100% sensitivity and 100% specificity. This model can be used as an additional tool for experts in medicine to diagnose TB more accurately and quickly.
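
A short sketch of how the evaluation measures listed in the abstract (10-fold cross-validation, sensitivity, specificity, Youden's Index, AUC, and RMSE) can be computed for a stand-in classifier. The synthetic data and the SVM settings are placeholders, not the clinical dataset or the hybrid SVM/AIRS model.

```python
# Illustrative sketch only: the evaluation measures on placeholder data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import confusion_matrix, roc_auc_score

X = np.random.randn(175, 6)                    # placeholder features (175 cases)
y = np.random.randint(0, 2, 175)               # placeholder TB / non-TB labels

clf = SVC(kernel="rbf", probability=True)
pred = cross_val_predict(clf, X, y, cv=10)
prob = cross_val_predict(clf, X, y, cv=10, method="predict_proba")[:, 1]

tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print("sensitivity", sensitivity)
print("specificity", specificity)
print("Youden's Index", sensitivity + specificity - 1)
print("AUC", roc_auc_score(y, prob))
print("RMSE", np.sqrt(np.mean((y - pred) ** 2)))
```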

7.
Sensors (Basel) ; 15(2): 4430-69, 2015 Feb 13.
Article in English | MEDLINE | ID: mdl-25688592

ABSTRACT

The staggering growth in smartphone and wearable device use has led to the generation of personal (user-specific) data on a massive scale. To explore, analyze, and extract useful information and knowledge from this deluge of personal data, one has to leverage these devices as data-mining platforms in ubiquitous, pervasive, and big data environments. This study presents the personal ecosystem, in which all computational resources, communication facilities, storage, and knowledge management systems are available in the user's proximity. An extensive review of the recent literature has been conducted, and a detailed taxonomy is presented. The performance evaluation metrics and the corresponding empirical evidence are summarized in this paper. Finally, we highlight some future research directions and potentially emerging application areas for personal data mining using smartphones and wearable devices.

8.
ScientificWorldJournal ; 2014: 926020, 2014.
Article in English | MEDLINE | ID: mdl-25110753

ABSTRACT

Data streams are continuously generated over time by Internet of Things (IoT) devices. The faster all of this data is analyzed, its hidden trends and patterns discovered, and new strategies created, the faster action can be taken, creating greater value for organizations. Density-based methods are a prominent class of data-stream clustering algorithms: they can detect clusters of arbitrary shape, handle outliers, and do not need the number of clusters in advance. Density-based clustering is therefore a proper choice for clustering IoT streams. Recently, several density-based algorithms have been proposed for clustering data streams. However, density-based clustering under tight time constraints is still a challenging issue. In this paper, we propose a density-based clustering algorithm for IoT streams. The method has a fast processing time, making it applicable to real-time IoT applications. Experimental results show that the proposed approach obtains high-quality results with low computation time on real and synthetic datasets.


Subject(s)
Algorithms, Cluster Analysis, Internet, Models, Theoretical
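
A minimal sketch of the general idea (density-based clustering applied to a stream in sliding windows), using DBSCAN as a stand-in. The window size, DBSCAN parameters, and the simulated stream are assumptions, not the algorithm proposed in the paper.

```python
# Illustrative sketch only: density-based clustering over a sliding window of a stream.
import numpy as np
from sklearn.cluster import DBSCAN

def stream(n_batches, batch_size=100, rng=np.random.default_rng(0)):
    """Simulate an IoT stream as batches of 2-D points whose center drifts over time."""
    for t in range(n_batches):
        center = 5 * np.array([np.sin(t / 3.0), np.cos(t / 3.0)])
        yield center + rng.normal(size=(batch_size, 2))

window = []
for batch in stream(n_batches=20):
    window.append(batch)
    window = window[-5:]                       # keep only the most recent batches
    points = np.vstack(window)
    labels = DBSCAN(eps=0.8, min_samples=5).fit_predict(points)
    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print("clusters in window:", n_clusters)
```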