Results 1 - 20 of 25
1.
Sensors (Basel) ; 23(16)2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37631693

ABSTRACT

Each of us has a unique manner of communicating with the world, and that communication helps us interpret life. Sign language is the primary language of communication for people with hearing and speech disabilities. When a sign language user interacts with a non-signer, it is difficult for the signer to make themselves understood. A sign language recognition system can help a non-signer interpret the signs of a signer. This study presents a sign language recognition system capable of recognizing Arabic Sign Language from recorded RGB videos. Two datasets were used: (1) a raw dataset and (2) a face-hand region-based segmented dataset produced from the raw dataset. Moreover, an operational-layer-based multilayer perceptron, "SelfMLP", is proposed in this study to build CNN-LSTM-SelfMLP models for Arabic Sign Language recognition. MobileNetV2 and ResNet18 CNN backbones and three SelfMLP variants were combined into six CNN-LSTM-SelfMLP models for performance comparison. The study used the signer-independent mode to reflect real-time application conditions. MobileNetV2-LSTM-SelfMLP on the segmented dataset achieved the best accuracy of 87.69%, with 88.57% precision, 87.69% recall, 87.72% F1 score, and 99.75% specificity. Overall, face-hand region-based segmentation combined with the SelfMLP-infused MobileNetV2-LSTM-SelfMLP model surpassed previous results on Arabic Sign Language recognition by 10.97% in accuracy.
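The precision, recall, F1, and specificity figures quoted above are standard confusion-matrix quantities. As an illustration only (the counts below are made up, not the paper's), macro-averaged versions can be computed like this:

```python
def per_class_metrics(cm):
    """Macro-averaged precision, recall, F1, and specificity from a
    square multi-class confusion matrix cm[true][pred]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    prec = rec = f1 = spec = 0.0
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # predicted c, wrongly
        fn = sum(cm[c]) - tp                         # true c, missed
        tn = total - tp - fp - fn
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        prec += p
        rec += r
        f1 += 2 * p * r / (p + r) if p + r else 0.0
        spec += tn / (tn + fp) if tn + fp else 0.0
    return prec / n, rec / n, f1 / n, spec / n

# Illustrative 3-class confusion matrix (rows = true, cols = predicted)
cm = [[8, 1, 1],
      [0, 9, 1],
      [1, 0, 9]]
p, r, f, s = per_class_metrics(cm)
```

Specificity, unusual in multi-class reporting, is averaged here per class in one-vs-rest fashion, which is presumably how a near-100% figure arises alongside ~88% recall.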


Subjects
Deep Learning, Humans, Language, Sign Language, Communication, Recognition (Psychology)
2.
Sensors (Basel) ; 22(2)2022 Jan 12.
Article in English | MEDLINE | ID: mdl-35062533

ABSTRACT

A real-time Bangla Sign Language interpreter could bring more than 200,000 hearing- and speech-impaired people in Bangladesh into the mainstream workforce. Bangla Sign Language (BdSL) recognition and detection is a challenging topic in computer vision and deep learning research because recognition accuracy can vary with skin tone, hand orientation, and background. This research used deep machine learning models for accurate and reliable recognition of BdSL alphabets and numerals using two well-suited and robust datasets. The dataset prepared in this study comprises the largest image database for BdSL alphabets and numerals, built to reduce inter-class similarity while covering diverse backgrounds and skin tones. The paper compared classification with and without background images to determine the best-performing model for BdSL alphabet and numeral interpretation. The CNN model trained on images with backgrounds was found to be more effective than the one trained without backgrounds. For a segmentation-based approach to boost overall sign recognition accuracy, the hand detection step would have to be more accurate. ResNet18 performed best, with 99.99% accuracy, precision, F1 score, and sensitivity and 100% specificity, outperforming the existing literature on BdSL alphabet and numeral recognition. The dataset is made publicly available to support and encourage further research on Bangla Sign Language interpretation, so that hearing- and speech-impaired individuals can benefit from this work.


Subjects
Deep Learning, Sign Language, Hands, Humans, Machine Learning, Neural Networks (Computer)
3.
Sensors (Basel) ; 22(3)2022 Jan 25.
Article in English | MEDLINE | ID: mdl-35161664

ABSTRACT

Cardiovascular diseases are the most common cause of death worldwide. Detecting and treating heart-related diseases requires continuous blood pressure (BP) monitoring along with many other parameters. Several invasive and non-invasive methods have been developed for this purpose. Most methods used in hospitals for continuous BP monitoring are invasive. In contrast, cuff-based methods, which can measure systolic blood pressure (SBP) and diastolic blood pressure (DBP), cannot be used for continuous monitoring. Several studies have attempted to predict BP from non-invasively collectible signals such as photoplethysmograms (PPG) and electrocardiograms (ECG), which can be used for continuous monitoring. In this study, we explored the applicability of autoencoders in predicting BP from PPG and ECG signals. The investigation was carried out on 12,000 instances from 942 patients of the MIMIC-II dataset, and it was found that a very shallow, one-dimensional autoencoder can extract the relevant features to predict SBP and DBP with state-of-the-art performance on a very large dataset. An independent test set drawn from the MIMIC-II dataset yielded a mean absolute error (MAE) of 2.333 mmHg for SBP and 0.713 mmHg for DBP. On an external dataset of 40 subjects, the model trained on MIMIC-II yielded an MAE of 2.728 mmHg and 1.166 mmHg for SBP and DBP, respectively. In both cases, the results met British Hypertension Society (BHS) Grade A and surpassed the current literature.
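The BHS grade referenced above is defined by cumulative-error thresholds (Grade A requires at least 60% of absolute errors within 5 mmHg, 85% within 10 mmHg, and 95% within 15 mmHg). A minimal sketch of the grading rule, with made-up error values rather than the study's data:

```python
def bhs_grade(errors_mmhg):
    """Grade a BP estimator by the cumulative-error thresholds of the
    British Hypertension Society protocol (A is best; D is a fail)."""
    n = len(errors_mmhg)
    pct = lambda t: 100.0 * sum(abs(e) <= t for e in errors_mmhg) / n
    p5, p10, p15 = pct(5), pct(10), pct(15)
    for grade, (t5, t10, t15) in [("A", (60, 85, 95)),
                                  ("B", (50, 75, 90)),
                                  ("C", (40, 65, 85))]:
        if p5 >= t5 and p10 >= t10 and p15 >= t15:
            return grade
    return "D"

# Hypothetical per-measurement errors (mmHg): 9/10 within 5, all within 10
errors = [1.2, -0.8, 3.9, 4.5, -2.1, 6.0, -4.9, 2.3, 0.4, -1.7]
grade = bhs_grade(errors)  # -> "A"
```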


Subjects
Hypertension, Photoplethysmography, Blood Pressure, Blood Pressure Determination, Electrocardiography, Humans, Hypertension/diagnosis
4.
Sensors (Basel) ; 22(5)2022 Feb 24.
Article in English | MEDLINE | ID: mdl-35270938

ABSTRACT

Diabetes mellitus (DM) can lead to plantar ulcers, amputation, and death. Plantar foot thermogram images acquired with an infrared camera have been shown to detect changes in temperature distribution associated with a higher risk of foot ulceration. Machine learning approaches applied to such infrared images may have utility in the early diagnosis of diabetic foot complications. In this work, a publicly available dataset was categorized into different classes, corroborated by domain experts, based on a temperature distribution parameter, the thermal change index (TCI). We then explored different machine learning approaches for classifying thermograms of the TCI-labeled dataset. Classical machine learning algorithms with feature engineering and convolutional neural networks (CNNs) with image enhancement techniques were extensively investigated to identify the best-performing network for classifying thermograms. A multilayer perceptron (MLP) classifier with features extracted from the thermogram images showed an accuracy of 90.1% in multi-class classification, outperforming the performance metrics reported in the literature on this dataset.


Subjects
Diabetes Mellitus, Diabetic Foot, Algorithms, Diabetic Foot/diagnostic imaging, Humans, Machine Learning, Neural Networks (Computer), Thermography
5.
Sensors (Basel) ; 20(4)2020 Feb 11.
Article in English | MEDLINE | ID: mdl-32053914

ABSTRACT

Gait analysis is a systematic study of human locomotion, which can be utilized in various applications, such as rehabilitation, clinical diagnostics, and sports activities. The limitations of current gait analysis techniques, such as cost, non-portability, long setup time, and post-processing time, have made them unfeasible for individual use. This has led to increased research interest in developing smart insoles, where wearable sensors can be employed to detect vertical ground reaction forces (vGRF) and other gait variables. Smart insoles are flexible, portable, and comfortable for gait analysis, and can monitor plantar pressure frequently through embedded sensors that convert the applied pressure to an electrical signal that can be displayed and analyzed further. Several research teams are still working to improve insole features such as size, sensor sensitivity, durability, and the intelligence of insoles to monitor and control subjects' gait by detecting various complications and providing recommendations to enhance walking performance. Even though systematic sensor calibration approaches have been followed by different teams to calibrate insole sensors, expensive calibration devices were used, such as universal testing machines or the infrared motion capture cameras found in motion analysis labs. This paper provides a systematic design and characterization procedure for three different pressure sensors: force-sensitive resistors (FSRs), ceramic piezoelectric sensors, and flexible piezoelectric sensors that can be used for detecting vGRF with a smart insole. A simple calibration method based on a load cell is presented as an alternative to the expensive calibration techniques. In addition, to evaluate the performance of the different sensors as components of the smart insole, the vGRF acquired from the different insoles was used to compare them.
The results showed that the FSR is the most effective of the three sensors for smart insole applications, whereas the piezoelectric sensors can be utilized to detect the start and end of the gait cycle. This study will be useful for any research group replicating the design of a customized smart insole for gait analysis.
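A load-cell-based calibration like the one described ultimately reduces to fitting a mapping from sensor readout to applied force. A minimal sketch of a linear least-squares fit follows; the voltage/force values are invented for illustration, and real FSRs are nonlinear enough that a polynomial or piecewise fit may be needed in practice:

```python
def fit_line(x, y):
    """Ordinary least-squares fit y ~ a*x + b, e.g. for mapping a
    sensor's voltage readout to the force reported by a load cell."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx  # slope, intercept

# Hypothetical calibration points: FSR voltage vs. load-cell force (N)
volts = [0.5, 1.0, 1.5, 2.0, 2.5]
force = [10.2, 20.1, 29.8, 40.3, 49.9]
a, b = fit_line(volts, force)
estimated = a * 1.8 + b  # force estimate for a new 1.8 V readout
```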


Subjects
Gait Analysis/methods, Walking, Adult, Equipment Design, Female, Gait Analysis/instrumentation, Humans, Male, Micro-Electrical-Mechanical Systems, Middle Aged, Pressure, Shoes, Wearable Electronic Devices
6.
Sensors (Basel) ; 19(12)2019 Jun 20.
Article in English | MEDLINE | ID: mdl-31226869

ABSTRACT

One of the major causes of death worldwide is heart disease or cardiac dysfunction. Many of these conditions can be identified from variations in the sounds produced by heart activity, but such auscultation requires substantial clinical experience and concentrated listening skills. There is therefore an unmet need for a portable system for the early detection of cardiac illnesses. This paper proposes a prototype of a smart digital-stethoscope system to monitor a patient's heart sounds and diagnose abnormalities in real time. The system consists of two subsystems that communicate wirelessly using Bluetooth Low Energy: a portable digital stethoscope subsystem and a computer-based decision-making subsystem. The portable subsystem captures the patient's heart sounds, filters and digitizes them, and sends them wirelessly to a personal computer, where they are visualized and further processed to decide whether they are normal or abnormal. Twenty-seven time-domain, frequency-domain, and Mel-frequency cepstral coefficient (MFCC) features extracted from a public database were used to identify the best-performing algorithm for classifying abnormal and normal heart sounds (HS). Hyperparameter optimization, with and without feature reduction, was tested to improve accuracy. The cost-adjusted optimized ensemble algorithm classified abnormal and normal HS with 97% and 88% accuracy, respectively.
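The abstract does not itemize its time-domain features; as an illustration only, a few descriptors commonly extracted from an audio frame (mean, RMS energy, zero-crossing rate) might be computed like this:

```python
import math

def time_domain_features(frame):
    """A few common time-domain audio features for one frame of samples:
    mean amplitude, RMS energy, and zero-crossing rate (the fraction of
    consecutive sample pairs whose signs differ)."""
    n = len(frame)
    mean = sum(frame) / n
    rms = math.sqrt(sum(s * s for s in frame) / n)
    zcr = sum((frame[i - 1] < 0) != (frame[i] < 0)
              for i in range(1, n)) / (n - 1)
    return {"mean": mean, "rms": rms, "zcr": zcr}

# Toy frame: alternating samples give the maximal zero-crossing rate
feats = time_domain_features([0.5, -0.5, 0.5, -0.5])
```

MFCCs require a filterbank and DCT on top of an FFT, so in practice they are usually taken from an audio library rather than written by hand.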


Subjects
Heart Diseases/diagnosis, Physiologic Monitoring, Stethoscopes, Algorithms, Auscultation, Heart Diseases/physiopathology, Heart Sounds/physiology, Humans, Signal Processing (Computer-Assisted)
7.
Comput Biol Med ; 182: 109179, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39326263

ABSTRACT

Sesamoiditis is a common equine disease with varying severity, leading to increased injury risks and performance degradation in horses. Accurate grading of sesamoiditis is crucial for effective treatment. Although deep learning-based approaches for grading sesamoiditis show promise, they remain underexplored and often lack clinical interpretability. To address this issue, we propose a novel, clinically interpretable multi-task learning model that integrates clinical knowledge with machine learning. The proposed model employs a dual-branch decoder to simultaneously perform sesamoiditis grading and vascular channel segmentation. Feature fusion is utilized to transfer knowledge between these tasks, enabling the identification of subtle radiographic variations. Additionally, our model generates a diagnostic report that, along with the vascular channel mask, serves as an explanation of the model's grading decisions, thereby increasing the transparency of the decision-making process. We validate our model on two datasets, demonstrating its superior performance compared to state-of-the-art models in terms of accuracy and generalization. This study provides a foundational framework for the interpretable grading of similar diseases.

8.
Comput Biol Med ; 181: 109030, 2024 Oct.
Article in English | MEDLINE | ID: mdl-39173488

ABSTRACT

Laryngeal hemiplegia (LH) is a major upper respiratory tract (URT) complication in racehorses. Endoscopic imaging of the horse's throat is the gold standard for URT assessment. However, current manual assessment faces several challenges stemming from the poor quality of endoscopy videos and the subjectivity of manual grading. To overcome these limitations, we propose an explainable machine learning (ML)-based solution for efficient URT assessment. Specifically, a cascaded YOLOv8 architecture is utilized to segment the key semantic regions and landmarks in each frame. Several spatiotemporal features are then extracted from key landmark points and fed to a decision tree (DT) model to classify LH as Grade 1, 2, 3, or 4, denoting absence of LH, mild, moderate, and severe LH, respectively. The proposed method, validated through 5-fold cross-validation on 107 videos, showed promising performance in classifying the different LH grades, with sensitivities of 100%, 91.18%, 94.74%, and 100% for Grades 1 to 4, respectively. Further validation on an external dataset of 72 cases confirmed its generalization capability, with sensitivities of 90%, 80.95%, 100%, and 100% for Grades 1 to 4, respectively. We introduced several explainability-related assessment functions, including: (i) visualization of the YOLOv8 output to detect landmark estimation errors that can affect the final classification, (ii) time-series visualization to assess video quality, and (iii) backtracking of the DT output to identify borderline cases. We incorporated domain knowledge (e.g., veterinary diagnostic procedures) into the proposed ML framework. This provides an assistive tool with clinical relevance and explainability that can ease and speed up URT assessment by veterinarians.


Subjects
Machine Learning, Video Recording, Horses, Animals, Horse Diseases/diagnostic imaging, Endoscopy/methods
9.
Biosensors (Basel) ; 13(3)2023 Feb 21.
Article in English | MEDLINE | ID: mdl-36979514

ABSTRACT

Automated brain tumor segmentation from reconstructed microwave (RMW) brain images and image classification are essential for investigating and monitoring the progression of brain disease. Manual detection, classification, and segmentation of tumors are crucial but extremely time-consuming tasks because of the variability of tumor patterns. In this paper, we propose a new lightweight segmentation model called MicrowaveSegNet (MSegNet), which segments the brain tumor, and a new classifier called the BrainImageNet (BINet) model to classify the RMW images. Initially, three hundred (300) RMW brain image samples were obtained from our sensors-based microwave brain imaging (SMBI) system to create an original dataset. Image preprocessing and augmentation techniques were then applied to produce 6000 training images per fold for 5-fold cross-validation. MSegNet and BINet were then compared to state-of-the-art segmentation and classification models to verify their performance. MSegNet achieved an Intersection-over-Union (IoU) of 86.92% and a Dice score of 93.10% for tumor segmentation. BINet achieved an accuracy, precision, recall, F1-score, and specificity of 89.33%, 88.74%, 88.67%, 88.61%, and 94.33%, respectively, for three-class classification using raw RMW images, and 98.33%, 98.35%, 98.33%, 98.33%, and 99.17%, respectively, for segmented RMW images. The proposed cascaded model can therefore be used in the SMBI system.
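The IoU and Dice scores reported above are standard overlap measures between a predicted mask and a ground-truth mask. A minimal sketch on flat binary masks (the example masks are invented):

```python
def iou_dice(pred, target):
    """IoU (Jaccard) and Dice coefficients for binary masks given as
    flat 0/1 lists of equal length."""
    inter = sum(p & t for p, t in zip(pred, target))
    p_sum, t_sum = sum(pred), sum(target)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    return iou, dice

# Toy 6-pixel masks: 3 overlapping pixels, 4 predicted, 4 true
pred   = [1, 1, 1, 0, 0, 1]
target = [1, 1, 0, 0, 1, 1]
iou, dice = iou_dice(pred, target)  # -> (0.6, 0.75)
```

The two are related by Dice = 2·IoU/(1+IoU), which is why a Dice of ~93% corresponds to an IoU of ~87%.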


Subjects
Brain Neoplasms, Deep Learning, Humans, Microwaves, Image Processing (Computer-Assisted)/methods, Brain Neoplasms/diagnostic imaging, Brain/diagnostic imaging, Neuroimaging
10.
Bioengineering (Basel) ; 10(5)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37237612

ABSTRACT

Magnetic resonance imaging (MRI) is commonly used in medical diagnosis and minimally invasive image-guided operations. During an MRI scan, the patient's electrocardiogram (ECG) may be required for either gating or patient monitoring. However, the challenging environment of an MRI scanner, with its several types of magnetic fields, creates significant distortions in the collected ECG data through the magnetohydrodynamic (MHD) effect. These distortions can resemble irregular heartbeats, and they hamper the detection of QRS complexes and any deeper ECG-based diagnosis. This study aims to reliably detect R-peaks in ECG waveforms acquired in 3 Tesla (T) and 7T magnetic fields. A novel model, Self-Attention MHDNet, is proposed to detect R-peaks from MHD-corrupted ECG signals through 1D segmentation. The proposed model achieves a recall and precision of 99.83% and 99.68%, respectively, on ECG data acquired in a 3T setting, and 99.87% and 99.78%, respectively, in a 7T setting. The model can thus be used to accurately gate the trigger pulse for cardiovascular functional MRI.
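The "1D segmentation" framing means the network emits a per-sample binary mask over the ECG rather than peak coordinates. Turning such a mask into discrete R-peak locations is a simple post-processing step; a sketch (a simplified illustration, not the authors' exact pipeline):

```python
def peaks_from_mask(mask):
    """Convert a binary 1D segmentation map into peak locations by
    taking the centre index of each contiguous run of ones."""
    peaks, start = [], None
    for i, v in enumerate(mask + [0]):     # sentinel closes a trailing run
        if v and start is None:
            start = i                      # run begins
        elif not v and start is not None:
            peaks.append((start + i - 1) // 2)  # centre of run [start, i-1]
            start = None
    return peaks

mask = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 0]
print(peaks_from_mask(mask))  # -> [3, 8]
```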

11.
IEEE Trans Neural Netw Learn Syst ; 34(11): 9363-9374, 2023 11.
Article in English | MEDLINE | ID: mdl-35344496

ABSTRACT

Although numerous R-peak detectors have been proposed in the literature, their robustness and performance may deteriorate significantly on low-quality and noisy signals acquired from mobile electrocardiogram (ECG) sensors, such as Holter monitors. Recently, this issue has been addressed by deep 1-D convolutional neural networks (CNNs) that have achieved state-of-the-art performance levels in Holter monitors; however, they pose a high complexity level that requires special parallelized hardware for real-time processing, and their performance deteriorates when a compact network configuration is used instead. This is an expected outcome, as recent studies have demonstrated that the learning performance of CNNs is limited by their strictly homogeneous configuration built on the sole linear neuron model. This has been addressed by operational neural networks (ONNs), whose heterogeneous network configuration encapsulates neurons with various nonlinear operators. In this study, to further boost peak detection performance along with an elegant computational efficiency, we propose 1-D Self-Organized ONNs (Self-ONNs) with generative neurons. The most crucial advantage of 1-D Self-ONNs over ONNs is their self-organization capability, which eliminates the need to search for the best operator set per neuron, since each generative neuron can create the optimal operator during training. Experimental results on the China Physiological Signal Challenge 2020 (CPSC) dataset, with more than one million ECG beats, show that the proposed 1-D Self-ONNs can significantly surpass the state-of-the-art deep CNN with less computational complexity. The proposed solution achieves a 99.10% F1-score, 99.79% sensitivity, and 98.42% positive predictivity on the CPSC dataset, the best R-peak detection performance achieved to date.
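A generative neuron, as described in the Self-ONN literature, replaces the linear weighted sum with a learnable truncated Maclaurin-style expansion of each input, so the nonlinearity itself is shaped by training. A pure-Python sketch of the forward pass (the coefficient values are arbitrary placeholders, not trained weights, and pooling/activation are omitted):

```python
def generative_neuron(inputs, weights, bias=0.0):
    """Forward pass of a generative neuron: each input x contributes
    sum_q w[q] * x**(q+1), a Q-th-order polynomial expansion that
    reduces to an ordinary linear neuron when Q == 1."""
    out = bias
    for x, w in zip(inputs, weights):  # w: per-input coefficient list
        out += sum(wq * x ** (q + 1) for q, wq in enumerate(w))
    return out

# Q = 3 expansion over two inputs with placeholder coefficients
y = generative_neuron([0.5, -1.0], [[0.2, 0.1, 0.05], [0.3, 0.0, 0.1]])

# With Q = 1 the neuron degenerates to a plain weighted sum
linear = generative_neuron([2.0], [[0.5]])  # -> 1.0
```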


Subjects
Ambulatory Electrocardiography, Neural Networks (Computer), Electrocardiography/methods, China, Linear Models
12.
J Clin Med ; 12(14)2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37510889

ABSTRACT

Aortic valve defects are among the most prevalent clinical conditions. A severely damaged or non-functioning aortic valve is commonly replaced with a bioprosthetic heart valve (BHV) via the transcatheter aortic valve replacement (TAVR) procedure. Accurate pre-operative planning is crucial for a successful TAVR outcome. Computational fluid dynamics (CFD), finite element analysis (FEA), and fluid-solid interaction (FSI) analysis offer a solution that has been increasingly utilized to evaluate BHV mechanics and dynamics. However, the high computational costs and complex operation of computational modeling hinder its application. Recent advancements in the deep learning (DL) domain offer a real-time surrogate that can render hemodynamic parameters in a few seconds, guiding clinicians to the optimal treatment option. Herein, we provide a comprehensive review of classical computational modeling approaches, medical imaging, and DL approaches for planning and outcome assessment of TAVR. In particular, we focus on DL approaches in previous studies, highlighting the datasets used, the DL models deployed, and the results achieved. We emphasize the critical challenges and recommend several future directions for innovative researchers to tackle. Finally, an end-to-end smart DL framework is outlined for real-time assessment and recommendation of the best BHV design for TAVR. Deploying such a framework in future studies will support clinicians in minimizing risks during TAVR therapy planning and help improve patient care.

13.
Bioengineering (Basel) ; 9(11)2022 Nov 15.
Article in English | MEDLINE | ID: mdl-36421093

ABSTRACT

Cardiovascular diseases are among the most severe causes of mortality, annually taking a heavy toll worldwide. Continuous monitoring of blood pressure appears to be the most viable option, but the accurate methods are invasive, introducing several layers of complexity, while existing non-invasive techniques raise reliability concerns because they are not accurate. This motivates us to develop a method to estimate the continuous arterial blood pressure (ABP) waveform non-invasively from photoplethysmogram (PPG) signals. We exploit deep learning, which frees us from relying only on ideally shaped PPG signals by making handcrafted feature computation unnecessary, a shortcoming of existing approaches. We present PPG2ABP, a two-stage cascaded deep-learning-based method that estimates the continuous ABP waveform from the input PPG signal with a mean absolute error of 4.604 mmHg, preserving shape, magnitude, and phase in unison. More strikingly, the Diastolic Blood Pressure (DBP), Mean Arterial Pressure (MAP), and Systolic Blood Pressure (SBP) values computed from the estimated ABP waveform outperform existing works under several metrics (mean absolute errors of 3.449 ± 6.147 mmHg, 2.310 ± 4.437 mmHg, and 5.727 ± 9.162 mmHg, respectively), even though PPG2ABP is not explicitly trained for this. Notably, for both DBP and MAP, we achieve Grade A in the BHS (British Hypertension Society) standard and satisfy the AAMI (Association for the Advancement of Medical Instrumentation) standard.
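Given an estimated ABP waveform, the derived pressures evaluated above can be read off per window: SBP as the peak, DBP as the trough, and MAP as the time average (clinically often approximated as DBP + (SBP − DBP)/3). A sketch with synthetic, idealized samples rather than model output:

```python
def bp_from_abp(abp):
    """SBP, DBP, and MAP from one window of an ABP waveform (mmHg):
    SBP = peak, DBP = trough, MAP = mean of the samples."""
    sbp, dbp = max(abp), min(abp)
    map_ = sum(abp) / len(abp)
    return sbp, dbp, map_

# Synthetic beat samples (mmHg), invented for illustration
abp = [80, 95, 120, 110, 98, 88, 82, 80]
sbp, dbp, map_ = bp_from_abp(abp)  # -> (120, 80, 94.125)
```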

14.
Sci Rep ; 12(1): 5515, 2022 04 01.
Article in English | MEDLINE | ID: mdl-35365684

ABSTRACT

The human mucus layer plays a vital role in maintaining health by providing a physical barrier to pathogens. This biological hydrogel also provides the microenvironment for commensal bacteria. Common models used to study host-microbe interactions include gnotobiotic animals and mammalian-microbial co-culture platforms, but many current in vitro models lack a sufficient mucus layer to host these interactions. In this study, we engineered a mucus-like hydrogel consisting of a mixed alginate-mucin (ALG-MUC) hydrogel network, using low-concentration calcium chloride (CaCl2) as the crosslinker. We demonstrated that incorporating ALG-MUC hydrogels into an aqueous two-phase system (ATPS) co-culture platform can support the growth of a mammalian monolayer and pathogenic bacteria. The ALG-MUC hydrogels displayed selective diffusivity against macromolecules and stability with ATPS microbial patterning. Additionally, we showed that the presence of mucin within the hydrogels contributed to increased antimicrobial resistance in ATPS-patterned microbial colonies. Because it uses common laboratory chemicals to generate a mammalian-microbial co-culture system with a representative mucus microenvironment, this model can be readily adopted by typical life science laboratories for studying host-microbe interactions and drug discovery.


Subjects
Host Microbial Interactions, Mucus, Alginates/chemistry, Animals, Hydrogels/chemistry, Mammals/metabolism, Mucins/metabolism, Mucus/metabolism
15.
IEEE Trans Biomed Eng ; 69(12): 3572-3581, 2022 12.
Article in English | MEDLINE | ID: mdl-35503842

ABSTRACT

OBJECTIVE: ECG recordings often suffer from artifacts of varying types, severities, and durations, which makes accurate diagnosis by machines or medical doctors difficult and unreliable. Numerous studies have proposed ECG denoising methods; however, they naturally fail to restore the actual ECG signal corrupted by such artifacts because of their simple and naive noise models. In this pilot study, we propose a novel approach for blind ECG restoration using cycle-consistent generative adversarial networks (Cycle-GANs), in which the signal quality can be improved to a clinical-level ECG regardless of the type and severity of the artifacts corrupting the signal. METHODS: To further boost the restoration performance, we propose 1D operational Cycle-GANs with the generative neuron model. RESULTS: The proposed approach has been evaluated extensively using one of the largest benchmark ECG datasets, from the China Physiological Signal Challenge (CPSC-2020), with more than one million beats. Besides the quantitative and qualitative evaluations, a group of cardiologists performed medical evaluations to validate the quality and usability of the restored ECG, especially for accurate arrhythmia diagnosis. SIGNIFICANCE: As a pioneering study in ECG restoration, corrupted ECG signals can be restored to clinical-level quality. CONCLUSION: By means of the proposed ECG restoration, ECG diagnosis accuracy and performance can significantly improve.


Subjects
Algorithms, Electrocardiography, Humans, Pilot Projects, Artifacts, Cardiac Arrhythmias/diagnosis, Signal Processing (Computer-Assisted)
16.
IEEE Trans Biomed Eng ; 69(1): 119-128, 2022 01.
Article in English | MEDLINE | ID: mdl-34110986

ABSTRACT

OBJECTIVE: Noise and the low quality of ECG signals acquired from Holter or wearable devices deteriorate the accuracy and robustness of R-peak detection algorithms. This paper presents a generic and robust system for R-peak detection in Holter ECG signals. While many proposed algorithms have successfully addressed ECG R-peak detection, there is still a notable performance gap on such low-quality ECG records. METHODS: In this study, a novel implementation of the 1D Convolutional Neural Network (CNN) is integrated with a verification model to reduce the number of false alarms. This CNN architecture consists of an encoder block and a corresponding decoder block, followed by a sample-wise classification layer that constructs the 1D segmentation map of R-peaks from the input ECG signal. Once trained, the proposed model can be used on its own to detect R-peaks in a single-channel ECG data stream quickly and accurately, or it can conveniently be deployed for real-time monitoring on a lightweight portable device. RESULTS: The model is tested on two open-access ECG databases: the China Physiological Signal Challenge (2020) database (CPSC-DB), with more than one million beats, and the commonly used MIT-BIH Arrhythmia Database (MIT-DB). Experimental results demonstrate that the proposed systematic approach achieves a 99.30% F1-score, 99.69% recall, and 98.91% precision on CPSC-DB, the best R-peak detection performance achieved to date. Results also demonstrate similar or better performance than most competing algorithms on MIT-DB, with a 99.83% F1-score, 99.85% recall, and 99.82% precision. SIGNIFICANCE: Compared to all competing methods, the proposed approach reduces the false positives and false negatives in Holter ECG signals by more than 54% and 82%, respectively.
CONCLUSION: The simple and invariant nature of the parameters leads to a highly generic system, one that is therefore applicable to any ECG dataset.
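R-peak detectors like this are conventionally scored by matching each detected peak to a reference annotation within a small tolerance window; precision, recall, and F1 then follow from the TP/FP/FN counts. A sketch of that scoring step (the tolerance and sample indices are illustrative):

```python
def score_peaks(detected, reference, tol=5):
    """Greedy one-to-one matching of detected to reference peak indices
    within +/- tol samples; returns (precision, recall, f1)."""
    ref = sorted(reference)
    used = [False] * len(ref)
    tp = 0
    for d in sorted(detected):
        for i, r in enumerate(ref):
            if not used[i] and abs(d - r) <= tol:
                used[i] = True      # each reference peak matches once
                tp += 1
                break
    fp = len(detected) - tp
    fn = len(ref) - tp
    prec = tp / (tp + fp) if detected else 0.0
    rec = tp / (tp + fn) if ref else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

# Three detections, four annotated peaks: one missed beat
prec, rec, f1 = score_peaks([101, 250, 400], [100, 252, 399, 560])
```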


Subjects
Electrocardiography, Signal Processing (Computer-Assisted), Algorithms, Cardiac Arrhythmias, Ambulatory Electrocardiography, Humans, Neural Networks (Computer)
17.
Cognit Comput ; 14(5): 1752-1772, 2022.
Article in English | MEDLINE | ID: mdl-35035591

ABSTRACT

Novel coronavirus disease (COVID-19) is an extremely contagious and quickly spreading coronavirus infection. Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), which broke out in 2002 and 2012, respectively, belong to the same family of coronaviruses as the virus behind the current COVID-19 pandemic. This work aims to classify COVID-19, SARS, and MERS chest X-ray (CXR) images using deep convolutional neural networks (CNNs). To the best of our knowledge, this classification scheme has never been investigated in the literature. A unique database, QU-COVID-family, was created, consisting of 423 COVID-19, 144 MERS, and 134 SARS CXR images. In addition, a robust COVID-19 recognition system was proposed that identifies lung regions using a CNN segmentation model (U-Net) and then classifies the segmented lung images as COVID-19, MERS, or SARS using a pre-trained CNN classifier. Furthermore, the Score-CAM visualization method was utilized to visualize the classification output and understand the reasoning behind the decisions of the deep CNNs. Several deep learning classifiers were trained and tested; the four best-performing were SqueezeNet, ResNet18, InceptionV3, and DenseNet201. Original and preprocessed images were used individually and all together as the input(s) to the networks. Two recognition schemes were considered: plain CXR classification and segmented CXR classification. For plain CXRs, InceptionV3 with a 3-channel scheme outperformed the other networks, achieving sensitivities of 99.5%, 93.1%, and 97% for classifying COVID-19, MERS, and SARS images, respectively. For segmented CXRs, InceptionV3 again performed best, on the original CXR dataset, achieving sensitivities of 96.94%, 79.68%, and 90.26%, respectively.
Classification performance degrades with segmented CXRs compared to plain CXRs; however, the results are more reliable, as the network learns from the main region of interest and avoids irrelevant non-lung areas (heart, bones, or text), which was confirmed by the Score-CAM visualization. All networks showed high COVID-19 detection sensitivity (>96%) with the segmented lung images. This indicates a unique radiographic signature of COVID-19 cases that AI can pick up, something that is often challenging for medical doctors.

18.
Comput Biol Med ; 142: 105238, 2022 03.
Article in English | MEDLINE | ID: mdl-35077938

ABSTRACT

Harnessing the inherent anti-spoofing quality of electroencephalogram (EEG) signals has become a promising field of research in recent years. Although several studies have been conducted, vital challenges remain in deploying EEG-based biometrics that are stable and capable of handling real-world scenarios. One key challenge is the large variability of EEG signals recorded on different days or sessions, which significantly impedes the performance of biometric systems. To address this issue, this paper proposes a session-invariant, multimodal Self-organized Operational Neural Network (Self-ONN)-based ensemble model combining EEG and keystroke dynamics. Our model is tested successfully on a large number of sessions (10 recording days) under many challenging noisy and variable conditions for both identification and authentication tasks. In most previous studies, training and testing were performed either on a single recording session (same day) only or without appropriately splitting the data across multiple recording days. Unlike those studies, we rigorously split the data so that the train and test sets do not share data from the same recording day. The proposed multimodal Self-ONN-based ensemble model achieved an identification accuracy of 98% in rigorous validation cases and outperformed an equivalent ensemble of deep CNN models. A novel Self-ONN Siamese network is also proposed to measure template similarity during authentication, instead of the commonly used simple distance measures. The multimodal Siamese network reduces the Equal Error Rate (EER) to 1.56% in rigorous authentication. These results indicate that the proposed multimodal Self-ONN model can automatically extract session-invariant, unique non-linear features to identify and authenticate users with high accuracy.
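The Equal Error Rate quoted above is the operating point at which the false-acceptance rate (impostors accepted) equals the false-rejection rate (genuine users rejected). A sketch of approximating it by sweeping the decision threshold over similarity scores (the score values are made up for illustration):

```python
def equal_error_rate(genuine, impostor):
    """Approximate EER: sweep thresholds over all observed scores and
    return the midpoint of FAR and FRR where they are closest."""
    best = (1.0, 1.0)  # (|FAR - FRR|, candidate EER)
    for t in sorted(set(genuine + impostor)):
        far = sum(s >= t for s in impostor) / len(impostor)  # accepted impostors
        frr = sum(s < t for s in genuine) / len(genuine)     # rejected genuines
        gap = abs(far - frr)
        if gap < best[0]:
            best = (gap, (far + frr) / 2)
    return best[1]

genuine  = [0.9, 0.8, 0.85, 0.7, 0.95]   # same-user similarity scores
impostor = [0.3, 0.5, 0.2, 0.75, 0.1]    # different-user scores
eer = equal_error_rate(genuine, impostor)  # -> 0.2 (FAR = FRR at t = 0.75)
```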


Subjects
Biometric Identification , Biometric Identification/methods , Biometry , Data Collection , Electroencephalography/methods , Neural Networks, Computer
19.
Diagnostics (Basel) ; 12(4)2022 Apr 07.
Article in English | MEDLINE | ID: mdl-35453968

ABSTRACT

Problem-Since the outbreak of the COVID-19 pandemic, mass testing has become essential to reduce the spread of the virus. Several recent studies suggest that a significant number of COVID-19 patients display no physical symptoms whatsoever. It is therefore unlikely that these patients will undergo COVID-19 testing, which increases their chances of unintentionally spreading the virus. Currently, the primary diagnostic tool to detect COVID-19 is a reverse-transcription polymerase chain reaction (RT-PCR) test on respiratory specimens of the suspected patient, which is an invasive and resource-dependent technique. Recent research shows that asymptomatic COVID-19 patients cough and breathe differently from healthy people. Aim-This paper aims to use a novel machine learning approach to detect COVID-19 (symptomatic and asymptomatic) patients from the convenience of their homes so that, by continuously monitoring themselves, they neither overburden the healthcare system nor spread the virus unknowingly. Method-A Cambridge University research group shared a dataset of cough and breath sound samples from 582 healthy individuals and 141 COVID-19 patients. Among the COVID-19 patients, 87 were asymptomatic while 54 were symptomatic (had a dry or wet cough). In addition to the available dataset, the proposed work deployed a real-time deep learning-based backend server with a web application to crowdsource cough and breath recordings and to screen for COVID-19 infection from the comfort of the user's home. The collected dataset includes data from 245 healthy individuals, 78 asymptomatic COVID-19 patients, and 18 symptomatic COVID-19 patients. Users can use the application from any web browser without installation, enter their symptoms, record audio clips of their cough and breath sounds, and upload the data anonymously. Two different screening pipelines were developed based on the symptoms reported by the users: asymptomatic and symptomatic.
An innovative stacking CNN model was developed using three base learners selected from eight state-of-the-art deep learning CNN algorithms. The stacking model uses a logistic regression classifier as meta-learner and takes as input the spectrograms generated from the breath and cough sounds of symptomatic and asymptomatic patients in the combined (Cambridge and collected) dataset. Results-The stacking model outperformed the other eight CNN networks, with the best binary classification performance obtained using cough sound spectrogram images. The accuracy, sensitivity, and specificity for symptomatic and asymptomatic patients were 96.5%, 96.42%, and 95.47% and 98.85%, 97.01%, and 99.6%, respectively. For breath sound spectrogram images, the corresponding metrics for symptomatic and asymptomatic patients were 91.03%, 88.9%, and 91.5% and 80.01%, 72.04%, and 82.67%, respectively. Conclusion-The web application QUCoughScope records coughing and breathing sounds, converts them to spectrograms, and applies the best-performing machine learning model to distinguish COVID-19 patients from healthy subjects. The result is then reported back to the user in the application interface. This novel system can therefore be used by patients at home as a pre-screening method to aid COVID-19 diagnosis by prioritizing patients for RT-PCR testing, thereby reducing the risk of spreading the disease.
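The stacking idea described above — probabilities from several base CNNs fed to a logistic regression meta-learner — can be sketched in miniature with pure NumPy. The tiny gradient-descent meta-learner, the toy probability matrix, and all shapes below are illustrative assumptions, not the published pipeline (which trains on spectrogram predictions from full CNN backbones).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_meta_learner(base_probs, labels, lr=0.5, epochs=500):
    """Fit a logistic-regression meta-learner by gradient descent.
    base_probs: (n_samples, n_base_models) matrix, each column a base
    CNN's predicted P(COVID); labels: (n_samples,) binary targets."""
    X = np.hstack([base_probs, np.ones((len(base_probs), 1))])  # bias column
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - labels) / len(labels)
        w -= lr * grad
    return w

def stack_predict(base_probs, w):
    """Combine base-model probabilities into a final 0/1 decision."""
    X = np.hstack([base_probs, np.ones((len(base_probs), 1))])
    return (sigmoid(X @ w) >= 0.5).astype(int)
```

The meta-learner effectively learns a weighted vote over the base models, which is why a stacked ensemble can outperform each individual CNN.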

20.
Diagnostics (Basel) ; 11(5)2021 May 17.
Article in English | MEDLINE | ID: mdl-34067937

ABSTRACT

Detecting COVID-19 at an early stage is essential to reduce the mortality risk of patients. In this study, a cascaded system is proposed to segment the lung and to detect, localize, and quantify COVID-19 infections from computed tomography (CT) images. An extensive set of experiments was performed using Encoder-Decoder Convolutional Neural Networks (ED-CNNs), U-Net, and the Feature Pyramid Network (FPN), with different backbone (encoder) structures based on variants of DenseNet and ResNet. The experiments on lung region segmentation showed a Dice Similarity Coefficient (DSC) of 97.19% and an Intersection over Union (IoU) of 95.10% using the U-Net model with the DenseNet161 encoder. Furthermore, the proposed system achieved excellent performance for COVID-19 infection segmentation, with a DSC of 94.13% and an IoU of 91.85% using the FPN with the DenseNet201 encoder. The proposed system can reliably localize infections of various shapes and sizes, especially small infection regions, which are rarely considered in recent studies. Moreover, the proposed system achieved high COVID-19 detection performance with 99.64% sensitivity and 98.72% specificity. Finally, the system was able to discriminate between different severity levels of COVID-19 infection over a dataset of 1110 subjects, with sensitivity values of 98.3%, 71.2%, 77.8%, and 100% for mild, moderate, severe, and critical cases, respectively.
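The DSC and IoU metrics quoted above can be computed directly from binary segmentation masks, as in this minimal NumPy sketch (the smoothing constant `eps`, which avoids division by zero for empty masks, is an assumed convention rather than the authors' exact formulation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient: 2|A∩B| / (|A| + |B|) for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """Intersection over Union (Jaccard index): |A∩B| / |A∪B| for binary masks."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```

Note that DSC is always at least as large as IoU for the same masks (DSC = 2·IoU/(1+IoU)), which is consistent with the paired 97.19%/95.10% and 94.13%/91.85% figures reported above.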
