Results 1 - 11 of 11
1.
Sensors (Basel) ; 23(16)2023 Aug 14.
Article in English | MEDLINE | ID: mdl-37631693

ABSTRACT

Each of us has a unique manner of communicating with the world, and such communication helps us interpret life. Sign language is the primary language of communication for hearing- and speech-disabled people. When a sign language user interacts with a non-signer, it becomes difficult for the signer to express themselves. A sign language recognition system can help a non-signer interpret the signs of a sign language user. This study presents a sign language recognition system capable of recognizing Arabic Sign Language from recorded RGB videos. To achieve this, two datasets were considered: (1) a raw dataset and (2) a face-hand region-based segmented dataset produced from the raw dataset. Moreover, an operational layer-based multi-layer perceptron, "SelfMLP", is proposed in this study to build CNN-LSTM-SelfMLP models for Arabic Sign Language recognition. MobileNetV2 and ResNet18 CNN backbones and three SelfMLPs were used to construct six different models of the CNN-LSTM-SelfMLP architecture for performance comparison on Arabic Sign Language recognition. The study examined the signer-independent mode to reflect real-time application circumstances. As a result, MobileNetV2-LSTM-SelfMLP on the segmented dataset achieved the best accuracy of 87.69%, with 88.57% precision, 87.69% recall, 87.72% F1 score, and 99.75% specificity. Overall, face-hand region-based segmentation and the SelfMLP-infused MobileNetV2-LSTM-SelfMLP model surpassed previous findings on Arabic Sign Language recognition by 10.970% in accuracy.
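The headline numbers above (precision, recall, F1, specificity) are per-class metrics macro-averaged over the sign classes. A minimal sketch of that computation from a multi-class confusion matrix (the 3-class counts below are illustrative, not taken from the paper):

```python
def macro_metrics(cm):
    """Macro-averaged precision, recall, F1 and specificity from a
    square multi-class confusion matrix cm[true_class][pred_class]."""
    n = len(cm)
    total = sum(sum(row) for row in cm)
    prec = rec = f1 = spec = 0.0
    for c in range(n):
        tp = cm[c][c]
        fp = sum(cm[r][c] for r in range(n)) - tp   # predicted c, wrongly
        fn = sum(cm[c]) - tp                        # true c, missed
        tn = total - tp - fp - fn
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        prec += p
        rec += r
        f1 += 2 * p * r / (p + r) if p + r else 0.0
        spec += tn / (tn + fp) if tn + fp else 0.0
    return tuple(round(m / n, 4) for m in (prec, rec, f1, spec))

# Illustrative 3-class confusion matrix (rows: true, cols: predicted)
cm = [[50, 3, 2],
      [4, 45, 1],
      [2, 2, 41]]
print(macro_metrics(cm))  # -> (0.9082, 0.9067, 0.9074, 0.9528)
```

Specificity is the one metric here that needs the true negatives, which is why it is computed per class against all other classes pooled.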


Subject(s)
Deep Learning , Humans , Language , Sign Language , Communication , Recognition, Psychology
2.
J Clin Med ; 12(14)2023 Jul 19.
Article in English | MEDLINE | ID: mdl-37510889

ABSTRACT

Aortic valve defects are among the most prevalent clinical conditions. A severely damaged or non-functioning aortic valve is commonly replaced with a bioprosthetic heart valve (BHV) via the transcatheter aortic valve replacement (TAVR) procedure. Accurate pre-operative planning is crucial for a successful TAVR outcome. Computational fluid dynamics (CFD), finite element analysis (FEA), and fluid-solid interaction (FSI) analysis offer solutions that have been increasingly utilized to evaluate BHV mechanics and dynamics. However, the high computational cost and complex operation of computational modeling hinder its application. Recent advancements in the deep learning (DL) domain can offer a real-time surrogate that renders hemodynamic parameters in a few seconds, thus guiding clinicians to select the optimal treatment option. Herein, we provide a comprehensive review of classical computational modeling approaches, medical imaging, and DL approaches for planning and outcome assessment of TAVR. In particular, we focus on DL approaches in previous studies, highlighting the datasets utilized, the DL models deployed, and the results achieved. We emphasize the critical challenges and recommend several future directions for innovative researchers to tackle. Finally, an end-to-end smart DL framework is outlined for real-time assessment and recommendation of the best BHV design for TAVR. Deploying such a framework in future studies will support clinicians in minimizing risks during TAVR therapy planning and will help improve patient care.

3.
Bioengineering (Basel) ; 10(5)2023 Apr 28.
Article in English | MEDLINE | ID: mdl-37237612

ABSTRACT

Magnetic resonance imaging (MRI) is commonly used in medical diagnosis and minimally invasive image-guided operations. During an MRI scan, the patient's electrocardiogram (ECG) may be required for either gating or patient monitoring. However, the challenging environment of an MRI scanner, with its several types of magnetic fields, significantly distorts the collected ECG data through the magnetohydrodynamic (MHD) effect. These distortions can appear as irregular heartbeats, hampering the detection of QRS complexes and any more in-depth diagnosis based on the ECG. This study aims to reliably detect R-peaks in ECG waveforms acquired in 3 Tesla (T) and 7 T magnetic fields. A novel model, Self-Attention MHDNet, is proposed to detect R-peaks from MHD-corrupted ECG signals through 1D segmentation. The proposed model achieves a recall and precision of 99.83% and 99.68%, respectively, on ECG data acquired in a 3 T setting, and 99.87% and 99.78%, respectively, in a 7 T setting. The model can therefore be used to accurately gate the trigger pulse for cardiovascular functional MRI.
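Peak-wise recall and precision of the kind reported above are usually computed by matching detected R-peaks to annotated ones within a small sample tolerance. A hedged sketch of that evaluation (the tolerance and the peak positions are illustrative assumptions, not values from the paper):

```python
def match_peaks(true_peaks, detected, tol=10):
    """Match detected peaks to ground-truth peaks lying within +/- tol
    samples (each truth peak used at most once); returns (recall, precision)."""
    true_peaks = sorted(true_peaks)
    detected = sorted(detected)
    used = set()
    tp = 0
    for d in detected:
        # find the nearest unused ground-truth peak within tolerance
        best, best_dist = None, tol + 1
        for i, t in enumerate(true_peaks):
            if i not in used and abs(t - d) < best_dist:
                best, best_dist = i, abs(t - d)
        if best is not None and best_dist <= tol:
            used.add(best)
            tp += 1
    recall = tp / len(true_peaks) if true_peaks else 0.0
    precision = tp / len(detected) if detected else 0.0
    return recall, precision

# 4 annotated peaks, 3 hits within tolerance and 1 false detection
print(match_peaks([100, 300, 500, 700], [103, 298, 505, 900]))  # -> (0.75, 0.75)
```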

4.
Bioengineering (Basel) ; 9(11)2022 Nov 15.
Article in English | MEDLINE | ID: mdl-36421093

ABSTRACT

Cardiovascular diseases are among the most severe causes of mortality, annually taking a heavy toll on lives worldwide. Continuous monitoring of blood pressure seems to be the most viable option, but standard continuous monitoring is invasive, and existing non-invasive techniques are not accurate, introducing several layers of complexity and reliability concerns. This motivates us to develop a method to estimate the continuous arterial blood pressure (ABP) waveform non-invasively using photoplethysmogram (PPG) signals. We exploit deep learning, which makes handcrafted feature computation irrelevant, a shortcoming of existing approaches that also restricts them to ideally shaped PPG signals. Thus, we present PPG2ABP, a two-stage cascaded deep-learning-based method that estimates the continuous ABP waveform from the input PPG signal with a mean absolute error of 4.604 mmHg, preserving shape, magnitude, and phase in unison. More strikingly, the diastolic blood pressure (DBP), mean arterial pressure (MAP), and systolic blood pressure (SBP) values computed from the estimated ABP waveform outperform existing works under several metrics (mean absolute errors of 3.449 ± 6.147 mmHg, 2.310 ± 4.437 mmHg, and 5.727 ± 9.162 mmHg, respectively), even though PPG2ABP is not explicitly trained to do so. Notably, for both DBP and MAP, we achieve Grade A in the British Hypertension Society (BHS) standard and satisfy the AAMI (Association for the Advancement of Medical Instrumentation) standard.
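The BHS grades cited above come from cumulative error percentages. A minimal sketch of that grading (the 60/85/95-style thresholds are the published BHS protocol values; the error list is illustrative):

```python
def bhs_grade(errors_mmhg):
    """Grade BP-estimation errors per the British Hypertension Society
    protocol: percentage of absolute errors within 5 / 10 / 15 mmHg."""
    n = len(errors_mmhg)
    pct = [100.0 * sum(1 for e in errors_mmhg if abs(e) <= t) / n
           for t in (5, 10, 15)]
    # Grade requires meeting all three cumulative-percentage limits
    for grade, limits in (("A", (60, 85, 95)),
                          ("B", (50, 75, 90)),
                          ("C", (40, 65, 85))):
        if all(p >= l for p, l in zip(pct, limits)):
            return grade, pct
    return "D", pct

# Illustrative per-beat absolute errors in mmHg
print(bhs_grade([1, 2, 3, 4, 6, 7, 11, 2, 3, 1]))  # -> ('A', [70.0, 90.0, 100.0])
```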

5.
Sensors (Basel) ; 22(5)2022 Feb 24.
Article in English | MEDLINE | ID: mdl-35270938

ABSTRACT

Diabetes mellitus (DM) can lead to plantar ulcers, amputation and death. Plantar foot thermogram images acquired using an infrared camera have been shown to detect changes in temperature distribution associated with a higher risk of foot ulceration. Machine learning approaches applied to such infrared images may have utility in the early diagnosis of diabetic foot complications. In this work, a publicly available dataset was categorized into different classes, corroborated by domain experts, based on a temperature distribution parameter: the thermal change index (TCI). We then explored different machine learning approaches for classifying thermograms of the TCI-labeled dataset. Classical machine learning algorithms with feature engineering, and convolutional neural networks (CNNs) with image enhancement techniques, were extensively investigated to identify the best-performing network for classifying thermograms. The multilayer perceptron (MLP) classifier, along with features extracted from the thermogram images, achieved an accuracy of 90.1% in multi-class classification, outperforming the performance metrics reported in the literature on this dataset.
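One common formulation of a thermal change index compares each angiosome's mean temperature against a control-group reference value. This is a hedged sketch of that idea only; the exact formulation, the angiosome count, and all temperatures below are assumptions for illustration, not taken from the paper:

```python
def thermal_change_index(foot_temps, reference_temps):
    """TCI sketch: mean absolute difference (deg C) between each foot
    angiosome's average temperature and the corresponding control-group
    reference average. foot_temps is a list of per-angiosome pixel lists."""
    assert len(foot_temps) == len(reference_temps)
    diffs = [abs(sum(f) / len(f) - r)
             for f, r in zip(foot_temps, reference_temps)]
    return sum(diffs) / len(diffs)

# Four hypothetical angiosomes (two pixels each) vs. reference averages
foot = [[30.0, 31.0], [29.0, 29.0], [32.0, 30.0], [28.0, 30.0]]
ref = [30.0, 29.5, 30.5, 29.0]
print(thermal_change_index(foot, ref))  # -> 0.375
```

A higher index then flags a foot whose temperature distribution deviates more from the healthy reference, which is the property the classes in the study are built on.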


Subject(s)
Diabetes Mellitus , Diabetic Foot , Algorithms , Diabetic Foot/diagnostic imaging , Humans , Machine Learning , Neural Networks, Computer , Thermography
6.
Sensors (Basel) ; 22(3)2022 Jan 25.
Article in English | MEDLINE | ID: mdl-35161664

ABSTRACT

Cardiovascular diseases are the most common causes of death around the world. To detect and treat heart-related diseases, continuous blood pressure (BP) monitoring, along with many other parameters, is required. Several invasive and non-invasive methods have been developed for this purpose. Most existing methods used in hospitals for continuous BP monitoring are invasive. In contrast, cuff-based BP monitoring methods, which can predict systolic blood pressure (SBP) and diastolic blood pressure (DBP), cannot be used for continuous monitoring. Several studies have attempted to predict BP from non-invasively collectible signals such as photoplethysmograms (PPG) and electrocardiograms (ECG), which can be used for continuous monitoring. In this study, we explored the applicability of autoencoders in predicting BP from PPG and ECG signals. The investigation was carried out on 12,000 instances from 942 patients of the MIMIC-II dataset, and it was found that a very shallow, one-dimensional autoencoder can extract the relevant features to predict SBP and DBP with state-of-the-art performance on a very large dataset. An independent test set drawn from a portion of the MIMIC-II dataset yielded a mean absolute error (MAE) of 2.333 and 0.713 for SBP and DBP, respectively. On an external dataset of 40 subjects, the model trained on the MIMIC-II dataset yielded an MAE of 2.728 and 1.166 for SBP and DBP, respectively. In both cases, the results met British Hypertension Society (BHS) Grade A and surpassed the studies in the current literature.
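To illustrate how a very shallow, one-dimensional autoencoder compresses a signal window into a few latent features, here is a hedged NumPy sketch trained on synthetic sine-like windows. All dimensions, the learning rate, and the data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_code = 32, 8  # window length -> latent code size (illustrative)
# 200 noisy copies of one sine-shaped "signal window"
X = np.sin(np.linspace(0, 4 * np.pi, d_in)) + 0.05 * rng.standard_normal((200, d_in))

W_enc = 0.1 * rng.standard_normal((d_in, d_code))   # encoder weights
W_dec = 0.1 * rng.standard_normal((d_code, d_in))   # decoder weights

def forward(X):
    code = X @ W_enc            # encode: project window to latent code
    return code, code @ W_dec   # decode: reconstruct window from code

_, recon = forward(X)
err_before = np.mean((recon - X) ** 2)

lr = 0.005
for _ in range(1500):           # plain full-batch gradient descent on MSE
    code, recon = forward(X)
    g_out = 2.0 * (recon - X) / X.shape[0]   # d(loss)/d(reconstruction)
    g_dec = code.T @ g_out                   # gradient w.r.t. W_dec
    g_enc = X.T @ (g_out @ W_dec.T)          # gradient w.r.t. W_enc
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

_, recon = forward(X)
err_after = np.mean((recon - X) ** 2)
print(err_before, err_after)    # reconstruction MSE should drop sharply
```

The latent `code` plays the role of the extracted features; in the study such features feed the downstream SBP/DBP predictor.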


Subject(s)
Hypertension , Photoplethysmography , Blood Pressure , Blood Pressure Determination , Electrocardiography , Humans , Hypertension/diagnosis
7.
Sensors (Basel) ; 22(2)2022 Jan 12.
Article in English | MEDLINE | ID: mdl-35062533

ABSTRACT

A real-time Bangla Sign Language interpreter could bring more than 200,000 hearing- and speech-impaired people into the mainstream workforce of Bangladesh. Bangla Sign Language (BdSL) recognition and detection is a challenging topic in computer vision and deep learning research because sign language recognition accuracy may vary with skin tone, hand orientation, and background. This research used deep machine learning models for accurate and reliable recognition of BdSL alphabets and numerals using two well-suited and robust datasets. The dataset prepared in this study comprises the largest image database for BdSL alphabets and numerals, built to reduce inter-class similarity while dealing with diverse image data comprising various backgrounds and skin tones. The paper compared classification with and without background images to determine the best-performing model for BdSL alphabet and numeral interpretation. The CNN model trained with images that included backgrounds was found to be more effective than the one trained without them. Hand detection in the segmentation approach must become more accurate to boost the overall sign-recognition accuracy. ResNet18 performed best, with 99.99% accuracy, precision, F1 score, and sensitivity, and 100% specificity, outperforming the works in the literature on BdSL alphabet and numeral recognition. The dataset is made publicly available to support and encourage further research on Bangla Sign Language interpretation, so that hearing- and speech-impaired individuals can benefit from this research.


Subject(s)
Deep Learning , Sign Language , Hand , Humans , Machine Learning , Neural Networks, Computer
8.
Cognit Comput ; 14(5): 1752-1772, 2022.
Article in English | MEDLINE | ID: mdl-35035591

ABSTRACT

Novel coronavirus disease (COVID-19) is an extremely contagious and quickly spreading coronavirus infection. Severe acute respiratory syndrome (SARS) and Middle East respiratory syndrome (MERS), which broke out in 2002 and 2012, respectively, belong to the same family of coronaviruses as the virus behind the current COVID-19 pandemic. This work aims to classify COVID-19, SARS, and MERS chest X-ray (CXR) images using deep convolutional neural networks (CNNs). To the best of our knowledge, this classification scheme has never been investigated in the literature. A unique database, called QU-COVID-family, was created, consisting of 423 COVID-19, 144 MERS, and 134 SARS CXR images. In addition, a robust COVID-19 recognition system was proposed that identifies lung regions using a CNN segmentation model (U-Net) and then classifies the segmented lung images as COVID-19, MERS, or SARS using a pre-trained CNN classifier. Furthermore, the Score-CAM visualization method was utilized to visualize the classification output and understand the reasoning behind the decisions of the deep CNNs. Several deep learning classifiers were trained and tested; the four best-performing ones were reported: SqueezeNet, ResNet18, InceptionV3, and DenseNet201. Original and preprocessed images were used individually, and all together, as the input(s) to the networks. Two recognition schemes were considered: plain CXR classification and segmented CXR classification. For plain CXRs, InceptionV3 outperformed the other networks with a 3-channel scheme, achieving sensitivities of 99.5%, 93.1%, and 97% for classifying COVID-19, MERS, and SARS images, respectively. For segmented CXRs, InceptionV3 again performed best, using the original CXR dataset, with sensitivities of 96.94%, 79.68%, and 90.26% for COVID-19, MERS, and SARS, respectively. Classification performance degrades with segmented CXRs compared to plain CXRs. However, the results are more reliable, as the network learns from the main region of interest, avoiding irrelevant non-lung areas (heart, bones, or text), which was confirmed by the Score-CAM visualizations. All networks showed high COVID-19 detection sensitivity (> 96%) with the segmented lung images. This indicates a unique radiographic signature of COVID-19 cases in the eyes of AI, something that is often challenging for medical doctors to discern.
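The two-stage idea, segmenting the lungs first and classifying only the retained region, amounts to a masking step between the two models. A hedged toy example; the array sizes and values are illustrative, and the real U-Net and CNN classifier are stood in for by a fixed mask and a comment:

```python
import numpy as np

# Stand-ins for a CXR image and a U-Net-predicted binary lung mask
cxr = np.arange(16.0).reshape(4, 4)
lung_mask = np.zeros((4, 4), dtype=bool)
lung_mask[1:3, 1:3] = True          # "lung" region predicted by stage 1

# Stage-2 input: zero out everything outside the lungs, so the
# classifier cannot attend to heart, bones, or burned-in text
segmented = np.where(lung_mask, cxr, 0.0)
# probs = classifier(segmented)     # hypothetical pre-trained CNN
print(segmented)
```

Only lung pixels survive into `segmented`, which is what Score-CAM later confirms the classifier is actually attending to.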

9.
Comput Biol Med ; 142: 105238, 2022 03.
Article in English | MEDLINE | ID: mdl-35077938

ABSTRACT

Harnessing the inherent anti-spoofing quality of electroencephalogram (EEG) signals has become a potential field of research in recent years. Although several studies have been conducted, vital challenges remain in deploying EEG-based biometrics that are stable and capable of handling real-world scenarios. One key challenge is the large variability of EEG signals recorded on different days or sessions, which significantly impedes the performance of biometric systems. To address this issue, a session-invariant, multimodal Self-organized Operational Neural Network (Self-ONN) based ensemble model combining EEG and keystroke dynamics is proposed in this paper. Our model is tested successfully on a large number of sessions (10 recording days) with many challenging noisy and variable environments for the identification and authentication tasks. In most previous studies, training and testing were performed either over a single recording session (same day) only, or without ensuring appropriate splitting of data from multiple recording days. Unlike those studies, we rigorously split the data so that the train and test sets do not share data from the same recording day. The proposed multimodal Self-ONN based ensemble model achieved an identification accuracy of 98% under rigorous validation and outperformed the equivalent ensemble of deep CNN models. A novel Self-ONN Siamese network is also proposed to measure the similarity of templates during the authentication task, instead of the commonly used simple distance measures. The multimodal Siamese network reduces the equal error rate (EER) to 1.56% under rigorous authentication. The obtained results indicate that the proposed multimodal Self-ONN model can automatically extract session-invariant, unique, non-linear features to identify and authenticate users with high accuracy.
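The EER figure quoted above is the operating point where the false accept rate (FAR) equals the false reject rate (FRR). A minimal threshold-sweep sketch (the similarity scores are illustrative; real systems typically interpolate on a finer ROC curve):

```python
def equal_error_rate(genuine, impostor):
    """Sweep a decision threshold over all observed similarity scores
    (higher score = more likely the same user) and return the EER
    approximation where FAR and FRR are closest, plus that threshold."""
    best = (1.0, 0.0, None)  # (far, frr, threshold) with worst gap
    for thr in sorted(set(genuine) | set(impostor)):
        far = sum(1 for s in impostor if s >= thr) / len(impostor)
        frr = sum(1 for s in genuine if s < thr) / len(genuine)
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr, thr)
    return (best[0] + best[1]) / 2, best[2]

genuine = [0.9, 0.8, 0.85, 0.7, 0.95]   # same-user comparison scores
impostor = [0.3, 0.4, 0.2, 0.75, 0.1]   # different-user comparison scores
print(equal_error_rate(genuine, impostor))  # -> (0.2, 0.75)
```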


Subject(s)
Biometric Identification , Biometric Identification/methods , Biometry , Data Collection , Electroencephalography/methods , Neural Networks, Computer
10.
Comput Biol Med ; 139: 105002, 2021 12.
Article in English | MEDLINE | ID: mdl-34749094

ABSTRACT

The immense spread of coronavirus disease 2019 (COVID-19) has left healthcare systems incapable of diagnosing and testing patients at the required rate. Given the effects of COVID-19 on pulmonary tissues, chest radiographic imaging has become a necessity for screening and monitoring the disease. Numerous studies have proposed deep learning approaches for the automatic diagnosis of COVID-19. Although these methods achieved outstanding detection performance, they used limited chest X-ray (CXR) repositories for evaluation, usually with only a few hundred COVID-19 CXR images. Such data scarcity prevents reliable evaluation of deep learning models and carries the risk of overfitting. In addition, most studies showed no or limited capability in infection localization and severity grading of COVID-19 pneumonia. In this study, we address this urgent need by proposing a systematic and unified approach for lung segmentation and COVID-19 localization with infection quantification from CXR images. To accomplish this, we constructed the largest benchmark dataset, with 33,920 CXR images including 11,956 COVID-19 samples, where ground-truth lung segmentation masks were annotated on the CXRs through a human-machine collaborative approach. An extensive set of experiments was performed using state-of-the-art segmentation networks: U-Net, U-Net++, and Feature Pyramid Networks (FPN). The developed network, after an iterative process, reached a superior performance for lung region segmentation, with an Intersection over Union (IoU) of 96.11% and a Dice similarity coefficient (DSC) of 97.99%. Furthermore, COVID-19 infections of various shapes and types were reliably localized with 83.05% IoU and 88.21% DSC. Finally, the proposed approach achieved outstanding COVID-19 detection performance, with both sensitivity and specificity above 99%.
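IoU and Dice, the two overlap measures reported above, can be computed directly from binary masks. A minimal sketch over flat 0/1 lists (the example masks are illustrative):

```python
def iou_dice(pred, truth):
    """IoU and Dice between two binary masks given as flat 0/1 lists."""
    inter = sum(p & t for p, t in zip(pred, truth))  # overlap pixels
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    iou = inter / union if union else 1.0
    dice = 2 * inter / (p_sum + t_sum) if p_sum + t_sum else 1.0
    return iou, dice

# Predicted vs. ground-truth mask: 2 overlapping pixels, 4 in the union
print(iou_dice([1, 1, 1, 0, 0], [0, 1, 1, 1, 0]))
```

Dice is always at least as large as IoU for the same pair of masks (Dice = 2*IoU / (1 + IoU)), which is why the DSC figures above sit above the IoU figures.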


Subject(s)
COVID-19 , Humans , Lung/diagnostic imaging , SARS-CoV-2 , Thorax , X-Rays
11.
Sensors (Basel) ; 20(4)2020 Feb 11.
Article in English | MEDLINE | ID: mdl-32053914

ABSTRACT

Gait analysis is a systematic study of human locomotion, which can be utilized in various applications, such as rehabilitation, clinical diagnostics and sports activities. The various limitations of current gait analysis techniques, such as cost, non-portability, long setup time and post-processing time, have made them unfeasible for individual use. This has led to increased research interest in developing smart insoles, where wearable sensors can be employed to detect vertical ground reaction forces (vGRF) and other gait variables. Smart insoles are flexible, portable and comfortable for gait analysis, and can monitor plantar pressure frequently through embedded sensors that convert the applied pressure to an electrical signal that can be displayed and analyzed further. Several research teams are still working to improve insole features such as size, sensor sensitivity and durability, and the intelligence of insoles to monitor and control subjects' gait by detecting various complications and providing recommendations to enhance walking performance. Even though systematic sensor calibration approaches have been followed by different teams to calibrate insole sensors, expensive calibration devices were used, such as universal testing machines or the infrared motion capture cameras equipped in motion analysis labs. This paper provides a systematic design and characterization procedure for three different pressure sensors, force-sensitive resistors (FSRs), ceramic piezoelectric sensors, and flexible piezoelectric sensors, that can be used for detecting vGRF with a smart insole. A simple calibration method based on a load cell is presented as an alternative to the expensive calibration techniques. In addition, to evaluate the performance of the different sensors as components of the smart insole, the vGRF acquired from the different insoles were compared.
The results showed that the FSR is the most effective of the three sensors for smart insole applications, whereas the piezoelectric sensors can be utilized in detecting the start and end of the gait cycle. This study will be useful for any research group replicating the design of a customized smart insole for gait analysis.
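The load-cell calibration described above amounts to fitting a line from raw sensor readings to the known loads applied through the load cell. A hedged least-squares sketch (the readings and loads below are made-up illustrative values, not the paper's measurements):

```python
def fit_calibration(readings, loads):
    """Closed-form least-squares fit of load = a * reading + b, mapping
    raw pressure-sensor readings to known loads applied via a load cell."""
    n = len(readings)
    mx = sum(readings) / n
    my = sum(loads) / n
    sxx = sum((x - mx) ** 2 for x in readings)          # spread of readings
    sxy = sum((x - mx) * (y - my) for x, y in zip(readings, loads))
    a = sxy / sxx                                       # slope (kg per count)
    b = my - a * mx                                     # offset (kg)
    return a, b

# Hypothetical raw ADC readings and the corresponding load-cell loads (kg)
a, b = fit_calibration([100, 200, 300, 400], [5.0, 10.2, 14.8, 20.0])
print(a, b)
```

Once `a` and `b` are fixed, any subsequent raw reading can be converted to an estimated force with `a * reading + b`, which is the role the load-cell procedure plays relative to the more expensive universal-testing-machine setups.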


Subject(s)
Gait Analysis/methods , Walking , Adult , Equipment Design , Female , Gait Analysis/instrumentation , Humans , Male , Micro-Electrical-Mechanical Systems , Middle Aged , Pressure , Shoes , Wearable Electronic Devices