Results 1 - 20 of 42
1.
Digit Health ; 10: 20552076241263317, 2024.
Article in English | MEDLINE | ID: mdl-38882250

ABSTRACT

Background: Depression and anxiety are prevalent mental health issues increasingly addressed by online cognitive behavioral therapy (CBT) delivered via mobile applications. This study introduces Sokoon, a gamified CBT app tailored for Arabic-speaking individuals, focusing on alleviating depression and anxiety symptoms (DASDs). Objectives: The objectives of this study were to (a) evaluate the effectiveness of Sokoon in reducing symptoms of depression and anxiety and (b) assess the usability of the intervention through user engagement and adherence to CBT skills. Methods: A single-group pre-post design was used to evaluate Sokoon's impact on adults with DASDs. Developed in consultation with psychiatrists, Sokoon integrates evidence-based skills such as relaxation, gratitude, behavioral activation, and cognitive restructuring, each represented by a planet. Its design incorporates Hexad theory and gamification, supported by a dynamic difficulty adjustment algorithm. The study involved 30 participants aged 18-35 (86.7% female) with mild to moderate depression and anxiety. Results: In this sample of 30 participants, Sokoon significantly reduced symptoms of depression and anxiety (d = 2.7 and d = 3.6, respectively; p < 0.001). Over the two-week trial, participants experienced a notable decrease in anxiety and depressive symptoms, indicating the effectiveness of the model. Sokoon therefore shows potential as a valuable tool for addressing DASDs. Conclusion: Sokoon offers an innovative approach to increasing CBT skill adherence and engagement. By leveraging Hexad theory and gamification, it provides an enjoyable and engaging user experience while maintaining the effectiveness of traditional CBT techniques. The findings suggest that Sokoon has a positive impact on reducing symptoms of depression and anxiety.
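The abstract does not specify how the dynamic difficulty adjustment works; as a minimal illustrative sketch, a success-rate-driven DDA rule (the function name, window, and thresholds below are assumptions, not details from the Sokoon study) could look like this:

```python
def adjust_difficulty(level, recent_successes,
                      lo=0.4, hi=0.8, min_level=1, max_level=10):
    """Raise difficulty when the recent success rate is high, lower it
    when it is low, and hold steady otherwise. All thresholds here are
    illustrative assumptions, not values from the study."""
    if not recent_successes:
        return level
    rate = sum(recent_successes) / len(recent_successes)
    if rate > hi:
        level += 1
    elif rate < lo:
        level -= 1
    return max(min_level, min(max_level, level))

# A player who succeeds on all recent exercises moves up a level;
# one who fails most of them moves down.
print(adjust_difficulty(3, [True, True, True, True, True]))     # 4
print(adjust_difficulty(3, [False, False, False, True, False])) # 2
```

Clamping to a fixed level range keeps the adjustment from drifting outside the content the app actually provides.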

2.
Heliyon ; 10(10): e30954, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38779022

ABSTRACT

Complications of diabetes can lead to diabetic retinopathy (DR), which affects vision. Computerized methods play a significant role in detecting DR at an early stage to prevent vision loss. Therefore, this study proposes a method consisting of three models for localization, segmentation, and classification. A novel technique is designed that combines pre-trained ResNet-18 and YOLOv8 models, based on the selection of optimal layers, for the localization of DR lesions. The localized images are passed to the designed semantic segmentation model, built on selected layers and trained with optimized learning hyperparameters. The segmentation model's performance is evaluated on the Grand Challenge IDRiD segmentation dataset, achieving mean IoU values of 0.95, 0.94, 0.96, 0.94, and 0.95 on OD, SoftExs, HardExs, HAE, and MAs, respectively. A separate classification model is developed in which deep features are derived from the pre-trained EfficientNet-b0 model and optimized using a genetic algorithm (GA) based on the selected parameters for grading of NPDR lesions. The proposed model achieved greater than 98% accuracy, which is superior to previous methods.
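As a hedged illustration of the GA-based feature-selection step, the sketch below evolves binary feature masks on synthetic data standing in for the deep features; the population size, operators, and nearest-centroid fitness proxy are all assumptions, not the paper's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for deep features (the study used EfficientNet-b0
# features): 200 samples x 20 features, of which only the first 5
# carry signal about the label.
X = rng.normal(size=(200, 20))
y = (X[:, :5].sum(axis=1) > 0).astype(int)

def fitness(mask):
    """Score a feature subset by nearest-centroid training accuracy."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

pop = rng.integers(0, 2, size=(30, 20))          # random initial masks
best_mask, best_fit = pop[0], fitness(pop[0])
for _generation in range(40):
    scores = np.array([fitness(m) for m in pop])
    if scores.max() > best_fit:                  # track the global best
        best_fit = float(scores.max())
        best_mask = pop[int(scores.argmax())].copy()
    parents = pop[np.argsort(scores)[-10:]]      # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(0, 10, size=2)]
        child = np.where(rng.random(20) < 0.5, a, b)  # uniform crossover
        flip = rng.random(20) < 0.05                  # bit-flip mutation
        children.append(np.where(flip, 1 - child, child))
    pop = np.array(children)
```

In the paper the fitness would instead come from the grading classifier's validation accuracy; the GA machinery itself is unchanged.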

3.
Sensors (Basel) ; 24(4)2024 Feb 16.
Article in English | MEDLINE | ID: mdl-38400423

ABSTRACT

The increasing demand for artificially intelligent smartphone cradles has prompted the need for real-time moving object detection and tracking, which requires algorithms that analyze frames instantly, without delay. In particular, a system developed for smartphones must accommodate different operating systems and software development environments. Issues in current real-time tracking systems arise when small and large objects coexist, causing the algorithm to prioritize larger objects or struggle with consistent tracking across varying scales. Fast object motion further complicates accurate tracking and can lead to errors and misidentification. To address these issues, we propose a deep learning-based real-time moving object tracking system that provides an accuracy-priority mode and a speed-priority mode. The accuracy-priority mode balances the high accuracy and speed required in the smartphone environment, incorporating CSPNet with ResNet to maintain high accuracy, whereas the speed-priority mode simplifies the convolutional layers to optimize inference speed for fast-moving objects while maintaining accuracy. In our experiments, we evaluated both modes in terms of accuracy and speed.

4.
PLoS One ; 18(10): e0293064, 2023.
Article in English | MEDLINE | ID: mdl-37824566

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0250959.].

5.
Front Plant Sci ; 14: 1239594, 2023.
Article in English | MEDLINE | ID: mdl-37674739

ABSTRACT

Internet of Things (IoT)-based smart farming promises ultrafast speeds and near real-time response. Precision farming enabled by the IoT has the potential to boost efficiency and output while reducing water use. IoT devices can aid farmers in keeping track of crop health and development while also automating a variety of tasks (such as moisture-level prediction, irrigation, crop development monitoring, and nutrient-level tracking). IoT-based autonomous irrigation makes efficient use of farmers' time, money, and power. High crop yields can be achieved through consistent monitoring and sensing of crops using a variety of IoT sensors that inform farmers of optimal harvest times. In this paper, a smart framework for growing tomatoes is developed around IoT devices and modules. With the help of these modules, we forecast soil moisture levels and fine-tune the watering schedule. To further aid farmers, a smartphone app is in development that will provide them with crucial data on the health of their tomato crops. Large-scale experiments validate the proposed model's ability to intelligently monitor the irrigation system, which contributes to higher tomato yields.
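As a toy illustration of the forecasting-plus-irrigation idea (the function names, window size, and moisture threshold are assumptions, not the paper's values), a moving-average forecast can drive the watering decision:

```python
# Illustrative sketch only: a naive moving-average forecast of soil
# moisture feeding a threshold-based irrigation decision.

def forecast_moisture(readings, window=3):
    """Naive next-step forecast: mean of the last `window` sensor readings."""
    recent = readings[-window:]
    return sum(recent) / len(recent)

def irrigation_decision(readings, dry_threshold=30.0):
    """Turn the pump on when forecast moisture (%) falls below threshold."""
    return forecast_moisture(readings) < dry_threshold

# A drying trend triggers irrigation; a wet trend does not.
print(irrigation_decision([42.0, 35.0, 28.0, 24.0]))  # True
print(irrigation_decision([50.0, 52.0, 55.0]))        # False
```

A deployed system would replace the moving average with the paper's trained prediction model, but the decision loop has the same shape.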

6.
Front Plant Sci ; 14: 1234555, 2023.
Article in English | MEDLINE | ID: mdl-37636091

ABSTRACT

Agriculture is the most critical sector for the world's food supply, and it also supplies raw materials for other industrial production. Currently, growth in agricultural production is not sufficient to keep up with the growing population, which may result in a food shortfall for the world's inhabitants. As a result, increasing food production is crucial for developing nations with limited land and resources. It is essential to select a suitable crop for a specific region to increase its production rate, which requires effective crop production forecasting for that area based on historical data, including environmental conditions, cultivation areas, and crop production amounts. However, the data for such forecasting are not publicly available. We therefore take as a case study a developing country, Bangladesh, whose economy relies on agriculture. We first gather and preprocess data from the relevant research institutions of Bangladesh and then propose an ensemble machine learning approach, called K-nearest Neighbor Random Forest Ridge Regression (KRR), to effectively predict the production of the major crops (three kinds of rice, potato, and wheat). KRR was designed after investigating five existing algorithms: three traditional machine learning methods (Support Vector Regression, Naïve Bayes, and Ridge Regression) and two ensemble learning methods (Random Forest and CatBoost). We consider four classical evaluation metrics, i.e., mean absolute error, mean square error (MSE), root MSE, and R², to compare the proposed KRR against the other machine learning models. It achieves an MSE of 0.009 and R² of 99% for Aus rice; 0.92 and 90% for Aman rice; 0.246 and 99% for Boro rice; 0.062 and 99% for wheat; and 0.016 and 99% for potato production prediction. The Diebold-Mariano test was conducted to check the robustness of the proposed ensemble model; in most cases, KRR is significant at the 1% and 5% levels compared to the benchmark ML models.
Lastly, we design a recommender system that suggests suitable crops for a specific land area for cultivation in the next season. We believe that the proposed paradigm will help farmers and personnel in the agricultural sector achieve proper crop cultivation and production.
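A simplified sketch of the ensemble idea behind KRR, on synthetic data: it averages a k-nearest-neighbor regressor and a closed-form ridge regressor (the random-forest component and the paper's real crop data are omitted; all values here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the crop data: yield as a noisy linear
# function of two inputs (e.g. rainfall and fertilizer).
X = rng.uniform(0, 1, size=(100, 2))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0, 0.05, size=100)

def knn_predict(Xq, k=5):
    """Mean target of the k nearest training points."""
    d = np.linalg.norm(X[None, :, :] - Xq[:, None, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :k]
    return y[idx].mean(axis=1)

def ridge_fit_predict(Xq, lam=1e-3):
    """Closed-form ridge regression with a bias column."""
    A = np.hstack([X, np.ones((len(X), 1))])
    w = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)
    return np.hstack([Xq, np.ones((len(Xq), 1))]) @ w

def ensemble_predict(Xq):
    # Average the two base learners, mimicking the KNN + ridge part of KRR.
    return (knn_predict(Xq) + ridge_fit_predict(Xq)) / 2.0

Xq = np.array([[0.5, 0.5]])
pred = float(ensemble_predict(Xq)[0])  # close to 3*0.5 + 2*0.5 = 2.5
```

Averaging base learners with complementary error profiles is the same design motive the paper gives for combining instance-based and linear components.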

7.
Sensors (Basel) ; 23(15)2023 Jul 26.
Article in English | MEDLINE | ID: mdl-37571490

ABSTRACT

Optical coherence tomography (OCT) is widely used to detect and classify retinal diseases. However, manual OCT-image-based detection by ophthalmologists is prone to error and subjectivity, so various automation methods have been proposed, though further improvements in detection accuracy are required. In particular, automated techniques applying deep learning to OCT images are being developed to detect various retinal disorders at an early stage. Here, we propose a deep learning-based automatic method for detecting and classifying retinal diseases from OCT images. The diseases include age-related macular degeneration, branch retinal vein occlusion, central retinal vein occlusion, central serous chorioretinopathy, and diabetic macular edema. The proposed method comprises four main steps. First, three pretrained models (DenseNet-201, InceptionV3, and ResNet-50) are modified according to the nature of the dataset, after which features are extracted via transfer learning. The extracted features are then refined, and the best features are selected using ant colony optimization (ACO). Finally, the selected features are passed to k-nearest neighbors and support vector machine algorithms for final classification. The proposed method, evaluated on OCT retinal images collected from Soonchunhyang University Bucheon Hospital, achieves an accuracy of 99.1% with ACO and 97.4% without it. The proposed method thus exhibits state-of-the-art performance and outperforms existing techniques in terms of accuracy.
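A loose, hedged sketch of ant-colony-inspired feature selection on synthetic stand-in features (the pheromone update, ant count, and nearest-centroid scoring are simplifications, not the paper's exact ACO formulation):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for the transfer-learning features: 10 dims, of which
# the first 3 are informative about the class label.
X = rng.normal(size=(150, 10))
y = (X[:, :3].sum(axis=1) > 0).astype(int)

def subset_score(mask):
    """Nearest-centroid training accuracy on the selected features."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    c0, c1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    pred = (np.linalg.norm(Xs - c1, axis=1) <
            np.linalg.norm(Xs - c0, axis=1)).astype(int)
    return float((pred == y).mean())

pheromone = np.ones(10)
best_mask = np.ones(10, dtype=int)
best_score = subset_score(best_mask)
for _iteration in range(30):
    for _ant in range(10):
        # Each ant includes feature i with probability rising in its pheromone.
        mask = (rng.random(10) < pheromone / (pheromone + 1.0)).astype(int)
        s = subset_score(mask)
        if s > best_score:
            best_mask, best_score = mask, s
    pheromone *= 0.9                       # evaporation
    pheromone += best_mask * best_score    # reinforce the best subset
```

Evaporation plus best-subset reinforcement concentrates the sampling probability on features that keep appearing in high-scoring subsets.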


Subjects
Deep Learning, Diabetic Retinopathy, Macular Edema, Retinal Diseases, Humans, Diabetic Retinopathy/diagnostic imaging, Tomography, Optical Coherence/methods, Algorithms
8.
Diagnostics (Basel) ; 13(13)2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37443585

ABSTRACT

Across all countries, both developing and developed, breast cancer is the cancer posing the greatest risk to women. Patients whose breast cancer is diagnosed and staged early have a better chance of receiving treatment before the disease spreads. Today's technology makes the automatic analysis and classification of medical images possible, allowing for quicker and more accurate data processing, and the Internet of Things (IoT) is now crucial for the early and remote diagnosis of chronic diseases. In this study, mammography images from the publicly available online repository The Cancer Imaging Archive (TCIA) were used to train a deep transfer learning (DTL) model for an autonomous breast cancer diagnostic system. The data were pre-processed before being fed into the model. A popular deep learning (DL) technique, convolutional neural networks (CNNs), was combined with transfer learning (TL) models such as ResNet50, InceptionV3, AlexNet, VGG16, and VGG19, along with a support vector machine (SVM) classifier, to boost prediction accuracy. Extensive simulations were analyzed using a variety of performance and network metrics to demonstrate the viability of the proposed paradigm. Outperforming some current work based on mammogram images, the experimental accuracy, precision, sensitivity, specificity, and F1-scores reached 97.99%, 99.51%, 98.43%, 80.08%, and 98.97%, respectively, on a large dataset of mammography images categorized as benign and malignant. By incorporating fog computing technologies, the model safeguards the privacy and security of patient data, reduces the load on centralized servers, and improves throughput.

9.
Comput Math Methods Med ; 2022: 7502504, 2022.
Article in English | MEDLINE | ID: mdl-36276999

ABSTRACT

Melanoma is a dangerous form of skin cancer that results in the death of patients when it reaches an advanced stage. Researchers have attempted to develop automated systems for the timely recognition of this deadly disease. However, reliable and precise identification of melanoma moles is a tedious and complex task, as there are large differences in the size, structure, and color of skin lesions. Additionally, noise, blurring, and chrominance changes in the suspected images further increase the complexity of the detection procedure. In the proposed work, we try to overcome the limitations of existing work by presenting a deep learning (DL) model. Specifically, after the preprocessing step, we utilize an object detection approach, the CornerNet model, to detect melanoma lesions. The localized moles are then passed as input to a fuzzy k-means (FKM) clustering approach to perform the segmentation task. To assess the segmentation power of the proposed approach, two standard databases, ISIC-2017 and ISIC-2018, are employed. Extensive experimentation demonstrates the robustness of the proposed approach through both numeric and pictorial results. The proposed approach is capable of detecting and segmenting moles of arbitrary shapes and orientations, and can also tolerate noise, blurring, and brightness variations. We attained segmentation accuracy values of 99.32% and 99.63% on the ISIC-2017 and ISIC-2018 databases, respectively, which clearly demonstrates the effectiveness of our model for melanoma mole segmentation.


Subjects
Melanoma, Moles, Skin Neoplasms, Humans, Animals, Image Processing, Computer-Assisted/methods, Algorithms, Melanoma/diagnostic imaging, Cluster Analysis, Skin Neoplasms/diagnostic imaging, Dermoscopy/methods
10.
Article in English | MEDLINE | ID: mdl-36293619

ABSTRACT

Neural efficiency, the ability to utilize mental resources economically, has not previously been investigated after cognitive training. The purpose of this study was to provide customized cognitive training and confirm its effect on neural efficiency by investigating prefrontal cortex (PFC) activity using functional near-infrared spectroscopy (fNIRS). Before training, a logistic regression algorithm based on PFC activity, fitted to data collected while subjects performed four kinds of cognitive tasks, predicted the customized difficulty level with 86% accuracy. Next, an intervention study was designed with one pre-post test group. Thirteen healthy adults participated in virtual reality (VR)-based spatial cognitive training, conducted four times a week for 30 min over three weeks, with customized difficulty levels for each session. To measure its effect, the trail-making test (TMT) and hemodynamic responses were used to assess executive function and PFC activity. During the training, VR-based spatial cognitive performance improved, and hemodynamic values gradually increased as the sessions progressed. After the training, performance on the TMT showed a statistically significant improvement, and there was a statistically significant decrease in PFC activity. The improved TMT performance coupled with the decreased PFC activity can be regarded as training-induced neural efficiency. These results suggest that personalized cognitive training can be effective in improving executive function and neural efficiency.
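The difficulty-prediction step can be illustrated with a minimal logistic regression trained by gradient descent on synthetic stand-in features (the feature meanings and true weights below are assumptions; the study's real model used fNIRS-derived PFC activity):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for PFC activity features (e.g. mean and slope of
# the oxyhemoglobin response); the label marks whether a task level
# suited the subject. The "true" weights are arbitrary assumptions.
X = rng.normal(size=(200, 2))
w_true = np.array([2.0, -1.0])
y = (X @ w_true > 0).astype(float)

# Plain batch gradient descent on the logistic loss.
w = np.zeros(2)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= 0.1 * (X.T @ (p - y)) / len(X)

p = 1.0 / (1.0 + np.exp(-(X @ w)))
accuracy = float(((p > 0.5).astype(float) == y).mean())
```

On these separable synthetic labels the fitted weights align with the generating direction and the training accuracy is high, mirroring the 86% the study reports on real data.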


Subjects
Prefrontal Cortex, Near-Infrared Spectroscopy, Adult, Humans, Near-Infrared Spectroscopy/methods, Trail Making Test, Prefrontal Cortex/physiology, Cognition, Machine Learning, Algorithms
11.
Comput Intell Neurosci ; 2022: 8238375, 2022.
Article in English | MEDLINE | ID: mdl-35875787

ABSTRACT

Human gait recognition has emerged as a branch of biometric identification over the last decade, identifying individuals based on characteristics such as movement, timing, and clothing; it is also well suited to video surveillance applications. The main issue with existing techniques is the loss of accuracy and speed caused by traditional feature extraction and classification. With advances in deep learning for a variety of applications, particularly video surveillance and biometrics, we propose a lightweight deep learning method for human gait recognition in this work. The proposed method consists of sequential steps: pretrained deep model selection and fine-tuning, feature extraction, fusion, optimization, and classification. Two lightweight pretrained models are first fine-tuned by adding layers and freezing some middle layers. The models are then trained using deep transfer learning, and features are extracted from the fully connected and average pooling layers. Fusion is performed using discriminant correlation analysis, and the fused features are optimized with an improved moth-flame optimization algorithm. The final optimal features are classified using an extreme learning machine (ELM). Experiments were carried out on two publicly available datasets, CASIA B and TUM GAID, and yielded average accuracies of 91.20% and 98.60%, respectively. Compared with recent state-of-the-art techniques, the proposed method is found to be more accurate.
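The extreme learning machine used for final classification admits a compact sketch: hidden-layer weights are random and fixed, and only the output weights are solved in closed form. The synthetic two-class data below stands in for the fused gait features:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for fused gait features: two Gaussian classes in 5-D.
X0 = rng.normal(-1.0, 1.0, size=(100, 5))
X1 = rng.normal(+1.0, 1.0, size=(100, 5))
X = np.vstack([X0, X1])
y = np.hstack([np.zeros(100), np.ones(100)])

# Extreme learning machine: random hidden layer, closed-form output weights.
W = rng.normal(size=(5, 50))       # fixed random input weights
b = rng.normal(size=50)            # fixed random biases
H = np.tanh(X @ W + b)             # hidden-layer activations
beta = np.linalg.pinv(H) @ y       # least-squares output weights

pred = (H @ beta > 0.5).astype(float)
acc = float((pred == y).mean())
```

Because only `beta` is learned, and by a single pseudoinverse, training is essentially instantaneous, which is why ELMs suit lightweight pipelines like this one.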


Subjects
Deep Learning, Moths, Algorithms, Animals, Gait, Humans, Machine Learning, Neural Networks, Computer
12.
J Supercomput ; 78(17): 19228-19245, 2022.
Article in English | MEDLINE | ID: mdl-35754514

ABSTRACT

Wearable health devices that measure respiratory rate (RR) have drawn attention in the healthcare domain, as they help healthcare workers monitor patients' health status continuously and non-invasively. However, to monitor health status outside professional healthcare settings, the reliability of such wearable devices needs to be evaluated in complex environments (e.g., public streets and transportation). This study therefore proposes a method to estimate RR from breathing sounds recorded by a microphone placed inside three types of masks: surgical, respirator (Korean Filter 94), and reusable. The Welch periodogram method was used to estimate the power spectral density of the breathing signals and measure the RR. We evaluated the proposed method by collecting data from 10 healthy participants in four different environments: indoor (office) and outdoor (public street, public bus, and subway). The method achieved errors as low as 0% in accuracy and repeatability in most cases. This research demonstrates that the acoustic-based method could be employed in a wearable device to monitor RR continuously, even outside the hospital environment.
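The Welch-style RR estimation admits a short sketch: average periodograms over overlapping windowed segments, then take the spectral peak in a plausible breathing band. The sampling rate, segment length, and band limits below are assumptions, not the study's settings:

```python
import numpy as np

fs = 100.0                           # sampling rate (Hz), an assumption
t = np.arange(0, 60, 1 / fs)         # one minute of signal
f_breath = 0.25                      # 0.25 Hz = 15 breaths per minute
rng = np.random.default_rng(5)
signal = np.sin(2 * np.pi * f_breath * t) + 0.3 * rng.normal(size=t.size)

# Welch-style PSD: average periodograms over 20-second Hann-windowed
# segments with 50% overlap.
seg = int(20 * fs)
win = np.hanning(seg)
psds = []
for start in range(0, t.size - seg + 1, seg // 2):
    x = signal[start:start + seg] * win
    psds.append(np.abs(np.fft.rfft(x)) ** 2)
psd = np.mean(psds, axis=0)
freqs = np.fft.rfftfreq(seg, 1 / fs)

# Restrict to a plausible breathing band before picking the peak.
band = (freqs >= 0.1) & (freqs <= 0.7)
rr_bpm = 60.0 * float(freqs[band][np.argmax(psd[band])])
```

Segment averaging trades frequency resolution for variance reduction, which is exactly what makes the peak robust against the street and transit noise the study targets.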

13.
Sci Rep ; 12(1): 7141, 2022 05 03.
Article in English | MEDLINE | ID: mdl-35504945

ABSTRACT

Photoplethysmography imaging (PPGI) sensors have attracted significant attention because they enable remote monitoring of heart rate (HR) and thus do not require any additional devices to be worn on the fingers or wrist. In this study, we mounted PPGI sensors on a robot for active and autonomous HR (R-AAH) estimation. We propose an algorithm that provides accurate HR estimation in real time using vision and robot manipulation algorithms. By simplifying the extraction of facial skin images using saturation (S) values in the HSV color space and selecting pixels based on the most frequent S value within the face image, we achieve a reliable HR assessment. The proposed R-AAH algorithm was evaluated by rigorous comparison with existing algorithms on the UBFC-RPPG dataset (n = 42), yielding an average absolute error (AAE) of 0.71 beats per minute (bpm). The algorithm is simple, with a processing time of less than 1 s (275 ms for an 8-s window). It was further validated on our own dataset (the BAMI-RPPG dataset, n = 14) with an AAE of 0.82 bpm.
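The most-frequent-saturation idea can be sketched on a synthetic frame (the histogram bin width and mask tolerance are assumptions, not the paper's values): compute per-pixel HSV saturation, find the modal S value, and keep pixels near it:

```python
import numpy as np

def saturation(img):
    """Per-pixel HSV saturation for an RGB image with values in [0, 1]."""
    mx = img.max(axis=-1)
    mn = img.min(axis=-1)
    return np.where(mx > 0, (mx - mn) / np.where(mx > 0, mx, 1), 0.0)

# Synthetic frame: mostly uniform "skin" pixels (saturation 0.4) plus a
# gray "background" corner (saturation 0).
skin = np.stack([np.full((40, 40), 0.9),
                 np.full((40, 40), 0.66),
                 np.full((40, 40), 0.54)], axis=-1)
img = skin.copy()
img[:10, :10] = 0.8                      # gray corner

S = saturation(img)
hist, edges = np.histogram(S, bins=50, range=(0.0, 1.0))
mode_s = edges[np.argmax(hist)] + 0.01   # centre of the most frequent bin
mask = np.abs(S - mode_s) < 0.05         # keep pixels near the modal S
```

On a real face crop the modal saturation corresponds to skin, so the mask discards background and specular pixels before the rPPG signal is averaged.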


Subjects
Algorithms, Photoplethysmography, Diagnostic Imaging, Face, Heart Rate/physiology, Photoplethysmography/methods
14.
J Healthc Eng ; 2022: 5329014, 2022.
Article in English | MEDLINE | ID: mdl-35368962

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a novel disease that affects healthcare on a global scale and cannot be ignored because of its high fatality rate. Computed tomography (CT) images are presently employed to assist doctors in detecting COVID-19 in its early stages. In several scenarios, a combination of epidemiological criteria (contact during the incubation period), clinical symptoms, laboratory tests (nucleic acid amplification tests), and clinical imaging-based tests is used to diagnose COVID-19; this process can miss patients and cause complications. Deep learning is one of the techniques proven to be prominent and reliable in several diagnostic domains involving medical imaging. This study utilizes a convolutional neural network (CNN), a stacked autoencoder, and a deep neural network to develop a COVID-19 diagnostic system. In this system, the classification stage of each of the three techniques is modified before they are applied to CT images to distinguish normal from COVID-19 cases. A large-scale and challenging CT image dataset was used to train the employed deep learning models and report their final performance. Experimental outcomes show that the highest accuracy was achieved by the CNN model, with an accuracy of 88.30%, a sensitivity of 87.65%, and a specificity of 87.97%. The proposed system thus outperforms current state-of-the-art models in detecting COVID-19 from CT images.


Subjects
COVID-19, Deep Learning, COVID-19/diagnostic imaging, Humans, Neural Networks, Computer, Tomography, X-Ray Computed/methods
15.
J Healthc Eng ; 2022: 4130674, 2022.
Article in English | MEDLINE | ID: mdl-35178226

ABSTRACT

Intelligent decision support systems (IDSS) for complex healthcare applications aim to examine large quantities of complex healthcare data to assist doctors, researchers, pathologists, and other healthcare professionals. A decision support system (DSS) is an intelligent system that provides improved assistance at various stages of health-related disease diagnosis. Meanwhile, the SARS-CoV-2 infection that causes COVID-19 has spread globally since the beginning of 2020. Several research works have reported that imaging patterns in computed tomography (CT) can be utilized to detect SARS-CoV-2, and early identification of the disease is essential to offer adequate treatment and limit its severity. With this motivation, this study develops an efficient deep-learning-based fusion model with swarm intelligence (EDLFM-SI) for SARS-CoV-2 identification. The proposed EDLFM-SI technique aims to detect and classify the presence or absence of SARS-CoV-2 infection. The technique comprises several processes, namely data augmentation, preprocessing, feature extraction, and classification. A fusion of capsule network (CapsNet) and MobileNet feature extractors is employed, and a water strider algorithm (WSA) is applied to fine-tune the hyperparameters of the DL models. Finally, a cascaded neural network (CNN) classifier is applied to detect the existence of SARS-CoV-2. To showcase the improved performance of the EDLFM-SI technique, a wide range of simulations were conducted on the COVID-19 CT dataset and the SARS-CoV-2 CT scan dataset. The simulation outcomes highlight the superiority of the EDLFM-SI technique over recent approaches.


Subjects
COVID-19, Deep Learning, Humans, Intelligence, Neural Networks, Computer, SARS-CoV-2
16.
Sensors (Basel) ; 22(2)2022 Jan 07.
Article in English | MEDLINE | ID: mdl-35062409

ABSTRACT

The number of internet-connected devices, and the high data rates they demand, have been increasing exponentially. Cognitive radio (CR) is a promising technology for addressing the resource shortage in wireless IoT networks. Resource optimization in CR-based Internet of Things (CR-IoT) networks is a non-convex, NP-complete problem, and the combined optimization of conflicting objectives in such networks is a challenging issue. In this paper, energy efficiency (EE) and spectral efficiency (SE) are treated as conflicting optimization objectives. This work proposes a hybrid tabu search-based simulated algorithm (HTSA) to achieve Pareto optimality between EE and SE, and a fuzzy-based decision method is employed to further improve Pareto optimality. The performance of the proposed HTSA approach is analyzed using different resource allocation parameters and validated through simulation results.
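As a hedged illustration of tabu search over a scalarized EE/SE trade-off (the utility function, fixed weights, channel gains, and tabu-list length are toy assumptions, not the paper's HTSA formulation):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy scalarized EE/SE objective over 12 channels (all values assumed):
# activating a channel adds spectral efficiency but costs fixed power.
gain = rng.uniform(0.5, 2.0, size=12)
power_cost = 0.8

def utility(mask):
    used = int(mask.sum())
    if used == 0:
        return 0.0
    se = float(np.log2(1.0 + gain[mask.astype(bool)]).sum())  # spectral eff.
    ee = se / (power_cost * used)                             # energy eff.
    return 0.5 * se + 0.5 * ee     # fixed weights stand in for fuzzy logic

# Basic tabu search over single-bit-flip neighbourhoods.
current = rng.integers(0, 2, size=12)
best, best_u = current.copy(), utility(current)
tabu = []                          # indices of recently flipped bits
for _ in range(60):
    candidates = []
    for i in range(12):
        if i in tabu:
            continue               # tabu moves are skipped
        cand = current.copy()
        cand[i] ^= 1
        candidates.append((utility(cand), i, cand))
    u, i, current = max(candidates, key=lambda c: c[0])
    tabu.append(i)
    if len(tabu) > 4:              # short-term memory length
        tabu.pop(0)
    if u > best_u:
        best, best_u = current.copy(), u
```

The tabu list forces the search away from recently visited configurations, which is what lets it escape local optima that a plain hill climber would get stuck in.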

17.
J Supercomput ; 78(4): 5269-5284, 2022.
Article in English | MEDLINE | ID: mdl-34566258

ABSTRACT

Yoga is a form of exercise beneficial to health, focusing on physical, mental, and spiritual connection. However, adopting incorrect postures while practicing yoga can cause health problems such as muscle sprains and pain. In this study, we propose a yoga posture coaching system using an interactive display, based on a transfer learning technique. Fourteen different yoga postures were recorded with an RGB camera, with eight participants performing each posture 10 times. Data augmentation was applied to oversample the training datasets and prevent over-fitting. Six transfer learning models (TL-VGG16-DA, TL-VGG19-DA, TL-MobileNet-DA, TL-MobileNetV2-DA, TL-InceptionV3-DA, and TL-DenseNet201-DA) were evaluated on the classification task to select the optimal model for the coaching system based on evaluation metrics. The TL-MobileNet-DA model was selected as optimal, showing an overall accuracy of 98.43%, sensitivity of 98.30%, specificity of 99.88%, and a Matthews correlation coefficient of 0.9831. The resulting system recognizes a user's posture movements in real time according to the selected yoga posture guidance and coaches the user to avoid incorrect postures.

18.
PLoS One ; 16(5): e0250959, 2021.
Article in English | MEDLINE | ID: mdl-33970949

ABSTRACT

Compression at a very low bit rate (≤0.5 bpp) causes degradation in video frames with standard decoding algorithms such as H.261, H.262, H.264, MPEG-1, and MPEG-4, which themselves produce numerous artifacts. This paper focuses on an efficient pre- and post-processing technique (PP-AFT) to address and rectify quantization error, ringing, blocking artifacts, and flickering, which significantly degrade the visual quality of video frames. The PP-AFT method uses an activity function to partition blocked images or frames into different regions and applies adaptive filters tailored to each classified region. The designed process also introduces an adaptive flicker extraction and removal method and a 2-D filter to remove ringing effects in edge regions. The PP-AFT technique is implemented on various videos, and the results are compared with existing techniques using performance metrics such as PSNR-B, MSSIM, and GBIM. Simulation results show significant improvement in the subjective quality of different video frames. The proposed method outperforms state-of-the-art de-blocking methods in terms of PSNR-B, with average gains between 0.7 and 1.9 dB and a 35.83-47.7% reduction in average GBIM, while keeping MSSIM values very close to those of the original sequence (0.978 on average).


Subjects
Algorithms, Computer Simulation/standards, Data Compression/methods, Image Enhancement/methods, Signal-To-Noise Ratio, Artifacts, Humans
19.
Sensors (Basel) ; 21(1)2021 Jan 02.
Article in English | MEDLINE | ID: mdl-33401652

ABSTRACT

Hypertension is an antecedent to cardiac disorders. According to the World Health Organization (WHO), the number of people affected by hypertension will reach around 1.56 billion by 2025. Early detection of hypertension is imperative to prevent the complications caused by cardiac abnormalities, but hypertension usually presents no apparent detectable symptoms, so the control rate is significantly low. Computer-aided diagnosis based on machine learning and signal analysis has recently been applied to identify biomarkers for the accurate prediction of hypertension. This research proposes a new expert hypertension detection system (EHDS) based on pulse plethysmograph (PuPG) signals to categorize normal and hypertensive subjects. The PuPG signal dataset, which contains rich information about cardiac activity, was acquired from healthy and hypertensive subjects. The raw PuPG signals were preprocessed through empirical mode decomposition (EMD), which decomposes a signal into its constituent components, and a combination of multi-domain features was extracted from the preprocessed signal. The features exhibiting highly discriminative characteristics were selected and reduced through a proposed hybrid feature selection and reduction (HFSR) scheme. The selected features were then subjected to various classification methods in a comparative fashion, with the best performance of 99.4% accuracy, 99.6% sensitivity, and 99.2% specificity achieved by a weighted k-nearest neighbor (KNN-W) classifier. The performance of the proposed EHDS was thoroughly assessed by tenfold cross-validation, and it achieved better detection performance than other electrocardiogram (ECG)- and photoplethysmograph (PPG)-based methods.
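The weighted k-nearest-neighbor classifier behind the reported best result can be sketched compactly; the synthetic two-class data below stands in for the selected PuPG features:

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy stand-in for the selected PuPG features: two shifted Gaussian
# classes representing normal (0) and hypertensive (1) subjects.
X = np.vstack([rng.normal(-1.0, 1.0, size=(80, 3)),
               rng.normal(+1.0, 1.0, size=(80, 3))])
y = np.hstack([np.zeros(80), np.ones(80)])

def knn_weighted(xq, k=5, eps=1e-9):
    """Distance-weighted KNN: nearer neighbours cast larger votes."""
    d = np.linalg.norm(X - xq, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)          # inverse-distance weights
    votes = np.bincount(y[idx].astype(int), weights=w, minlength=2)
    return float(np.argmax(votes))
```

Unlike plain KNN, the inverse-distance weighting lets a single very close neighbor outvote several distant ones, which tends to help on the overlapping class boundaries typical of physiological features.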


Subjects
Hypertension, Adult, Aged, Algorithms, Computer-Assisted Diagnosis, Electrocardiography, Female, Heart Rate, Humans, Hypertension/diagnosis, Machine Learning, Male, Middle Aged
20.
Entropy (Basel) ; 22(3)2020 Mar 18.
Article in English | MEDLINE | ID: mdl-33286128

ABSTRACT

This paper investigated the behavior of two-dimensional magnetohydrodynamic (MHD) nanofluid flow of water-based suspended carbon nanotubes (CNTs), with entropy generation and nonlinear thermal radiation, in a Darcy-Forchheimer porous medium over a moving horizontal thin needle. The study also incorporated the effects of Hall current, magnetohydrodynamics, and viscous dissipation on dust particles. The flow model was described using high-order partial differential equations, and an appropriate set of transformations was used to reduce their order. The reduced system was then solved using the MATLAB solver bvp4c. The results were compared with the existing literature, with excellent agreement, and are presented using graphs and tables with coherent discussion. The Hall current parameter was found to intensify the velocity profiles for both types of CNTs, and the Bejan number increased for higher values of the Darcy-Forchheimer number.
