1.
Heliyon ; 10(1): e23151, 2024 Jan 15.
Article in English | MEDLINE | ID: mdl-38223736

ABSTRACT

Dengue is one of Pakistan's major health concerns. In this study, we aimed to advance our understanding of the levels of knowledge, attitudes, and practices (KAPs) in Pakistan's Dengue Fever (DF) hotspots. At-risk communities were first identified systematically via Kernel Density Estimation, a well-known spatial modeling technique, and then targeted for a household-based cross-sectional KAP survey. Data on sociodemographics and KAPs were collected using random sampling (n = 385, 5% margin of error). The associations of demographic characteristics, knowledge, and attitude factors potentially related to poor preventive practices were then assessed using bivariate (individual) and multivariable (model) logistic regression analyses. Most respondents (>90%) identified fever as a sign of DF; headache (73.8%), joint pain (64.4%), muscular pain (50.9%), pain behind the eyes (41.8%), bleeding (34.3%), and skin rash (36.1%) were identified less often. Regression results showed significant associations of poor knowledge/attitude with poor preventive practices: dengue vector (odds ratio [OR] = 3.733, 95% confidence interval [CI] = 2.377-5.861; P < 0.001), DF symptoms (OR = 3.088, 95% CI = 1.949-4.894; P < 0.001), dengue transmission (OR = 1.933, 95% CI = 1.265-2.956; P = 0.002), and attitude (OR = 3.813, 95% CI = 1.548-9.395; P = 0.004). Education level showed a strong association in bivariate analysis and was the strongest independent factor of poor preventive practices in multivariable analysis (illiterate: adjusted OR = 6.833, 95% CI = 2.979-15.672; P < 0.001; primary education: adjusted OR = 4.046, 95% CI = 1.997-8.199; P < 0.001). These findings highlight knowledge gaps within urban communities, particularly in understanding dengue transmission and signs/symptoms.
The level of education in urban communities also plays a substantial role in dengue control, as observed in this study, where poor preventive practices were more prevalent among illiterate and less educated respondents.
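The odds ratios above come from logistic regression; as a minimal illustration (not the study's actual data or code), the sketch below fits a one-predictor logistic model on synthetic survey-sized data and recovers an odds ratio by exponentiating the fitted coefficient. The variable names and effect size are assumptions.

```python
import numpy as np

# Hypothetical KAP-style data: does poor knowledge of the dengue vector
# predict poor preventive practice? Effect size and names are assumed.
rng = np.random.default_rng(0)
n = 385                                   # survey sample size from the abstract
poor_knowledge = rng.integers(0, 2, n)    # 1 = poor vector knowledge
true_logit = -1.0 + 1.3 * poor_knowledge  # assumed true log-odds
poor_practice = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Fit the logistic model by plain gradient ascent on the log-likelihood.
X = np.column_stack([np.ones(n), poor_knowledge.astype(float)])
beta = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.5 * X.T @ (poor_practice - p) / n

odds_ratio = float(np.exp(beta[1]))       # OR > 1: poor knowledge raises the odds
print(round(odds_ratio, 2))
```

An OR around the assumed exp(1.3) ≈ 3.7 indicates the fit recovered the planted effect, the same reading as the ORs reported in the abstract.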

2.
J Healthc Eng ; 2022: 7194419, 2022.
Article in English | MEDLINE | ID: mdl-35463679

ABSTRACT

An ECG is a diagnostic technique that examines and records the heart's electrical impulses. Extracting ECG features with conventional methods involves considerable computational abstraction, and manual interpretation is a difficult, time-consuming chore for cardiologists and medical professionals; the proposed classifier addresses these limitations. This study's primary purpose is to calculate the R-R interval and analyze blockage using simple algorithms and approaches that give high accuracy. Data are drawn from the MIT-BIH dataset and may include both normal and abnormal ECGs. A Gabor filter is employed to generate a noiseless signal, and DCT-DOST is used to calculate the signal's amplitude, which is then examined for cardiac anomalies. A genetic algorithm extracts the main features from the R peak and the cycle segment length underlying the ECG signal, so that combining data with specific qualities maximises identification; the genetic algorithm also supports multi-objective optimisation. Finally, a Radial Basis Function Neural Network (RBFNN) is used for classification: an efficient feedforward network that lowers the number of local minima and shows progress in identifying both normal and abnormal ECG signals.
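The R-R interval computation described above reduces to differencing R-peak positions; a minimal sketch, assuming the MIT-BIH sampling rate of 360 Hz and hypothetical peak indices (a real pipeline would first locate the peaks on the denoised, e.g. Gabor-filtered, signal):

```python
import numpy as np

# R-R intervals and instantaneous heart rate from R-peak sample indices.
fs = 360                                          # MIT-BIH sampling rate, Hz
r_peaks = np.array([100, 400, 700, 1000, 1300])   # assumed R-peak indices

rr_sec = np.diff(r_peaks) / fs                    # R-R intervals in seconds
bpm = 60.0 / rr_sec                               # beats per minute
print(rr_sec, bpm)                                # peaks 0.833 s apart -> 72 bpm
```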


Subject(s)
Electrocardiography , Humans , Algorithms , Arrhythmias, Cardiac , Delivery of Health Care , Electrocardiography/methods , Machine Learning , Signal Processing, Computer-Assisted
3.
Comput Intell Neurosci ; 2022: 8777355, 2022.
Article in English | MEDLINE | ID: mdl-35378817

ABSTRACT

Sign language is the native language of deaf people, used in daily life to facilitate communication. It refers to the use of the arms and hands to communicate, particularly among those who are deaf, and it varies by person and region; there is no single standard, so American, British, Chinese, and Arabic sign languages are all distinct. In this study we trained a model to classify Arabic sign language, which consists of 32 Arabic alphabet sign classes. In images, sign language is detected through the pose of the hand. We propose a framework consisting of two CNN models, each trained individually on the training set; the final predictions of the two models are ensembled to achieve higher accuracy. The dataset used, ArSL2018, was released in 2019 by Prince Mohammad Bin Fahd University, Al Khobar, Saudi Arabia. The main contribution of this study is the preprocessing: resizing the images to 64 × 64 pixels, converting grayscale images to three-channel images, and applying a median filter, which acts as lowpass filtering to smooth the images, reduce noise, and make the model more robust against overfitting. The preprocessed images are fed into two different models, ResNet50 and MobileNetV2, implemented together. With this preprocessing, per-model hyperparameter tuning, and data augmentation, we achieved an accuracy of about 97% on the test set for the whole dataset.
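The preprocessing pipeline described (resize to 64 × 64, grayscale to three channels, median filter) can be sketched as follows; this numpy version uses nearest-neighbour resizing and a 3 × 3 median filter as stand-ins for whatever library routines the authors actually used:

```python
import numpy as np

def preprocess(gray, size=64):
    """Resize (nearest-neighbour), 3x3 median filter, stack to 3 channels."""
    h, w = gray.shape
    rows = np.arange(size) * h // size            # nearest-neighbour row indices
    cols = np.arange(size) * w // size
    small = gray[rows][:, cols]                   # resized to (size, size)
    padded = np.pad(small, 1, mode="edge")
    # 3x3 median filter: gather the 9 shifted views, take pixelwise median
    stack = np.stack([padded[i:i + size, j:j + size]
                      for i in range(3) for j in range(3)])
    filtered = np.median(stack, axis=0)
    return np.repeat(filtered[..., None], 3, axis=2)  # grayscale -> 3 channels

img = np.random.default_rng(1).integers(0, 256, (128, 128)).astype(float)
out = preprocess(img)
print(out.shape)                                  # (64, 64, 3)
```

The three-channel stack exists only so the image matches the RGB input shape that ResNet50 and MobileNetV2 expect.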


Subject(s)
Communication Aids for Disabled , Gestures , Computers , Humans , Language , Sign Language , United States
4.
J Healthc Eng ; 2022: 6005446, 2022.
Article in English | MEDLINE | ID: mdl-35388315

ABSTRACT

Human-computer interaction (HCI) has seen a paradigm shift from textual or display-based control toward more intuitive control modalities such as voice, gesture, and mimicry. Speech in particular carries a great deal of information about the speaker's inner state, intent, and desire. While word analysis enables the speaker's request to be understood, other speech features disclose the speaker's mood, purpose, and motive. As a result, emotion recognition from speech has become critical in current human-computer interaction systems. Moreover, the findings of the several disciplines involved in emotion recognition are difficult to combine. Many sound-analysis methods have been developed in the past, but they could not provide emotional analysis of live speech. Today, the development of artificial intelligence and the high performance of deep learning methods bring studies on live data to the fore. This study aims to detect emotions in the human voice using artificial intelligence methods. One of the most important requirements of artificial intelligence work is data. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) open-source dataset was used in the study; it contains more than 2000 recordings of speech and song by 24 actors, collected for eight different moods. The aim was to detect eight emotion classes: neutral, calm, happy, sad, angry, fearful, disgusted, and surprised. The multilayer perceptron (MLP) classifier, a widely used supervised learning algorithm, was chosen for classification. The proposed model's performance was compared with that of similar studies, and the results were evaluated. An overall accuracy of 81% was obtained for classifying the eight emotions with the proposed model on the RAVDESS dataset.
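As an illustration of the model family used, the sketch below trains a one-hidden-layer MLP with softmax output on synthetic eight-class "feature" data; the feature dimension, hidden size, and data are all assumptions, not the study's setup:

```python
import numpy as np

# Synthetic 8-class "audio feature" data; dimensions are assumptions.
rng = np.random.default_rng(0)
n_classes, n_feat, n = 8, 40, 400
y = rng.integers(0, n_classes, n)
centers = rng.normal(scale=3.0, size=(n_classes, n_feat))
X = centers[y] + rng.normal(size=(n, n_feat))     # well-separated clusters

# One-hidden-layer MLP (ReLU + softmax), trained by batch gradient descent.
W1 = rng.normal(scale=0.1, size=(n_feat, 64)); b1 = np.zeros(64)
W2 = rng.normal(scale=0.1, size=(64, n_classes)); b2 = np.zeros(n_classes)
for _ in range(300):
    h = np.maximum(X @ W1 + b1, 0.0)              # hidden ReLU layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)                  # softmax probabilities
    g = p.copy(); g[np.arange(n), y] -= 1.0; g /= n   # dCE/dlogits
    gh = g @ W2.T; gh[h <= 0] = 0.0               # backprop through ReLU
    W2 -= 0.5 * h.T @ g; b2 -= 0.5 * g.sum(0)
    W1 -= 0.5 * X.T @ gh; b1 -= 0.5 * gh.sum(0)

acc = float((p.argmax(1) == y).mean())            # training accuracy on toy data
print(acc)
```

On real RAVDESS audio the input would be extracted features (commonly MFCCs) rather than these synthetic clusters, and accuracy would be judged on a held-out split.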


Subject(s)
Artificial Intelligence , Speech , Computers , Emotions , Female , Humans , Male , Neural Networks, Computer
5.
Comput Intell Neurosci ; 2022: 7463091, 2022.
Article in English | MEDLINE | ID: mdl-35401731

ABSTRACT

Emotions play an essential role in human relationships, and many real-time applications rely on interpreting the speaker's emotion from their words. Speech emotion recognition (SER) modules aid human-computer interface (HCI) applications, but they are challenging to implement because of the lack of balanced data for training and clarity about which features are sufficient for categorization. This research discusses the impact of the classification approach, identifying the most appropriate combination of features and data augmentation on speech emotion detection accuracy. Selection of the correct combination of handcrafted features with the classifier plays an integral part in reducing computation complexity. The suggested classification model, a 1D convolutional neural network (1D CNN), outperforms traditional machine learning approaches in classification. Unlike most earlier studies, which examined emotions primarily through a single language lens, our analysis looks at numerous language data sets. With the most discriminating features and data augmentation, our technique achieves 97.09%, 96.44%, and 83.33% accuracy for the BAVED, ANAD, and SAVEE data sets, respectively.
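The core 1D-CNN building block can be sketched as a valid-mode 1-D convolution over a frame-by-feature sequence followed by ReLU and global max pooling; the frame count, feature dimension (13, MFCC-like), and kernel shape below are assumptions, not the paper's architecture:

```python
import numpy as np

def conv1d(x, kernels):
    """Valid-mode 1-D convolution. x: (T, C_in); kernels: (K, C_in, C_out)."""
    K = kernels.shape[0]
    windows = np.stack([x[t:t + K] for t in range(len(x) - K + 1)])  # (T-K+1, K, C_in)
    return np.einsum("tkc,kco->to", windows, kernels)                # (T-K+1, C_out)

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 13))            # e.g. 100 frames of 13 MFCCs (assumed)
kernels = rng.normal(size=(5, 13, 8))     # kernel width 5, 8 output channels
feat = np.maximum(conv1d(x, kernels), 0)  # ReLU
pooled = feat.max(axis=0)                 # global max pooling -> 8-dim vector
print(feat.shape, pooled.shape)           # (96, 8) (8,)
```

The pooled vector would then feed a dense softmax layer; stacking several such conv blocks gives the 1D CNN the classification advantage the abstract reports over traditional classifiers.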


Subject(s)
Neural Networks, Computer , Speech , Computers , Emotions , Humans , Language
6.
J Healthc Eng ; 2022: 2988262, 2022.
Article in English | MEDLINE | ID: mdl-35273784

ABSTRACT

Detecting physiological changes occurring inside the human body is a difficult challenge in biomedical engineering. At present these irregularities are graded manually, which is difficult, time-consuming, and tiresome owing to the many complexities of the identification methods involved. To identify illnesses at an early stage, computer-assisted diagnostics has attracted increasing attention as a result of the demand for disease detection systems. The major goal of this work is to build a computer-aided design (CAD) system to help in the early identification of glaucoma and in the screening and treatment of the disease. The fundus camera is the most affordable image-analysis modality available and meets the financial needs of the general public. Structural characteristics extracted from the segmented optic disc and optic cup can be used to characterize glaucoma and determine its severity. The primary goal of this study is to estimate the potential of the image-analysis model for the early identification and diagnosis of glaucoma and for the evaluation of ocular disorders. The suggested CAD system would aid the ophthalmologist by providing a second opinion comparable to a judgment made by human specialists in a controlled environment. The method's initial module is an ensemble-based deep learning model for glaucoma diagnosis. Three pretrained convolutional neural networks were used for the classification of glaucoma: the residual network (ResNet), the visual geometry group network (VGGNet), and GoogLeNet.
Five data sets were used to evaluate the proposed algorithm, including DRISHTI-GS, the Optic Nerve Segmentation Database (DRIONS-DB), and the High-Resolution Fundus (HRF) data set. The suggested ensemble architecture achieved an accuracy of 91.11%, sensitivity of 85.55%, and specificity of 95.20% on the PSGIMSR data set. Similarly, accuracy rates of 95.63%, 98.67%, 95.64%, and 88.96% were achieved on the DRIONS-DB, HRF, DRISHTI-GS, and combined data sets, respectively.
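One common way to combine three classifier branches such as ResNet, VGGNet, and GoogLeNet is soft voting over their class probabilities; the abstract does not state the exact fusion rule, so the probabilities below are hypothetical:

```python
import numpy as np

# Hypothetical per-branch probabilities for [glaucoma, normal].
p_resnet    = np.array([0.70, 0.30])
p_vggnet    = np.array([0.60, 0.40])
p_googlenet = np.array([0.35, 0.65])

p_ensemble = np.mean([p_resnet, p_vggnet, p_googlenet], axis=0)  # soft voting
label = int(np.argmax(p_ensemble))        # 0 = glaucoma in this toy encoding
print(p_ensemble, label)                  # [0.55 0.45] -> 0
```

Averaging lets two confident branches outvote one dissenting branch while still producing a calibrated-looking probability for the ophthalmologist's second opinion.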


Subject(s)
Glaucoma , Optic Disk , Diagnosis, Computer-Assisted/methods , Fundus Oculi , Glaucoma/diagnostic imaging , Humans , Optic Disk/diagnostic imaging , Supervised Machine Learning
7.
J Healthc Eng ; 2022: 8732213, 2022.
Article in English | MEDLINE | ID: mdl-35273786

ABSTRACT

Telehealth and remote patient monitoring (RPM) have been critical components that have received substantial attention and gained hold since the pandemic's beginning. Telehealth and RPM allow easy access to patient data and help provide high-quality care to patients at a low cost. This article proposes an Intelligent Remote Patient Activity Tracking System that can monitor patient activities, and vitals during those activities, based on attached sensors. An Internet of Things- (IoT-) enabled health monitoring device is designed using machine learning models to track patients' activities such as running, sleeping, walking, and exercising; the vitals during those activities, such as body temperature and heart rate; and the patient's breathing pattern during such activities. Machine learning models are used to identify different activities of the patient and analyze the patient's respiratory health during various activities. Currently, the machine learning models detect cough and healthy breathing only. A web application is also designed to track the data uploaded by the proposed devices.


Subject(s)
Internet of Things , Telemedicine , Artificial Intelligence , Humans , Machine Learning , Monitoring, Physiologic
8.
J Healthc Eng ; 2022: 1128217, 2022.
Article in English | MEDLINE | ID: mdl-35281546

ABSTRACT

The field of image processing is distinguished by the variety of functions it offers and its wide range of applications in biomedical imaging. Manual identification and categorization of tumours is a complex, time-consuming procedure for radiologists and clinical professionals, who must delineate the affected tumour region from magnetic resonance (MR) images. The goal of this study is to improve performance and reduce the complexity of the image segmentation process by investigating FCM-based segmentation procedures. Furthermore, relevant characteristics are extracted from each segmented tissue and supplied as input to classifiers for automatic identification and classification of brain tumours, increasing the accuracy and quality rate of the neural network classifier. The experimental performance of the suggested approach has been evaluated, validated, and presented. This work presents a novel accelerated particle swarm optimization (APSO) based artificial neural network model (ANNM) for classifying benign and malignant tumours, allowing automated identification and categorization of brain tumours. Using APSO training to tune the ANNM parameters offers a way to relieve radiologists of the stressful work of manually identifying brain tumours from MR images, and the APSO-based ANNM demonstrates the robustness of the classification model.
The improved enhanced fuzzy c-means (IEnFCM) method is used for image segmentation, while the gray level co-occurrence matrix (GLCM) approach is employed for feature extraction from MR images.
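GLCM features such as contrast and energy can be computed as below; this sketch uses a single horizontal offset and a tiny 4-level image for illustration (the study's offsets, quantisation levels, and feature set may differ):

```python
import numpy as np

def glcm_features(img, levels=4):
    """Horizontal-offset GLCM, then Haralick contrast and energy."""
    pairs = np.stack([img[:, :-1].ravel(), img[:, 1:].ravel()], axis=1)
    glcm = np.zeros((levels, levels))
    for i, j in pairs:                          # count co-occurring grey levels
        glcm[i, j] += 1
    glcm /= glcm.sum()                          # normalise to probabilities
    idx = np.arange(levels)
    contrast = ((idx[:, None] - idx[None, :]) ** 2 * glcm).sum()
    energy = (glcm ** 2).sum()
    return contrast, energy

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
contrast, energy = glcm_features(img)
print(contrast, energy)                         # contrast = 1/3, energy = 1/6
```

Vectors of such texture descriptors, one per segmented tissue, are what the ANNM classifier would consume.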


Subject(s)
Algorithms , Brain Neoplasms , Brain Neoplasms/diagnostic imaging , Humans , Image Processing, Computer-Assisted/methods , Machine Learning , Magnetic Resonance Imaging/methods
9.
J Healthc Eng ; 2022: 4277436, 2022.
Article in English | MEDLINE | ID: mdl-35154620

ABSTRACT

Automated segmentation of the liver and hepatic lesions is a significant step for studying biomarker characteristics in experimental analysis and computer-aided diagnosis support schemes. Lesion appearance varies from patient to patient with lesion size, imaging equipment (such as contrast settings), and timing. With conventional approaches it is difficult to determine the stage of liver cancer from segmented lesion patterns, and existing algorithms face obstacles in several domains with respect to training accuracy. The proposed system automatically detects liver tumours and lesions in abdominal magnetic resonance images using 3D affine-invariant and shape-parameterization approaches. This point-to-point parameterization addresses the frequent issues associated with concave surfaces by establishing a standard model level for the organ's surface during modelling. Initially, the geodesic active contour approach is used to separate the liver region from the rest of the body; feeding the segmented tumour area into Cascaded Fully Convolutional Neural Networks (CFCNs) then minimises the error rate during training, so liver segmentation helps reduce training error. Results are obtained and validated by stage analysis of data sets comprising training and testing images. The CFCN achieves an accuracy of 94.21 percent for liver tumour analysis, with a computation time of less than 90 seconds per volume.
The trials show that the overall accuracy of the training and testing procedure is 93.85 percent across the various volumes of the 3DIRCAD data sets tested.
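The cascade idea, a first-stage liver mask constraining the second-stage lesion search, can be sketched with placeholder thresholding functions standing in for the two FCNs:

```python
import numpy as np

def stage1_liver(img):                 # stand-in for the liver-segmentation FCN
    return img > 0.2                   # coarse liver mask

def stage2_lesion(img, liver_mask):    # stand-in for the lesion FCN
    lesion = img > 0.8
    return lesion & liver_mask         # keep only lesions inside the liver

rng = np.random.default_rng(0)
img = rng.random((64, 64))             # toy "slice" with values in [0, 1]
liver = stage1_liver(img)
lesion = stage2_lesion(img, liver)
print(int(lesion.sum()), int(liver.sum()))
```

Restricting the second stage to the first stage's output is what lets the cascade ignore lesion-like intensities outside the organ, which is the error-rate benefit the abstract attributes to liver segmentation.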


Subject(s)
Early Detection of Cancer , Liver Neoplasms , Abdomen , Humans , Image Processing, Computer-Assisted/methods , Liver Neoplasms/diagnostic imaging , Magnetic Resonance Imaging , Neural Networks, Computer
10.
Expert Syst ; 39(6): e12834, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34898797

ABSTRACT

Following the COVID-19 pandemic, there has been an increase in interest in using digital resources to contain pandemics. To avoid, detect, monitor, regulate, track, and manage diseases, predict outbreaks and conduct data analysis and decision-making processes, a variety of digital technologies are used, ranging from artificial intelligence (AI)-powered machine learning (ML) or deep learning (DL) focused applications to blockchain technology and big data analytics enabled by cloud computing and the internet of things (IoT). In this paper, we look at how emerging technologies such as the IoT and sensors, AI, ML, DL, blockchain, augmented reality, virtual reality, cloud computing, big data, robots and drones, intelligent mobile apps, and 5G are advancing health care and paving the way to combat the COVID-19 pandemic. The aim of this research is to look at possible technologies, processes, and tools for addressing COVID-19 issues such as pre-screening, early detection, monitoring infected/quarantined individuals, forecasting future infection rates, and more. We also look at the research possibilities that have arisen as a result of the use of emerging technology to handle the COVID-19 crisis.

11.
Environ Res ; 199: 111370, 2021 08.
Article in English | MEDLINE | ID: mdl-34043971

ABSTRACT

Heavy metal ions in aqueous solutions are regarded as one of the most harmful environmental issues, seriously affecting human health. Pb(II) is a common heavy-metal pollutant in industrial wastewater, and various methods have been developed to remove it; adsorption is efficient, cheap, and eco-friendly. Removal efficiency depends on the process parameters: initial concentration, adsorbent dosage of the T-Fe3O4 nanocomposites, residence time, and pH. The relationship between these parameters and the output is non-linear and complex. The purpose of the present study is to develop an artificial neural network (ANN) model to estimate and analyze the relationship between Pb(II) removal and the adsorption process parameters. The model was trained with the backpropagation algorithm and validated on unseen datasets; the adjusted R² over all datasets is 0.991. The relationship between the parameters and Pb(II) removal was analyzed by sensitivity analysis and by simulating a virtual adsorption process. The study found ANN modeling to be a reliable tool for predicting and optimizing adsorption process parameters for maximum lead removal from aqueous solutions.
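As an illustration of the workflow, the sketch below trains a tiny one-hidden-layer ANN by backpropagation on a synthetic, saturating dose-removal curve and then runs a one-at-a-time "virtual" dosage sweep; the response function and all hyperparameters are assumptions, not the paper's data or architecture:

```python
import numpy as np

# Synthetic saturating dose-removal curve (an assumption, not the paper's data).
rng = np.random.default_rng(0)
dose = rng.uniform(0.0, 1.0, (200, 1))         # normalised adsorbent dosage
removal = dose / (0.2 + dose)                  # Langmuir-like response

# One-hidden-layer ANN trained by backpropagation (gradient descent on MSE).
W1 = rng.normal(scale=0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros(1)
for _ in range(5000):
    h = np.tanh(dose @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - removal                       # MSE gradient (up to a factor 2)
    gh = (err @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
    W2 -= 0.3 * h.T @ err / len(dose); b2 -= 0.3 * err.mean(0)
    W1 -= 0.3 * dose.T @ gh / len(dose); b1 -= 0.3 * gh.mean(0)

# "Virtual adsorption process": one-at-a-time sweep over dosage.
scan = np.linspace(0.0, 1.0, 11)[:, None]
sweep = np.tanh(scan @ W1 + b1) @ W2 + b2
print(float(sweep[-1, 0] - sweep[0, 0]))       # positive: removal rises with dosage
```

In the full study the sweep would vary one parameter (dosage, pH, time, or concentration) while holding the others at representative values, which is how the sensitivity analysis reads off each parameter's influence on removal.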


Subject(s)
Nanocomposites , Water Pollutants, Chemical , Adsorption , Ferric Compounds , Humans , Hydrogen-Ion Concentration , Kinetics , Lead , Neural Networks, Computer , Solutions , Water Pollutants, Chemical/analysis