Results 1 - 20 of 87
1.
BMC Med Inform Decis Mak ; 24(1): 92, 2024 Apr 05.
Article in English | MEDLINE | ID: mdl-38575951

ABSTRACT

Emerging from the convergence of digital twin technology and the metaverse, metaverse consumer health (MCH) is witnessing a transformative shift. The amalgamation of bioinformatics with healthcare Big Data has ushered in a new era of disease prediction models that harness comprehensive medical data, enabling the anticipation of illnesses even before the onset of symptoms. Among such models, deep neural networks stand out because increasing network depth and updating weights through gradient descent improves accuracy remarkably. Nonetheless, these traditional methods face their own challenges, including gradient instability and slow training. The Broad Learning System (BLS) offers a strong alternative: it avoids gradient-descent training and allows a model to be rebuilt quickly through incremental learning. However, BLS struggles to extract complex features from intricate medical data, which limits its usefulness across a wide range of healthcare scenarios. In response to these challenges, we introduce DAE-BLS, a novel hybrid model that marries Denoising AutoEncoder (DAE) noise reduction with the efficiency of BLS. This hybrid approach excels at robust feature extraction, particularly within the intricate and multifaceted world of medical data. Validation on diverse datasets yields impressive results, with accuracies reaching as high as 98.50%. The ability of DAE-BLS to adapt rapidly through incremental learning holds great promise for accurate and agile disease prediction, especially within today's complex and dynamic healthcare scenarios.


Subjects
Big Data, Technology, Humans, Computational Biology, Health Facilities, Neural Networks, Computer
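
As a rough illustration of the broad-learning component described above, the sketch below implements a minimal Broad Learning System in NumPy: random feature and enhancement nodes followed by a closed-form ridge-regression readout, so no gradient descent is needed. The DAE front end and the incremental-update step are omitted, and all layer sizes and the toy data are illustrative assumptions, not values from the paper.

```python
import numpy as np

def bls_fit(X, Y, n_feature=100, n_enhance=200, reg=1e-3, seed=0):
    """Minimal Broad Learning System: random feature nodes plus enhancement
    nodes, solved in closed form with ridge regression (no gradient descent)."""
    rng = np.random.default_rng(seed)
    Wf = rng.standard_normal((X.shape[1], n_feature))   # feature-node weights
    Z = np.tanh(X @ Wf)                                  # feature nodes
    We = rng.standard_normal((n_feature, n_enhance))     # enhancement weights
    H = np.tanh(Z @ We)                                  # enhancement nodes
    A = np.hstack([Z, H])                                # broad expansion
    # ridge-regularised readout solved with a linear system
    W_out = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return Wf, We, W_out

def bls_predict(X, Wf, We, W_out):
    Z = np.tanh(X @ Wf)
    H = np.tanh(Z @ We)
    return np.hstack([Z, H]) @ W_out

# toy usage: 200 samples, 30 features, 3 one-hot disease classes (placeholder data)
X = np.random.rand(200, 30)
Y = np.eye(3)[np.random.randint(0, 3, 200)]
params = bls_fit(X, Y)
pred = bls_predict(X, *params).argmax(axis=1)
```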
2.
Methods ; 202: 88-102, 2022 06.
Article in English | MEDLINE | ID: mdl-33610692

ABSTRACT

Skin cancer is one of the most common and dangerous cancers worldwide. Malignant melanoma, one of the most dangerous skin cancer types, has a high mortality rate. An estimated 196,060 melanoma cases were projected to be diagnosed in the USA in 2020. Many computerized techniques have been presented in the past to diagnose skin lesions, but they still fail to achieve significant accuracy. To improve on the existing accuracy, we propose a hierarchical framework based on two-dimensional superpixels and deep learning. First, we enhance the contrast of the original dermoscopy images by fusing locally and globally enhanced images. The enhanced images are then used to segment skin lesions through a three-step superpixel lesion segmentation. The segmented lesions are mapped onto the enhanced dermoscopy images to obtain color images containing only the segmented regions. A deep learning model (ResNet-50) is then applied to these mapped images and features are learned through transfer learning. The extracted features are further optimized using an improved grasshopper optimization algorithm and finally classified with a Naïve Bayes classifier. The proposed hierarchical method has been evaluated on three datasets (PH2, ISBI2016, and HAM10000), consisting of three, two, and seven skin cancer classes, respectively. On these datasets, our method achieved accuracies of 95.40%, 91.1%, and 85.50%, respectively. The results show that this method can be helpful for the classification of skin cancer with improved accuracy.


Subjects
Deep Learning, Melanoma, Skin Diseases, Skin Neoplasms, Algorithms, Bayes Theorem, Dermoscopy/methods, Humans, Melanoma/diagnostic imaging, Melanoma/pathology, Skin Neoplasms/diagnostic imaging, Skin Neoplasms/pathology
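
A minimal sketch of the transfer-learning feature-extraction step described above, assuming torchvision's pre-trained ResNet-50 and scikit-learn's Gaussian Naïve Bayes; the contrast enhancement, superpixel segmentation, and grasshopper-based feature optimization stages are not shown, and the image/label variables are placeholders.

```python
import torch, torchvision
from torchvision import transforms
from sklearn.naive_bayes import GaussianNB

# Pre-trained ResNet-50 with the classification head removed -> 2048-D features
backbone = torchvision.models.resnet50(weights="DEFAULT")
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(pil_images):
    """Stack preprocessed lesion images and return ResNet-50 features."""
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# lesion_images / labels are assumed segmented dermoscopy crops and their classes
# feats = extract_features(lesion_images)
# clf = GaussianNB().fit(feats, labels)
```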
3.
Sensors (Basel) ; 23(5)2023 Mar 02.
Article in English | MEDLINE | ID: mdl-36904963

ABSTRACT

The performance of human gait recognition (HGR) is affected by partial obstruction of the human body caused by the limited field of view in video surveillance. Traditional methods require a bounding box to recognize human gait accurately in video sequences; however, this is a challenging and time-consuming approach. Owing to important applications such as biometrics and video surveillance, HGR performance has improved over the last half-decade. Based on the literature, the challenging covariate factors that degrade gait recognition performance include walking while wearing a coat or carrying a bag. This paper proposes a new two-stream deep learning framework for human gait recognition. In the first step, a contrast enhancement technique based on the fusion of local and global filter information is proposed, and a high-boost operation is then applied to highlight the human region in each video frame. In the second step, data augmentation is performed to increase the size of the preprocessed dataset (CASIA-B). In the third step, two pre-trained deep learning models, MobileNetV2 and ShuffleNet, are fine-tuned and trained on the augmented dataset using deep transfer learning, and features are extracted from the global average pooling layer instead of the fully connected layer. In the fourth step, the extracted features of both streams are fused using a serial-based approach and further refined in the fifth step by an improved equilibrium state optimization-controlled Newton-Raphson (ESOcNR) selection method. The selected features are finally classified using machine learning algorithms. The experiments were conducted on 8 angles of the CASIA-B dataset, obtaining accuracies of 97.3%, 98.6%, 97.7%, 96.5%, 92.9%, 93.7%, 94.7%, and 91.2%, respectively. Comparisons with state-of-the-art (SOTA) techniques showed improved accuracy and reduced computational time.


Subjects
Deep Learning, Humans, Algorithms, Gait, Machine Learning, Biometry/methods
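
The serial (concatenation) fusion of global-average-pooled features from the two backbones could look roughly like the sketch below, using torchvision's MobileNetV2 and ShuffleNetV2; the fine-tuning, augmentation, and ESOcNR selection steps are omitted, and the input tensors are stand-ins for preprocessed CASIA-B frames.

```python
import torch, torchvision

mobilenet = torchvision.models.mobilenet_v2(weights="DEFAULT")
shufflenet = torchvision.models.shufflenet_v2_x1_0(weights="DEFAULT")
mobilenet.classifier = torch.nn.Identity()   # keep 1280-D global-average-pooled features
shufflenet.fc = torch.nn.Identity()          # keep 1024-D global-average-pooled features
mobilenet.eval(); shufflenet.eval()

@torch.no_grad()
def fused_features(batch):
    """Serial fusion: concatenate the two streams' pooled feature vectors."""
    f1 = mobilenet(batch)                    # (N, 1280)
    f2 = shufflenet(batch)                   # (N, 1024)
    return torch.cat([f1, f2], dim=1).numpy()  # (N, 2304)

# toy check with random tensors standing in for preprocessed gait frames
print(fused_features(torch.rand(2, 3, 224, 224)).shape)
```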
4.
Sensors (Basel) ; 23(8)2023 Apr 14.
Article in English | MEDLINE | ID: mdl-37112323

ABSTRACT

With the most recent developments in wearable technology, the possibility of continually monitoring stress using various physiological factors has attracted much attention. By reducing the detrimental effects of chronic stress, early diagnosis of stress can enhance healthcare. Machine Learning (ML) models are trained for healthcare systems to track health status using adequate user data. However, due to privacy concerns, only limited data are accessible, which makes it challenging to use Artificial Intelligence (AI) models in the medical industry. This research aims to preserve the privacy of patient data while classifying wearable-based electrodermal activity. We propose a Federated Learning (FL) based approach using a Deep Neural Network (DNN) model. For experimentation, we use the Wearable Stress and Affect Detection (WESAD) dataset, which includes five data states: transient, baseline, stress, amusement, and meditation. We transform this raw dataset into a suitable form for the proposed methodology using the Synthetic Minority Oversampling Technique (SMOTE) and min-max normalization as pre-processing methods. In the FL-based technique, the DNN is trained on each client's dataset individually, and the server receives model updates from the two clients. To decrease the over-fitting effect, every client analyses the results three times. Accuracy, precision, recall, F1-score, and Area Under the Receiver Operating Characteristic curve (AUROC) values are evaluated for each client. The experimental results show the effectiveness of the federated learning-based technique on a DNN, reaching 86.82% accuracy while also preserving the privacy of patients' data. Using the FL-based DNN model on the WESAD dataset improves detection accuracy compared with previous studies while also preserving the privacy of patient data.


Subjects
Artificial Intelligence, Wrist, Humans, Galvanic Skin Response, Wrist Joint, Fitness Trackers
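
A hedged sketch of the preprocessing and federated-averaging steps described above, using imbalanced-learn's SMOTE, scikit-learn's MinMaxScaler, and a small Keras DNN; the exact WESAD feature extraction, network architecture, and client configuration are assumptions rather than the authors' settings.

```python
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE
from sklearn.preprocessing import MinMaxScaler

def make_dnn(n_features, n_classes):
    return tf.keras.Sequential([
        tf.keras.Input(shape=(n_features,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

def preprocess(X, y):
    X = MinMaxScaler().fit_transform(X)                 # min-max normalisation
    return SMOTE(random_state=0).fit_resample(X, y)     # balance minority classes

def fedavg_round(global_model, client_data, epochs=3):
    """One FedAvg round: each client trains locally, the server averages weights."""
    client_weights = []
    for X, y in client_data:
        local = tf.keras.models.clone_model(global_model)
        local.set_weights(global_model.get_weights())
        local.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        local.fit(X, y, epochs=epochs, verbose=0)
        client_weights.append(local.get_weights())
    new_weights = [np.mean(layer, axis=0) for layer in zip(*client_weights)]
    global_model.set_weights(new_weights)
    return global_model

# clients = [preprocess(X1, y1), preprocess(X2, y2)]   # two WESAD clients (assumed split)
# model = fedavg_round(make_dnn(X1.shape[1], 5), clients)
```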
5.
Comput Mater Contin ; 76(2): 2201-2216, 2023 Aug 30.
Article in English | MEDLINE | ID: mdl-38559807

ABSTRACT

Breast cancer is a major public health concern that affects women worldwide. It is a leading cause of cancer-related deaths among women, and early detection is crucial for successful treatment. Unfortunately, breast cancer can often go undetected until it has reached advanced stages, making it more difficult to treat. Therefore, there is a pressing need for accurate and efficient diagnostic tools to detect breast cancer at an early stage. The proposed approach utilizes SqueezeNet with fire modules and complex bypass to extract informative features from mammography images. The extracted features are then utilized to train a support vector machine (SVM) for mammography image classification. The SqueezeNet-guided SVM model, known as SNSVM, achieved promising results, with an accuracy of 94.10% and a sensitivity of 94.30%. A 10-fold cross-validation was performed to ensure the robustness of the results, and the mean and standard deviation of various performance indicators were calculated across multiple runs. This model also outperforms state-of-the-art models in all performance indicators, indicating its superior performance. This demonstrates the effectiveness of the proposed approach for breast cancer diagnosis using mammography images. The superior performance of the proposed model across all indicators makes it a promising tool for early breast cancer diagnosis. This may have significant implications for reducing breast cancer mortality rates.
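
A minimal sketch of the SqueezeNet-guided SVM idea, assuming torchvision's standard SqueezeNet 1.1 (the complex-bypass variant described above is not available there) and scikit-learn's SVC with 10-fold cross-validation; the preprocessing and the mammography data variables are placeholders.

```python
import torch, torchvision
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# SqueezeNet 1.1 (fire modules); keep only the convolutional feature extractor
net = torchvision.models.squeezenet1_1(weights="DEFAULT")
features = net.features
features.eval()

@torch.no_grad()
def squeeze_features(batch):
    """batch: preprocessed (N, 3, 224, 224) mammogram tensors -> (N, 512) features."""
    fmap = features(batch)                               # (N, 512, H, W)
    return torch.nn.functional.adaptive_avg_pool2d(fmap, 1).flatten(1).numpy()

# X_imgs, y are assumed preprocessed mammography tensors and benign/malignant labels
# feats = squeeze_features(X_imgs)
# scores = cross_val_score(SVC(kernel="rbf"), feats, y, cv=10)   # 10-fold CV
# print(scores.mean(), scores.std())
```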

6.
Pers Ubiquitous Comput ; 27(3): 733-750, 2023.
Article in English | MEDLINE | ID: mdl-33456433

ABSTRACT

The novel human coronavirus disease COVID-19 has become the fifth documented pandemic since the 1918 flu pandemic. COVID-19 was first reported in Wuhan, China, and subsequently spread worldwide. Almost all countries of the world are facing this natural challenge. We present forecasting models to estimate and predict the COVID-19 outbreak in Asia Pacific countries, particularly Pakistan, Afghanistan, India, and Bangladesh. We utilize recent deep learning techniques, namely Long Short-Term Memory networks (LSTM), Recurrent Neural Networks (RNN), and Gated Recurrent Units (GRU), to quantify the intensity of the pandemic for the near future. We consider the time variable and data non-linearity when employing neural networks. Each model's salient features have been evaluated to foresee the number of COVID-19 cases in the next 10 days. The forecasting performance of the employed deep learning models, evaluated on data up to July 01, 2020, is more than 90% accurate, which shows the reliability of the proposed study. We hope that the present comparative analysis will provide government officials with an accurate picture of the pandemic's spread so that they can take appropriate mitigation measures.
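
A toy one-step-ahead LSTM forecaster in Keras along the lines described above; the case-count series, window length, and network size are illustrative placeholders, not the study's data or configuration, and the RNN/GRU variants differ only in the recurrent layer.

```python
import numpy as np
import tensorflow as tf

def windowed(series, lookback=7):
    """Turn a 1-D case-count series into (samples, lookback, 1) inputs and next-day targets."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.array(X)[..., None], np.array(y)

# daily cumulative cases (illustrative placeholder series, not real data)
series = np.cumsum(np.random.poisson(50, 120)).astype("float32")
series /= series.max()                       # simple scaling for training stability
X, y = windowed(series)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(X.shape[1], 1)),
    tf.keras.layers.LSTM(64),                # GRU/SimpleRNN variants swap in here
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)
next_day = model.predict(X[-1:], verbose=0)  # one-step-ahead forecast
```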

7.
Sensors (Basel) ; 22(5)2022 Mar 04.
Article in English | MEDLINE | ID: mdl-35271159

ABSTRACT

In condition-based maintenance, different signal processing techniques are used to sense faults through the vibration and acoustic emission signals received from machinery. These signal processing approaches mostly utilise time, frequency, and time-frequency domain analysis. The obtained features are then integrated with different machine learning techniques to classify the faults into different categories. In this work, different statistical features of vibration signals in the time and frequency domains are studied for the detection and localisation of faults in roller bearings, which are classified into healthy, outer race fault, inner race fault, and ball fault classes. The statistical features considered include the skewness, kurtosis, average, and root mean square values of the time domain vibration signals. These features are also extracted from the second derivative of the time domain vibration signals and from the power spectral density of the vibration signals. The vibration signal is additionally converted to the frequency domain, and the same features are extracted there. All three feature sets are concatenated, creating the time, frequency, and spectral power domain feature vectors. These feature vectors are finally fed into K-nearest neighbour (KNN), support vector machine, and kernel linear discriminant analysis (KLDA) classifiers for the detection and classification of bearing faults. With the proposed method, a reduction of more than 95% is achieved, which reduces not only the computational burden but also the classification time. Simulation results show that the signals are classified with an average accuracy of 99.13% using KLDA and 96.64% using KNN. The results are also compared with empirical mode decomposition (EMD) features and Fourier transform features without extracting any statistical information, which are two of the most widely used approaches in the literature. To gain a certain level of confidence in the classification results, a detailed statistical analysis is also provided.


Subjects
Signal Processing, Computer-Assisted, Vibration, Computer Simulation, Machine Learning, Support Vector Machine
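
The statistical feature extraction described above might be sketched as follows with NumPy/SciPy: skewness, kurtosis, mean, and RMS computed from the raw signal, its second derivative, its spectrum, and its Welch power spectral density, then fed to a KNN classifier; the sampling rate and data variables are assumptions.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from scipy.signal import welch
from sklearn.neighbors import KNeighborsClassifier

def stat_features(x):
    """Skewness, kurtosis, mean, and RMS of a 1-D signal."""
    return [skew(x), kurtosis(x), np.mean(x), np.sqrt(np.mean(x ** 2))]

def bearing_features(signal, fs=12000):        # fs is an assumed sampling rate
    time_feats = stat_features(signal)
    deriv_feats = stat_features(np.gradient(np.gradient(signal)))   # 2nd derivative
    freq_feats = stat_features(np.abs(np.fft.rfft(signal)))         # frequency domain
    _, psd = welch(signal, fs=fs)
    psd_feats = stat_features(psd)                                   # spectral power domain
    return np.concatenate([time_feats, deriv_feats, freq_feats, psd_feats])

# signals / labels are assumed vibration segments and their fault classes
# X = np.vstack([bearing_features(s) for s in signals])
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, labels)
```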
8.
Sensors (Basel) ; 22(4)2022 Feb 14.
Article in English | MEDLINE | ID: mdl-35214375

ABSTRACT

The early prediction of Alzheimer's disease (AD) can be vital for patient survival and serves as an accommodating and facilitative factor for specialists. The proposed work presents an automated predictive framework, based on machine learning (ML) methods, for the forecasting of AD. Neuropsychological measures (NM) and magnetic resonance imaging (MRI) biomarkers are derived and passed to a recurrent neural network (RNN). In the RNN, we use long short-term memory (LSTM), and the proposed model predicts the biomarkers (feature vectors) of patients after 6, 12, 18, 24, and 36 months. These predicted biomarkers then pass through fully connected neural network layers, which predict whether the RNN-predicted biomarkers belong to an AD patient or a patient with mild cognitive impairment (MCI). The developed methodology has been evaluated on the publicly available ADNI dataset and achieved an accuracy of 88.24%, which is superior to the next-best available algorithms.


Subjects
Alzheimer Disease, Cognitive Dysfunction, Alzheimer Disease/diagnosis, Alzheimer Disease/pathology, Biomarkers, Cognitive Dysfunction/diagnostic imaging, Humans, Magnetic Resonance Imaging/methods, Memory, Short-Term
9.
Sensors (Basel) ; 22(2)2022 Jan 07.
Article in English | MEDLINE | ID: mdl-35062405

ABSTRACT

Glaucoma is an eye disease caused by excessive intraocular pressure, and it leads to complete blindness at its advanced stage, whereas timely screening-based treatment can save the patient from complete vision loss. Accurate screening procedures depend on the availability of human experts who perform manual analysis of retinal samples to identify glaucomatous-affected regions. However, due to the complexity of glaucoma screening procedures and the shortage of human resources, delays often occur, which can increase the vision loss ratio around the globe. To cope with the challenges of manual systems, there is an urgent demand for an effective automated framework that can accurately identify Optic Disc (OD) and Optic Cup (OC) lesions at the earliest stage. Efficient and effective identification and classification of glaucomatous regions is a complicated job due to the wide variations in the size, shade, orientation, and shape of lesions. Furthermore, the extensive similarity between lesion and eye color further complicates the classification process. To overcome these challenges, we present a Deep Learning (DL)-based approach, namely EfficientDet-D0 with EfficientNet-B0 as the backbone. The presented framework comprises three steps for glaucoma localization and classification. Initially, deep features of the suspected samples are computed with the EfficientNet-B0 feature extractor. Then, the Bi-directional Feature Pyramid Network (BiFPN) module of EfficientDet-D0 takes the computed features from EfficientNet-B0 and performs top-down and bottom-up keypoint fusion several times. In the last step, the resultant localized area containing the glaucoma lesion, with its associated class, is predicted. We confirm the robustness of our work by evaluating it on a challenging dataset, namely the online retinal fundus image database for glaucoma analysis (ORIGA). Furthermore, we performed cross-dataset validation on the High-Resolution Fundus (HRF) and Retinal IMage database for Optic Nerve Evaluation (RIM-ONE DL) datasets to show the generalization ability of our work. Both numeric and visual evaluations confirm that EfficientDet-D0 outperforms the newest frameworks and is more proficient in glaucoma classification.


Subjects
Deep Learning, Glaucoma, Optic Disk, Diagnostic Techniques, Ophthalmological, Fundus Oculi, Glaucoma/diagnosis, Humans
10.
Sensors (Basel) ; 22(2)2022 Jan 07.
Article in English | MEDLINE | ID: mdl-35062409

ABSTRACT

The number of internet-connected devices, and with it the demand for high data rates, has been increasing exponentially. Cognitive radio (CR) is an auspicious technology for addressing the resource shortage issue in wireless IoT networks. Resource optimization is considered a non-convex, NP-complete problem within CR-based Internet of Things (IoT) networks (CR-IoT). Moreover, the combined optimization of conflicting objectives is a challenging issue in CR-IoT networks. In this paper, energy efficiency (EE) and spectral efficiency (SE) are considered as conflicting optimization objectives. This work proposes a hybrid tabu search-based stimulated algorithm (HTSA) in order to achieve Pareto optimality between EE and SE. In addition, a fuzzy-based decision approach is employed to achieve better Pareto optimality. The performance of the proposed HTSA approach is analyzed using different resource allocation parameters and validated through simulation results.
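
For intuition only, the sketch below runs a plain tabu search over discrete per-link power levels against a weighted EE/SE scalarization; the objective, channel gains, and constants are toy assumptions and do not reproduce the paper's HTSA or its system model.

```python
import numpy as np

rng = np.random.default_rng(1)
levels = np.linspace(0.1, 1.0, 10)          # candidate transmit-power levels (W), assumed
gains = rng.uniform(0.5, 2.0, size=8)       # illustrative channel gains for 8 IoT links
noise, weight = 0.1, 0.5                    # noise power; EE/SE trade-off weight

def objective(p):
    se = np.sum(np.log2(1 + gains * p / noise))   # spectral efficiency (bps/Hz)
    ee = se / (np.sum(p) + 1e-9)                  # energy efficiency (scaled)
    return weight * se + (1 - weight) * ee        # scalarised trade-off

def neighbours(p):
    """All solutions differing from p in exactly one link's power level."""
    out = []
    for i in range(len(p)):
        for lv in levels:
            if lv != p[i]:
                q = p.copy(); q[i] = lv
                out.append(q)
    return out

def tabu_search(iters=200, tenure=20):
    current = rng.choice(levels, size=len(gains))
    best, tabu = current.copy(), []
    for _ in range(iters):
        cands = [q for q in neighbours(current)
                 if not any(np.array_equal(q, t) for t in tabu)]
        current = max(cands, key=objective)       # best admissible neighbour
        tabu.append(current.copy())
        tabu = tabu[-tenure:]                     # fixed-length tabu list
        if objective(current) > objective(best):
            best = current.copy()
    return best, objective(best)

print(tabu_search())
```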

11.
Sensors (Basel) ; 22(3)2022 Jan 21.
Article in English | MEDLINE | ID: mdl-35161553

ABSTRACT

Due to the variation in skin textures and injuries, the detection and classification of skin cancer is a difficult task. Manually detecting skin lesions from dermoscopy images is a difficult and time-consuming process. Recent advancements in the domains of the Internet of Things (IoT) and artificial intelligence for medical applications have demonstrated improvements in both accuracy and computational time. In this paper, a new method for multiclass skin lesion classification using best deep learning feature fusion and an extreme learning machine is proposed. The proposed method includes five primary steps: image acquisition and contrast enhancement; deep learning feature extraction using transfer learning; best feature selection using a hybrid whale optimization and entropy-mutual information (EMI) approach; fusion of the selected features using a modified canonical correlation based approach; and, finally, extreme learning machine based classification. The feature selection step improves the system's computational efficiency and accuracy. The experiments are carried out on two publicly available datasets, HAM10000 and ISIC2018. The achieved accuracies on these datasets are 93.40% and 94.36%, respectively. Compared with state-of-the-art (SOTA) techniques, the proposed method's accuracy is improved. Furthermore, the proposed method is computationally efficient.


Subjects
Skin Diseases, Skin Neoplasms, Algorithms, Artificial Intelligence, Entropy, Humans, Skin Neoplasms/diagnostic imaging
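
The final extreme-learning-machine classification step can be sketched in a few lines of NumPy: a random hidden layer with a closed-form, ridge-regularized output layer; the hidden size, regularization, and the fused-feature variables are illustrative assumptions.

```python
import numpy as np

def elm_train(X, Y, n_hidden=500, reg=1e-2, seed=0):
    """Extreme learning machine: random hidden layer, closed-form output weights."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                                # random hidden activations
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return (np.tanh(X @ W + b) @ beta).argmax(axis=1)

# fused_feats / labels are assumed to be the fused deep features and lesion classes
# W, b, beta = elm_train(fused_feats, np.eye(7)[labels])   # 7 HAM10000 classes
# preds = elm_predict(fused_feats, W, b, beta)
```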
12.
Sensors (Basel) ; 22(3)2022 Jan 31.
Article in English | MEDLINE | ID: mdl-35161843

ABSTRACT

Tracking moving objects is one of the most promising yet most challenging research areas in computer vision, pattern recognition, and image processing. The challenges associated with object tracking range from problems pertaining to camera axis orientations to object occlusion. In addition, variations in remote scene environments add to the difficulties of object tracking. All of these challenges make the procedure computationally complex and time-consuming. In this paper, a stochastic gradient-based optimization technique is used in conjunction with particle filters for object tracking. First, the object to be tracked is detected using the Maximum Average Correlation Height (MACH) filter; the object of interest is detected based on the presence of a correlation peak and an average similarity measure. The results of object detection are fed to the tracking routine. The gradient descent technique is employed for object tracking and is used to optimize the particle filters. It allows the particles to converge quickly, so less time is needed to track the object. The results of the proposed algorithm are compared with similar state-of-the-art tracking algorithms on five datasets that include both artificial moving objects and humans, showing that the gradient-based tracking algorithm provides better results in terms of both accuracy and speed.


Subjects
Algorithms, Image Processing, Computer-Assisted, Humans, Perception
13.
Sensors (Basel) ; 22(6)2022 Mar 09.
Article in English | MEDLINE | ID: mdl-35336292

ABSTRACT

Industry 4.0 smart manufacturing systems are equipped with sensors, smart machines, and intelligent robots. The automated in-plant transportation of manufacturing parts through throwing and catching robots is an attempt to accelerate the transportation process and increase productivity through the optimized utilization of in-plant facilities. Such an approach requires the catching robot to intelligently track and predict the final 3D catching position of thrown objects, while observing their initial flight trajectory in real time, in order to grasp them accurately. Due to the non-deterministic nature of the flight of such mechanically thrown objects, accurate prediction of their complete trajectory is only possible if the initial trajectory is observed accurately and the remaining trajectory is predicted intelligently. Thrown objects in industry can be of any shape, but detecting and accurately predicting the interception position of an object of arbitrary shape is an extremely challenging problem that needs to be solved step by step. In this work, we consider only spherical objects, as their 3D central position can be determined easily. Our work comprises the development of a 3D simulated environment that enables us to throw an object of any mass, diameter, or surface air-friction properties in a controlled internal logistics environment. It also enables us to throw an object with any initial velocity and to observe its trajectory by placing a simulated pinhole camera anywhere within the 3D vicinity of the internal logistics area. We also employ multi-view geometry among the simulated cameras to observe trajectories more accurately. This provides ample opportunity for precise experimentation and for creating a large dataset of thrown-object trajectories to train an encoder-decoder bidirectional LSTM deep neural network. The trained network gives the best results for accurately predicting the trajectory of thrown objects in real time.


Subjects
Robotics, Neural Networks, Computer
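
An encoder-decoder bidirectional LSTM of the kind mentioned above could be sketched in Keras as follows; the numbers of observed and predicted trajectory points, the layer widths, and the random stand-in data are assumptions, not the simulated dataset.

```python
import numpy as np
import tensorflow as tf

N_OBS, N_FUT = 20, 30          # observed vs. predicted trajectory points (assumed)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_OBS, 3)),                            # observed (x, y, z) points
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),     # encoder
    tf.keras.layers.RepeatVector(N_FUT),                         # bridge to decoder length
    tf.keras.layers.LSTM(64, return_sequences=True),             # decoder
    tf.keras.layers.TimeDistributed(tf.keras.layers.Dense(3)),   # predicted (x, y, z)
])
model.compile(optimizer="adam", loss="mse")

# toy stand-in for the simulated multi-camera trajectory dataset
X = np.random.rand(256, N_OBS, 3).astype("float32")
Y = np.random.rand(256, N_FUT, 3).astype("float32")
model.fit(X, Y, epochs=2, verbose=0)
predicted_path = model.predict(X[:1], verbose=0)   # remaining flight trajectory
```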
14.
Sensors (Basel) ; 22(4)2022 Feb 18.
Article in English | MEDLINE | ID: mdl-35214494

ABSTRACT

This paper focuses on robustness and sensitivity analysis for sensor fault diagnosis of a voltage source converter based microgrid model. It uses robust control parameters, namely the minimum sensitivity parameter (H-), the maximum robustness parameter (H∞), and a compromise between the two (H-/H∞), incorporated into sliding mode observer theory through a game-theoretic saddle point estimation obtained by convex optimization of constrained linear matrix inequalities (LMIs). In this approach, the robust control parameters are embedded in the Hamilton-Jacobi-Isaacs equation (HJIE) and are also used to determine its inequality version, expressed in terms of the Lyapunov function, the faults/disturbances, and the augmented state/output estimation error. The stability analysis is established through the negative definiteness of the same inequality version of the HJIE, which additionally yields LMIs; these are optimized using iterative convex optimization algorithms to give optimal sliding mode observer gains with enhanced robustness to maximal preset values of disturbances and sensitivity to minimal preset values of faults. The enhanced sliding mode observer is used to estimate states, faults, and disturbances. The optimality of the observer gains, balancing sensitivity to minimal faults against robustness to maximal disturbances, is a game-theoretic saddle point estimation achieved through convex optimization of LMIs. The paper includes results for state estimation errors, fault estimation/reconstruction, fault estimation errors, and fault-tolerant-control performance for current and potential transformer faults. The considered faults and disturbances in the current and potential transformers are sinusoidal in nature and are composed of simultaneous magnitude, phase, and harmonic components.

15.
Sensors (Basel) ; 22(3)2022 Jan 21.
Article in English | MEDLINE | ID: mdl-35161552

ABSTRACT

After lung cancer, breast cancer is the second leading cause of death in women. If breast cancer is detected early, mortality rates in women can be reduced. Because manual breast cancer diagnosis takes a long time, an automated system is required for early cancer detection. This paper proposes a new framework for breast cancer classification from ultrasound images that employs deep learning and the fusion of the best selected features. The proposed framework is divided into five major steps: (i) data augmentation is performed to increase the size of the original dataset for better learning of Convolutional Neural Network (CNN) models; (ii) a pre-trained DarkNet-53 model is considered and the output layer is modified based on the augmented dataset classes; (iii) the modified model is trained using transfer learning and features are extracted from the global average pooling layer; (iv) the best features are selected using two improved optimization algorithms known as reformed differential evaluation (RDE) and reformed gray wolf (RGW); and (v) the best selected features are fused using a new probability-based serial approach and classified using machine learning algorithms. The experiment was conducted on an augmented Breast Ultrasound Images (BUSI) dataset, and the best accuracy was 99.1%. When compared with recent techniques, the proposed framework outperforms them.


Subjects
Breast Neoplasms, Deep Learning, Breast, Breast Neoplasms/diagnostic imaging, Female, Humans, Probability, Ultrasonography, Mammary
16.
Comput Mater Contin ; 70(2): 3081-3097, 2022.
Article in English | MEDLINE | ID: mdl-35615529

ABSTRACT

Aim: Alcoholism is a disease in which a patient becomes dependent on or addicted to alcohol. This paper aims to design a novel artificial intelligence model that can recognize alcoholism more accurately. Methods: We propose the VGG-Inspired Stochastic Pooling Neural Network (VISPNN) model based on three components: (i) a VGG-inspired mainstay network, (ii) the stochastic pooling technique, which aims to outperform traditional max pooling and average pooling, and (iii) an improved 20-way data augmentation (Gaussian noise, salt-and-pepper noise, speckle noise, Poisson noise, horizontal shear, vertical shear, rotation, gamma correction, random translation, and scaling on both the raw image and its horizontally mirrored image). In addition, two networks (Net-I and Net-II) are proposed for ablation studies. Net-I is based on VISPNN, with stochastic pooling replaced by ordinary max pooling. Net-II removes the 20-way data augmentation. Results: The results of ten runs of 10-fold cross-validation show that our VISPNN model attains a sensitivity of 97.98±1.32, a specificity of 97.80±1.35, a precision of 97.78±1.35, an accuracy of 97.89±1.11, an F1 score of 97.87±1.12, an MCC of 95.79±2.22, an FMI of 97.88±1.12, and an AUC of 0.9849. Conclusion: The performance of our VISPNN model is better than that of the two internal networks (Net-I and Net-II) and ten state-of-the-art alcoholism recognition methods.

17.
Sensors (Basel) ; 21(24)2021 Dec 08.
Article in English | MEDLINE | ID: mdl-34960274

ABSTRACT

The recent developments in the area of IoT technologies are likely to be implemented extensively in the next decade. With a large increase in crime rates, handling officers are responsible for dealing with a broad range of cyber and Internet issues during investigation. IoT technologies are helpful in the identification of suspects, but few technologies are available that use IoT and deep learning together for face sketch synthesis. Convolutional neural networks (CNNs) and other constructs of deep learning have become major tools in recent approaches. A new neural network architecture is proposed in this work, called Spiral-Net, which is a modified version of U-Net used to perform face sketch synthesis (this stage is referred to as the compiler network C here). Spiral-Net works in combination with a pre-trained VGG-19 network, called the feature extractor F. It first identifies the top n matches from viewed sketches for a given photo. F is used again to formulate a feature map based on the cosine distance between a candidate sketch produced by C and the top n matches. A customized CNN configuration (called the discriminator D) then computes loss functions based on differences between the candidate sketch and the feature map. The values of these loss functions alternately update C and F. The ensemble of these networks is trained and tested on selected datasets, including CUFS, CUFSF, and part of the IIT photo-sketch dataset. The results of this modified U-Net are evaluated using the legacy NLDA (1998) face recognition scheme and its newer counterpart, OpenBR (2013), and demonstrate an improvement of 5% over the current state of the art in the relevant domain.


Subjects
Artificial Intelligence, Deep Learning, Algorithms, Face, Neural Networks, Computer
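
A rough sketch of the "top n matches by cosine distance" step using torchvision's pre-trained VGG-19 as the feature extractor F; the preprocessing, the compiler network C, and the discriminator D are omitted, and the photo/gallery tensors are placeholders.

```python
import torch, torchvision
import torch.nn.functional as F

# Pre-trained VGG-19 used as feature extractor; classification head removed
vgg = torchvision.models.vgg19(weights="DEFAULT")
vgg.classifier = torch.nn.Identity()
vgg.eval()

@torch.no_grad()
def embed(batch):
    """batch: preprocessed (N, 3, 224, 224) images -> unit-norm feature vectors."""
    return F.normalize(vgg(batch), dim=1)

@torch.no_grad()
def top_n_matches(photo, sketch_gallery, n=5):
    """Rank viewed sketches by cosine similarity to the query photo."""
    q = embed(photo.unsqueeze(0))           # (1, D)
    g = embed(sketch_gallery)               # (M, D)
    sims = (g @ q.T).squeeze(1)             # cosine similarity per sketch
    return sims.topk(n).indices
```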
18.
Sensors (Basel) ; 21(21)2021 Nov 02.
Article in English | MEDLINE | ID: mdl-34770595

ABSTRACT

In healthcare, a multitude of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, and computed tomography (CT), which can be analyzed by artificial intelligence methods for the early diagnosis of diseases. Recently, the outbreak of the COVID-19 disease caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients. Various methods have been proposed for COVID-19 case classification. Here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach, parallel positive correlation. Optimal features are selected using the entropy-controlled firefly optimization method. The selected features are classified using machine learning classifiers such as the multiclass support vector machine (MC-SVM). Experiments were carried out using the Radiopaedia database and achieved an accuracy of 98%. Moreover, a detailed analysis shows the improved performance of the proposed scheme.


Subjects
COVID-19, Deep Learning, Animals, Artificial Intelligence, Entropy, Fireflies, Humans, SARS-CoV-2, Tomography, X-Ray Computed
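
The contrast-enhancement front end could be approximated as below with scikit-image and SciPy; how the top-hat and Wiener outputs are actually combined in the paper is not specified here, so the additive combination and the file name are assumptions.

```python
import numpy as np
from scipy.signal import wiener
from skimage import io, img_as_float
from skimage.morphology import white_tophat, disk

def enhance_ct_slice(path):
    """Contrast enhancement combining a morphological top-hat with Wiener filtering."""
    img = img_as_float(io.imread(path, as_gray=True))
    tophat = white_tophat(img, disk(15))      # bright small-scale structures
    boosted = np.clip(img + tophat, 0, 1)     # add top-hat response to lift contrast (assumed)
    return wiener(boosted, mysize=5)          # Wiener filter suppresses noise

# enhanced = enhance_ct_slice("covid_case_001.png")   # hypothetical file name
```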
19.
Sensors (Basel) ; 21(1)2021 Jan 02.
Article in English | MEDLINE | ID: mdl-33401652

ABSTRACT

Hypertension is an antecedent to cardiac disorders. According to the World Health Organization (WHO), the number of people affected by hypertension will reach around 1.56 billion by 2025. Early detection of hypertension is imperative to prevent the complications caused by cardiac abnormalities. Hypertension usually presents no apparent detectable symptoms; hence, the control rate is significantly low. Computer-aided diagnosis based on machine learning and signal analysis has recently been applied to identify biomarkers for the accurate prediction of hypertension. This research proposes a new expert hypertension detection system (EHDS) based on pulse plethysmograph (PuPG) signals for the categorization of normal and hypertensive subjects. The PuPG signal dataset, containing rich information on cardiac activity, was acquired from healthy and hypertensive subjects. The raw PuPG signals were preprocessed through empirical mode decomposition (EMD), which decomposes a signal into its constituent components. A combination of multi-domain features was extracted from the preprocessed PuPG signal. The features exhibiting highly discriminative characteristics were selected and reduced through a proposed hybrid feature selection and reduction (HFSR) scheme. The selected features were subjected to various classification methods in a comparative fashion, in which the best performance of 99.4% accuracy, 99.6% sensitivity, and 99.2% specificity was achieved through weighted k-nearest neighbor (KNN-W). The performance of the proposed EHDS was thoroughly assessed by tenfold cross-validation. The proposed EHDS achieved better detection performance in comparison to other electrocardiogram (ECG) and photoplethysmograph (PPG) based methods.


Subjects
Hypertension, Adult, Aged, Algorithms, Diagnosis, Computer-Assisted, Electrocardiography, Female, Heart Rate, Humans, Hypertension/diagnosis, Machine Learning, Male, Middle Aged
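
The weighted k-nearest-neighbor (KNN-W) classification with tenfold cross-validation maps directly onto scikit-learn, as in the sketch below; the feature matrix, neighbor count, and scaling step are assumptions rather than the EHDS settings.

```python
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: multi-domain features extracted from preprocessed PuPG signals (assumed)
# y: 0 = normal, 1 = hypertension
knn_w = make_pipeline(
    StandardScaler(),
    KNeighborsClassifier(n_neighbors=10, weights="distance"),  # distance-weighted KNN
)
# scores = cross_validate(knn_w, X, y, cv=10,
#                         scoring=["accuracy", "recall", "precision"])
# print({k: v.mean() for k, v in scores.items() if k.startswith("test_")})
```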
20.
Sensors (Basel) ; 21(23)2021 Nov 28.
Article in English | MEDLINE | ID: mdl-34883944

ABSTRACT

Human action recognition (HAR) has gained significant attention recently, as it can be adopted for smart surveillance systems in multimedia. However, HAR is a challenging task because of the variety of human actions in daily life. Various solutions based on computer vision (CV) have been proposed in the literature, but they did not prove successful due to the large video sequences that need to be processed in surveillance systems. The problem is exacerbated in the presence of multi-view cameras. Recently, the development of deep learning (DL)-based systems has shown significant success for HAR, even for multi-view camera systems. In this research work, a DL-based design is proposed for HAR. The proposed design consists of multiple steps, including feature mapping, feature fusion, and feature selection. For the initial feature mapping step, two pre-trained models are considered, DenseNet201 and InceptionV3. The extracted deep features are then fused using the Serial based Extended (SbE) approach, and the best features are selected using Kurtosis-controlled Weighted KNN. The selected features are classified using several supervised learning algorithms. To show the efficacy of the proposed design, we used several datasets, including KTH, IXMAS, WVU, and Hollywood. Experimental results showed that the proposed design achieved accuracies of 99.3%, 97.4%, 99.8%, and 99.9%, respectively, on these datasets. Furthermore, the feature selection step performed better in terms of computational time compared with the state of the art.


Subjects
Deep Learning, Algorithms, Human Activities, Humans, Pattern Recognition, Automated