Results 1 - 20 of 108
1.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38856168

ABSTRACT

Nucleic acid-binding proteins (NABPs), including DNA-binding proteins (DBPs) and RNA-binding proteins (RBPs), play important roles in essential biological processes. To facilitate functional annotation and accurate prediction of different types of NABPs, many machine learning-based computational approaches have been developed. However, the datasets used for training and testing, as well as the prediction scopes of these studies, have limited their applications. In this paper, we developed new strategies to overcome these limitations by generating more accurate and robust datasets and by developing deep learning-based methods, including both hierarchical and multi-class approaches, to predict the type of NABP for any given protein. The deep learning models employ two convolutional neural network layers and one long short-term memory layer. Our approaches outperform existing DBP and RBP predictors with a balanced prediction between DBPs and RBPs, and are more practically useful for identifying novel NABPs. The multi-class approach greatly improves the prediction accuracy for DBPs and RBPs, especially for DBPs, with an improvement of ~12%. Moreover, we explored the prediction accuracy for single-stranded DNA-binding proteins and their effect on the overall accuracy of NABP prediction.
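Sequence-based predictors like the one described above typically start by encoding each protein as a fixed-size numeric matrix before it reaches the convolutional layers. A minimal sketch of that step (the encoding scheme and `max_len` are illustrative assumptions, not the authors' exact preprocessing):

```python
import numpy as np

# One-hot encode a protein sequence into an L x 20 matrix, the usual
# input shape for a Conv1D + LSTM classifier of the kind described.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot_encode(seq, max_len=1000):
    """Encode a protein sequence; pad/truncate to max_len rows."""
    mat = np.zeros((max_len, len(AMINO_ACIDS)), dtype=np.float32)
    for pos, aa in enumerate(seq[:max_len]):
        if aa in AA_INDEX:          # unknown residues stay all-zero
            mat[pos, AA_INDEX[aa]] = 1.0
    return mat

x = one_hot_encode("MKV", max_len=5)
```

The resulting matrix can be fed directly to a 1D convolution over sequence positions, with each of the 20 columns acting as an input channel.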


Subject(s)
Computational Biology , DNA-Binding Proteins , Deep Learning , RNA-Binding Proteins , RNA-Binding Proteins/metabolism , DNA-Binding Proteins/metabolism , Computational Biology/methods , Neural Networks, Computer , Humans
2.
Curr Issues Mol Biol ; 46(2): 1360-1373, 2024 Feb 04.
Article in English | MEDLINE | ID: mdl-38392205

ABSTRACT

RNA-binding proteins (RBPs) play an important role in regulating biological processes such as gene regulation. Understanding their behavior, for example their binding sites, can be helpful in understanding RBP-related diseases. Studies have focused on predicting RNA binding by means of machine learning algorithms, including deep convolutional neural network models. An integral part of deep learning is achieving optimal hyperparameter tuning and minimizing a loss function using optimization algorithms. In this paper, we investigate the role of optimization in the RBP classification problem using the CLIP-Seq 21 dataset. Three optimization methods are employed on the RNA-protein binding CNN prediction model: grid search, random search, and Bayesian optimization. The empirical results show an AUC of 94.42%, 93.78%, 93.23%, and 92.68% on the ELAVL1C, ELAVL1B, ELAVL1A, and HNRNPC datasets, respectively, and a mean AUC of 85.30% across 24 datasets. These findings provide evidence of the role of optimizers in improving the performance of RNA-protein binding prediction.
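Of the three methods compared above, random search is the simplest to sketch. In the following stand-in, `mock_auc` is a hypothetical surrogate for training the binding-prediction CNN and returning its validation AUC; the search space values are likewise illustrative:

```python
import random

def mock_auc(params):
    # Hypothetical smooth objective peaking at lr=0.01, dropout=0.2;
    # in practice this would train the CNN and return its AUC.
    return 1.0 - abs(params["lr"] - 0.01) * 10 - abs(params["dropout"] - 0.2)

def random_search(objective, space, n_trials=50, seed=0):
    """Sample hyperparameter combinations at random, keep the best."""
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {k: rng.choice(v) for k, v in space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

space = {"lr": [0.1, 0.01, 0.001], "dropout": [0.0, 0.2, 0.5]}
best, score = random_search(mock_auc, space)
```

Grid search would enumerate the full Cartesian product of `space` instead of sampling, and a Bayesian optimizer would choose each new trial based on the scores of previous ones.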

3.
Network ; : 1-37, 2024 May 28.
Article in English | MEDLINE | ID: mdl-38804548

ABSTRACT

Automated diagnosis of cancer from skin lesion data has been the focus of numerous studies. Even so, these images can be challenging to interpret because of features such as colour and illumination changes and variation in the sizes and shapes of lesions. To tackle these problems, the proposed model develops an ensemble of deep learning techniques for skin cancer diagnosis. Initially, skin imaging data are collected and preprocessed using resizing and anisotropic diffusion to enhance image quality. Preprocessed images are fed into a Fuzzy C-Means clustering technique to segment the diseased regions. A stacking-based ensemble deep learning approach is used for classification, with an LSTM acting as the meta-classifier and a Deep Neural Network (DNN) and a Convolutional Neural Network (CNN) providing its inputs. The segmented images are used as input to the CNN, while the local binary pattern (LBP) technique is employed to extract DNN features from the image segments. The outputs of these two classifiers are fed into the LSTM meta-classifier, which classifies the input data and predicts skin cancer. The proposed approach achieved an accuracy of 97%, indicating that the developed model predicts skin cancer accurately.
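The stacking idea above (base models feed a meta-classifier) can be shown with a minimal numpy sketch. All data here are synthetic, and the meta-model is a simple least-squares fit rather than the paper's LSTM, purely to illustrate the data flow:

```python
import numpy as np

# Two base models emit class probabilities; a linear meta-model is fit
# on those stacked outputs to produce the final prediction.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200).astype(float)          # ground truth
p_cnn = np.clip(y + rng.normal(0, 0.3, 200), 0, 1)      # base model 1
p_dnn = np.clip(y + rng.normal(0, 0.4, 200), 0, 1)      # base model 2

X_meta = np.column_stack([p_cnn, p_dnn, np.ones_like(y)])  # stacked features
w, *_ = np.linalg.lstsq(X_meta, y, rcond=None)             # fit meta-model
y_hat = (X_meta @ w) > 0.5                                 # final decision

meta_acc = (y_hat == y).mean()
```

In a full stacking setup the base-model predictions used for meta-training would come from held-out folds to avoid leakage.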

4.
Network ; : 1-36, 2024 Jun 10.
Article in English | MEDLINE | ID: mdl-38855971

ABSTRACT

Predicting the stock market is a significant task: successful prediction of stock rates helps in making correct decisions. Stock market prediction is challenging because the data are noisy, chaotic, and non-stationary. In this research, a support vector machine (SVM) is devised to perform effective stock market prediction. At first, the input time series data are considered and preprocessed using a standard scaler. Then, time-intrinsic features are extracted, and suitable features are selected in the feature selection stage by eliminating other features using recursive feature elimination. Afterwards, Long Short-Term Memory (LSTM)-based prediction is performed, wherein the LSTM is trained using Aquila circle-inspired optimization (ACIO), newly introduced by merging the Aquila optimizer (AO) with the circle-inspired optimization algorithm (CIOA). In parallel, delay-based matrix formation is conducted on the input time series data, and convolutional neural network (CNN)-based prediction is performed, with the CNN tuned by the same ACIO. Finally, stock market prediction is executed using the SVM, which fuses the predicted outputs of the LSTM-based and CNN-based predictions. The SVM attains a minimum mean absolute percentage error (MAPE) of about 0.378 and a normalized root-mean-square error (RMSE) of about 0.294.
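The two reported error metrics, MAPE and normalized RMSE, are straightforward to compute. A sketch on hypothetical actual/predicted price series (the range-normalization convention for RMSE is an assumption; papers vary):

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    return np.mean(np.abs((actual - predicted) / actual)) * 100.0

def normalized_rmse(actual, predicted):
    """RMSE divided by the range of the actual series."""
    actual, predicted = np.asarray(actual, float), np.asarray(predicted, float)
    rmse = np.sqrt(np.mean((actual - predicted) ** 2))
    return rmse / (actual.max() - actual.min())

actual = [100.0, 102.0, 101.0, 105.0]
predicted = [101.0, 101.0, 102.0, 104.0]
err_mape = mape(actual, predicted)
err_nrmse = normalized_rmse(actual, predicted)
```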

5.
Environ Res ; 258: 119204, 2024 Oct 01.
Article in English | MEDLINE | ID: mdl-38802033

ABSTRACT

This study synthesized zinc oxide nanoparticles (ZnO NPs) using a novel green approach, with Sida acuta leaf extract as a capping and reducing agent to initiate nucleation and structure formation. The innovation of this study lies in demonstrating the originality of utilizing zinc oxide nanoparticles for antibacterial action, antioxidant potential, and catalytic degradation of Congo red dye. This unique approach harnesses eco-friendly methods to initiate nucleation and structure formation. The synthesized nanoparticles' structure and conformation were characterized using UV-vis (λmax = 280 nm), X-ray diffraction (XRD), atomic force microscopy (AFM), SEM, HR-TEM, and FTIR. The antibacterial activity of the NPs was tested against Pseudomonas sp., Klebsiella sp., Staphylococcus aureus, and E. coli, demonstrating efficacy. The nanoparticles exhibited unique properties, with a crystallite size of 20 nm (XRD), a surface roughness of 2.5 nm (AFM), and a specific surface area of 60 m2/g (SEM). A Convolutional Neural Network (CNN) was effectively employed to accurately classify and analyze microscopic images of green-synthesized zinc oxide nanoparticles. This research revealed their exceptional antioxidant potential, with an average DPPH scavenging rate of 80% at a concentration of 0.05 mg/mL. Additionally, zeta potential measurements indicated a stable net negative surface charge of approximately -12.2 mV. These quantitative findings highlight the promising applications of green-synthesized ZnO NPs in healthcare, materials science, and environmental remediation. The ZnO nanoparticles exhibited catalytic capabilities for dye degradation, and the degradation rate was determined using UV spectroscopy. Key findings of the study encompass the green synthesis of versatile zinc oxide nanoparticles, demonstrating potent antibacterial action, antioxidant capabilities, and catalytic dye degradation potential.
These nanoparticles offer multifaceted solutions with minimal environmental impact, addressing challenges in various fields, from healthcare to environmental remediation.


Subject(s)
Anti-Bacterial Agents , Antioxidants , Green Chemistry Technology , Plant Extracts , Plant Leaves , Zinc Oxide , Zinc Oxide/chemistry , Zinc Oxide/pharmacology , Anti-Bacterial Agents/pharmacology , Anti-Bacterial Agents/chemistry , Plant Extracts/chemistry , Plant Extracts/pharmacology , Antioxidants/chemistry , Antioxidants/pharmacology , Antioxidants/chemical synthesis , Plant Leaves/chemistry , Green Chemistry Technology/methods , Metal Nanoparticles/chemistry , Neural Networks, Computer , Catalysis , Congo Red/chemistry , Coloring Agents/chemistry
6.
Plant Dis ; 2024 Aug 19.
Article in English | MEDLINE | ID: mdl-39160128

ABSTRACT

Visual detection of stromata (brown-black, elevated fungal fruiting bodies) is a primary method for quantifying tar spot early in the season, as these structures are definitive signs of the disease and essential for effective disease monitoring and management. Here, we present Stromata Contour Detection Algorithm version 2 (SCDA v2), which addresses the limitations of the previously developed SCDA version 1 (SCDA v1) without the need for an empirical search of the optimal Decision Making Input Parameters (DMIPs), while achieving higher and more consistent accuracy in tar spot stromata detection. SCDA v2 operates in two components: (i) SCDA v1 producing tar-spot-like region proposals for a given input corn leaf Red-Green-Blue (RGB) image, and (ii) a pre-trained Convolutional Neural Network (CNN) classifier identifying true tar spot stromata from the region proposals. To demonstrate the enhanced performance of SCDA v2, we utilized datasets of RGB images of corn leaves from field (low, middle, and upper canopies) and glasshouse conditions under variable environments, exhibiting different tar spot severities at various corn developmental stages. Various accuracy analyses (F1-score, linear regression, and Lin's concordance correlation) showed that SCDA v2 had greater agreement with the reference data (human visual annotation) than SCDA v1. SCDA v2 achieved mean Dice values of 73.7% (overall accuracy), compared to 30.8% for SCDA v1. The enhanced F1-score primarily resulted from eliminating overestimation cases using the CNN classifier. Our findings indicate the promising potential of SCDA v2 for glasshouse and field-scale applications, including tar spot phenotyping and surveillance projects.
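The two-component design above is a classic propose-then-verify pipeline. A schematic sketch with assumed interfaces (the region attributes, threshold, and both stand-in functions are hypothetical, not the released SCDA code):

```python
# Stage 1 over-generates candidate regions; stage 2 keeps only those the
# classifier accepts, which is what removed the overestimation cases.
def propose_regions(image):
    # Stand-in for SCDA v1: candidate regions with a mock darkness score.
    return [{"bbox": (10, 10, 4, 4), "dark": 0.9},
            {"bbox": (30, 5, 6, 6), "dark": 0.4},
            {"bbox": (50, 20, 5, 5), "dark": 0.8}]

def cnn_is_stroma(region, threshold=0.5):
    # Stand-in for the pre-trained CNN classifier's verdict.
    return region["dark"] >= threshold

def scda_v2(image):
    """Keep only region proposals the classifier accepts as stromata."""
    return [r for r in propose_regions(image) if cnn_is_stroma(r)]

detections = scda_v2(image=None)
```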

7.
Sensors (Basel) ; 24(6)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38544195

ABSTRACT

Accurate paranasal sinus segmentation is essential for reducing surgical complications through surgical guidance systems. This study introduces a multiclass Convolutional Neural Network (CNN) segmentation model by comparing four 3D U-Net variations-normal, residual, dense, and residual-dense. Data normalization and training were conducted on a 40-patient test set (20 normal, 20 abnormal) using 5-fold cross-validation. The normal 3D U-Net demonstrated superior performance with an F1 score of 84.29% on the normal test set and 79.32% on the abnormal set, exhibiting higher true positive rates for the sphenoid and maxillary sinus in both sets. Despite effective segmentation in clear sinuses, limitations were observed in mucosal inflammation. Nevertheless, the algorithm's enhanced segmentation of abnormal sinuses suggests potential clinical applications, with ongoing refinements expected for broader utility.


Subject(s)
Deep Learning , Sinusitis , Humans , Sinusitis/diagnostic imaging , Neural Networks, Computer , Maxillary Sinus , Tomography, X-Ray Computed/methods , Image Processing, Computer-Assisted/methods
8.
Sensors (Basel) ; 24(6)2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38544199

ABSTRACT

Surface crack detection is an integral part of infrastructure health surveys. This work presents a transformative shift towards rapid and reliable data collection capabilities, dramatically reducing the time spent on inspecting infrastructures. Two unmanned aerial vehicles (UAVs) were deployed, enabling the capturing of images simultaneously for efficient coverage of the structure. The suggested drone hardware is especially suitable for the inspection of infrastructure with confined spaces that UAVs with a broader footprint are incapable of accessing due to a lack of safe access or positioning data. The collected image data were analyzed using a binary classification convolutional neural network (CNN), effectively filtering out images containing cracks. A comparison of state-of-the-art CNN architectures against a novel CNN layout "CrackClassCNN" was investigated to obtain the optimal layout for classification. A Segment Anything Model (SAM) was employed to segment defect areas, and its performance was benchmarked against manually annotated images. The suggested "CrackClassCNN" achieved an accuracy rate of 95.02%, and the SAM segmentation process yielded a mean Intersection over Union (IoU) score of 0.778 and an F1 score of 0.735. It was concluded that the selected UAV platform, the communication network, and the suggested processing techniques were highly effective in surface crack detection.
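The abstract reports segmentation quality as mean IoU and F1 against manually annotated masks. A minimal sketch of both metrics on toy binary masks:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def f1(pred, gt):
    """F1 (Dice) score between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

pred = np.array([[1, 1, 0], [0, 1, 0]])   # predicted crack mask
gt = np.array([[1, 0, 0], [0, 1, 1]])     # annotated mask
crack_iou = iou(pred, gt)
crack_f1 = f1(pred, gt)
```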

9.
Sensors (Basel) ; 24(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001155

ABSTRACT

Electrocardiography (ECG) has emerged as a ubiquitous diagnostic tool for the identification and characterization of diverse cardiovascular pathologies. Wearable health monitoring devices, equipped with on-device biomedical artificial intelligence (AI) processors, have revolutionized the acquisition, analysis, and interpretation of ECG data. However, these systems necessitate AI processors that exhibit flexible configuration, facilitate portability, and demonstrate optimal performance in terms of power consumption and latency for the realization of various functionalities. To address these challenges, this study proposes an instruction-driven convolutional neural network (CNN) processor. This processor incorporates three key features: (1) an instruction-driven CNN architecture to support versatile ECG-based applications; (2) a processing element (PE) array design that simultaneously considers parallelism and data reuse; and (3) an activation unit based on the CORDIC algorithm, supporting both Tanh and Sigmoid computations. The design has been implemented using 110 nm CMOS process technology, occupying a die area of 1.35 mm2 with 12.94 µW power consumption. It has been demonstrated with two typical ECG AI applications: two-class (i.e., normal/abnormal) classification and five-class classification. The proposed 1-D CNN algorithm achieves 97.95% accuracy for the two-class classification and 97.9% for the five-class classification.
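The CORDIC-based activation unit mentioned in feature (3) evaluates Tanh with only shifts, adds, and a small angle table. A floating-point software sketch of hyperbolic CORDIC in rotation mode (illustrative only; the chip uses fixed-point hardware, and the iteration count here is an assumption):

```python
import math

def cordic_tanh(z, n_iters=24):
    """tanh via hyperbolic CORDIC: x -> K*cosh, y -> K*sinh; K cancels."""
    assert abs(z) < 1.1, "basic hyperbolic CORDIC converges for |z| < ~1.118"
    x, y = 1.0, 0.0
    i, next_repeat, it = 1, 4, 0   # iterations 4, 13, 40, ... must repeat
    while it < n_iters:
        reps = 2 if i == next_repeat else 1
        for _ in range(reps):
            d = 1.0 if z >= 0 else -1.0
            t = 2.0 ** -i          # a shift in fixed-point hardware
            x, y, z = x + d * y * t, y + d * x * t, z - d * math.atanh(t)
            it += 1
        if i == next_repeat:
            next_repeat = 3 * next_repeat + 1
        i += 1
    return y / x                   # scale factor K cancels in the ratio

def cordic_sigmoid(z):
    # sigmoid(z) = 0.5 * (1 + tanh(z / 2)), so one unit serves both.
    return 0.5 * (1.0 + cordic_tanh(z / 2.0))
```

The identity in `cordic_sigmoid` is why a single Tanh datapath can also supply Sigmoid, at the cost of halving the input range.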


Subject(s)
Algorithms , Electrocardiography , Neural Networks, Computer , Signal Processing, Computer-Assisted , Electrocardiography/methods , Humans , Artificial Intelligence , Wearable Electronic Devices
10.
Sensors (Basel) ; 24(14)2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39065980

ABSTRACT

During underwater image processing, image quality is affected by the absorption and scattering of light in water, causing problems such as blurring and noise; poor image quality is therefore unavoidable, and underwater image denoising is vital for obtaining satisfactory results. This paper presents an underwater image denoising method, named HHDNet, designed to address noise arising from environmental interference and technical limitations during underwater robot photography. The method leverages a dual-branch network architecture to handle high and low frequencies separately, incorporating a hybrid attention module specifically designed to remove high-frequency abrupt noise in underwater images. Input images are decomposed into high-frequency and low-frequency components using a Gaussian kernel. For the high-frequency part, a Global Context Extractor (GCE) module with a hybrid attention mechanism focuses on removing high-frequency abrupt signals by capturing local details and global dependencies simultaneously. For the low-frequency part, efficient residual convolutional units are used, since these components carry less noise. Experimental results demonstrate that HHDNet effectively achieves underwater image denoising, surpassing existing methods both in denoising effectiveness and in computational efficiency, and thus provides more flexibility in underwater image noise removal.
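The Gaussian-kernel frequency split that feeds HHDNet's two branches can be sketched in a few lines: blur the image for the low-frequency component and take the residual as the high-frequency component. Kernel size and sigma below are illustrative choices, not values from the paper:

```python
import numpy as np

def gaussian_kernel1d(sigma=1.0, radius=2):
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-(x ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def decompose(img, sigma=1.0):
    """Split a 2D image into low- and high-frequency components."""
    k = gaussian_kernel1d(sigma)
    pad = len(k) // 2
    padded = np.pad(img, pad, mode="edge")
    # Separable Gaussian blur: filter rows, then columns.
    low = np.apply_along_axis(lambda r: np.convolve(r, k, "valid"), 1, padded)
    low = np.apply_along_axis(lambda c: np.convolve(c, k, "valid"), 0, low)
    high = img - low               # residual = high-frequency part
    return low, high

img = np.arange(36, dtype=float).reshape(6, 6)
low, high = decompose(img)
```

By construction the two components sum back to the input, so the branches can be denoised independently and recombined.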

11.
Sensors (Basel) ; 24(3)2024 Jan 31.
Article in English | MEDLINE | ID: mdl-38339649

ABSTRACT

Terahertz (THz) waves are electromagnetic waves in the 0.1 to 10 THz frequency range, and THz imaging is utilized in a range of applications, including security inspections, biomedical fields, and the non-destructive examination of materials. However, THz images have a low resolution due to the long wavelength of THz waves. Therefore, improving the resolution of THz images is a current hot research topic. We propose a novel network architecture called J-Net, which is an improved version of U-Net, to achieve THz image super-resolution. It employs simple baseline blocks which can extract low-resolution (LR) image features and learn the mapping of LR images to high-resolution (HR) images efficiently. All training was conducted using the DIV2K+Flickr2K dataset, and we employed the peak signal-to-noise ratio (PSNR) for quantitative comparison. In our comparisons with other THz image super-resolution methods, J-Net achieved a PSNR of 32.52 dB, surpassing other techniques by more than 1 dB. J-Net also demonstrates superior performance on real THz images compared to other methods. Experiments show that the proposed J-Net achieves a better PSNR and visual improvement compared with other THz image super-resolution methods.
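PSNR, the quantitative measure used above, compares a reconstructed image against a reference through the mean squared error. A short sketch (8-bit convention, MAX = 255):

```python
import numpy as np

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; inf for identical images."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.full((4, 4), 100.0)
b = np.full((4, 4), 110.0)   # constant error of 10 -> MSE = 100
score = psnr(a, b)
```

A gain of "more than 1 dB", as reported for J-Net, corresponds to roughly a 21% reduction in MSE.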

12.
Sensors (Basel) ; 24(2)2024 Jan 07.
Article in English | MEDLINE | ID: mdl-38257445

ABSTRACT

This paper proposes a real-time fault diagnostic method for hydraulic systems using data collected from multiple sensors. The method is based on a proposed multi-sensor convolutional neural network (MS-CNN) that incorporates feature extraction, sensor selection, and fault diagnosis into an end-to-end model. Both the sensor selection process and the fault diagnosis process are based on abstract fault-related features learned by a CNN deep learning model. Therefore, compared with the traditional sensor-and-feature selection method, the proposed MS-CNN can find the sensor channels containing higher-level fault-related features, which provides two advantages for diagnosis. First, the sensor selection reduces redundant information and improves the diagnostic performance of the model. Second, the reduced number of sensors simplifies the model, reducing communication burden and computational complexity. These two advantages make the MS-CNN suitable for real-time hydraulic system fault diagnosis, in which both multi-sensor feature extraction and computation speed are significant. The proposed MS-CNN approach is evaluated experimentally on an electric-hydraulic subsea control system test rig and an open-source dataset. The proposed method shows obvious superiority in terms of both diagnostic accuracy and computational speed when compared with traditional CNN models and other state-of-the-art multi-sensor diagnostic methods.

13.
Sensors (Basel) ; 24(5)2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38475050

ABSTRACT

Latent Low-Rank Representation (LatLRR) has emerged as a prominent approach for fusing visible and infrared images. In this approach, images are decomposed into three fundamental components: the base part, salient part, and sparse part. The aim is to blend the base and salient features to reconstruct images accurately. However, existing methods often focus on combining the base and salient parts while neglecting the sparse component. We instead advocate the comprehensive inclusion of all three parts generated by LatLRR image decomposition in the image fusion process, a novel proposition introduced in this study. Moreover, the effective integration of Convolutional Neural Network (CNN) technology with LatLRR remains challenging, particularly after the inclusion of sparse parts. This study utilizes fusion strategies involving weighted average, summation, VGG19, and ResNet50 in various combinations to analyze the fusion performance following the introduction of sparse parts. The research findings show a significant enhancement in fusion performance achieved through the inclusion of sparse parts in the fusion process. The suggested fusion strategy employs deep learning techniques for fusing both base parts and sparse parts, while utilizing a summation strategy for the fusion of salient parts. The findings improve the performance of LatLRR-based methods and offer valuable insights for enhancement, leading to advancements in the field of image fusion.
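The fusion rules discussed above can be shown with a minimal numpy sketch on hypothetical LatLRR outputs. Weighted averaging fuses the base parts and summation fuses the salient parts, as in the suggested strategy; summation also stands in for the paper's deep-feature fusion of base and sparse parts, which is a simplification:

```python
import numpy as np

def fuse(base_vis, base_ir, sal_vis, sal_ir, sp_vis, sp_ir, w=0.5):
    """Recombine the three LatLRR components of both source images."""
    base = w * base_vis + (1 - w) * base_ir       # weighted-average rule
    salient = sal_vis + sal_ir                    # summation rule
    sparse = sp_vis + sp_ir                       # simplified sparse rule
    return base + salient + sparse

shape = (4, 4)
# Constant toy components: visible/infrared base, salient, sparse parts.
parts = [np.full(shape, v) for v in (0.4, 0.6, 0.1, 0.2, 0.05, 0.05)]
fused = fuse(*parts)
```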

14.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931770

ABSTRACT

This paper proposes a convolutional neural network (CNN) model of the signal distribution control algorithm (SDCA) to maximize the dynamic vehicular traffic signal flow for each junction phase. The aim of the proposed algorithm is to determine the reward value and the new state. It deconstructs the routing components of the current multi-directional queuing system (MDQS) architecture to identify optimal policies for every traffic scenario. Initially, the state value is divided into a function value and a parameter value, and combining these two updates the resulting optimized state value. Ultimately, an analogous criterion is developed for the current dataset, and the error or loss value for the present scenario is computed. Furthermore, utilizing the Deep Q-learning methodology with a quad agent enhances the findings of previous studies. The recommended method outperforms all other traditional approaches in effectively optimizing traffic signal timing.
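The reward-and-state update at the heart of the Deep Q-learning step is the standard Q-update. A tabular sketch for clarity (the paper uses a CNN with four agents; the states, actions, and reward below are hypothetical):

```python
def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One Q-learning step: move Q(s,a) toward reward + gamma*max Q(s',.)."""
    best_next = max(q[next_state].values())
    td_target = reward + gamma * best_next
    q[state][action] += alpha * (td_target - q[state][action])
    return q[state][action]

# Hypothetical two-phase junction: actions extend or switch the green.
q = {"phase_a": {"extend": 0.0, "switch": 0.0},
     "phase_b": {"extend": 0.0, "switch": 0.0}}
new_q = q_update(q, "phase_a", "extend", reward=1.0, next_state="phase_b")
```

In the deep variant, the table lookup is replaced by a CNN that maps a traffic-state observation to Q-values for each signal action.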

15.
Sensors (Basel) ; 24(9)2024 Apr 29.
Article in English | MEDLINE | ID: mdl-38732947

ABSTRACT

The remaining useful life (RUL) prediction of RF circuits is an important tool for circuit reliability. Data-driven approaches do not require knowledge of the failure mechanism and reduce the dependence on knowledge of complex circuits, and thus can effectively realize RUL prediction. This manuscript proposes a novel RUL prediction method based on a gated recurrent unit-convolutional neural network (GRU-CNN). Firstly, the data are normalized to improve the efficiency of the algorithm; secondly, the degradation of the circuit is evaluated using a hybrid health score based on the Euclidean and Manhattan distances; then, the life cycle of the RF circuits is segmented based on the hybrid health scores; and finally, an RUL prediction is carried out for the circuits at each stage using the GRU-CNN model. The results show that the RMSE of the GRU-CNN model in the normal operation stage is only 3/5 of that of the GRU and CNN models, while the prediction uncertainty is minimized.
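A hybrid health score mixing the two named distances can be sketched as a weighted sum of the Euclidean and Manhattan distances between a circuit's current feature vector and its healthy baseline. The equal 0.5/0.5 weighting and the feature values are assumptions for illustration, not from the paper:

```python
import numpy as np

def hybrid_health_score(features, baseline, w_euc=0.5, w_man=0.5):
    """Larger score = further from healthy baseline = more degraded."""
    features = np.asarray(features, float)
    baseline = np.asarray(baseline, float)
    d_euc = np.linalg.norm(features - baseline)      # Euclidean distance
    d_man = np.abs(features - baseline).sum()        # Manhattan distance
    return w_euc * d_euc + w_man * d_man

score_new = hybrid_health_score([1.0, 2.0], [1.0, 2.0])       # healthy
score_degraded = hybrid_health_score([4.0, 6.0], [1.0, 2.0])  # degraded
```

Thresholds on this score would then segment the life cycle into stages before the stage-wise GRU-CNN prediction.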

16.
Sensors (Basel) ; 24(15)2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39123812

ABSTRACT

Maintaining security in communication networks has long been a major concern. This issue has become increasingly crucial due to the emergence of new communication architectures like the Internet of Things (IoT) and the advancement and complexity of infiltration techniques. Previous intrusion detection systems (IDSs), which often use a centralized design to identify threats, are now ineffective in IoT-based networks. To resolve these issues, this study presents a novel cooperative approach to IoT intrusion detection that may be useful in addressing certain current security issues. The suggested approach chooses the most important attributes that best describe the communication between objects by using Black Hole Optimization (BHO). Additionally, a novel method for describing the network's matrix-based communication properties is put forward. These two feature sets constitute the inputs of the suggested intrusion detection model. The suggested technique splits the network into a number of subnets using a software-defined network (SDN). Each subnet is monitored by a controller node, which uses a parallel combination of convolutional neural networks (PCNN) to determine the presence of security threats in the traffic passing through its subnet. The proposed method also uses majority voting for the cooperation of controller nodes in order to detect attacks more accurately. The findings demonstrate that, in comparison to prior approaches, the suggested cooperative strategy can detect attacks in the NSL-KDD and UNSW-NB15 datasets with accuracies of 99.89 and 97.72 percent, respectively, an improvement of at least 0.6 percent.
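The cooperative step described above, where controller nodes combine their subnet-level verdicts by majority voting, is simple to sketch (the verdict labels are illustrative):

```python
from collections import Counter

def majority_vote(verdicts):
    """Return the most common verdict among the controller nodes."""
    counts = Counter(verdicts)
    label, _ = counts.most_common(1)[0]
    return label

# Each SDN controller reports its PCNN's verdict for the observed flow.
controller_verdicts = ["attack", "normal", "attack", "attack", "normal"]
decision = majority_vote(controller_verdicts)
```

An odd number of voting controllers avoids ties; with an even number, a tie-breaking rule (e.g., defaulting to "attack" for safety) would be needed.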

17.
Sensors (Basel) ; 24(6)2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38544255

ABSTRACT

Near-infrared (NIR) spectroscopy is widely used as a nondestructive evaluation (NDE) tool for predicting wood properties. When deploying NIR models, one faces challenges in ensuring representative training data, which large datasets can mitigate but often at a significant cost. Machine learning and deep learning NIR models are at an even greater disadvantage because they typically require larger sample sizes for training. In this study, NIR spectra were collected to predict the modulus of elasticity (MOE) of southern pine lumber (training set = 573 samples, testing set = 145 samples). To account for the limited size of the training data, this study employed a generative adversarial network (GAN) to generate synthetic NIR spectra. The training dataset was fed into a GAN to generate 313, 573, and 1000 synthetic spectra. The original and enhanced datasets were used to train artificial neural networks (ANNs), convolutional neural networks (CNNs), and light gradient boosting machines (LGBMs) for MOE prediction. Overall, the results showed that data augmentation using the GAN improved the coefficient of determination (R2) by up to 7.02% and reduced prediction error by up to 4.29%. ANNs and CNNs benefited more from synthetic spectra than LGBMs, which yielded only slight improvement. All models showed optimal performance when 313 synthetic spectra were added to the original training data; further additions did not improve model performance, because the quality of the data points generated by the GAN degrades beyond a certain threshold, likely owing to the limited size of the initial training data fed into the GAN. LGBMs showed superior performance to ANNs and CNNs on both the original and enhanced training datasets, which highlights the significance of selecting an appropriate machine learning or deep learning model for NIR spectral-data analysis.
The results highlighted the positive impact of GAN on the predictive performance of models utilizing NIR spectroscopy as an NDE technique and monitoring tool for wood mechanical-property evaluation. Further studies should investigate the impact of the initial size of training data, the optimal number of generated synthetic spectra, and machine learning or deep learning models that could benefit more from data augmentation using GANs.
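The coefficient of determination (R²) used above to score the MOE models is defined from the residual and total sums of squares. A short sketch on hypothetical measured vs. predicted values:

```python
import numpy as np

def r_squared(actual, predicted):
    """R^2 = 1 - SS_res / SS_tot; 1.0 means a perfect fit."""
    actual = np.asarray(actual, float)
    predicted = np.asarray(predicted, float)
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

actual = [10.0, 12.0, 14.0, 16.0]
perfect = r_squared(actual, actual)                       # 1.0
imperfect = r_squared(actual, [12.0, 12.0, 14.0, 14.0])   # < 1.0
```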


Subject(s)
Data Analysis , Wood , Elastic Modulus , Light , Machine Learning
18.
Sensors (Basel) ; 24(4)2024 Feb 08.
Article in English | MEDLINE | ID: mdl-38400286

ABSTRACT

The monitoring of cutting tool lifetime often faces problems such as life data loss, drift, and distortion, which greatly compromise prediction accuracy. The recent rise of deep learning, such as Gated Recurrent Units (GRUs), Hidden Markov Models (HMMs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Attention networks, and Transformers, has dramatically alleviated the data problems in tool lifetime prediction, substantially enhancing the accuracy of tool wear prediction. In this paper, we introduce a novel approach known as PCHIP-Enhanced ConvGRU (PECG), which leverages multiple-feature fusion for tool wear prediction. When compared to traditional models such as CNNs, the CNN Block, and GRUs, our method consistently outperformed them across all key performance metrics, with a primary focus on accuracy. PECG addresses the challenge of missing tool wear measurement data relative to the sensor data. By employing PCHIP interpolation to fill the gaps in the wear values, we have developed a model that combines the strengths of both CNNs and GRUs with data augmentation. The experimental results demonstrate that our proposed method achieved an exceptional relative accuracy of 0.8522, while also exhibiting a Pearson's Correlation Coefficient (PCC) exceeding 0.95. This innovative approach not only predicts tool wear with remarkable precision, but also offers enhanced stability.
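The PCHIP gap-filling step can be sketched with SciPy's shape-preserving interpolator: wear is measured at a few cuts only, and the missing values between them are reconstructed without the overshoot a plain cubic spline could introduce. The cut indices and wear values below are hypothetical:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

# Sparse wear measurements (mm) at selected cut counts, monotone as
# physical tool wear should be.
measured_cuts = np.array([0, 50, 100, 150, 200])
measured_wear = np.array([0.00, 0.05, 0.08, 0.15, 0.30])

pchip = PchipInterpolator(measured_cuts, measured_wear)
all_cuts = np.arange(0, 201)
filled_wear = pchip(all_cuts)   # dense wear curve aligned with sensor data
```

Because PCHIP is monotonicity-preserving on monotone data, the filled-in wear curve never decreases between measurements, which keeps the training labels physically plausible.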

19.
Sensors (Basel) ; 24(12)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38931751

ABSTRACT

This work addresses the challenge of classifying multiclass visual EEG signals into 40 classes for brain-computer interface applications using deep learning architectures. The visual multiclass classification approach offers BCI applications a significant advantage since it allows the supervision of more than one BCI interaction, considering that each class label supervises a BCI task. However, because of the nonlinearity and nonstationarity of EEG signals, using multiclass classification based on EEG features remains a significant challenge for BCI systems. In the present work, mutual information-based discriminant channel selection and minimum-norm estimate algorithms were implemented to select discriminant channels and enhance the EEG data. Hence, deep EEGNet and convolutional recurrent neural networks were separately implemented to classify the EEG data for image visualization into 40 labels. Using the k-fold cross-validation approach, average classification accuracies of 94.8% and 89.8% were obtained by implementing the aforementioned network architectures. The satisfactory results obtained with this method offer a new implementation opportunity for multitask embedded BCI applications utilizing a reduced number of both channels (<50%) and network parameters (<110 K).


Subject(s)
Algorithms , Brain-Computer Interfaces , Deep Learning , Electroencephalography , Neural Networks, Computer , Electroencephalography/methods , Humans , Signal Processing, Computer-Assisted
20.
Sensors (Basel) ; 24(3)2024 Jan 30.
Article in English | MEDLINE | ID: mdl-38339606

ABSTRACT

In recent years, radar emitter signal recognition has enjoyed a wide range of applications in electronic support measure systems and communication security. More and more deep learning algorithms have been used to improve the recognition accuracy of radar emitter signals. However, complex deep learning algorithms and data preprocessing operations have a huge demand for computing power, which cannot meet the requirements of low power consumption and high real-time processing scenarios. Therefore, many research works have remained in the experimental stage and cannot be actually implemented. To tackle this problem, this paper proposes a resource reuse computing acceleration platform based on field programmable gate arrays (FPGA), and implements a one-dimensional (1D) convolutional neural network (CNN) and long short-term memory (LSTM) neural network (NN) model for radar emitter signal recognition, directly targeting the intermediate frequency (IF) data of radar emitter signal for classification and recognition. The implementation of the 1D-CNN-LSTM neural network on FPGA is realized by multiplexing the same systolic array to accomplish the parallel acceleration of 1D convolution and matrix vector multiplication operations. We implemented our network on Xilinx XCKU040 to evaluate the effectiveness of our proposed solution. Our experiments show that the system can achieve 7.34 giga operations per second (GOPS) data throughput with only 5.022 W power consumption when the radar emitter signal recognition rate is 96.53%, which greatly improves the energy efficiency ratio and real-time performance of the radar emitter recognition system.
