Results 1 - 20 of 23
1.
Sci Rep ; 13(1): 20803, 2023 Nov 27.
Article in English | MEDLINE | ID: mdl-38012224

ABSTRACT

During the production of metal materials, various complex defects may appear on the surface, together with a large amount of background texture information, causing false or missed detections when detecting small defects. To address these problems, this paper introduces a new model that combines the advantages of the CSPlayer module and a global attention enhancement mechanism, built on the YOLOv5s model. First, we replace the C3 module with the CSPlayer module to augment the neural network model and improve its flexibility and adaptability. Then, we introduce the Global Attention Mechanism (GAM) and build the generalized additive model; the attention weights of all dimensions are weighted and averaged as the output to improve detection speed and accuracy. Experiments on the augmented GC10-DET dataset show that the improved model outperforms YOLOv5s in precision, mAP@0.5, and mAP@0.5:0.95 by 5.3%, 1.4%, and 1.7%, respectively, and also has a higher inference speed.
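As an illustration of the kind of global attention block described above, the following is a minimal PyTorch sketch of a GAM-style module (channel attention via a shared MLP over permuted features, followed by convolutional spatial attention); the reduction ratio and layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class GAMAttention(nn.Module):
    """GAM-style global attention: channel attention via an MLP applied to
    every spatial position's channel vector, then convolutional spatial
    attention. The reduction ratio r=4 is an assumed, illustrative choice."""
    def __init__(self, channels: int, r: int = 4):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // r),
            nn.ReLU(inplace=True),
            nn.Linear(channels // r, channels),
        )
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels // r, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels // r),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // r, channels, kernel_size=7, padding=3),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Channel attention: shared MLP over the channel vector of each pixel.
        ch = x.permute(0, 2, 3, 1).reshape(b, -1, c)            # (B, H*W, C)
        ch = self.channel_mlp(ch).reshape(b, h, w, c).permute(0, 3, 1, 2)
        x = x * torch.sigmoid(ch)
        # Spatial attention: 7x7 convolutions produce a per-pixel gate.
        return x * torch.sigmoid(self.spatial(x))

x = torch.randn(1, 64, 40, 40)
print(GAMAttention(64)(x).shape)   # torch.Size([1, 64, 40, 40])
```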

2.
Sci Rep ; 13(1): 9805, 2023 Jun 16.
Article in English | MEDLINE | ID: mdl-37328545

ABSTRACT

To solve the problem of missed and false detections caused by the large number of tiny targets and complex background textures in printed circuit boards (PCBs), we propose a global contextual attention augmented YOLO model with ConvMixer prediction heads (GCC-YOLO). In this study, we apply a high-resolution feature layer (P2) to obtain more detail and positional information about small targets. Moreover, to suppress noisy background information and further enhance the feature extraction capability, a global contextual attention (GC) module is introduced into the backbone network and combined with the C3 module. Furthermore, to reduce the loss of shallow feature information due to the deepening of network layers, a bi-directional weighted feature pyramid (BiFPN) feature fusion structure is introduced. Finally, a ConvMixer module is introduced and combined with the C3 module to create a new prediction head, which improves the model's small-target detection capability while reducing its parameters. Test results on the PCB dataset show that GCC-YOLO improves Precision, Recall, mAP@0.5, and mAP@0.5:0.95 by 0.2%, 1.8%, 0.5%, and 8.3%, respectively, compared to YOLOv5s; moreover, it has a smaller model size and faster inference speed than other algorithms.
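The ConvMixer module mentioned above has a simple structure; a hedged PyTorch sketch of one ConvMixer-style block (depthwise spatial mixing with a residual connection, then 1x1 channel mixing) is given below, with the kernel size and activation chosen as assumptions rather than taken from the paper.

```python
import torch
import torch.nn as nn

class ConvMixerBlock(nn.Module):
    """ConvMixer-style block: depthwise conv (spatial mixing) with a residual
    connection, then a 1x1 pointwise conv (channel mixing).
    Kernel size 9 and GELU are illustrative defaults."""
    def __init__(self, dim: int, kernel_size: int = 9):
        super().__init__()
        self.depthwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size, groups=dim, padding=kernel_size // 2),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )
        self.pointwise = nn.Sequential(
            nn.Conv2d(dim, dim, kernel_size=1),
            nn.GELU(),
            nn.BatchNorm2d(dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.depthwise(x)   # spatial mixing with skip connection
        return self.pointwise(x)    # channel mixing

feat = torch.randn(1, 128, 20, 20)
print(ConvMixerBlock(128)(feat).shape)  # torch.Size([1, 128, 20, 20])
```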


Subjects
Algorithms; Mental Recall; Problem Solving; Pyramidal Tracts; Records
3.
Sensors (Basel) ; 23(11)2023 May 27.
Article in English | MEDLINE | ID: mdl-37299841

ABSTRACT

Aiming at the low detection efficiency and poor detection accuracy caused by texture feature interference and dramatic changes in the scale of defects on steel surfaces, an improved YOLOv5s model is proposed. In this study, we propose a novel re-parameterized large-kernel C3 module, which enables the model to obtain a larger effective receptive field and improves feature extraction under complex texture interference. Moreover, we construct a feature fusion structure with a multi-path spatial pyramid pooling module to adapt to the scale variation of steel surface defects. Finally, we propose a training strategy that applies different kernel sizes to feature maps of different scales, so that the receptive field of the model adapts to the scale changes of the feature maps to the greatest extent. Experiments on the NEU-DET dataset show that our model improves the detection accuracy of crazing and rolled-in scale defects, which contain a large number of weak texture features and are densely distributed, by 14.4% and 11.1%, respectively. Additionally, the detection accuracy of inclusion and scratch defects, which show prominent scale changes and significant shape features, is improved by 10.5% and 6.6%, respectively. Meanwhile, the mean average precision reaches 76.8%, an increase of 8.6% and 3.7% over YOLOv5s and YOLOv8s, respectively.
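A multi-path spatial pyramid pooling block of the kind referenced above can be sketched as follows; this PyTorch snippet uses assumed pool sizes (5, 9, 13) and is not the paper's exact module.

```python
import torch
import torch.nn as nn

class MultiPathSPP(nn.Module):
    """Sketch of a spatial pyramid pooling block: parallel max-pooling paths
    with different kernel sizes are concatenated with the input and fused by
    a 1x1 convolution. The pool sizes are assumptions."""
    def __init__(self, channels: int, pool_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            [nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in pool_sizes]
        )
        self.fuse = nn.Conv2d(channels * (len(pool_sizes) + 1), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        paths = [x] + [pool(x) for pool in self.pools]   # identity + pooled paths
        return self.fuse(torch.cat(paths, dim=1))        # fuse back to `channels`

x = torch.randn(1, 256, 20, 20)
print(MultiPathSPP(256)(x).shape)  # torch.Size([1, 256, 20, 20])
```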

4.
Phys Med Biol ; 68(10)2023 05 02.
Article in English | MEDLINE | ID: mdl-36958057

ABSTRACT

Objective. Cardiovascular disease (CVD) is a group of diseases affecting the heart and blood vessels, and short-axis cardiac magnetic resonance (CMR) images are considered the gold standard for the diagnosis and assessment of CVD. In CMR images, accurate segmentation of cardiac structures (e.g. the left ventricle) assists in the parametric quantification of cardiac function. However, the dynamic beating of the heart makes the location of the heart with respect to other tissues difficult to resolve, and the myocardium and its surrounding tissues are similar in grayscale. This makes it challenging to accurately segment the cardiac images. Our goal is to develop a more accurate CMR image segmentation approach. Approach. In this study, we propose a regional perception and multi-scale feature fusion network (RMFNet) for CMR image segmentation. We design two regional perception modules, a window selection transformer (WST) module and a grid extraction transformer (GET) module. The WST module introduces a window selection block to adaptively select the window of interest for perceiving information, and a windowed transformer block to enhance global information extraction within each feature window. The WST module enhances network performance by improving the window of interest. The GET module grids the feature maps to decrease redundant information in the feature maps and enhances the extraction of the network's latent feature information. The RMFNet further introduces a novel multi-scale feature extraction module to improve the ability to retain detailed information. Main results. The RMFNet is validated with experiments on three cardiac datasets. The results show that the RMFNet outperforms other advanced methods in overall performance. The RMFNet is further validated for generalizability on a multi-organ dataset. The results also show that the RMFNet surpasses other comparison methods. Significance. Accurate medical image segmentation can reduce the burden on radiologists and play an important role in image-guided clinical procedures.


Subjects
Cardiovascular Diseases; Heart; Humans; Heart Ventricles; Myocardium; Perception; Image Processing, Computer-Assisted
5.
J Xray Sci Technol ; 31(2): 301-317, 2023.
Article in English | MEDLINE | ID: mdl-36617767

ABSTRACT

BACKGROUND: Lung cancer has the second highest cancer mortality rate in the world today. Although lung cancer screening using CT images is a common approach for early lung cancer detection, accurately detecting lung nodules remains a challenging issue in clinical practice. OBJECTIVE: This study aims to develop a new weighted bidirectional recursive pyramid algorithm to address the small size of lung nodules, the large proportion of background region, and the complex lung structures in lung nodule detection from CT images. METHODS: First, the weighted bidirectional recursive feature pyramid network (BiRPN) is proposed, which increases the ability of the network model to extract feature information and achieves multi-scale information fusion. Second, a CBAM_CSPDarknet53 structure is developed to incorporate an attention mechanism as a feature extraction module, which aggregates both the spatial and channel information of the feature map. Third, the weighted BiRPN and CBAM_CSPDarknet53 are applied to the YOLOvX model for lung nodule detection experiments, named BiRPN-YOLOvX, where YOLOvX represents different versions of YOLO. To verify the effectiveness of the weighted BiRPN and CBAM_CSPDarknet53 modules, they are fused with YOLOv3, YOLOv4, and YOLOv5, and extensive experiments are carried out on the publicly available lung nodule datasets LUNA16 and LIDC-IDRI. The training set of LUNA16 contains 949 images, and the validation and testing sets each contain 118 images. There are 1987, 248, and 248 images in LIDC-IDRI's training, validation, and testing sets, respectively. RESULTS: The sensitivity of lung nodule detection using BiRPN-YOLOv5 reaches 98.7% on LUNA16 and 96.2% on LIDC-IDRI. CONCLUSION: This study demonstrates that the proposed method has the potential to improve the sensitivity of lung nodule detection in future clinical practice.
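The "weighted" part of the BiRPN can be illustrated with a BiFPN-style normalized weighted fusion node; the following PyTorch sketch is a generic illustration of that idea, not the paper's exact module, and assumes the input feature maps have already been resized to a common shape.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Sketch of a weighted feature-fusion node used in BiFPN-style pyramids:
    each input map gets a learnable non-negative weight, normalized so the
    weights sum (approximately) to one."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.weights)        # keep the fusion weights non-negative
        w = w / (w.sum() + self.eps)        # fast normalized fusion
        return sum(wi * f for wi, f in zip(w, feats))

p3, p4_up = torch.randn(1, 64, 40, 40), torch.randn(1, 64, 40, 40)
print(WeightedFusion(2)([p3, p4_up]).shape)  # torch.Size([1, 64, 40, 40])
```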


Subjects
Lung Neoplasms; Solitary Pulmonary Nodule; Humans; Lung Neoplasms/diagnostic imaging; Solitary Pulmonary Nodule/diagnostic imaging; Early Detection of Cancer; Tomography, X-Ray Computed/methods; Radiographic Image Interpretation, Computer-Assisted/methods; Databases, Factual; Lung/diagnostic imaging; Algorithms
6.
Plants (Basel) ; 11(21)2022 Oct 31.
Article in English | MEDLINE | ID: mdl-36365386

ABSTRACT

Humans depend heavily on agriculture, which is the main source of prosperity. The various plant diseases that farmers must contend with pose major challenges to crop production. The main issues that should be taken into account for maximizing productivity are the recognition and prevention of plant diseases. Early diagnosis of plant disease is essential for maximizing agricultural yield as well as saving costs and reducing crop loss. In addition, computerizing the whole process makes it simple to implement. In this paper, an intelligent method based on deep learning is presented to recognize nine common tomato diseases. To this end, a residual neural network algorithm is presented to recognize tomato diseases. This research is carried out on four levels of diversity: depth size, discriminative learning rates, training and validation data split ratios, and batch sizes. For the experimental analysis, five network depths are used to measure the accuracy of the network. Based on the experimental results, the proposed method achieved the highest F1 score of 99.5%, which outperformed most previous competing methods in tomato leaf disease recognition. Further testing of our method on the Flavia leaf image dataset resulted in a 99.23% F1 score. However, the method had a drawback in that some of the false predictions involved tomato early blight and tomato late blight, which are two classes with only fine-grained distinctions.
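Discriminative learning rates, one of the four diversity levels studied above, amount to assigning different learning rates to different parameter groups; the PyTorch sketch below uses a toy stand-in model and assumed rate values, not the paper's network or settings.

```python
import torch
import torch.nn as nn

# Toy two-part classifier standing in for the tomato-disease network.
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten()),   # "backbone"
    nn.Linear(32, 9),                                        # "head": 9 tomato classes
)

# Discriminative learning rates: the earlier, more generic layers get a smaller
# learning rate than the task-specific head. The rate values are assumptions;
# per-group "lr" overrides the default lr passed to the optimizer.
optimizer = torch.optim.SGD(
    [
        {"params": model[0].parameters(), "lr": 1e-4},  # backbone
        {"params": model[1].parameters(), "lr": 1e-2},  # classification head
    ],
    lr=1e-2,
    momentum=0.9,
)
print([g["lr"] for g in optimizer.param_groups])  # [0.0001, 0.01]
```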

7.
Sensors (Basel) ; 22(15)2022 Aug 02.
Article in English | MEDLINE | ID: mdl-35957334

ABSTRACT

The pattern synthesis of antenna arrays is a substantial factor that can enhance the effectiveness and validity of a wireless communication system. This work proposes an advanced marine predator algorithm (AMPA) to synthesize the beam patterns of a non-uniform circular antenna array (CAA). The AMPA utilizes an adaptive velocity update mechanism with a chaotic sequence parameter to improve the exploration and exploitation capability of the algorithm. The MPA structure is simplified and upgraded to avoid becoming stuck in local optima. The AMPA is employed for the joint optimization of amplitude currents and inter-element spacings to suppress the peak sidelobe level (SLL) of 8-element, 10-element, 12-element, and 18-element CAAs, taking the mutual coupling effects into consideration. The results show that it attains better performance in SLL suppression and convergence rate compared with several other algorithms on the same optimization cases.


Assuntos
Algoritmos , Ácido alfa-Amino-3-hidroxi-5-metil-4-isoxazol Propiônico
8.
Healthcare (Basel) ; 10(3)2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35326972

ABSTRACT

Brain tumors are among the most aggressive diseases, resulting in a very short life span if diagnosed at an advanced stage. The treatment planning phase is thus essential for enhancing the quality of life of patients. The use of Magnetic Resonance Imaging (MRI) in the diagnosis of brain tumors is extremely widespread, but the manual interpretation of large numbers of images requires considerable effort and is prone to human error. Hence, an automated method is necessary to identify the most common brain tumors. Convolutional Neural Network (CNN) architectures are successful in image classification due to their high layer count, which enables them to learn features effectively on their own. The tuning of CNN hyperparameters is critical for every dataset since it has a significant impact on the efficiency of the training model. Given the high dimensionality and complexity of the data, manual hyperparameter tuning would take an inordinate amount of time, with the possibility of failing to identify the optimal hyperparameters. In this paper, we propose a Bayesian Optimization-based efficient hyperparameter optimization technique for CNNs. This method was evaluated by classifying 3064 T1-weighted CE-MRI images into three types of brain tumors (glioma, meningioma, and pituitary). Based on Transfer Learning, the performance of five well-recognized deep pre-trained models is compared with that of the optimized CNN. After using Bayesian Optimization, our CNN was able to attain 98.70% validation accuracy at best without data augmentation or lesion-cropping techniques, while VGG16, VGG19, ResNet50, InceptionV3, and DenseNet201 achieved 97.08%, 96.43%, 89.29%, 92.86%, and 94.81% validation accuracy, respectively. Moreover, the proposed model outperforms state-of-the-art methods on the CE-MRI dataset, demonstrating the feasibility of automating hyperparameter optimization.
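The hyperparameter search itself can be reproduced in spirit with any Bayesian-optimization library; the sketch below uses Optuna (not necessarily the authors' tooling) with a placeholder objective where CNN training and validation would go, and the searched hyperparameters are assumptions.

```python
import optuna

def objective(trial: optuna.Trial) -> float:
    # Example search space; the actual hyperparameters tuned in the paper may differ.
    lr = trial.suggest_float("learning_rate", 1e-5, 1e-1, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    filters = trial.suggest_categorical("base_filters", [16, 32, 64])
    # A real objective would build the CNN with these values, train it on the
    # CE-MRI training split, and return validation accuracy. A synthetic score
    # keeps this sketch runnable without the dataset.
    return 1.0 - abs(lr - 1e-3) - abs(dropout - 0.3) - abs(filters - 32) / 100

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print(study.best_params)
```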

9.
Phys Med Biol ; 67(5)2022 03 03.
Article in English | MEDLINE | ID: mdl-35168211

ABSTRACT

Objective. Left ventricular (LV) segmentation of cardiac magnetic resonance imaging (MRI) is essential for diagnosing and treating heart diseases at an early stage. In convolutional neural networks, the target information of the LV in feature maps may be lost through convolution and max-pooling, particularly at end-systole. Fine segmentation of the ventricular contour is still a challenge, and it may lead to inaccurate calculation of clinical parameters (e.g. ventricular volume). In order to improve the similarity between the neural network output and the target segmentation region, in this paper a fine-grained calibrated double-attention convolutional network (FCDA-Net) is proposed to finely segment the endocardium and epicardium from ventricular MRI. Approach. FCDA-Net takes the U-net as the backbone network, and the encoder-decoder structure incorporates a double grouped-attention module constructed from a fine calibration spatial attention module (fcSAM) and a fine calibration channel attention module (fcCAM). The double grouped-attention mechanism enhances the expression of information in both the spatial and channel-wise feature maps to achieve fine calibration. Main results. The proposed approach is evaluated on the public MICCAI 2009 challenge dataset, and ablation experiments are conducted to demonstrate the effect of each grouped-attention module. Compared with other advanced segmentation methods, FCDA-Net obtains better LV segmentation performance. Significance. The LV segmentation results of MRI can be used to perform more accurate quantitative analysis of many essential clinical parameters, and they can play an important role in image-guided clinical surgery.


Subjects
Heart Diseases; Heart Ventricles; Endocardium; Heart; Heart Ventricles/diagnostic imaging; Humans; Neural Networks, Computer
10.
Healthcare (Basel) ; 10(1)2022 Jan 15.
Article in English | MEDLINE | ID: mdl-35052328

ABSTRACT

The novel coronavirus (COVID-19) has been endangering human health and life since 2019. Timely quarantine, diagnosis, and treatment of infected people are the most necessary and important tasks. The most widely used method of detecting COVID-19 is real-time polymerase chain reaction (RT-PCR). Along with RT-PCR, computed tomography (CT) has become a vital technique in diagnosing and managing COVID-19 patients. COVID-19 reveals a number of radiological signatures that can be easily recognized through chest CT. These signatures must be analyzed by radiologists, which is, however, an error-prone and time-consuming process. Deep Learning-based methods can be used to perform automatic chest CT analysis, which may shorten the analysis time. The aim of this study is to design a robust and rapid medical recognition system to identify positive cases in chest CT images using three Ensemble Learning-based models. There are several techniques in Deep Learning for developing a detection system. In this paper, we employed Transfer Learning, with which the knowledge obtained from a pre-trained Convolutional Neural Network (CNN) can be applied to a different but related task. To ensure the robustness of the proposed system for identifying positive cases in chest CT images, we used two Ensemble Learning methods, namely Stacking and Weighted Average Ensemble (WAE), to combine the outputs of three fine-tuned Base-Learners (VGG19, ResNet50, and DenseNet201). For Stacking, we explored 2-level and 3-level Stacking. The three generated Ensemble Learning-based models were trained on two chest CT datasets. A variety of common evaluation measures (accuracy, recall, precision, and F1-score) are used to perform a comparative analysis of each method. The experimental results show that the WAE method provides the most reliable performance, achieving a high recall value, which is a desirable outcome in medical applications since failing to identify a truly infected patient poses a greater risk.
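The Weighted Average Ensemble step reduces to a weighted mean of the base learners' class probabilities; a small NumPy sketch with toy probabilities and assumed weights follows.

```python
import numpy as np

def weighted_average_ensemble(prob_list, weights):
    """Combine per-model class-probability arrays (each n_samples x n_classes)
    with a weighted average and return hard class predictions."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    stacked = np.stack(prob_list)                 # (n_models, n_samples, n_classes)
    avg = np.tensordot(weights, stacked, axes=1)  # weighted average over models
    return avg.argmax(axis=1)

# Toy probabilities standing in for fine-tuned VGG19 / ResNet50 / DenseNet201
# outputs on two CT images (class 0 = negative, class 1 = positive); the
# ensemble weights are illustrative assumptions.
p_vgg   = np.array([[0.80, 0.20], [0.40, 0.60]])
p_res   = np.array([[0.70, 0.30], [0.55, 0.45]])
p_dense = np.array([[0.90, 0.10], [0.35, 0.65]])
print(weighted_average_ensemble([p_vgg, p_res, p_dense], weights=[0.4, 0.3, 0.3]))
```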

11.
Sensors (Basel) ; 22(1)2022 Jan 05.
Article in English | MEDLINE | ID: mdl-35009926

ABSTRACT

Wireless energy transfer (WET) is a new strategy that has the potential to fundamentally resolve energy and lifespan issues in a wireless sensor network (WSN). We investigate a wireless energy transfer-based wireless sensor network served by a wireless mobile charging device (WMCD) and develop a periodic charging scheme to keep the network operative. This paper aims to reduce the overall system energy consumption and total distance traveled, and to increase the ratio of charging device vacation time. We propose an energy renewable management system based on particle swarm optimization (ERMS-PSO) to achieve energy savings based on an investigation of the total energy consumption. In this new strategy, we introduce two energy levels, emin (minimum energy level) and ethresh (threshold energy level). When the first node reaches emin, it informs the base station, which determines all nodes that fall under ethresh and sends a WMCD to charge them in one cycle. These preset energy levels help decide when a sensor node needs to be charged before it reaches its minimum energy and help the network operate for a long time without failing. In contrast to previous schemes, in which the wireless mobile charging device visited and charged all nodes in each cycle, in our strategy the charging device visits only the few nodes that use more energy than others. Numerical results demonstrate that our proposed strategy can considerably reduce the total energy consumption and the distance traveled by the charging device and increase its vacation time ratio while retaining performance, and that ERMS-PSO is more practical for real-world networks because it can keep the network operational with less complexity than other schemes.
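The emin/ethresh charging trigger described above can be captured in a few lines; the sketch below covers only the node-selection logic and omits the PSO-based tour planning, with the threshold values chosen as assumptions.

```python
E_MIN = 10.0      # energy level that triggers a charging request (assumed units)
E_THRESH = 25.0   # threshold below which nodes join the charging tour (assumed)

def nodes_to_charge(node_energy: dict) -> list:
    """Sketch of the base-station logic described in the abstract: once any node
    falls to E_MIN, every node at or below E_THRESH is scheduled for the mobile
    charger's next tour. Tour ordering / routing is out of scope here."""
    if min(node_energy.values()) > E_MIN:
        return []                       # no node has requested charging yet
    return [n for n, e in node_energy.items() if e <= E_THRESH]

energies = {"n1": 9.5, "n2": 22.0, "n3": 48.0, "n4": 24.9}
print(nodes_to_charge(energies))        # ['n1', 'n2', 'n4']
```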

12.
PeerJ Comput Sci ; 8: e1141, 2022.
Article in English | MEDLINE | ID: mdl-37346305

ABSTRACT

Online meeting applications (apps) have emerged as a potential solution for conferencing, education, meetings, etc. during the COVID-19 outbreak and are used by private companies and governments alike. A large number of such apps compete with each other by providing different sets of functions aimed at user satisfaction. These apps collect users' feedback in the form of opinions and reviews, which are later used to improve the quality of services. Sentiment analysis serves as the key function to obtain and analyze users' sentiments from the posted feedback, indicating the importance of efficient and accurate sentiment analysis. This study proposes the novel idea of self voting classification (SVC), where multiple variants of the same model are trained using different feature extraction approaches and the final prediction is based on an ensemble of these variants. For the experiments, data collected from the Google Play store for online meeting apps were used. Primarily, the focus of this study is to use a support vector machine (SVM) with the proposed SVC approach using both soft voting (SV) and hard voting (HV) criteria; however, decision tree, logistic regression, and k nearest neighbor have also been investigated for performance appraisal. Three variants of each model are trained on bag-of-words, term frequency-inverse document frequency, and hashing features to make the ensemble. Experimental results indicate that the proposed SVC approach can elevate the performance of traditional machine learning models substantially. The SVM obtains 1.00 and 0.98 accuracy scores using the HV and SV criteria, respectively, when used with the proposed SVC approach. Topic-wise sentiment analysis using the latent Dirichlet allocation technique is performed as well for topic modeling.
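The SVC idea of training variants of one classifier on different feature representations and voting over them maps naturally onto scikit-learn pipelines; the sketch below uses a toy corpus and hard voting (soft voting would additionally require probability estimates from each variant), and is an illustration rather than the authors' exact setup.

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer, HashingVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.ensemble import VotingClassifier

# Tiny toy corpus standing in for the Google Play review data.
reviews = ["great app for meetings", "video keeps freezing", "love the screen share",
           "audio is terrible", "works perfectly for classes", "crashes every time"]
labels = [1, 0, 1, 0, 1, 0]   # 1 = positive, 0 = negative

# Three variants of the same base model (SVM), each with a different
# feature-extraction front end; majority vote gives the final label.
ensemble = VotingClassifier(
    estimators=[
        ("bow",   make_pipeline(CountVectorizer(), SVC())),
        ("tfidf", make_pipeline(TfidfVectorizer(), SVC())),
        ("hash",  make_pipeline(HashingVectorizer(n_features=2**10), SVC())),
    ],
    voting="hard",   # HV criterion; SV would use voting="soft" with SVC(probability=True)
)
ensemble.fit(reviews, labels)
print(ensemble.predict(["the meeting quality is great"]))
```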

13.
Comput Intell Neurosci ; 2022: 7372984, 2022.
Article in English | MEDLINE | ID: mdl-39411549

ABSTRACT

In recent years, the use of long short-term memory (LSTM) networks has made significant contributions to various fields, and combining intelligent optimization algorithms with LSTM is one of the best ways to remedy model shortcomings and increase classification accuracy. Reservoir identification is a key and difficult point in well logging, so using LSTM to identify reservoirs is very important. To improve the logging reservoir identification accuracy of LSTM, an improved equalization optimizer algorithm (TAFEO) is proposed in this paper to optimize the number of neurons and various parameters of the LSTM. The TAFEO algorithm mainly employs tent chaotic mapping to enhance the population diversity of the algorithm, introduces a convergence factor to better balance local and global search, and then employs a premature-disturbance strategy to overcome the shortcoming of becoming trapped in local minima. The optimization performance of the TAFEO algorithm is tested with 16 benchmark test functions, and the Wilcoxon rank-sum test is applied to the optimization results. The improved algorithm is superior to many intelligent optimization algorithms in accuracy and convergence speed and has good robustness. The receiver operating characteristic (ROC) curve is used to evaluate the performance of the optimized LSTM model. Through simulation and comparison on UCI datasets, the results show that the performance of the LSTM model based on TAFEO is significantly improved, and the maximum area under the ROC curve reaches 99.43%. In practical logging applications, the LSTM based on the equalization optimizer is effective in well-logging reservoir identification, the highest recognition accuracy reaches 95.01%, and the accuracy of reservoir identification is better than that of other existing identification methods.
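Tent chaotic mapping, the first of the three TAFEO strategies, is typically used to generate a more diverse candidate population than uniform random sampling; the NumPy sketch below is a generic illustration with an assumed tent-map parameter, not the paper's full optimizer.

```python
import numpy as np

def tent_map_population(pop_size: int, dim: int, lower, upper, mu: float = 0.7):
    """Initialize a population with a tent chaotic map instead of uniform
    random numbers; mu is an assumed map parameter."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    seq = np.empty((pop_size, dim))
    x = np.random.rand(dim)                      # chaotic seed in (0, 1)
    for i in range(pop_size):
        # Tent map update: x/mu below the breakpoint, (1-x)/(1-mu) above it.
        x = np.where(x < mu, x / mu, (1.0 - x) / (1.0 - mu))
        seq[i] = x
    return lower + seq * (upper - lower)         # scale to the search bounds

# e.g. candidate (hidden_units, learning_rate) pairs for tuning an LSTM
print(tent_map_population(5, 2, lower=[16, 1e-4], upper=[256, 1e-1]))
```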

14.
Sensors (Basel) ; 21(21)2021 Nov 08.
Article in English | MEDLINE | ID: mdl-34770711

ABSTRACT

Constant monitoring of road surfaces helps reveal how urgently deterioration or construction problems need attention and improves the safety level of the road surface. Conditional generative adversarial networks (cGAN) are a powerful tool to generate or transform the images used for crack detection. The advantage of this method is highly accurate results in vector-based images, which are convenient for later mathematical analysis of the detected cracks. However, images taken under controlled parameters differ from images in real-world contexts. Another potential problem of cGAN is that it is difficult to detect the shape of an object when the resulting accuracy is low, which can seriously affect any further mathematical analysis of the detected crack. To tackle this issue, this paper proposes a method called improved cGAN with attention gate (ICGA) for roadway surface crack detection. To obtain a more accurate shape of the detected target object, ICGA establishes a multi-level model with independent stages. In the first stage, everything except the road is treated as noise and removed from the image; these images are stored in a new dataset. In the second stage, ICGA determines the cracks. Therefore, ICGA focuses on the distribution of cracks, not the auxiliary elements in the image. ICGA adds two attention gates to a U-net architecture and improves the segmentation capacity of the generator in pix2pix. Extensive experimental results on dashboard camera images from the Unsupervised Llamas dataset show that our method performs better than other state-of-the-art methods.
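An attention gate of the kind added to the U-net generator re-weights an encoder skip connection with a gate computed from the decoder signal; the PyTorch sketch below follows the common additive-attention formulation with assumed channel sizes, not necessarily the exact ICGA configuration.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Sketch of an additive attention gate (Attention U-Net style): the decoder
    gating signal g re-weights the encoder skip connection x."""
    def __init__(self, in_channels: int, gating_channels: int, inter_channels: int):
        super().__init__()
        self.theta_x = nn.Conv2d(in_channels, inter_channels, kernel_size=1)
        self.phi_g = nn.Conv2d(gating_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # g is assumed to have been upsampled to x's spatial resolution already.
        attn = torch.relu(self.theta_x(x) + self.phi_g(g))
        attn = torch.sigmoid(self.psi(attn))     # per-pixel gate in [0, 1]
        return x * attn                          # suppress irrelevant skip features

skip = torch.randn(1, 64, 128, 128)   # encoder feature map (skip connection)
gate = torch.randn(1, 128, 128, 128)  # decoder gating signal at same resolution
print(AttentionGate(64, 128, 32)(skip, gate).shape)  # torch.Size([1, 64, 128, 128])
```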


Subjects
Attention; Image Processing, Computer-Assisted
15.
Comput Intell Neurosci ; 2020: 4159241, 2020.
Article in English | MEDLINE | ID: mdl-32908473

ABSTRACT

Emergency response to hazardous gases in the environment is an important research field in environmental monitoring. In recent years, with the rapid development of sensor and mobile device technology, more autonomous search algorithms for hazardous gas emission sources have been proposed for uncertain environments, which keep emergency personnel from coming into close contact with hazardous gases. Infotaxis is an autonomous search strategy that does not require a concentration gradient; it uses scattered sensor data to track the location of the release source in a turbulent environment. This paper addresses the imbalance between exploitation and exploration in the reward function of the Infotaxis algorithm and proposes a movement strategy for three-dimensional scenes. In two-dimensional and three-dimensional scenes, the average number of steps per search task is used as the evaluation criterion to analyze the Infotaxis algorithm combined with different reward functions and movement strategies. The results show that the balance between the exploitation term and the exploration term of the reward function proposed in this paper is better than that of the reward function in the original Infotaxis algorithm, in both two-dimensional and three-dimensional scenes.


Subjects
Robotics; Algorithms
16.
Sensors (Basel) ; 20(16)2020 Aug 07.
Article in English | MEDLINE | ID: mdl-32784692

ABSTRACT

Due to the spectral complexity and high dimensionality of hyperspectral images (HSIs), the processing of HSIs is susceptible to the curse of dimensionality, and classification results with respect to the ground truth are often not ideal. To overcome the curse of dimensionality and improve classification accuracy, an improved spatial-spectral weight manifold embedding (ISS-WME) algorithm, which is based on hyperspectral data with their own manifold structure and local neighbors, is proposed in this study. The manifold structure is constructed using a structural weight matrix and a distance weight matrix. The structural weight matrix is composed of within-class and between-class coefficient representation matrices, obtained using the collaborative representation method. Furthermore, the distance weight matrix integrates the spatial and spectral information of HSIs. The ISS-WME algorithm describes the whole structure of the data through the weight matrix constructed by combining the within-class and between-class matrices with the spatial-spectral information of HSIs, and the nearest-neighbor samples of the data are retained unchanged when embedding into the low-dimensional space. To verify the classification effect of the ISS-WME algorithm, experiments were conducted on three classical data sets, namely Indian Pines, Pavia University, and the Salinas scene. Six dimensionality reduction (DR) methods were used for comparison experiments with different classifiers such as k-nearest neighbor (KNN) and support vector machine (SVM). The experimental results show that the ISS-WME algorithm can represent the HSI structure better than other methods and effectively improves the classification accuracy of HSIs.

17.
Comput Intell Neurosci ; 2020: 6858541, 2020.
Article in English | MEDLINE | ID: mdl-32831819

ABSTRACT

The bird swarm algorithm is one of the swarm intelligence algorithms proposed recently. However, the original bird swarm algorithm has some drawbacks, such as easily falling into local optima and slow convergence. To overcome these shortcomings, a dynamic multi-swarm differential learning quantum bird swarm algorithm that combines three hybrid strategies is established. First, a dynamic multi-swarm bird swarm algorithm is established and a differential evolution strategy is adopted to enhance the randomness of the foraging behavior's movement, which gives the bird swarm algorithm a stronger global exploration capability. Next, quantum behavior is introduced into the bird swarm algorithm to search the solution space more efficiently. Then, the improved bird swarm algorithm is used to optimize the number of decision trees and the number of predictor variables in a random forest classification model. In the experiments, 18 benchmark functions, 30 CEC2014 functions, and 8 UCI datasets are tested to show that the improved algorithm and model are very competitive and outperform the other algorithms and models. Finally, the effective random forest classification model is applied to actual oil logging prediction. As the experimental results show, the three strategies can significantly boost the performance of the bird swarm algorithm, and the proposed learning scheme can guarantee a more stable random forest classification model with higher accuracy and efficiency than others.


Subjects
Algorithms; Biomimetics; Classification/methods; Computer Simulation
18.
Sensors (Basel) ; 19(18)2019 Sep 11.
Article in English | MEDLINE | ID: mdl-31514381

ABSTRACT

The gas sensor array has long been a major tool for measuring gases due to its high sensitivity, quick response, and low power consumption. This goal, however, faces a difficult challenge because of the cross-sensitivity of gas sensors. This paper presents a novel gas mixture analysis method for gas sensor array applications. Features extracted from the raw data using principal component analysis (PCA) were used to build a random forest (RF) model, which enabled qualitative identification. Support vector regression (SVR), optimized by the particle swarm optimization (PSO) algorithm to select the hyperparameters C and γ, was used to establish the optimal regression model for quantitative analysis. We evaluated the effectiveness of our approach on the dataset. Compared with logistic regression (LR) and the support vector machine (SVM), the average recognition rate of PCA combined with RF was the highest (97%). The fit of the PSO-optimized SVR to gas concentration was better than that of standard SVR and solved the problem of hyperparameter selection.
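The two-stage pipeline (PCA features into a random forest for qualitative identification, SVR for quantitative concentration estimation) can be outlined with scikit-learn; in the sketch below the data are synthetic and the SVR hyperparameters C and gamma are fixed placeholders rather than PSO-selected values.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 16))            # toy sensor-array responses (16 sensors)
gas_type = rng.integers(0, 3, size=120)   # toy qualitative labels (3 gases)
concentration = rng.uniform(0, 100, 120)  # toy quantitative targets (e.g. ppm)

# Qualitative identification: PCA features feed a random forest classifier.
clf = make_pipeline(PCA(n_components=5), RandomForestClassifier(n_estimators=200))
clf.fit(X, gas_type)

# Quantitative analysis: RBF-kernel SVR; C and gamma are placeholder values
# where the paper would use PSO-selected ones.
reg = SVR(kernel="rbf", C=10.0, gamma=0.1)
reg.fit(X, concentration)

print(clf.predict(X[:3]), reg.predict(X[:3]))
```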

19.
Sensors (Basel) ; 19(3)2019 Jan 24.
Article in English | MEDLINE | ID: mdl-30682823

ABSTRACT

Hyperspectral images (HSIs) contain enriched information due to the presence of various bands, which has gained attention over the past few decades. However, the explosive growth in HSIs' scale and dimensions causes the "curse of dimensionality" and the "Hughes phenomenon". Dimensionality reduction has become an important means of overcoming the curse of dimensionality. In hyperspectral images, labeled samples are difficult to collect because they require considerable labor and material resources. Semi-supervised dimensionality reduction is therefore very important in mining high-dimensional data, given the lack of costly labeled samples. The extension of supervised dimensionality reduction methods to semi-supervised methods is mostly done via graphs, which are a powerful tool for characterizing data relationships and manifold exploration. To take advantage of the spatial information of the data, we put forward a novel graph construction method for semi-supervised learning, called SLIC Superpixel-based l2,1-norm Robust Principal Component Analysis (SURPCA2,1), which integrates the superpixel segmentation method Simple Linear Iterative Clustering (SLIC) into low-rank decomposition. First, the SLIC algorithm is adopted to obtain the spatially homogeneous regions of the HSI. Then, the l2,1-norm RPCA is exploited in each superpixel area, which captures the global information of homogeneous regions and preserves the spectral subspace segmentation of HSIs very well. Therefore, we explore the spatial and spectral information of the hyperspectral image simultaneously by combining superpixel segmentation with RPCA. Finally, a semi-supervised dimensionality reduction framework based on the SURPCA2,1 graph is used for the feature extraction task. Extensive experiments on multiple HSIs showed that the proposed spectral-spatial SURPCA2,1 is consistently comparable to other compared graphs with few labeled samples.

20.
Comput Intell Neurosci ; 2017: 3678487, 2017.
Article in English | MEDLINE | ID: mdl-28912801

ABSTRACT

Unlike the Support Vector Machine (SVM), Multiple Kernel Learning (MKL) allows datasets to freely choose useful kernels based on their distribution characteristics rather than a single fixed kernel. It has been shown in the literature that MKL achieves superior recognition accuracy compared with SVM, however, at the expense of time-consuming computations. This creates analytical and computational difficulties in solving MKL algorithms. To overcome this issue, we first develop a novel kernel approximation approach for MKL and then propose an efficient Low-Rank MKL (LR-MKL) algorithm using the Low-Rank Representation (LRR). It is well acknowledged that LRR can reduce dimension while retaining the data features under a global low-rank constraint. Furthermore, we redesign the binary-class MKL as a multiclass MKL based on a pairwise strategy. Finally, the recognition effectiveness and efficiency of LR-MKL are verified on the Yale, ORL, LSVT, and Digit datasets. Experimental results show that the proposed LR-MKL algorithm is an efficient kernel weight allocation method in MKL and largely boosts the performance of MKL.
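At its core, MKL replaces a single kernel with a weighted combination of base kernels; the sketch below fixes the weights by hand purely for illustration, whereas an MKL solver such as the proposed LR-MKL would learn them from the data.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel, polynomial_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 8)), rng.integers(0, 2, size=60)   # toy data

# A combined kernel is a weighted sum of base Gram matrices; the weights here
# are hand-picked assumptions, not learned kernel weights.
kernels = [rbf_kernel(X, gamma=0.5), linear_kernel(X), polynomial_kernel(X, degree=2)]
weights = np.array([0.5, 0.3, 0.2])
K = sum(w * k for w, k in zip(weights, kernels))

svm = SVC(kernel="precomputed")
svm.fit(K, y)
print(svm.predict(K[:5]))   # rows of K(test, train); here the first five training samples
```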


Subjects
Algorithms; Datasets as Topic; Support Vector Machine