Results 1 - 20 of 34
1.
Article in English | MEDLINE | ID: mdl-31944999

ABSTRACT

Group recommendation has recently received much attention in the recommender system community. Several deep-learning-based methods are currently used in group recommendation to learn groups' preferences on items and predict the next items in which groups may be interested. However, their recommendation effectiveness remains unsatisfactory. To address this challenge, this article proposes a novel model called the multiattention-based group recommendation model (MAGRM). It utilizes multiattention-based deep neural network structures to achieve accurate group recommendation. We train its two closely related modules: vector representation of group features and preference learning for groups on items. The former learns to accurately represent each group's deep semantic features by integrating four subfeatures: group co-occurrence, group description, and external and internal social features. In particular, we employ multiattention networks to capture groups' internal social features. The latter employs a neural attention mechanism to depict preference interactions between each group and its members, and then combines group and item features to accurately learn group preferences on items. Through extensive experiments on two real-world databases, we show that MAGRM remarkably outperforms state-of-the-art methods in solving the group recommendation problem.
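
As a rough illustration of the kind of neural attention the second module describes, aggregating member embeddings into a single group preference vector, consider this minimal numpy sketch. The bilinear scoring form and all names here are assumptions for illustration, not MAGRM's actual architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attend_members(member_embs, item_emb, W):
    # Score each member against the candidate item, then aggregate
    # member embeddings by their softmax attention weights.
    scores = member_embs @ W @ item_emb      # shape (m,)
    weights = softmax(scores)                # shape (m,), sums to 1
    return weights @ member_embs             # shape (d,): group representation

rng = np.random.default_rng(0)
members = rng.normal(size=(3, 4))   # 3 members, 4-dim embeddings
item = rng.normal(size=4)
W = rng.normal(size=(4, 4))         # bilinear scoring matrix (assumed form)
group_vec = attend_members(members, item, W)
```

Because the weights are a softmax, the group vector is a convex combination of member embeddings, so each coordinate stays within the members' range.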

2.
Article in English | MEDLINE | ID: mdl-31880561

ABSTRACT

A deep belief network (DBN) is an efficient learning model for representing unknown data, especially from nonlinear systems. However, it is extremely hard to design a satisfactory DBN with a robust structure because of its traditional dense representation. In addition, backpropagation-based fine-tuning tends to yield poor performance because it is easily trapped in local optima. In this article, we propose a novel DBN model based on adaptive sparse restricted Boltzmann machines (AS-RBM) and partial least squares (PLS) regression fine-tuning, abbreviated as ARP-DBN, to obtain a more robust and accurate model than existing ones. First, an adaptive learning step size is designed to accelerate RBM training, and two regularization terms are introduced into the training process to realize sparse representation. Second, the initial weights derived from AS-RBM are further optimized via layer-by-layer PLS modeling, starting from the output layer and proceeding to the input one. Third, we present a convergence and stability analysis of the proposed method. Finally, our approach is tested on Mackey-Glass time-series prediction, 2-D function approximation, and unknown system identification. Simulation results demonstrate that it has higher learning accuracy and faster learning speed, and can be used to build a more robust model than existing ones.

3.
IEEE Trans Cybern ; 2019 Oct 14.
Article in English | MEDLINE | ID: mdl-31613787

ABSTRACT

This paper investigates a finite-frequency H_/H∞ fault detection method for discrete-time T-S fuzzy systems with unmeasurable premise variables. To minimize the effect of uncertainties on system performance and maximize that of actuator faults on the generated residual, both an H∞ disturbance attenuation index and a finite-frequency H_ fault sensitivity index are utilized. Since the premise variables are unmeasurable, the existing generalized Kalman-Yakubovich-Popov lemma cannot be directly extended to these nonlinear systems. In this paper, conditions that allow one to design the proposed H_/H∞ fault detection observer are established and transformed into linear matrix inequalities. Scalars and slack matrices are introduced to bring extra degrees of freedom into the observer design. Finally, a single-link robotic manipulator model is utilized to illustrate that the proposed technique can detect faults of smaller amplitude than a normal H∞ observer technique requires.

4.
Article in English | MEDLINE | ID: mdl-31514161

ABSTRACT

Domain adaptation (DA) is widely used in learning problems that lack labels. Recent studies show that deep adversarial DA models, which include symmetric and asymmetric architectures, can achieve marked performance improvements. However, the former have poor generalization ability, whereas the latter are very hard to train. In this article, we propose a novel adversarial DA method, named adversarial residual transform networks (ARTNs), to improve generalization ability; it directly transforms source features into the space of target features. In this model, residual connections are used to share features, and the adversarial loss is reconstructed, making the model more general and easier to train. Moreover, a special regularization term is added to the loss function to alleviate the vanishing gradient problem, which stabilizes the training process. A series of experiments on the Amazon review dataset, the digits datasets, and the Office-31 image dataset shows that the proposed ARTN is competitive with state-of-the-art methods.

5.
IEEE Trans Cybern ; 2019 Jul 10.
Article in English | MEDLINE | ID: mdl-31295134

ABSTRACT

In recent years, image processing in the Euclidean domain has been well studied. However, practical problems in computer vision and geometric modeling involve image data defined on irregular domains, which can be modeled by huge graphs. In this paper, a wavelet frame-based fuzzy C-means (FCM) algorithm for segmenting images on graphs is presented. To enhance its robustness, images on graphs are first filtered using spatial information. Since a real image usually exhibits a sparse approximation under a tight wavelet frame system, feature spaces of images on graphs can be obtained. Combining the original and filtered feature sets, this paper uses the FCM algorithm to segment images on graphs contaminated by noise of different intensities. Finally, supporting numerical experiments and comparisons with other FCM-related algorithms are provided. Experimental results on synthetic and real images on graphs demonstrate that the proposed algorithm is effective and efficient, and segments images on graphs better than other improved FCM algorithms in the literature. The approach effectively removes noise and retains the feature details of images on graphs, offering a new avenue for segmenting images on irregular domains.
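
For reference, the plain FCM iteration that the paper builds on alternates membership and center updates. A minimal Euclidean-domain version (not the graph/wavelet-frame variant described above) looks like:

```python
import numpy as np

def fcm(X, c=2, m=2.0, iters=50, seed=0):
    """Plain fuzzy C-means: alternate center and membership updates."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1.0))              # u_ik = d_ik^-p / sum_j d_ij^-p
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated synthetic clusters.
X = np.vstack([np.random.default_rng(1).normal(0, 0.1, (20, 2)),
               np.random.default_rng(2).normal(3, 0.1, (20, 2))])
U, centers = fcm(X)
labels = U.argmax(axis=1)
```

The membership exponent m > 1 controls fuzziness; m = 2 is the common default.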

6.
IEEE Trans Image Process ; 28(12): 6091-6102, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31251187

ABSTRACT

Large-deformation image registration is important in both the theory and application of computer vision, but is a difficult task for non-rigid registration methods. In this paper, we propose a structural Tensor and Driving force-based Log-Demons algorithm, named TDLog-Demons for short. The structural tensor of an image is used to obtain a highly accurate deformation field. The driving force is introduced to solve the large-deformation registration issue that often causes Log-Demons to become trapped in local minima. It is defined as a point correspondence obtained via multisupport-region-order-based gradient histogram descriptor matching on an image's boundary points, and is integrated, in an exponentially decreasing form, with the velocity field of Log-Demons to move the points accurately and to speed up the registration process. Consequently, the driving force-based Log-Demons can deal well with large-deformation image registration. Extensive experiments demonstrate that TDLog-Demons not only captures large deformations with high accuracy but also yields a smooth deformation field.

7.
IEEE Trans Cybern ; 2019 Apr 04.
Article in English | MEDLINE | ID: mdl-30969935

ABSTRACT

Quality-of-service (QoS) data vary over time, making it vital to capture the temporal patterns hidden in such dynamic data in order to predict missing values with high accuracy. However, current latent factor (LF) analysis-based QoS predictors are mostly defined on static QoS data without considering such temporal dynamics. To address this issue, this paper presents a biased non-negative latent factorization of tensors (BNLFT) model for temporal pattern-aware QoS prediction. Its main idea is fourfold: 1) incorporating linear biases into the model to describe QoS fluctuations; 2) constraining the model to be non-negative to reflect QoS non-negativity; 3) deducing a single LF-dependent, non-negative, and multiplicative update scheme for training the model; and 4) incorporating an alternating direction method into the model for faster convergence. Empirical studies on two dynamic QoS datasets from real applications show that, compared with state-of-the-art QoS predictors, BNLFT represents temporal patterns more precisely with high computational efficiency, thereby achieving the most accurate predictions for missing QoS data.

8.
IEEE Trans Cybern ; 2019 Feb 27.
Article in English | MEDLINE | ID: mdl-30835233

ABSTRACT

High-dimensional and sparse (HiDS) matrices are commonly seen in big-data-related industrial applications like recommender systems. Latent factor (LF) models have proven to be accurate and efficient in extracting hidden knowledge from them. However, they mostly fail to fulfill the non-negativity constraints that describe the non-negative nature of many industrial data. Moreover, existing models suffer from a slow convergence rate. An alternating-direction-method-of-multipliers-based non-negative LF (AMNLF) model decomposes the task of non-negative LF analysis on a HiDS matrix into small subtasks, each solved based on the latest solutions to the previously solved ones, thereby achieving fast convergence and high prediction accuracy for missing data. This paper theoretically analyzes the characteristics of the AMNLF model and presents detailed empirical studies of its performance on nine HiDS matrices from industrial applications currently in use. Its capability of addressing HiDS matrices is thus justified in both theory and practice.

9.
IEEE Trans Cybern ; 49(5): 1944-1955, 2019 May.
Article in English | MEDLINE | ID: mdl-29993706

ABSTRACT

Rescheduling is a necessary procedure for a flexible job shop when newly arrived priority jobs must be inserted into an existing schedule. Instability measures the amount of change made to the existing schedule and is an important metric for evaluating the quality of rescheduling solutions. This paper focuses on a flexible job-shop rescheduling problem (FJRP) for new job insertion. First, it formulates the FJRP for new job insertion arising from pump remanufacturing. It deals with bi-objective FJRPs that minimize: 1) instability and 2) one of the following indices: a) makespan; b) total flow time; c) machine workload; and d) total machine workload. Next, it discretizes a novel and simple metaheuristic, named Jaya, resulting in DJaya, and improves it to solve the FJRP. Two simple heuristics are employed to initialize high-quality solutions. Then, it proposes five objective-oriented local search operators and four ensembles of them to improve the performance of DJaya. Finally, it performs experiments on seven real-life cases of different scales from pump remanufacturing and compares DJaya with some state-of-the-art algorithms. The results show that DJaya is effective and efficient in solving the concerned FJRPs.
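
DJaya discretizes the continuous Jaya rule for job-shop solutions. The continuous form it starts from is easy to sketch; below is a generic minimal version on a toy objective, not the paper's discretized job-shop variant:

```python
import numpy as np

def sphere(x):
    """Toy objective to minimize."""
    return float(np.sum(x * x))

def jaya_step(pop, fit, rng):
    """One Jaya iteration (Rao's update rule): move toward the best solution
    and away from the worst; greedy replacement keeps a candidate only if
    it improves that individual's fitness."""
    best = pop[np.argmin(fit)]
    worst = pop[np.argmax(fit)]
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    cand = pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))
    cand_fit = np.array([sphere(x) for x in cand])
    better = cand_fit < fit
    pop[better] = cand[better]
    fit[better] = cand_fit[better]
    return pop, fit

rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, (20, 3))
fit = np.array([sphere(x) for x in pop])
for _ in range(200):
    pop, fit = jaya_step(pop, fit, rng)
```

Jaya's appeal, which the abstract leans on, is that it has no algorithm-specific control parameters beyond population size and iteration count.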

10.
IEEE Trans Cybern ; 49(6): 2011-2021, 2019 Jun.
Article in English | MEDLINE | ID: mdl-29994037

ABSTRACT

A particle swarm optimizer (PSO) is a population-based optimization technique applied to a wide range of problems. In the literature, many PSO variants have been proposed to deal with noise-free or noisy environments, respectively. In real-life applications, however, noise emerges irregularly and unpredictably. As a result, a PSO designed for a noise-free environment loses accuracy when noise exists, while a PSO designed for a noisy environment wastes its resampling resources when noise does not exist. Handling such scenarios requires a PSO variant that works well in both noise-free and noisy environments, which, to the authors' best knowledge, does not yet exist. To fill this gap, this work proposes a novel PSO variant named dual-environmental PSO (DEPSO). It uses a weighted search center based on the top-k elite particles to guide the swarm. It averages their positions rather than resampling particles' fitness values to achieve noise reduction, which challenges the indispensable role of resampling in noisy environments while also adapting to noise-free ones. Two theoretical analyses are presented, for noise reduction and for finer local optimization capability. Experimental results on the CEC2013 benchmark functions indicate that DEPSO outperforms state-of-the-art PSO variants in both noise-free and noisy environments.
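
The weighted search center can be sketched as follows. The rank-based weights are an assumption for illustration, since the abstract does not specify DEPSO's exact weighting scheme:

```python
import numpy as np

def weighted_search_center(positions, fitness, k=5):
    """Average the positions of the top-k elite particles, weighting better
    particles more heavily (rank-based weights are an illustrative choice).
    Averaging positions, rather than resampling fitness, is what smooths
    out noise in the guidance signal."""
    order = np.argsort(fitness)[:k]            # minimization: lowest fitness = best
    ranks = np.arange(k, 0, -1).astype(float)  # best elite gets weight k, ..., worst gets 1
    w = ranks / ranks.sum()
    return w @ positions[order]

rng = np.random.default_rng(0)
pos = rng.normal(size=(30, 2))
fit = (pos ** 2).sum(axis=1)                   # sphere fitness, optimum at the origin
center = weighted_search_center(pos, fit)
```

Because the center averages only elites, it lies closer to the optimum than a typical particle, which is what lets it guide the swarm.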

11.
IEEE Trans Neural Netw Learn Syst ; 29(9): 4152-4165, 2018 09.
Article in English | MEDLINE | ID: mdl-29990027

ABSTRACT

A support vector machine (SVM) plays a prominent role in classic machine learning, especially in classification and regression. Through structural risk minimization, it has earned a good reputation for effectively reducing overfitting, avoiding the curse of dimensionality, and not falling into local minima. Nevertheless, existing SVMs do not perform well when facing class imbalance and large-scale samples. Undersampling is a plausible way to address imbalanced problems, but it suffers from soaring computational complexity and reduced accuracy because of its enormous number of iterations and its random sampling process. To improve SVM classification performance on imbalanced data, this work proposes a weighted undersampling (WU) scheme based on space geometry distance, producing an improved algorithm named WU-SVM. In WU-SVM, majority samples are grouped into subregions (SRs) and assigned different weights according to their Euclidean distance to the hyperplane. Samples in an SR with a higher weight have a greater chance of being sampled and used in each learning iteration, so as to retain the data distribution information of the original data set as much as possible. Comprehensive experiments test WU-SVM on 21 binary-class and six multiclass publicly available data sets. The results show that it well outperforms state-of-the-art methods in terms of three popular metrics for imbalanced classification, i.e., area under the curve, F-measure, and G-mean.
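
A minimal sketch of the distance-weighted undersampling idea follows. The 1/(1+d) weighting and the fixed hyperplane are illustrative assumptions; the paper groups samples into subregions and derives its own weights:

```python
import numpy as np

def weighted_undersample(X_maj, w, b, n_keep, rng):
    """Keep n_keep majority samples, favoring those nearer the separating
    hyperplane w.x + b = 0 (a simplified stand-in for the paper's
    subregion-based weighting)."""
    dist = np.abs(X_maj @ w + b) / np.linalg.norm(w)
    weights = 1.0 / (1.0 + dist)     # closer to the boundary -> higher weight
    p = weights / weights.sum()
    idx = rng.choice(len(X_maj), size=n_keep, replace=False, p=p)
    return X_maj[idx]

rng = np.random.default_rng(0)
X_maj = rng.normal(0.0, 1.0, (200, 2))   # synthetic majority class
w, b = np.array([1.0, -1.0]), 0.0        # assumed hyperplane for illustration
X_kept = weighted_undersample(X_maj, w, b, 50, rng)
```

Samples near the boundary are the ones most likely to become support vectors, which is why weighting them up preserves classification-relevant structure while shrinking the majority class.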

12.
IEEE Trans Cybern ; 48(3): 890-903, 2018 Mar.
Article in English | MEDLINE | ID: mdl-28391215

ABSTRACT

Discovering and tracking spatiotemporal event patterns have many applications. For example, in a smart-home project, a set of spatiotemporal pattern learning automata is used to monitor a user's repetitive activities, promoting the home's automaticity while reducing some of the user's burdens. Existing algorithms for spatiotemporal event pattern recognition in dynamic noisy environments are based on fixed-structure stochastic automata, whose state transition function is fixed and predesigned to guarantee immunity to noise. However, such a design is conservative because it needs continuous and identical feedback to converge, leading to a very low convergence rate. In many real-life applications, such as ambient assisted living, consecutive nonoccurrences of an elderly resident's routine activities should trigger an alert as quickly as possible. On the other hand, no alert should be output even for some occurrences, in order to diminish the effects caused by noise. Clearly, when confronting a pattern change, slow speed and low accuracy may degrade a user's safety. This paper proposes a fast and accurate learning automaton based on variable-structure stochastic automata to satisfy realistic requirements for both speed and accuracy. Bias toward alert is necessary for elderly residents, while the existing method can only support bias toward "no alert"; this paper introduces a method that allows bias toward either alert or no alert to meet a user's specific bias requirement. Experimental results show its better performance than state-of-the-art methods.

13.
IEEE Trans Cybern ; 48(4): 1216-1228, 2018 Apr.
Article in English | MEDLINE | ID: mdl-28422674

ABSTRACT

Generating highly accurate predictions for missing quality-of-service (QoS) data is an important issue. Latent factor (LF)-based QoS predictors have proven effective in dealing with it. However, they rely on first-order solvers that cannot well address their target problem, which is inherently bilinear and nonconvex, thereby leaving significant room for accuracy improvement. This paper proposes to incorporate an efficient second-order solver into them to raise their accuracy. To do so, we adopt the principle of Hessian-free optimization and avoid direct manipulation of the Hessian matrix by employing the efficiently obtainable product between its Gauss-Newton approximation and an arbitrary vector. Thus, second-order information is innovatively integrated into these predictors. Experimental results on two industrial QoS datasets indicate that, compared with state-of-the-art predictors, the newly proposed one achieves significantly higher prediction accuracy at an affordable computational cost. Hence, it is especially suitable for industrial applications requiring highly accurate prediction of unknown QoS data.
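
The Hessian-free trick described above, multiplying a vector by the Gauss-Newton matrix J^T J without ever forming it, can be illustrated generically. This is a toy sketch of the principle, not the paper's QoS-specific solver:

```python
import numpy as np

def gauss_newton_vec(J, v):
    # Two matrix-vector products instead of forming the k-by-k matrix J.T @ J.
    # For huge parameter counts, never materializing the Gauss-Newton matrix
    # is the memory- and time-saving core of Hessian-free methods.
    return J.T @ (J @ v)

rng = np.random.default_rng(0)
J = rng.normal(size=(100, 8))   # Jacobian of 100 residuals w.r.t. 8 parameters
v = rng.normal(size=8)
hv = gauss_newton_vec(J, v)
```

In practice such products feed a conjugate-gradient inner loop, which only ever needs matrix-vector products with the Gauss-Newton approximation.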

14.
IEEE/ACM Trans Comput Biol Bioinform ; 15(4): 1365-1378, 2018.
Article in English | MEDLINE | ID: mdl-28534784

ABSTRACT

The problem of predicting the three-dimensional (3-D) structure of a protein from its one-dimensional sequence has been called the "holy grail of molecular biology", and it has become an important part of structural genomics projects. Despite rapid developments in computer technology and computational intelligence, it remains challenging and fascinating. In this paper, we propose a multi-objective evolutionary algorithm to solve it. We decompose the protein energy function of the Chemistry at HARvard Macromolecular Mechanics (CHARMM) force field into bond and non-bond energies as the first and second objectives. Considering the effect of solvent, we innovatively adopt the solvent-accessible surface area as the third objective. We use 66 benchmark proteins to verify the proposed method and obtain better or competitive results in comparison with existing methods. The results suggest the necessity of incorporating the effect of solvent into a multi-objective evolutionary algorithm to improve protein structure prediction in terms of accuracy and efficiency.


Subjects
Algorithms; Computational Biology/methods; Protein Conformation; Proteins; Databases, Protein; Hydrophobic and Hydrophilic Interactions; Models, Molecular; Proteins/chemistry; Proteins/genetics; Solvents; Water
15.
IEEE Trans Cybern ; 47(12): 4263-4274, 2017 Dec.
Article in English | MEDLINE | ID: mdl-28113413

ABSTRACT

Under-sampling is a popular data preprocessing method for dealing with class imbalance problems; it balances datasets to achieve a high classification rate and avoids bias toward majority-class examples. It always uses the full minority data in a training dataset. However, noisy minority examples may reduce the performance of classifiers. In this paper, a new under-sampling scheme is proposed that incorporates a noise filter before executing resampling. To verify its efficiency, the scheme is implemented on top of four popular under-sampling methods, i.e., Undersampling + AdaBoost, RUSBoost, UnderBagging, and EasyEnsemble, through benchmarks and significance analysis. Furthermore, this paper also summarizes the relationship between algorithm performance and the imbalance ratio. Experimental results indicate that the proposed scheme significantly improves the original undersampling-based methods in terms of three popular metrics for imbalanced classification, i.e., the area under the curve, F-measure, and G-mean.
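
The "noise filter before resampling" idea can be sketched as follows, using an edited-nearest-neighbour style filter as an illustrative choice (not necessarily the paper's filter) followed by plain random undersampling of the majority class:

```python
import numpy as np

def enn_filter(X, y, k=3):
    """Drop samples whose k nearest neighbours mostly carry a different
    label; such samples are treated as label noise."""
    keep = []
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d)[1:k + 1]          # skip the point itself
        if (y[nn] == y[i]).sum() >= k / 2:   # majority of neighbours agree
            keep.append(i)
    return np.array(keep)

def undersample(X, y, rng):
    """Noise-filter first, then randomly undersample the majority class
    down to the (filtered) minority-class size."""
    keep = enn_filter(X, y)
    X, y = X[keep], y[keep]
    maj, mino = (0, 1) if (y == 0).sum() >= (y == 1).sum() else (1, 0)
    maj_idx = np.flatnonzero(y == maj)
    sel = rng.choice(maj_idx, size=(y == mino).sum(), replace=False)
    idx = np.concatenate([sel, np.flatnonzero(y == mino)])
    return X[idx], y[idx]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(3, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)            # 9:1 imbalance
Xb, yb = undersample(X, y, rng)
```

Filtering before resampling means a noisy example cannot survive into every balanced subset, which is the failure mode the paper targets.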

16.
IEEE Trans Cybern ; 47(11): 3658-3668, 2017 Nov.
Article in English | MEDLINE | ID: mdl-27411233

ABSTRACT

The economy of scale provided by the cloud attracts a growing number of organizations and industrial companies to deploy their applications in cloud data centers (CDCs) and provide services to users around the world. The uncertainty of arriving tasks makes it a big challenge for a private CDC to cost-effectively schedule delay-bounded tasks without exceeding their delay bounds. Unlike previous studies, this paper considers the cost minimization problem for a private CDC in hybrid clouds, where the energy price of the private CDC and the execution price of public clouds both show temporal diversity. It proposes a temporal task scheduling algorithm (TTSA) to effectively dispatch all arriving tasks to the private CDC and public clouds. In each iteration of TTSA, the cost minimization problem is modeled as a mixed integer linear program and solved by a hybrid simulated-annealing particle-swarm optimization. The experimental results demonstrate that, compared with existing methods, the optimal or suboptimal scheduling strategy produced by TTSA efficiently increases the throughput and reduces the cost of the private CDC while meeting the delay bounds of all tasks.

17.
IEEE Trans Neural Netw Learn Syst ; 27(3): 579-92, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26011893

ABSTRACT

Nonnegative matrix factorization (NMF)-based models possess fine representativeness of a target matrix, which is critically important in collaborative filtering (CF)-based recommender systems. However, current NMF-based CF recommenders suffer from high computational and storage complexity, as well as a slow convergence rate, which prevents their industrial use in the context of big data. To address these issues, this paper proposes an alternating direction method (ADM)-based nonnegative latent factor (ANLF) model. The main idea is to apply ADM-based optimization to each single feature, to obtain a high convergence rate as well as low complexity. Both the computational and storage costs of ANLF are linear in the size of the given data in the target matrix, which ensures high efficiency when dealing with the extremely sparse matrices usually seen in CF problems. As demonstrated by experiments on large, real data sets, ANLF also ensures fast convergence and high prediction accuracy, while maintaining the nonnegativity constraints. Moreover, it is simple and easy to implement in real learning-system applications.
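
The kind of nonnegativity-preserving factorization ANLF performs can be illustrated with classic multiplicative updates restricted to observed entries. This is a simplified stand-in; the paper's actual scheme is ADM-based and operates feature by feature:

```python
import numpy as np

def nonneg_mf(R, mask, k=2, iters=500, seed=0):
    """Multiplicative-update NMF on observed entries only. Factors stay
    nonnegative because every update multiplies a nonnegative factor by a
    ratio of nonnegative terms."""
    rng = np.random.default_rng(seed)
    m, n = R.shape
    P = rng.random((m, k)) + 0.1
    Q = rng.random((n, k)) + 0.1
    Rm = R * mask
    for _ in range(iters):
        pred = (P @ Q.T) * mask
        P *= (Rm @ Q) / (pred @ Q + 1e-12)
        pred = (P @ Q.T) * mask
        Q *= (Rm.T @ P) / (pred.T @ P + 1e-12)
    return P, Q

R = np.array([[5., 3., 0.],      # 0 marks a missing rating in this toy matrix
              [4., 0., 1.],
              [1., 1., 5.]])
mask = (R > 0).astype(float)
P, Q = nonneg_mf(R, mask)
pred = P @ Q.T                   # filled-in predictions, including missing cells
```

Restricting the loss to observed entries is what makes this usable on the extremely sparse matrices CF deals with; a production version would iterate over a sparse entry list rather than dense masked matrices.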

18.
IEEE Trans Cybern ; 46(11): 2435-2446, 2016 Nov.
Article in English | MEDLINE | ID: mdl-26469851

ABSTRACT

Disassembly modeling and planning are meaningful and important for the reuse, recovery, and recycling of obsolete and discarded products. However, existing methods pay little or no attention to resource constraints, e.g., disassembly operators and tools, so a resulting plan may be ineffective when executed in actual product disassembly. This paper proposes to model and optimize selective disassembly sequences subject to multiresource constraints so as to maximize disassembly profit. Moreover, two scatter search algorithms with different combination operators, one with a precedence-preserving crossover operator and the other with a path-relinking operator, are designed to solve the proposed model. Their validity is shown by comparing them with the optimization results of the well-known solver CPLEX on different cases. The experimental results illustrate the effectiveness of the proposed method.

19.
IEEE Trans Cybern ; 46(9): 2083-93, 2016 Sep.
Article in English | MEDLINE | ID: mdl-26292356

ABSTRACT

Motivated by neuroscience discoveries of the last few years, many studies consider pulse-coupled neural networks with spike timing as an essential component of information processing by the brain. There also exist technical challenges in simulating networks of artificial spiking neurons. Existing studies use a Hodgkin-Huxley (H-H) model to describe the spiking dynamics and neuro-computational properties of each neuron, but they fail to address the effect of specific non-Gaussian noise on an artificial H-H neuron system. This paper analyzes how an artificial H-H neuron responds to the addition of different types of noise, using electrical-current and subunit noise models. The spiking and bursting behavior of this neuron is also investigated through numerical simulations. In addition, through statistical analysis, the intensity of different kinds of noise distributions is discussed to obtain their relationship with the mean firing rate, interspike intervals, and stochastic resonance.


Subjects
Action Potentials/physiology; Models, Neurologic; Neurons/physiology; Brain/physiology; Humans; Nerve Net/physiology; Stochastic Processes
20.
IEEE Trans Neural Netw Learn Syst ; 27(3): 524-37, 2016 Mar.
Article in English | MEDLINE | ID: mdl-25910255

ABSTRACT

Automatic Web-service selection is an important research topic in the domain of service computing. During this process, reliable predictions of quality of service (QoS) based on historical service invocations are vital to users. This work aims to make highly accurate predictions for missing QoS data by building an ensemble of nonnegative latent factor (NLF) models. Its motivations are: 1) fulfilling nonnegativity constraints can better represent the positive-valued nature of QoS data, thereby boosting prediction accuracy; and 2) since QoS prediction is a learning task, a carefully designed ensemble model promises to further improve prediction accuracy. To achieve this, we first implement an NLF model for QoS prediction. This model is then diversified through feature sampling and randomness injection to form a diversified NLF model, on which an ensemble is built. Comparison results between the proposed ensemble and several widely employed, state-of-the-art QoS predictors on two large, real data sets demonstrate that the former well outperforms the latter in prediction accuracy.
