Results 1 - 10 of 10
1.
Article in English | MEDLINE | ID: mdl-37999965

ABSTRACT

Concept-cognitive learning is an emerging area of cognitive computing that refers to continuously learning new knowledge by imitating the human cognition process. However, existing research on concept-cognitive learning still operates at the level of complete cognition and cognitive operators, which is far from the real cognition process. Meanwhile, current classification algorithms based on concept-cognitive learning models (CCLMs) are not yet mature, since their cognitive results depend heavily on the cognition order of attributes. To address these problems, this article presents a novel concept-cognitive learning method, the stochastic incremental incomplete concept-cognitive learning method (SI2CCLM), whose cognition process adopts a stochastic strategy that is independent of the order of attributes. Moreover, a new classification algorithm based on SI2CCLM is developed, and its parameters and convergence are analyzed. Finally, we show the cognitive effectiveness of SI2CCLM by comparing it with other concept-cognitive learning methods. In addition, the average accuracy of our model on 24 datasets is 82.02%, higher than that of the 20 compared classification algorithms, and our model is also competitive in elapsed time.
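Concept-cognitive learning builds on formal concept analysis, where knowledge is organized as (extent, intent) pairs derived from an object-attribute context. The following is a minimal sketch of the two derivation operators on a toy context; the context itself is hypothetical and not taken from the paper, and the paper's stochastic incremental strategy is not reproduced here.

```python
# Derivation operators of formal concept analysis on a toy context.
# The context below is a hypothetical example, not data from the paper.

def intent(objects, context):
    """Attributes shared by every object in the set."""
    if not objects:
        return set(context["attributes"])
    return set.intersection(*(context["incidence"][o] for o in objects))

def extent(attributes, context):
    """Objects possessing every attribute in the set."""
    return {o for o in context["objects"]
            if attributes <= context["incidence"][o]}

# Toy formal context: which objects have which attributes.
context = {
    "objects": {"g1", "g2", "g3"},
    "attributes": {"a", "b", "c"},
    "incidence": {"g1": {"a", "b"}, "g2": {"b", "c"}, "g3": {"a", "b", "c"}},
}

# A pair (X, B) with intent(X) == B and extent(B) == X is a formal concept.
X = {"g1", "g3"}
B = intent(X, context)            # {"a", "b"}
print(B, extent(B, context))      # extent({"a","b"}) == {"g1","g3"}
```

Applying extent after intent always yields a closed set of objects, which is the fixed-point property incremental concept learners exploit.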

2.
IEEE Trans Pattern Anal Mach Intell ; 45(4): 4051-4070, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35849673

ABSTRACT

Generalized zero-shot learning (GZSL) aims to train a model for classifying data samples under the condition that some output classes are unknown during supervised learning. To address this challenging task, GZSL leverages semantic information of the seen (source) and unseen (target) classes to bridge the gap between them. Since its introduction, many GZSL models have been formulated. In this review paper, we present a comprehensive review of GZSL. First, we provide an overview of GZSL, including its problems and challenges. Then, we introduce a hierarchical categorization of GZSL methods and discuss the representative methods in each category. In addition, we discuss the available benchmark data sets and applications of GZSL, as well as research gaps and directions for future investigation.
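The core idea the survey describes, bridging seen and unseen classes through shared semantic information, can be sketched as attribute-based nearest-prototype prediction: map an input into attribute space and pick the class whose attribute vector is closest. The attribute vectors and the identity projection below are toy assumptions for illustration, not from the survey.

```python
import numpy as np

# Per-class semantic (attribute) vectors: [striped, four-legged, flies].
# These toy values are illustrative assumptions.
class_attributes = {
    "zebra": np.array([1.0, 1.0, 0.0]),
    "horse": np.array([0.0, 1.0, 0.0]),
    "eagle": np.array([0.0, 0.0, 1.0]),
}

W = np.eye(3)   # stand-in visual-to-semantic projection (identity for the demo)

def predict(x):
    """Project the sample into attribute space, return the nearest class."""
    s = W @ x
    return min(class_attributes,
               key=lambda c: np.linalg.norm(s - class_attributes[c]))

print(predict(np.array([0.9, 1.1, 0.1])))   # -> "zebra"
```

Because prediction only needs a class's attribute vector, classes never observed during training ("unseen" classes) can still be predicted, which is exactly the gap GZSL methods try to bridge more robustly.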

3.
IEEE Trans Neural Netw Learn Syst ; 34(10): 6898-6912, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35737612

ABSTRACT

Dominance-based rough approximation discovers inconsistencies in ordered criteria and satisfies the dominance principle between single-valued domains of condition attributes and decision classes. When the ordered decision system (ODS) is no longer single-valued, how to apply the dominance principle to multivalued ordered data is a promising research direction, and designing a feature selection algorithm for the interval-valued ODS (IV-ODS) is its most challenging step. In this article, we first present novel thresholds of interval dominance degree (IDD) and interval overlap degree (IOD) between interval values to make the dominance principle applicable to an IV-ODS; the interval-valued dominance relation in the IV-ODS is then constructed using these two parameters. Based on the proposed interval-valued dominance relation, the interval-valued dominance-based rough set approach (IV-DRSA) and its corresponding properties are investigated. Moreover, interval-dominance-based feature selection rules based on IV-DRSA are provided, and algorithms for deriving the interval-valued dominance relation and for feature selection in an IV-ODS are established. To illustrate the effect of parameter variation on the feature selection rules, an experimental evaluation is performed on 12 datasets from the University of California-Irvine (UCI) repository.
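To make the two parameters concrete, here is a hedged sketch of how a dominance degree and an overlap degree between two intervals might be computed. The dominance formula below is the widely used possibility degree for interval comparison and the overlap formula is a simple intersection ratio; whether these match the paper's exact IDD and IOD definitions is an assumption.

```python
# Illustrative interval comparison measures; the exact IDD/IOD definitions
# in the paper may differ from these common formulations.

def dominance_degree(a, b):
    """Possibility degree that interval a dominates interval b."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    span = (a_hi - a_lo) + (b_hi - b_lo)
    if span == 0:                       # both intervals degenerate to points
        return 1.0 if a_lo >= b_lo else 0.0
    return max(0.0, min(1.0, (a_hi - b_lo) / span))

def overlap_degree(a, b):
    """Length of the intersection relative to the shorter interval."""
    (a_lo, a_hi), (b_lo, b_hi) = a, b
    inter = max(0.0, min(a_hi, b_hi) - max(a_lo, b_lo))
    shorter = min(a_hi - a_lo, b_hi - b_lo)
    return inter / shorter if shorter > 0 else 0.0

print(dominance_degree((2, 4), (1, 3)))   # 0.75: a mostly dominates b
print(overlap_degree((2, 4), (1, 3)))     # 0.5: half of the shorter interval
```

Thresholding such degrees is what turns pairwise interval comparisons into a crisp dominance relation over which rough approximations can be defined.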

4.
IEEE Trans Cybern ; 48(2): 703-715, 2018 Feb.
Article in English | MEDLINE | ID: mdl-28436910

ABSTRACT

The generalization ability of a classifier learned from a training set usually depends on the classifier's uncertainty, which is often described by the fuzziness of the classifier's outputs on the training set. Since the exact dependency between a classifier's generalization and its uncertainty is quite complicated, it is difficult to express this relation clearly or explicitly in general. This paper studies the relation from the viewpoint of classification complexity, choosing extreme learning machines as the classification algorithms. It concludes that when the complexity of the classification problem is relatively high, generalization ability statistically improves as uncertainty increases, whereas when the complexity is relatively low, generalization ability statistically worsens as uncertainty increases. The paper thus provides some useful guidelines for improving the generalization ability of classifiers by adjusting uncertainty according to problem complexity.
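The fuzziness of a classifier's outputs can be quantified with the standard entropy-based fuzziness measure: for membership values in [0, 1] it peaks at 0.5 and vanishes at crisp outputs of 0 or 1. Whether this is the paper's exact choice of measure is an assumption; the sketch below shows the common formulation.

```python
import numpy as np

def fuzziness(mu):
    """Mean entropy-based fuzziness of membership values mu in [0, 1]."""
    mu = np.clip(np.asarray(mu, dtype=float), 1e-12, 1 - 1e-12)
    h = -(mu * np.log2(mu) + (1 - mu) * np.log2(1 - mu))
    return float(h.mean())

print(fuzziness([0.5, 0.5]))     # maximally uncertain outputs -> 1.0
print(fuzziness([0.01, 0.99]))   # near-crisp outputs -> close to 0
```

The paper's finding can then be read as: on hard problems, training toward higher values of this quantity tends to help test accuracy, while on easy problems it tends to hurt.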

5.
IEEE Trans Cybern ; 45(7): 1262-75, 2015 Jul.
Article in English | MEDLINE | ID: mdl-25291806

ABSTRACT

A key issue in decision tree (DT) induction with continuous-valued attributes is designing an effective strategy for splitting nodes. The traditional approach is to adopt the candidate cut point (CCP) with the highest discriminative ability, as evaluated by frequency-based heuristic measures. However, such methods ignore the class permutation of examples in the node and cannot distinguish CCPs with the same or similar frequency information, and thus may fail to induce a better and smaller tree. In this paper, a new concept, the segment of examples, is proposed to differentiate CCPs with the same frequency information. A new hybrid scheme that combines the two heuristic measures, frequency and segment, is then developed for splitting DT nodes. The relationship between frequency and the expected number of segments, regarded as a random variable, is also given. Experimental comparisons demonstrate that the proposed scheme not only improves generalization capability but also reduces the size of the tree.
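The frequency-based baseline the paper refines can be sketched directly: enumerate candidate cut points as midpoints between consecutive sorted values and score each split by information gain. The segment heuristic the paper adds on top is not reproduced here.

```python
import math

def entropy(labels):
    """Shannon entropy of a list of class labels, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(l) for l in set(labels)))

def best_cut(values, labels):
    """Return (cut point, information gain) of the best binary split."""
    pairs = sorted(zip(values, labels))
    xs = [v for v, _ in pairs]
    ys = [l for _, l in pairs]
    base = entropy(ys)
    best = (None, -1.0)
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue
        cut = (xs[i] + xs[i - 1]) / 2            # candidate cut point (CCP)
        left, right = ys[:i], ys[i:]
        gain = base - (len(left) / len(ys)) * entropy(left) \
                    - (len(right) / len(ys)) * entropy(right)
        if gain > best[1]:
            best = (cut, gain)
    return best

cut, gain = best_cut([1.0, 2.0, 3.0, 10.0, 11.0], ["a", "a", "a", "b", "b"])
print(cut, gain)   # cut 6.5 separates the two classes perfectly
```

Note that any two CCPs producing the same left/right class counts get the same gain here, which is precisely the tie the paper's segment measure is designed to break.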

6.
Neural Netw ; 63: 87-93, 2015 Mar.
Article in English | MEDLINE | ID: mdl-25514097

ABSTRACT

The generalization ability of an extreme learning machine (ELM) can be improved by fusing a number of individual ELMs. This paper proposes a new scheme for fusing ELMs based on upper integrals, which differs from all existing fuzzy-integral models of classifier fusion. The new scheme uses the upper integral to reasonably assign test samples to different ELMs so as to maximize classification efficiency. By solving an optimization problem over upper integrals, we obtain the proportions in which samples are assigned to different ELMs and their combinations. The definition of the upper integral theoretically guarantees that the classification accuracy of the fused ELM is not less than that of any individual ELM. Numerical simulations demonstrate that most existing fusion methodologies, such as Bagging and Boosting, can be improved by our upper integral model.


Subjects
Algorithms; Neural Networks, Computer; Models, Theoretical
7.
IEEE Trans Cybern ; 44(5): 620-35, 2014 May.
Article in English | MEDLINE | ID: mdl-23782843

ABSTRACT

Fusing a number of classifiers can generally improve on the performance of the individual classifiers, and the fuzzy integral, which can clearly express the interaction among individual classifiers, has been acknowledged as an effective fusion tool. To make the best use of the individual classifiers and their combinations, we propose in this paper a new scheme of classifier fusion based on upper integrals, which differs from all existing models. Rather than acting as a fusion operator, the upper integral is used to reasonably arrange finite resources and thus maximize classification efficiency. By solving an optimization problem over upper integrals, we obtain a scheme for assigning proportions of examples to different individual classifiers and their combinations. According to these proportions, new examples are classified by different individual classifiers and their combinations, and the combination to which specific examples should be submitted depends on their performance. The definition of the upper integral theoretically guarantees that the classification efficiency of the fused classifier is not less than that of any individual classifier. Furthermore, numerical simulations demonstrate that most existing fusion methodologies, such as bagging and boosting, can be improved by our upper integral model.
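The resource-arrangement idea behind this scheme (and the ELM variant in the previous entry) can be illustrated with a simple stand-in: distribute fractions of the example stream over classifier units, individuals and combinations alike, so that expected accuracy is maximized. The greedy assignment and the capacity limits below are illustrative assumptions replacing the paper's actual upper-integral optimization.

```python
# Greedy stand-in for the proportion-assignment step; the paper solves an
# upper-integral optimization instead. Accuracies and capacities are toy values.

def arrange(unit_accuracy, capacity):
    """Assign example proportions to units, most accurate units first.

    unit_accuracy: {unit: accuracy}; capacity: {unit: max proportion}.
    """
    remaining, plan = 1.0, {}
    for unit in sorted(unit_accuracy, key=unit_accuracy.get, reverse=True):
        share = min(capacity[unit], remaining)
        if share > 0:
            plan[unit] = share
            remaining -= share
    return plan

accuracy = {"c1": 0.85, "c2": 0.80, "c1+c2": 0.90}
capacity = {"c1": 1.0, "c2": 1.0, "c1+c2": 0.5}   # the combination takes half
plan = arrange(accuracy, capacity)
fused = sum(accuracy[u] * p for u, p in plan.items())
print(plan, fused)   # fused accuracy 0.875 >= best individual (0.85)
```

Because every example can always fall back to the single best unit, the arranged ensemble can never do worse than the best individual classifier, mirroring the guarantee the paper derives from the upper integral's definition.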

8.
IEEE Trans Cybern ; 44(1): 21-39, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23757531

ABSTRACT

An important way to improve the performance of naive Bayesian classifiers (NBCs) is to remove or relax the fundamental assumption of independence among the attributes, which usually means estimating a joint probability density function (p.d.f.) instead of the marginal p.d.f.s used in NBC design. This paper proposes a non-naive Bayesian classifier (NNBC) in which the independence assumption is removed and marginal p.d.f. estimation is replaced by joint p.d.f. estimation. A new technique for estimating the class-conditional p.d.f. based on optimal bandwidth selection, the crucial part of joint p.d.f. estimation, is applied in our NNBC. Three well-known indexes for measuring the performance of Bayesian classifiers, namely classification accuracy, area under the receiver operating characteristic curve, and probability mean square error, are adopted to compare four Bayesian models: normal naive Bayesian, flexible naive Bayesian (FNB), the homologous model of FNB (FNBROT), and our proposed NNBC. The comparative results show that NNBC is statistically superior to the other three models on all three indexes. In comparison with support vector machines and four boosting-based classification methods, NNBC achieves relatively favorable classification accuracy while significantly reducing training time.
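The key move, estimating one joint class-conditional density per class rather than a product of marginals, can be sketched with a fixed-bandwidth Gaussian kernel density estimate. The fixed bandwidth and the synthetic data below are assumptions for illustration; the paper's optimal bandwidth selection is not reproduced.

```python
import numpy as np

def joint_kde(x, data, h=0.5):
    """Gaussian kernel density estimate of the joint p.d.f. at x (data: n x d)."""
    diffs = (data - x) / h
    k = np.exp(-0.5 * np.sum(diffs ** 2, axis=1))
    d = data.shape[1]
    return k.mean() / ((h * np.sqrt(2 * np.pi)) ** d)

def classify(x, class_data):
    """Pick the class maximizing prior * joint class-conditional density."""
    n = sum(len(d) for d in class_data.values())
    return max(class_data,
               key=lambda c: (len(class_data[c]) / n) * joint_kde(x, class_data[c]))

# Synthetic two-class data, illustrative only.
rng = np.random.default_rng(0)
class_data = {
    "A": rng.normal([0, 0], 0.3, size=(50, 2)),
    "B": rng.normal([2, 2], 0.3, size=(50, 2)),
}
print(classify(np.array([0.1, -0.1]), class_data))   # -> "A"
```

A naive Bayesian classifier would instead run a one-dimensional KDE per attribute and multiply the results, which is exactly the independence assumption the NNBC removes.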

9.
IEEE Trans Neural Netw ; 18(5): 1294-305, 2007 Sep.
Article in English | MEDLINE | ID: mdl-18220181

ABSTRACT

The generalization error bounds given by current error models, based on the number of effective parameters of a classifier and the number of training samples, are usually very loose. These bounds are intended for the entire input space. However, the support vector machine (SVM), radial basis function neural network (RBFNN), and multilayer perceptron neural network (MLPNN) are local learning machines that treat unseen samples near the training samples as more important. In this paper, we propose a localized generalization error model that bounds from above the generalization error within a neighborhood of the training samples using a stochastic sensitivity measure. The model is then used to develop an architecture selection technique that maximizes a classifier's coverage of unseen samples subject to a specified generalization error threshold. Experiments on 17 University of California at Irvine (UCI) data sets show that, in comparison with cross-validation (CV), sequential learning, and two other ad hoc methods, our technique consistently yields the best testing classification accuracy with fewer hidden neurons and less training time.
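A stochastic sensitivity measure can be estimated empirically: perturb the training inputs within a small neighborhood and average the squared change in the model's output. The toy models and perturbation radius below are assumptions for illustration, not the paper's RBFNN formulation.

```python
import numpy as np

def stochastic_sensitivity(f, X, radius=0.1, n_draws=200, seed=0):
    """Mean squared output change under uniform input perturbations."""
    rng = np.random.default_rng(seed)
    base = f(X)
    diffs = [(f(X + rng.uniform(-radius, radius, size=X.shape)) - base) ** 2
             for _ in range(n_draws)]
    return float(np.mean(diffs))

smooth = lambda X: np.sum(0.1 * X, axis=1)    # gently sloped toy model
steep = lambda X: np.sum(10.0 * X, axis=1)    # highly input-sensitive toy model

X = np.zeros((20, 3))
print(stochastic_sensitivity(smooth, X) < stochastic_sensitivity(steep, X))  # True
```

Architecture selection then amounts to growing the network only while this locally measured sensitivity keeps the bounded generalization error under the chosen threshold.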


Subjects
Algorithms; Models, Statistical; Neural Networks, Computer; Pattern Recognition, Automated/methods; Computer Simulation; Reproducibility of Results; Sensitivity and Specificity
10.
IEEE Trans Syst Man Cybern B Cybern ; 34(5): 1979-87, 2004 Oct.
Article in English | MEDLINE | ID: mdl-15503494

ABSTRACT

When fuzzy production rules are used for approximate reasoning, interaction exists among rules that share the same consequent. Because of this interaction, the weighted-average model frequently used in approximate reasoning does not work well on many real-world problems. To model and handle this interaction, this paper proposes replacing the weights assigned to rules with the same consequent by a nonadditive, nonnegative set function, and drawing the reasoning conclusion from an integral with respect to this set function rather than from the weighted-average model. Handling interaction in fuzzy production rule reasoning in this way leads to a better understanding of the rule base and improved reasoning accuracy. The paper also investigates how to determine from data a nonadditive set function that cannot be specified by a domain expert.
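Aggregation by an integral with respect to a nonadditive set function can be sketched with the Choquet integral, a standard choice for this purpose; whether the paper uses the Choquet form specifically is an assumption, and the measure values below are illustrative rather than learned from data.

```python
# Choquet integral of rule strengths with respect to a nonadditive set
# function. Measure values are illustrative, not learned as in the paper.

def choquet(values, measure):
    """values: {rule: strength}; measure: {frozenset of rules: weight}."""
    items = sorted(values.items(), key=lambda kv: kv[1])   # ascending strengths
    total, prev = 0.0, 0.0
    for i, (_, v) in enumerate(items):
        coalition = frozenset(r for r, _ in items[i:])     # rules firing >= v
        total += (v - prev) * measure[coalition]
        prev = v
    return total

# Nonadditive: mu({r1, r2}) < mu({r1}) + mu({r2}) models substitutive rules.
measure = {
    frozenset({"r1", "r2"}): 1.0,
    frozenset({"r1"}): 0.8,
    frozenset({"r2"}): 0.7,
}
strengths = {"r1": 0.6, "r2": 0.9}
print(choquet(strengths, measure))   # 0.6*1.0 + (0.9-0.6)*0.7 = 0.81
```

A weighted average with the singleton weights 0.8 and 0.7 (normalized) would give about 0.74 here; the nonadditive measure lets the aggregate reflect that the two rules partially overlap in evidence rather than contributing independently.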


Subjects
Algorithms; Artificial Intelligence; Decision Support Systems, Clinical; Decision Support Techniques; Diagnosis, Computer-Assisted/methods; Fuzzy Logic; Pattern Recognition, Automated; Intelligent Systems; Humans