Results 1 - 20 of 95
1.
Ann N Y Acad Sci ; 1505(1): 79-101, 2021 12.
Article in English | MEDLINE | ID: mdl-34173249

ABSTRACT

Conceptual abstraction and analogy-making are key abilities underlying humans' capacity to learn, reason, and robustly adapt their knowledge to new domains. Despite a long history of research on constructing artificial intelligence (AI) systems with these abilities, no current AI system comes close to forming humanlike abstractions or analogies. This paper reviews the advantages and limitations of several approaches toward this goal, including symbolic methods, deep learning, and probabilistic program induction. The paper concludes with several proposals for designing challenge tasks and evaluation measures in order to make quantifiable and generalizable progress in this area.


Subject(s)
Artificial Intelligence; Pattern Recognition, Automated/methods; Psychomotor Performance; Semantics; Thinking; Artificial Intelligence/trends; Humans; Pattern Recognition, Automated/trends; Psychomotor Performance/physiology; Thinking/physiology
2.
Neural Netw ; 136: 11-16, 2021 Apr.
Article in English | MEDLINE | ID: mdl-33422928

ABSTRACT

In recent times, feature extraction has attracted much attention in the machine learning and pattern recognition fields. This paper extends and improves a scheme for linear feature extraction that can be used in supervised multi-class classification problems. Inspired by recent frameworks for robust sparse LDA and inter-class sparsity, we propose a unifying criterion able to retain the advantages of these two powerful linear discriminant methods. We introduce an iterative alternating minimization scheme in order to estimate the linear transformation and the orthogonal matrix. The linear transformation is efficiently updated via the steepest descent gradient technique. The proposed framework is generic in the sense that it allows the combination and tuning of other linear discriminant embedding methods. We used our proposed method to fine-tune the linear solutions delivered by two recent linear methods: RSLDA and RDA_FSIS. Experiments have been conducted on public image datasets of different types including objects, faces, and digits. The proposed framework compared favorably with several competing methods.
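The alternating pattern can be pictured with a minimal sketch: one gradient step on the linear transformation, then a closed-form orthogonal (Procrustes) update. The quadratic objective below is an illustrative stand-in, not the paper's full criterion, which also includes the sparsity terms of RSLDA and RDA_FSIS.

```python
# Minimal sketch of an alternating minimization scheme: gradient step on the
# linear transformation W, closed-form SVD (Procrustes) update of the
# orthogonal matrix Q. Objective used here: ||Q @ Y - W.T @ X||_F^2.
import numpy as np

def alternating_minimization(X, Y, dim, lr=1e-2, n_iter=100):
    """X: (d, n) data; Y: (dim, n) target embedding (e.g., label indicators)."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((d, dim)) * 0.01   # linear transformation
    Q = np.eye(dim)                            # orthogonal matrix
    for _ in range(n_iter):
        # gradient step on W for the residual R = Q @ Y - W.T @ X
        R = Q @ Y - W.T @ X
        grad_W = -2 * X @ R.T
        W -= lr * grad_W
        # orthogonal Procrustes update: argmin over orthogonal Q of ||Q @ Y - W.T @ X||
        U, _, Vt = np.linalg.svd((W.T @ X) @ Y.T)
        Q = U @ Vt
    return W, Q
```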


Subject(s)
Algorithms; Pattern Recognition, Automated/trends; Supervised Machine Learning/trends; Discriminant Analysis; Machine Learning/trends; Pattern Recognition, Automated/methods
3.
Neural Netw ; 135: 68-77, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33360149

ABSTRACT

The goal of n-shot learning is the classification of input data from small datasets. This type of learning is challenging for neural networks, which typically require large amounts of data during training. Recent advancements in data augmentation allow us to produce an effectively unlimited number of target conditions from the primary condition. This process includes two main steps: finding the best augmentations and training the model with the new augmentations. Optimizing these two steps for n-shot learning is still an open problem. In this paper, we propose a new auto-augmentation method to address both of these problems. The proposed method can potentially extract many possible types of information from a small number of available data points in n-shot learning. The results of our experiments on five prominent n-shot learning datasets show the effectiveness of the proposed method.
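As a rough illustration of the first step only (not the paper's algorithm), an augmentation-policy search could be sketched as below; `train_and_eval` is a hypothetical user-supplied routine that trains on the augmented support set and returns few-shot validation accuracy, and the policy space is a placeholder.

```python
# Hypothetical sketch: score a few candidate augmentation policies on a small
# validation set and keep the best one for the final n-shot training run.
import itertools
import random

AUGMENTATIONS = ["flip", "rotate", "crop", "color_jitter", "cutout"]

def search_policy(train_and_eval, support_set, val_set, policy_size=2, trials=20):
    """train_and_eval(support_set, policy, val_set) -> accuracy; assumed interface."""
    candidates = list(itertools.combinations(AUGMENTATIONS, policy_size))
    random.shuffle(candidates)
    scored = [(train_and_eval(support_set, p, val_set), p) for p in candidates[:trials]]
    return max(scored)[1]      # best-scoring augmentation policy
```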


Subject(s)
Databases, Factual; Deep Learning; Neural Networks, Computer; Pattern Recognition, Automated/methods; Photic Stimulation/methods; Databases, Factual/trends; Deep Learning/trends; Humans; Pattern Recognition, Automated/trends
4.
IEEE Trans Neural Netw Learn Syst ; 32(11): 5034-5046, 2021 11.
Article in English | MEDLINE | ID: mdl-33290230

ABSTRACT

Many computer vision tasks, such as monocular depth estimation and height estimation from a satellite orthophoto, have a common underlying goal, which is regression of dense continuous values for the pixels given a single image. We define them as dense continuous-value regression (DCR) tasks. Recent approaches based on deep convolutional neural networks significantly improve the performance of DCR tasks, particularly on pixelwise regression accuracy. However, it remains challenging to simultaneously preserve the global structure and fine object details in complex scenes. In this article, we take advantage of the efficiency of the Laplacian pyramid in representing multiscale content to reconstruct high-quality signals for complex scenes. We design a Laplacian pyramid neural network (LAPNet), which consists of a Laplacian pyramid decoder (LPD) for signal reconstruction and an adaptive dense feature fusion (ADFF) module to fuse features from the input image. More specifically, we build an LPD to effectively express both global and local scene structures. In our LPD, the upper and lower levels, respectively, represent scene layouts and shape details. We introduce a residual refinement module to progressively complement high-frequency details for signal prediction at each level. To recover the signals at each individual level in the pyramid, an ADFF module is proposed to adaptively fuse multiscale image features for accurate prediction. We conduct comprehensive experiments to evaluate a number of variants of our model on three important DCR tasks, i.e., monocular depth estimation, single-image height estimation, and density map estimation for crowd counting. Experiments demonstrate that our method achieves new state-of-the-art performance in both qualitative and quantitative evaluation on the NYU-D V2 and KITTI datasets for monocular depth estimation, the challenging Urban Semantic 3D (US3D) for satellite height estimation, and four challenging benchmarks for crowd counting. These results demonstrate that the proposed LAPNet is a universal and effective architecture for DCR problems.
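For readers unfamiliar with the underlying representation, here is a minimal single-channel Laplacian pyramid in NumPy/SciPy: coarse levels carry scene layout, band-pass residuals carry detail, and summing upsampled levels reconstructs the signal. This sketches only the classical pyramid, not LAPNet's learned decoder.

```python
# Classical Laplacian pyramid for a single-channel image: blur + downsample to
# get the next level, store the band-pass residual, and reconstruct by
# upsampling and adding residuals back in.
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def laplacian_pyramid(img, levels=3):
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        low = gaussian_filter(cur, sigma=1.0)[::2, ::2]            # blur + downsample
        up = zoom(low, 2, order=1)[:cur.shape[0], :cur.shape[1]]   # upsample back
        pyr.append(cur - up)                                       # band-pass residual
        cur = low
    pyr.append(cur)                                                # coarsest level
    return pyr

def reconstruct(pyr):
    cur = pyr[-1]
    for band in reversed(pyr[:-1]):
        cur = zoom(cur, 2, order=1)[:band.shape[0], :band.shape[1]] + band
    return cur
```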


Subject(s)
Deep Learning/trends; Image Processing, Computer-Assisted/trends; Neural Networks, Computer; Pattern Recognition, Automated/trends; Humans; Image Processing, Computer-Assisted/methods; Pattern Recognition, Automated/methods
5.
Neural Netw ; 135: 1-12, 2021 Mar.
Article in English | MEDLINE | ID: mdl-33310193

ABSTRACT

Knowledge graph reasoning aims to find reasoning paths for relations over incomplete knowledge graphs (KG). Prior works may not take into account that the rewards for each position (vertex in the graph) may be different. We propose the distance-aware reward in the reinforcement learning framework to assign different rewards for different positions. We observe that KG embeddings are learned from independent triples and therefore cannot fully cover the information described in the local neighborhood. To this end, we integrate a graph self-attention (GSA) mechanism to capture more comprehensive entity information from the neighboring entities and relations. To let the model remember the path, we combine the GSA mechanism with a GRU to retain the memory of relations along the path. Our approach can train the agent in one pass, thus eliminating the pre-training or fine-tuning process, which significantly reduces the problem complexity. Experimental results demonstrate the effectiveness of our method. We found that our model can mine more balanced paths for each relation.


Subject(s)
Databases, Factual; Deep Learning; Pattern Recognition, Automated/methods; Reinforcement, Psychology; Algorithms; Databases, Factual/trends; Deep Learning/trends; Humans; Knowledge; Pattern Recognition, Automated/trends
6.
IEEE Trans Neural Netw Learn Syst ; 32(11): 4901-4915, 2021 11.
Article in English | MEDLINE | ID: mdl-33017295

ABSTRACT

Conventional artificial neural network (ANN) learning algorithms for classification tasks, whether derivative-based or derivative-free optimization algorithms, work by first training the ANN (or training and validating it) and then testing it, which is a two-stage, one-pass learning mechanism. Thus, this learning mechanism may not guarantee the generalization ability of a trained ANN. In this article, a novel bilevel learning model is constructed for a self-organizing feed-forward neural network (FFNN), in which the training and testing processes are integrated into a unified framework. In this bilevel model, the upper level optimization problem is built for the testing error on the testing data set and the network architecture based on network complexity, whereas the lower level optimization problem is constructed for the network weights based on the training error on the training data set. For the bilevel framework, an interactive learning algorithm is proposed to optimize the architecture and weights of an FFNN with consideration of both training error and testing error. In this interactive learning algorithm, a hybrid binary particle swarm optimization (BPSO) algorithm, taken as the upper level optimizer, is used to self-organize the network architecture, whereas the Levenberg-Marquardt (LM) algorithm, as the lower level optimizer, is utilized to optimize the connection weights of an FFNN. The bilevel learning model and algorithm have been tested on 20 benchmark classification problems. Experimental results demonstrate that the bilevel learning algorithm produces significantly more compact FFNNs with better generalization ability when compared with conventional learning algorithms.


Subject(s)
Algorithms; Neural Networks, Computer; Pattern Recognition, Automated/trends; Supervised Machine Learning/trends; Humans; Pattern Recognition, Automated/methods
7.
IEEE Trans Neural Netw Learn Syst ; 32(11): 4864-4878, 2021 11.
Article in English | MEDLINE | ID: mdl-33027004

ABSTRACT

In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn. When this is not the case, the behavior of the learned model is unpredictable and becomes dependent upon the degree of similarity between the distribution of the training set and the distribution of the test set. One of the research topics that investigates this scenario is referred to as domain adaptation (DA). Deep neural networks have brought dramatic advances in pattern recognition, which is why there have been many attempts to provide good DA algorithms for these models. Herein we take a different avenue and approach the problem from an incremental point of view, where the model is adapted to the new domain iteratively. We make use of an existing unsupervised domain-adaptation algorithm to identify the target samples for which there is greater confidence about their true label. The output of the model is analyzed in different ways to determine the candidate samples. The selected samples are then added to the source training set by self-labeling, and the process is repeated until all target samples are labeled. This approach implements a form of adversarial training in which, by moving the self-labeled samples from the target to the source set, the DA algorithm is forced to look for new features after each iteration. Our results show a clear improvement over the non-incremental case on several data sets, also outperforming other state-of-the-art DA algorithms.
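The incremental loop itself is simple to sketch. The snippet below assumes a pluggable `train_da_model(Xs, ys, Xt)` routine returning a model with `predict_proba`; the confidence-based selection rule is one illustrative way of choosing candidates, not the paper's exact analysis.

```python
# Hedged sketch of incremental self-labeling: repeatedly train an unsupervised
# DA model, move the most confident target predictions into the source set
# with their predicted labels, and stop when the target pool is exhausted.
import numpy as np

def incremental_self_labeling(Xs, ys, Xt, train_da_model, batch_frac=0.1):
    """train_da_model(Xs, ys, Xt) -> model with predict_proba(X); assumed interface."""
    Xt_pool = Xt.copy()
    while len(Xt_pool) > 0:
        model = train_da_model(Xs, ys, Xt_pool)
        proba = model.predict_proba(Xt_pool)            # (n_target, n_classes)
        conf = proba.max(axis=1)
        pred = proba.argmax(axis=1)
        k = max(1, int(batch_frac * len(Xt_pool)))
        chosen = np.argsort(-conf)[:k]                  # most confident target samples
        Xs = np.vstack([Xs, Xt_pool[chosen]])           # self-label and move to source
        ys = np.concatenate([ys, pred[chosen]])
        Xt_pool = np.delete(Xt_pool, chosen, axis=0)
    return train_da_model(Xs, ys, Xt)                   # final model on augmented source
```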


Subject(s)
Algorithms; Neural Networks, Computer; Pattern Recognition, Automated/trends; Unsupervised Machine Learning/trends; Humans; Pattern Recognition, Automated/methods
8.
IEEE Trans Neural Netw Learn Syst ; 32(11): 4826-4838, 2021 11.
Article in English | MEDLINE | ID: mdl-33021943

ABSTRACT

While most deep learning architectures are built on convolution, alternative foundations such as morphology are being explored for purposes such as interpretability and its connection to the analysis and processing of geometric structures. The morphological hit-or-miss operation has the advantage that it considers both foreground information and background information when evaluating the target shape in an image. In this article, we identify limitations in the existing hit-or-miss neural definitions and formulate an optimization problem to learn the transform relative to deeper architectures. To this end, we model the semantically important condition that the intersection of the hit and miss structuring elements (SEs) should be empty and present a way to express Don't Care (DNC), which is important for denoting regions of an SE that are not relevant to detecting a target pattern. Our analysis shows that convolution, in fact, acts like a hit-to-miss transform through semantic interpretation of its filter differences. On these premises, we introduce an extension that outperforms conventional convolution on benchmark data. Quantitative experiments are provided on synthetic and benchmark data, showing that the direct encoding hit-or-miss transform provides better interpretability on learned shapes consistent with objects, whereas our morphologically inspired generalized convolution yields higher classification accuracy. Finally, qualitative hit and miss filter visualizations are provided relative to a single morphological layer.
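For reference, the classical binary hit-or-miss transform that the article generalizes is available in SciPy; the example below detects the top edge of a square, with positions that are zero in both structuring elements acting as don't-care.

```python
# Binary hit-or-miss transform: the "hit" SE must fit the foreground and the
# "miss" SE must fit the background, which is why the two SEs must not overlap.
import numpy as np
from scipy.ndimage import binary_hit_or_miss

image = np.zeros((7, 7), dtype=bool)
image[2:5, 2:5] = True                     # a 3x3 foreground square

hit = np.array([[0, 0, 0],
                [0, 1, 0],
                [0, 0, 0]], dtype=bool)    # centre pixel must be foreground
miss = np.array([[1, 1, 1],
                 [0, 0, 0],
                 [0, 0, 0]], dtype=bool)   # the row above must be background

# Matches foreground pixels whose upper neighbourhood is background,
# i.e. the top edge of the square; all-zero positions are "don't care".
top_edge = binary_hit_or_miss(image, structure1=hit, structure2=miss)
print(np.argwhere(top_edge))               # [[2 2] [2 3] [2 4]]
```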


Subject(s)
Algorithms; Deep Learning/trends; Neural Networks, Computer; Pattern Recognition, Automated/trends; Humans; Pattern Recognition, Automated/methods
9.
IEEE Trans Neural Netw Learn Syst ; 32(11): 4793-4813, 2021 11.
Article in English | MEDLINE | ID: mdl-33079674

ABSTRACT

Recently, artificial intelligence and machine learning in general have demonstrated remarkable performance in many tasks, from image processing to natural language processing, especially with the advent of deep learning (DL). Along with research progress, they have encroached upon many different fields and disciplines. Some of these, for example the medical sector, require a high level of accountability and thus transparency. Explanations for machine decisions and predictions are thus needed to justify their reliability. This requires greater interpretability, which often means we need to understand the mechanism underlying the algorithms. Unfortunately, the blackbox nature of DL is still unresolved, and many machine decisions are still poorly understood. We provide a review of the interpretability approaches suggested by different research works and categorize them. The different categories show different dimensions in interpretability research, from approaches that provide "obviously" interpretable information to the studies of complex patterns. By applying the same categorization to interpretability in medical research, it is hoped that: 1) clinicians and practitioners can subsequently approach these methods with caution; 2) further insight into interpretability will be developed with medical practice in mind; and 3) initiatives to push forward data-based, mathematically grounded, and technically grounded medical education are encouraged.


Subject(s)
Machine Learning/trends; Neural Networks, Computer; Pattern Recognition, Automated/trends; Surveys and Questionnaires; Artificial Intelligence/trends; Humans; Image Processing, Computer-Assisted/methods; Image Processing, Computer-Assisted/trends; Pattern Recognition, Automated/methods
10.
Neural Netw ; 129: 334-343, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32593930

ABSTRACT

Visual trackers using deep neural networks have demonstrated favorable performance in object tracking. However, training a deep classification network using overlapped initial target regions may lead to an overfitted model. To increase the model generalization, we propose an appearance variation adaptation (AVA) tracker that aligns the feature distributions of target regions over time by learning an adaptation mask in an adversarial network. The proposed adversarial network consists of a generator and a discriminator network that compete with each other over optimizing a discriminator loss in a mini-max optimization problem. Specifically, the discriminator network aims to distinguish recent target regions from earlier ones by minimizing the discriminator loss, while the generator network aims to produce an adaptation mask to maximize the discriminator loss. We incorporate a gradient reverse layer in the adversarial network to solve the aforementioned mini-max optimization in an end-to-end manner. We compare the performance of the proposed AVA tracker with the most recent state-of-the-art trackers through extensive experiments on the OTB50, OTB100, and VOT2016 tracking benchmarks. Among the compared methods, AVA yields the highest area under curve (AUC) score of 0.712 and the highest average precision score of 0.951 on the OTB50 tracking benchmark. It achieves the second best AUC score of 0.688 and the best precision score of 0.924 on the OTB100 tracking benchmark. AVA also achieves the second best expected average overlap (EAO) score of 0.366, the best failure rate of 0.68, and the second best accuracy of 0.53 on the VOT2016 tracking benchmark.
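The gradient reverse layer mentioned above has a standard minimal PyTorch implementation: identity in the forward pass, sign-flipped (and optionally scaled) gradient in the backward pass. The scaling factor and the usage stub are illustrative, not the paper's exact configuration.

```python
# Gradient reverse layer (GRL): lets a single backward pass minimize the
# discriminator loss in the discriminator while maximizing it upstream.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd=1.0):
        ctx.lambd = lambd
        return x.view_as(x)                    # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None  # reversed, scaled gradient

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Usage stub: features -> grad_reverse -> discriminator head; minimizing the
# discriminator loss then maximizes it w.r.t. the mask-generating network.
feats = torch.randn(8, 128, requires_grad=True)
logits = torch.nn.Linear(128, 2)(grad_reverse(feats))
```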


Subject(s)
Adaptation, Physiological; Neural Networks, Computer; Pattern Recognition, Automated/methods; Humans; Pattern Recognition, Automated/trends; Photic Stimulation/methods
11.
Neural Netw ; 127: 141-159, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32361379

ABSTRACT

Linear Discriminant Analysis (LDA) and its variants are widely used as feature extraction methods. They have been used for different classification tasks. However, these methods have some limitations that need to be overcome. The main limitation is that the projection obtained by LDA does not provide a good interpretability for the features. In this paper, we propose a novel supervised method used for multi-class classification that simultaneously performs feature selection and extraction. The targeted projection transformation focuses on the most discriminant original features, and at the same time, makes sure that the transformed features (extracted features) belonging to each class have common sparsity. Our proposed method is called Robust Discriminant Analysis with Feature Selection and Inter-class Sparsity (RDA_FSIS). The corresponding model integrates two types of sparsity. The first type is obtained by imposing the ℓ2,1 constraint on the projection matrix in order to perform feature selection. The second type of sparsity is obtained by imposing the inter-class sparsity constraint used for ensuring a common sparsity structure in each class. An orthogonal matrix is also introduced in our model in order to guarantee that the extracted features can retain the main variance of the original data and thus improve the robustness to noise. The proposed method retrieves the LDA transformation by taking into account the two types of sparsity. Various experiments are conducted on several image datasets including faces, objects and digits. The projected features are used for multi-class classification. Obtained results show that the proposed method outperforms other competing methods by learning a more compact and discriminative transformation.
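The ℓ2,1 constraint mentioned above is the sum of the ℓ2 norms of the rows of the projection matrix, which drives entire rows (i.e., original features) to zero together and thereby performs feature selection; a small illustration:

```python
# l2,1 norm of a projection matrix: l2 norm of each row, summed over rows.
# Rows that shrink to zero correspond to original features that are discarded.
import numpy as np

def l21_norm(W):
    return np.sum(np.linalg.norm(W, axis=1))

W = np.array([[0.0, 0.0, 0.0],    # feature 1 discarded
              [0.5, -0.2, 0.1],   # feature 2 kept
              [0.0, 0.0, 0.0]])   # feature 3 discarded
print(l21_norm(W))                # ~0.548: only the kept row contributes
```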


Subject(s)
Algorithms; Discriminant Analysis; Pattern Recognition, Automated/methods; Databases, Factual/trends; Humans; Pattern Recognition, Automated/trends
12.
Neural Netw ; 127: 168-181, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32361547

ABSTRACT

This paper deals with the vulnerability of machine learning models to adversarial examples and its implication for robustness and generalization properties. We propose an evolutionary algorithm that can generate adversarial examples for any machine learning model in the black-box attack scenario. This way, we can find adversarial examples without access to the model's parameters, only by querying the model at hand. We have tested a range of machine learning models including deep and shallow neural networks. Our experiments have shown that the vulnerability to adversarial examples is not a problem of deep networks alone but spreads across various machine learning architectures; rather, it depends on the type of computational unit. Local units, such as Gaussian kernels, are less vulnerable to adversarial examples.
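A hedged sketch of such a black-box evolutionary attack is given below; the mutation scale, truncation selection, and probability-based fitness are illustrative choices, not the paper's exact operators, and the model is only ever queried, never differentiated.

```python
# Black-box evolutionary attack sketch: evolve a population of perturbations,
# using the model's predicted probability of the original class as fitness.
import numpy as np

def evolve_adversarial(x, predict_proba, pop_size=20, sigma=0.05, n_gen=200):
    """x: clean input in [0, 1] (1-D array); predict_proba(batch) -> (n, n_classes)."""
    rng = np.random.default_rng(0)
    true_class = int(np.argmax(predict_proba(x[None])[0]))
    pop = rng.normal(0.0, sigma, size=(pop_size, x.size))    # candidate perturbations
    for _ in range(n_gen):
        candidates = np.clip(x + pop, 0.0, 1.0)
        probs = predict_proba(candidates)                    # black-box queries only
        fitness = probs[:, true_class]                       # lower = more adversarial
        best = int(np.argmin(fitness))
        if probs[best].argmax() != true_class:
            return candidates[best]                          # label flipped: success
        parents = pop[np.argsort(fitness)[: pop_size // 2]]  # truncation selection
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        pop = np.vstack([parents, children])
    return None                                              # no adversarial example found
```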


Asunto(s)
Redes Neurales de la Computación , Reconocimiento de Normas Patrones Automatizadas/métodos , Aprendizaje Automático Supervisado , Algoritmos , Humanos , Aprendizaje Automático/tendencias , Reconocimiento de Normas Patrones Automatizadas/tendencias , Aprendizaje Automático Supervisado/tendencias
13.
Neural Netw ; 127: 182-192, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32361548

ABSTRACT

The accuracy of deep learning (e.g., convolutional neural networks) for an image classification task critically relies on the amount of labeled training data. Aiming to solve an image classification task on a new domain that lacks labeled data but gains access to cheaply available unlabeled data, unsupervised domain adaptation is a promising technique to boost the performance without incurring extra labeling cost, by assuming images from different domains share some invariant characteristics. In this paper, we propose a new unsupervised domain adaptation method named Domain-Adversarial Residual-Transfer (DART) learning of deep neural networks to tackle cross-domain image classification tasks. In contrast to the existing unsupervised domain adaptation approaches, the proposed DART not only learns domain-invariant features via adversarial training, but also achieves robust domain-adaptive classification via a residual-transfer strategy, all in an end-to-end training framework. We evaluate the performance of the proposed method for cross-domain image classification tasks on several well-known benchmark data sets, in which our method clearly outperforms the state-of-the-art approaches.


Asunto(s)
Redes Neurales de la Computación , Reconocimiento de Normas Patrones Automatizadas/métodos , Aprendizaje Automático no Supervisado , Aprendizaje Profundo/tendencias , Humanos , Reconocimiento de Normas Patrones Automatizadas/tendencias , Aprendizaje Automático no Supervisado/tendencias
14.
Neural Netw ; 128: 158-171, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32446193

ABSTRACT

The actuators of any physical control system are constrained in amplitude and energy, so control systems are inevitably affected by actuator saturation. In this paper, impulsive synchronization of coupled delayed neural networks with actuator saturation is presented. A new controller is designed that introduces an actuator saturation term into the impulsive controller. Based on the sector nonlinearity model approach, impulsive controls with actuator saturation and with partial actuator saturation are studied, respectively, and some effective sufficient conditions are obtained. A numerical simulation is presented to verify the validity of the theoretical analysis. Finally, the impulsive synchronization is applied to image encryption. The experimental results show that the proposed image encryption system has high security properties.


Asunto(s)
Redes Neurales de la Computación , Reconocimiento de Normas Patrones Automatizadas/métodos , Humanos , Dinámicas no Lineales , Reconocimiento de Normas Patrones Automatizadas/tendencias , Factores de Tiempo
15.
Neural Netw ; 127: 19-28, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32315932

ABSTRACT

In recent years, research on image generation has been advancing rapidly. The generative adversarial network (GAN) emerges as a promising framework, which uses adversarial training to improve the generative ability of its generator. However, since GAN and most of its variants use randomly sampled noise as the input to their generators, they have to learn a mapping function from a whole random distribution to the image manifold. As the structures of the random distribution and the image manifold are generally different, this makes GAN and its variants difficult to train and converge. In this paper, we propose a novel deep model called generative adversarial networks with decoder-encoder output noises (DE-GANs), which takes advantage of both adversarial training and variational Bayesian inference to improve the image generation performance of GAN and its variants. DE-GANs use a pre-trained decoder-encoder architecture to map the random noise vectors to informative ones and feed them to the generator of the adversarial networks. Since the decoder-encoder architecture is trained with the same data set as the generator, its output vectors, as the inputs of the generator, could carry the intrinsic distribution information of the training images, which greatly improves the learnability of the generator and the quality of the generated images. Extensive experiments demonstrate the effectiveness of the proposed model, DE-GANs.


Asunto(s)
Redes Neurales de la Computación , Reconocimiento de Normas Patrones Automatizadas/métodos , Teorema de Bayes , Humanos , Procesamiento de Imagen Asistido por Computador/métodos , Procesamiento de Imagen Asistido por Computador/tendencias , Reconocimiento de Normas Patrones Automatizadas/tendencias , Distribución Aleatoria
16.
Neural Netw ; 127: 121-131, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32339807

ABSTRACT

Dynamic movement primitives (DMPs) have proven to be an effective movement representation for motor skill learning. In this paper, we propose a new approach for training deep neural networks to synthesize dynamic movement primitives. The distinguishing property of our approach is that it can utilize a novel loss function that measures the physical distance between movement trajectories as opposed to measuring the distance between the parameters of DMPs that have no physical meaning. This was made possible by deriving differential equations that can be applied to compute the gradients of the proposed loss function, thus enabling an effective application of backpropagation to optimize the parameters of the underlying deep neural network. While the developed approach is applicable to any neural network architecture, it was evaluated on two different architectures based on encoder-decoder networks and convolutional neural networks. Our results show that the minimization of the proposed loss function leads to better results than when more conventional loss functions are used.
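To make the "physical distance" idea concrete, the sketch below rolls out a one-dimensional DMP from its weights by Euler integration and compares the resulting trajectory with a demonstration using a plain MSE. The gains and basis functions are conventional defaults rather than the paper's settings, and the analytic gradients derived in the paper are not shown.

```python
# Roll out a 1-D DMP from its forcing-term weights and measure the physical
# (trajectory-space) distance to a demonstration, instead of comparing weights.
import numpy as np

def dmp_rollout(w, y0, g, T=100, tau=1.0, alpha_z=25.0, beta_z=6.25, alpha_x=3.0):
    n_basis = len(w)
    c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))       # basis centres in phase space
    h = n_basis / c                                          # basis widths
    dt, x, y, z = tau / T, 1.0, y0, 0.0
    traj = np.empty(T)
    for t in range(T):
        psi = np.exp(-h * (x - c) ** 2)
        f = x * (g - y0) * (psi @ w) / (psi.sum() + 1e-10)   # forcing term
        z += dt / tau * (alpha_z * (beta_z * (g - y) - z) + f)
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                       # canonical system decay
        traj[t] = y
    return traj

def trajectory_loss(w, demo, y0, g):
    """MSE between the DMP rollout and a demonstrated trajectory."""
    return np.mean((dmp_rollout(w, y0, g, T=len(demo)) - demo) ** 2)
```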


Asunto(s)
Bases de Datos Factuales , Destreza Motora , Redes Neurales de la Computación , Reconocimiento de Normas Patrones Automatizadas/métodos , Bases de Datos Factuales/tendencias , Humanos , Destreza Motora/fisiología , Movimiento , Reconocimiento de Normas Patrones Automatizadas/tendencias
17.
Neural Netw ; 127: 82-95, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32344155

ABSTRACT

Field classification is a new extension of traditional classification frameworks that attempts to utilize consistent information from a group of samples (termed fields). By forgoing the independent identically distributed (i.i.d.) assumption, field classification can achieve remarkably improved accuracy compared to traditional classification methods. Most studies of field classification have been conducted on traditional machine learning methods. In this paper, we propose integration with a Bayesian framework, for the first time, in order to extend field classification to deep learning and propose two novel deep neural network architectures: the Field Deep Perceptron (FDP) and the Field Deep Convolutional Neural Network (FDCNN). Specifically, we exploit a deep perceptron structure, typically a 6-layer structure, where the first 3 layers remove (learn) a 'style' from a group of samples to map them into a more discriminative space and the last 3 layers are trained to perform classification. For the FDCNN, we modify the AlexNet framework by adding style transformation layers within the hidden layers. We derive a novel learning scheme from a Bayesian framework and design a novel and efficient learning algorithm with guaranteed convergence for training the deep networks. The whole framework is interpreted with visualization features showing that the field deep neural network can better learn the style of a group of samples. Our developed models are also able to achieve transfer learning and learn transformations for newly introduced fields. We conduct extensive comparative experiments on benchmark data (including face, speech, and handwriting data) to validate our learning approach. Experimental results demonstrate that our proposed deep frameworks achieve significant improvements over other state-of-the-art algorithms, attaining new benchmark performance.


Asunto(s)
Identificación Biométrica/métodos , Aprendizaje Profundo , Redes Neurales de la Computación , Reconocimiento de Normas Patrones Automatizadas/métodos , Algoritmos , Teorema de Bayes , Identificación Biométrica/tendencias , Aprendizaje Profundo/tendencias , Escritura Manual , Humanos , Aprendizaje Automático/tendencias , Reconocimiento de Normas Patrones Automatizadas/tendencias
18.
Neural Netw ; 125: 174-184, 2020 May.
Article in English | MEDLINE | ID: mdl-32135353

ABSTRACT

In this paper, a three-dimensional fractional-order (FO) discrete Hopfield neural network (FODHNN) in the left Caputo discrete delta sense is proposed, the dynamic behavior and synchronization of the FODHNN are studied, and the system is applied to image encryption. First, the FODHNN is shown to exhibit rich nonlinear dynamic behaviors. Phase portraits, bifurcation diagrams, and Lyapunov exponents are computed to verify the chaotic dynamics of the system. Moreover, by using the stability theorem of FO discrete linear systems, a suitable control scheme is designed to achieve synchronization of the FODHNN. Finally, an image encryption system based on the chaotic FODHNN is presented. Security analyses and tests are given to show the effectiveness of the encryption system.
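As a hedged illustration of the final application step only, the snippet below XORs an image with a keystream quantized from a chaotic sequence; a logistic map stands in for the FODHNN, whose dynamics are the paper's actual key generator.

```python
# Chaotic stream cipher sketch: quantize a chaotic sequence to bytes and XOR
# it with the pixel stream; applying the same function again decrypts.
import numpy as np

def chaotic_keystream(n, x0=0.3, r=3.99):
    xs, x = np.empty(n), x0
    for i in range(n):
        x = r * x * (1 - x)             # logistic map, a stand-in chaotic system
        xs[i] = x
    return (xs * 256).astype(np.uint8)

def xor_cipher(img, x0=0.3):
    flat = img.astype(np.uint8).ravel()
    ks = chaotic_keystream(flat.size, x0)
    return (flat ^ ks).reshape(img.shape)

img = (np.arange(64).reshape(8, 8) % 256).astype(np.uint8)
enc = xor_cipher(img)
dec = xor_cipher(enc)
assert np.array_equal(img, dec)         # XOR with the same keystream restores the image
```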


Asunto(s)
Algoritmos , Seguridad Computacional , Redes Neurales de la Computación , Dinámicas no Lineales , Reconocimiento de Normas Patrones Automatizadas/métodos , Seguridad Computacional/tendencias , Humanos , Reconocimiento de Normas Patrones Automatizadas/tendencias
19.
Neural Netw ; 125: 142-152, 2020 May.
Article in English | MEDLINE | ID: mdl-32088568

ABSTRACT

Supervised cross-modal hashing has attracted widespread attention for large-scale retrieval tasks due to its promising retrieval performance. However, most existing works suffer from some of the following issues. Firstly, most of them only leverage the pair-wise similarity matrix to learn hash codes, which may result in class information loss. Secondly, the pair-wise similarity matrix generally leads to high computing complexity and memory cost. Thirdly, most of them relax the discrete constraints during optimization, which generally results in large cumulative quantization error and consequently inferior hash codes. To address the above problems, we present a Fast Discrete Cross-modal Hashing method in this paper, FDCH for short. Specifically, it first leverages both class labels and the pair-wise similarity matrix to learn a shared Hamming space where the semantic consistency can be better preserved. Then we propose an asymmetric hash-code learning model to avoid the challenging issue of symmetric matrix factorization. Finally, an effective and efficient discrete optimization scheme is designed to generate discrete hash codes directly, and the computing complexity and memory cost caused by the pair-wise similarity matrix are reduced from O(n²) to O(n), where n denotes the size of the training set. Extensive experiments conducted on three real-world datasets highlight the superiority of FDCH compared with several cross-modal hashing methods and demonstrate its effectiveness and efficiency.


Asunto(s)
Algoritmos , Reconocimiento de Normas Patrones Automatizadas/métodos , Semántica , Aprendizaje Profundo/tendencias , Humanos , Reconocimiento de Normas Patrones Automatizadas/tendencias , Factores de Tiempo
20.
IEEE Trans Neural Netw Learn Syst ; 31(4): 1242-1254, 2020 04.
Article in English | MEDLINE | ID: mdl-31247572

ABSTRACT

The performance of convolutional neural networks (CNNs) highly relies on their architectures. In order to design a CNN with promising performance, extensive expertise in both CNNs and the investigated problem domain is required, which is not necessarily available to every interested user. To address this problem, we propose to automatically evolve CNN architectures by using a genetic algorithm (GA) based on ResNet and DenseNet blocks. The proposed algorithm is completely automatic in designing CNN architectures. In particular, neither preprocessing before it starts nor postprocessing of the designed CNN is needed. Furthermore, the proposed algorithm does not require users to have domain knowledge of CNNs, the investigated problem, or even GAs. The proposed algorithm is evaluated on the CIFAR10 and CIFAR100 benchmark data sets against 18 state-of-the-art peer competitors. Experimental results show that the proposed algorithm outperforms hand-crafted state-of-the-art CNNs and the CNNs designed by automatic peer competitors in terms of classification performance, and achieves a competitive classification accuracy against semiautomatic peer competitors. In addition, the proposed algorithm consumes far fewer computational resources than most peer competitors in finding the best CNN architectures.
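A minimal sketch of the encoding-and-variation idea (illustrative operators, with a placeholder fitness standing in for decoding, training, and validating each candidate network) could look like this:

```python
# GA over variable-length sequences of ResNet/DenseNet-style blocks. The
# `fitness` callable is an assumed placeholder; in the paper it corresponds to
# the validation accuracy of the decoded, trained CNN.
import random

BLOCKS = ["resnet", "densenet", "pool"]

def random_individual(min_len=3, max_len=10):
    return [random.choice(BLOCKS) for _ in range(random.randint(min_len, max_len))]

def mutate(ind, p=0.2):
    ind = [random.choice(BLOCKS) if random.random() < p else b for b in ind]
    if random.random() < p:                       # occasionally grow the network
        ind.insert(random.randrange(len(ind) + 1), random.choice(BLOCKS))
    return ind

def crossover(a, b):
    ca, cb = random.randrange(1, len(a)), random.randrange(1, len(b))
    return a[:ca] + b[cb:]                        # single-point, variable length

def evolve(fitness, pop_size=20, generations=10):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]              # keep the best half
        children = [mutate(crossover(random.choice(elite), random.choice(elite)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)
```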


Subject(s)
Algorithms; Neural Networks, Computer; Pattern Recognition, Automated/methods; Databases, Factual/trends; Humans; Pattern Recognition, Automated/trends; Random Allocation