1.
IEEE Trans Med Imaging ; 43(3): 1225-1236, 2024 Mar.
Article in English | MEDLINE | ID: mdl-37938946

ABSTRACT

Breast cancer is a heterogeneous disease whose molecular subtypes are closely related to treatment and prognosis. Therefore, the goal of this work is to differentiate between luminal and non-luminal subtypes of breast cancer. The hierarchical radiomics network (HRadNet) is proposed for breast cancer molecular subtype prediction based on dynamic contrast-enhanced magnetic resonance imaging. HRadNet fuses multilayer features with image metadata to combine the advantages of conventional radiomics methods and general convolutional neural networks. A two-stage training mechanism is adopted to improve the generalization capability of the network for multicenter breast cancer data. An ablation study shows the effectiveness of each component of HRadNet, and the influence of features from different layers and of metadata fusion is also analyzed; it reveals that selecting certain layers of features for a given domain can yield further performance improvements. Experimental results on three data sets from different devices demonstrate the effectiveness of the proposed network. HRadNet also performs well when transferred to other domains without fine-tuning.
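
To make the fusion idea concrete, here is a minimal sketch of combining globally pooled features from several convolutional stages with an image-metadata vector before a classification head. It illustrates the general multilayer-feature-plus-metadata fusion described above, not the published HRadNet architecture; the stage widths, metadata dimension, and class count are assumptions.

```python
import torch
import torch.nn as nn

class MultiLayerMetadataFusion(nn.Module):
    """Toy fusion of pooled features from several CNN stages with image metadata."""
    def __init__(self, stage_channels=(16, 32, 64), meta_dim=4, n_classes=2):
        super().__init__()
        self.stages = nn.ModuleList()
        in_ch = 1
        for ch in stage_channels:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2)))
            in_ch = ch
        self.head = nn.Linear(sum(stage_channels) + meta_dim, n_classes)

    def forward(self, image, metadata):
        pooled, x = [], image
        for stage in self.stages:
            x = stage(x)
            pooled.append(x.mean(dim=(2, 3)))        # global average pooling per stage
        return self.head(torch.cat(pooled + [metadata], dim=1))

# one single-channel image slice plus a 4-dimensional metadata vector per case
logits = MultiLayerMetadataFusion()(torch.randn(2, 1, 64, 64), torch.randn(2, 4))
```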


Subjects
Breast Neoplasms; Humans; Female; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Radiomics; Neural Networks, Computer; Magnetic Resonance Imaging/methods; Contrast Media; Retrospective Studies
2.
Neural Netw ; 171: 320-331, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38113717

ABSTRACT

Domain generalization has attracted much interest in recent years due to its practical application scenarios, in which a model is trained on data from various source domains but tested on data from an unseen target domain. Existing domain generalization methods treat all visual features, including irrelevant ones, with the same priority, which easily results in poor generalization of the trained model. In contrast, human beings generalize well across domains by focusing on important, label-relevant features while suppressing irrelevant ones. Motivated by this observation, we propose a channel-wise and spatial-wise hybrid domain attention mechanism that forces the model to focus on the features most strongly associated with labels. In addition, models that are more robust to small input perturbations are expected to have higher generalization capability, which is preferable in domain generalization. Therefore, we propose to reduce the localized maximum sensitivity to small input perturbations in order to improve the network's robustness and generalization capability. Extensive experiments on the PACS, VLCS, and Office-Home datasets validate the effectiveness of the proposed method.
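
As a rough illustration of a channel-wise plus spatial-wise attention block, the sketch below follows the familiar CBAM-style recipe: pooled channel descriptors through a shared MLP, then a convolution over channel-pooled maps. This is an assumed, generic design; the paper's hybrid domain attention and its localized-sensitivity regularizer are not reproduced here.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Generic channel-wise + spatial-wise attention over a feature map (B, C, H, W)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        # channel attention from average- and max-pooled descriptors
        attn_c = torch.sigmoid(self.channel_mlp(x.mean(dim=(2, 3))) +
                               self.channel_mlp(x.amax(dim=(2, 3))))
        x = x * attn_c.view(b, c, 1, 1)
        # spatial attention from channel-wise average and max maps
        maps = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(maps))

out = HybridAttention(64)(torch.randn(2, 64, 28, 28))   # re-weighted, same shape
```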


Subjects
Generalization, Psychological; Motivation; Humans
3.
IEEE Trans Cybern ; PP, 2023 May 11.
Article in English | MEDLINE | ID: mdl-37167035

ABSTRACT

Binary hashing is an effective approach for content-based image retrieval, and learning binary codes with neural networks has attracted increasing attention in recent years. However, the training of hashing neural networks is difficult due to the binary constraint on hash codes. In addition, neural networks are easily affected by input data with small perturbations. Therefore, a sensitive binary hashing autoencoder (SBHA) is proposed to handle these challenges by introducing stochastic sensitivity for image retrieval. SBHA extracts meaningful features from original inputs and maps them onto a binary space to obtain binary hash codes directly. Different from ordinary autoencoders, SBHA is trained by minimizing the reconstruction error, the stochastic sensitive error, and the binary constraint error simultaneously. SBHA reduces output sensitivity to unseen samples with small perturbations from training samples by minimizing the stochastic sensitive error, which helps to learn more robust features. Moreover, SBHA is trained with a binary constraint and outputs binary codes directly. To tackle the difficulty of optimization with the binary constraint, we train the SBHA with alternating optimization. Experimental results on three benchmark datasets show that SBHA is competitive and significantly outperforms state-of-the-art methods for binary hashing.
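
A minimal sketch of the loss composition described above — reconstruction error, a stochastic-sensitivity term computed on perturbed inputs, and a penalty pushing relaxed codes toward ±1 — is given below. The network sizes, the Gaussian perturbation, and joint (rather than alternating) optimization are simplifying assumptions, so this is not the published SBHA training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HashingAutoencoder(nn.Module):
    def __init__(self, in_dim=256, n_bits=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, n_bits))
        self.decoder = nn.Sequential(nn.Linear(n_bits, 128), nn.ReLU(), nn.Linear(128, in_dim))

    def forward(self, x):
        code = torch.tanh(self.encoder(x))        # relaxed hash code in (-1, 1)
        return code, self.decoder(code)

def sbha_style_loss(model, x, q=0.05, lam_sens=0.1, lam_bin=0.1):
    code, recon = model(x)
    code_perturbed, _ = model(x + q * torch.randn_like(x))   # samples near the inputs
    reconstruction = F.mse_loss(recon, x)
    sensitivity = F.mse_loss(code_perturbed, code)           # stochastic sensitivity term
    binary = ((code.abs() - 1.0) ** 2).mean()                # binary constraint penalty
    return reconstruction + lam_sens * sensitivity + lam_bin * binary

model = HashingAutoencoder()
sbha_style_loss(model, torch.randn(8, 256)).backward()
codes = torch.sign(model(torch.randn(8, 256))[0])            # final binary codes
```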

4.
IEEE Trans Neural Netw Learn Syst ; 34(9): 5719-5731, 2023 Sep.
Article in English | MEDLINE | ID: mdl-34878983

ABSTRACT

Population-based optimization methods are widely used for hyperparameter (HP) tuning for a given specific task. In this work, we propose population-based hyperparameter tuning with multitask collaboration (PHTMC), a general multitask collaborative framework with parallel and sequential phases for population-based HP tuning methods. In the parallel HP tuning phase, a shared population is kept for all tasks and intertask relatedness is considered, both to yield better generalization ability and to avoid bias toward a single task. In the sequential HP tuning phase, a surrogate model is built for each newly added task so that meta-information from the existing tasks can be extracted and used to help initialize the new task. Experimental results show significant improvements in the generalization abilities of neural networks trained with the PHTMC and better performance achieved with multitask metalearning. Moreover, visualizations of the solution distribution and the autoencoder's reconstruction for both the PHTMC and a single-task population-based HP tuning method are compared to analyze the effect of the multitask collaboration.
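
The parallel phase described above — one shared population scored across all tasks so that selection favours configurations that transfer — can be pictured with a tiny evolutionary loop. The evaluate function, the two toy tasks, and the mutation scheme below are hypothetical stand-ins, and the sequential phase with surrogate models is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def evaluate(config, task):
    """Hypothetical stand-in for training on `task` with hyperparameters `config`
    and returning a validation score (higher is better)."""
    log_lr, log_reg = config
    return -((log_lr - task["best_log_lr"]) ** 2 + (log_reg - task["best_log_reg"]) ** 2)

tasks = [{"best_log_lr": -2.0, "best_log_reg": -4.0},
         {"best_log_lr": -2.5, "best_log_reg": -3.5}]

# Parallel phase: a single shared population is scored on *all* tasks.
population = [(rng.uniform(-4, -1), rng.uniform(-6, -2)) for _ in range(20)]
for _ in range(10):
    scores = [np.mean([evaluate(c, t) for t in tasks]) for c in population]
    elites = [population[i] for i in np.argsort(scores)[-5:]]
    children = [(lr + rng.normal(0, 0.2), reg + rng.normal(0, 0.2))
                for lr, reg in (elites[rng.integers(5)] for _ in range(15))]
    population = elites + children

best = max(population, key=lambda c: np.mean([evaluate(c, t) for t in tasks]))
print(best)   # hyperparameters that do well on average across both tasks
```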

5.
IEEE Trans Neural Netw Learn Syst ; 34(11): 9520-9527, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35213317

ABSTRACT

In this brief, we investigate the problem of incremental learning under a data stream with emerging new classes (SENC). In the literature, existing approaches encounter the following problems: 1) yielding high false-positive rates for the new class; 2) having long prediction times; and 3) requiring access to true labels for all instances, which is unrealistic and unacceptable in real-life streaming tasks. Therefore, we propose the k-Nearest Neighbor ENSemble-based method (KNNENS) to handle these problems. KNNENS is effective in detecting the new class and maintains high classification performance for known classes. It is also efficient in terms of run time and does not require true labels of new-class instances for model updates, which is desirable in real-life streaming classification tasks. Experimental results show that KNNENS achieves the best performance on four benchmark datasets and three real-world data streams in terms of accuracy and F1-measure and has a relatively fast run time compared to four reference methods. Code is available at https://github.com/Ntriver/KNNENS.
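
For illustration only, here is one way to read the abstract's idea: an ensemble of k-NN models built on bootstrap samples that votes a test point into a "new class" when most members find it unusually far from their training data. The bootstrap sampling, the per-member distance-quantile threshold, and the label -1 for the new class are assumptions and do not reproduce the published KNNENS.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

class KNNNewClassEnsemble:
    """Toy k-NN ensemble that flags emerging classes in a stream of instances."""
    def __init__(self, n_members=10, k=5, quantile=0.99, seed=0):
        self.n_members, self.k, self.quantile = n_members, k, quantile
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.members = []
        for _ in range(self.n_members):
            idx = self.rng.choice(len(X), size=len(X), replace=True)   # bootstrap sample
            nn = NearestNeighbors(n_neighbors=self.k).fit(X[idx])
            dist, _ = nn.kneighbors(X[idx])
            threshold = np.quantile(dist[:, -1], self.quantile)        # "far" cut-off
            self.members.append((nn, y[idx], threshold))
        return self

    def predict(self, X):
        preds = []
        for x in X:
            novel_votes, labels = 0, []
            for nn, y_m, threshold in self.members:
                dist, idx = nn.kneighbors(x.reshape(1, -1))
                novel_votes += dist[0, -1] > threshold
                labels.append(np.bincount(y_m[idx[0]]).argmax())       # member's known-class vote
            preds.append(-1 if novel_votes > self.n_members / 2 else
                         np.bincount(labels).argmax())                 # -1 marks the new class
        return np.array(preds)

model = KNNNewClassEnsemble().fit(np.random.randn(200, 3), np.random.randint(0, 2, 200))
print(model.predict(np.vstack([np.random.randn(3, 3), 10 + np.random.randn(3, 3)])))
```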

6.
Int J Mach Learn Cybern ; 14(5): 1725-1738, 2023.
Article in English | MEDLINE | ID: mdl-36474954

ABSTRACT

COVID-19 has had a significant impact on individual lives, bringing a unique challenge for face retrieval under occlusion. In this paper, an occluded face retrieval method consisting of a generator, a discriminator, and a deep hashing retrieval network is proposed for face retrieval in a large-scale face image dataset under a variety of occlusion situations. In the proposed method, occluded face images are first reconstructed using a face inpainting model, in which adversarial loss, reconstruction loss, and hash-bit loss are combined for training. With the trained model, the hash codes of real face images and of the corresponding reconstructed face images are encouraged to be as similar as possible. Then, a deep hashing retrieval network is used to generate compact, similarity-preserving hash codes from the reconstructed face images for better retrieval performance. Experimental results show that the proposed method can successfully reconstruct face images under occlusion. Meanwhile, the proposed deep hashing retrieval network achieves better retrieval performance for occluded face retrieval than existing state-of-the-art deep hashing retrieval methods.

7.
Article in English | MEDLINE | ID: mdl-35830397

ABSTRACT

The training of the standard broad learning system (BLS) concerns the optimization of its output weights via the minimization of both the training mean square error (MSE) and a penalty term. However, this degrades the generalization capability and robustness of the BLS in complex and noisy environments, especially when small perturbations or noise appear in the input data. Therefore, this work proposes a broad network based on localized stochastic sensitivity (BASS) algorithm to tackle the issue of noise or input perturbations from a local-perturbation perspective. The localized stochastic sensitivity (LSS) improves the network's noise robustness by considering unseen samples located within a Q-neighborhood of the training samples, which enhances the generalization capability of BASS with respect to noisy and perturbed data. Then, three incremental learning algorithms are derived to update BASS quickly when new samples arrive or the network needs to be expanded, without retraining the entire model. Owing to the inherent advantages of the LSS, extensive experimental results on 13 benchmark datasets show that BASS yields better accuracies on various regression and classification problems. For instance, BASS uses fewer parameters (12.6 million) to yield 1% higher top-1 accuracy than AlexNet (60 million parameters) on the large-scale ImageNet (ILSVRC2012) dataset.
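
The localized stochastic sensitivity term itself is easy to estimate by Monte Carlo: perturb each training sample uniformly within a Q-neighborhood and average the squared change in the model output. The sketch below is a generic estimator under that reading of the abstract, not the closed-form expression or the incremental update rules used by BASS.

```python
import numpy as np

def localized_stochastic_sensitivity(predict, X, q=0.05, n_perturb=20, seed=0):
    """Monte-Carlo estimate of the mean squared output change under uniform
    perturbations within a Q-neighborhood of each sample in X."""
    rng = np.random.default_rng(seed)
    base = predict(X)
    sq_diffs = [np.mean((predict(X + rng.uniform(-q, q, size=X.shape)) - base) ** 2)
                for _ in range(n_perturb)]
    return float(np.mean(sq_diffs))

# usage with any vector-valued predictor, e.g. a fixed linear output layer
rng = np.random.default_rng(1)
W = rng.standard_normal((10, 1))
X = rng.standard_normal((100, 10))
print(localized_stochastic_sensitivity(lambda Z: Z @ W, X, q=0.05))
```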

8.
JAMA Netw Open ; 5(6): e2217447, 2022 06 01.
Article in English | MEDLINE | ID: mdl-35708686

ABSTRACT

Importance: Retinopathy of prematurity (ROP) is the leading cause of childhood blindness worldwide. Prediction of ROP before onset holds great promise for reducing the risk of blindness. Objective: To develop and validate a deep learning (DL) system to predict the occurrence and severity of ROP before 45 weeks' postmenstrual age. Design, Setting, and Participants: This retrospective prognostic study included 7033 retinal photographs of 725 infants in the training set and 763 retinal photographs of 90 infants in the external validation set, along with 46 characteristics for each infant. All images of both eyes from the same infant taken at the first screening were labeled according to the final diagnosis made between the first screening and 45 weeks' postmenstrual age. The DL system was developed using retinal photographs from the first ROP screening and clinical characteristics before or at the first screening in infants born between June 3, 2017, and August 28, 2019. Exposures: Two models were specifically designed for predictions of the occurrence (occurrence network [OC-Net]) and severity (severity network [SE-Net]) of ROP. Five-fold cross-validation was applied for internal validation. Main Outcomes and Measures: Area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity to evaluate the performance in ROP prediction. Results: This study included 815 infants (450 [55.2%] boys) with mean birth weight of 1.91 kg (95% CI, 1.87-1.95 kg) and mean gestational age of 33.1 weeks (95% CI, 32.9-33.3 weeks). In internal validation, mean AUC, accuracy, sensitivity, and specificity were 0.90 (95% CI, 0.88-0.92), 52.8% (95% CI, 49.2%-56.4%), 100% (95% CI, 97.4%-100%), and 37.8% (95% CI, 33.7%-42.1%), respectively, for OC-Net to predict ROP occurrence and 0.87 (95% CI, 0.82-0.91), 68.0% (95% CI, 61.2%-74.8%), 100% (95% CI, 93.2%-100%), and 46.6% (95% CI, 37.3%-56.0%), respectively, for SE-Net to predict severe ROP. In external validation, the AUC, accuracy, sensitivity, and specificity were 0.94, 33.3%, 100%, and 7.5%, respectively, for OC-Net, and 0.88, 56.0%, 100%, and 35.3%, respectively, for SE-Net. Conclusions and Relevance: In this study, the DL system achieved promising accuracy in ROP prediction. This DL system is potentially useful in identifying infants with high risk of developing ROP.


Subjects
Deep Learning; Retinopathy of Prematurity; Blindness; Female; Humans; Infant; Infant, Newborn; Male; Retinopathy of Prematurity/diagnosis; Retinopathy of Prematurity/epidemiology; Retrospective Studies; Risk Factors
9.
IEEE Trans Cybern ; 52(6): 4717-4727, 2022 Jun.
Article in English | MEDLINE | ID: mdl-33270568

ABSTRACT

Multivariate time series (MTSs) are widely found in many important application fields, for example, medicine, multimedia, manufacturing, action recognition, and speech recognition. Accurate classification of MTSs has thus become an important research topic. Traditional MTS classification methods do not explicitly model the temporal difference information of a time series, which is in fact important because it reflects the dynamic evolution of the series. In this article, the difference-guided representation learning network (DGRL-Net) is proposed to guide the representation learning of time series with dynamic evolution information. The DGRL-Net consists of a difference-guided layer and a multiscale convolutional layer. First, in the difference-guided layer, we propose a difference-gated LSTM to model the time dependency and dynamic evolution of the time series and obtain feature representations of both the raw and difference series. Then, these two representations are used as two input channels of the multiscale convolutional layer to extract multiscale information. Extensive experiments demonstrate that the proposed model outperforms state-of-the-art methods on 18 MTS benchmark datasets and achieves competitive results on two skeleton-based action recognition datasets. Furthermore, an ablation study and visualization analysis verify the effectiveness of the proposed model.
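
The core idea of pairing the raw series with its first-order difference as inputs to a multiscale convolutional layer can be sketched as follows. The difference-gated LSTM is omitted, and the branch widths and kernel sizes are assumptions, so this is only a rough reading of the architecture, not DGRL-Net itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RawPlusDifferenceConv(nn.Module):
    """Classify a multivariate series from its raw values and first-order differences."""
    def __init__(self, n_channels, n_classes, kernel_sizes=(3, 5, 7), width=32):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv1d(2 * n_channels, width, k, padding=k // 2) for k in kernel_sizes])
        self.head = nn.Linear(width * len(kernel_sizes), n_classes)

    def forward(self, x):                                  # x: (batch, channels, time)
        diff = F.pad(torch.diff(x, dim=-1), (1, 0))        # difference series, same length
        z = torch.cat([x, diff], dim=1)                    # raw + difference as channels
        feats = [branch(z).amax(dim=-1) for branch in self.branches]   # max over time
        return self.head(torch.cat(feats, dim=1))

logits = RawPlusDifferenceConv(n_channels=3, n_classes=4)(torch.randn(2, 3, 50))
```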


Subjects
Learning; Time Factors
10.
IEEE Trans Cybern ; 52(2): 1269-1279, 2022 Feb.
Article in English | MEDLINE | ID: mdl-32598288

ABSTRACT

Undersampling is a popular method for solving imbalanced classification problems. However, it may remove too many majority samples, leading to the loss of informative samples. In this article, the hashing-based undersampling ensemble (HUE) is proposed to deal with this problem by constructing diversified training subspaces for undersampling. Samples in the majority class are divided into many subspaces by a hashing method. Each subspace corresponds to a training subset consisting of most of the samples from that subspace and a few samples from surrounding subspaces. These training subsets are used, together with all minority-class samples, to train an ensemble of classification and regression tree classifiers. The proposed method is tested on 25 UCI datasets against state-of-the-art methods. Experimental results show that the HUE outperforms other methods and yields good results on highly imbalanced datasets.
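
A compact way to picture the approach is to bucket majority samples by the signs of a few random projections (an LSH-style hash) and train one tree per bucket together with all minority samples. The random-projection hash and plain majority voting below are simplifying assumptions; the published HUE also borrows samples from surrounding subspaces and uses its own hashing scheme.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def hue_style_ensemble(X_majority, X_minority, n_bits=3, seed=0):
    """Train one tree per hash bucket of the majority class plus all minority samples."""
    rng = np.random.default_rng(seed)
    R = rng.standard_normal((X_majority.shape[1], n_bits))
    codes = (X_majority @ R > 0).astype(int) @ (2 ** np.arange(n_bits))   # bucket ids
    ensemble = []
    for bucket in np.unique(codes):
        in_bucket = codes == bucket
        X_sub = np.vstack([X_majority[in_bucket], X_minority])
        y_sub = np.hstack([np.zeros(in_bucket.sum()), np.ones(len(X_minority))])
        ensemble.append(DecisionTreeClassifier(random_state=0).fit(X_sub, y_sub))
    return ensemble

def predict(ensemble, X):
    return (np.mean([clf.predict(X) for clf in ensemble], axis=0) >= 0.5).astype(int)

rng = np.random.default_rng(1)
clfs = hue_style_ensemble(rng.standard_normal((500, 5)), 1 + rng.standard_normal((30, 5)))
print(predict(clfs, rng.standard_normal((10, 5))))   # mostly 0 (majority class)
```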


Subjects
Algorithms; Research Design
11.
IEEE Trans Cybern ; 51(5): 2748-2760, 2021 May.
Article in English | MEDLINE | ID: mdl-31331899

ABSTRACT

The training of an autoencoder (AE) focuses on the selection of connection weights via the minimization of both the training error and a regularization term. However, the ultimate goal of AE training is to autoencode future unseen samples correctly (i.e., good generalization). Minimizing the training error with different regularization terms only indirectly minimizes the generalization error. Moreover, the trained model may not be robust to small perturbations of the inputs, which may lead to poor generalization capability. In this paper, we propose a localized stochastic sensitive AE (LiSSA) to enhance the robustness of the AE with respect to input perturbations. With the local stochastic sensitivity regularization, LiSSA reduces sensitivity to unseen samples with small differences (perturbations) from the training samples. Meanwhile, LiSSA preserves the local connectivity from the original input space to the representation space, which yields more robust features (intermediate representations) for unseen samples. A classifier using these learned features achieves better generalization capability. Extensive experimental results on 36 benchmarking datasets indicate that LiSSA significantly outperforms several classical and recent AE training methods on classification tasks.

12.
IEEE Trans Cybern ; 51(10): 5184-5197, 2021 Oct.
Article in English | MEDLINE | ID: mdl-31841431

ABSTRACT

Current hashing-based image retrieval methods mostly assume that the database of images is static. However, this assumption does not hold when databases are constantly updated (e.g., on the Internet) and concept drift occurs. Online (also known as incremental) hashing methods have recently been proposed for image retrieval with non-static databases. However, they do not consider the concept drift problem. Moreover, they update hash functions dynamically by generating new hash codes for all accumulated data over time, which is clearly uneconomical. To solve these two problems, concept preserving hashing (CPH) is proposed. In contrast to existing methods, CPH preserves the original concept, that is, the set of hash codes representing a concept is preserved over time, by learning a new set of hash functions that yield the same set of hash codes for images (old and new) of a concept. The objective function of CPH learning consists of three components: 1) isomorphic similarity; 2) hash code partition balancing; and 3) heterogeneous similarity fitness. Experimental results on 11 concept drift scenarios show that CPH yields better retrieval precision than existing methods and does not need to update the hash codes of previously stored images.

13.
IEEE Trans Cybern ; 51(3): 1613-1625, 2021 Mar.
Article in English | MEDLINE | ID: mdl-31217137

ABSTRACT

As efficient recurrent neural network (RNN) models, echo state networks (ESNs) have attracted widespread attention and been applied in many application domains in the last decade. Although they have achieved great success in modeling time series, a single ESN may have difficulty in capturing the multitimescale structures that naturally exist in temporal data. In this paper, we propose the convolutional multitimescale ESN (ConvMESN), which is a novel training-efficient model for capturing multitimescale structures and multiscale temporal dependencies of temporal data. In particular, a multitimescale memory encoder is constructed with a multireservoir structure, in which different reservoirs have recurrent connections with different skip lengths (or time spans). By collecting all past echo states in each reservoir, this multireservoir structure encodes the history of a time series as nonlinear multitimescale echo state representations (MESRs). Our visualization analysis verifies that the MESRs provide better discriminative features for time series. Finally, multiscale temporal dependencies of MESRs are learned by a convolutional layer. By leveraging the multitimescale reservoirs followed by a convolutional learner, the ConvMESN has not only efficient memory encoding ability for temporal data with multitimescale structures but also strong learning ability for complex temporal dependencies. Furthermore, the training-free reservoirs and the single convolutional layer provide high-computational efficiency for the ConvMESN to model complex temporal data. Extensive experiments on 18 multivariate time series (MTS) benchmark datasets and 3 skeleton-based action recognition datasets demonstrate that the ConvMESN captures multitimescale dynamics and outperforms existing methods.
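
As a rough sketch of the multireservoir idea, the snippet below builds several training-free echo-state reservoirs whose recurrent connections look back different numbers of time steps and concatenates their state sequences into a multi-timescale representation. The input scaling, spectral radius, and reservoir size are assumptions, and the convolutional learner on top is left out.

```python
import numpy as np

def reservoir_states(x, n_reservoir=100, skip=1, spectral_radius=0.9, seed=0):
    """Echo states of a reservoir whose recurrent connection looks `skip` steps back."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-0.1, 0.1, (n_reservoir, x.shape[1]))
    W = rng.standard_normal((n_reservoir, n_reservoir))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))   # echo-state scaling
    states = np.zeros((len(x), n_reservoir))
    for t in range(len(x)):
        previous = states[t - skip] if t >= skip else np.zeros(n_reservoir)
        states[t] = np.tanh(W_in @ x[t] + W @ previous)
    return states

# Multi-timescale encoding: reservoirs with different skip lengths, concatenated per step.
series = np.random.randn(50, 3)                      # toy multivariate series, T=50
mesr = np.concatenate([reservoir_states(series, skip=s) for s in (1, 2, 4)], axis=1)
print(mesr.shape)                                    # (50, 300)
```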

14.
Sensors (Basel) ; 20(5)2020 Mar 08.
Article in English | MEDLINE | ID: mdl-32182668

ABSTRACT

Over the past few years, the Internet of Things (IoT) has developed greatly, with smart home devices gradually entering people's lives. To maximize the impact of such deployments, home-based activity recognition is required, first to recognize behaviors within smart home environments and then to use this information to provide better health and social care services. Activity recognition identifies people's activities from information about their interaction with the environment collected by sensors embedded within the home. In this paper, binary data collected by anonymous binary sensors, such as pressure sensors, contact sensors, and passive infrared sensors, are used to recognize activities. A radial basis function neural network (RBFNN) with a localized stochastic-sensitive autoencoder (LiSSA) method is proposed for home-based activity recognition. An autoencoder (AE) is introduced to extract useful features from the binary sensor data by converting binary inputs into continuous inputs, extracting increased levels of hidden information. The generalization capability of the proposed method is enhanced by minimizing both the training error and the stochastic sensitivity measure in an attempt to improve the ability of the classifier to tolerate uncertainties in the sensor data. Four binary home-based activity recognition datasets, including OrdonezA, OrdonezB, Ulster, and activities of daily living data from van Kasteren (vanKasterenADL), are used to evaluate the effectiveness of the proposed method. Compared with well-known benchmarking approaches, including support vector machine (SVM), multilayer perceptron neural network (MLPNN), random forest, and an RBFNN-based method, the proposed method yielded the best performance, with 98.35%, 86.26%, 96.31%, and 92.31% accuracy on the four datasets, respectively.


Subjects
Human Activities/classification; Monitoring, Ambulatory/methods; Nerve Net; Adult; Home Care Services; Humans; Internet of Things; Male; Stochastic Processes; Support Vector Machine
15.
Sensors (Basel) ; 20(1)2019 Dec 30.
Article in English | MEDLINE | ID: mdl-31905991

ABSTRACT

In this paper, we focus on data-driven approaches to human activity recognition (HAR). Data-driven approaches rely on good-quality data during training; however, there is a shortage of high-quality, large-scale, accurately annotated HAR datasets for recognizing activities of daily living (ADLs) within smart environments. The contributions of this paper are improving the quality of an openly available HAR dataset for the purpose of data-driven HAR and proposing a new ensemble of neural networks as a data-driven HAR classifier. Specifically, we propose a homogeneous ensemble neural network approach for recognizing activities of daily living within a smart home setting. Four base models were generated and integrated using a support function fusion method, which involved computing an output decision score for each base classifier. This work also explored several approaches to resolving conflicts between the base models. Experimental results demonstrated that distributing data at a class level greatly reduces the number of conflicts that occur between the base models, leading to increased performance prior to the application of conflict resolution techniques. Overall, the best HAR performance of 80.39% was achieved by distributing data at a class level in conjunction with a conflict resolution approach that involved calculating the difference between the highest and second-highest predictions of each conflicting model and awarding the final decision to the model with the highest differential value.
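
The winning conflict-resolution rule has a direct reading: when base models disagree, each model's margin between its top-1 and top-2 class scores is computed and the model with the largest margin decides. The sketch below implements that rule for a single instance; the support-function fusion used when models agree is reduced to the trivial case, and the score matrix is made up.

```python
import numpy as np

def fuse_with_conflict_resolution(scores):
    """scores: (n_models, n_classes) decision scores for one instance."""
    top = scores.argmax(axis=1)
    if np.all(top == top[0]):                    # no conflict: all models agree
        return int(top[0])
    ordered = np.sort(scores, axis=1)
    margins = ordered[:, -1] - ordered[:, -2]    # top-1 minus top-2 per model
    return int(top[margins.argmax()])            # the most confident model decides

scores = np.array([[0.60, 0.30, 0.10],           # model 1 prefers class 0
                   [0.40, 0.50, 0.10],           # model 2 narrowly prefers class 1
                   [0.20, 0.70, 0.10]])          # model 3 strongly prefers class 1
print(fuse_with_conflict_resolution(scores))     # 1: the largest-margin model wins
```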


Subjects
Environment; Human Activities; Neural Networks, Computer; Pattern Recognition, Automated; Databases as Topic; Humans; Models, Theoretical; Support Vector Machine
16.
IEEE Trans Cybern ; 49(11): 3844-3858, 2019 Nov.
Article in English | MEDLINE | ID: mdl-29994699

ABSTRACT

Images are uploaded to the Internet over time, which makes concept drift and distribution changes in semantic classes unavoidable. Current hashing methods, trained on a given static database, may not be suitable for nonstationary semantic image retrieval problems. Moreover, directly retraining a whole hash table to incorporate knowledge from newly arriving image data may not be efficient. Therefore, this paper proposes a new incremental hash-bit learning method. When new data arrive, hash bits are selected from both existing and newly trained hash bits by an iterative maximization of a three-component objective function. This objective function is also used to weight the selected hash bits to re-rank retrieved images for better semantic image retrieval results. The three components evaluate a hash bit from three different angles: 1) information preservation; 2) partition balancing; and 3) bit angular difference. The proposed method combines knowledge retained from previously trained hash bits with new semantic knowledge learned from the new data by training new hash bits. In comparison to table-based incremental hashing, the proposed method automatically adjusts the number of bits from old and new data according to the concept drift in the given data via the maximization of the objective function. Experimental results show that the proposed method outperforms existing stationary hashing methods, table-based incremental hashing, and online hashing methods in 15 different simulated nonstationary data environments.

17.
IEEE Trans Cybern ; 47(11): 3814-3826, 2017 Nov.
Article in English | MEDLINE | ID: mdl-27390201

ABSTRACT

A very large volume of images is uploaded to the Internet daily. However, current hashing methods for image retrieval are designed for static databases only. They fail to consider that the distribution of images can change as new images are added to the database over time. These changes include both the discovery of new classes and shifts in the distribution of images within a class owing to concept drift. Retraining hash tables using all images in the database requires a large computational effort. It is also biased toward old data owing to the huge volume of old images, which leads to poor retrieval performance over time. In this paper, we propose the incremental hashing (ICH) method to deal with these two types of changes in the data distribution. ICH uses multihashing to retain knowledge from images arriving over time and a weight-based ranking to make the retrieval results adaptive to the new data environment. Experimental results show that the proposed method is effective in dealing with changes in the database.

18.
IEEE Trans Neural Netw Learn Syst ; 27(5): 978-92, 2016 May.
Article in English | MEDLINE | ID: mdl-26054075

ABSTRACT

The training of a multilayer perceptron neural network (MLPNN) concerns the selection of its architecture and connection weights via the minimization of both the training error and a penalty term. Different penalty terms have been proposed to control the smoothness of the MLPNN for better generalization capability. However, controlling smoothness using, for instance, the norm of the weights or the Vapnik-Chervonenkis dimension cannot distinguish individual MLPNNs with the same number of free parameters or the same norm. In this paper, to enhance generalization capability, we propose a stochastic sensitivity measure (ST-SM) as a new penalty term for MLPNN training. The ST-SM is the expectation of the squared output differences between the training samples and unseen samples located within their Q-neighborhoods for a given MLPNN. It provides a direct measurement of the MLPNN's output fluctuations, i.e., smoothness. We adopt a two-phase Pareto-based multiobjective training algorithm to minimize both the training error and the ST-SM as bi-objective functions. Experiments on 20 UCI data sets show that MLPNNs trained by the proposed algorithm yield better accuracies on testing data than several recent and classical MLPNN training methods.
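
Based on the description above, one natural way to write the ST-SM for a network f and training samples x_1, ..., x_N is as the average expected squared output difference under perturbations confined to the Q-neighborhood; the uniform perturbation distribution is an assumption made here for concreteness.

```latex
\mathrm{ST\text{-}SM}(f) \;=\; \frac{1}{N}\sum_{i=1}^{N}
\mathbb{E}_{\Delta x \sim \mathcal{U}\left([-Q,\,Q]^{d}\right)}
\Big[\big(f(x_i + \Delta x) - f(x_i)\big)^{2}\Big]
```

Training then treats the pair (training error, ST-SM) as the two objectives of the Pareto-based multiobjective algorithm described in the abstract.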

19.
IEEE Trans Cybern ; 45(11): 2402-12, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25474818

ABSTRACT

Undersampling is a widely adopted method for dealing with imbalanced pattern classification problems. Current methods mainly depend on either random resampling of the majority class or resampling at the decision boundary. Random undersampling fails to take informative samples in the data into consideration, while resampling at the decision boundary is sensitive to class overlap. Both techniques ignore the distribution information of the training dataset. In this paper, we propose a diversified sensitivity-based undersampling method. Samples of the majority class are clustered to capture the distribution information and enhance the diversity of the resampling. A stochastic sensitivity measure is applied to select samples from both the clusters of the majority class and the minority class. By iteratively clustering and sampling, a balanced set of samples yielding high classifier sensitivity is selected. The proposed method yields good generalization capability on 14 UCI datasets.

20.
IEEE Trans Neural Netw ; 18(5): 1294-305, 2007 Sep.
Article in English | MEDLINE | ID: mdl-18220181

ABSTRACT

The generalization error bounds found by current error models, which use the number of effective parameters of a classifier and the number of training samples, are usually very loose. These bounds are intended for the entire input space. However, the support vector machine (SVM), radial basis function neural network (RBFNN), and multilayer perceptron neural network (MLPNN) are local learning machines that treat unseen samples near the training samples as more important. In this paper, we propose a localized generalization error model which bounds from above the generalization error within a neighborhood of the training samples using a stochastic sensitivity measure. It is then used to develop an architecture selection technique that gives a classifier maximal coverage of unseen samples by specifying a generalization error threshold. Experiments using 17 University of California at Irvine (UCI) data sets show that, in comparison with cross validation (CV), sequential learning, and two other ad hoc methods, our technique consistently yields the best testing classification accuracy with fewer hidden neurons and less training time.


Subjects
Algorithms; Models, Statistical; Neural Networks, Computer; Pattern Recognition, Automated/methods; Computer Simulation; Reproducibility of Results; Sensitivity and Specificity