Results 1 - 20 of 32
1.
Int J Med Inform; 186: 105425, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38554589

ABSTRACT

OBJECTIVE: For patients in the Intensive Care Unit (ICU), the timing of intubation has a significant association with patient outcomes. However, accurately predicting the timing of intubation remains an unsolved challenge due to the noisy, sparse, heterogeneous, and unbalanced nature of ICU data. In this study, our objective is to develop a workflow for pre-processing ICU data and a customized deep learning model to predict the need for intubation. METHODS: To improve prediction accuracy, we transform the intubation prediction task into a time series classification task. We carefully design a sequence of data pre-processing steps to handle the multimodal noisy data. First, we discretize the sequential data and address missing data using interpolation. Next, we employ a sampling strategy to address data imbalance and standardize the data to facilitate faster model convergence. Furthermore, we employ feature selection and propose an ensemble model to combine features learned by different deep learning models. RESULTS: The performance is evaluated on Medical Information Mart for Intensive Care (MIMIC)-III, an ICU dataset. Our proposed Deep Feature Fusion method achieves an area under the receiver operating characteristic (ROC) curve of 0.8953, surpassing the performance of other deep learning and traditional machine learning models. CONCLUSION: Our proposed Deep Feature Fusion method proves to be a viable approach for predicting intubation and outperforms other deep learning and classical machine learning models. The study confirms that high-frequency time-varying indicators, particularly Mean Blood Pressure (MeanBP) and peripheral oxygen saturation (SpO2), are significant risk factors for predicting intubation.
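The pre-processing steps described above (discretization of irregular sequences, interpolation of missing values, standardization) can be sketched in a few lines of NumPy. This is a hypothetical minimal pipeline, not the paper's actual code; the function name and the one-hour grid step are illustrative assumptions.

```python
import numpy as np

def preprocess_vitals(times, values, step=1.0):
    """Discretize an irregularly sampled vital-sign series onto a fixed
    time grid, fill missing grid points by linear interpolation, and
    z-score standardize to speed model convergence."""
    grid = np.arange(times.min(), times.max() + step, step)
    filled = np.interp(grid, times, values)   # linear interpolation
    standardized = (filled - filled.mean()) / (filled.std() + 1e-8)
    return grid, standardized
```

In practice each vital sign (e.g., MeanBP, SpO2) would be gridded and standardized separately before being stacked into the multivariate input of the classifier.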


Subject(s)
Deep Learning, Humans, ROC Curve, Critical Care, Intensive Care Units, Machine Learning
2.
Article in English | MEDLINE | ID: mdl-38512732

ABSTRACT

Self-supervised learning aims to learn representations that can be effectively generalized to downstream tasks. Many self-supervised approaches regard two views of an image as both the input and the self-supervised signal, assuming that either view contains the same task-relevant information and that the shared information is (approximately) sufficient for predicting downstream tasks. Recent studies show that discarding superfluous information not shared between the views can improve generalization. Hence, the ideal representation is sufficient for downstream tasks and contains minimal superfluous information, termed the minimal sufficient representation. One can learn this representation by maximizing the mutual information between the representation and the supervised view while eliminating superfluous information. Nevertheless, the computation of mutual information is notoriously intractable. In this work, we propose an objective termed the multi-view entropy bottleneck (MVEB) to learn the minimal sufficient representation effectively. MVEB simplifies learning the minimal sufficient representation to maximizing both the agreement between the embeddings of two views and the differential entropy of the embedding distribution. Our experiments confirm that MVEB significantly improves performance. For example, it achieves top-1 accuracy of 76.9% on ImageNet with a vanilla ResNet-50 backbone on linear evaluation. To the best of our knowledge, this is a new state-of-the-art result with ResNet-50.
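The two terms MVEB maximizes can be illustrated with a rough NumPy sketch. The Gaussian-fit entropy estimate below is a simplification standing in for the paper's actual estimator, and all names are illustrative assumptions.

```python
import numpy as np

def mveb_terms(z1, z2):
    """Return (agreement, entropy_estimate) for two batches of paired
    view embeddings: cosine agreement between matched rows, and the
    differential entropy of the pooled embeddings under a Gaussian fit."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    agreement = float(np.mean(np.sum(z1 * z2, axis=1)))  # mean cosine similarity
    z = np.concatenate([z1, z2], axis=0)
    d = z.shape[1]
    cov = np.cov(z, rowvar=False) + 1e-4 * np.eye(d)     # regularized covariance
    entropy = 0.5 * (d * np.log(2 * np.pi * np.e) + np.linalg.slogdet(cov)[1])
    return agreement, float(entropy)
```

A training objective in the spirit of MVEB would maximize the sum of the two terms: high agreement pulls paired views together, while high entropy keeps the embedding distribution from collapsing.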

3.
Article in English | MEDLINE | ID: mdl-38393839

ABSTRACT

Few-shot classification aims to adapt classifiers trained on base classes to novel classes with a few shots. However, the limited amount of training data is often inadequate to represent the intraclass variations in novel classes. This can result in biased estimation of the feature distribution, which in turn results in inaccurate decision boundaries, especially when the support data are outliers. To address this issue, we propose a feature enhancement method, CORrelation-guided feature Enrichment (CORE), that generates improved features for novel classes using weak supervision from the base classes. CORE utilizes an autoencoder (AE) architecture but incorporates classification information into its latent space. This design allows CORE to generate more discriminative features while discarding irrelevant content information. After being trained on base classes, CORE's generative ability can be transferred to novel classes that are similar to those in the base classes. By using these generated features, we can reduce the estimation bias of the class distribution, which makes few-shot learning (FSL) less sensitive to the selection of support data. Our method is generic and flexible and can be used with any feature extractor and classifier. It can be easily integrated into existing FSL approaches. Experiments with different backbones and classifiers show that our proposed method consistently outperforms existing methods on various widely used benchmarks.

4.
Neural Netw; 167: 706-714, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37729786

ABSTRACT

Adversarial training is considered one of the most effective methods to improve the adversarial robustness of deep neural networks. Despite this success, it still suffers from unsatisfactory performance and overfitting. Considering the intrinsic mechanism of adversarial training, recent studies adopt the idea of curriculum learning to alleviate overfitting. However, this also introduces new issues, namely the lack of a quantitative criterion for attack strength and catastrophic forgetting. To mitigate these issues, we propose self-paced adversarial training (SPAT), which explicitly builds the learning process of adversarial training on adversarial examples of the whole dataset. Specifically, our model is first trained with "easy" adversarial examples and then continuously enhanced by gradually adding "complex" adversarial examples. This strengthens the ability to fit "complex" adversarial examples while retaining what was learned from "easy" adversarial examples. To balance adversarial examples between classes, we determine the difficulty of adversarial examples locally within each class. Notably, this learning paradigm can also be incorporated into other advanced methods to further boost adversarial robustness. Experimental results show the effectiveness of our proposed model against various attacks on widely used benchmarks. In particular, on CIFAR100, SPAT provides a boost of 1.7% (relatively 5.4%) in robust accuracy against the PGD10 attack and 3.9% (relatively 7.2%) in natural accuracy for AWP.
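The per-class, easy-to-hard selection step assumed by a SPAT-style curriculum can be sketched as follows. This is a hypothetical helper, not the paper's code; the difficulty proxy (per-example loss) and all names are illustrative.

```python
import numpy as np

def self_paced_subset(losses, labels, fraction):
    """Within each class, keep the easiest `fraction` of examples
    (lowest loss), so the curriculum grows while classes stay balanced."""
    keep = []
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        k = max(1, int(round(fraction * len(idx))))
        order = idx[np.argsort(losses[idx])]  # easiest first within the class
        keep.extend(order[:k].tolist())
    return np.sort(np.array(keep))
```

Raising `fraction` over training epochs gradually admits the "complex" adversarial examples while the "easy" ones remain in the training set.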


Subject(s)
Benchmarking, Learning, Neural Networks (Computer)
5.
Neural Netw; 168: 313-325, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37776616

ABSTRACT

Recent Transformer-based networks have shown impressive performance on single image denoising tasks. While the Transformer model promotes the interaction of long-range features, it generally involves high computational complexity. In this paper, we propose a feature-enhanced denoising network (FEDNet) that combines CNN architectures with Transformers. Specifically, we propose an effective cross-channel attention to boost the interaction of channel information and enhance channel features. To fully exploit image features, we incorporate Transformer blocks into the minimum-scale layers of the network, which can not only capture the long-distance dependency of low-resolution features but also reduce multiplier-accumulator operations (MACs). Meanwhile, a structure-preserving block is designed to enhance structural feature extraction. Experimental results on both synthetic and real-world datasets demonstrate that our model achieves state-of-the-art denoising performance with low computational cost.

6.
IEEE Trans Neural Netw Learn Syst; 34(11): 8174-8194, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35302941

ABSTRACT

Semi-supervised learning (SSL) has tremendous value in practice due to its utilization of both labeled and unlabeled data. An essential class of SSL methods, referred to in the literature as graph-based semi-supervised learning (GSSL) methods, first represents each sample as a node in an affinity graph and then infers the label information of unlabeled samples from the structure of the constructed graph. GSSL methods have demonstrated their advantages in various domains due to their uniqueness of structure, the universality of their applications, and their scalability to large-scale data. Focusing on GSSL methods only, this work aims to provide both researchers and practitioners with a solid and systematic understanding of relevant advances as well as the underlying connections among them. The concentration on one class of SSL makes this article distinct from recent surveys that cover a more general and broader picture of SSL methods yet often neglect the fundamental understanding of GSSL methods. In particular, a significant contribution of this article lies in a newly generalized taxonomy for GSSL under a unified framework, with the most up-to-date references and valuable resources such as codes, datasets, and applications. Furthermore, we present several potential research directions as future work, with our insights into this rapidly growing field.

7.
IEEE Trans Neural Netw Learn Syst; 34(11): 9562-9567, 2023 Nov.
Article in English | MEDLINE | ID: mdl-35333722

ABSTRACT

The ResNet and its variants have achieved remarkable successes in various computer vision tasks. Despite its success in making gradients flow through building blocks, the information communication of the intermediate layers of blocks is ignored. To address this issue, in this brief, we propose to introduce a regulator module as a memory mechanism to extract complementary features of the intermediate layers, which are further fed to the ResNet. In particular, the regulator module is composed of convolutional recurrent neural networks (RNNs) [e.g., convolutional long short-term memories (LSTMs) or convolutional gated recurrent units (GRUs)], which are shown to be good at extracting spatio-temporal information. We name the new regulated network the regulated residual network (RegNet). The regulator module can be easily implemented and appended to any ResNet architecture. Experimental results on three image classification datasets have demonstrated the promising performance of the proposed architecture compared with the standard ResNet, squeeze-and-excitation ResNet, and other state-of-the-art architectures.

8.
Article in English | MEDLINE | ID: mdl-36279339

ABSTRACT

Real-world data usually present long-tailed distributions. Training on imbalanced data tends to make neural networks perform well on head classes while performing much worse on tail classes. The severe sparseness of training instances for the tail classes is the main challenge, which results in biased distribution estimation during training. Plenty of effort has been devoted to ameliorating this challenge, including data resampling and synthesizing new training instances for tail classes. However, no prior research has exploited the transferable knowledge from head classes to tail classes for calibrating the distribution of tail classes. In this article, we suppose that tail classes can be enriched by similar head classes and propose a novel label-aware distribution calibration (DC) approach. The proposed approach transfers the statistics from relevant head classes to infer the distribution of tail classes. Sampling from the calibrated distribution further facilitates rebalancing the classifier. Experiments on both image and text long-tailed datasets demonstrate that the approach significantly outperforms existing methods. Visualization also shows that it provides a more accurate distribution estimation.
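The statistics-transfer idea can be sketched in NumPy: borrow mean and covariance from the most similar head classes to calibrate a tail class's distribution estimate. This is an illustrative sketch, not the paper's exact formulation; the blending rule and all names are assumptions.

```python
import numpy as np

def calibrate_tail_stats(tail_feats, head_means, head_covs, topk=2):
    """Estimate a tail class's (mean, covariance) by averaging the
    statistics of the `topk` nearest head classes (by mean distance)
    and blending the mean with the tail's own sample mean."""
    mu_t = tail_feats.mean(axis=0)
    dists = np.linalg.norm(head_means - mu_t, axis=1)
    nearest = np.argsort(dists)[:topk]
    mu = (mu_t + head_means[nearest].mean(axis=0)) / 2.0  # blended mean
    cov = head_covs[nearest].mean(axis=0)                 # borrowed covariance
    return mu, cov
```

Sampling synthetic tail features from the calibrated Gaussian then supplies the extra instances used to rebalance the classifier.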

9.
Neural Netw; 155: 360-368, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36115162

ABSTRACT

Convolutional Neural Networks (CNNs) have achieved tremendous success in a number of learning tasks, including image classification. Residual-like networks, such as ResNets, mainly focus on the skip connection to avoid gradient vanishing. However, the skip connection mechanism limits the utilization of intermediate features due to simple iterative updates. To mitigate the redundancy of residual-like networks, we design Attentive Feature Integration (AFI) modules, which are widely applicable to most residual-like network architectures, leading to new architectures named AFI-Nets. AFI-Nets explicitly model the correlations among different levels of features and selectively transfer features with little overhead. AFI-ResNet-152 obtains a 1.24% relative improvement on the ImageNet dataset while decreasing the FLOPs by about 10% and the number of parameters by about 9.2% compared to ResNet-152.
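The selective transfer across feature levels can be illustrated with a softmax-weighted fusion. The sketch below is a loose, non-trainable stand-in for the paper's AFI module, using dot-product relevance to the current level as the attention score; everything here is an illustrative assumption.

```python
import numpy as np

def attentive_feature_integration(features):
    """Fuse a list of per-level feature vectors: score each level by its
    dot product with the current (last) level, softmax-normalize the
    scores, and return the weighted sum of all levels."""
    feats = np.stack(features)          # (levels, dim)
    target = feats[-1]                  # current level attends over all levels
    scores = feats @ target             # dot-product relevance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ feats              # attentive integration
```

In the actual AFI-Nets the weights come from a small trainable module over feature maps, not from raw dot products.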


Subject(s)
Image Processing (Computer-Assisted), Neural Networks (Computer), Image Processing (Computer-Assisted)/methods
10.
Article in English | MEDLINE | ID: mdl-36136919

ABSTRACT

Multiview graph clustering has emerged as an important yet challenging technique due to the difficulty of exploiting the similarity relationships among multiple views. Typically, the similarity graph for each view learned by these methods is easily corrupted because of the unavoidable noise or diversity among views. To recover a clean graph, existing methods mainly focus on the diverse part within each graph yet overlook the diversity across multiple graphs. In this article, instead of merely considering the sparsity of diversity within a graph as previous methods do, we adopt the more suitable view that the diversity should be sparse across graphs. Intuitively, the divergent parts are supposed to be inconsistent with each other; otherwise, this would contradict the definition of diversity. By simultaneously and explicitly detecting the multiview consistency and cross-graph diversity, a pure graph for each view can be expected. The multiple pure graphs are further fused into a structured consensus graph with exactly r connected components, where r is the number of clusters. Once the consensus graph is obtained, the cluster label of each instance can be directly allocated, as each connected component precisely corresponds to an individual cluster. An alternating iterative algorithm is designed to optimize the subtasks of learning the similarity graphs adaptively, detecting the consistency as well as the cross-graph diversity, fusing the multiple pure graphs, and assigning a cluster label to each instance in a mutually reinforcing manner. Extensive experimental results on several benchmark multiview datasets demonstrate the effectiveness of our model in comparison to several state-of-the-art algorithms.
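The "exactly r connected components" property can be checked with a standard spectral fact: the multiplicity of the zero eigenvalue of the graph Laplacian L = D - W equals the number of connected components. A minimal sketch (the tolerance is an illustrative choice):

```python
import numpy as np

def n_connected_components(W, tol=1e-8):
    """Count connected components of an undirected weighted graph W
    as the multiplicity of the zero eigenvalue of its Laplacian."""
    D = np.diag(W.sum(axis=1))
    L = D - W                       # unnormalized graph Laplacian
    eigvals = np.linalg.eigvalsh(L) # L is symmetric
    return int(np.sum(np.abs(eigvals) < tol))
```

A consensus graph returning r here directly yields the clustering: each component is one cluster, so no extra k-means step is needed.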

11.
Int J Mol Sci; 23(7), 2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35409258

ABSTRACT

Single-cell RNA sequencing (scRNA-seq) allows researchers to explore tissue heterogeneity, distinguish unusual cell identities, and find novel cellular subtypes by providing transcriptome profiling for individual cells. Clustering analysis is usually used to predict cell class assignments and infer cell identities. However, the performance of existing single-cell clustering methods is extremely sensitive to the presence of noisy data and outliers, and existing clustering algorithms can easily fall into local optima. There is still no consensus on the best-performing method. To address this issue, we introduce a single-cell self-paced clustering (scSPaC) method with F-norm-based nonnegative matrix factorization (NMF) for scRNA-seq data, and a sparse single-cell self-paced clustering (sscSPaC) method with l21-norm-based NMF. We gradually add single cells, from simple to complex, to our model until all cells are selected. In this way, the influence of noisy data and outliers can be significantly reduced. The proposed method achieved the best performance on both simulated data and real scRNA-seq data. A case study on clustering scRNA-seq data from human Clara cells and ependymal cells shows that scSPaC is more advantageous near the dividing line between clusters.
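A toy version of the idea can be written as F-norm NMF whose multiplicative updates are reweighted by a self-paced indicator admitting cells with reconstruction error below an age parameter. This is an illustrative sketch only; parameter names, the hard 0/1 weighting, and the fallback rule are assumptions, not the paper's algorithm.

```python
import numpy as np

def self_paced_nmf(X, k, lam, n_iter=500, seed=0):
    """F-norm NMF (X ~ W @ H, all nonnegative) with self-paced sample
    weights: each iteration keeps only cells whose reconstruction
    error is below `lam` (easy cells first)."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k))
    H = rng.random((k, m))
    eps = 1e-10
    for _ in range(n_iter):
        resid = np.sum((X - W @ H) ** 2, axis=1)
        d = (resid < lam).astype(float)   # self-paced 0/1 weights per cell
        if d.sum() == 0:
            d[:] = 1.0                    # fallback: no cell selected yet
        D = np.diag(d)
        # weighted multiplicative updates
        W *= (D @ X @ H.T) / (D @ W @ H @ H.T + eps)
        H *= (W.T @ D @ X) / (W.T @ D @ W @ H + eps)
    return W, H
```

Growing `lam` over outer iterations would reproduce the simple-to-complex schedule; with a very large `lam` this reduces to plain weighted-free NMF.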


Subject(s)
Single-Cell Analysis, Transcriptome, Algorithms, Cluster Analysis, Gene Expression Profiling/methods, Humans, Sequence Analysis (RNA)/methods, Single-Cell Analysis/methods
12.
IEEE Trans Neural Netw Learn Syst; 33(11): 6346-6359, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34029195

ABSTRACT

Semisupervised learning (SSL) has been extensively studied in the literature. Despite its success, many existing learning algorithms for semisupervised problems require specific distributional assumptions, such as the "cluster assumption" and the "low-density assumption," which are often hard to verify in practice. We are interested in quantifying the effect of SSL based on kernel methods under a misspecified setting, meaning that the target function is not contained in the hypothesis space under which a specific learning algorithm works. Practically, this assumption is mild and standard for various kernel-based approaches. Under this misspecified setting, this article attempts to provide a theoretical justification of when and how unlabeled data can be exploited to improve inference in a learning task. Our theoretical justification is given from the viewpoint of the asymptotic variance of our proposed two-step estimator. It is shown that the proposed pointwise nonparametric estimator has a smaller asymptotic variance than the supervised estimator using the labeled data alone. Several simulated experiments are presented to support our theoretical results.

13.
Int J Med Inform; 155: 104570, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34547624

ABSTRACT

BACKGROUND: It is a great challenge for emergency physicians to detect patient deterioration early and prevent unexpected deaths from a large amount of clinical data, which requires sufficient experience and keen insight. OBJECTIVE: To evaluate the performance of machine learning models in quantifying the severity of emergency department (ED) patients and identifying high-risk patients. METHODS: Using routinely available demographics, vital signs, and laboratory tests extracted from electronic health records (EHRs), a framework based on machine learning and feature engineering was proposed for mortality prediction. Patients who had one complete record of vital signs and laboratory tests in the ED were included. The following patients were excluded: pediatric patients aged < 18 years, pregnant women, and patients who died or were discharged or hospitalized within 12 h after admission. Based on 76 original features extracted, 9 machine learning models were adopted to validate our proposed framework. Their optimal hyper-parameters were fine-tuned using the grid search method. The prediction results were evaluated on performance metrics (i.e., accuracy, area under the curve (AUC), recall, and precision) with repeated 5-fold cross-validation (CV). The time window from patient admission to prediction was analyzed at 12 h, 24 h, 48 h, and the entire stay. RESULTS: We studied a total of 1114 ED patients, of whom 71.54% (797/1114) survived and 28.46% (317/1114) died in the hospital. The results revealed that a more complete time window leads to better prediction performance. Using the entire-stay records, the LightGBM model with refined feature engineering demonstrated high discrimination and achieved 93.6% (±0.008) accuracy, 97.6% (±0.003) AUC, 97.1% (±0.008) recall, and 94.2% (±0.006) precision, even though no diagnostic information was utilized.
CONCLUSIONS: This study quantifies the criticality of ED patients and appears to have significant potential as a clinical decision support tool for assisting physicians in their clinical routine. While the model requires validation before use elsewhere, the same methodology could be used to create a strong model for a new hospital.
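The repeated 5-fold cross-validation protocol described above can be sketched as an index generator in plain NumPy (no scikit-learn dependency). Names and the reshuffle-per-repeat behavior are illustrative assumptions.

```python
import numpy as np

def repeated_kfold_indices(n, k=5, repeats=2, seed=0):
    """Yield (train_idx, test_idx) pairs for repeated k-fold CV over
    n samples; each repeat reshuffles before splitting into k folds."""
    rng = np.random.default_rng(seed)
    for _ in range(repeats):
        perm = rng.permutation(n)
        folds = np.array_split(perm, k)
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            yield train, test
```

Each model's hyper-parameter grid would be evaluated by averaging the metric over all k x repeats splits.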


Subject(s)
Emergency Service (Hospital), Machine Learning, Child, Electronic Health Records, Female, Humans, Patient Admission, Patient Discharge
14.
IEEE Trans Image Process; 30: 5252-5263, 2021.
Article in English | MEDLINE | ID: mdl-34033539

ABSTRACT

Auto-Encoder (AE)-based deep subspace clustering (DSC) methods have achieved impressive performance due to the powerful representations extracted by deep neural networks while prioritizing categorical separability. However, the self-reconstruction loss of an AE ignores rich, useful relational information and might lead to indiscriminative representations, which inevitably degrades clustering performance. It is also challenging to learn high-level similarity without feeding in semantic labels. Another unsolved problem facing DSC is the huge memory cost of the n×n similarity matrix, which is incurred by the self-expression layer between the encoder and decoder. To tackle these problems, we use pairwise similarity to weigh the reconstruction loss to capture local structure information, while the similarity is learned by the self-expression layer. Pseudo-graphs and pseudo-labels, which allow benefiting from uncertain knowledge acquired during network training, are further employed to supervise similarity learning. Joint learning and iterative training facilitate obtaining an overall optimal solution. Extensive experiments on benchmark datasets demonstrate the superiority of our approach. By combining it with the k-nearest neighbors algorithm, we further show that our method can address the large-scale and out-of-sample problems. The source code of our method is available at: https://github.com/sckangz/SelfsupervisedSC.
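One simple way to make reconstruction similarity-aware, in the spirit of the weighting described above, is to weight each sample's reconstruction error by its total learned similarity to other samples. This is an illustrative variant, not the paper's loss; all names are assumptions.

```python
import numpy as np

def weighted_reconstruction_loss(X, X_hat, S):
    """Per-sample squared reconstruction errors reweighted by the row
    sums of a learned similarity matrix S, so samples embedded in
    dense local neighborhoods dominate the loss."""
    errors = np.sum((X - X_hat) ** 2, axis=1)  # per-sample error
    weights = S.sum(axis=1)
    weights = weights / weights.sum()          # normalize to a distribution
    return float(weights @ errors)
```

In the actual method, S is produced by the self-expression layer and refined jointly with the encoder, rather than fixed in advance.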

15.
Neural Netw; 141: 385-394, 2021 Sep.
Article in English | MEDLINE | ID: mdl-33992974

ABSTRACT

Code retrieval is a common practice for programmers to reuse existing code snippets from open-source repositories. Given a user query (i.e., a natural language description), code retrieval aims at searching for the most relevant snippets from a set of code snippets. The main challenge of effective code retrieval lies in mitigating the semantic gap between natural language descriptions and code snippets. With the ever-increasing amount of available open-source code, recent studies resort to neural networks to learn the semantic matching relationships between the two sources. Statement-level dependency information, which highlights the dependency relations among program statements during execution, reflects the structural importance of a statement in the code. It is favorable for accurately capturing code semantics but has never been explored for the code retrieval task. In this paper, we propose CRaDLe, a novel approach for Code Retrieval based on statement-level semantic Dependency Learning. Specifically, CRaDLe distills code representations by fusing both the dependency and semantic information at the statement level and then learns a unified vector representation for each code-description pair to model the matching relationship. Comprehensive experiments and analysis on real-world datasets show that the proposed approach can accurately retrieve code snippets for a given query and significantly outperforms state-of-the-art approaches on the task.
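Once a model of this kind has embedded the query and each code snippet into a shared vector space, the retrieval step itself is just cosine ranking. A minimal sketch (the embedding model is not reproduced; names are illustrative):

```python
import numpy as np

def retrieve(query_vec, code_vecs):
    """Rank code snippets by cosine similarity to a query embedding.
    Returns (ranking of snippet indices, best first; raw scores)."""
    q = query_vec / np.linalg.norm(query_vec)
    C = code_vecs / np.linalg.norm(code_vecs, axis=1, keepdims=True)
    scores = C @ q
    return np.argsort(-scores), scores
```

The matching quality therefore rests entirely on how well the learned embeddings close the semantic gap between description and code.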


Subject(s)
Semantics, Humans, Machine Learning, Natural Language Processing, Neural Networks (Computer), Software
16.
Neural Netw; 132: 461-476, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33039785

ABSTRACT

Generative adversarial networks (GANs) have been extensively studied in recent years and have powered a wide range of applications, from image generation, image-to-image translation, and text-to-image generation to visual recognition. These methods typically model the mapping from latent space to image with single or multiple generators. However, they have obvious drawbacks: (i) they ignore the multi-modal structure of images, and (ii) they lack model interpretability. Importantly, existing methods mostly assume that one or more generators can cover all image modes even when the structure of the data is unknown. Thus, mode dropping and collapse often take place during GAN training. Despite its importance, exploiting the data structure in generation has remained almost unexplored. In this work, aiming at generating multi-modal images and interpreting the model explicitly, we explore the theory of how to integrate GANs with a data structure prior, and propose latent Dirichlet allocation based generative adversarial networks (LDAGAN). This framework can be combined with a variety of state-of-the-art single-generator GANs and achieves improved performance. Extensive experiments on synthetic and real datasets demonstrate the efficacy of LDAGAN for multi-modal image generation. An implementation of LDAGAN is available at https://github.com/Sumching/LDAGAN.


Subject(s)
Image Processing (Computer-Assisted)/methods, Neural Networks (Computer)
17.
Neural Netw; 131: 93-102, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32763763

ABSTRACT

Deep auto-encoders (DAEs) have achieved great success in learning data representations via the powerful representational ability of neural networks. However, most DAEs only focus on the most dominant structures that allow reconstructing the data from a latent space and neglect rich latent structural information. In this work, we propose a new representation learning method that explicitly models and leverages sample relations, which in turn are used as supervision to guide the representation learning. Different from previous work, our framework well preserves the relations between samples. Since the prediction of pairwise relations is itself a fundamental problem, our model adaptively learns them from data. This provides much flexibility to encode the real data manifold. The important role of relation and representation learning is evaluated on the clustering task. Extensive experiments on benchmark datasets demonstrate the superiority of our approach. By seeking to embed samples into a subspace, we further show that our method can address the large-scale and out-of-sample problems. Our source code is publicly available at: https://github.com/nbShawnLu/RGRL.


Subject(s)
Machine Learning, Cluster Analysis
18.
Neural Netw; 129: 138-148, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32516696

ABSTRACT

Leveraging the underlying low-dimensional structure of data, low-rank and sparse modeling approaches have achieved great success in a wide range of applications. However, in many applications the data can display structures beyond simply being low-rank or sparse. Fully extracting and exploiting hidden structural information in the data is always desirable. To reveal more of the underlying effective manifold structure, in this paper we explicitly model the data relations. Specifically, we propose a structure learning framework that retains the pairwise similarities between data points. Rather than just trying to reconstruct the original data based on self-expression, we also reconstruct the kernel matrix, which serves to preserve similarity. Consequently, this technique is particularly suitable for the class of learning problems that are sensitive to sample similarity, e.g., clustering and semisupervised classification. To take advantage of the representational power of deep neural networks, a deep auto-encoder architecture is further designed to implement our model. Extensive experiments on benchmark datasets demonstrate that our proposed framework can consistently and significantly improve performance on both evaluation tasks. We conclude that the quality of structure learning can be enhanced if similarity information is incorporated.


Subject(s)
Deep Learning/standards, Benchmarking
19.
Neural Netw; 130: 11-21, 2020 Oct.
Article in English | MEDLINE | ID: mdl-32589587

ABSTRACT

Deep neural networks (DNNs) have achieved outstanding performance in a wide range of applications, e.g., image classification, natural language processing, etc. Despite this good performance, the huge number of parameters in DNNs poses challenges for efficient training and for deployment on low-end devices with limited computing resources. In this paper, we explore the correlations in the weight matrices and approximate the weight matrices with low-rank block-term tensors. We name the corresponding new structure block-term tensor layers (BT-layers), which can be easily adapted to neural network models such as CNNs and RNNs. In particular, the inputs and outputs of BT-layers are reshaped into low-dimensional high-order tensors with a similar or improved representation power. Extensive experiments have demonstrated that BT-layers in CNNs and RNNs can achieve a very large compression ratio on the number of parameters while preserving or improving the representation power of the original DNNs.
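The compression principle behind BT-layers can be illustrated with the simpler matrix analogue: replacing a weight matrix by a rank-r factorization shrinks the parameter count from mn to r(m + n). The sketch below uses truncated SVD as a stand-in for the block-term tensor decomposition; it is illustrative, not the paper's method.

```python
import numpy as np

def low_rank_compress(W, rank):
    """Factor W (m x n) into A (m x rank) @ B (rank x n) via truncated
    SVD and report the parameter compression ratio mn / (rank*(m+n))."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]     # absorb singular values into A
    B = Vt[:rank, :]
    ratio = W.size / (A.size + B.size)
    return A, B, ratio
```

BT-layers push this further by also reshaping inputs and outputs into high-order tensors, which allows much larger compression ratios than a flat matrix factorization.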


Subject(s)
Natural Language Processing, Neural Networks (Computer), Data Compression/methods
20.
Neural Netw; 127: 182-192, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32361548

ABSTRACT

The accuracy of deep learning (e.g., convolutional neural networks) for an image classification task critically relies on the amount of labeled training data. Aiming to solve an image classification task on a new domain that lacks labeled data but has access to cheaply available unlabeled data, unsupervised domain adaptation is a promising technique to boost performance without incurring extra labeling cost, by assuming that images from different domains share some invariant characteristics. In this paper, we propose a new unsupervised domain adaptation method named Domain-Adversarial Residual-Transfer (DART) learning of deep neural networks to tackle cross-domain image classification tasks. In contrast to existing unsupervised domain adaptation approaches, the proposed DART not only learns domain-invariant features via adversarial training but also achieves robust domain-adaptive classification via a residual-transfer strategy, all in an end-to-end training framework. We evaluate the performance of the proposed method on cross-domain image classification tasks using several well-known benchmark datasets, on which our method clearly outperforms state-of-the-art approaches.


Subject(s)
Neural Networks (Computer), Pattern Recognition (Automated)/methods, Unsupervised Machine Learning, Deep Learning/trends, Humans, Pattern Recognition (Automated)/trends, Unsupervised Machine Learning/trends