Results 1 - 20 of 66
1.
Phys Rev Lett ; 128(11): 110501, 2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35363009

ABSTRACT

Recognizing entangled states is notoriously difficult when no prior information is available. Here, we propose an efficient quantum adversarial bipartite entanglement detection scheme to address this issue. Our proposal reformulates bipartite entanglement detection as a two-player zero-sum game played by parameterized quantum circuits, where a two-outcome measurement can be used to query a classical binary result about whether the input state is bipartite entangled or not. In principle, for an N-qubit quantum state, the runtime complexity of our proposal is O(poly(N)T), with T being the number of iterations. We experimentally implement our protocol on a linear optical network and demonstrate its effectiveness in detecting bipartite entanglement for 5-qubit pure states and 2-qubit mixed states. Our work paves the way for using near-term quantum machines to tackle entanglement detection in multipartite entangled quantum systems.
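
The zero-sum-game formulation in this abstract can be pictured as alternating parameter updates for two players. The sketch below is a minimal classical stand-in, assuming a scalar payoff function in place of the two-outcome measurement expectation; the circuit simulation, state preparation, and the authors' actual cost are not reproduced.

```python
import numpy as np

def payoff(theta_d, theta_g):
    # Hypothetical smooth stand-in for the measurement expectation value;
    # in the actual protocol this would come from parameterized quantum circuits.
    return np.sin(theta_d).dot(np.cos(theta_g))

def num_grad(f, x, eps=1e-4):
    # Finite-difference gradient (parameter-shift rules would be used on hardware).
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

theta_d = np.random.randn(6)   # "detector" player parameters
theta_g = np.random.randn(6)   # adversarial player parameters
lr, T = 0.1, 200               # T iterations, matching the O(poly(N)T) runtime above

for _ in range(T):
    theta_d += lr * num_grad(lambda x: payoff(x, theta_g), theta_d)  # maximize
    theta_g -= lr * num_grad(lambda x: payoff(theta_d, x), theta_g)  # minimize
```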

2.
Pattern Recognit ; 124: 108499, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34924632

ABSTRACT

There is an urgent need for automated methods to assist accurate and effective assessment of COVID-19. Radiology and nucleic acid testing (NAT) are complementary COVID-19 diagnosis methods. In this paper, we present an end-to-end multitask learning (MTL) framework (COVID-MTL) that is capable of automated and simultaneous detection (against both radiology and NAT) and severity assessment of COVID-19. COVID-MTL learns different COVID-19 tasks in parallel through our novel random-weighted loss function, which assigns learning weights under a Dirichlet distribution to prevent task dominance, and our new 3D real-time augmentation algorithm (Shift3D), which introduces spatial variance for 3D CNN components by shifting low-level feature representations of volumetric inputs in three dimensions; together, these allow the MTL framework to accelerate convergence and improve joint learning performance compared with single-task models. Using only chest CT scans, COVID-MTL was trained on 930 scans and tested on a separate set of 399 cases. COVID-MTL achieved AUCs of 0.939 and 0.846, and accuracies of 90.23% and 79.20%, for detection of COVID-19 against radiology and NAT, respectively, outperforming state-of-the-art models. Meanwhile, COVID-MTL yielded AUCs of 0.800 ± 0.020 and 0.813 ± 0.021 (with transfer learning) for classifying control/suspected, mild/regular, and severe/critically-ill cases. To decipher the recognition mechanism, we also identified high-throughput lung features that were significantly related (P < 0.001) to the positivity and severity of COVID-19.
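
The random-weighted loss described above can be sketched as sampling per-task weights from a Dirichlet distribution at each step and forming a weighted sum of task losses. This is a minimal illustration, assuming three scalar task losses and a unit concentration parameter; the Shift3D augmentation and the 3D CNN backbone from the paper are not reproduced.

```python
import torch
from torch.distributions import Dirichlet

def random_weighted_loss(task_losses, concentration=1.0):
    """Combine per-task losses with weights drawn from a Dirichlet distribution.

    Resampling the weights at every step prevents any single task from
    dominating the joint objective (the motivation given in the abstract).
    """
    k = len(task_losses)
    alpha = torch.full((k,), concentration)
    w = Dirichlet(alpha).sample()          # non-negative weights summing to 1
    return sum(wi * li for wi, li in zip(w, task_losses))

# Usage with three hypothetical task losses (radiology detection, NAT detection, severity):
losses = [torch.tensor(0.7), torch.tensor(1.2), torch.tensor(0.4)]
total = random_weighted_loss(losses)
```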

3.
Neural Comput ; 28(10): 2213-49, 2016 Oct.
Article in English | MEDLINE | ID: mdl-27391679

ABSTRACT

The k-dimensional coding schemes refer to a collection of methods that attempt to represent data using a set of representative k-dimensional vectors; they include nonnegative matrix factorization, dictionary learning, sparse coding, k-means clustering, and vector quantization as special cases. Previous generalization bounds for the reconstruction error of k-dimensional coding schemes are mainly dimensionality-independent. A major advantage of these bounds is that they can be used to analyze the generalization error when data are mapped into an infinite- or high-dimensional feature space. However, many applications use finite-dimensional data features. Can we obtain dimensionality-dependent generalization bounds for k-dimensional coding schemes that are tighter than dimensionality-independent bounds when data lie in a finite-dimensional feature space? Yes. In this letter, we address this problem and derive a dimensionality-dependent generalization bound for k-dimensional coding schemes by bounding the covering number of the loss function class induced by the reconstruction error. The bound is of order [Formula: see text], where m is the dimension of features, k is the number of columns in the linear implementation of coding schemes, and n is the sample size; [Formula: see text] when n is finite and [Formula: see text] when n is infinite. We show that our bound can be tighter than previous results because it avoids inducing the worst-case upper bound on k of the loss function. The proposed generalization bound is also applied to some specific coding schemes to demonstrate that the dimensionality-dependent bound is an indispensable complement to the dimensionality-independent generalization bounds.
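
For concreteness, the reconstruction error that these bounds control can be computed directly for one of the listed special cases, k-means clustering, where each sample is coded by its nearest of k centers. A small sketch, assuming Euclidean data in R^m; the covering-number argument and the bound itself are not reproduced here.

```python
import numpy as np

def kmeans_reconstruction_error(X, centers):
    """Average squared distance from each sample to its nearest center.

    X: (n, m) data matrix; centers: (k, m) codebook. This is the empirical
    reconstruction error whose gap to its expectation the generalization
    bounds in the letter control.
    """
    # squared distances of every sample to every center, shape (n, k)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))        # n = 500 samples, m = 10 features
C = rng.normal(size=(8, 10))          # k = 8 codebook vectors
err = kmeans_reconstruction_error(X, C)
```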

4.
Chem Commun (Camb) ; 60(9): 1176-1179, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38193594

ABSTRACT

We show here that visible-light-induced electron transfer from an excited dye to an in situ generated Pt cocatalyst can be promoted by employing water-soluble fullerenol (C60(OH)24) as an electron mediator; as a result, the fullerenol-based molecular system shows a 3-fold higher H2 evolution activity than the C60(OH)24-free system.

5.
Article in English | MEDLINE | ID: mdl-38607713

ABSTRACT

Learning from crowds refers to the setting in which the annotations of training data are obtained via crowd-sourcing services. Multiple annotators each complete their own small part of the annotations, and labeling mistakes that depend on the annotator occur frequently. Modeling the label-noise generation process with a noise transition matrix is a powerful tool for tackling label noise. In real-world crowd-sourcing scenarios, noise transition matrices are both annotator- and instance-dependent. However, due to the high complexity of annotator- and instance-dependent transition matrices (AIDTM), annotation sparsity, meaning that each annotator labels only a tiny fraction of the instances, makes modeling AIDTM very challenging. Without prior knowledge, existing works simplify the problem by assuming the transition matrix is instance-independent or by using simple parametric forms, which lose modeling generality. Motivated by this, we target the more realistic problem of estimating general AIDTM in practice. Without losing modeling generality, we parameterize AIDTM with deep neural networks. To alleviate the modeling challenge, we assume that every annotator shares their noise pattern with similar annotators, and we estimate AIDTM via knowledge transfer. We therefore first model the mixture of noise patterns produced by all annotators, and then transfer this model to individual annotators. Furthermore, because the transfer from the mixture of noise patterns to individuals may cause two annotators with highly different noise generation to perturb each other, we employ knowledge transfer between identified neighboring annotators to calibrate the modeling. Theoretical analyses demonstrate that both the knowledge transfer from global to individuals and the knowledge transfer between neighboring individuals effectively help mitigate the challenge of modeling general AIDTM. Experiments confirm the superiority of the proposed approach on synthetic and real-world crowd-sourcing data. The implementation is available at https://github.com/tmllab/TAIDTM.
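
The deep parameterization of AIDTM described above can be sketched as a network that takes an instance's features together with an annotator embedding and outputs a row-stochastic C x C transition matrix. This is a minimal sketch under assumed dimensions; the global-to-individual transfer and the neighbor-based calibration from the paper are not implemented here.

```python
import torch
import torch.nn as nn

class TransitionNet(nn.Module):
    """Maps (instance features, annotator id) to a C x C transition matrix
    whose rows are probability distributions over observed noisy labels."""
    def __init__(self, feat_dim, num_annotators, num_classes, hidden=128):
        super().__init__()
        self.annotator_emb = nn.Embedding(num_annotators, 32)
        self.net = nn.Sequential(
            nn.Linear(feat_dim + 32, hidden), nn.ReLU(),
            nn.Linear(hidden, num_classes * num_classes),
        )
        self.num_classes = num_classes

    def forward(self, feats, annotator_ids):
        h = torch.cat([feats, self.annotator_emb(annotator_ids)], dim=1)
        logits = self.net(h).view(-1, self.num_classes, self.num_classes)
        return torch.softmax(logits, dim=2)   # row-stochastic per instance/annotator

# Usage: noisy-label likelihood = clean posterior pushed through the matrix.
T_net = TransitionNet(feat_dim=64, num_annotators=50, num_classes=10)
feats = torch.randn(8, 64)
ann = torch.randint(0, 50, (8,))
T = T_net(feats, ann)                        # shape (8, 10, 10)
clean_post = torch.softmax(torch.randn(8, 10), dim=1)
noisy_post = torch.bmm(clean_post.unsqueeze(1), T).squeeze(1)
```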

6.
IEEE Trans Pattern Anal Mach Intell ; 46(7): 4830-4842, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38300782

ABSTRACT

Many machine learning algorithms are known to be fragile to simple instance-independent noisy labels. However, noisy labels in real-world data are more devastating because they are produced by more complicated mechanisms in an instance-dependent manner. In this paper, we target this practical challenge of instance-dependent noisy labels by jointly training (1) a model that reverse-engineers the noise-generating mechanism, producing an instance-dependent mapping between the clean label posterior and the observed noisy label, and (2) a robust classifier that produces clean label posteriors. Compared to previous methods, the former model is novel and enables end-to-end learning of the latter directly from noisy labels. An extensive empirical study indicates that the time-consistency of data is critical to the success of training both models, and it motivates us to develop a curriculum that selects training data based on their dynamics on the two models' outputs over the course of training. We show that the curriculum-selected data provide both clean labels and high-quality input-output pairs for training the two models. Therefore, our approach leads to promising and robust classification performance even in notably challenging settings of instance-dependent noisy labels where many state-of-the-art methods could easily fail. Extensive experimental comparisons and ablation studies further demonstrate the advantages and significance of the time-consistency curriculum in learning from instance-dependent noisy labels on multiple benchmark datasets.
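
The joint training described above, a classifier producing clean-label posteriors plus a noise model mapping them to the observed noisy labels, can be sketched as below. A minimal sketch with assumed module shapes; the time-consistency curriculum and the authors' specific architectures are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

classifier = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
# Noise model: per-instance mapping from clean posterior to noisy posterior.
noise_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10 * 10))

def noisy_nll(x, noisy_y):
    clean_post = torch.softmax(classifier(x), dim=1)           # p(clean | x)
    T = torch.softmax(noise_model(x).view(-1, 10, 10), dim=2)  # p(noisy | clean, x)
    noisy_post = torch.bmm(clean_post.unsqueeze(1), T).squeeze(1)
    return F.nll_loss(torch.log(noisy_post + 1e-8), noisy_y)

x = torch.randn(16, 32)
noisy_y = torch.randint(0, 10, (16,))
loss = noisy_nll(x, noisy_y)        # backprop trains both modules end to end
loss.backward()
```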

7.
IEEE Trans Pattern Anal Mach Intell ; 46(5): 3522-3536, 2024 May.
Article in English | MEDLINE | ID: mdl-38153827

ABSTRACT

The sample selection approach is very popular in learning with noisy labels. As deep networks "learn patterns first", prior methods built on sample selection share a similar training procedure: the small-loss examples are regarded as clean examples and used to help generalization, while the large-loss examples are treated as mislabeled ones and excluded from network parameter updates. However, such a procedure is debatable on two counts: (a) it does not consider the bad influence of noisy labels within the selected small-loss examples; (b) it does not make good use of the discarded large-loss examples, which may be clean or carry meaningful information for generalization. In this paper, we propose regularly truncated M-estimators (RTME) to address these two issues simultaneously. Specifically, RTME alternately switches between truncated M-estimators and original M-estimators. The former can adaptively select small-loss examples without knowing the noise rate and reduce the side effects of noisy labels among them. The latter involves the possibly clean but large-loss examples to help generalization. Theoretically, we demonstrate that our strategies are label-noise-tolerant. Empirically, comprehensive experimental results show that our method outperforms multiple baselines and is robust to a broad range of noise types and levels.
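
One way to picture the regular switching between truncated and original M-estimators is a training step whose loss either truncates large per-example losses or keeps them all, depending on the epoch. This is a schematic reading, assuming a simple hard truncation at a per-batch keep ratio; the paper's actual estimators and switching schedule may differ.

```python
import torch
import torch.nn.functional as F

def rtme_style_loss(logits, labels, epoch, period=5, keep_ratio=0.7):
    """Alternate between a truncated loss (small-loss examples only) and the
    plain average loss, switching every `period` epochs."""
    per_example = F.cross_entropy(logits, labels, reduction="none")
    if (epoch // period) % 2 == 0:
        # truncated mode: keep only the smallest losses (likely-clean examples)
        k = max(1, int(keep_ratio * per_example.numel()))
        kept, _ = torch.topk(per_example, k, largest=False)
        return kept.mean()
    # original mode: large-loss (possibly clean) examples also contribute
    return per_example.mean()

logits = torch.randn(32, 10, requires_grad=True)
labels = torch.randint(0, 10, (32,))
loss = rtme_style_loss(logits, labels, epoch=3)
```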

8.
Article in English | MEDLINE | ID: mdl-38546996

ABSTRACT

Given data with noisy labels, over-parameterized deep networks are prone to overfitting the mislabeled data, resulting in poor generalization. The memorization effect of deep networks shows that although the networks can memorize all noisy data, they first memorize clean training data and only gradually memorize mislabeled training data. A simple and effective method that exploits the memorization effect to combat noisy labels is early stopping. However, early stopping cannot distinguish the memorization of clean data from that of mislabeled data, so the network still inevitably overfits mislabeled data in the early training stage. In this paper, to decouple the memorization of clean data and mislabeled data, and to further reduce the side effects of mislabeled data, we perform additive decomposition on the network parameters: all parameters w are additively decomposed into two groups, where one group is considered to memorize clean data and the other to memorize mislabeled data. Benefiting from the memorization effect, the updates of the clean-data group are encouraged to fully memorize clean data in early training and are then discouraged as training epochs increase, to reduce interference from mislabeled data; the updates of the mislabeled-data group follow the opposite schedule. At test time, only the clean-data group of parameters is employed to enhance generalization. Extensive experiments on both simulated and real-world benchmarks confirm the superior performance of our method.
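
The additive decomposition described above can be sketched with a linear layer whose weight is the sum of two trainable tensors, one intended to capture clean data and one to absorb mislabeled data; the second group's updates are damped as training proceeds, and only the first group is used at test time. The schedules and dimensions below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DecomposedLinear(nn.Module):
    """Linear layer with additively decomposed weights w = w_clean + w_noise."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.w_clean = nn.Parameter(torch.randn(d_out, d_in) * 0.01)
        self.w_noise = nn.Parameter(torch.zeros(d_out, d_in))
        self.bias = nn.Parameter(torch.zeros(d_out))

    def forward(self, x, use_noise_part=True):
        w = self.w_clean + self.w_noise if use_noise_part else self.w_clean
        return x @ w.t() + self.bias

layer = DecomposedLinear(32, 10)
# Two parameter groups so their update schedules can diverge over epochs,
# e.g. by shrinking the learning rate of w_noise as training progresses.
opt = torch.optim.SGD([
    {"params": [layer.w_clean, layer.bias], "lr": 0.1},
    {"params": [layer.w_noise], "lr": 0.1},
])

def set_noise_lr(epoch, total_epochs=100):
    opt.param_groups[1]["lr"] = 0.1 * max(0.0, 1.0 - epoch / total_epochs)

# At test time: layer(x, use_noise_part=False), so only w_clean is employed.
```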

9.
IEEE Trans Pattern Anal Mach Intell ; 46(6): 4398-4409, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38236681

ABSTRACT

Label-noise learning (LNL) aims to increase the model's generalization given training data with noisy labels. To facilitate practical LNL algorithms, researchers have proposed different label-noise types, ranging from class-conditional to instance-dependent noise. In this paper, we introduce a novel label-noise type called BadLabel, which can significantly degrade the performance of existing LNL algorithms. BadLabel is crafted based on the label-flipping attack against standard classification: specific samples are selected and their labels are flipped to other labels so that the loss values of clean and noisy labels become indistinguishable. To address the challenge posed by BadLabel, we further propose a robust LNL method that perturbs the labels in an adversarial manner at each epoch to make the loss values of clean and noisy labels distinguishable again. Once we select a small set of (mostly) clean labeled data, we can apply semi-supervised learning techniques to train the model accurately. Empirically, our experimental results demonstrate that existing LNL algorithms are vulnerable to the newly introduced BadLabel noise type, while our proposed robust LNL method can effectively improve the generalization performance of the model under various types of label noise. The new dataset of noisy labels and the source code of the robust LNL algorithms are available at https://github.com/zjfheart/BadLabels.
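
One plausible reading of the label-flipping construction above is to flip a selected sample's label to a non-true class on which a reference model's loss is already small, so that the resulting noisy labels no longer stand out by loss value. The sketch below follows that reading and is an illustrative assumption, not the paper's exact attack.

```python
import torch

def flip_to_low_loss_labels(logits, labels, flip_mask):
    """For samples where flip_mask is True, replace the label with the
    non-true class of highest predicted probability, so its cross-entropy
    loss is small and hard to separate from losses on clean labels."""
    probs = torch.softmax(logits, dim=1).clone()
    probs[torch.arange(len(labels)), labels] = -1.0   # exclude the true class
    flipped = probs.argmax(dim=1)
    return torch.where(flip_mask, flipped, labels)

logits = torch.randn(16, 10)
labels = torch.randint(0, 10, (16,))
flip_mask = torch.rand(16) < 0.4                      # attack 40% of samples
noisy_labels = flip_to_low_loss_labels(logits, labels, flip_mask)
```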

10.
IEEE Trans Neural Netw Learn Syst ; 34(6): 2842-2853, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34554918

ABSTRACT

Deep neural networks (DNNs) have achieved state-of-the-art performance in various learning tasks, such as computer vision, natural language processing, and speech recognition. However, the fundamental theory of generalization in deep learning remains obscure: why can DNN models generalize well despite being heavily overparametrized in both depth and width? Recent work shows that the traditional theory for analyzing the generalization error of learning models fails to explain the generalization of DNNs, mainly because worst-case analysis of the generalization error is too loose for models with large parameter spaces, such as DNNs. In this work, we propose a new analysis of generalization in DNNs from an optimal transport perspective. Unlike the traditional worst-case uniform convergence analysis in learning theory, our analysis of the generalization error depends on both the learning algorithm and the data distribution and is an average-case analysis. Our theory can therefore describe the generalization behavior of DNNs more practically and accurately. More specifically, in this article, we try to answer a fundamental yet unsolved question in deep learning: why can deeper models generalize better than shallow models? The main contribution of this article can be summarized in four aspects. First, under a general learning framework, we derive upper bounds on the generalization error of learning algorithms in terms of their algorithmic transport cost: the expected Wasserstein distance between the output hypothesis and the output hypothesis conditioned on an input example. We further provide several upper bounds on the algorithmic transport cost in terms of total variation distance, relative entropy, and Vapnik-Chervonenkis (VC) dimension. Moreover, we study different conditions on loss functions under which the generalization error of a learning algorithm can be upper bounded by different probability metrics between distributions relating to the output hypothesis and/or the input data. Finally, under our established framework, we obtain our main results, showing that the generalization error in DNNs decreases exponentially to zero as the number of layers increases.
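
The key quantity named in the abstract, the algorithmic transport cost, can be written out as the expected Wasserstein distance between the output hypothesis and the output hypothesis conditioned on a single input example. The notation below (sample S, algorithm A, example z_i) is assumed for illustration and only renders the stated definition, not the article's exact theorem.

```latex
% Algorithmic transport cost of algorithm A on sample S = (z_1, ..., z_n):
% expected Wasserstein distance between the law of the output hypothesis and
% its law conditioned on one input example (notation assumed for illustration).
\[
  C(A,S) \;=\; \frac{1}{n}\sum_{i=1}^{n}
  \mathbb{E}\!\left[\, W\!\big(P_{A(S)},\; P_{A(S)\mid z_i}\big) \right]
\]
% The article bounds the generalization error by this cost, with constants
% depending on the properties of the loss; see the paper for the exact statement.
```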

11.
Article in English | MEDLINE | ID: mdl-37585328

ABSTRACT

Deep learning has transformed computer vision, natural language processing, and speech recognition. However, two critical questions remain open: 1) why do deep neural networks (DNNs) generalize better than shallow networks, and 2) does a deeper network always lead to better performance? In this article, we first show that the expected generalization error of neural networks (NNs) can be upper bounded by the mutual information between the learned features in the last hidden layer and the parameters of the output layer. This bound further implies that, as the number of layers in the network increases, the expected generalization error decreases under mild conditions. Layers with strict information loss, such as convolutional or pooling layers, reduce the generalization error of the whole network; this answers the first question. However, zero expected generalization error does not imply a small test error, because the expected training error becomes large when the information needed to fit the data is lost as the number of layers increases. This suggests that the claim "the deeper the better" is conditioned on a small training error. Finally, we show that deep learning satisfies a weak notion of stability, and we provide generalization error bounds for noisy stochastic gradient descent (SGD) and binary classification in DNNs.
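
For orientation, information-theoretic generalization bounds of the kind invoked above typically take the following shape, with the mutual-information term here instantiated by the quantities named in the abstract. This is an illustrative generic form under a sub-Gaussian loss assumption, not the article's exact statement or constants.

```latex
% Generic information-theoretic bound of the type invoked in the abstract:
% for a sigma-sub-Gaussian loss and n training examples, with T_L the learned
% features in the last hidden layer and W the output-layer parameters,
\[
  \big|\mathbb{E}[\mathrm{gen}]\big| \;\le\;
  \sqrt{\frac{2\sigma^{2}}{n}\, I\!\left(T_{L};\, W\right)}
\]
% (illustrative form only; the article's exact bound and conditions may differ).
```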

12.
IEEE Trans Neural Netw Learn Syst ; 34(9): 5828-5840, 2023 Sep.
Article in English | MEDLINE | ID: mdl-34890343

ABSTRACT

Deep learning algorithms have led to a series of breakthroughs in computer vision, acoustic signal processing, and other fields. However, they have only recently been popularized, owing to the groundbreaking techniques developed for training deep architectures. Understanding these training techniques is important if we want to improve them further. Through extensive experimentation, Erhan et al. (2010) empirically showed that unsupervised pretraining has a regularizing effect on deep learning algorithms, but theoretical justification for this observation has remained elusive. In this article, we provide theoretical support by analyzing how unsupervised pretraining regularizes deep learning algorithms. Specifically, we interpret deep learning algorithms as traditional Tikhonov-regularized batch learning algorithms that simultaneously learn predictors in the input feature space and the parameters of the neural networks used to produce the Tikhonov matrices. We prove that unsupervised pretraining helps in learning meaningful Tikhonov matrices, which make the deep learning algorithms uniformly stable so that the learned predictor generalizes fast with respect to the sample size. Unsupervised pretraining can therefore be interpreted as performing regularization.
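
For reference, a standard Tikhonov-regularized batch objective, the general form into which the article casts deep learning algorithms, looks as follows. The notation is assumed for illustration; the specific Tikhonov matrix produced by the pretrained network parameters in the article is not reproduced.

```latex
% Standard Tikhonov-regularized least squares over predictors w, with data (X, y)
% and a Tikhonov matrix Gamma (in the article, Gamma is produced by the network
% parameters learned during unsupervised pretraining).
\[
  \min_{w}\; \|Xw - y\|_{2}^{2} \;+\; \|\Gamma w\|_{2}^{2}
\]
```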

13.
Chem Commun (Camb) ; 59(63): 9607-9610, 2023 Aug 03.
Article in English | MEDLINE | ID: mdl-37458706

ABSTRACT

We report that biomass-derived lignosulfonate (LS) can function as a quasi-homogeneous electron mediator to efficiently promote electron transfer from excited erythrosin B (ErB) to the in situ generated Pt cocatalyst under visible light, enhancing the photocatalytic H2 evolution activity by more than 10-fold compared with the LS-free system.

14.
IEEE Trans Pattern Anal Mach Intell ; 45(8): 9846-9861, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37819830

ABSTRACT

This paper studies a practical domain adaptive (DA) semantic segmentation problem where only pseudo-labeled target data is accessible through a black-box model. Due to the domain gap and label shift between the two domains, pseudo-labeled target data contain mixed closed-set and open-set label noise. In this paper, we propose a simplex noise transition matrix (SimT) to model the mixed noise distributions in DA semantic segmentation, and we leverage SimT to handle open-set label noise and enable novel target recognition. When handling open-set noise, we formulate the problem as estimation of SimT. By exploiting computational geometry analysis and properties of segmentation, we design four complementary regularizers, i.e., volume regularization, anchor guidance, convex guarantee, and semantic constraint, to approximate the true SimT. Specifically, volume regularization minimizes the volume of the simplex formed by the rows of the non-square SimT, ensuring that the model outputs fit the ground-truth label distribution. To compensate for the lack of open-set knowledge, anchor guidance, convex guarantee, and semantic constraint are devised to enable the modeling of the open-set noise distribution. The estimated SimT is utilized to correct noise in the pseudo labels and to promote the generalization ability of the segmentation model on target-domain data. For the task of novel target recognition, we first propose closed-to-open label correction (C2OLC) to explicitly derive the supervision signal for open-set classes by exploiting the estimated SimT, and we then advance a semantic relation (SR) loss that harnesses the inter-class relations to facilitate open-set class sample recognition in the target domain. Extensive experimental results demonstrate that the proposed SimT can be flexibly plugged into existing DA methods to boost both closed-set and open-set class performance.

15.
IEEE Trans Neural Netw Learn Syst ; 34(1): 15-27, 2023 Jan.
Article in English | MEDLINE | ID: mdl-34181555

ABSTRACT

Textbook question answering (TQA) is the task of accurately answering both diagram and non-diagram questions given a large context consisting of abundant diagrams and essays. Although many studies have made significant progress on natural-image question answering (QA), these methods are not applicable to comprehending diagrams and reasoning over long multimodal contexts. To address these issues, we propose a relation-aware fine-grained reasoning (RAFR) network that performs fine-grained reasoning over the nodes of relation-based diagram graphs. Our method uses semantic dependencies and relative positions between nodes in the diagram to construct relation graphs and applies graph attention networks to learn diagram representations. To extract and reason over the multimodal knowledge, we first extract the text that is most relevant to the questions and options, and the instructional diagram that is most relevant to the question diagram, at the word-sentence level and the node-diagram level, respectively. Then, we apply instructional-diagram-guided attention and question-guided attention to reason over the nodes of question diagrams. The experimental results show that our proposed method achieves the best performance on the TQA dataset compared with baselines. We also conduct extensive ablation studies to comprehensively analyze the proposed method.

16.
IEEE Trans Pattern Anal Mach Intell ; 45(3): 3047-3058, 2023 Mar.
Article in English | MEDLINE | ID: mdl-35675234

ABSTRACT

The noise transition matrix T, reflecting the probabilities that true labels flip into noisy ones, is of vital importance for modeling label noise and building statistically consistent classifiers. The traditional transition matrix is limited to modeling closed-set label noise, where the noisy training data have true class labels within the noisy label set. It is unsuitable for modeling open-set label noise, where some true class labels lie outside the noisy label set. Therefore, in the more realistic situation where both closed-set and open-set label noise occur, prior works yield unreliable solutions. Besides, the traditional transition matrix is mostly limited to modeling instance-independent label noise, which may not perform well in practice. In this paper, we focus on learning with mixed closed-set and open-set noisy labels. We address the aforementioned issues by extending the traditional transition matrix to model mixed label noise, and further to a cluster-dependent transition matrix to better combat instance-dependent label noise in real-world applications. We term the proposed transition matrix the cluster-dependent extended transition matrix. An unbiased estimator (i.e., the extended T-estimator) is designed to estimate the cluster-dependent extended transition matrix by exploiting only the noisy data. Comprehensive experiments validate that our method copes better with realistic label noise, as shown by its more robust performance than the prior state-of-the-art label-noise learning methods.

17.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 14055-14068, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37540612

ABSTRACT

In label-noise learning, estimating the transition matrix is a hot topic, as the matrix plays an important role in building statistically consistent classifiers. Traditionally, the transition from clean labels to noisy labels (i.e., the clean-label transition matrix (CLTM)) has been widely exploited for class-dependent label noise (wherein all samples in a clean class share the same label transition matrix). However, the CLTM cannot handle the more common instance-dependent label noise well (wherein the clean-to-noisy label transition matrix needs to be estimated at the instance level by considering the input quality). Motivated by the fact that classifiers mostly output Bayes optimal labels for prediction, in this paper we study directly modeling the transition from Bayes optimal labels to noisy labels (i.e., the Bayes-label transition matrix (BLTM)) and learn a classifier to predict Bayes optimal labels. Note that, given only noisy data, it is ill-posed to estimate either the CLTM or the BLTM. Favorably, however, Bayes optimal labels have no uncertainty compared with clean labels, i.e., the class posteriors of Bayes optimal labels are one-hot vectors while those of clean labels are not. This yields two advantages for estimating the BLTM: (a) a set of examples with theoretically guaranteed Bayes optimal labels can be collected out of the noisy data; (b) the feasible solution space is much smaller. Exploiting these advantages, this work proposes a parametric model for estimating the instance-dependent label-noise transition matrix by employing a deep neural network, leading to better generalization and superior classification performance.
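
The two stated advantages, that examples with (approximately) Bayes optimal labels can be distilled out of noisy data and that the transition can then be parameterized by a deep network, can be pictured with the sketch below. It assumes a simple confidence threshold for collecting anchor examples; the paper's actual collection criterion and architecture are not reproduced.

```python
import torch
import torch.nn as nn

def collect_bayes_anchor_examples(model, x, noisy_y, threshold=0.95):
    """Keep examples whose predicted class posterior is nearly one-hot; their
    predicted class is treated as a (distilled) Bayes optimal label."""
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
        conf, bayes_y = probs.max(dim=1)
    keep = conf > threshold
    return x[keep], bayes_y[keep], noisy_y[keep]

class BayesToNoisyTransition(nn.Module):
    """Instance-dependent transition: features -> rows of p(noisy | Bayes label)."""
    def __init__(self, feat_dim, num_classes, hidden=128):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(nn.Linear(feat_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, num_classes * num_classes))

    def forward(self, x):
        T = self.net(x).view(-1, self.num_classes, self.num_classes)
        return torch.softmax(T, dim=2)

# Usage sketch: fit the transition network on the anchor examples by maximizing
# the likelihood of the observed noisy label in row T[i, bayes_y[i], :].
model = nn.Sequential(nn.Linear(64, 10))           # stand-in classifier
x = torch.randn(256, 64)
noisy_y = torch.randint(0, 10, (256,))
ax, ay, an = collect_bayes_anchor_examples(model, x, noisy_y, threshold=0.5)
```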

18.
Neural Netw ; 167: 559-571, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37696073

ABSTRACT

Graph neural networks (GNNs) have been successfully applied to graph-level tasks in various fields such as biology, social networks, computer vision, and natural language processing. For graph-level representation learning with GNNs, graph pooling plays an essential role. Among the many pooling techniques, node drop pooling has garnered significant attention and is considered a leading approach. However, existing node drop pooling methods, which typically retain the top-k nodes based on their significance scores, often overlook the diversity inherent in node features and graph structures. This limitation leads to suboptimal graph-level representations. To overcome this, we introduce a plug-and-play score scheme, termed MID, which comprises a Multidimensional score space and two key operations: flIpscore and Dropscore. The multidimensional score space depicts the significance of nodes by multiple criteria; the flipscore process promotes the preservation of distinct node features; and the dropscore compels the model to take into account a range of graph structures rather than focusing on local structures. To evaluate the effectiveness of MID, we conducted extensive experiments by integrating it with a broad range of recent node drop pooling methods, such as TopKPool, SAGPool, GSAPool, and ASAP. In particular, MID brings a significant average improvement of approximately 2.8% over the four aforementioned methods when tested on 17 real-world graph classification datasets. Code is available at https://github.com/whuchuang/mid.


Subject(s)
Learning; Natural Language Processing; Neural Networks, Computer; Social Networking
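
As a rough illustration of the three ingredients named in the abstract above, per-node scores along several criteria, a flip-style operation that values large-magnitude rather than merely large-positive scores, and a drop-style operation that randomly discards candidates so varied structures are retained, here is a small sketch. This is one plausible reading under assumed details; see the repository linked above for the actual scheme.

```python
import torch

def mid_style_topk(node_scores, keep_ratio=0.5, drop_prob=0.2):
    """Select nodes from a multidimensional score matrix of shape (N, D).

    Flip step (assumed): use score magnitudes, so nodes with strongly negative
    scores along some criterion are also treated as distinctive.
    Drop step (assumed): randomly zero a fraction of aggregate scores so the
    selection is not always dominated by the same local structure.
    """
    flipped = node_scores.abs()                     # flip-like step
    agg = flipped.mean(dim=1)                       # aggregate the D criteria
    agg = agg * (torch.rand_like(agg) > drop_prob)  # drop-like step
    k = max(1, int(keep_ratio * agg.numel()))
    return torch.topk(agg, k).indices               # retained node indices

scores = torch.randn(30, 4)     # 30 nodes, 4 scoring criteria
kept = mid_style_topk(scores)
```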
19.
iScience ; 26(5): 106633, 2023 May 19.
Article in English | MEDLINE | ID: mdl-37192969

ABSTRACT

Cardiovascular disease remains a leading cause of mortality with an estimated half a billion people affected in 2019. However, detecting signals between specific pathophysiology and coronary plaque phenotypes using complex multi-omic discovery datasets remains challenging due to the diversity of individuals and their risk factors. Given the complex cohort heterogeneity present in those with coronary artery disease (CAD), we illustrate several different methods, both knowledge-guided and data-driven approaches, for identifying subcohorts of individuals with subclinical CAD and distinct metabolomic signatures. We then demonstrate that utilizing these subcohorts can improve the prediction of subclinical CAD and can facilitate the discovery of novel biomarkers of subclinical disease. Analyses acknowledging cohort heterogeneity through identifying and utilizing these subcohorts may be able to advance our understanding of CVD and provide more effective preventative treatments to reduce the burden of this disease in individuals and in society as a whole.

20.
IEEE Trans Pattern Anal Mach Intell ; 45(10): 12321-12340, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37126624

ABSTRACT

Quantum computers are next-generation devices that hold promise to perform calculations beyond the reach of classical computers. A leading route towards this goal is quantum machine learning, especially quantum generative learning. Due to the intrinsic probabilistic nature of quantum mechanics, it is reasonable to postulate that quantum generative learning models (QGLMs) may surpass their classical counterparts. As such, QGLMs are receiving growing attention from the quantum physics and computer science communities, and various QGLMs that can be efficiently implemented on near-term quantum machines, with potential computational advantages, have been proposed. In this paper, we review the current progress of QGLMs from the perspective of machine learning. In particular, we interpret these QGLMs, covering quantum circuit Born machines, quantum generative adversarial networks, quantum Boltzmann machines, and quantum variational autoencoders, as the quantum extension of classical generative learning models. In this context, we explore their intrinsic relations and their fundamental differences. We further summarize the potential applications of QGLMs in both conventional machine learning tasks and quantum physics. Finally, we discuss the challenges and further research directions for QGLMs.
