Results 1 - 20 of 90
1.
Sensors (Basel) ; 22(16)2022 Aug 18.
Article in English | MEDLINE | ID: mdl-36015952

ABSTRACT

Deep learning techniques have demonstrated their capability to discover knowledge from massive unstructured data, providing data-driven solutions for representation and decision making [...].


Subject(s)
Deep Learning; Diagnostic Imaging
2.
Philos Trans A Math Phys Eng Sci ; 379(2207): 20200362, 2021 Oct 04.
Article in English | MEDLINE | ID: mdl-34398647

ABSTRACT

Symbiotic autonomous systems (SAS) are advanced intelligent and cognitive systems that exhibit autonomous collective intelligence enabled by coherent symbiosis of human-machine interactions in hybrid societies. Basic research in the emerging field of SAS has triggered advanced general-AI technologies that either function without human intervention or synergize humans and intelligent machines in coherent cognitive systems. This work presents a theoretical framework of SAS underpinned by the latest advances in intelligence, cognition, computer, and system sciences. SAS are characterized by the composition of autonomous and symbiotic systems that adopt bio-brain-social-inspired and heterogeneously synergized structures and autonomous behaviours. This paper explores the cognitive and mathematical foundations of SAS. The challenges to seamless human-machine interactions in a hybrid environment are addressed. SAS-based collective intelligence is explored in order to augment human capability by autonomous machine intelligence towards the next generation of general AI, cognitive computers, and trustworthy mission-critical intelligent systems. Emerging paradigms and engineering applications of SAS are elaborated via autonomous knowledge learning systems that symbiotically work between humans and cognitive robots. This article is part of the theme issue 'Towards symbiotic autonomous systems'.

3.
Neural Comput ; 32(8): 1531-1562, 2020 08.
Article in English | MEDLINE | ID: mdl-32521214

ABSTRACT

Sparsity is a desirable property in many nonnegative matrix factorization (NMF) applications. Although some level of sparseness in NMF solutions can be achieved by using regularization, the resulting sparsity depends heavily on a regularization parameter that has to be set in an ad hoc way. In this letter, we formulate sparse NMF as a mixed-integer optimization problem with sparsity imposed as binary constraints. A discrete-time projection neural network is developed for solving the formulated problem. Sufficient conditions for its stability and convergence are analytically characterized by using Lyapunov's method. Experimental results on sparse feature extraction are discussed to substantiate the superiority of this approach in extracting highly sparse features.


Subject(s)
Neural Networks, Computer; Pattern Recognition, Automated/methods; Humans
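
As an illustration of treating sparsity as a hard constraint rather than a regularization term, the sketch below runs plain NMF updates with a top-k projection on the coefficient matrix. It is not the paper's discrete-time projection neural network; the `project_top_k` helper, step size, and data are assumptions for demonstration only.

```python
import numpy as np

def project_top_k(H, k):
    """Keep the k largest entries in each column of H and zero the rest."""
    P = np.zeros_like(H)
    idx = np.argsort(-H, axis=0)[:k, :]           # row indices of the k largest entries per column
    cols = np.arange(H.shape[1])
    P[idx, cols] = H[idx, cols]
    return P

def sparse_nmf(X, rank, k, iters=500, lr=1e-3):
    rng = np.random.default_rng(0)
    W = rng.random((X.shape[0], rank))
    H = rng.random((rank, X.shape[1]))
    for _ in range(iters):
        grad_H = W.T @ (W @ H - X)                # gradient of 0.5 * ||X - WH||^2 w.r.t. H
        H = project_top_k(np.maximum(H - lr * grad_H, 0.0), k)   # projected gradient step
        W *= (X @ H.T) / (W @ H @ H.T + 1e-10)    # standard multiplicative update keeps W >= 0
    return W, H

X = np.abs(np.random.default_rng(1).random((50, 30)))
W, H = sparse_nmf(X, rank=5, k=2)
print(np.count_nonzero(H, axis=0))                # each column of H uses at most 2 factors
```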
4.
Bioinformatics ; 33(11): 1696-1702, 2017 Jun 01.
Article in English | MEDLINE | ID: mdl-28158419

ABSTRACT

MOTIVATION: The exponential growth of biological network databases has made global network similarity search (NSS) increasingly computation intensive. Given a query network and a network database, the task is to find the networks in the database most similar to the query under a topological similarity measure of interest. With the advent of big network data, existing search methods may become unsuitable, since some of them can leave queries unanswered by returning empty results or imposing arbitrary query restrictions. The design of NSS algorithms therefore remains challenging given the trade-off between accuracy and efficiency. RESULTS: We propose a regression-based global NSS method, denoted NSSRF, which boosts the search speed without any significant sacrifice in practical performance. Motivated by the nature of the problem, subgraph signatures are heavily involved. NSSRF proceeds in two phases: an offline model-building phase and a similarity-query phase. In the offline model-building phase, subgraph signatures and cosine similarity scores are used to train an efficient random forest regression (RFR) model. In the similarity-query phase, the trained regression model is queried to return similar networks. We have extensively validated NSSRF on biological pathways and molecular structures; NSSRF demonstrates competitive performance against state-of-the-art methods. Remarkably, NSSRF works especially well for large networks, indicating that the proposed approach is promising in the era of big data. Case studies demonstrate the efficiency and uniqueness of NSSRF, uncovering results that could be missed by existing state-of-the-art methods. AVAILABILITY AND IMPLEMENTATION: The source code of two versions of NSSRF is freely available for download at https://github.com/zhangjiaobxy/nssrfBinary and https://github.com/zhangjiaobxy/nssrfPackage . CONTACT: kc.w@cityu.edu.hk. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Computational Biology/methods; Models, Theoretical; Software; Animals; Humans; Metabolic Networks and Pathways; Protein Conformation
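
A rough sketch of the two-phase idea above, with random vectors standing in for subgraph signatures (the paper's actual feature construction is more involved); the `signatures` array, pair sampling, and forest size are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_networks, sig_dim = 200, 32
signatures = rng.random((n_networks, sig_dim))          # one stand-in signature per database network

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Offline model-building phase: learn (signature pair) -> cosine similarity score.
pairs = rng.integers(0, n_networks, size=(2000, 2))
X_train = np.hstack([signatures[pairs[:, 0]], signatures[pairs[:, 1]]])
y_train = np.array([cosine(signatures[i], signatures[j]) for i, j in pairs])
rfr = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)

# Similarity-query phase: rank the database against a query signature.
query = rng.random(sig_dim)
X_query = np.hstack([np.tile(query, (n_networks, 1)), signatures])
scores = rfr.predict(X_query)
print("top-5 most similar networks:", np.argsort(-scores)[:5])
```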
5.
IEEE Trans Image Process ; 33: 2502-2513, 2024.
Article in English | MEDLINE | ID: mdl-38526904

ABSTRACT

Residual coding has gained prevalence in lossless compression, where a lossy layer is first employed and the reconstruction errors (i.e., residues) are then losslessly compressed. The underlying principle of residual coding is the exploration of priors through context modeling. Herein, we propose a residual coding framework for 3D medical images, with an off-the-shelf video codec as the lossy layer and a Bilateral Context Modeling based Network (BCM-Net) as the residual layer. BCM-Net achieves efficient lossless compression of residues by exploring intra-slice and inter-slice bilateral contexts. In particular, a symmetry-based intra-slice context extraction (SICE) module is proposed to mine bilateral intra-slice correlations rooted in the inherent anatomical symmetry of 3D medical images. Moreover, a bi-directional inter-slice context extraction (BICE) module is designed to explore bilateral inter-slice correlations from bi-directional references, thereby yielding representative inter-slice contexts. Experiments on popular 3D medical image datasets demonstrate that the proposed method outperforms existing state-of-the-art methods owing to efficient redundancy reduction. Our code will be available on GitHub for future research.


Subject(s)
Data Compression; Data Compression/methods; Imaging, Three-Dimensional/methods
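
To make the residual-coding split concrete, the toy sketch below uses a crude quantizer in place of the video codec and zlib in place of BCM-Net's learned entropy model; the quantization step and array shapes are arbitrary assumptions.

```python
import numpy as np
import zlib

rng = np.random.default_rng(0)
base = np.linspace(0, 4000, 16 * 64 * 64).reshape(16, 64, 64)      # smooth toy 3D scan
volume = (base + rng.normal(0, 5, base.shape)).astype(np.int16)
step = 64

coarse = (volume // step).astype(np.int16)                 # "lossy layer" (stand-in for a video codec)
residue = (volume - coarse * step).astype(np.int16)        # reconstruction errors to code losslessly

lossy_stream = zlib.compress(coarse.tobytes())             # lossy-layer bitstream
residue_stream = zlib.compress(residue.tobytes())          # residual layer (BCM-Net's role in the paper)

# Decoder: the two layers together give an exactly lossless reconstruction.
dec_coarse = np.frombuffer(zlib.decompress(lossy_stream), dtype=np.int16).reshape(volume.shape)
dec_residue = np.frombuffer(zlib.decompress(residue_stream), dtype=np.int16).reshape(volume.shape)
recon = dec_coarse * step + dec_residue
assert np.array_equal(recon, volume)
print(len(lossy_stream) + len(residue_stream), "bytes vs", volume.nbytes, "raw")
```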
6.
IEEE Trans Cybern ; 54(9): 5205-5216, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38498757

ABSTRACT

The development of data sensing technology has generated vast amounts of high-dimensional data, posing great challenges for machine learning models. Over the past decades, despite demonstrating its effectiveness in data classification, genetic programming (GP) has encountered three major challenges when dealing with high-dimensional data: 1) solution diversity; 2) multiclass imbalance; and 3) a large feature space. In this article, we develop a problem-specific multiobjective GP framework (PS-MOGP) for handling classification tasks with high-dimensional data. To reduce the large solution space caused by high dimensionality, we incorporate a recursive feature elimination strategy based on mining the archive of evolved GP solutions. A progressive domination Pareto archive evolution strategy (PD-PAES), which optimizes the objectives one by one in a specific order, is proposed to evaluate the GP individuals and maintain better solution diversity. Besides, to address the severe class imbalance caused by the traditional binary decomposition (BD) one-versus-rest (OVR) scheme for multiclass classification problems, we design a method named BD with a similar positive and negative class size (BD-SPNCS) to generate a set of auxiliary classifiers. Experimental results on benchmark and real-world datasets demonstrate that the proposed PS-MOGP outperforms state-of-the-art traditional and evolutionary classification methods in the context of high-dimensional data classification.

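The BD-SPNCS idea of pairing each positive class with a similarly sized negative sample can be sketched independently of the GP framework; below, a plain scikit-learn classifier stands in for the evolved GP classifiers, and the dataset is synthetic.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def balanced_ovr_fit(X, y, rng):
    classifiers = {}
    for c in np.unique(y):
        pos = np.where(y == c)[0]
        neg = np.where(y != c)[0]
        neg = rng.choice(neg, size=min(len(neg), len(pos)), replace=False)  # similar class sizes
        idx = np.concatenate([pos, neg])
        classifiers[c] = LogisticRegression(max_iter=1000).fit(X[idx], (y[idx] == c).astype(int))
    return classifiers

def predict(classifiers, X):
    scores = np.column_stack([clf.predict_proba(X)[:, 1] for clf in classifiers.values()])
    labels = np.array(list(classifiers.keys()))
    return labels[np.argmax(scores, axis=1)]           # pick the most confident binary classifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = rng.integers(0, 5, size=300)
clfs = balanced_ovr_fit(X, y, rng)
print(predict(clfs, X[:10]))
```
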
7.
IEEE Trans Image Process ; 33: 3075-3089, 2024.
Article in English | MEDLINE | ID: mdl-38656839

ABSTRACT

In this paper, we propose a graph-represented image distribution similarity (GRIDS) index for full-reference (FR) image quality assessment (IQA), which can measure the perceptual distance between distorted and reference images by assessing the disparities between their distribution patterns under a graph-based representation. First, we transform the input image into a graph-based representation, which is proven to be a versatile and effective choice for capturing visual perception features. This is achieved through the automatic generation of a vision graph from the given image content, leading to holistic perceptual associations for irregular image regions. Second, to reflect the perceived image distribution, we decompose the undirected graph into cliques and then calculate the product of the potential functions for the cliques to obtain the joint probability distribution of the undirected graph. Finally, we compare the distances between the graph feature distributions of the distorted and reference images at different stages; thus, we combine the distortion distribution measurements derived from different graph model depths to determine the perceived quality of the distorted images. The empirical results obtained from an extensive array of experiments underscore the competitive nature of our proposed method, which achieves performance on par with that of the state-of-the-art methods, demonstrating its exceptional predictive accuracy and ability to maintain consistent and monotonic behaviour in image quality prediction tasks. The source code is publicly available at the following website https://github.com/Land5cape/GRIDS.

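The clique-potential factorization mentioned above can be illustrated on a toy graph; the potential function and node features below are invented stand-ins, not the learned vision graph or potentials of GRIDS.

```python
import itertools
import networkx as nx
import numpy as np

G = nx.Graph([(0, 1), (1, 2), (0, 2), (2, 3)])          # stand-in "vision graph"
node_feat = {n: np.random.default_rng(n).random(4) for n in G}

def potential(clique):
    # Hypothetical clique potential: exp(mean pairwise feature similarity).
    if len(clique) == 1:
        return 1.0
    sims = [float(node_feat[a] @ node_feat[b]) for a, b in itertools.combinations(clique, 2)]
    return float(np.exp(np.mean(sims)))

cliques = list(nx.find_cliques(G))                      # maximal cliques of the undirected graph
joint_unnormalized = np.prod([potential(c) for c in cliques])
print(cliques, joint_unnormalized)
```
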
8.
IEEE Trans Image Process ; 33: 3227-3241, 2024.
Article in English | MEDLINE | ID: mdl-38691435

ABSTRACT

The statistical regularities of natural images, referred to as natural scene statistics, play an important role in no-reference image quality assessment. However, it has been widely acknowledged that screen content images (SCIs), which are typically computer generated, do not hold such statistics. Here we make the first attempt to learn the statistics of SCIs, based upon which the quality of SCIs can be effectively determined. The underlying mechanism of the proposed approach rests on the mild assumption that SCIs, although not physically acquired, still obey certain statistics that can be understood in a learning fashion. We empirically show that the deviation from these statistics can be effectively leveraged for quality assessment, and that the proposed method is superior when evaluated in different settings. Extensive experimental results demonstrate that the Deep Feature Statistics based SCI Quality Assessment (DFSS-IQA) model delivers promising performance compared with existing NR-IQA models and shows high generalization capability in cross-dataset settings. The implementation of our method is publicly available at https://github.com/Baoliang93/DFSS-IQA.

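A schematic of scoring quality by deviation from learned feature statistics: random arrays stand in for pooled deep features of pristine screen content, and the z-score deviation below is only one plausible choice, not the DFSS-IQA formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
pristine_feats = rng.normal(0.0, 1.0, size=(100, 256))    # (images, channels) pooled deep features
ref_mean, ref_std = pristine_feats.mean(0), pristine_feats.std(0) + 1e-8

def statistics_deviation(feat):
    # Larger deviation from the learned statistics -> lower predicted quality.
    z = (feat - ref_mean) / ref_std
    return float(np.mean(np.abs(z)))

good = rng.normal(0.0, 1.0, size=256)
distorted = rng.normal(0.5, 2.0, size=256)
print(statistics_deviation(good), "<", statistics_deviation(distorted))
```
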
9.
IEEE Trans Cybern ; 54(8): 4749-4762, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38145521

ABSTRACT

The quality of videos is the primary concern of video service providers. Built upon deep neural networks, video quality assessment (VQA) has rapidly progressed. Although existing works have introduced knowledge of the human visual system (HVS) into VQA, there are still some limitations that hinder the full exploitation of HVS, including incomplete modeling with few HVS characteristics and insufficient connection among these characteristics. In this article, we present a novel spatial-temporal VQA method termed HVS-5M, wherein we design five modules to simulate five characteristics of HVS and create a bioinspired connection among these modules in a cooperative manner. Specifically, on the side of the spatial domain, the visual saliency module first extracts a saliency map. Then, the content-dependency and the edge masking modules extract the content and edge features, respectively, which are both weighted by the saliency map to highlight those regions that human beings may be interested in. On the other side of the temporal domain, the motion perception module extracts the dynamic temporal features. Besides, the temporal hysteresis module simulates the memory mechanism of human beings and comprehensively evaluates the video quality according to the fused features from the spatial and temporal domains. Extensive experiments show that our HVS-5M outperforms the state-of-the-art VQA methods. Ablation studies are further conducted to verify the effectiveness of each module in the proposed method. The source code is available at https://github.com/GZHU-DVL/HVS-5M.

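The paper's temporal hysteresis module is learned; the sketch below shows only the classic hand-crafted hysteresis intuition it draws on, where the pooled score mixes the worst recent frame quality with the current one (the `memory` window and `alpha` weight are arbitrary assumptions).

```python
import numpy as np

def hysteresis_pool(frame_scores, memory=12, alpha=0.8):
    frame_scores = np.asarray(frame_scores, dtype=float)
    pooled = np.empty_like(frame_scores)
    for t in range(len(frame_scores)):
        past = frame_scores[max(0, t - memory):t + 1]
        memory_term = past.min()                      # viewers remember recent quality drops
        pooled[t] = alpha * memory_term + (1 - alpha) * frame_scores[t]
    return pooled.mean()

scores = np.concatenate([np.full(60, 80.0), np.full(30, 40.0), np.full(60, 80.0)])
print(hysteresis_pool(scores))    # lower than the plain mean because the quality drop lingers
```
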
10.
IEEE Trans Cybern ; PP, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39106131

ABSTRACT

The extraction of spatiotemporal neuron activity from calcium imaging videos plays a crucial role in unraveling the coding properties of neurons. While existing neuron extraction approaches have shown promising results, disturbing, scattered backgrounds and unexploited depth information still impede their performance. To address these limitations, we develop an automatic and accurate neuron extraction paradigm, dubbed decomposition-estimation-reconstruction (DER), consisting of a D-procedure, an E-procedure, and an R-procedure. Specifically, the D-procedure first decomposes the raw data into a low-rank background and a sparse neuron signal, and regularizes L0-norm priors on the intensity and gradient of the neuron signal to suppress blurring and artifact effects. The E-procedure then estimates the depth-dependent transmission of the neuron signal based on its bright and dark channel priors. The R-procedure finally integrates the depth estimate of the neuron signal as a content-importance weight into a constrained non-negative matrix decomposition framework, which facilitates accurate neuron localization to boost the quality of the extracted neurons. These three procedures are coupled in a cascade, where each one processes the calcium imaging data to facilitate the next. Comprehensive experiments on neuron extraction from calcium imaging videos demonstrate the superiority of our DER paradigm in both qualitative results and quantitative assessments over state-of-the-art methods.

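A crude alternating low-rank-plus-sparse split conveys the flavor of the D-procedure; the L0 intensity/gradient priors, transmission estimation, and constrained NMF of DER are not modeled, and the thresholds below are arbitrary.

```python
import numpy as np

def lowrank_sparse_split(X, lam=0.1, tau=1.0, iters=30):
    L = np.zeros_like(X)
    S = np.zeros_like(X)
    for _ in range(iters):
        # Background: singular-value thresholding of the residual.
        U, sig, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U * np.maximum(sig - tau, 0.0)) @ Vt
        # Neuron signal: entrywise soft-thresholding of what the background cannot explain.
        R = X - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

rng = np.random.default_rng(0)
background = np.outer(rng.random(100), rng.random(400))      # slowly varying low-rank background
spikes = (rng.random((100, 400)) < 0.01) * 5.0               # sparse neuron activity
L, S = lowrank_sparse_split(background + spikes)
print(np.linalg.matrix_rank(np.round(L, 6)), np.count_nonzero(S) / S.size)
```
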
11.
IEEE Trans Image Process ; 33: 4044-4059, 2024.
Article in English | MEDLINE | ID: mdl-38941202

ABSTRACT

This study aims to develop advanced and training-free full-reference image quality assessment (FR-IQA) models based on deep neural networks. Specifically, we investigate measures that allow us to perceptually compare deep network features and reveal their underlying factors. We find that distribution measures afford a high degree of perceptual awareness, and we test the Wasserstein distance (WSD), Jensen-Shannon divergence (JSD), and symmetric Kullback-Leibler divergence (SKLD) measures when comparing deep features obtained from various pretrained deep networks, including the Visual Geometry Group (VGG) network, SqueezeNet, MobileNet, and EfficientNet. The proposed FR-IQA models exhibit superior alignment with subjective human evaluations across diverse image quality assessment (IQA) datasets without training, demonstrating the advanced perceptual relevance of distribution measures when comparing deep network features. Additionally, we explore the applicability of deep distribution measures in image super-resolution enhancement tasks, highlighting their potential for guiding perceptual enhancement. The code is available at https://github.com/Buka-Xing/Deep-network-based-distribution-measures-for-full-reference-image-quality-assessment.

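The distribution measures named above are available in SciPy; the sketch compares stand-in feature samples with the Wasserstein distance and histogram-based JSD/SKLD (the binning and the random stand-in features are assumptions, not the paper's feature extraction).

```python
import numpy as np
from scipy.stats import wasserstein_distance
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(0)
ref_feat = rng.normal(0.0, 1.0, size=10000)       # stand-in deep features of the reference image
dist_feat = rng.normal(0.3, 1.4, size=10000)      # stand-in deep features of the distorted image

wsd = wasserstein_distance(ref_feat, dist_feat)

# JSD and symmetric KLD operate on histograms of the feature responses.
bins = np.histogram_bin_edges(np.concatenate([ref_feat, dist_feat]), bins=64)
p, _ = np.histogram(ref_feat, bins=bins, density=True)
q, _ = np.histogram(dist_feat, bins=bins, density=True)
p, q = p + 1e-12, q + 1e-12
p, q = p / p.sum(), q / q.sum()
jsd = jensenshannon(p, q) ** 2                    # jensenshannon returns the square root of JSD
skld = float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
print(wsd, jsd, skld)
```
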
12.
Article in English | MEDLINE | ID: mdl-39325596

ABSTRACT

Deep CNNs have achieved impressive improvements in night-time self-supervised depth estimation from a monocular image. However, performance degrades considerably compared to day-time depth estimation due to significant domain gaps, low visibility, and varying illumination between day and night images. To address these challenges, we propose a novel night-time self-supervised monocular depth estimation framework with structure regularization, i.e., SRNSD, which incorporates three kinds of constraints for better performance: feature and depth domain adaptation, an image perspective constraint, and a cropped multi-scale consistency loss. Specifically, we adapt both the feature and depth output spaces for better night-time feature extraction and depth map prediction, along with high- and low-frequency decoupling operations for better recovery of depth structure and texture. Meanwhile, we employ an image perspective constraint to enhance smoothness and obtain better depth maps in areas with abrupt luminosity changes. Furthermore, we introduce a simple yet effective cropped multi-scale consistency loss that exploits consistency among depth outputs at different scales for further optimization, refining the detailed textures and structures of the predicted depth. Experimental results on different benchmarks with depth ranges of 40 m and 60 m, including the Oxford RobotCar, nuScenes, and CARLA-EPE datasets, demonstrate the superiority of our approach over state-of-the-art night-time self-supervised depth estimation approaches across multiple metrics, demonstrating its effectiveness.

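One plausible reading of a cropped multi-scale consistency loss, sketched in PyTorch: coarser depth predictions are upsampled and a shared random crop is compared against the finest scale. SRNSD's exact cropping, weighting, and gradient flow may differ; the shapes and `crop` size are assumptions.

```python
import torch
import torch.nn.functional as F

def cropped_multiscale_consistency(depths, crop=64):
    # depths: list of (B, 1, H_i, W_i) depth predictions, finest scale first.
    B, _, H, W = depths[0].shape
    top = torch.randint(0, H - crop + 1, (1,)).item()
    left = torch.randint(0, W - crop + 1, (1,)).item()
    fine = depths[0][:, :, top:top + crop, left:left + crop]
    loss = 0.0
    for d in depths[1:]:
        up = F.interpolate(d, size=(H, W), mode="bilinear", align_corners=False)
        # Penalize disagreement with the finest prediction inside the shared crop.
        loss = loss + F.l1_loss(up[:, :, top:top + crop, left:left + crop], fine.detach())
    return loss / (len(depths) - 1)

preds = [torch.rand(2, 1, 128, 256), torch.rand(2, 1, 64, 128), torch.rand(2, 1, 32, 64)]
print(cropped_multiscale_consistency(preds))
```
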
13.
IEEE Trans Cybern ; PP, 2023 Nov 09.
Article in English | MEDLINE | ID: mdl-37943655

ABSTRACT

Salient instance segmentation (SIS) is an emerging field that evolves from salient object detection (SOD), aiming at identifying individual salient instances using segmentation maps. Inspired by the success of dynamic convolutions in segmentation tasks, this article introduces a keypoints-based SIS network (KepSalinst). It employs multiple keypoints, that is, the center and several peripheral points of an instance, as effective geometrical guidance for dynamic convolutions. The features at peripheral points can help roughly delineate the spatial extent of the instance and complement the information inside the central features. To fully exploit the complementary components within these features, we design a differentiated patterns fusion (DPF) module. This ensures that the resulting dynamic convolutional filters formed by these features are sufficiently comprehensive for precise segmentation. Furthermore, we introduce a high-level semantic guided saliency (HSGS) module. This module enhances the perception of saliency by predicting a map for the input image to estimate a saliency score for each segmented instance. On four SIS datasets (ILSO, SOC, SIS10K, and COME15K), our KepSalinst outperforms all previous models qualitatively and quantitatively.

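The core mechanism of keypoint-conditioned dynamic convolution can be sketched in a few lines: features sampled at the keypoints parameterize a per-instance 1x1 filter. The controller, feature map, and keypoints below are toy assumptions, and KepSalinst's DPF and HSGS modules are omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

C = 16
feat = torch.rand(1, C, 64, 64)                          # backbone feature map
keypoints = [(32, 32), (10, 12), (50, 55), (20, 48)]     # center + peripheral points (y, x)

kp_feat = torch.cat([feat[0, :, y, x] for y, x in keypoints])   # concatenated keypoint features
controller = nn.Linear(kp_feat.numel(), C + 1)           # predicts weights + bias of a 1x1 conv
params = controller(kp_feat)
weight = params[:C].view(1, C, 1, 1)                     # dynamic 1x1 filter for this instance
bias = params[C:]

mask_logits = F.conv2d(feat, weight, bias)               # (1, 1, 64, 64) instance mask logits
print(torch.sigmoid(mask_logits).shape)
```
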
14.
IEEE Trans Cybern ; 53(11): 7162-7173, 2023 Nov.
Article in English | MEDLINE | ID: mdl-36264736

ABSTRACT

Researchers have so far proposed many forensics tools to protect the authenticity and integrity of digital information. However, with the explosive development of machine learning, existing forensics tools may be compromised by new attacks at any time. It is therefore always necessary to investigate anti-forensics in order to expose the vulnerabilities of forensics tools, which in turn helps forensics researchers develop new tools as countermeasures. To date, one of the potential threats is generative adversarial networks (GANs), which can be employed to fabricate or forge falsified data to attack forensics detectors. In this article, we investigate the anti-forensics performance of GANs by proposing a novel model, ExS-GAN, which features an extra supervision system. After training, the proposed model can launch anti-forensics attacks on various manipulated images. Experimental evaluations show that the proposed method achieves high anti-forensics performance while preserving satisfactory image quality. We also justify the proposed extra supervision via an ablation study.

15.
IEEE Trans Neural Netw Learn Syst ; 34(5): 2338-2352, 2023 May.
Article in English | MEDLINE | ID: mdl-34543206

ABSTRACT

The performance of a convolutional neural network (CNN) heavily depends on its hyperparameters. However, finding a suitable hyperparameter configuration is difficult, challenging, and computationally expensive due to three issues: 1) the mixed-variable problem of different types of hyperparameters; 2) the large-scale search space for finding optimal hyperparameters; and 3) the expensive computational cost of evaluating candidate hyperparameter configurations. Therefore, this article focuses on these three issues and proposes a novel estimation of distribution algorithm (EDA) for efficient hyperparameter optimization, with three major contributions in the algorithm design. First, a hybrid-model EDA is proposed to efficiently deal with the mixed-variable difficulty. The proposed algorithm uses a mixed-variable encoding scheme to encode the mixed-variable hyperparameters and adopts an adaptive hybrid-model learning (AHL) strategy to efficiently optimize the mixed variables. Second, an orthogonal initialization (OI) strategy is proposed to efficiently deal with the challenge of the large-scale search space. Third, a surrogate-assisted multi-level evaluation (SME) method is proposed to reduce the expensive computational cost. Based on the above, the proposed algorithm is named surrogate-assisted hybrid-model EDA (SHEDA). For the experimental studies, the proposed SHEDA is verified on widely used classification benchmark problems and compared with various state-of-the-art methods. Moreover, a case study on aortic dissection (AD) diagnosis is carried out to evaluate its performance. Experimental results show that the proposed SHEDA is very effective and efficient for hyperparameter optimization, finding satisfactory hyperparameter configurations for CIFAR10, CIFAR100, and AD diagnosis in only 0.58, 0.97, and 1.18 GPU days, respectively.

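A toy mixed-variable EDA loop gives the gist of the approach: sample configurations, keep the elites, and refit a Gaussian over the continuous variable and a categorical distribution over the discrete one. SHEDA's AHL, OI, and SME components are not reproduced, and the `evaluate` objective is a stand-in for an actual CNN training run.

```python
import numpy as np

rng = np.random.default_rng(0)
activations = ["relu", "tanh", "gelu"]

def evaluate(log_lr, act):
    # Stand-in objective; in practice this would be a (surrogate-assisted) CNN training run.
    return -(log_lr + 3.0) ** 2 - {"relu": 0.0, "tanh": 0.5, "gelu": 0.1}[act]

mu, sigma = -2.0, 2.0                       # Gaussian over log10(learning rate)
probs = np.full(len(activations), 1 / 3)    # categorical distribution over the discrete variable

for gen in range(20):
    log_lrs = rng.normal(mu, sigma, size=30)
    acts = rng.choice(len(activations), size=30, p=probs)
    fitness = np.array([evaluate(l, activations[a]) for l, a in zip(log_lrs, acts)])
    elite = np.argsort(-fitness)[:10]
    mu, sigma = log_lrs[elite].mean(), log_lrs[elite].std() + 1e-3   # refit continuous model
    counts = np.bincount(acts[elite], minlength=len(activations)) + 1
    probs = counts / counts.sum()                                    # refit categorical model

print("best log10(lr):", round(mu, 2), "best activation:", activations[int(np.argmax(probs))])
```
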
16.
IEEE Trans Image Process ; 32: 2827-2842, 2023.
Article in English | MEDLINE | ID: mdl-37186533

ABSTRACT

Convolutional Neural Networks (CNNs) dominate image processing but suffer from local inductive bias, which is addressed by the transformer framework with its inherent ability to capture global context through self-attention mechanisms. However, how to inherit and integrate their advantages to improve compressed sensing is still an open issue. This paper proposes CSformer, a hybrid framework that explores the representation capacity of local and global features. The proposed approach is designed for end-to-end compressive image sensing and is composed of adaptive sampling and recovery. In the sampling module, images are measured block-by-block by the learned sampling matrix. In the reconstruction stage, the measurements are projected into an initialization stem, a CNN stem, and a transformer stem. The initialization stem mimics the traditional reconstruction of compressive sensing but generates the initial reconstruction in a learnable and efficient manner. The CNN stem and transformer stem are concurrent, simultaneously calculating fine-grained and long-range features and efficiently aggregating them. Furthermore, we explore a progressive strategy and a window-based transformer block to reduce the parameter count and computational complexity. The experimental results demonstrate the effectiveness of the dedicated transformer-based architecture for compressive sensing, which achieves superior performance compared to state-of-the-art methods on different datasets. Our code is available at: https://github.com/Lineves7/CSformer.

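Block-by-block sampling is easy to make concrete; the sketch below uses a fixed random Gaussian matrix where CSformer learns the sampling matrix end-to-end, and the transpose-based initialization only loosely mirrors the initialization stem.

```python
import numpy as np

B, ratio = 16, 0.25                          # block size and sampling ratio
n = B * B
m = int(ratio * n)
rng = np.random.default_rng(0)
Phi = rng.normal(0, 1 / np.sqrt(m), size=(m, n))   # stand-in for the learned sampling matrix

image = rng.random((128, 128))
blocks = image.reshape(128 // B, B, 128 // B, B).transpose(0, 2, 1, 3).reshape(-1, n)

measurements = blocks @ Phi.T                # y = Phi x for every block
init = measurements @ Phi                    # crude initial reconstruction (Phi^T y)
print(measurements.shape, init.reshape(-1, B, B).shape)
```
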
17.
Article in English | MEDLINE | ID: mdl-37018573

ABSTRACT

Salient object detection (SOD) aims to determine the most visually attractive objects in an image. With the development of virtual reality (VR) technology, 360° omnidirectional images have been widely used, but the SOD task for 360° omnidirectional images is seldom studied due to their severe distortions and complex scenes. In this article, we propose a multi-projection fusion and refinement network (MPFR-Net) to detect salient objects in 360° omnidirectional images. Unlike existing methods, the equirectangular projection (EP) image and four corresponding cube-unfolding (CU) images are fed into the network simultaneously as inputs, where the CU images not only provide supplementary information for the EP image but also preserve the object integrity of the cube-map projection. To make full use of these two projection modes, a dynamic weighting fusion (DWF) module is designed to adaptively integrate the features of the different projections in a complementary and dynamic manner from both inter-feature and intra-feature perspectives. Furthermore, to fully explore the interaction between encoder and decoder features, a filtration and refinement (FR) module is designed to suppress redundant information within and between features. Experimental results on two omnidirectional datasets demonstrate that the proposed approach outperforms state-of-the-art methods both qualitatively and quantitatively. The code and results are available at https://rmcong.github.io/proj_MPFRNet.html.

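A minimal softmax-gated fusion over the EP and CU feature branches conveys the dynamic-weighting idea; MPFR-Net's DWF works at both inter- and intra-feature levels, so the module below is a simplification with assumed shapes.

```python
import torch
import torch.nn as nn

class DynamicWeightingFusion(nn.Module):
    def __init__(self, channels, n_branches=5):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels * n_branches, n_branches, kernel_size=1),
        )

    def forward(self, feats):                      # feats: list of (B, C, H, W), same shape
        stacked = torch.stack(feats, dim=1)        # (B, n_branches, C, H, W)
        weights = self.gate(torch.cat(feats, dim=1)).softmax(dim=1)   # (B, n_branches, 1, 1)
        return (stacked * weights.unsqueeze(2)).sum(dim=1)            # weighted sum over branches

ep = torch.rand(2, 32, 24, 24)                     # equirectangular-projection features
cu = [torch.rand(2, 32, 24, 24) for _ in range(4)] # four cube-unfolding features
fused = DynamicWeightingFusion(32)([ep] + cu)
print(fused.shape)                                  # (2, 32, 24, 24)
```
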
18.
IEEE Trans Cybern ; 53(3): 1460-1474, 2023 Mar.
Article in English | MEDLINE | ID: mdl-34516383

ABSTRACT

The job-shop scheduling problem (JSSP) is a challenging scheduling and optimization problem in industry and engineering, as it relates to the work efficiency and operational costs of factories. The completion time of all jobs is the most commonly considered optimization objective in existing work. However, factories focus on both time and cost objectives, including completion time, total tardiness, advance time, production cost, and machine loss. Therefore, this article is the first to propose a many-objective JSSP that considers all five of these objectives, making the model more practical in reflecting the various demands of factories. To optimize these five objectives simultaneously, a novel multiple populations for multiple objectives (MPMO) framework-based genetic algorithm (GA) approach, called MPMOGA, is proposed. First, MPMOGA employs five populations to optimize the five objectives, respectively. Second, to avoid each population focusing only on its corresponding single objective, an archive sharing technique (AST) is proposed to store the elite solutions collected from the five populations, so that the populations can obtain optimization information about the other objectives from the archive. This way, MPMOGA can approximate different parts of the entire Pareto front (PF). Third, an archive update strategy (AUS) is proposed to further improve the quality of the solutions in the archive. Test instances from widely used test sets are adopted to evaluate the performance of MPMOGA. The experimental results show that MPMOGA outperforms the compared state-of-the-art algorithms on most of the test instances.

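The MPMO skeleton, one population per objective plus a shared elite archive, can be shown on a toy continuous problem; the JSSP encoding, genetic operators, and the AST/AUS details of MPMOGA are not modeled, and the objectives below are invented stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obj, pop_size, dim = 5, 20, 8

def objectives(x):
    """Toy vector of five objectives to minimize (stand-in for the JSSP objectives)."""
    return np.array([np.sum((x - k) ** 2) for k in range(n_obj)])

pops = [rng.random((pop_size, dim)) * n_obj for _ in range(n_obj)]
archive = []                                           # shared elite archive (crude AST stand-in)

for gen in range(50):
    for k in range(n_obj):
        children = pops[k] + rng.normal(0.0, 0.1, pops[k].shape)   # mutation-only variation
        child_f = np.array([objectives(c)[k] for c in children])
        parent_f = np.array([objectives(p)[k] for p in pops[k]])
        better = child_f < parent_f                    # population k selects on objective k only
        pops[k][better] = children[better]
        new_f = np.where(better, child_f, parent_f)
        archive.append(pops[k][np.argmin(new_f)].copy())   # share this population's elite
    for k in range(n_obj):                             # cross-pollinate via the shared archive
        pops[k][rng.integers(pop_size)] = archive[rng.integers(len(archive))]

print(len(archive), objectives(archive[-1]).round(2))
```
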
19.
Int J Mach Learn Cybern ; 14(5): 1725-1738, 2023.
Article in English | MEDLINE | ID: mdl-36474954

ABSTRACT

COVID-19 has had a significant impact on individual lives, bringing a unique challenge for face retrieval under occlusion. In this paper, an occluded face retrieval method consisting of a generator, a discriminator, and a deep hashing retrieval network is proposed for face retrieval in a large-scale face image dataset under a variety of occlusion situations. In the proposed method, occluded face images are first reconstructed using a face inpainting model, in which the adversarial loss, reconstruction loss, and hash bits loss are combined for training. With the trained model, the hash codes of real face images and of the corresponding reconstructed face images are encouraged to be as similar as possible. Then, a deep hashing retrieval network is used to generate compact similarity-preserving hash codes from the reconstructed face images for better retrieval performance. Experimental results show that the proposed method can successfully generate reconstructed face images under occlusion. Meanwhile, the proposed deep hashing retrieval network achieves better retrieval performance for occluded face retrieval than existing state-of-the-art deep hashing retrieval methods.

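The hash bits loss mentioned above can be sketched as a consistency term between relaxed binary codes of real and inpainted faces; the hashing head, embedding dimension, and noise model below are assumptions, and the inpainting GAN itself is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

hash_net = nn.Linear(512, 48)                      # maps face embeddings to 48 raw hash bits

real_emb = torch.rand(8, 512)                      # embeddings of unoccluded faces
recon_emb = real_emb + 0.05 * torch.randn(8, 512)  # embeddings of inpainted (reconstructed) faces

h_real = torch.tanh(hash_net(real_emb))            # relax {-1, +1} codes to (-1, 1)
h_recon = torch.tanh(hash_net(recon_emb))
hash_bits_loss = F.mse_loss(h_recon, h_real.detach())   # pull reconstructed codes toward real ones

binary_codes = torch.sign(h_real)                  # final codes used for retrieval
print(hash_bits_loss.item(), binary_codes.shape)
```
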
20.
IEEE Trans Image Process ; 32: 4472-4485, 2023.
Article in English | MEDLINE | ID: mdl-37335801

ABSTRACT

Due to the light absorption and scattering induced by the water medium, underwater images usually suffer from degradation problems such as low contrast, color distortion, and blurred details, which aggravate the difficulty of downstream underwater understanding tasks. Obtaining clear and visually pleasant images has therefore become a common concern, and the task of underwater image enhancement (UIE) has emerged accordingly. Among existing UIE methods, those based on Generative Adversarial Networks (GANs) perform well in visual aesthetics, while physical model-based methods have better scene adaptability. Inheriting the advantages of these two types of models, we propose a physical model-guided GAN for UIE in this paper, referred to as PUGAN. The entire network is under the GAN architecture. On the one hand, we design a Parameters Estimation subnetwork (Par-subnet) to learn the parameters for physical model inversion, and use the generated color-enhanced image as auxiliary information for the Two-Stream Interaction Enhancement subnetwork (TSIE-subnet). Meanwhile, we design a Degradation Quantization (DQ) module in the TSIE-subnet to quantize scene degradation, thereby reinforcing the enhancement of key regions. On the other hand, we design the Dual-Discriminators for the style-content adversarial constraint, promoting the authenticity and visual aesthetics of the results. Extensive experiments on three benchmark datasets demonstrate that our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics. The code and results are available at https://rmcong.github.io/proj_PUGAN.html.

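The physical model that the Par-subnet inverts is commonly written as I = J·t + B·(1 − t); the sketch below uses fixed toy values for the transmission t and background light B (which PUGAN estimates with a network) just to show the inversion that yields the auxiliary color-enhanced image.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.random((64, 64, 3))                         # hypothetical clear scene radiance
t = np.full((64, 64, 1), 0.6)                       # per-pixel transmission (toy constant)
B = np.array([0.1, 0.5, 0.4])                       # bluish-green background light

I = J * t + B * (1 - t)                             # degraded underwater observation
J_hat = (I - B * (1 - t)) / np.clip(t, 0.1, 1.0)    # physical-model inversion
print(np.abs(J_hat - J).max())                      # ~0: recovered up to numerical error
```
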