Results 1 - 14 of 14
1.
ArXiv ; 2023 Oct 05.
Article in English | MEDLINE | ID: mdl-36994156

ABSTRACT

In the 1950s, Barlow and Attneave hypothesised a link between biological vision and information maximisation. Following Shannon, information was defined using the probability of natural images. A number of physiological and psychophysical phenomena have since been derived from principles like info-max, efficient coding, or optimal denoising. However, it remains unclear how this link can be expressed in mathematical terms from image probability. First, classical derivations relied on strong assumptions about the probability models and the behaviour of the sensors. Moreover, direct evaluation of the hypothesis was limited by the inability of classical image models to deliver accurate estimates of the probability. In this work we directly evaluate image probabilities using an advanced generative model for natural images, and we analyse how probability-related factors can be combined to predict human perception, quantified via the sensitivity of state-of-the-art subjective image quality metrics. We use information theory and regression analysis to find a combination of just two probability-related factors that achieves 0.8 correlation with subjective metrics. This probability-based sensitivity is psychophysically validated by reproducing the basic trends of the Contrast Sensitivity Function, its suprathreshold variation, and the trends of the Weber law and masking.
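
The abstract does not specify which two probability-related factors enter the regression, so the sketch below only illustrates the general recipe it describes: compute probability-related features of reference and distorted images under some generative model, then regress a metric's sensitivity onto them. The `log_prob` function, the choice of factors, and all data are hypothetical placeholders, not the authors' pipeline.

```python
# Minimal sketch: predict metric sensitivity from two probability-related
# factors via linear regression. The generative model and the target
# sensitivities are stand-ins for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def log_prob(images):
    # Placeholder for an advanced generative model of natural images
    # (e.g. a flow or autoregressive model); not a real density model.
    return -0.5 * np.sum(images ** 2, axis=(1, 2))

images = rng.normal(size=(200, 8, 8))            # toy "natural" images
distorted = images + 0.1 * rng.normal(size=images.shape)

f1 = log_prob(images)                            # factor 1: log p(x)
f2 = log_prob(distorted)                         # factor 2: log p(x + noise)
X = np.column_stack([f1, f2, np.ones_like(f1)])

# Toy target: sensitivity reported by some subjective image quality metric.
sensitivity = 0.7 * f1 - 0.4 * f2 + rng.normal(scale=0.1, size=f1.shape)

w, *_ = np.linalg.lstsq(X, sensitivity, rcond=None)
pred = X @ w
print("weights:", np.round(w, 3),
      "correlation:", round(float(np.corrcoef(pred, sensitivity)[0, 1]), 2))
```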

2.
Annu Int Conf IEEE Eng Med Biol Soc ; 2022: 2084-2087, 2022 07.
Article in English | MEDLINE | ID: mdl-36086174

ABSTRACT

The number of studies in the medical field that use machine learning and deep learning techniques has been increasing in recent years. However, these techniques require a huge amount of data, which can be difficult and expensive to obtain. This is especially true for cardiac magnetic resonance (MR) images. One solution to the problem is to increase the dataset size by generating synthetic data. The Convolutional Variational Autoencoder (CVAE) is a deep learning technique that can generate synthetic images, although these images can be slightly blurred. We propose combining the CVAE with a Style Transfer technique to generate realistic synthetic cardiac MR images. Clinical Relevance - This work presents a tool to increase, in a simple and fast way, the size of cardiac magnetic resonance image datasets used for machine learning and deep learning studies.
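
As a rough illustration of the generative part of such a pipeline, here is a minimal convolutional variational autoencoder in PyTorch. The architecture, the 64x64 single-channel input size, and the latent dimension are assumptions made for the sketch, not the authors' configuration, and the Style Transfer refinement step mentioned in the abstract is omitted.

```python
# Minimal convolutional VAE sketch (assumed architecture, not the paper's).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, 1, 4, 2, 1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        recon = self.dec(self.fc_dec(z).view(-1, 128, 8, 8))
        return recon, mu, logvar

def vae_loss(recon, x, mu, logvar):
    rec = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld

# After training, new synthetic images are obtained by decoding random codes.
model = ConvVAE()
with torch.no_grad():
    samples = model.dec(model.fc_dec(torch.randn(4, 32)).view(-1, 128, 8, 8))
```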


Subject(s)
Algorithms, Magnetic Resonance Imaging, Heart/diagnostic imaging, Machine Learning
3.
PLoS One ; 16(2): e0246775, 2021.
Article in English | MEDLINE | ID: mdl-33534865

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0235885.].

4.
Biochim Biophys Acta Bioenerg ; 1862(2): 148351, 2021 02 01.
Article in English | MEDLINE | ID: mdl-33285101

ABSTRACT

Carotenoids (Cars) regulate the energy flow towards the reaction centres in a versatile way, whereby the switch between energy harvesting and dissipation is strongly modulated by the operation of the xanthophyll cycles. However, the cascade of molecular mechanisms during the change from light harvesting to energy dissipation remains spectrally poorly understood. By characterizing the in vivo absorbance changes (ΔA) of leaves from four species in the 500-600 nm range through a Gaussian decomposition, while passively measuring simultaneous Chla fluorescence (F) changes, we present a direct observation of the rapid antenna adjustments during a 3-min dark-to-high-light induction. The underlying spectral behaviour of the 500-600 nm ΔA feature can be characterized by a minimum set of three Gaussians that distinguish very quick dynamics during the first minute. Our results show the parallel trend of two Gaussian components and the prompt Chla F quenching. Further, we observe similar quick kinetics between the relative behaviour of these components and the in vivo formation of antheraxanthin (Ant) and zeaxanthin (Zea), in parallel with the dynamic quenching of singlet excited chlorophyll a (1Chla*) states. After these simultaneous quick kinetic behaviours of ΔA and F during the first minute, the 500-600 nm feature continues to increase, indicating a further enhanced absorption driven by the centrally located Gaussian until 3 min after sudden light exposure. Observing these precise underlying kinetic trends of the spectral behaviour in the 500-600 nm region shows the large potential of in vivo leaf spectroscopy to bring new insights into the quick redistribution and relaxation of excitation energy, indicating a key role for both Ant and Zea.
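
As a simple illustration of the kind of Gaussian decomposition described above, the sketch below fits three Gaussian components to a synthetic ΔA spectrum on a 500-600 nm grid with scipy. The spectrum, component positions, and initial guesses are invented for the example and are not the measured data.

```python
# Decompose a (synthetic) 500-600 nm absorbance-change spectrum into three
# Gaussians; purely illustrative of the fitting step, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(wl, a1, c1, s1, a2, c2, s2, a3, c3, s3):
    g = lambda a, c, s: a * np.exp(-0.5 * ((wl - c) / s) ** 2)
    return g(a1, c1, s1) + g(a2, c2, s2) + g(a3, c3, s3)

wl = np.linspace(500, 600, 201)                        # wavelength grid (nm)
true = three_gaussians(wl, 0.8, 515, 8, 1.2, 535, 10, 0.6, 560, 12)
delta_A = true + np.random.default_rng(1).normal(scale=0.02, size=wl.size)

p0 = [1, 515, 8, 1, 535, 10, 0.5, 560, 12]             # initial guesses
popt, _ = curve_fit(three_gaussians, wl, delta_A, p0=p0)
print("fitted (amplitude, centre, width) x 3:", np.round(popt, 2))
```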


Subject(s)
Chlorophyll A/chemistry, Fluorescence, Xanthophylls/chemistry
5.
PLoS One ; 15(10): e0235885, 2020.
Article in English | MEDLINE | ID: mdl-33119617

ABSTRACT

Kernel methods are powerful machine learning techniques which use generic non-linear functions to solve complex tasks. They have a solid mathematical foundation and exhibit excellent performance in practice. However, kernel machines are still considered black-box models, as the kernel feature mapping cannot be accessed directly, which makes the kernels difficult to interpret. The aim of this work is to show that it is indeed possible to interpret the functions learned by various kernel methods, as they can be intuitive despite their complexity. Specifically, we show that derivatives of these functions have a simple mathematical formulation, are easy to compute, and can be applied to various problems. The derivative of the model function in kernel machines is proportional to the derivative of the kernel function, and we provide the explicit analytic form of the first and second derivatives of the most common kernel functions with respect to the inputs, as well as generic formulas to compute higher-order derivatives. We use them to analyze the most widely used supervised and unsupervised kernel learning methods: Gaussian Processes for regression, Support Vector Machines for classification, Kernel Entropy Component Analysis for density estimation, and the Hilbert-Schmidt Independence Criterion for estimating the dependency between random variables. In all cases we express the derivative of the learned function as a linear combination of kernel function derivatives. Moreover, we provide intuitive explanations through illustrative toy examples and show how these kernel methods can be applied in the context of spatio-temporal Earth system data cubes. This work reflects on the observation that function derivatives may play a crucial role in kernel methods analysis and understanding.
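
To make the central identity concrete, here is a small sketch using kernel ridge regression (rather than any of the specific models analysed in the paper): for f(x) = Σ_i α_i k(x, x_i), the gradient is the same linear combination of kernel derivatives, and for the RBF kernel ∂k(x, x_i)/∂x = -(x - x_i)/σ² · k(x, x_i). The data and hyperparameters are arbitrary.

```python
# Gradient of a kernel model as a linear combination of kernel derivatives
# (RBF kernel, kernel ridge regression); illustrative sketch only.
import numpy as np

def rbf(X, Y, sigma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / sigma ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

sigma, lam = 1.0, 1e-2
K = rbf(X, X, sigma)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # KRR weights

def predict(Xq):
    return rbf(Xq, X, sigma) @ alpha

def gradient(Xq):
    # df/dx = sum_i alpha_i * (-(x - x_i) / sigma**2) * k(x, x_i)
    Kq = rbf(Xq, X, sigma)                              # (m, n)
    diff = Xq[:, None, :] - X[None, :, :]               # (m, n, d)
    return np.einsum("mn,mnd,n->md", Kq, -diff / sigma ** 2, alpha)

Xq = np.array([[0.0], [1.0]])
print(predict(Xq), gradient(Xq))   # gradient should be close to cos(x)
```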


Subject(s)
Computer Simulation, Earth Sciences, Machine Learning, Support Vector Machine, Entropy, Humans, Normal Distribution
6.
J Opt Soc Am A Opt Image Sci Vis ; 34(9): 1511-1525, 2017 Sep 01.
Article in English | MEDLINE | ID: mdl-29036154

ABSTRACT

We develop a framework for rendering photographic images by directly optimizing their perceptual similarity to the original visual scene. Specifically, over the set of all images that can be rendered on a given display, we minimize the normalized Laplacian pyramid distance (NLPD), a measure of perceptual dissimilarity that is derived from a simple model of the early stages of the human visual system. When rendering images acquired with a higher dynamic range than that of the display, we find that the optimization boosts the contrast of low-contrast features without introducing significant artifacts, yielding results of comparable visual quality to current state-of-the-art methods, but without manual intervention or parameter adjustment. We also demonstrate the effectiveness of the framework for a variety of other display constraints, including limitations on minimum luminance (black point), mean luminance (as a proxy for energy consumption), and quantized luminance levels (halftoning). We show that the method may generally be used to enhance details and contrast, and, in particular, can be used on images degraded by optical scattering (e.g., fog). Finally, we demonstrate the necessity of each of the NLPD components (an initial power function, a multiscale transform, and local contrast gain control) in achieving these results, and we show that NLPD is competitive with current state-of-the-art image quality metrics.
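
The sketch below caricatures the rendering-by-optimization idea: gradient descent on display pixel values to minimize a perceptual-style distance to the reference. It is not the NLPD (the real metric includes a power function and divisive contrast gain control); here a crude two-scale Laplacian-style distance stands in, and the display constraint is simply a sigmoid keeping values in [0, 1].

```python
# Rendering by optimizing a (simplified) multiscale perceptual distance.
import torch
import torch.nn.functional as F

def blur(x):
    # 3x3 binomial low-pass filter.
    k = torch.tensor([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0
    return F.conv2d(x, k.view(1, 1, 3, 3), padding=1)

def pyramid_distance(a, b):
    # Crude stand-in for NLPD: squared error on two band-pass levels
    # plus the low-pass residual (no divisive normalization).
    d = 0.0
    for _ in range(2):
        la, lb = a - blur(a), b - blur(b)
        d = d + ((la - lb) ** 2).mean()
        a, b = blur(a)[..., ::2, ::2], blur(b)[..., ::2, ::2]
    return d + ((a - b) ** 2).mean()

reference = torch.rand(1, 1, 64, 64) * 4.0       # toy "scene" exceeding display range
logits = torch.nn.Parameter(torch.zeros_like(reference))
opt = torch.optim.Adam([logits], lr=0.05)

for step in range(300):
    opt.zero_grad()
    rendered = torch.sigmoid(logits)             # display constraint: values in [0, 1]
    loss = pyramid_distance(rendered, reference)
    loss.backward()
    opt.step()

print("final distance:", float(loss))
```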

7.
IEEE Trans Neural Netw Learn Syst ; 28(6): 1466-1472, 2017 06.
Article in English | MEDLINE | ID: mdl-26930695

ABSTRACT

This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to sorting kernel eigenvectors by their entropy contribution instead of by variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation of the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it strongly affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. Both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
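
For context, here is a minimal sketch of the plain KECA ranking that OKECA builds on: eigendecompose the Gaussian kernel matrix and keep the directions with the largest contribution to the Rényi quadratic entropy estimate, λ_i (1ᵀe_i)², rather than the largest variance. The extra ICA-style rotation and its gradient-ascent optimization in OKECA are not shown, and the data and kernel width are arbitrary.

```python
# KECA-style ranking of kernel eigen-directions by entropy contribution.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 2))
sigma = 1.0

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2 / sigma ** 2)

lam, E = np.linalg.eigh(K)                       # ascending eigenvalues
lam, E = lam[::-1], E[:, ::-1]                   # sort descending

# Contribution of each eigen-direction to the Renyi quadratic entropy estimate.
contrib = lam * (E.sum(axis=0) ** 2)
order = np.argsort(contrib)[::-1]

k = 2                                            # keep the k most "entropic" features
features = E[:, order[:k]] * np.sqrt(lam[order[:k]])
print("top entropy contributions:", np.round(contrib[order[:k]], 2))
```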

8.
Front Hum Neurosci ; 9: 557, 2015.
Article in English | MEDLINE | ID: mdl-26528165

ABSTRACT

When adapted to a particular scene, our senses may fool us: colors are misinterpreted, certain spatial patterns seem to fade out, and static objects appear to move in reverse. A mere empirical description of the mechanisms tuned to color, texture, and motion may tell us where these visual illusions come from. However, such empirical models of gain control do not explain why these mechanisms work in this apparently dysfunctional manner. Current normative explanations of aftereffects based on scene statistics derive gain changes by (1) invoking decorrelation and linear manifold matching/equalization, or (2) using nonlinear divisive normalization obtained from parametric scene models. These principled approaches have different drawbacks: the first is not compatible with the known saturation nonlinearities in the sensors, and it cannot fully accomplish information maximization due to its linear nature. In the second, the gain change is largely determined a priori by the assumed parametric image model linked to divisive normalization. In this study we show that both the response changes that lead to aftereffects and the nonlinear behavior can be simultaneously derived from a single statistical framework: Sequential Principal Curves Analysis (SPCA). As opposed to mechanistic models, SPCA is not intended to describe how physiological sensors work; it focuses on explaining why they behave as they do. Nonparametric SPCA has two key advantages as a normative model of adaptation: (i) it improves on linear techniques because it is a flexible equalization that can be tuned for criteria more sensible than plain decorrelation (either full information maximization or error minimization); and (ii) it makes no a priori functional assumption regarding the nonlinearity, so the saturations emerge directly from the scene data and the goal (and not from an assumed functional form). It turns out that the optimal responses derived from these more sensible criteria and SPCA are consistent with dysfunctional behaviors such as aftereffects.

9.
Int J Neural Syst ; 24(7): 1440007, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25164247

ABSTRACT

This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves instead of straight lines. In contrast to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA has a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it makes it possible to understand the identified features in the input domain, where the data have physical meaning. Moreover, it allows the performance of dimensionality reduction to be evaluated in sensible (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. Fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
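
A hypothetical sketch of one deflation step in the spirit of PPA (not the authors' implementation): project onto the leading principal direction, then model each orthogonal residual coordinate as a low-degree polynomial of that projection with simple univariate fits. The toy data, the polynomial degree, and the use of a single step are assumptions.

```python
# One PPA-like step: PCA projection + univariate polynomial regressions.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(-2, 2, size=300)
X = np.column_stack([t, 0.5 * t ** 2]) + 0.05 * rng.normal(size=(300, 2))
X = X - X.mean(axis=0)

# Leading principal direction.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
v1 = Vt[0]
proj = X @ v1                                   # scores along the curve
resid = X - np.outer(proj, v1)                  # orthogonal remainder

# Fit each residual coordinate as a polynomial of the projection (univariate).
degree = 2
coefs = [np.polyfit(proj, resid[:, j], degree) for j in range(X.shape[1])]
pred = np.column_stack([np.polyval(c, proj) for c in coefs])

deflated = resid - pred                         # data after removing the principal polynomial
print("residual variance before/after:", resid.var(), deflated.var())
```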


Subject(s)
Artificial Intelligence, Statistical Models, Algorithms, Neural Networks (Computer), Nonlinear Dynamics, Principal Component Analysis/methods, Regression Analysis
10.
PLoS One ; 9(2): e86481, 2014.
Article in English | MEDLINE | ID: mdl-24533049

ABSTRACT

Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis similarly explains chromatic adaptation to different illuminations. However, as we show in this paper, neither of the two methods generalizes well enough to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: it finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and it also extends to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged, and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven, testable prediction of how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation.
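
For orientation only, here is standard linear CCA with scikit-learn on toy paired data sets. This is just the classical baseline the abstract refers to; the proposed higher-order variant, which additionally enforces independence within each data set and captures higher-order correlations across them, is not implemented here.

```python
# Classical linear CCA on two toy paired data sets (baseline only).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))                  # shared latent sources
X = latent @ rng.normal(size=(2, 5)) + 0.1 * rng.normal(size=(500, 5))
Y = latent @ rng.normal(size=(2, 4)) + 0.1 * rng.normal(size=(500, 4))

cca = CCA(n_components=2)
Xc, Yc = cca.fit_transform(X, Y)

# Correlation between paired canonical variates.
corrs = [np.corrcoef(Xc[:, i], Yc[:, i])[0, 1] for i in range(2)]
print("canonical correlations:", np.round(corrs, 2))
```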


Subject(s)
Color, Computer-Assisted Image Processing, Statistical Models, Algorithms, Artificial Intelligence, Color Perception/physiology, Computer Simulation, Humans, Light, Neurosciences/methods, Photic Stimulation/methods, Probability, Psychophysics, Visual Cortex/physiology
11.
Neural Comput ; 24(10): 2751-88, 2012 Oct.
Article in English | MEDLINE | ID: mdl-22845821

ABSTRACT

Mechanisms of human color vision are characterized by two phenomenological aspects: the system is nonlinear and adaptive to changing environments. Conventional attempts to derive these features from statistics use separate arguments for each aspect. The few statistical explanations that do consider both phenomena simultaneously follow parametric formulations based on empirical models. Therefore, it may be argued that the behavior does not come directly from the color statistics but from the convenient functional form adopted. In addition, the whole statistical analysis is often based on simplified databases that disregard relevant physical effects in the input signal, for instance by assuming flat Lambertian surfaces. In this work, we address the simultaneous statistical explanation of the nonlinear behavior of achromatic and chromatic mechanisms in a fixed adaptation state, and of the change of such behavior (i.e., adaptation) under changes of the observation conditions. Both phenomena emerge directly from the samples through a single data-driven method: sequential principal curves analysis (SPCA) with local metric. SPCA is a new manifold learning technique to derive a set of sensors adapted to the manifold using different optimality criteria. Here, sequential refers to the fact that sensors (curvilinear dimensions) are designed one after the other, and not to the particular (possibly iterative) method used to draw a single principal curve. Moreover, in order to reproduce the empirical adaptation reported under D65 and A illuminations, a new database of colorimetrically calibrated images of natural objects under these illuminants was gathered, thus overcoming the limitations of available databases. The results obtained by applying SPCA show that the psychophysical behavior of color discrimination thresholds, discounting of the illuminant, and corresponding pairs in asymmetric color matching emerges directly from realistic data regularities, assuming no a priori functional form. These results provide stronger evidence for the hypothesis of a statistically driven organization of color sensors. Moreover, they suggest that the nonuniform resolution of color sensors at this low abstraction level may be guided by an error-minimization strategy rather than by an information-maximization goal.


Subject(s)
Physiological Adaptation, Color Perception/physiology, Color Vision/physiology, Biological Models, Nonlinear Dynamics, Computer Simulation, Humans, Learning, Photic Stimulation, Principal Component Analysis, Psychophysics
12.
IEEE Trans Neural Netw ; 22(4): 537-49, 2011 Apr.
Article in English | MEDLINE | ID: mdl-21349790

ABSTRACT

Most signal processing problems involve the challenging task of multidimensional probability density function (PDF) estimation. In this paper, we propose a solution to this problem by using a family of rotation-based iterative Gaussianization (RBIG) transforms. The general framework consists of the sequential application of a univariate marginal Gaussianization transform followed by an orthonormal transform. The proposed procedure looks for differentiable transforms to a known PDF so that the unknown PDF can be estimated at any point of the original domain. In particular, we aim at a zero-mean unit-covariance Gaussian for convenience. RBIG is formally similar to classical iterative projection pursuit (PP) algorithms. However, we show that, unlike in PP methods, the particular class of rotations used has no special qualitative relevance in this context, since looking for interestingness is not a critical issue for PDF estimation. The key difference is that our approach focuses on the univariate part (marginal Gaussianization) of the problem rather than on the multivariate part (rotation). This difference implies that one may select the rotation most convenient for each practical application. The differentiability, invertibility, and convergence of RBIG are analyzed theoretically and experimentally. Relations to other methods, such as radial Gaussianization, one-class support vector domain description, and deep neural networks, are also pointed out. The practical performance of RBIG is successfully illustrated in a number of multidimensional problems such as image synthesis, classification, denoising, and multi-information estimation.
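
A minimal sketch of the RBIG iteration described above, under assumed simplifications: the marginal Gaussianization uses empirical ranks, the orthonormal transform is a PCA rotation, and the number of iterations is fixed. None of these choices is prescribed by the paper; they just illustrate the alternation of the two steps.

```python
# Rotation-based iterative Gaussianization: alternate marginal
# Gaussianization with an orthonormal (here PCA) rotation.
import numpy as np
from scipy.stats import norm, rankdata

def marginal_gaussianize(X):
    U = np.apply_along_axis(rankdata, 0, X) / (X.shape[0] + 1)  # empirical CDF in (0, 1)
    return norm.ppf(U)

def rbig(X, n_iters=20):
    Z = X.copy()
    for _ in range(n_iters):
        Z = marginal_gaussianize(Z)
        Z = Z - Z.mean(axis=0)
        _, _, Vt = np.linalg.svd(Z, full_matrices=False)        # PCA rotation
        Z = Z @ Vt.T
    return Z

rng = np.random.default_rng(0)
t = rng.normal(size=1000)
X = np.column_stack([t, t ** 2 + 0.25 * rng.normal(size=1000)])  # toy non-Gaussian data
Z = rbig(X)
print("covariance of the Gaussianized output:\n", np.round(np.cov(Z.T), 2))
```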


Subject(s)
Neural Networks (Computer), Normal Distribution, Principal Component Analysis, Algorithms, Computer Simulation, Humans, Rotation, Wavelet Analysis
13.
Neural Comput ; 22(12): 3179-206, 2010 Dec.
Article in English | MEDLINE | ID: mdl-20858127

ABSTRACT

The conventional approach in computational neuroscience in favor of the efficient coding hypothesis goes from image statistics to perception. It has been argued that the behavior of the early stages of biological visual processing (e.g., spatial frequency analyzers and their nonlinearities) may be obtained from image samples and the efficient coding hypothesis using no psychophysical or physiological information. In this work we address the same issue in the opposite direction: from perception to image statistics. We show that the psychophysically fitted image representation in V1 has appealing statistical properties, for example approximate PDF factorization and substantial mutual information reduction, even though no statistical information is used to fit the V1 model. These results are complementary evidence in favor of the efficient coding hypothesis.


Subject(s)
Neurological Models, Neurons/physiology, Visual Cortex/physiology, Visual Perception/physiology
14.
J Opt Soc Am A Opt Image Sci Vis ; 27(4): 852-64, 2010 Apr 01.
Article in English | MEDLINE | ID: mdl-20360827

ABSTRACT

Structural similarity metrics and information-theory-based metrics have been proposed as completely different alternatives to the traditional metrics based on error visibility and human vision models. Three basic criticisms were raised against the traditional error visibility approach: (1) it is based on near-threshold performance, (2) its geometric meaning may be limited, and (3) stationary pooling strategies may not be statistically justified. These criticisms, together with the good performance of structural and information-theory-based metrics, have popularized the idea of their superiority over the error visibility approach. In this work we show, experimentally or analytically, that the above criticisms do not apply to error visibility metrics that use a general enough divisive normalization masking model. Therefore, the traditional divisive normalization metric [1] is not intrinsically inferior to the newer approaches. In fact, experiments on a number of databases including a wide range of distortions show that divisive normalization is fairly competitive with the newer approaches, robust, and easy to interpret in linear terms. These results suggest that, despite the criticisms of the traditional error visibility approach, divisive normalization masking models should be considered in the image quality discussion.
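
A minimal sketch of an error-visibility metric with divisive normalization, in the spirit of the abstract: transform each image with a local divisive normalization and measure the distance between the normalized responses. The "features" here are raw pixel values and the pooling neighbourhood is a uniform window; a full metric of this family would operate on a frequency or wavelet representation with tuned weights, so this is only an illustrative stand-in.

```python
# Toy divisive-normalization image quality measure.
import numpy as np
from scipy.ndimage import uniform_filter

def divisive_normalization(img, gamma=2.0, b=0.1, size=5):
    e = np.abs(img) ** gamma
    pool = uniform_filter(e, size=size)          # local masking energy
    return np.sign(img) * e / (b + pool)

def dn_metric(img1, img2):
    r1, r2 = divisive_normalization(img1), divisive_normalization(img2)
    return np.sqrt(np.mean((r1 - r2) ** 2))

rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
print(dn_metric(img, img + 0.1 * rng.normal(size=img.shape)))
```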
