Results 1 - 12 of 12
1.
Entropy (Basel) ; 23(4)2021 Mar 27.
Article in English | MEDLINE | ID: mdl-33801733

ABSTRACT

Transfer learning seeks to improve the generalization performance of a target task by exploiting the knowledge learned from a related source task. Central questions include deciding what information one should transfer and when transfer can be beneficial. The latter question is related to the so-called negative transfer phenomenon, where the transferred source information actually reduces the generalization performance of the target task. This happens when the two tasks are sufficiently dissimilar. In this paper, we present a theoretical analysis of transfer learning by studying a pair of related perceptron learning tasks. Despite the simplicity of our model, it reproduces several key phenomena observed in practice. Specifically, our asymptotic analysis reveals a phase transition from negative transfer to positive transfer as the similarity of the two tasks moves past a well-defined threshold.
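As a rough illustration of the phase-transition claim, the sketch below trains a target classifier that is shrunk toward transferred source weights and compares it against a no-transfer baseline as the teacher overlap rho varies. The dimension, sample size, ridge penalty, and the ridge surrogate for perceptron training are illustrative assumptions, not the paper's exact model.

```python
# Toy negative-vs-positive transfer experiment (assumed setup, not the paper's).
import numpy as np

rng = np.random.default_rng(0)
d, n_target, lam = 100, 60, 5.0  # dimension, target samples, ridge weight (assumed)

def teacher_pair(rho):
    """Two unit-norm teacher vectors whose inner product is rho."""
    u = rng.standard_normal(d); u /= np.linalg.norm(u)
    v = rng.standard_normal(d); v -= (v @ u) * u; v /= np.linalg.norm(v)
    return u, rho * u + np.sqrt(1 - rho**2) * v

def gen_error(w, teacher):
    """Perceptron generalization error = angle(w, teacher) / pi."""
    c = (w @ teacher) / (np.linalg.norm(w) * np.linalg.norm(teacher))
    return np.arccos(np.clip(c, -1.0, 1.0)) / np.pi

for rho in (0.0, 0.3, 0.6, 0.9):
    e_tr, e_no = [], []
    for _ in range(100):
        w_src, w_tgt = teacher_pair(rho)
        X = rng.standard_normal((n_target, d))
        y = np.sign(X @ w_tgt)
        A = X.T @ X + lam * np.eye(d)
        # Ridge solution pulled toward the transferred source weights vs. zero.
        e_tr.append(gen_error(np.linalg.solve(A, X.T @ y + lam * w_src), w_tgt))
        e_no.append(gen_error(np.linalg.solve(A, X.T @ y), w_tgt))
    print(f"rho={rho:.1f}  transfer={np.mean(e_tr):.3f}  no transfer={np.mean(e_no):.3f}")
```

For small rho the transferred prior typically hurts (negative transfer); past a similarity threshold it helps, mirroring the transition the paper characterizes analytically.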

2.
Proc IEEE Inst Electr Electron Eng ; 106(8): 1293-1310, 2018 Aug.
Article in English | MEDLINE | ID: mdl-30828106

ABSTRACT

For many modern applications in science and engineering, data are collected in a streaming fashion, carrying time-varying information, and practitioners need to process them with a limited amount of memory and computational resources in a timely manner for decision making. This is often coupled with the missing data problem, in which only a small fraction of data attributes is observed. These complications impose significant and unconventional constraints on the problem of streaming Principal Component Analysis (PCA) and subspace tracking, which is an essential building block for many inference tasks in signal processing and machine learning. This survey article reviews a variety of classical and recent algorithms for solving this problem with low computational and memory complexities, particularly those applicable in the big data regime with missing data. We illustrate that streaming PCA and subspace tracking algorithms can be understood through algebraic and geometric perspectives, and that they need to be adjusted carefully to handle missing data. Both asymptotic and non-asymptotic convergence guarantees are reviewed. Finally, we benchmark the performance of several competitive algorithms in the presence of missing data for both well-conditioned and ill-conditioned systems.
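A minimal sketch of the streaming setting with missing entries: an Oja-style stochastic gradient update on a zero-filled sample, with a 1/p inverse-probability rescaling that makes the streamed outer products roughly unbiased off the diagonal (diagonal bias is ignored here). This is a simplification of the algorithms the survey covers (e.g., GROUSE, PETRELS); dimensions, noise level, and step-size schedule are assumptions.

```python
# Oja-style streaming PCA with missing data (toy sketch, assumed parameters).
import numpy as np

rng = np.random.default_rng(1)
d, k, p = 50, 3, 0.3  # ambient dim, subspace rank, observation probability

U_true, _ = np.linalg.qr(rng.standard_normal((d, k)))  # ground-truth subspace
U, _ = np.linalg.qr(rng.standard_normal((d, k)))       # running estimate

for t in range(30000):
    x = U_true @ rng.standard_normal(k) + 0.05 * rng.standard_normal(d)
    mask = rng.random(d) < p                    # observed coordinates
    x_fill = np.where(mask, x, 0.0) / p         # zero-fill + inverse-prob. rescale
    eta = 1.0 / (100 + t)                       # decaying step size
    U = U + eta * np.outer(x_fill, x_fill @ U)  # stochastic gradient step
    U, _ = np.linalg.qr(U)                      # retract to an orthonormal basis

err = np.linalg.norm(U_true - U @ (U.T @ U_true))  # projection residual
print(f"subspace alignment error: {err:.3f}")
```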

3.
Neuroimage ; 125: 587-600, 2016 Jan 15.
Article in English | MEDLINE | ID: mdl-26481679

ABSTRACT

Motivated by recent progress in signal processing on graphs, we have developed a matched signal detection (MSD) theory for signals with intrinsic structures described by weighted graphs. First, we regard graph Laplacian eigenvalues as frequencies of graph signals and assume that the signal is in a subspace spanned by the first few graph Laplacian eigenvectors, those associated with the lower eigenvalues. The conventional matched subspace detector can be applied to this case. Furthermore, we study signals that may not merely live in a subspace. Concretely, we consider signals with bounded variation on graphs and, more generally, signals randomly drawn from a prior distribution. For bounded-variation signals, the test is a weighted energy detector. For random signals, if a degenerate Gaussian distribution specified by the graph Laplacian is adopted, the test statistic is the difference of the signal's variations on the associated graphs. We evaluate the effectiveness of the MSD on graphs with both simulated and real data sets. Specifically, we apply MSD to the brain imaging data classification problem of Alzheimer's disease (AD) based on two independent data sets: 1) positron emission tomography data with Pittsburgh compound-B tracer of 30 AD and 40 normal control (NC) subjects, and 2) resting-state functional magnetic resonance imaging (R-fMRI) data of 30 early mild cognitive impairment and 20 NC subjects. Our results demonstrate that the MSD approach is able to outperform traditional methods and help detect AD at an early stage, probably owing to its exploitation of the manifold structure of the data.
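A minimal sketch of the subspace case described above: project the observation onto the first k Laplacian eigenvectors ("low graph frequencies") and use the captured energy fraction as the test statistic. The toy graph, k, and noise level are assumptions.

```python
# Matched subspace detection on a graph (toy sketch).
import numpy as np

rng = np.random.default_rng(2)
n, k = 40, 5

# Random geometric-style weighted graph and its combinatorial Laplacian.
coords = rng.random((n, 2))
W = np.exp(-((coords[:, None, :] - coords[None, :, :])**2).sum(-1) / 0.05)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Eigenvectors sorted by eigenvalue: low eigenvalues <-> smooth graph signals.
eigvals, V = np.linalg.eigh(L)
Vk = V[:, :k]  # basis of the low-graph-frequency subspace

def msd_statistic(y):
    """Fraction of signal energy captured by the smooth subspace."""
    return np.linalg.norm(Vk.T @ y)**2 / np.linalg.norm(y)**2

smooth = Vk @ rng.standard_normal(k)   # in-subspace signal
noise = rng.standard_normal(n)         # pure noise
print(f"smooth signal: {msd_statistic(smooth + 0.1 * rng.standard_normal(n)):.2f}")
print(f"noise only:    {msd_statistic(noise):.2f}")
```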


Subject(s)
Alzheimer Disease/diagnosis; Brain Mapping/methods; Brain/pathology; Image Interpretation, Computer-Assisted/methods; Models, Neurological; Algorithms; Humans; Machine Learning; Magnetic Resonance Imaging; Models, Theoretical; Positron-Emission Tomography
4.
J Neurophysiol ; 115(1): 39-59, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26467513

ABSTRACT

Perceptual decision making is fundamental to a broad range of fields including neurophysiology, economics, medicine, advertising, law, etc. Although recent findings have yielded major advances in our understanding of perceptual decision making, decision making as a function of time and frequency (i.e., decision-making dynamics) is not well understood. To limit the review length, we focus most of this review on human findings. Animal findings, which are extensively reviewed elsewhere, are included when beneficial or necessary. We attempt to put these various findings and data sets, which can appear to be unrelated in the absence of a formal dynamic analysis, into context using published models. Specifically, by adding appropriate dynamic mechanisms (e.g., high-pass filters) to existing models, it appears that a number of otherwise seemingly disparate findings from the literature might be explained. One hypothesis that arises through this dynamic analysis is that decision making includes phasic (high pass) neural mechanisms, an evidence accumulator and/or some sort of midtrial decision-making mechanism (e.g., peak detector and/or decision boundary).
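To make the phasic-mechanism idea concrete, here is a toy drift-diffusion accumulator whose evidence passes through a first-order high-pass stage before accumulation, so sustained evidence stops contributing after roughly the filter's time constant. All parameters (cutoff, drift, noise, bound) are illustrative assumptions, not values fitted in the review.

```python
# Evidence accumulator with a phasic (high-pass) front end (toy sketch).
import numpy as np

rng = np.random.default_rng(3)
dt, tau_hp, bound = 0.001, 0.5, 0.8  # step (s), high-pass time constant, bound

def trial(drift):
    """Accumulate high-pass-filtered evidence to a bound; return (RT, choice)."""
    acc, lp = 0.0, 0.0
    for step in range(5000):  # up to 5 s
        e = drift + 0.5 * rng.standard_normal() / np.sqrt(dt)  # noisy evidence
        lp += dt / tau_hp * (e - lp)   # running low-pass of the evidence
        acc += (e - lp) * dt           # high-pass = input minus its low-pass
        if abs(acc) >= bound:
            return (step + 1) * dt, np.sign(acc)
    return 5.0, 0.0                    # no decision within the trial (lapse)

rts = [trial(drift=2.0) for _ in range(200)]
acc_rate = np.mean([c > 0 for _, c in rts])  # lapses count as errors here
print(f"mean RT = {np.mean([t for t, _ in rts]):.2f} s, accuracy = {acc_rate:.2f}")
```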


Subject(s)
Brain/physiology; Decision Making; Perception; Animals; Humans; Sensory Thresholds
5.
Proc Natl Acad Sci U S A ; 110(30): 12186-91, 2013 Jul 23.
Article in English | MEDLINE | ID: mdl-23776236

ABSTRACT

Imagine that you are blindfolded inside an unknown room. You snap your fingers and listen to the room's response. Can you hear the shape of the room? Some people can do it naturally, but can we design computer algorithms that hear rooms? We show how to compute the shape of a convex polyhedral room from its response to a known sound, recorded by a few microphones. Geometric relationships between the arrival times of echoes enable us to "blindfoldedly" estimate the room geometry. This is achieved by exploiting the properties of Euclidean distance matrices. Furthermore, we show that under mild conditions, first-order echoes provide a unique description of convex polyhedral rooms. Our algorithm starts from the recorded impulse responses and proceeds by learning the correct assignment of echoes to walls. In contrast to earlier methods, the proposed algorithm reconstructs the full 3D geometry of the room from a single sound emission, and with an arbitrary geometry of the microphone array. As long as the microphones can hear the echoes, we can position them as we want. Besides answering a basic question about the inverse problem of room acoustics, our results find applications in areas such as architectural acoustics, indoor localization, virtual reality, and audio forensics.
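The core geometric test behind the echo sorting can be sketched directly: a candidate set of echo distances (one per microphone) is consistent with a single image source exactly when augmenting the microphones' Euclidean distance matrix keeps it a valid EDM, i.e., the doubly centered matrix stays (near) rank 3 in 3-D. The geometry and noise level below are toy assumptions.

```python
# EDM-based consistency test for echo-to-wall assignment (toy sketch).
import numpy as np

rng = np.random.default_rng(4)
mics = rng.random((5, 3))  # five microphones in 3-D

def edm(points):
    """Squared-distance matrix of a point set."""
    G = points @ points.T
    d = np.diag(G)
    return d[:, None] - 2 * G + d[None, :]

def edm_rank_score(D):
    """Small score <-> D is close to a valid EDM of 3-D points."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    ev = np.sort(np.abs(np.linalg.eigvalsh(-0.5 * J @ D @ J)))[::-1]
    return ev[3:].sum() / ev[:3].sum()  # spectral energy beyond rank 3

src = np.array([2.0, 1.5, 0.7])  # a (virtual) image source
d_true = np.linalg.norm(mics - src, axis=1) + 1e-3 * rng.standard_normal(5)
d_wrong = rng.permutation(d_true)  # a wrong assignment of the same echoes

for name, d in [("correct assignment", d_true), ("wrong assignment", d_wrong)]:
    D = np.zeros((6, 6))
    D[:5, :5] = edm(mics)
    D[5, :5] = D[:5, 5] = d**2  # augment with the candidate's squared distances
    print(f"{name}: score = {edm_rank_score(D):.4f}")
```

The correct assignment yields a near-zero score, while a shuffled one is typically much larger, which is what lets the algorithm learn the echo-to-wall labeling.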

6.
IEEE Trans Image Process ; 16(4): 918-31, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405426

ABSTRACT

In 1992, Bamberger and Smith proposed the directional filter bank (DFB) for an efficient directional decomposition of 2-D signals. Due to the nonseparable nature of the system, extending the DFB to higher dimensions while retaining its attractive features is a challenging and previously unsolved problem. We propose a new family of filter banks, named NDFB, that can achieve the directional decomposition of arbitrary N-dimensional (N ≥ 2) signals with a simple and efficient tree-structured construction. In 3-D, the ideal passbands of the proposed NDFB are rectangular-based pyramids radiating out from the origin at different orientations and tiling the entire frequency space. The proposed NDFB achieves perfect reconstruction via an iterated filter bank with a redundancy factor of N in N-D. The angular resolution of the NDFB can be iteratively refined by invoking more levels of decomposition through a simple expansion rule. By combining the NDFB with a new multiscale pyramid, we propose the surfacelet transform, which can be used to efficiently capture and represent surface-like singularities in multidimensional data.
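For intuition only, here is a brute-force 2-D analogue of what a directional decomposition computes: split the image spectrum into angular wedges with ideal FFT-domain masks. This illustrates the passband geometry, not the DFB/NDFB itself, which achieves the same decomposition with efficient tree-structured filter banks; the number of directions and the toy image are assumptions.

```python
# Ideal frequency-wedge directional decomposition (illustrative analogue).
import numpy as np

def directional_subbands(img, n_dirs=8):
    """Split img into n_dirs directional subbands via ideal FFT wedge masks."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    angle = np.arctan2(fy, fx) % np.pi  # orientation of each frequency bin
    F = np.fft.fft2(img)
    bands = []
    for i in range(n_dirs):
        lo, hi = i * np.pi / n_dirs, (i + 1) * np.pi / n_dirs
        mask = (angle >= lo) & (angle < hi)   # one angular wedge (plus conjugate)
        bands.append(np.real(np.fft.ifft2(F * mask)))
    return bands

img = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # toy image
bands = directional_subbands(img)
# The ideal masks partition the spectrum, so the subbands sum back to the image.
print(np.allclose(sum(bands), img))
```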


Subject(s)
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Reproducibility of Results; Sensitivity and Specificity
7.
IEEE Trans Image Process ; 26(11): 5107-5121, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28742038

ABSTRACT

Many patch-based image denoising algorithms can be formulated as applying a smoothing filter to the noisy image. Expressed as matrices, the smoothing filters must be row-normalized so that each row sums to unity. Surprisingly, applying a column normalization before the row normalization can often significantly improve the performance of the smoothing filter. Prior works showed that this performance gain is related to the Sinkhorn-Knopp balancing algorithm, an iterative procedure that symmetrizes a row-stochastic matrix to a doubly stochastic matrix. However, a complete understanding of the performance gain phenomenon is still lacking. In this paper, we study the performance gain from a statistical learning perspective. We show that Sinkhorn-Knopp is equivalent to an expectation-maximization (EM) algorithm for learning a Gaussian mixture model of the image patches. By establishing the correspondence between the steps of Sinkhorn-Knopp and the EM algorithm, we provide a geometric interpretation of the symmetrization process. This observation allows us to develop a new denoising algorithm, the Gaussian mixture model symmetric smoothing filter (GSF), which extends Sinkhorn-Knopp and generalizes the original smoothing filters. Despite its simple formulation, GSF outperforms many existing smoothing filters and performs comparably to several state-of-the-art denoising algorithms.
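The column-then-row balancing step is easy to demonstrate on a 1-D NLM-style affinity matrix. The sketch below compares a plain row-normalized smoother against a Sinkhorn-balanced one on a toy noisy sine; patch size, bandwidth, and iteration count are assumptions, and whether the balanced filter wins on this particular toy signal can vary.

```python
# Sinkhorn-Knopp balancing of an NLM-style smoothing filter (toy sketch).
import numpy as np

rng = np.random.default_rng(7)
clean = np.sin(np.linspace(0, 4 * np.pi, 200))
x = clean + 0.3 * rng.standard_normal(200)

# Affinities from local patch distances (1-D NLM-style weights).
P = np.lib.stride_tricks.sliding_window_view(np.pad(x, 2, mode="edge"), 5)
W = np.exp(-((P[:, None, :] - P[None, :, :])**2).sum(-1) / (2 * 0.5**2))

def sinkhorn(W, n_iter=20):
    """Alternate column and row normalization toward a doubly stochastic matrix."""
    for _ in range(n_iter):
        W = W / W.sum(0, keepdims=True)  # column normalization
        W = W / W.sum(1, keepdims=True)  # row normalization
    return W

D_row = W / W.sum(1, keepdims=True)  # plain row-stochastic smoother
D_ds = sinkhorn(W)                   # column-then-row balanced smoother
for name, F in [("row-normalized", D_row), ("Sinkhorn-balanced", D_ds)]:
    print(f"{name}: MSE = {np.mean((F @ x - clean)**2):.4f}")
```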

8.
PLoS One ; 10(5): e0128136, 2015.
Article in English | MEDLINE | ID: mdl-26024224

ABSTRACT

Understanding the network features of brain pathology is essential to reveal the underpinnings of neurodegenerative diseases. In this paper, we introduce a novel graph regression model (GRM) for learning the structural brain connectivity of Alzheimer's disease (AD) measured by amyloid-β deposits. The proposed GRM regards 11C-labeled Pittsburgh Compound-B (PiB) positron emission tomography (PET) imaging data as smooth signals defined on an unknown graph. This graph is then estimated through an optimization framework that fits the graph to the data with an adjustable level of uniformity of the connection weights. Under the assumed data model, results on simulated data illustrate that our approach can accurately reconstruct the underlying network, often better than both sample correlation and ℓ1-regularized partial correlation estimation. Evaluations on PiB-PET imaging data of 30 AD and 40 elderly normal control (NC) subjects demonstrate that the connectivity patterns revealed by the GRM are easy to interpret and consistent with known pathology. Moreover, the hubs of the reconstructed networks match the cortical hubs given by functional MRI. The discriminative network features, including both global connectivity measurements and degree statistics of specific nodes, discovered from the AD and NC amyloid-β networks provide new potential biomarkers for preclinical and clinical AD.
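A minimal sketch of the general smoothness-based graph-learning recipe behind such models: minimize the total variation Σᵢⱼ Wᵢⱼ‖xᵢ − xⱼ‖² over nonnegative symmetric weights, with a log-degree barrier and a squared-norm term to control sparsity and scale. This follows the generic recipe, not the paper's exact objective; the graph, regularization weights, and step size are assumptions.

```python
# Learning a graph from smooth signals by projected gradient descent (sketch).
import numpy as np

rng = np.random.default_rng(8)
n, m = 20, 500  # nodes, training signals (assumed)

# Ground truth: a ring graph, and signals made smooth on it by damping
# high graph frequencies.
W_true = np.zeros((n, n))
for i in range(n):
    W_true[i, (i + 1) % n] = W_true[(i + 1) % n, i] = 1.0
L_true = np.diag(W_true.sum(1)) - W_true
ev, V = np.linalg.eigh(L_true)
X = V @ (rng.standard_normal((n, m)) / np.sqrt(1.0 + 10.0 * ev)[:, None])

Z = ((X[:, None, :] - X[None, :, :])**2).sum(-1)  # pairwise signal distances

# Objective: <W, Z> - alpha * sum(log(deg)) + beta * ||W||_F^2.
W = np.ones((n, n)) - np.eye(n)
alpha, beta, step = 40.0, 0.5, 1e-4
for _ in range(3000):
    deg = np.maximum(W.sum(1), 1e-8)
    grad = Z - alpha / deg[:, None] + 2.0 * beta * W
    W = np.maximum(W - step * (grad + grad.T) / 2.0, 0.0)  # project to W >= 0
    np.fill_diagonal(W, 0.0)

# The largest learned weights should sit on the true ring edges.
idx = np.argsort(-W, axis=None)[: 2 * n]
rows, cols = np.unravel_index(idx, W.shape)
print(f"{int(W_true[rows, cols].sum())}/{2 * n} top entries match true edges")
```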


Subject(s)
Alzheimer Disease/pathology; Brain/pathology; Models, Biological; Regression Analysis; Aged; Alzheimer Disease/metabolism; Amyloid beta-Peptides/metabolism; Aniline Compounds; Brain/metabolism; Female; Fourier Analysis; Humans; Male; Positron-Emission Tomography/methods; Reference Values; Thiazoles
9.
IEEE Trans Image Process ; 23(8): 3711-25, 2014 Aug.
Article in English | MEDLINE | ID: mdl-25122743

ABSTRACT

We propose a randomized version of the nonlocal means (NLM) algorithm for large-scale image filtering. The new algorithm, called Monte Carlo nonlocal means (MCNLM), speeds up the classical NLM by computing a small subset of image patch distances, which are randomly selected according to a designed sampling pattern. We make two contributions. First, we analyze the performance of the MCNLM algorithm and show that, for large images or large external image databases, the random outcomes of MCNLM are tightly concentrated around the deterministic full NLM result. In particular, our error probability bounds show that, at any given sampling ratio, the probability for MCNLM to have a large deviation from the original NLM solution decays exponentially as the size of the image or database grows. Second, we derive explicit formulas for optimal sampling patterns that minimize the error probability bound by exploiting partial knowledge of the pairwise similarity weights. Numerical experiments show that MCNLM is competitive with other state-of-the-art fast NLM algorithms for single-image denoising. When applied to denoising images using an external database containing ten billion patches, MCNLM returns a randomized solution that is within 0.2 dB of the full NLM solution while reducing the runtime by three orders of magnitude.
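The sampling idea is simple to show in code: estimate each NLM output from a random subset of patch comparisons instead of all of them. The sketch below uses uniform sampling on a 1-D signal for brevity (the paper also derives optimized sampling patterns); patch size, bandwidth, and sampling ratio are assumptions.

```python
# Monte Carlo nonlocal means on a 1-D signal (toy sketch, uniform sampling).
import numpy as np

rng = np.random.default_rng(9)
x = np.sin(np.linspace(0, 6 * np.pi, 400)) + 0.2 * rng.standard_normal(400)
patches = np.lib.stride_tricks.sliding_window_view(np.pad(x, 3, mode="edge"), 7)

def nlm(x, patches, h=0.4, ratio=1.0):
    n = len(x)
    out = np.empty(n)
    for i in range(n):
        # MCNLM idea: compare against a random fraction `ratio` of the pixels.
        idx = rng.choice(n, size=max(1, int(ratio * n)), replace=False)
        w = np.exp(-((patches[idx] - patches[i])**2).sum(1) / h**2)
        out[i] = (w @ x[idx]) / w.sum()
    return out

full = nlm(x, patches, ratio=1.0)  # classical NLM (all comparisons)
mc = nlm(x, patches, ratio=0.1)    # Monte Carlo NLM with 10% sampling
print(f"max deviation from full NLM: {np.abs(mc - full).max():.4f}")
```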


Subject(s)
Algorithms; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Models, Statistical; Computer Simulation; Monte Carlo Method; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sample Size; Sensitivity and Specificity; Signal Processing, Computer-Assisted
10.
Inf Process Med Imaging ; 23: 1-12, 2013.
Article in English | MEDLINE | ID: mdl-24683953

ABSTRACT

We develop a matched signal detection (MSD) theory for signals with an intrinsic structure described by a weighted graph. Hypothesis tests are formulated under different signal models. In the simplest scenario, we assume that the signal is deterministic with noise in a subspace spanned by a subset of eigenvectors of the graph Laplacian. The conventional matched subspace detection can be easily extended to this case. Furthermore, we study signals with a certain level of smoothness. When the noise variance is negligible, the test turns out to be a weighted energy detector. More generally, we presume that the signal follows a prior distribution, which could be learned from training data. If an Ising model is adopted, the test statistic is the difference of signal variations on the associated graph structures. The effectiveness of MSD on graphs is evaluated with both simulation and real data. In particular, we apply it to the network classification problem of Alzheimer's disease (AD). The preliminary results demonstrate that our approach is able to exploit the sub-manifold structure of the data, and therefore achieves better performance than traditional principal component analysis (PCA).
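The two-graph decision statistic is easy to simulate. The sketch below uses a Gaussian smoothness prior rather than the Ising model for simplicity, and tests x'L0x − x'L1x when the signal is smooth on one of two candidate graphs; both graphs and the prior are toy assumptions.

```python
# Difference-of-variations statistic for deciding between two graphs (sketch).
import numpy as np

rng = np.random.default_rng(10)
n = 30

def ring_laplacian(n, skip):
    """Laplacian of a ring where node i connects to node i+skip (mod n)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i + skip) % n] = W[(i + skip) % n, i] = 1.0
    return np.diag(W.sum(1)) - W

L0, L1 = ring_laplacian(n, 1), ring_laplacian(n, 5)  # two candidate graphs

def sample_smooth(L):
    """Draw a signal smooth on the graph of L (low graph frequencies favored)."""
    ev, V = np.linalg.eigh(L)
    return V @ (rng.standard_normal(n) / np.sqrt(1.0 + 10.0 * ev))

for truth, L in [("H0 (graph 0)", L0), ("H1 (graph 1)", L1)]:
    stats = [(lambda x: x @ L0 @ x - x @ L1 @ x)(sample_smooth(L))
             for _ in range(200)]
    print(f"{truth}: mean statistic = {np.mean(stats):+.2f}")
```

The statistic is negative when the signal is smooth on graph 0 and positive when it is smooth on graph 1, so thresholding it at zero separates the two hypotheses.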


Subject(s)
Alzheimer Disease/diagnostic imaging; Brain Mapping/methods; Brain/diagnostic imaging; Connectome/methods; Nerve Net/diagnostic imaging; Pattern Recognition, Automated/methods; Positron-Emission Tomography/methods; Algorithms; Alzheimer Disease/metabolism; Aniline Compounds; Benzothiazoles/pharmacokinetics; Brain/metabolism; Humans; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Neural Pathways/diagnostic imaging; Reproducibility of Results; Sensitivity and Specificity; Thiazoles; Tissue Distribution
11.
IEEE Trans Image Process ; 21(4): 1421-36, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22180507

ABSTRACT

We study a new image sensor that is reminiscent of traditional photographic film. Each pixel in the sensor has a binary response, giving only a 1-bit quantized measurement of the local light intensity. To analyze its performance, we formulate the oversampled binary sensing scheme as a parameter estimation problem based on quantized Poisson statistics. We show that, with a single-photon quantization threshold and large oversampling factors, the Cramér-Rao lower bound (CRLB) of the estimation variance approaches that of an ideal unquantized sensor, i.e., as if there were no quantization in the sensor measurements. Furthermore, the CRLB is shown to be asymptotically achievable by the maximum-likelihood estimator (MLE). By showing that the log-likelihood function of our problem is concave, we guarantee the global optimality of iterative algorithms in finding the MLE. Numerical results on both synthetic data and images taken by a prototype sensor verify our theoretical analysis and demonstrate the effectiveness of our image reconstruction algorithm. They also suggest the potential application of the oversampled binary sensing scheme in high-dynamic-range photography.
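For the single-photon-threshold case, the MLE can be sketched in closed form: with K one-bit sub-pixels, each fires with probability 1 − exp(−λ/K), so inverting the empirical firing rate gives λ̂ = −K ln(1 − S/K). The exposure level and oversampling factor below are toy values.

```python
# Closed-form MLE for the oversampled binary sensor, single-photon threshold.
import numpy as np

rng = np.random.default_rng(11)
K, lam_true = 1024, 50.0  # oversampling factor, true exposure (photons/pixel)

def estimate(n_trials=2000):
    ests = []
    for _ in range(n_trials):
        photons = rng.poisson(lam_true / K, size=K)  # photons per sub-pixel
        b = (photons >= 1).astype(float)             # 1-bit quantization
        s = min(b.sum(), K - 1)                      # guard against log(0)
        ests.append(-K * np.log(1.0 - s / K))        # closed-form MLE
    return np.array(ests)

est = estimate()
print(f"true={lam_true}, mean MLE={est.mean():.2f}, std={est.std():.2f}")
# The CRLB result above says this std approaches that of an ideal
# unquantized Poisson sensor, i.e. roughly sqrt(lam_true):
print(f"ideal-sensor std: {np.sqrt(lam_true):.2f}")
```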


Subject(s)
Image Interpretation, Computer-Assisted/methods; Photography/instrumentation; Photometry/instrumentation; Semiconductors; Signal Processing, Computer-Assisted/instrumentation; Transducers; Computer-Aided Design; Data Interpretation, Statistical; Equipment Design; Equipment Failure Analysis; Image Enhancement/instrumentation; Image Enhancement/methods; Image Interpretation, Computer-Assisted/instrumentation; Pilot Projects; Poisson Distribution; Reproducibility of Results; Sample Size; Sensitivity and Specificity
12.
IEEE Trans Image Process ; 19(8): 2085-98, 2010 Aug.
Article in English | MEDLINE | ID: mdl-20236886

ABSTRACT

Color image demosaicking is a key process in the digital imaging pipeline. In this paper, we study a well-known and influential demosaicking algorithm based upon alternating projections (AP), proposed by Gunturk, Altunbasak and Mersereau in 2002. Since its publication, the AP algorithm has been widely cited and compared against in a series of more recent papers in the demosaicking literature. Despite its good performance, a limitation of the AP algorithm is its high computational complexity. We provide three main contributions in this paper. First, we present a rigorous analysis of the convergence of the AP demosaicking algorithm, showing that it is a contraction mapping with a unique fixed point. Second, we show that this fixed point is in fact the solution to a constrained quadratic minimization problem, thus establishing the optimality of the AP algorithm. Finally, using the tool of polyphase representation, we show how to obtain the result of the AP algorithm in a single step, implemented as linear filtering in the polyphase domain. Replacing the original iterative procedure with the proposed one-step solution leads to substantial computational savings, about an order of magnitude in our experiments.
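The alternating-projections structure is easy to see in a simplified 1-D analogue: alternate between a data-consistency projection (keep the known samples, like the observed mosaic values) and a bandlimitedness projection (keep low frequencies, like the inter-channel detail constraint), and watch the iterates contract toward a fixed point. The signal, band, and sampling pattern are toy assumptions.

```python
# Alternating projections converging to a fixed point (1-D toy analogue).
import numpy as np

rng = np.random.default_rng(12)
n, band = 128, 16
t = np.arange(n)
truth = sum(np.cos(2 * np.pi * k * t / n + k) for k in range(1, 8))  # bandlimited
mask = rng.random(n) < 0.5  # locations where samples are observed

def proj_data(x):
    """Projection onto signals agreeing with the observed samples."""
    y = x.copy()
    y[mask] = truth[mask]
    return y

def proj_band(x):
    """Projection onto low-pass (bandlimited) signals."""
    F = np.fft.fft(x)
    F[band:n - band + 1] = 0.0  # zero high frequencies, keeping symmetry
    return np.real(np.fft.ifft(F))

x = np.zeros(n)
for it in range(1, 201):
    x = proj_band(proj_data(x))
    if it % 50 == 0:
        print(f"iter {it:3d}: error = {np.linalg.norm(x - truth):.2e}")
# The paper's result is that the demosaicking analogue of this iteration is a
# contraction with a unique fixed point, and that the fixed point can be
# computed in one linear filtering step in the polyphase domain.
```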


Subject(s)
Algorithms; Color; Colorimetry/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted