1.
Sci Rep; 8(1): 8027, 2018 May 23.
Article in English | MEDLINE | ID: mdl-29795277

ABSTRACT

Local interneurons (LNs) in the Drosophila olfactory system exhibit neuronal diversity and variability, yet it is still unknown how these features affect information encoding capacity and reliability in a complex LN network. We employed two strategies to construct a diverse excitatory-inhibitory neural network: beginning with a ring network structure, we introduced distinct types of inhibitory interneurons and circuit variability into the simulated network. The continuity of activity within the node ensemble (oscillation pattern) was used as a readout to describe the temporal dynamics of network activity. We found that inhibitory interneurons enhance encoding capacity by protecting the network from extremely short activation periods when the network wiring complexity is very high. In addition, distinct types of interneurons have differential effects on encoding capacity and reliability, and circuit variability may enhance encoding reliability with or without compromising encoding capacity. We therefore describe how circuit variability of interneurons may interact with excitatory-inhibitory diversity to enhance the encoding capacity and distinguishability of neural networks. In this work, we evaluate the effects of different types and degrees of connection diversity on a ring model, which may represent interneuron networks in the Drosophila olfactory system or other biological systems.
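
A minimal sketch of the kind of network construction described above (illustrative only; `build_ring_network` and every parameter value below are hypothetical and not the authors' simulation code): start from a ring backbone, mark a fraction of nodes as inhibitory, and add random long-range connections to model circuit variability.

```python
import numpy as np

def build_ring_network(n_nodes=100, n_inhibitory=20, n_extra_edges=50, seed=0):
    """Toy excitatory-inhibitory ring network with added wiring variability."""
    rng = np.random.default_rng(seed)
    # Signed weight matrix: +1 excitatory, -1 inhibitory, 0 no connection.
    w = np.zeros((n_nodes, n_nodes))
    sign = np.ones(n_nodes)                          # start all-excitatory
    sign[rng.choice(n_nodes, n_inhibitory, replace=False)] = -1.0
    for i in range(n_nodes):                         # ring backbone: each node drives its two neighbours
        for j in ((i - 1) % n_nodes, (i + 1) % n_nodes):
            w[i, j] = sign[i]
    for _ in range(n_extra_edges):                   # circuit variability: random long-range connections
        i, j = rng.choice(n_nodes, 2, replace=False)
        w[i, j] = sign[i]
    return w

w = build_ring_network()
print("excitatory edges:", int((w > 0).sum()), "inhibitory edges:", int((w < 0).sum()))
```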

2.
IEEE Trans Neural Netw Learn Syst; 29(2): 377-391, 2018 Feb.
Article in English | MEDLINE | ID: mdl-27913361

ABSTRACT

Blind source separation (BSS) aims to extract unknown sources from observations of their unknown mixtures. A current trend in BSS is the semiblind approach, which incorporates prior information about the sources or about how they are mixed. Constrained independent component analysis (ICA) has been studied as a way to impose such constraints on the well-known ICA framework. We introduce an alternative approach based on the null space component analysis (NCA) framework, referred to as the c-NCA approach. We also present the c-NCA algorithm, which uses signal-dependent semidefinite operators, defined through a bilinear mapping, as signatures for operator design in the c-NCA approach. Theoretically, we show that the source estimates of the c-NCA algorithm converge, with a convergence rate governed by the decay of the sequence obtained by applying the estimated operators to the corresponding sources. Because c-NCA can be formulated as a deterministic constrained optimization problem, it can exploit solvers developed by the optimization community for solving the BSS problem. As examples, we demonstrate that electroencephalogram (EEG) interference rejection problems can be solved by c-NCA with proximal splitting algorithms, by incorporating a sparsity-enforcing separation model and considering the case in which reference signals are available.
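
The abstract mentions proximal splitting with a sparsity-enforcing separation model. The sketch below is a generic proximal-gradient (ISTA) solver for an l1-regularized least-squares source estimate; it is not the paper's c-NCA algorithm, and all names and values are assumptions made for illustration.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(x, A, lam=0.1, n_iter=200):
    """Proximal-gradient sketch of a sparsity-enforcing separation model:
    minimize 0.5*||x - A s||^2 + lam*||s||_1 over the source estimate s."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ s - x)
        s = soft_threshold(s - step * grad, step * lam)
    return s

rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128))                # known mixing/dictionary for the toy example
s_true = np.zeros(128)
s_true[rng.choice(128, 5, replace=False)] = 1.0   # sparse ground-truth source
x = A @ s_true + 0.01 * rng.standard_normal(64)
print(np.round(ista(x, A), 2).nonzero()[0])       # indices of recovered nonzero components
```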

3.
IEEE Trans Image Process; 22(4): 1277-90, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23014750

ABSTRACT

From the Bayesian perspective, the denoising problem is essentially a prior probability modeling and estimation task. In this paper, we propose an approach that exploits a hidden Bayesian network, constructed from wavelet coefficients, to model the prior probability of the original image. We then use the belief propagation (BP) algorithm, which estimates a coefficient based on all the coefficients of an image, as the maximum a posteriori (MAP) estimator to derive the denoised wavelet coefficients. We show that if the network is a spanning tree, the standard BP algorithm can perform MAP estimation efficiently. Our experimental results demonstrate that, in terms of peak signal-to-noise ratio (PSNR) and perceptual quality, the proposed approach outperforms state-of-the-art algorithms on several images, particularly in textured regions, under various amounts of white Gaussian noise.
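
Why does a tree structure make exact MAP estimation efficient? The sketch below shows max-product belief propagation on the simplest tree, a chain of discrete variables, where it reduces to dynamic programming. It is a generic illustration only; the paper's model is a hidden Bayesian network over wavelet coefficients, and the potentials and sizes below are invented.

```python
import numpy as np

def map_chain(unary, pairwise):
    """Exact MAP by max-product dynamic programming on a chain (a special case
    of belief propagation on a tree). unary: (n, k) log-potentials per node;
    pairwise: (k, k) log-potential shared by neighbouring nodes."""
    n, k = unary.shape
    msg = np.zeros((n, k))              # best log-score of a prefix ending in each state
    back = np.zeros((n, k), dtype=int)  # backpointers for recovering the argmax
    msg[0] = unary[0]
    for i in range(1, n):
        scores = msg[i - 1][:, None] + pairwise      # (previous state, current state)
        back[i] = scores.argmax(axis=0)
        msg[i] = scores.max(axis=0) + unary[i]
    states = np.zeros(n, dtype=int)
    states[-1] = msg[-1].argmax()
    for i in range(n - 1, 0, -1):
        states[i - 1] = back[i, states[i]]
    return states

# Smoothness prior favouring equal neighbouring states, with noisy unary evidence.
unary = np.log(np.array([[0.9, 0.1], [0.4, 0.6], [0.8, 0.2], [0.1, 0.9]]))
pairwise = np.log(np.array([[0.8, 0.2], [0.2, 0.8]]))
print(map_chain(unary, pairwise))       # MAP state assignment of the 4-node chain
```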

4.
IEEE Trans Image Process; 21(5): 2592-606, 2012 May.
Article in English | MEDLINE | ID: mdl-22155961

ABSTRACT

In this paper, we present a theoretical analysis of the distortion in multilayer coding structures. Specifically, we analyze the prediction structure used to achieve temporal, spatial, and quality scalability in scalable video coding (SVC) and show that the average peak signal-to-noise ratio (PSNR) of SVC is a weighted combination of the bit rates assigned to all the streams. Our analysis takes into account end users' preferences for certain resolutions. We also propose a rate-distortion (R-D) optimization algorithm and compare its performance with that of a state-of-the-art scalable bit allocation algorithm. The experimental results demonstrate that the proposed R-D algorithm significantly outperforms the compared approach in terms of average PSNR.
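
As a rough illustration of the two ingredients named above, a preference-weighted average PSNR and rate-dependent quality per layer, here is a toy sketch. The greedy allocator, the logarithmic R-D curves, and all numbers are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def weighted_avg_psnr(psnr_per_layer, preference):
    """Average PSNR over operating points, weighted by end users' preference
    for each resolution/quality layer (weights are normalized to sum to one)."""
    preference = np.asarray(preference, dtype=float)
    return float(np.dot(psnr_per_layer, preference / preference.sum()))

def greedy_bit_allocation(rd_curves, total_bits, step=1):
    """Toy allocation: repeatedly give `step` bits to the layer whose R-D curve
    promises the largest PSNR gain. rd_curves[i](r) -> PSNR of layer i at rate r."""
    rates = [0] * len(rd_curves)
    for _ in range(0, total_bits, step):
        gains = [rd_curves[i](rates[i] + step) - rd_curves[i](rates[i])
                 for i in range(len(rd_curves))]
        rates[int(np.argmax(gains))] += step
    return rates

# Illustrative logarithmic R-D models for three layers.
curves = [lambda r, a=a: a * np.log1p(r) for a in (6.0, 5.0, 4.0)]
rates = greedy_bit_allocation(curves, total_bits=300, step=10)
psnrs = [c(r) for c, r in zip(curves, rates)]
print(rates, round(weighted_avg_psnr(psnrs, [0.5, 0.3, 0.2]), 2))
```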


Assuntos
Artefatos , Aumento da Imagem/métodos , Interpretação de Imagem Assistida por Computador/métodos , Armazenamento e Recuperação da Informação/métodos , Reconhecimento Automatizado de Padrão/métodos , Fotografação/métodos , Gravação em Vídeo/métodos , Algoritmos , Reprodutibilidade dos Testes , Sensibilidade e Especificidade , Processamento de Sinais Assistido por Computador
5.
IEEE Trans Biomed Eng; 59(2): 531-41, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22084042

ABSTRACT

Typical mosaicing schemes assume that the to-be-combined images are equally informative and therefore process them in a similar manner. However, new confocal fluorescence imaging techniques have revealed a problem that arises when two asymmetrically informative biological images are stitched during microscope image mosaicing, a process widely used in biological studies to generate a higher-resolution image by combining multiple images taken at different times and angles. To resolve this problem, we propose a multiresolution optimization approach that evaluates the blending coefficients based on the relative importance of the overlapping regions of the to-be-combined image pair. The blending coefficients are the optimal solution of a quadratic programming problem whose constraints are enforced by the biological requirements. We demonstrate the efficacy of the proposed approach on several confocal fluorescence microscope images and compare the results with those derived by other methods.
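
A minimal sketch of the quadratic-programming idea for an overlap region, assuming a 1-D blending profile, a quadratic fidelity-plus-smoothness objective, and simple boundary constraints; the function `blending_profile`, its importance inputs, and the constraints are illustrative stand-ins, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

def blending_profile(importance_a, importance_b, smooth=1.0):
    """Toy QP for a 1-D blending profile alpha over an overlap region: alpha favours
    the locally more 'important' image while staying smooth, with alpha fixed to 1
    at the side adjoining image A and to 0 at the side adjoining image B."""
    n = len(importance_a)
    target = importance_a / (importance_a + importance_b)   # data term: relative importance

    def objective(alpha):
        fidelity = np.sum((alpha - target) ** 2)
        smoothness = smooth * np.sum(np.diff(alpha) ** 2)
        return fidelity + smoothness

    cons = [{"type": "eq", "fun": lambda a: a[0] - 1.0},
            {"type": "eq", "fun": lambda a: a[-1]}]
    res = minimize(objective, x0=target, bounds=[(0.0, 1.0)] * n,
                   constraints=cons, method="SLSQP")
    return res.x

imp_a = np.linspace(1.0, 0.2, 32)     # image A informative on the left of the overlap
imp_b = np.linspace(0.2, 1.0, 32)     # image B informative on the right
alpha = blending_profile(imp_a, imp_b)
print(np.round(alpha[[0, 8, 16, 24, 31]], 2))
```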


Subjects
Algorithms; Image Processing, Computer-Assisted/methods; Microscopy, Confocal/methods; Animals; Brain/ultrastructure; Drosophila; Mice; Pancreas/ultrastructure
6.
IEEE Trans Image Process; 19(5): 1307-18, 2010 May.
Article in English | MEDLINE | ID: mdl-20051341

ABSTRACT

A texture representation should support the various uses of a texture. In this paper, we present a novel approach that incorporates texture features for retrieval into an exemplar-based texture compaction and synthesis algorithm. The original texture is compacted and compressed in the encoder to obtain a thumbnail texture, from which the decoder synthesizes a perceptually high-quality texture. We propose a probabilistic framework based on the generalized EM algorithm to analyze the solutions of the approach. Our experimental results show that a high-quality synthesized texture can be generated in the decoder from a compressed thumbnail texture. The number of bits in the compressed thumbnail is 400 times lower than that in the original texture and 50 times lower than that needed to compress the original texture using JPEG2000. We also show that, in terms of retrieval and synthesis, our compressed compacted textures perform better than compressed cropped textures and than compressed compacted textures derived by the patchwork algorithm.
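
For intuition about exemplar-based synthesis from a small thumbnail, here is a deliberately crude sketch that grows a single row of patches by edge matching. It is not the paper's compaction/synthesis algorithm or its EM analysis; all sizes and the greedy matching rule are assumptions.

```python
import numpy as np

def synthesize_rowwise(thumb, out_tiles=8, patch=8, seed=0):
    """Toy exemplar-based synthesis: grow a row of patches, each time picking the
    thumbnail patch whose left edge best matches the right edge of what has been
    synthesized so far (greyscale, no blending)."""
    rng = np.random.default_rng(seed)
    h, w = thumb.shape
    # Candidate patches extracted from the compacted thumbnail texture.
    cands = np.array([thumb[y:y + patch, x:x + patch]
                      for y in range(h - patch + 1) for x in range(w - patch + 1)])
    out = [cands[rng.integers(len(cands))]]        # random seed patch
    for _ in range(out_tiles - 1):
        edge = out[-1][:, -1]                      # right edge of the last patch
        cost = np.sum((cands[:, :, 0] - edge) ** 2, axis=1)
        out.append(cands[np.argmin(cost)])
    return np.hstack(out)

thumb = np.random.default_rng(1).random((32, 32))  # stand-in thumbnail texture
print(synthesize_rowwise(thumb).shape)             # (8, 64)
```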


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Information Storage and Retrieval/methods; Pattern Recognition, Automated/methods; Reproducibility of Results; Sensitivity and Specificity
7.
IEEE Trans Image Process; 18(1): 52-62, 2009 Jan.
Article in English | MEDLINE | ID: mdl-19095518

ABSTRACT

Performing optimal bit allocation with 3-D wavelet coding methods is difficult because energy is not conserved after the motion-compensated temporal filtering (MCTF) process and the spatial wavelet transform are applied. The problem cannot be solved by directly extending the 2-D wavelet coefficient weighting method to 3-D wavelet coefficients, because this approach does not consider the complicated pixel connectivity that results from the lifting-based MCTF process. In this paper, we propose a novel weighting method that takes this pixel connectivity into account, and we derive the effect of the quantization error of a subband on the reconstruction error of a group of pictures. We apply the proposed method to a 2-D + t structure with different temporal filters, namely the 5-3 filter and the 9-7 filter. Experiments with various coding parameters and sequences show that the proposed approach improves bit-allocation performance over that obtained with weightings derived without considering the pixel connectivity of the MCTF process.
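
For context, the 2-D baseline weighting that the paper improves upon can be sketched as follows: the weight of a subband is the reconstruction energy produced by a unit coefficient in that subband. This sketch uses PyWavelets and does not model the 3-D/MCTF pixel connectivity that is the paper's contribution; wavelet choice, image size, and level count are arbitrary.

```python
import numpy as np
import pywt

def subband_weights(shape=(64, 64), wavelet="bior4.4", levels=3):
    """Baseline 2-D weighting: reconstruction energy of a unit coefficient placed
    at the centre of each subband (no MCTF pixel connectivity modelled)."""
    template = pywt.wavedec2(np.zeros(shape), wavelet, level=levels)
    weights = {}
    for li in range(len(template)):
        bands = ["LL"] if li == 0 else ["LH", "HL", "HH"]
        for bi, name in enumerate(bands):
            # Fresh all-zero coefficient set with a single unit coefficient.
            coeffs = [np.zeros_like(template[0])] + \
                     [tuple(np.zeros_like(b) for b in lev) for lev in template[1:]]
            band = coeffs[li] if li == 0 else coeffs[li][bi]
            band[band.shape[0] // 2, band.shape[1] // 2] = 1.0
            label = name if li == 0 else f"{name}{levels - li + 1}"
            weights[label] = float(np.sum(pywt.waverec2(coeffs, wavelet) ** 2))
    return weights

print({k: round(v, 3) for k, v in subband_weights().items()})
```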


Subjects
Algorithms; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Video Recording/methods; Reproducibility of Results; Sensitivity and Specificity
8.
IEEE Trans Med Imaging; 27(6): 847-57, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18541491

ABSTRACT

Inspired by the work of Paragios and Deriche, which unifies boundary-based and region-based image partitioning approaches, we integrate the snake model and the Fisher criterion to capture, respectively, the boundary information and the region information of microarray images. We then use the proposed algorithm to segment the spots in microarray images and compare our results with those obtained by commercial software. Our algorithm is fully automatic because the parameters are estimated adaptively from the data without human intervention.
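
To illustrate only the region (Fisher criterion) side of this combination, the sketch below picks an intensity threshold for a spot patch by maximizing the between-class scatter over the within-class scatter. It omits the snake/boundary term entirely, and `fisher_threshold` plus the synthetic patch are invented for illustration.

```python
import numpy as np

def fisher_threshold(patch):
    """Threshold that maximizes the Fisher criterion (between-class scatter over
    within-class scatter) separating spot foreground from background."""
    x = patch.ravel()
    best_t, best_score = None, -np.inf
    for t in np.unique(x)[1:-1]:          # skip extremes so both classes are non-empty
        fg, bg = x[x >= t], x[x < t]
        between = (fg.mean() - bg.mean()) ** 2
        within = fg.var() + bg.var() + 1e-12
        if between / within > best_score:
            best_t, best_score = t, between / within
    return best_t

rng = np.random.default_rng(0)
patch = rng.normal(0.2, 0.05, (32, 32))                   # background intensities
yy, xx = np.mgrid[:32, :32]
patch[(yy - 16) ** 2 + (xx - 16) ** 2 < 64] += 0.6        # bright circular spot
mask = patch >= fisher_threshold(patch)
print("spot pixels:", int(mask.sum()))
```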


Subjects
Algorithms; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Microscopy, Fluorescence/methods; Oligonucleotide Array Sequence Analysis/methods; Pattern Recognition, Automated/methods; Computer Simulation; Image Enhancement/methods; Models, Theoretical; Reproducibility of Results; Sensitivity and Specificity
9.
IEEE Trans Image Process; 16(4): 1022-35, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405434

ABSTRACT

We propose a new framework for transmitting multiple scalable video bitstreams over lossy channels. The major feature of the framework is that the encoder estimates the effects of post-processing concealment and includes those effects in the rate-distortion analysis. Based on the framework, we develop a rate-distortion optimization algorithm that generates multiple scalable bitstreams. The algorithm maximizes the expected peak signal-to-noise ratio (PSNR) by optimally assigning forward error control (FEC) codes and transmission schemes within a constrained bandwidth. The framework is a general approach motivated by previous methods in which concealment is performed only in the decoder; such methods arise as a special case of ours. Simulations show that the proposed approach can be implemented efficiently and that it outperforms previous methods by more than 2 dB.
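
The "expected PSNR under packet loss and FEC" ingredient can be sketched as follows, assuming independent packet losses, an (n, k) erasure code per layer, and a layered bitstream in which a layer is usable only if all lower layers decode. Everything here (function names, rates, PSNR values) is illustrative, not the paper's optimization.

```python
import numpy as np
from math import comb

def decode_prob(n, k, loss):
    """Probability that an (n, k) erasure code recovers its block when each packet
    is lost independently with probability `loss` (at least k of n packets arrive)."""
    return sum(comb(n, r) * (1 - loss) ** r * loss ** (n - r) for r in range(k, n + 1))

def expected_psnr(layer_psnr, n_packets, k_packets, loss):
    """Expected PSNR of a layered bitstream: decoding stops at the first failed
    layer; layer_psnr[0] is the PSNR when nothing decodes (concealment only)."""
    p_layer = [decode_prob(n, k, loss) for n, k in zip(n_packets, k_packets)]
    exp, p_prefix = 0.0, 1.0
    for l, p in enumerate(p_layer):
        exp += layer_psnr[l] * p_prefix * (1 - p)    # decoding stops at layer l
        p_prefix *= p
    return exp + layer_psnr[len(p_layer)] * p_prefix  # everything decoded

# Three layers; stronger FEC (more parity) protects the base layer.
print(round(expected_psnr(layer_psnr=[20.0, 30.0, 34.0, 37.0],
                          n_packets=[12, 10, 10], k_packets=[8, 8, 9],
                          loss=0.1), 2))
```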


Subjects
Algorithms; Artifacts; Computer Communication Networks; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Video Recording/methods; Reproducibility of Results; Sensitivity and Specificity
10.
IEEE Trans Syst Man Cybern B Cybern; 36(3): 649-59, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16761817

ABSTRACT

A prototype is representative of a set of similar objects. This paper formulates prototype generation as finding the mean of a given set of objects, where the prototype must satisfy certain constraints. These constraints describe the important perceptual features of the sample shapes that the prototype must retain. As an example of the approach, we generate a contour prototype from a set of planar objects, using corners as the perceptual features to be preserved in the prototype shape. However, finding the prototype for more than two contours is computationally intractable. We therefore propose a tree-based approach in which an efficient greedy random algorithm yields a good approximation of the prototype, and we analyze the expected complexity of the algorithm. The prototype-generation process is demonstrated and discussed for hand-drawn patterns.
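
A stripped-down sketch of the "mean shape" step only: resample each contour to a common number of points, align by the best cyclic shift, and average. It omits the paper's perceptual (corner-preserving) constraints and the tree-based greedy search; `mean_contour`, the alignment rule, and the toy shapes are assumptions.

```python
import numpy as np

def resample_contour(points, n=64):
    """Resample a closed contour to n points equally spaced by arc length."""
    pts = np.vstack([points, points[:1]])                    # close the contour
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, cum[-1], n, endpoint=False)
    return np.stack([np.interp(t, cum, pts[:, 0]),
                     np.interp(t, cum, pts[:, 1])], axis=1)

def mean_contour(contours, n=64):
    """Unconstrained mean shape: average of resampled contours, each aligned to the
    first contour by the cyclic shift that minimizes squared point distances."""
    ref = resample_contour(contours[0], n)
    acc = ref.copy()
    for c in contours[1:]:
        r = resample_contour(c, n)
        shifts = [np.sum((np.roll(r, s, axis=0) - ref) ** 2) for s in range(n)]
        acc += np.roll(r, int(np.argmin(shifts)), axis=0)
    return acc / len(contours)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
wobbly = square + 0.05 * np.random.default_rng(0).standard_normal((4, 2))
print(mean_contour([square, wobbly]).shape)                  # (64, 2)
```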


Subjects
Algorithms; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Paintings; Pattern Recognition, Automated/methods
11.
IEEE Trans Image Process; 15(2): 342-53, 2006 Feb.
Article in English | MEDLINE | ID: mdl-16479804

ABSTRACT

We use an optimization technique to accurately locate a distorted grid structure in a microarray image. By assuming that the spot centers deviate smoothly from a checkerboard grid structure, we show that gridding the spot centers can be formulated as a constrained optimization problem, with the constraints placed on the variations of the transform parameters. We demonstrate the accuracy of our algorithm on two sets of microarray images. The first set consists of images from the Stanford Microarray Database, for which we compare our centers with those annotated in the database; the second consists of oligonucleotide images, for which we compare our results with those obtained by GenePix Pro 5.0. All experiments were performed fully automatically.
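
As a much-simplified illustration of grid fitting, the sketch below recovers a single global affine grid from noisy spot-centre estimates by least squares; the paper goes further by allowing smooth local deviations under constrained parameter variation. All names and the synthetic data are assumptions.

```python
import numpy as np

def fit_affine_grid(row_col, centers):
    """Least-squares affine grid fit: find A (2x2) and t (2,) such that
    centre ~= A @ [row, col] + t for every spot."""
    n = len(row_col)
    X = np.hstack([row_col, np.ones((n, 1))])                  # [row, col, 1]
    params, *_ = np.linalg.lstsq(X, centers, rcond=None)       # shape (3, 2)
    return params[:2].T, params[2]                             # A, t

# Synthetic 4x5 grid with 20-pixel spacing, slight rotation, and jitter.
rng = np.random.default_rng(0)
rc = np.array([(r, c) for r in range(4) for c in range(5)], float)
theta = np.deg2rad(2.0)
A_true = 20.0 * np.array([[np.cos(theta), -np.sin(theta)],
                          [np.sin(theta),  np.cos(theta)]])
centers = rc @ A_true.T + np.array([100.0, 50.0]) + rng.normal(0, 0.5, (20, 2))
A, t = fit_affine_grid(rc, centers)
print(np.round(A, 2), np.round(t, 1))
```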


Subjects
Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; In Situ Hybridization, Fluorescence/methods; Microscopy, Fluorescence/methods; Oligonucleotide Array Sequence Analysis/methods; Pattern Recognition, Automated/methods; Algorithms; Artificial Intelligence; Information Storage and Retrieval/methods; Reproducibility of Results; Sensitivity and Specificity
12.
IEEE Trans Image Process; 13(7): 952-9, 2004 Jul.
Article in English | MEDLINE | ID: mdl-15648861

ABSTRACT

Image mosaicing, the act of combining two or more images, is used in many computer vision, image processing, and computer graphics applications. It aims to combine images such that no obstructive boundaries appear around the overlapped regions, and to create a mosaic that exhibits as little distortion as possible relative to the original images. In the proposed technique, the to-be-combined images are first projected into wavelet subspaces, and the images projected into the same wavelet subspace are then blended. Our blending function is derived from an energy minimization model that balances the smoothness around the overlapped region against the fidelity of the blended image to the original images. Experimental results and subjective comparisons with other methods are given.
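
The mechanics of blending in wavelet subspaces can be sketched with PyWavelets: decompose both images and a transition mask, blend the coefficients level by level, and reconstruct. The paper derives its blending function from an energy-minimization model; this sketch substitutes a fixed feathered mask, and every function name and parameter is an assumption.

```python
import numpy as np
import pywt
from scipy.ndimage import gaussian_filter

def _mask_for(mask, shape, sigma):
    """Downsample the {0,1} seam mask to a subband's shape and feather it."""
    ys = np.linspace(0, mask.shape[0] - 1, shape[0]).astype(int)
    xs = np.linspace(0, mask.shape[1] - 1, shape[1]).astype(int)
    return gaussian_filter(mask[np.ix_(ys, xs)].astype(float), sigma)

def wavelet_blend(img_a, img_b, mask, wavelet="db2", levels=3):
    """Toy multiresolution mosaicing: blend wavelet coefficients of two images
    with a mask that is smoothed more heavily at coarser levels."""
    ca = pywt.wavedec2(img_a, wavelet, level=levels)
    cb = pywt.wavedec2(img_b, wavelet, level=levels)
    blended = []
    for li, (a, b) in enumerate(zip(ca, cb)):
        if li == 0:                                   # approximation band
            m = _mask_for(mask, a.shape, sigma=2.0 * levels)
            blended.append(m * a + (1 - m) * b)
        else:                                         # detail bands at this level
            bands = []
            for ba, bb in zip(a, b):
                m = _mask_for(mask, ba.shape, sigma=2.0 * (levels - li + 1))
                bands.append(m * ba + (1 - m) * bb)
            blended.append(tuple(bands))
    return pywt.waverec2(blended, wavelet)

a = np.tile(np.linspace(0, 1, 64), (64, 1))           # stand-in "image A"
b = a.T                                               # stand-in "image B"
mask = np.zeros((64, 64)); mask[:, :32] = 1.0         # take A on the left half
print(wavelet_blend(a, b, mask).shape)
```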


Subjects
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Computer Graphics; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted; User-Computer Interface
13.
IEEE Trans Image Process; 11(7): 771-82, 2002.
Article in English | MEDLINE | ID: mdl-18244673

ABSTRACT

Watermarking has emerged as an important tool for content tracing, authentication, and data hiding in multimedia applications. We propose a watermarking strategy in which the watermark of a host is selected from the robust features of estimated forged images of the host, where the forged images are obtained from Monte Carlo simulations of potential pirate attacks on the host image. Applying an optimization technique to the second-order statistics of the features of the forged images yields two orthogonal subspaces: one characterizes most of the variation introduced by modifications of the host, and the watermark is embedded in the other, which most potential pirate attacks leave untouched. The embedded watermark is therefore robust. Our method uses the same framework for both reference-based and blind watermark detection, and we demonstrate its performance under various levels of attack.
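
In the same spirit, here is a small sketch of estimating an attack-robust embedding direction from Monte Carlo simulated attacks: collect feature perturbations, take the eigenvector of their covariance with the smallest eigenvalue, and embed along it. The feature vector, the toy attack, and all function names are hypothetical stand-ins, not the paper's feature extraction or detection framework.

```python
import numpy as np

def attack_robust_direction(feature, simulate_attack, n_sims=200, seed=0):
    """Direction in feature space least perturbed by simulated attacks:
    the smallest-eigenvalue eigenvector of the perturbation covariance."""
    rng = np.random.default_rng(seed)
    diffs = np.array([simulate_attack(feature, rng) - feature for _ in range(n_sims)])
    _, eigvec = np.linalg.eigh(np.cov(diffs, rowvar=False))   # ascending eigenvalues
    return eigvec[:, 0]

def embed(feature, direction, strength=0.5):
    """Additive embedding along the attack-robust direction."""
    return feature + strength * direction

def toy_attack(f, rng):
    """Toy attack: heavy noise on the first half of the features, almost none on the rest."""
    noisy = f.copy()
    noisy[:8] += 0.3 * rng.standard_normal(8)
    noisy[8:] += 0.02 * rng.standard_normal(8)
    return noisy

host = np.random.default_rng(1).standard_normal(16)           # stand-in feature vector
d = attack_robust_direction(host, toy_attack)
marked = embed(host, d)
# Projection onto d rises by the embedding strength, and attacks barely disturb d.
print("projection before:", round(float(host @ d), 3),
      "after embedding:", round(float(marked @ d), 3))
```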
