Results 1 - 20 of 35
1.
Entropy (Basel) ; 23(4)2021 Apr 09.
Article in English | MEDLINE | ID: mdl-33918984

ABSTRACT

We sincerely apologize for the inconvenience of updating the authorship [...].

2.
Entropy (Basel) ; 22(4)2020 Apr 19.
Article in English | MEDLINE | ID: mdl-33286239

ABSTRACT

Alzheimer's disease has been extensively studied using undirected graphs to represent the correlations of BOLD signals in different anatomical regions through functional magnetic resonance imaging (fMRI). However, there has been relatively little analysis of this kind of data using directed graphs, which offer the potential to capture asymmetries in the interactions between different anatomical brain regions. Detecting these asymmetries is relevant to identifying the disease at an early stage. For this reason, in this paper, we analyze data extracted from fMRI images using the net4Lap algorithm to infer a directed graph from the available BOLD signals, and then seek to determine asymmetries between the left and right hemispheres of the brain using a directed version of the Return Random Walk (RRW). Experimental evaluation of this method reveals that it leads to the identification of anatomical brain regions known from clinical studies to be implicated in the early development of Alzheimer's disease.

3.
Entropy (Basel) ; 20(10)2018 Oct 02.
Article in English | MEDLINE | ID: mdl-33265848

ABSTRACT

The problem of how to represent networks and, from this representation, derive succinct characterizations of network structure (in particular, of how this structure evolves with time) is of central importance in complex network analysis. This paper tackles the problem by proposing a thermodynamic framework for representing the structure of time-varying complex networks. More importantly, such a framework provides a powerful tool for better understanding network time evolution. Specifically, the method uses a recently developed approximation of the network von Neumann entropy and interprets it as the thermodynamic entropy for networks. With an appropriately defined internal energy in hand, the temperature between networks at consecutive time points can be readily derived, computed as the ratio of the change in entropy to the change in energy. One of the main advantages of the proposed method is that all of these thermodynamic variables can be computed in terms of simple network statistics, such as network size and degree statistics. To demonstrate the usefulness of the thermodynamic framework, the paper uses real-world network data extracted from time-evolving complex systems in the financial and biological domains. The experimental results illustrate that critical events, including abrupt changes and distinct periods in the evolution of complex networks, can be effectively characterized.
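The core computation described above is simple enough to sketch. Below is a minimal illustration of the quadratic approximation of the network von Neumann entropy and the resulting inter-snapshot temperature; taking the edge count as the internal energy is an assumption made here for illustration, not necessarily the definition used in the paper:

```python
def vn_entropy(n, edges):
    # Quadratic approximation of the von Neumann entropy of an
    # undirected graph with n nodes and edge list `edges`:
    #   S ~= 1 - 1/n - (1/n^2) * sum_{(u,v) in E} 1 / (d_u * d_v)
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return 1 - 1/n - sum(1 / (deg[u] * deg[v]) for u, v in edges) / n**2

def temperature(snapshot_a, snapshot_b):
    # Temperature between consecutive snapshots: the ratio of the change
    # in entropy to the change in internal energy. Using the edge count
    # as the internal energy is an illustrative assumption.
    (na, ea), (nb, eb) = snapshot_a, snapshot_b
    dS = vn_entropy(nb, eb) - vn_entropy(na, ea)
    dU = len(eb) - len(ea)
    return dS / dU if dU else float("inf")

triangle = (3, [(0, 1), (1, 2), (0, 2)])
square = (4, [(0, 1), (1, 2), (2, 3), (3, 0)])
print(vn_entropy(*triangle))  # 1 - 1/3 - 1/12 ~= 0.5833
print(temperature(triangle, square))
```

Note that both quantities need only the network size and degree statistics, which is the advantage the abstract emphasizes.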

4.
IEEE Trans Neural Netw Learn Syst ; 34(4): 1651-1665, 2023 Apr.
Article in English | MEDLINE | ID: mdl-33048762

ABSTRACT

The structure of networks can be efficiently represented using motifs, which are those subgraphs that recur most frequently. One route to understanding the motif structure of a network is to study the distribution of subgraphs using statistical mechanics. In this article, we address the use of motifs as network primitives using the cluster expansion from statistical physics. By mapping the network motifs to clusters in the gas model, we derive the partition function for a network, and this allows us to calculate global thermodynamic quantities, such as energy and entropy. We present analytical expressions for the number of certain types of motifs, and compute their associated entropy. We conduct numerical experiments for synthetic and real-world data sets and evaluate the qualitative and quantitative characterizations of the motif entropy derived from the partition function. We find that the motif entropy for real-world networks, such as financial stock market networks, is sensitive to the variance in network structure. This is in line with recent evidence that network motifs can be regarded as basic elements with well-defined information-processing functions.

5.
IEEE Trans Neural Netw Learn Syst ; 34(4): 1808-1822, 2023 Apr.
Article in English | MEDLINE | ID: mdl-32692680

ABSTRACT

Network representations are powerful tools for modeling dynamic time-varying financial complex systems consisting of multiple co-evolving financial time series, e.g., stock prices. In this work, we develop a novel framework to compute a kernel-based similarity measure between dynamic time-varying financial networks. Specifically, we explore whether the proposed kernel, used with standard kernel machines, can be employed to understand the structural evolution of the financial networks over time. For a set of time-varying financial networks, with each vertex representing the individual time series of a different stock and each edge between a pair of time series representing the absolute value of their Pearson correlation, our starting point is to compute the commute time (CT) matrix associated with the weighted adjacency matrix of the network structures, where each element of the matrix can be seen as an enhanced correlation value between pairwise stocks. For each network, we show how the CT matrix allows us to identify a reliable set of dominant correlated time series, as well as an associated dominant probability distribution of the stocks belonging to this set. Furthermore, we represent each original network as a discrete dominant Shannon entropy time series computed from the dominant probability distribution. With the dominant entropy time series for each pair of financial networks to hand, we develop an entropic dynamic time warping kernel, through the classical dynamic time warping framework, for analyzing the financial time-varying networks. We show that the proposed kernel bridges the gap between graph kernels and the classical dynamic time warping framework for multiple financial time series analysis. Experiments on time-varying networks extracted from the New York Stock Exchange (NYSE) database demonstrate the effectiveness of the proposed method.
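The final kernel construction can be sketched with the classical dynamic time warping recursion applied to two dominant-entropy time series; the squared-difference local cost and the `gamma` scale of the exponential are illustrative assumptions:

```python
import math

def dtw(a, b):
    # Classical dynamic time warping distance between two sequences,
    # using a squared-difference local cost and the standard
    # (match / insert / delete) recursion.
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = (a[i - 1] - b[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def entropic_dtw_kernel(entropy_a, entropy_b, gamma=1.0):
    # Kernel between the dominant Shannon entropy time series of two
    # financial networks: negative exponential of the DTW distance.
    return math.exp(-gamma * dtw(entropy_a, entropy_b))

h1 = [0.21, 0.55, 0.92, 0.40]
h2 = [0.20, 0.60, 0.85, 0.52]
print(entropic_dtw_kernel(h1, h2))
```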

6.
IEEE Trans Cybern ; 53(8): 5226-5239, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35976829

ABSTRACT

Recently, deep neural networks have achieved promising performance for filling in large missing regions in image inpainting tasks. These methods have usually adopted a standard convolutional architecture over the corrupted image, often leading to meaningless content such as color discrepancy, blur, and other artifacts. Moreover, most inpainting approaches cannot handle the case of a large contiguous missing area well. To address these problems, we propose a generic inpainting framework capable of handling incomplete images with both contiguous and discontiguous large missing areas. We pose this in an adversarial manner, deploying regionwise operations in both the generator and discriminator to separately handle the different types of regions, namely, existing regions and missing ones. Moreover, a correlation loss is introduced to capture the nonlocal correlations between different patches, and thus guide the generator to obtain more information during inference. With the help of this regionwise generative adversarial mechanism, our framework can restore semantically reasonable and visually realistic images for both discontiguous and contiguous large missing areas. Extensive experiments on three widely used datasets for image inpainting have been conducted, and both qualitative and quantitative results demonstrate that the proposed model significantly outperforms the state-of-the-art approaches on large contiguous and discontiguous missing areas.

7.
Article in English | MEDLINE | ID: mdl-35167481

ABSTRACT

Graph neural networks (GNNs) are recently proposed neural network structures for the processing of graph-structured data. Due to their neighbor aggregation strategy, existing GNNs focus on capturing node-level information and neglect high-level information. Existing GNNs therefore suffer from representational limitations caused by the local permutation invariance (LPI) problem. To overcome these limitations and enrich the features captured by GNNs, we propose a novel GNN framework, referred to as the two-level GNN (TL-GNN), which merges subgraph-level information with node-level information. Moreover, we provide a mathematical analysis of the LPI problem, which demonstrates that subgraph-level information is beneficial to overcoming the problems associated with LPI. A subgraph counting method based on dynamic programming is also proposed, with a time complexity of O(n³), where n is the number of nodes of a graph. Experiments show that TL-GNN outperforms existing GNNs and achieves state-of-the-art performance.

8.
IEEE Trans Pattern Anal Mach Intell ; 44(2): 783-798, 2022 Feb.
Article in English | MEDLINE | ID: mdl-32750832

ABSTRACT

In this paper, we develop a novel backtrackless aligned-spatial graph convolutional network (BASGCN) model to learn effective features for graph classification. Our idea is to transform arbitrary-sized graphs into fixed-sized backtrackless aligned grid structures and define a new spatial graph convolution operation associated with the grid structures. We show that the proposed BASGCN model not only reduces the problems of information loss and imprecise information representation arising in existing spatially-based graph convolutional network (GCN) models, but also bridges the theoretical gap between traditional convolutional neural network (CNN) models and spatially-based GCN models. Furthermore, the proposed BASGCN model can both adaptively discriminate the importance between specified vertices during the convolution process and reduce the notorious tottering problem of existing spatially-based GCNs related to the Weisfeiler-Lehman algorithm, explaining the effectiveness of the proposed model. Experiments on standard graph datasets demonstrate the effectiveness of the proposed model.

9.
IEEE Trans Pattern Anal Mach Intell ; 44(9): 5747-5760, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33956625

ABSTRACT

In this paper we present methods for estimating shape from polarisation and shading information, i.e. photo-polarimetric shape estimation, under varying, but unknown, illumination, i.e. in an uncalibrated scenario. We propose several alternative photo-polarimetric constraints that depend upon the partial derivatives of the surface and show how to express them in a unified system of partial differential equations of which previous work is a special case. By careful combination and manipulation of the constraints, we show how to eliminate non-linearities such that a discrete version of the problem can be solved using linear least squares. We derive a minimal, combinatorial approach for two-source illumination estimation, which we use with RANSAC for robust light direction and intensity estimation. We also introduce a new method for estimating a polarisation image from multichannel data and provide methods for estimating albedo and refractive index. We evaluate lighting, shape, albedo and refractive index estimation methods on both synthetic and real-world data, showing improvements over the existing state of the art.

10.
Article in English | MEDLINE | ID: mdl-34890333

ABSTRACT

Graph convolutional networks (GCNs) are powerful tools for graph structure data analysis. One main drawback arising in most existing GCN models is the oversmoothing problem, i.e., the vertex features abstracted from the existing graph convolution operation tend to be indistinguishable if the GCN model has many convolutional layers (e.g., more than two layers). To address this problem, in this article, we propose a family of aligned vertex convolutional network (AVCN) models that focus on learning multiscale features from local-level vertices for graph classification. This is done by adopting a transitive vertex alignment algorithm to transform arbitrary-sized graphs into fixed-size grid structures. Furthermore, we define a new aligned vertex convolution operation that can effectively learn multiscale vertex characteristics by gradually aggregating local-level neighboring aligned vertices residing on the original grid structures into a new packed aligned vertex. With the new vertex convolution operation to hand, we propose two architectures for the AVCN models to extract different hierarchical multiscale vertex feature representations for graph classification. We show that the proposed models can avoid iteratively propagating redundant information between specific neighboring vertices, restricting the notorious oversmoothing problem arising in most spatial-based GCN models. Experimental evaluations on benchmark datasets demonstrate the effectiveness of the proposed models.

11.
IEEE Trans Cybern ; 50(3): 1264-1277, 2020 Mar.
Article in English | MEDLINE | ID: mdl-31295131

ABSTRACT

We develop a novel method for measuring the similarity between complete weighted graphs, which are probed by means of discrete-time quantum walks. Directly probing complete graphs using discrete-time quantum walks is intractable due to the cost of simulating the quantum walk. We overcome this problem by extracting a commute time minimum spanning tree from the complete weighted graph. The spanning tree is probed by a discrete-time quantum walk which is initialized using a weighted version of the Perron-Frobenius operator. This naturally encapsulates the edge weight information for the spanning tree extracted from the original graph. For each pair of complete weighted graphs to be compared, we simulate a discrete-time quantum walk on each of the corresponding commute time minimum spanning trees and then compute the associated density matrices for the quantum walks. The probability of the walk visiting each edge of the spanning tree is given by the diagonal elements of the density matrices. The similarity between each pair of graphs is then computed using either: 1) the inner product or 2) the negative exponential of the Jensen-Shannon divergence between the probability distributions. We show that in both cases the resulting similarity measure is positive definite and, therefore, corresponds to a kernel on the graphs. We perform a series of experiments on publicly available graph datasets from a variety of different domains, together with time-varying financial networks extracted from data for the New York Stock Exchange. Our experiments demonstrate the effectiveness of the proposed similarity measures.
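As a small sketch of the second similarity measure, the negative exponential of the Jensen-Shannon divergence between two edge-visiting probability distributions (here plain Python lists stand in for the diagonals of the density matrices):

```python
import math

def shannon_entropy(p):
    # Shannon entropy (in bits) of a discrete distribution.
    return -sum(x * math.log2(x) for x in p if x > 0)

def js_divergence(p, q):
    # Jensen-Shannon divergence: entropy of the mixture minus the
    # mean entropy of the two distributions.
    m = [(a + b) / 2 for a, b in zip(p, q)]
    return shannon_entropy(m) - 0.5 * (shannon_entropy(p) + shannon_entropy(q))

def jsd_similarity(p, q):
    # Negative exponential of the JSD, giving a similarity in (0, 1].
    return math.exp(-js_divergence(p, q))

p = [0.5, 0.3, 0.2]
q = [0.1, 0.4, 0.5]
print(jsd_similarity(p, q))
```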

12.
Cereb Cortex ; 18(2): 364-70, 2008 Feb.
Article in English | MEDLINE | ID: mdl-17507454

ABSTRACT

The aim of this study was to determine the extent to which the neural representation of faces in visual cortex is viewpoint dependent or viewpoint invariant. Magnetoencephalography was used to measure evoked responses to faces during an adaptation paradigm. Using familiar and unfamiliar faces, we compared the amplitude of the M170 response to repeated images of the same face with images of different faces. We found a reduction in the M170 amplitude to repeated presentations of the same face image compared with images of different faces when shown from the same viewpoint. To establish whether this adaptation to the identity of a face was invariant to changes in viewpoint, we varied the viewing angle of the face within a block. We found that the reduction in response was no longer evident when images of the same face were shown from different viewpoints. This viewpoint-dependent pattern of results was the same for both familiar and unfamiliar faces. These results imply either that the face-selective M170 response reflects an early stage of face processing or that the computations underlying face recognition depend on a viewpoint-dependent neuronal representation.


Subjects
Cognition/physiology, Visual Evoked Potentials/physiology, Face, Memory/physiology, Visual Pattern Recognition/physiology, Recognition (Psychology)/physiology, Visual Cortex/physiology, Adult, Cues (Psychology), Female, Humans, Male, Photic Stimulation/methods
13.
IEEE Trans Image Process ; 28(5): 2187-2199, 2019 May.
Article in English | MEDLINE | ID: mdl-30507505

ABSTRACT

Facial pose variation is one of the major factors making face recognition (FR) a challenging task. One popular solution is to convert non-frontal faces to frontal ones on which FR is performed. Rotating faces causes facial pixel value changes. Therefore, existing CNN-based methods learn to synthesize frontal faces in color space. However, this learning problem in a color space is highly non-linear, causing the synthetic frontal faces to lose fine facial textures. In this paper, we take the view that the nonfrontal-frontal pixel changes are essentially caused by geometric transformations (rotation, translation, and so on) in space. Therefore, we aim to learn the nonfrontal-frontal facial conversion in the spatial domain rather than the color domain to ease the learning task. To this end, we propose an appearance-flow-based face frontalization convolutional neural network (A3F-CNN). Specifically, A3F-CNN learns to establish the dense correspondence between the non-frontal and frontal faces. Once the correspondence is built, frontal faces are synthesized by explicitly "moving" pixels from the non-frontal one. In this way, the synthetic frontal faces can preserve fine facial textures. To improve the convergence of training, an appearance-flow-guided learning strategy is proposed. In addition, generative adversarial network loss is applied to achieve a more photorealistic face, and a face mirroring method is introduced to handle the self-occlusion problem. Extensive experiments are conducted on face synthesis and pose invariant FR. Results show that our method can synthesize more photorealistic faces than the existing methods in both the controlled and uncontrolled lighting environments. Moreover, we achieve a very competitive FR performance on the Multi-PIE, LFW and IJB-A databases.

14.
IEEE Trans Pattern Anal Mach Intell ; 29(11): 1873-90, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17848771

ABSTRACT

This paper exploits the properties of the commute time between nodes of a graph for the purposes of clustering and embedding, and explores its applications to image segmentation and multi-body motion tracking. Our starting point is the lazy random walk on the graph, which is determined by the heat kernel of the graph and can be computed from the spectrum of the graph Laplacian. We characterize the random walk using the commute time (i.e., the expected time taken for a random walk to travel between two nodes and return) and show how this quantity may be computed from the Laplacian spectrum using the discrete Green's function. Our motivation is that the commute time can be anticipated to be a more robust measure of the proximity of data than the raw proximity matrix. In this paper, we explore two applications of the commute time. The first is to develop a method for image segmentation using the eigenvector corresponding to the smallest eigenvalue of the commute time matrix. We show that our commute time segmentation method has the property of enhancing the intra-group coherence while weakening inter-group coherence and is superior to the normalized cut. The second application is to develop a robust multi-body motion tracking method using an embedding based on the commute time. Our embedding procedure preserves commute time, and is closely akin to kernel PCA, the Laplacian eigenmap and the diffusion map. We illustrate the results both on synthetic image sequences and real-world video sequences, and compare our results with several alternative methods.
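A minimal sketch of the commute time computation, using the Moore-Penrose pseudoinverse of the graph Laplacian in place of an explicit Green's function:

```python
import numpy as np

def commute_time_matrix(A):
    # Commute times for all node pairs of a weighted undirected graph
    # with adjacency matrix A:
    #   CT(u, v) = vol(G) * (L+[u, u] + L+[v, v] - 2 * L+[u, v])
    # where L+ is the pseudoinverse of the Laplacian L = D - A and
    # vol(G) is the total degree.
    d = A.sum(axis=1)
    L = np.diag(d) - A
    Lp = np.linalg.pinv(L)
    g = np.diag(Lp)
    return d.sum() * (g[:, None] + g[None, :] - 2 * Lp)

# Path graph 0 - 1 - 2: CT(0, 1) = 4, CT(0, 2) = 8
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
print(commute_time_matrix(A))
```

The clustering and embedding steps described in the abstract then operate on this matrix or its spectrum.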


Subjects
Algorithms, Artificial Intelligence, Cluster Analysis, Computer-Assisted Image Interpretation/methods, Information Storage and Retrieval/methods, Automated Pattern Recognition/methods, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
15.
IEEE Trans Pattern Anal Mach Intell ; 29(11): 2001-17, 2007 Nov.
Article in English | MEDLINE | ID: mdl-17848780

ABSTRACT

This paper presents a novel method for 3D surface reconstruction that uses polarization and shading information from two views. The method relies on polarization data acquired using a standard digital camera and a linear polarizer. Fresnel theory is used to process the raw images and to obtain initial estimates of surface normals, assuming that the reflection type is diffuse. Based on this idea, the paper presents two novel contributions to the problem of surface reconstruction. The first is a technique to enhance the surface normal estimates by incorporating shading information into the method. This is done using robust statistics to estimate how the measured pixel brightnesses depend on the surface orientation. This gives an estimate of the object material reflectance function, which is used to refine the estimates of the surface normals. The second contribution is to use the refined estimates to establish correspondence between two views of an object. To do this, a set of patches are extracted from each view and are aligned by minimizing an energy functional based on the surface normal estimates and local topographic properties. The optimum alignment parameters for different patch pairs are then used to establish stereo correspondence. This process results in an unambiguous field of surface normals, which can be integrated to recover the surface depth. Our technique is most suited to smooth, non-metallic surfaces. It complements existing stereo algorithms since it does not require salient surface features to obtain correspondences. An extensive set of experiments, yielding reconstructed objects and reflectance functions, are presented and compared to ground truth.


Subjects
Algorithms, Artificial Intelligence, Computer-Assisted Image Interpretation/methods, Lighting/methods, Automated Pattern Recognition/methods, Refractometry/methods, Image Enhancement/methods, Reproducibility of Results, Sensitivity and Specificity
16.
IEEE Trans Image Process ; 16(1): 7-21, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17283761

ABSTRACT

This paper offers two new directions to shape-from-shading, namely the use of the heat equation to smooth the field of surface normals and the recovery of surface height using a low-dimensional embedding. Turning our attention to the first of these contributions, we pose the problem of surface normal recovery as that of solving the steady state heat equation subject to the hard constraint that Lambert's law is satisfied. We perform our analysis on a plane perpendicular to the light source direction, where the z component of the surface normal is equal to the normalized image brightness. The x - y or azimuthal component of the surface normal is found by computing the gradient of a scalar field that evolves with time subject to the heat equation. We solve the heat equation for the scalar potential and, hence, recover the azimuthal component of the surface normal from the average image brightness, making use of a simple finite difference method. The second contribution is to pose the problem of recovering the surface height function as that of embedding the field of surface normals on a manifold so as to preserve the pattern of surface height differences and the lattice footprint of the surface normals. We experiment with the resulting method on a variety of real-world image data, where it produces qualitatively good reconstructed surfaces.
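The evolution of the scalar field can be sketched with an explicit finite-difference step for the heat equation; the periodic boundary handling and the time step (chosen below the 0.25 stability limit for a unit grid) are illustrative choices:

```python
import numpy as np

def heat_step(u, dt=0.2):
    # One explicit finite-difference step of the heat equation
    # u_t = u_xx + u_yy, using the 5-point Laplacian stencil with
    # periodic boundaries; iterating approaches the steady state.
    lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0)
           + np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4 * u)
    return u + dt * lap

# Smooth a noisy scalar field; its gradient would then supply the
# azimuthal component of the surface normal, as in the abstract.
rng = np.random.default_rng(0)
u = rng.standard_normal((32, 32))
for _ in range(50):
    u = heat_step(u)
print(u.var())  # variance shrinks as the field smooths
```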


Subjects
Algorithms, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Lighting/methods, Automated Pattern Recognition/methods, Photometry/methods, Information Storage and Retrieval/methods, Thermodynamics
17.
IEEE Trans Image Process ; 16(4): 1139-51, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17405444

ABSTRACT

We focus on the problem of developing a coupled statistical model that can be used to recover facial shape from brightness images of faces. We study three alternative representations for facial shape. These are the surface height function, the surface gradient, and a Fourier basis representation. We jointly capture variations in intensity and the surface shape representations using a coupled statistical model. The model is constructed by performing principal components analysis on sets of parameters describing the contents of the intensity images and the facial shape representations. By fitting the coupled model to intensity data, facial shape is implicitly recovered from the shape parameters. Experiments show that the coupled model is able to generate accurate shape from out-of-training-sample intensity images.
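A toy sketch of the coupled-model idea: principal components analysis on concatenated intensity and shape parameter vectors, with shape recovered implicitly by fitting only the intensity block. All names and the least-squares fitting step are illustrative assumptions:

```python
import numpy as np

def fit_coupled_model(intensity_params, shape_params, k):
    # PCA on the concatenation of intensity and shape parameter
    # vectors (one row per training face).
    X = np.hstack([intensity_params, shape_params])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]  # mean vector and k principal components

def recover_shape(intensity, mean, components, n_int):
    # Fit the coupled model to intensity data alone (least squares on
    # the intensity block of each component), then read off the shape
    # block that the fitted parameters imply.
    Ci = components[:, :n_int]
    b, *_ = np.linalg.lstsq(Ci.T, intensity - mean[:n_int], rcond=None)
    return (mean + b @ components)[n_int:]

# Toy data in which shape is a fixed linear function of intensity,
# so the coupled model can recover it exactly.
rng = np.random.default_rng(0)
I = rng.standard_normal((20, 4))
S = 2.0 * I
mean, comps = fit_coupled_model(I, S, k=4)
print(recover_shape(I[0], mean, comps, n_int=4))  # ~= S[0]
```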


Subjects
Face/anatomy & histology, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Biological Models, Automated Pattern Recognition/methods, Photometry/methods, Algorithms, Artificial Intelligence, Biometry/methods, Computer Simulation, Humans, Lighting/methods, Statistical Models, Reproducibility of Results, Sensitivity and Specificity
18.
IEEE Trans Pattern Anal Mach Intell ; 28(6): 954-67, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16724589

ABSTRACT

This paper poses the problem of tree-clustering as that of fitting a mixture of tree unions to a set of sample trees. The tree-unions are structures from which the individual data samples belonging to a cluster can be obtained by edit operations. The distribution of observed tree nodes in each cluster sample is assumed to be governed by a Bernoulli distribution. The clustering method is designed to operate when the correspondences between nodes are unknown and must be inferred as part of the learning process. We adopt a minimum description length approach to the problem of fitting the mixture model to data. We make maximum-likelihood estimates of the Bernoulli parameters. The tree-unions and the mixing proportions are sought so as to minimize the description length criterion. This is the sum of the negative logarithm of the Bernoulli distribution, and a message-length criterion that encodes both the complexity of the union-trees and the number of mixture components. We locate node correspondences by minimizing the edit distance with the current tree unions, and show that the edit distance is linked to the description length criterion. The method can be applied to both unweighted and weighted trees. We illustrate the utility of the resulting algorithm on the problem of classifying 2D shapes using a shock graph representation.


Subjects
Algorithms, Artificial Intelligence, Computer-Assisted Image Interpretation/methods, Information Storage and Retrieval/methods, Automated Pattern Recognition/methods
19.
IEEE Trans Pattern Anal Mach Intell ; 28(12): 1914-30, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17108367

ABSTRACT

In this paper, we show how a statistical model of facial shape can be embedded within a shape-from-shading algorithm. We describe how facial shape can be captured using a statistical model of variations in surface normal direction. To construct this model, we make use of the azimuthal equidistant projection to map the distribution of surface normals from the polar representation on a unit sphere to Cartesian points on a local tangent plane. The distribution of surface normal directions is captured using the covariance matrix for the projected point positions. The eigenvectors of the covariance matrix define the modes of shape-variation in the fields of transformed surface normals. We show how this model can be trained using surface normal data acquired from range images and how to fit the model to intensity images of faces using constraints on the surface normal direction provided by Lambert's law. We demonstrate that the combination of a global statistical constraint and local irradiance constraint yields an efficient and accurate approach to facial shape recovery and is capable of recovering fine local surface details. We assess the accuracy of the technique on a variety of images with ground truth and real-world images.


Subjects
Artificial Intelligence, Biometry/methods, Face/anatomy & histology, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Biological Models, Automated Pattern Recognition/methods, Algorithms, Computer Simulation, Humans, Image Enhancement/methods, Information Storage and Retrieval/methods, Statistical Models, Reproducibility of Results, Sensitivity and Specificity
20.
IEEE Trans Image Process ; 15(6): 1653-64, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16764289

ABSTRACT

When unpolarized light is reflected from a smooth dielectric surface, it becomes partially polarized. This is due to the orientation of dipoles induced in the reflecting medium and applies to both specular and diffuse reflection. This paper is concerned with exploiting polarization by surface reflection, using images of smooth dielectric objects, to recover surface normals and, hence, height. This paper presents the underlying physics of polarization by reflection, starting with the Fresnel equations. These equations are used to interpret images taken with a linear polarizer and digital camera, revealing the shape of the objects. Experimental results are presented that illustrate that the technique is accurate near object limbs, as the theory predicts, with less precise, but still useful, results elsewhere. A detailed analysis of the accuracy of the technique for a variety of materials is presented. A method for estimating refractive indices using a laser and linear polarizer is also given.
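The dependence of polarization on viewing geometry that the technique exploits can be sketched directly from the Fresnel equations; the formula below is the standard degree-of-polarization expression for diffuse reflection as a function of zenith angle and refractive index (the sample angles and index value are illustrative):

```python
import math

def diffuse_polarization_degree(theta, n):
    # Degree of polarization of diffusely reflected light at zenith
    # angle theta (radians) from a dielectric of refractive index n,
    # derived from the Fresnel transmission coefficients.
    s2 = math.sin(theta) ** 2
    num = (n - 1 / n) ** 2 * s2
    den = (2 + 2 * n ** 2 - (n + 1 / n) ** 2 * s2
           + 4 * math.cos(theta) * math.sqrt(n ** 2 - s2))
    return num / den

# Polarization grows with zenith angle, which is why the method is
# most accurate near object limbs (theta -> 90 degrees).
for deg in (10, 40, 70, 85):
    print(deg, diffuse_polarization_degree(math.radians(deg), 1.5))
```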


Subjects
Algorithms, Image Enhancement/methods, Computer-Assisted Image Interpretation/methods, Three-Dimensional Imaging/methods, Refractometry/methods, Diffusion, Information Storage and Retrieval/methods, Surface Properties