Results 1 - 20 of 25
1.
Entropy (Basel) ; 25(10)2023 Sep 24.
Article in English | MEDLINE | ID: mdl-37895498

ABSTRACT

The Minimum Vertex Weighted Coloring (MinVWC) problem is an important generalization of the classic Minimum Vertex Coloring (MinVC) problem, which is NP-hard. Given a simple undirected graph G=(V,E), the MinVC problem is to find a coloring s.t. any pair of adjacent vertices are assigned different colors and the number of colors used is minimized. The MinVWC problem associates each vertex with a positive weight and defines the weight of a color to be the weight of its heaviest vertex; the goal is then to find a coloring that minimizes the sum of weights over all colors. Among various approaches, reduction is an effective one. It tries to obtain a subgraph whose optimal solutions can conveniently be extended into optimal ones for the whole graph, without costly branching. In this paper, we propose a reduction algorithm based on maximal clique enumeration. More specifically, our algorithm utilizes a certain proportion of maximal cliques and obtains lower bounds in order to perform reductions. It alternates between clique sampling and graph reductions and consists of three successive procedures: promising clique reductions, better bound reductions and post reductions. Experimental results show that our algorithm returns considerably smaller subgraphs for numerous large benchmark graphs, compared to the most recent method named RedLS. Also, we evaluate individual impacts and some practical properties of our algorithm. Furthermore, we prove a theorem indicating that, given sufficiently long run time, the reduction effects of our algorithm are equivalent to those of a counterpart which enumerates all maximal cliques in the whole graph.
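As a rough illustration of the objective and the clique-based lower bounds mentioned above, the following sketch (not the paper's algorithm; the toy graph and weights are made up) computes the MinVWC cost of a coloring and the lower bound implied by a single clique, whose vertices must all receive distinct colors.

```python
# Minimal MinVWC sketch: objective value of a coloring and a clique lower bound.
# The graph (a triangle a-b-c with a pendant vertex d attached to c) and the
# weights are illustrative assumptions, not from the paper.
def coloring_cost(coloring, weights):
    """Sum over color classes of the weight of the heaviest vertex in the class."""
    return sum(max(weights[v] for v in color_class) for color_class in coloring)

def clique_lower_bound(clique, weights):
    """Clique vertices must get distinct colors, so each contributes at least its
    own weight to some color class; their sum lower-bounds the optimum."""
    return sum(weights[v] for v in clique)

weights = {"a": 3, "b": 2, "c": 5, "d": 1}
coloring = [{"a", "d"}, {"b"}, {"c"}]                # a proper coloring of the toy graph
print(coloring_cost(coloring, weights))              # 3 + 2 + 5 = 10
print(clique_lower_bound({"a", "b", "c"}, weights))  # 10, so this coloring is optimal
```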

2.
Int J Comput Vis ; 112(3): 319-341, 2015 May 01.
Article in English | MEDLINE | ID: mdl-26052182

ABSTRACT

We consider the problem of finding maximum weight subgraphs (MWS) that satisfy hard constraints in a weighted graph. The constraints specify the graph nodes that must belong to the solution as well as mutual exclusions of graph nodes, i.e., pairs of nodes that cannot belong to the same solution. Our main contribution is a novel inference approach for solving this problem in a sequential Monte Carlo (SMC) sampling framework. Usually in an SMC framework there is a natural ordering of the states of the samples. The order typically depends on observations about the states or on the annealing setup used. In many applications (e.g., image jigsaw puzzle problems), all observations (e.g., puzzle pieces) are given at once and it is hard to define a natural ordering. Therefore, we relax the assumption of having ordered observations about states and propose a novel SMC algorithm for obtaining the maximum a posteriori estimate of a high-dimensional posterior distribution. This is achieved by exploring different orders of states and selecting the most informative permutations in each step of the sampling. Our experimental results demonstrate that the proposed inference framework significantly outperforms loopy belief propagation in solving the image jigsaw puzzle problem. In particular, our inference quadruples the accuracy of the puzzle assembly compared to that of loopy belief propagation.

3.
Article in English | MEDLINE | ID: mdl-38536700

ABSTRACT

The recent advances in compressing high-accuracy convolutional neural networks (CNNs) have witnessed remarkable progress in real-time object detection. To accelerate detection speed, lightweight detectors always have few convolution layers using a single-path backbone. A single-path architecture, however, involves continuous pooling and downsampling operations, always resulting in coarse and inaccurate feature maps that are disadvantageous for locating objects. On the other hand, due to limited network capacity, recent lightweight networks are often weak in representing large-scale visual data. To address these problems, we present a dual-path network, named DPNet, with a lightweight attention scheme for real-time object detection. The dual-path architecture enables us to extract high-level semantic features and low-level object details in parallel. Although DPNet has a nearly duplicated shape with respect to single-path detectors, the computational costs and model size are not significantly increased. To enhance representation capability, a lightweight self-correlation module (LSCM) is designed to capture global interactions, with only a small overhead in computation and network parameters. In the neck, LSCM is extended into a lightweight cross-correlation module (LCCM), capturing mutual dependencies among neighboring scale features. We have conducted exhaustive experiments on the MS COCO, Pascal VOC 2007, and ImageNet datasets. The experimental results demonstrate that DPNet achieves a state-of-the-art trade-off between detection accuracy and implementation efficiency. More specifically, DPNet achieves 31.3% AP on MS COCO test-dev, 82.7% mAP on the Pascal VOC 2007 test set, and 41.6% mAP on the ImageNet validation set, together with a model size of nearly 2.5M, 1.04 GFLOPs, and 164 and 196 frames per second (FPS) for [Formula: see text] input images across the three datasets.
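As a hedged illustration of the kind of global-interaction module the abstract describes, here is a minimal self-correlation block in PyTorch. It is only a generic non-local-attention sketch under assumed channel sizes, not DPNet's actual LSCM or LCCM.

```python
import torch
import torch.nn as nn

class SelfCorrelation(nn.Module):
    """Generic self-correlation block: every spatial position attends to all
    others through 1x1 projections (an illustrative stand-in, not LSCM itself)."""
    def __init__(self, channels, reduced=8):
        super().__init__()
        self.query = nn.Conv2d(channels, reduced, kernel_size=1)
        self.key = nn.Conv2d(channels, reduced, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)             # (b, hw, r)
        k = self.key(x).flatten(2)                                # (b, r, hw)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1) # (b, hw, hw)
        v = self.value(x).flatten(2).transpose(1, 2)              # (b, hw, c)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                            # residual connection

feat = torch.randn(1, 64, 20, 20)
print(SelfCorrelation(64)(feat).shape)                            # torch.Size([1, 64, 20, 20])
```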

4.
IEEE Trans Image Process ; 32: 2811-2826, 2023.
Article in English | MEDLINE | ID: mdl-37171922

ABSTRACT

Impressive advances in acquisition and sharing technologies have made the growth of multimedia collections and their applications almost unlimited. However, the opposite is true for the availability of labeled data, which is needed for supervised training, since such data is often expensive and time-consuming to obtain. While there is a pressing need for the development of effective retrieval and classification methods, the difficulties faced by supervised approaches highlight the relevance of methods capable of operating with few or no labeled data. In this work, we propose a novel manifold learning algorithm named Rank Flow Embedding (RFE) for unsupervised and semi-supervised scenarios. The proposed method is based on ideas recently exploited by manifold learning approaches, which include hypergraphs, Cartesian products, and connected components. The algorithm computes context-sensitive embeddings, which are refined following a rank-based processing flow while complementary contextual information is incorporated. The generated embeddings can be exploited for more effective unsupervised retrieval or semi-supervised classification based on Graph Convolutional Networks. Experiments were conducted on 10 different collections, considering various features, including those obtained with recent Convolutional Neural Network (CNN) and Vision Transformer (ViT) models. The results demonstrate the effectiveness of the proposed method on different tasks, namely unsupervised image retrieval, semi-supervised classification, and person Re-ID, and show that RFE is competitive with or superior to the state of the art in the diverse evaluated scenarios.

5.
IEEE Trans Neural Netw Learn Syst ; 34(6): 2896-2907, 2023 Jun.
Article in English | MEDLINE | ID: mdl-34520373

ABSTRACT

Entropy minimization has been widely used in unsupervised domain adaptation (UDA). However, existing works reveal that using entropy minimization alone may lead to collapsed trivial solutions for UDA. In this article, we try to seek possible close-to-ideal UDA solutions by focusing on some intuitive properties of the ideal domain adaptation solution. In particular, we propose to introduce diversity maximization to further regulate entropy minimization. In order to achieve the possible minimum target risk for UDA, we show that diversity maximization should be elaborately balanced with entropy minimization, and that the degree of balance can be finely controlled with the use of deep embedded validation in an unsupervised manner. The proposed minimal-entropy diversity maximization (MEDM) can be directly implemented by stochastic gradient descent without the use of adversarial learning. Empirical evidence demonstrates that MEDM outperforms the state-of-the-art methods on four popular domain adaptation datasets.
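A compact way to see this trade-off is as a loss with two entropy terms of opposite sign. The sketch below is only a plausible reading of that idea (the weighting constant `lam` is an assumption, not the paper's validated value); it minimizes per-sample prediction entropy while maximizing the entropy of the batch-averaged prediction, and trains with plain SGD.

```python
import torch
import torch.nn.functional as F

def entropy_diversity_loss(logits, lam=0.1):
    """Minimal-entropy term minus a diversity (marginal-entropy) term."""
    p = F.softmax(logits, dim=1)                              # (batch, classes)
    entropy = -(p * torch.log(p + 1e-8)).sum(dim=1).mean()    # confident per-sample predictions
    p_mean = p.mean(dim=0)                                    # batch-averaged prediction
    diversity = -(p_mean * torch.log(p_mean + 1e-8)).sum()    # spread over classes, avoids collapse
    return entropy - lam * diversity

logits = torch.randn(32, 10, requires_grad=True)
loss = entropy_diversity_loss(logits)
loss.backward()                                               # usable with vanilla SGD
```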

6.
IEEE Trans Image Process ; 32: 2568-2579, 2023.
Article in English | MEDLINE | ID: mdl-37093727

ABSTRACT

It is challenging to characterize the intrinsic geometry of high-degree algebraic curves with lower-degree algebraic curves. The reduction in the curve's degree implies lower computation costs, which is crucial for various practical computer vision systems. In this paper, we develop a characteristic mapping (CM) to recursively degenerate 3n points on a planar curve of nth order to 3(n-1) points on a curve of (n-1)th order. The proposed characteristic mapping enables curve grouping on a line, the curve of lowest order, which preserves the intrinsic geometric properties of a higher-order curve (an ellipse). We prove a necessary condition and derive an efficient arc grouping module that finds valid elliptical arc segments by determining whether the mapped three points are collinear, invoking minimal computation. We embed the module into two recent arc-based ellipse detection methods, which reduces their running time by 25% and 50% on average over five widely used data sets. This yields faster detection than the state-of-the-art algorithms while keeping their precision comparable or even higher. The two CM-embedded methods also significantly surpass a deep learning method on all evaluation metrics.
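Since the grouping test ultimately reduces to checking whether three mapped points lie on a line, a minimal sketch of that final step is shown below; the cross-product test and the tolerance are generic assumptions, not the paper's exact implementation.

```python
import numpy as np

def nearly_collinear(p1, p2, p3, tol=1e-6):
    """Three 2D points are accepted as collinear if the parallelogram they span
    has (near-)zero area, i.e., the 2D cross product vanishes."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    return abs(cross) <= tol

print(nearly_collinear((0, 0), (1, 1), (2, 2)))    # True  -> arc segment accepted
print(nearly_collinear((0, 0), (1, 1), (2, 2.5)))  # False -> rejected
```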

7.
J Imaging ; 7(3)2021 Mar 08.
Article in English | MEDLINE | ID: mdl-34460705

ABSTRACT

Visual features and representation learning strategies experienced huge advances in the previous decade, mainly supported by deep learning approaches. However, retrieval tasks are still performed mainly based on traditional pairwise dissimilarity measures, while the learned representations lie on high-dimensional manifolds. With the aim of going beyond pairwise analysis, post-processing methods have been proposed to replace pairwise measures with globally defined measures capable of analyzing collections in terms of the underlying data manifold. The most representative approaches are diffusion-based and rank-based methods. While the diffusion approaches can be computationally expensive, the rank-based methods lack theoretical background. In this paper, we propose an efficient Rank-based Diffusion Process which combines both approaches and avoids the drawbacks of each. The obtained method is capable of efficiently approximating a diffusion process by exploiting rank-based information, while assuring its convergence. The algorithm exhibits very low asymptotic complexity and can be computed regionally, making it suitable for queries outside the dataset. An experimental evaluation conducted for image retrieval and person re-ID tasks on diverse datasets demonstrates the effectiveness of the proposed approach, with results comparable to the state of the art.
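To make the combination of rank information and diffusion more concrete, here is a loose sketch: a sparse affinity built from top-k neighbor ranks followed by a tensor-product-style diffusion iteration. It is an illustrative approximation under assumed parameters (k, alpha, iteration count), not the paper's Rank-based Diffusion Process.

```python
import numpy as np

def rank_affinity(dist, k=5):
    """Sparse affinity from neighbor ranks: only the k nearest neighbors of each
    point get nonzero weight, decaying with rank."""
    ranks = np.argsort(np.argsort(dist, axis=1), axis=1)   # 0 = closest
    W = np.where(ranks < k, 1.0 / (1.0 + ranks), 0.0)
    return (W + W.T) / 2.0                                 # symmetrize

def diffuse(W, iters=20, alpha=0.9):
    """Tensor-product-style diffusion A <- alpha * S A S^T + (1 - alpha) * I."""
    S = W / (W.sum(axis=1, keepdims=True) + 1e-12)         # row-normalized transition
    A = np.eye(W.shape[0])
    for _ in range(iters):
        A = alpha * S @ A @ S.T + (1 - alpha) * np.eye(W.shape[0])
    return A

dist = np.random.rand(50, 50)
dist = (dist + dist.T) / 2.0
np.fill_diagonal(dist, 0.0)
print(diffuse(rank_affinity(dist)).shape)                  # (50, 50) diffused affinity
```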

8.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 2700-2703, 2021 11.
Article in English | MEDLINE | ID: mdl-34891808

ABSTRACT

Recent studies have shown that Dental Panoramic Radiograph (DPR) images have great potential for prescreening of osteoporosis, given the high degree of correlation between bone density and trabecular bone structure. Most of the research works in this area have used pretrained models for feature extraction and classification with good success. However, when the size of the dataset is limited, it becomes difficult to use these pretrained networks and obtain high confidence scores. In this paper, we evaluated the diagnostic performance of a deep convolutional neural network (DCNN)-based computer-assisted diagnosis (CAD) system in the detection of osteoporosis on panoramic radiographs, through a comparison with diagnoses made by oral and maxillofacial radiologists. With the available labelled dataset of 70 images, results were reproduced for the preliminary study model. Furthermore, the model performance was enhanced using different computer vision techniques. Specifically, the age metadata available for each patient was leveraged to obtain more accurate predictions. Lastly, we used these images, ages, and osteoporotic labels to create a neural-network-based regression model and predict the Bone Mineral Density (BMD) value for each patient. Experimental results showed that the proposed CAD system was in high accord with experienced oral and maxillofacial radiologists in detecting osteoporosis and achieved 87.86% accuracy. Clinical relevance: this paper presents a method to detect osteoporosis using DPR images and age data with a multi-column DCNN, and then leverages this data to predict Bone Mineral Density for each patient.


Subjects
Bone Density; Osteoporosis; Humans; Neural Networks, Computer; Osteoporosis/diagnostic imaging; Radiography; Radiography, Panoramic
9.
IEEE Trans Pattern Anal Mach Intell ; 31(8): 1525-31, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19542585

ABSTRACT

This paper addresses the problem of piecewise linear approximation of point sets without any constraints on the order of data points or the number of model components (line segments). We point out two problems with the maximum likelihood estimate (MLE) that present serious drawbacks in practical applications. One is that the parametric models obtained using a classical MLE framework are not guaranteed to be close to data points. It is typically impossible, in this classical framework, to detect whether a parametric model fits the data well or not. The second problem is related to accurately choosing the optimal number of model components. We first fit a nonparametric density to the data points and use it to define a neighborhood of the data. Observations inside this neighborhood are deemed informative; those outside the neighborhood are deemed uninformative for our purpose. This provides us with a means to recognize when models fail to properly fit the data. We then obtain maximum likelihood estimates by optimizing the Kullback-Leibler Divergence (KLD) between the nonparametric data density restricted to this neighborhood and a mixture of parametric models. We prove that, under the assumption of a reasonably large sample size, the inferred model components are close to their ground-truth model component counterparts. This holds independently of the initial number of assumed model components or their associated parameters. Moreover, in the proposed approach, we are able to estimate the number of significant model components without any additional computation.
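The "informative neighborhood" idea can be sketched with an off-the-shelf kernel density estimate: observations whose estimated density exceeds a threshold are kept as informative, the rest are treated as uninformative. The data, bandwidth, and threshold below are illustrative assumptions, not the paper's construction.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
# points near a line segment, plus a few scattered outliers
t = np.linspace(0.0, 1.0, 200)
line = np.stack([t, 0.5 * t]) + 0.01 * rng.standard_normal((2, 200))
outliers = rng.uniform(-1.0, 2.0, size=(2, 20))
points = np.concatenate([line, outliers], axis=1)          # shape (2, 220)

kde = gaussian_kde(points)                                 # nonparametric density of the data
density = kde(points)
informative = density > 0.1 * density.max()                # the "neighborhood" of the data
print(int(informative.sum()), "of", points.shape[1], "points marked informative")
```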

10.
IEEE Trans Image Process ; 18(5): 1025-36, 2009 May.
Article in English | MEDLINE | ID: mdl-19342337

ABSTRACT

We propose a video coding scheme that departs from traditional Motion Estimation/DCT frameworks and instead uses a Karhunen-Loeve Transform (KLT)/joint spatiotemporal prediction framework. In particular, a novel approach that performs joint spatial and temporal prediction simultaneously is introduced. It bypasses the complex H.26x interframe techniques and is less computationally intensive. Because of the advantages of the effective joint prediction and the image-dependent color space transformation (KLT), the proposed approach is demonstrated experimentally to consistently lead to improved video quality, and in many cases to better compression rates and improved computational speed.
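The image-dependent color transform part is easy to illustrate: the KLT for colors is the eigenbasis of the channel covariance. The sketch below (illustrative only, not the paper's codec) decorrelates the RGB channels of a single frame.

```python
import numpy as np

def klt_color_transform(image):
    """Project RGB pixels onto the eigenvectors of their covariance (KLT)."""
    pixels = image.reshape(-1, 3).astype(float)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    cov = np.cov(centered, rowvar=False)          # 3x3 channel covariance
    _, eigvecs = np.linalg.eigh(cov)              # eigenvalues in ascending order
    basis = eigvecs[:, ::-1]                      # principal component first
    coeffs = centered @ basis
    return coeffs.reshape(image.shape), basis, mean

frame = np.random.rand(64, 64, 3)                 # stand-in for a video frame
coeffs, basis, mean = klt_color_transform(frame)
# invert with: coeffs.reshape(-1, 3) @ basis.T + mean
print(np.abs(np.corrcoef(coeffs.reshape(-1, 3), rowvar=False)).round(3))
```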

11.
IEEE Trans Pattern Anal Mach Intell ; 41(5): 1213-1226, 2019 May.
Article in English | MEDLINE | ID: mdl-29993682

ABSTRACT

The diffusion process has greatly advanced object retrieval, as it can capture the underlying manifold structure. Recent studies have experimentally demonstrated that tensor product diffusion can better reveal the intrinsic relationship between objects than other variants. However, the principle remains unclear, i.e., what kind of manifold structure is captured. In this paper, we propose a new affinity learning algorithm called Regularized Diffusion Process (RDP). By deeply exploring the properties of RDP, our first yet basic contribution is providing a manifold-based explanation for tensor product diffusion. A novel criterion measuring the smoothness of the manifold is defined, which simultaneously regularizes four vertices in the affinity graph. Inspired by this observation, we further contribute two variants towards two specific goals. While ARDP can learn similarities across heterogeneous domains, HRDP performs affinity learning on a tensor product hypergraph, considering that the relationships between objects are generally more complex than pairwise. Consequently, RDP, ARDP and HRDP constitute a generic tool for object retrieval in most commonly used settings, whether or not the input relationships between objects are derived from the same domain, and whether or not they are in pairwise form. Comprehensive experiments on 10 retrieval benchmarks, especially on large-scale data, validate the effectiveness and generalization of our work.

12.
Med Image Anal ; 53: 165-178, 2019 04.
Article in English | MEDLINE | ID: mdl-30798116

ABSTRACT

Developing new deep learning methods for medical image analysis is a prevalent research topic in machine learning. In this paper, we propose a deep learning scheme with a novel loss function for weakly supervised breast cancer diagnosis. According to the Nottingham Grading System, the mitotic count plays an important role in breast cancer diagnosis and grading. To determine the cancer grade, pathologists usually need to manually count mitoses in a large number of histopathology images, which is a very tedious and time-consuming task. This paper proposes an automatic method for detecting mitosis. We regard the mitosis detection task as a semantic segmentation problem and use a deep fully convolutional network to address it. Different from conventional training data used in semantic segmentation systems, the training label of mitosis data is usually given as the centroid pixel, rather than all the pixels belonging to a mitosis. The centroid label is a kind of weak label, which is much easier to annotate and greatly reduces the effort of pathologists. However, this weak label is technically not sufficient for training a mitosis segmentation model. To tackle this problem, we expand the single-pixel label to a novel label with concentric circles, where the inside circle is a mitotic region and the ring around the inside circle is a "middle ground". During the training stage, we do not compute the loss of the ring region because it may contain both mitotic and non-mitotic pixels. This new loss, termed "concentric loss", makes it possible to train the semantic segmentation network with the weakly annotated mitosis data. On the segmentation map generated by the segmentation model, we filter out low-confidence regions and obtain the mitotic cells. On the challenging ICPR 2014 MITOSIS dataset and the AMIDA13 dataset, we achieve a 0.562 F-score and a 0.673 F-score respectively, outperforming all previous approaches significantly. On the latest TUPAC16 dataset, we obtain an F-score of 0.669, which is also the state-of-the-art result. The excellent results quantitatively demonstrate the effectiveness of the proposed mitosis segmentation network with the concentric loss. All of our code has been made publicly available at https://github.com/ChaoLi977/SegMitos_mitosis_detection.


Subjects
Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Image Processing, Computer-Assisted/methods; Mitosis; Computer Simulation; Female; Humans; Neural Networks, Computer; Supervised Machine Learning
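A minimal sketch of the concentric-label idea described in the abstract above: the inner circle is labeled mitotic and the surrounding ring is excluded from the loss. Radii and the pixel-wise cross-entropy form are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def concentric_label(shape, centroid, r_in=12, r_out=24):
    """Return a binary label (1 inside the inner circle) and a mask that is 0
    on the ring between r_in and r_out, so the ring is ignored by the loss."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    d = np.hypot(yy - centroid[0], xx - centroid[1])
    label = (d <= r_in).astype(np.float32)
    mask = (~((d > r_in) & (d <= r_out))).astype(np.float32)
    return label, mask

def concentric_bce(prob, label, mask, eps=1e-7):
    """Pixel-wise binary cross-entropy with the ring region masked out."""
    bce = -(label * np.log(prob + eps) + (1 - label) * np.log(1 - prob + eps))
    return (bce * mask).sum() / (mask.sum() + eps)

prob = np.random.rand(128, 128)                   # stand-in segmentation output
label, mask = concentric_label((128, 128), centroid=(64, 64))
print(concentric_bce(prob, label, mask))
```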
13.
IEEE Trans Image Process ; 28(1): 88-101, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30080147

ABSTRACT

As a post-processing procedure, the diffusion process has demonstrated its ability to substantially improve the performance of various visual retrieval systems. Meanwhile, great effort has also been devoted to similarity (or metric) fusion, since a single type of similarity cannot fully reveal the intrinsic relationship between objects. This has stimulated great research interest in considering similarity fusion in the framework of the diffusion process (i.e., fusion with diffusion) for robust retrieval. In this paper, we first revisit representative methods for fusion with diffusion and provide new insights that were ignored by previous researchers. Then, observing that existing algorithms are susceptible to noisy similarities, the proposed regularized ensemble diffusion (RED) is bundled with an automatic weight learning paradigm, so that the negative impacts of noisy similarities are suppressed. Though formulated as a convex optimization problem, one advantage of RED is that it converts back into an iteration-based solver with the same computational complexity as the conventional diffusion process. Finally, we integrate several recently proposed similarities with the proposed framework. The experimental results suggest that we can achieve new state-of-the-art performance on various retrieval tasks, including 3D shape retrieval on the ModelNet data set and image retrieval on the Holidays and Ukbench data sets.

14.
IEEE Trans Pattern Anal Mach Intell ; 30(7): 1282-92, 2008 Jul.
Article in English | MEDLINE | ID: mdl-18550909

ABSTRACT

This paper presents a novel framework for shape recognition based on object silhouettes. The main idea is to match skeleton graphs by comparing the shortest paths between skeleton endpoints. In contrast to typical tree or graph matching methods, we completely ignore the topological graph structure. Our approach is motivated by the fact that visually similar skeleton graphs may have completely different topological structures. The proposed comparison of shortest paths between endpoints of skeleton graphs yields correct matching results in such cases. The skeletons are pruned by contour partitioning with Discrete Curve Evolution, which implies that the endpoints of skeleton branches correspond to visual parts of the objects. The experimental results demonstrate that our method is able to produce correct results in the presence of articulations, stretching, and occlusion.


Subjects
Algorithms; Artificial Intelligence; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Image Enhancement/methods; Information Storage and Retrieval/methods; Reproducibility of Results; Sensitivity and Specificity
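A toy sketch of the matching idea from the abstract above: describe a skeleton graph only by the shortest-path lengths between its endpoints (degree-1 nodes), ignoring topology otherwise. The sorted-length signature and the toy graphs are illustrative assumptions, not the paper's path-similarity measure.

```python
import networkx as nx

def endpoint_path_signature(G, weight="weight"):
    """Sorted shortest-path lengths between all pairs of skeleton endpoints."""
    endpoints = [v for v in G if G.degree(v) == 1]
    sig = [nx.shortest_path_length(G, u, v, weight=weight)
           for i, u in enumerate(endpoints) for v in endpoints[i + 1:]]
    return sorted(sig)

chain = nx.path_graph(5)                 # toy "skeleton" with two endpoints
star = nx.star_graph(3)                  # toy skeleton with three branches
print(endpoint_path_signature(chain))    # [4]
print(endpoint_path_signature(star))     # [2, 2, 2]
```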
15.
Med Image Anal ; 45: 121-133, 2018 04.
Article in English | MEDLINE | ID: mdl-29455111

ABSTRACT

Mitotic count is a critical predictor of tumor aggressiveness in breast cancer diagnosis. Nowadays mitosis counting is mainly performed by pathologists manually, which is extremely arduous and time-consuming. In this paper, we propose an accurate method for detecting mitotic cells from histopathological slides using a novel multi-stage deep learning framework. Our method consists of a deep segmentation network for generating the mitosis region when only a weak label is given (i.e., only the centroid pixel of the mitosis is annotated), an elaborately designed deep detection network for localizing mitosis by using contextual region information, and a deep verification network for improving detection accuracy by removing false positives. We validate the proposed deep learning method on two widely used Mitosis Detection in Breast Cancer Histological Images (MITOSIS) datasets. Experimental results show that we can achieve the highest F-score on the MITOSIS dataset from the ICPR 2012 grand challenge using the deep detection network alone. For the ICPR 2014 MITOSIS dataset, which only provides the centroid location of mitosis, we employ the segmentation model to estimate the bounding box annotation for training the deep detection network. We also apply the verification model to eliminate some false positives produced by the detection model. By fusing the scores of the detection and verification models, we achieve state-of-the-art results. Moreover, our method is very fast with GPU computing, which makes it feasible for clinical practice.


Subjects
Breast Neoplasms/diagnostic imaging; Breast Neoplasms/pathology; Deep Learning; Image Processing, Computer-Assisted/methods; Mitosis; Algorithms; Female; Humans
16.
IEEE Trans Pattern Anal Mach Intell ; 29(1): 126-40, 2007 Jan.
Article in English | MEDLINE | ID: mdl-17108388

ABSTRACT

Digitization is not as easy as it looks. If one digitizes a 3D object, even with a dense sampling grid, the reconstructed digital object may have topological distortions and, in general, there exists no upper bound for the Hausdorff distance. This explains why so far no algorithm has been known which guarantees topology preservation. However, as we will show, it is possible to repair the obtained digital image in a locally bounded way so that it is homeomorphic and close to the 3D object. The resulting digital object is always well-composed, which has nice implications for a lot of image analysis problems. Moreover, we will show that the surface of the original object is homeomorphic to the result of the marching cubes algorithm. This is really surprising since it means that the well-known topological problems of the marching cubes reconstruction simply do not occur for digital images of r-regular objects. Based on trilinear interpolation, we also construct a smooth isosurface from the digital image that has the same topology as the original surface. Finally, we give a surprisingly simple topology-preserving reconstruction method that uses overlapping balls instead of cubical voxels. This is the first approach to digitizing 3D objects which guarantees topology preservation and gives an upper bound for the geometric distortion. Since the output can be chosen as a pure voxel representation, a union of balls, a reconstruction by trilinear interpolation, a smooth isosurface, or the piecewise linear marching cubes surface, the results are directly applicable to a huge class of image analysis algorithms. Moreover, we show how one can efficiently estimate the volume and the surface area of 3D objects by looking at their digitizations. Measuring the volume and the surface area of digital objects are two important problems in 3D image analysis. Good estimators should be multigrid convergent, i.e., the error goes to zero with increasing sampling density. We will show that every presented reconstruction method can be used for volume estimation, and we will give a solution for the much more difficult problem of multigrid-convergent surface area estimation. Our solution is based on simple counting of voxels, and we are the first to be able to give absolute bounds for the surface area.


Subjects
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Information Storage and Retrieval/methods; Reproducibility of Results; Sensitivity and Specificity
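The multigrid-convergent volume estimation discussed in the abstract above can be demonstrated in a few lines: digitize a unit ball at finer and finer grids and count voxels. The grid steps are arbitrary; this is only an illustration of voxel-counting convergence, not the paper's surface-area estimator.

```python
import numpy as np

def digitized_ball_volume(radius, step):
    """Volume estimate: number of grid cells whose center lies inside the ball,
    times the volume of one cell."""
    r = np.arange(-radius - step, radius + step, step)
    x, y, z = np.meshgrid(r, r, r, indexing="ij")
    inside = x**2 + y**2 + z**2 <= radius**2
    return inside.sum() * step**3

true_volume = 4.0 / 3.0 * np.pi                   # volume of the unit ball
for h in (0.5, 0.25, 0.1, 0.05):
    print(h, digitized_ball_volume(1.0, h), true_volume)
# the estimate approaches 4/3*pi as the sampling density increases
```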
17.
IEEE Trans Pattern Anal Mach Intell ; 29(3): 449-62, 2007 Mar.
Article in English | MEDLINE | ID: mdl-17224615

ABSTRACT

In this paper, we introduce a new skeleton pruning method based on contour partitioning. Any contour partition can be used, but the partitions obtained by Discrete Curve Evolution (DCE) yield excellent results. The theoretical properties and the experiments presented demonstrate that the obtained skeletons are in accord with human visual perception and stable, even in the presence of significant noise and shape variations, and have the same topology as the original skeletons. In particular, we have proven that the proposed approach never produces spurious branches, which are common when using the known skeleton pruning methods. Moreover, the proposed pruning method does not displace the skeleton points; consequently, all skeleton points are centers of maximal disks. In contrast, many existing methods displace skeleton points in order to produce pruned skeletons.


Subjects
Algorithms; Artificial Intelligence; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Signal Processing, Computer-Assisted; Information Storage and Retrieval/methods; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity
18.
IEEE Trans Pattern Anal Mach Intell ; 37(3): 541-54, 2015 Mar.
Article in English | MEDLINE | ID: mdl-26353260

ABSTRACT

In this paper, we present a novel partition framework, called dense subgraph partition (DSP), to automatically, precisely and efficiently decompose a positive hypergraph into dense subgraphs. A positive hypergraph is a graph or hypergraph whose edges, except self-loops, have positive weights. We first define the concepts of core subgraph, conditional core subgraph, and disjoint partition of a conditional core subgraph, and then define DSP based on them. The result of DSP is an ordered list of dense subgraphs with decreasing densities, which uncovers all underlying clusters, as well as outliers. A divide-and-conquer algorithm, called min-partition evolution, is proposed to efficiently compute the partition. DSP has many appealing properties. First, it is a nonparametric partition and it reveals all meaningful clusters in a bottom-up way. Second, it has an exact and efficient solution, the min-partition evolution algorithm, which is a divide-and-conquer algorithm and thus time-efficient, memory-friendly, and suitable for parallel processing. Third, it is a unified partition framework for a broad range of graphs and hypergraphs. We also establish its relationship with the densest k-subgraph problem (DkS), an NP-hard but fundamental problem in graph theory, and prove that DSP gives precise solutions to DkS for all k in a graph-dependent set, called the critical k-set. To the best of our knowledge, this is a strong result which has not been reported before. Moreover, as our experimental results show, for sparse graphs, especially web graphs, the size of the critical k-set is close to the number of vertices in the graph. We test the proposed partition framework on various tasks, and the experimental results clearly illustrate its advantages.
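As a small illustration of the quantity that such a partition orders subgraphs by, the sketch below computes the density (induced edge weight per vertex) of a vertex subset in a weighted graph; the adjacency matrix is a made-up toy example, not data from the paper.

```python
import numpy as np

def subgraph_density(W, subset):
    """Total induced edge weight divided by the number of vertices in the subset
    (each undirected edge appears twice in the symmetric adjacency matrix)."""
    idx = np.asarray(sorted(subset))
    return W[np.ix_(idx, idx)].sum() / (2.0 * len(idx))

W = np.array([[0.0, 1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0, 0.2],
              [0.0, 0.0, 0.2, 0.0]])
print(subgraph_density(W, {0, 1, 2}))     # 1.0 for the unit-weight triangle
print(subgraph_density(W, {0, 1, 2, 3}))  # 0.8: the weakly attached vertex lowers the density
```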

19.
IEEE Trans Pattern Anal Mach Intell ; 37(12): 2361-73, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26539843

ABSTRACT

View-based 3D shape retrieval is a popular branch of 3D shape analysis owing to the high discriminative power of 2D views. However, many previous works do not scale up to large 3D shape databases. We propose a two-layer coding (TLC) framework to conduct shape matching much more efficiently. The first-layer coding is applied to pairs of views represented as depth images. The spatial relationship of each view pair is captured with the so-called eigen-angle, which is the planar angle between the two views measured at the center of the 3D shape. Prior to the second-layer coding, the view pairs are divided into subsets according to their eigen-angles. Consequently, view pairs that differ significantly in their eigen-angles are encoded with different codewords, which implies that the spatial arrangement of views is preserved in the second-layer coding. The final feature vector of a 3D shape is the concatenation of all the encoded features from the different subsets, which is used directly for efficient indexing. TLC is not limited to encoding the local features from 2D views, but can also be applied to encoding 3D features. Exhaustive experimental results confirm that TLC achieves state-of-the-art performance in both retrieval accuracy and efficiency.
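The eigen-angle described above is straightforward to compute for a view pair once the shape center and the two viewing positions are known; the sketch below uses made-up camera positions purely for illustration.

```python
import numpy as np

def eigen_angle(view_a, view_b, center):
    """Planar angle (degrees) between two viewing directions measured at the
    center of the 3D shape."""
    a = np.asarray(view_a, float) - np.asarray(center, float)
    b = np.asarray(view_b, float) - np.asarray(center, float)
    cosang = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

center = (0.0, 0.0, 0.0)
print(eigen_angle((1, 0, 0), (0, 1, 0), center))   # 90.0
print(eigen_angle((1, 0, 0), (1, 1, 0), center))   # 45.0
```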

20.
IEEE Trans Pattern Anal Mach Intell ; 35(9): 2131-42, 2013 Sep.
Article in English | MEDLINE | ID: mdl-23868775

ABSTRACT

In this paper, we propose an efficient algorithm to detect dense subgraphs of a weighted graph. The proposed algorithm, called the shrinking and expansion algorithm (SEA), iterates between two phases, namely the expansion phase and the shrink phase, until convergence. For a current subgraph, the expansion phase adds the most related vertices based on the average affinity between each vertex and the subgraph. The shrink phase considers all pairwise relations in the current subgraph and filters out vertices whose average affinities to other vertices are smaller than the average affinity of the resulting subgraph. In both phases, SEA operates on small subgraphs; thus it is very efficient. Significant dense subgraphs are robustly enumerated by running SEA from each vertex of the graph. We evaluate SEA on two different applications: solving correspondence problems and cluster analysis. Both theoretical analysis and experimental results show that SEA is very efficient and robust, especially when there exists a large amount of noise in the edge weights.
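To make the alternation concrete, here is a rough sketch of one expand/shrink round on an affinity matrix. The selection and filtering rules below (top-k expansion, mean-affinity threshold) are simplifying assumptions, not SEA's exact update rules.

```python
import numpy as np

def expand(W, subset, top_k=3):
    """Add the outside vertices with the highest average affinity to the subset."""
    outside = [v for v in range(W.shape[0]) if v not in subset]
    scores = {v: W[v, list(subset)].mean() for v in outside}
    best = sorted(scores, key=scores.get, reverse=True)[:top_k]
    return subset | set(best)

def shrink(W, subset):
    """Drop vertices whose average affinity to the rest of the subset is below
    the subset's mean affinity."""
    idx = list(subset)
    sub = W[np.ix_(idx, idx)]
    avg = {v: sub[i].sum() / (len(idx) - 1) for i, v in enumerate(idx)}
    threshold = np.mean(list(avg.values()))
    return {v for v in idx if avg[v] >= threshold}

rng = np.random.default_rng(0)
W = rng.random((20, 20))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
S = {0}
for _ in range(5):                     # alternate expansion and shrinking
    S = shrink(W, expand(W, S))
print(sorted(S))                       # a (locally) dense subgraph seed
```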
