Results 1 - 20 of 25
1.
IEEE Trans Image Process ; 33: 3161-3173, 2024.
Article in English | MEDLINE | ID: mdl-38683701

ABSTRACT

Detecting ellipses is a challenging low-level task indispensable to many image analysis applications. Existing ellipse detection methods commonly encounter two fundamental issues. First, detection accuracy tends to be lower for a small ellipse than for a large one; this is the scale issue. Second, detection accuracy tends to be lower along the minor axis of an ellipse than along its major axis; this is the anisotropy issue. To address both issues simultaneously, a novel anisotropic scale-invariant (ASI) ellipse detection methodology is proposed. The basic idea is to perform ellipse detection in a transformed image space, referred to as the ellipse normalization (EN) space, in which the desired ellipse from the original image is 'normalized' to the unit circle. With the EN-space established, an analytical ellipse fitting scheme and a set of distance measures are developed. Theoretical justifications then prove that both the ellipse fitting scheme and the distance measures are invariant to anisotropic scaling, so each ellipse can be detected with the same accuracy regardless of its size and ellipticity. By incorporating these components into two recent state-of-the-art algorithms, two ASI ellipse detectors are developed and used to verify the effectiveness of the proposed methodology.
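As a rough illustration of the normalization idea (not the authors' exact fitting scheme), the sketch below maps edge points through the affine transform that sends a candidate ellipse to the unit circle and measures deviations there; the parameter names and the simple radial deviation measure are assumptions.

```python
import numpy as np

def en_transform(cx, cy, a, b, theta):
    """Affine map sending the ellipse (center (cx, cy), semi-axes a, b,
    orientation theta) to the unit circle, i.e., into an EN-like space."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, s], [-s, c]])      # rotate by -theta
    S = np.diag([1.0 / a, 1.0 / b])      # undo the anisotropic axis scales
    return lambda pts: (S @ R @ (np.asarray(pts, float) - [cx, cy]).T).T

def en_distance(pts, cx, cy, a, b, theta):
    """Deviation of edge points from the unit circle after normalization;
    by construction this measure is unaffected by the ellipse's size or
    ellipticity, which is the point of the EN-space idea."""
    q = en_transform(cx, cy, a, b, theta)(pts)
    return np.abs(np.linalg.norm(q, axis=1) - 1.0)
```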

2.
IEEE Trans Image Process ; 31: 3765-3779, 2022.
Article in English | MEDLINE | ID: mdl-35604974

ABSTRACT

This paper proposes a new full-reference image quality assessment (IQA) model, called the spatial and geometry feature-based model (SGFM), for perceptual quality evaluation of light field (LF) images. Considering that an LF image describes both spatial and geometry information of the scene, spatial features are extracted from the sub-aperture images (SAIs) using the contourlet transform and exploited to reflect the spatial quality degradation of the LF image, while geometry features are extracted across adjacent SAIs using a 3D-Gabor filter and explored to describe the loss of viewing consistency. These schemes are motivated by the fact that the human eye is most sensitive to scale, direction, and contour from the spatial perspective, and to viewing-angle variations from the geometry perspective. The operations are applied to the reference and distorted LF images independently, and the degree of similarity between the measured quantities is then computed to arrive at the final IQA score of the distorted LF image. Experimental results on three commonly-used LF IQA datasets show that the proposed SGFM is more consistent with the quality of LF images as perceived by the human visual system (HVS) than multiple classical and state-of-the-art IQA models.

3.
IEEE Trans Image Process ; 31: 6175-6187, 2022.
Article in English | MEDLINE | ID: mdl-36126028

ABSTRACT

In this paper, a full-reference video quality assessment (VQA) model, called the hybrid spatiotemporal feature-based model (HSFM), is designed for the perceptual quality assessment of screen content videos (SCVs). SCVs have a hybrid structure containing both screen and natural scenes, which the human visual system (HVS) perceives with different visual effects. With this consideration, a three-dimensional Laplacian of Gaussian (3D-LOG) filter and three-dimensional natural scene statistics (3D-NSS) are exploited to extract screen and natural spatiotemporal features from the reference and distorted SCV sequences separately. The similarities of these extracted features are then computed independently, yielding separate quality scores for the screen and natural scenes. An adaptive fusion scheme driven by local video activity then combines these scores to arrive at the final VQA score of the distorted SCV under evaluation. Experimental results on the Screen Content Video Database (SCVD) and the Compressed Screen Content Video Quality (CSCVQ) database show that the proposed HSFM is more consistent with the perceptual quality of SCVs as perceived by the HVS than a variety of classical and recent IQA/VQA models.


Subjects
Algorithms; Databases, Factual; Humans; Video Recording/methods
4.
Article in English | MEDLINE | ID: mdl-32997630

ABSTRACT

A new multi-scale deep learning (MDL) framework is proposed and exploited for image interpolation in this paper. The core of the framework is a seeding network designed for the targeted task. For image interpolation, a novel attention-aware inception network (AIN) is developed as the seeding network; it has two key stages: 1) feature extraction from the low-resolution input image; and 2) feature-to-image mapping to enlarge the image's size or resolution. Note that the seeding network, AIN, needs to be trained with a matched training dataset at each scale. For that, multi-scale image patches are generated using the proposed pyramid cut, which outperforms the conventional image pyramid method by completely avoiding the aliasing issue. After training, the trained AINs are combined to process the input image in the testing stage, forming the multi-scale AIN (MAIN). Extensive experimental results obtained on seven image datasets (comprising 359 images in total) clearly show that the proposed MAIN consistently delivers highly accurate interpolated images.

5.
Article in English | MEDLINE | ID: mdl-32881686

ABSTRACT

Existing neural networks for low-level image processing tasks are usually implemented by stacking convolution layers with limited kernel size, so each convolution layer only draws on contextual information from a small local neighborhood. More contextual features can be explored as more convolution layers are adopted; however, it is difficult and costly to take full advantage of long-range dependencies this way. We propose a novel non-local module, the Pyramid Non-local Block, to build connections between every pixel and all remaining pixels. The proposed module efficiently exploits pairwise dependencies between different scales of low-level structures. It first learns a query feature map at full resolution and a pyramid of reference feature maps at downscaled resolutions; correlations with the multi-scale reference features are then exploited to enhance the pixel-level feature representation. The calculation procedure is economical in terms of memory consumption and computational cost. Based on the proposed module, we devise a Pyramid Non-local Enhanced Network for edge-preserving image smoothing, which achieves state-of-the-art performance in imitating three classical image smoothing algorithms. Additionally, the Pyramid Non-local Block can be directly incorporated into convolutional neural networks for other image restoration tasks; we integrate it into two existing methods for image denoising and single-image super-resolution, achieving consistently improved performance.
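A plausible reading of the module in PyTorch, offered as a sketch only (the layer names, scale set, and pooling choice are assumptions, not the paper's exact design): full-resolution queries attend to keys and values taken from a pyramid of downscaled feature maps, which keeps the attention matrix far smaller than full pairwise attention.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidNonLocalBlock(nn.Module):
    """Sketch: full-resolution queries attend to keys/values pooled from
    downscaled copies of the feature map, so the attention matrix is
    (HW x sum_s (H/s)(W/s)) instead of (HW x HW)."""
    def __init__(self, channels, scales=(2, 4, 8)):
        super().__init__()
        inter = max(channels // 2, 1)
        self.scales = scales
        self.q = nn.Conv2d(channels, inter, 1)
        self.k = nn.Conv2d(channels, inter, 1)
        self.v = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, _, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)               # (b, hw, c')
        pyr = [F.avg_pool2d(x, s) for s in self.scales]        # downscaled refs
        k = torch.cat([self.k(p).flatten(2) for p in pyr], 2)  # (b, c', n)
        v = torch.cat([self.v(p).flatten(2) for p in pyr], 2)  # (b, c', n)
        attn = torch.softmax(q @ k / k.shape[1] ** 0.5, -1)    # (b, hw, n)
        y = (attn @ v.transpose(1, 2)).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                                 # residual output
```

For example, `PyramidNonLocalBlock(64)(torch.randn(1, 64, 64, 64))` runs on a 64-channel feature map whose sides are divisible by every scale.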

6.
Article in English | MEDLINE | ID: mdl-32149636

ABSTRACT

In this paper, a progressive collaborative representation (PCR) framework is proposed that can incorporate any existing color image demosaicing method and further boost its performance. PCR consists of two phases: (i) offline training and (ii) online refinement. In phase (i), multiple training-and-refining stages are performed. In each stage, a new dictionary is established by learning a large number of feature-patch pairs extracted from the demosaicked images of the current stage and their corresponding original full-color images. After training, a projection matrix is generated and exploited to refine the current demosaicked image, and the refined image with improved quality is used as the input for the next training-and-refining stage, which is processed in the same way. At the end of phase (i), all the projection matrices generated above are exploited in phase (ii) to conduct online refinement of the demosaicked test image. Extensive simulations conducted on two commonly-used test datasets for evaluating demosaicing algorithms (IMAX and Kodak) clearly demonstrate that the proposed PCR framework consistently boosts the performance of every image demosaicing method we experimented with, in terms of both objective and subjective evaluations.
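The per-stage refinement can be pictured as ridge-regularized linear regression; the sketch below is an assumption about the exact objective, with hypothetical names, mapping feature patches of the current demosaicked images to their ground-truth counterparts.

```python
import numpy as np

def learn_projection(X, Y, lam=1e-3):
    """One PCR-style training stage (sketch): X holds feature patches from the
    current demosaicked images (d x n), Y the co-located ground-truth patches
    (d_out x n). Ridge-regularized least squares gives the projection matrix."""
    d = X.shape[0]
    return Y @ X.T @ np.linalg.inv(X @ X.T + lam * np.eye(d))

def refine(P, patches):
    """Online refinement: apply a stage's projection to test-image patches."""
    return P @ patches
```

Stacking several such stages, each trained on the previous stage's refined output, gives the progressive structure described above.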

7.
Article in English | MEDLINE | ID: mdl-32886610

ABSTRACT

Lossy compression introduces artifacts into the compressed image and degrades its visual quality. In recent years, many compression artifact removal methods based on convolutional neural networks (CNNs) have been developed with great success. However, these methods are usually trained for one specific quality factor or a small range of quality factors; if the quality factor of a test image falls outside the assumed range, performance degrades. With this motivation, and with practical usage in mind, a highly robust compression artifact removal network is proposed in this paper. The proposed network is a single-model approach that can be trained to handle a wide range of quality factors while consistently delivering superior or comparable artifact removal performance. To demonstrate this, we focus on JPEG compression with quality factors ranging from 1 to 60. A key to the success of the proposed network lies in the novel use of the quantization tables as part of the training data. Furthermore, the network has two parallel branches: a restoration branch and a global branch. The former effectively removes local artifacts, such as ringing; the latter extracts global features of the entire image, which proves highly instrumental for dealing with global artifacts, such as blocking and color shift. Extensive experiments on color and grayscale images clearly demonstrate the effectiveness of the proposed single-model approach for removing compression artifacts from decoded images.
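The abstract does not say how the quantization tables enter the network; one simple possibility, shown purely as an assumption, is to tile the 8x8 luminance table over the image plane and append it as an extra input channel.

```python
import numpy as np

def with_quant_table(img, qtable):
    """Sketch: tile the 8x8 JPEG luminance quantization table over the image
    plane and stack it as an extra input channel, so a single model can
    condition on the (quality-factor-dependent) quantization strength."""
    h, w = img.shape[:2]
    reps = (int(np.ceil(h / 8)), int(np.ceil(w / 8)))
    qmap = np.tile(qtable, reps)[:h, :w].astype(np.float32) / 255.0
    chans = img if img.ndim == 3 else img[..., None]
    return np.concatenate([chans.astype(np.float32), qmap[..., None]], axis=-1)
```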

8.
Article in English | MEDLINE | ID: mdl-32845839

ABSTRACT

In this paper, we make the first attempt to study the subjective and objective quality assessment of screen content videos (SCVs). To that end, we construct the first large-scale video quality assessment (VQA) database specifically for SCVs, called the screen content video database (SCVD). SCVD provides 16 reference SCVs, 800 distorted SCVs, and their corresponding subjective scores, and it is made publicly available for research use. The distorted SCVs are generated from each reference SCV with 10 distortion types and 5 degradation levels per type, and each distorted SCV is rated by at least 32 subjects in the subjective test. Furthermore, we propose the first full-reference VQA model for SCVs, called the spatiotemporal Gabor feature tensor-based model (SGFTM), to objectively evaluate the perceptual quality of distorted SCVs. It is motivated by the observation that the 3D-Gabor filter can well simulate the visual functions of the human visual system (HVS) in perceiving videos, being highly sensitive to the edge and motion information often encountered in SCVs. Specifically, the proposed SGFTM exploits a 3D-Gabor filter to extract spatiotemporal Gabor feature tensors from the reference and distorted SCVs individually, measures their similarities, and combines the results through the developed spatiotemporal feature tensor pooling strategy to obtain the final SGFTM score. Experimental results on SCVD show that the proposed SGFTM is highly consistent with the subjective perception of SCV quality and consistently outperforms multiple classical and state-of-the-art image/video quality assessment models.

9.
IEEE Trans Pattern Anal Mach Intell ; 31(8): 1517-24, 2009 Aug.
Article in English | MEDLINE | ID: mdl-19542584

ABSTRACT

The curvature scale-space (CSS) technique is suitable for extracting curvature features from objects with noisy boundaries. To detect corner points in a multiscale framework, Rattarangsi and Chin investigated the scale-space behavior of planar-curve corners. Unfortunately, their investigation was based on an incorrect assumption, viz., that planar curves do not shrink under evolution. In the present paper, this mistake is corrected. First, it is demonstrated that a planar curve may shrink nonuniformly as it evolves across increasing scales. Then, by taking into account the shrinkage effect of evolved curves, the CSS trajectory maps of various corner models are investigated and their properties are summarized. The scale-space trajectory of a corner may persist, vanish, merge with a neighboring trajectory, or split into several trajectories. The scale-space trajectories of adjacent corners may attract each other when the corners have the same concavity, or repel each other when the corners have opposite concavities. Finally, we present a standard curvature measure for computing the CSS maps of digital curves, with which it is shown that planar-curve corners exhibit the same scale-space behavior in the digital case as in the continuous case.
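For reference, the classical CSS computation evaluates the curvature of the Gaussian-evolved curve, kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2), at every scale; the sketch below implements that standard formula and does not reproduce the paper's shrinkage-aware correction.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def css_curvature(x, y, sigma):
    """Curvature of a closed planar curve (coordinate arrays x, y) evolved to
    scale sigma, with derivatives taken by Gaussian-derivative convolution.
    Corner trajectories are tracked as the extremum/zero structure of the
    returned curvature across increasing sigma."""
    xd = gaussian_filter1d(x, sigma, order=1, mode='wrap')
    yd = gaussian_filter1d(y, sigma, order=1, mode='wrap')
    xdd = gaussian_filter1d(x, sigma, order=2, mode='wrap')
    ydd = gaussian_filter1d(y, sigma, order=2, mode='wrap')
    return (xd * ydd - yd * xdd) / (xd ** 2 + yd ** 2) ** 1.5
```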

10.
Article in English | MEDLINE | ID: mdl-31478850

ABSTRACT

3D point clouds with associated attributes are considered a promising paradigm for immersive communication. However, the corresponding compression schemes for this medium are still in their infancy. Moreover, in contrast to conventional image/video compression, compressing 3D point cloud data is more challenging owing to its irregular structure. In this paper, we propose a novel and effective compression scheme for the attributes of voxelized 3D point clouds. In the first stage, the input voxelized 3D point cloud is divided into blocks of equal size. Then, to deal with the irregular structure, a geometry-guided sparse representation (GSR) is proposed to eliminate the redundancy within each block, formulated as an ℓ0-norm regularized optimization problem; an inter-block prediction scheme is also applied to remove the redundancy between blocks. Finally, by quantitatively analyzing the characteristics of the transform coefficients produced by GSR, an effective entropy coding strategy tailored to GSR is developed to generate the bitstream. Experimental results on various benchmark datasets show that the proposed compression scheme achieves better rate-distortion performance and visual quality than state-of-the-art methods.
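ℓ0-regularized sparse coding of this kind is typically approximated greedily; the sketch below uses plain orthogonal matching pursuit as a stand-in, and does not reproduce the paper's geometry-guided dictionary construction.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate the l0-regularized
    coding of signal y over a column-normalized dictionary D with at most
    k nonzero coefficients."""
    residual, support = y.astype(float).copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)  # re-fit on the support
        residual = y - sub @ sol
    coeffs[support] = sol
    return coeffs
```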

11.
IEEE Trans Image Process ; 27(9): 4465-4477, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29897872

ABSTRACT

In this paper, a highly adaptive unsharp masking (UM) method is proposed, called blurriness-guided UM, or BUM for short. The proposed BUM exploits the estimated local blurriness as guidance information to perform pixel-wise enhancement. The consideration of local blurriness is motivated by the fact that enhancing a highly sharp or a highly blurred image region is undesirable, since this could easily yield unpleasant artifacts due to over-enhancement or noise amplification, respectively. The proposed BUM algorithm has two key adaptations. First, the enhancement strength is adjusted for each pixel of the input image according to the degree of local blurriness measured in the region around that pixel; these measurements collectively form a blurriness map, from which a scaling matrix is obtained via the proposed mapping process. Second, the type of layer-decomposition filter used to generate the base layer and the detail layer is also considered, from the viewpoint of edge-preserving versus non-edge-preserving filters, since this choice effectively helps prevent over-enhancement artifacts. Extensive simulations on various test images clearly demonstrate that the proposed BUM consistently yields enhanced images with better perceptual quality than those produced using a fixed enhancement strength or other state-of-the-art adaptive UM methods.
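A minimal sketch of the pixel-wise adaptation, assuming a Gaussian (non-edge-preserving) base layer and a hypothetical bell-shaped blurriness-to-gain mapping; the paper's actual mapping process and filter choices differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bum_sketch(img, blur_map, s_min=0.2, s_max=2.0):
    """Blurriness-guided unsharp masking, schematically: decompose the image
    into base and detail layers, then scale the detail layer per pixel.
    blur_map is assumed normalized to [0, 1]; the bell-shaped gain below
    (peaking at mid blurriness) merely illustrates suppressing enhancement
    on very sharp and very blurred regions."""
    base = gaussian_filter(img.astype(np.float32), sigma=2.0)
    detail = img - base
    gain = s_min + (s_max - s_min) * np.exp(-((blur_map - 0.5) ** 2) / 0.05)
    return np.clip(base + gain * detail, 0, 255)
```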

12.
IEEE Trans Image Process ; 27(9): 4516-4528, 2018 Sep.
Article in English | MEDLINE | ID: mdl-29897876

ABSTRACT

In this paper, an accurate and efficient full-reference image quality assessment (IQA) model using extracted Gabor features, called the Gabor feature-based model (GFM), is proposed for the objective evaluation of screen content images (SCIs). It is well known that Gabor filters are highly consistent with the response of the human visual system (HVS), and that the HVS is highly sensitive to edge information. Based on these facts, the imaginary part of the Gabor filter, which has odd symmetry and acts as an edge detector, is applied to the luminance of the reference and distorted SCIs to extract their Gabor features. The local similarities of the extracted Gabor features and of two chrominance components, recorded in the LMN color space, are then measured independently. Finally, a Gabor-feature pooling strategy is employed to combine these measurements and generate the final evaluation score. Experimental results obtained on two large SCI databases show that the proposed GFM model not only yields higher consistency with human perception in the assessment of SCIs but also requires lower computational complexity than classical and state-of-the-art IQA models. The source code for the proposed GFM will be available at http://smartviplab.org/pubilcations/GFM.html.
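A small sketch of the two ingredients named above: the odd-symmetric (imaginary) Gabor kernel used for edge-sensitive feature extraction, and the classical pointwise similarity measure common to this family of IQA models. The parameter values are placeholders, not the paper's settings.

```python
import numpy as np

def odd_gabor_kernel(size=11, sigma=2.5, theta=0.0, freq=0.2):
    """Imaginary (odd-symmetric) part of a Gabor filter; responds to edges
    perpendicular to orientation theta."""
    half = size // 2
    yy, xx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = xx * np.cos(theta) + yy * np.sin(theta)
    return (np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
            * np.sin(2 * np.pi * freq * xr))

def feature_similarity(f_ref, f_dst, c=0.0025):
    """Classical pointwise similarity used by many full-reference IQA models:
    1 where the feature maps agree, approaching 0 where they differ."""
    return (2 * f_ref * f_dst + c) / (f_ref ** 2 + f_dst ** 2 + c)
```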

13.
IEEE Trans Image Process ; 16(2): 428-41, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17269636

ABSTRACT

It has been well established that critically sampled boundary pre-/postfiltering operators can improve coding efficiency and mitigate blocking artifacts in traditional discrete cosine transform-based block coders at low bit rates. In these systems, both the prefilter and the postfilter are square matrices. This paper proposes undersampled boundary pre- and postfiltering modules, in which the pre-/postfilters are rectangular matrices: the prefilter is a "fat" matrix, while the postfilter is a "tall" one. In this way, the prefiltered image is smaller than the original input image, which leads to improved compression performance and reduced computational complexity at low bit rates. The design and VLSI-friendly implementation of the undersampled pre-/postfilters are derived, and their relations to lapped transforms and filter banks are presented. Two design examples are included to demonstrate the validity of the theory. Furthermore, image coding results indicate that the proposed undersampled pre-/postfiltering systems yield excellent and stable performance in low bit-rate image coding.


Subjects
Algorithms; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Numerical Analysis, Computer-Assisted; Sample Size
14.
IEEE Trans Image Process ; 16(2): 491-502, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17269641

ABSTRACT

In this paper, the design of the error-resilient time-domain lapped transform is formulated as a linear minimum mean-squared error (LMMSE) problem. The optimal Wiener solution and several simplifications with different tradeoffs between complexity and performance are developed, and the persymmetric structure of these Wiener filters is proven. The existing mean reconstruction method is shown to be a special case of the proposed framework, which also includes as a special case the linear interpolation method used in DCT-based systems when there is no pre-/postfiltering and quantization noise is ignored. The design criteria from our previous results are scrutinized and improved solutions are obtained. Various design examples and multiple description image coding experiments are reported to demonstrate the performance of the proposed method.
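The LMMSE backbone of such designs is compact; a generic sketch follows (it does not reproduce the paper's persymmetric simplifications).

```python
import numpy as np

def lmmse_filter(R_xy, R_yy):
    """Generic Wiener/LMMSE estimator: given the cross-covariance R_xy between
    the lost coefficients x and the received data y, and the covariance R_yy
    of y, the optimal linear reconstruction is x_hat = W @ y with
    W = R_xy @ inv(R_yy)."""
    return R_xy @ np.linalg.inv(R_yy)

# usage: x_hat = lmmse_filter(R_xy, R_yy) @ y
```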


Subjects
Algorithms; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Information Storage and Retrieval/methods; Signal Processing, Computer-Assisted; Numerical Analysis, Computer-Assisted
15.
IEEE Trans Image Process ; 26(10): 4818-4831, 2017 Oct.
Article in English | MEDLINE | ID: mdl-28644808

ABSTRACT

In this paper, an accurate full-reference image quality assessment (IQA) model for screen content images (SCIs), called the edge similarity model (ESIM), is proposed. It is inspired by the fact that the human visual system (HVS) is highly sensitive to the edges that abound in SCIs; essential edge features are therefore extracted and exploited for conducting IQA of SCIs. The key novelty of the proposed ESIM lies in the extraction and use of three salient edge features: edge contrast, edge width, and edge direction. The first two attributes are generated simultaneously from the input SCI based on a parametric edge model, while the last is derived directly from the input SCI. These three features are extracted from the reference SCI and the distorted SCI individually. The degree of similarity for each edge attribute is then computed independently, and the results are combined using the proposed edge-width pooling strategy to generate the final ESIM score. To evaluate the proposed model, a new and, to date, the largest SCI database (denoted SCID) is established in our work and made publicly available for download. The database contains 1800 distorted SCIs generated from 40 reference SCIs; for each SCI, nine distortion types are investigated, with five degradation levels produced per type. Extensive simulation results clearly show that the proposed ESIM model is more consistent with HVS perception in the evaluation of distorted SCIs than multiple state-of-the-art IQA methods.

16.
IEEE Trans Image Process ; 15(6): 1506-16, 2006 Jun.
Article in English | MEDLINE | ID: mdl-16764275

ABSTRACT

A novel switching median filter incorporating a powerful impulse noise detection method, called boundary discriminative noise detection (BDND), is proposed in this paper for effectively denoising extremely corrupted images. To determine whether the current pixel is corrupted, the proposed BDND algorithm first classifies the pixels of a localized window centered on the current pixel into three groups: lower-intensity impulse noise, uncorrupted pixels, and higher-intensity impulse noise. The center pixel is then considered "uncorrupted" if it belongs to the uncorrupted group, and "corrupted" otherwise. For this, the two boundaries that discriminate the three groups need to be accurately determined to yield a very high noise detection accuracy; in our case, a zero miss-detection rate is achieved while maintaining a fairly low false-alarm rate, even up to 70% noise corruption. Four noise models are considered for performance evaluation. Extensive simulation results on both monochrome and color images over a wide range of noise corruption (from 10% to 90%) clearly show that the proposed switching median filter substantially outperforms all existing median-based filters in suppressing impulse noise while preserving image details, and yet the proposed BDND is algorithmically simple and suitable for real-time implementation and application.
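A toy version of the boundary search, offered only as an approximation of the idea (the published decision rules are more involved): sort the window, place one boundary at the largest intensity jump in each half around the median, and accept the center pixel only if it lies between the two boundaries.

```python
import numpy as np

def bdnd_classify(window, center_value):
    """Return True if the center pixel is judged 'uncorrupted' (sketch)."""
    v = np.sort(window.ravel())
    mid = len(v) // 2
    gaps = np.diff(v)                        # intensity jumps between neighbors
    b1 = v[np.argmax(gaps[:mid])]            # boundary inside the lower half
    b2 = v[mid + np.argmax(gaps[mid:]) + 1]  # boundary inside the upper half
    return b1 <= center_value <= b2
```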


Subjects
Algorithms; Artifacts; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Imaging, Three-Dimensional/methods; Signal Processing, Computer-Assisted; Discriminant Analysis; Filtration/methods; Information Storage and Retrieval/methods; Reproducibility of Results; Sensitivity and Specificity
17.
IEEE Trans Image Process ; 15(4): 819-32, 2006 Apr.
Article in English | MEDLINE | ID: mdl-16579371

ABSTRACT

Forward error correction-based multiple description (MD-FEC) transcoding for transmitting embedded bitstreams over packet-erasure networks has been extensively studied. In the existing work, a single embedded source bitstream, e.g., the bitstream of a group of pictures (GOP) encoded using three-dimensional set partitioning in hierarchical trees, is optimally protected with unequal error protection (UEP) in the rate-distortion sense. However, most previous work on transmitting embedded video using MD-FEC assumed that each GOP is transmitted only once and did not consider retransmission, which may lead to noticeable video quality variations under varying channel conditions. In this paper, a novel window-based packetization scheme is proposed, which combats bursty packet loss by combining three techniques: UEP, retransmission, and GOP-level interleaving. In particular, two retransmission mechanisms, segment-wise retransmission and byte-wise retransmission, are proposed based on different types of receiver feedback. Moreover, two levels of rate allocation are introduced: intra-GOP rate allocation minimizes the distortion of an individual GOP, while inter-GOP rate allocation reduces video quality fluctuations by adaptively allocating bandwidth according to video signal characteristics and client buffer status. In this way, more consistent video quality can be achieved under various packet loss probabilities, as demonstrated by our experimental results.


Subjects
Computer Communication Networks; Computer Graphics; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Video Recording/methods; Algorithms; Data Compression/standards; Image Enhancement/standards; Image Interpretation, Computer-Assisted/standards; Photography/methods; Photography/standards; Selection Bias; Sensitivity and Specificity; Video Recording/standards
18.
IEEE Trans Image Process ; 14(2): 189-99, 2005 Feb.
Article in English | MEDLINE | ID: mdl-15700524

ABSTRACT

In this paper, we investigate the problem of transmitting embedded encoded object-oriented images over packet-erasure networks. After reviewing the existing combined unequal error protection (CUEP) and individual unequal error protection (IUEP) schemes, we propose a novel weighted unequal error protection (WUEP) packetization scheme as an alternative to the existing methods. In the proposed framework, the embedded bitstreams of all image objects of interest are packetized into multiple description packet streams before transmission. Two levels of rate allocation are introduced: intra-object rate allocation provides unequal error protection to the embedded bitstream of each object and minimizes its mean distortion; inter-object rate allocation minimizes the weighted mean distortion by adaptively allocating the rate budget among objects according to their importance. Furthermore, the proposed packetization scheme allows individual image objects to be accessed and manipulated independently. A detailed comparison among CUEP, IUEP, and WUEP is presented along with experimental results, so that the most suitable approach can be chosen according to the application requirements.


Subjects
Algorithms; Artifacts; Data Compression/methods; Image Enhancement/methods; Image Interpretation, Computer-Assisted/methods; Signal Processing, Computer-Assisted; Video Recording/methods; Computer Communication Networks; Computer Simulation; Numerical Analysis, Computer-Assisted; Reproducibility of Results; Sensitivity and Specificity
19.
IEEE Trans Image Process ; 24(12): 5879-91, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26441414

ABSTRACT

A recently developed demosaicing methodology, called residual interpolation (RI), has demonstrated superior performance over conventional color-component-difference interpolation. However, existing RI-based methods fail to fully exploit the potential of the RI strategy in reconstructing the most important channel, G, since only the R and B channels are restored through RI. Because any reconstruction error introduced in the G channel is carried over into the demosaicing of the other two channels, the restoration of the G channel is highly instrumental to the quality of the final demosaicked image. In this paper, a novel iterative RI (IRI) process is developed to first reconstruct a highly accurate G channel; in essence, it is an iterative refinement of the estimates of the missing pixel values on the G channel. The key novelty of the proposed IRI process is that all three channels mutually guide each other until a stopping criterion is met. Based on the restored G channel, the mosaiced R and B channels are then reconstructed using the existing RI method without iteration. Extensive simulations on two commonly-used test datasets for demosaicing algorithms demonstrate that our algorithm achieves the best performance in most cases, compared with existing state-of-the-art demosaicing methods, in both objective and subjective evaluations.
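A skeleton of the iterative refinement loop, under explicit assumptions: cv2.ximgproc.guidedFilter (opencv-contrib) stands in for the paper's guided estimation, a single averaged guide replaces the full mutual three-channel guidance, and masked_fill is a simple normalized box-filter interpolation of the sparse residual.

```python
import numpy as np
import cv2  # cv2.ximgproc.guidedFilter requires the opencv-contrib build

def masked_fill(res, mask, ksize=5):
    """Spread sparse residual samples to all pixels: smooth the masked
    residual and renormalize by the smoothed mask."""
    m = mask.astype(np.float32)
    num = cv2.blur(res.astype(np.float32) * m, (ksize, ksize))
    den = cv2.blur(m, (ksize, ksize))
    return num / np.maximum(den, 1e-6)

def iri_green_sketch(G_obs, R, B, mask_g, iters=3):
    """Each pass builds a tentative G under guidance, interpolates the
    residual observed at the true G samples (mask_g), and adds it back."""
    G = G_obs.astype(np.float32)
    guide = ((R + B) / 2.0).astype(np.float32)
    for _ in range(iters):
        tentative = cv2.ximgproc.guidedFilter(guide, G, 5, 1e-2)
        residual = np.where(mask_g, G_obs - tentative, 0.0)
        G = tentative + masked_fill(residual, mask_g)
    return G
```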

20.
IEEE Trans Image Process ; 11(8): 944-52, 2002.
Article in English | MEDLINE | ID: mdl-18244688

ABSTRACT

A conventional color histogram (CCH) considers neither the color similarity across different bins nor the color dissimilarity within the same bin. It is therefore sensitive to noisy interference such as illumination changes and quantization errors. Furthermore, the CCH's large number of histogram bins incurs heavy computation in histogram comparison. To address these concerns, this paper presents a new color histogram representation, called the fuzzy color histogram (FCH), which spreads each pixel's color over all histogram bins through a fuzzy-set membership function. A novel and fast approach for computing the membership values based on the fuzzy c-means algorithm is introduced. The proposed FCH is further exploited for image indexing and retrieval. Experimental results clearly show that the FCH yields better retrieval results than the CCH, making it well suited for image retrieval over large image databases.
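The membership computation reduces to the standard fuzzy c-means formula; a minimal sketch follows, assuming the bin centers have been precomputed offline (e.g., by running FCM on a color sample) and a fuzzifier of m = 2.

```python
import numpy as np

def fuzzy_memberships(colors, centers, m=2.0):
    """Fuzzy c-means membership of each pixel color (n x 3) to each histogram
    bin center (k x 3): u_ik ~ d_ik^(-2/(m-1)), normalized over bins."""
    d = np.linalg.norm(colors[:, None, :] - centers[None, :, :], axis=2)
    w = np.maximum(d, 1e-12) ** (-2.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)

def fuzzy_color_histogram(colors, centers):
    """FCH: each pixel contributes its membership to every bin, so similar
    colors spread mass across neighboring bins instead of one hard bin."""
    return fuzzy_memberships(colors, centers).sum(axis=0) / len(colors)
```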
