1.
Article in English | MEDLINE | ID: mdl-38691434

ABSTRACT

This article studies an emerging practical problem called heterogeneous prototype learning (HPL). Unlike the conventional heterogeneous face synthesis (HFS) problem, which focuses on precisely translating a face image from a source domain to a target domain without removing facial variations, HPL aims at learning the variation-free prototype of an image in the target domain while preserving the identity characteristics. HPL is a compound problem involving two cross-coupled subproblems, namely domain transfer and prototype learning (PL), which makes most existing HFS methods, which simply transfer the domain style of images, unsuitable for HPL. To tackle HPL, we advocate disentangling the prototype and domain factors in their respective latent feature spaces and then replacing the source domain with the target one to generate a new heterogeneous prototype. In this way, the two subproblems of HPL can be solved jointly in a unified manner. On this basis, we propose a disentangled HPL framework, dubbed DisHPL, composed of one encoder-decoder generator and two discriminators. The generator and discriminators play adversarial games such that the generator embeds contaminated images into a prototype feature space that captures only identity information and a domain-specific feature space, while generating realistic-looking heterogeneous prototypes. Experiments on various heterogeneous datasets with diverse variations validate the superiority of DisHPL.

2.
IEEE Trans Cybern ; 54(5): 3338-3351, 2024 May.
Article in English | MEDLINE | ID: mdl-37028342

ABSTRACT

Compressive sensing (CS) techniques using a few compressed measurements have drawn considerable interest for reconstructing multispectral imagery (MSI). Nonlocal-based tensor methods have been widely used for MSI-CS reconstruction; they employ the nonlocal self-similarity (NSS) property of MSI to obtain satisfactory results. However, such methods consider only the internal priors of MSI while ignoring important external image information, for example, deep-driven priors learned from a corpus of natural image datasets. Meanwhile, they usually suffer from annoying ringing artifacts due to the aggregation of overlapping patches. In this article, we propose a novel approach for highly effective MSI-CS reconstruction using multiple complementary priors (MCPs). The proposed MCP jointly exploits nonlocal low-rank and deep image priors under a hybrid plug-and-play framework, which contains multiple pairs of complementary priors, namely, internal and external, shallow and deep, and NSS and local spatial priors. To make the optimization tractable, an alternating direction method of multipliers (ADMM) algorithm based on the alternating minimization framework is developed to solve the proposed MCP-based MSI-CS reconstruction problem. Extensive experimental results demonstrate that the proposed MCP algorithm outperforms many state-of-the-art CS techniques in MSI reconstruction. The source code of the proposed MCP-based MSI-CS reconstruction algorithm is available at: https://github.com/zhazhiyuan/MCP_MSI_CS_Demo.git.
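The ADMM solver named above alternates a data-fidelity update, a prior (proximal) update, and a dual update. As a minimal sketch of that alternating structure, here is generic ADMM for an l1-regularized least-squares toy problem (`admm_lasso` and all parameters are illustrative, not the MCP implementation):

```python
import numpy as np

def admm_lasso(A, y, lam=0.1, rho=1.0, n_iter=200):
    """Generic ADMM for min_x 0.5||Ax - y||^2 + lam||z||_1  s.t. x = z.
    Illustrates the alternating-minimization structure; a toy problem,
    not the MCP model itself."""
    m, n = A.shape
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)
    Aty = A.T @ y
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))  # factor once, reuse
    for _ in range(n_iter):
        # x-update: quadratic subproblem (data fidelity + coupling)
        x = np.linalg.solve(L.T, np.linalg.solve(L, Aty + rho * (z - u)))
        # z-update: proximal step of the l1 prior (soft-thresholding)
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)
        # dual update enforces the x = z consensus
        u = u + x - z
    return z
```

In a plug-and-play variant, the z-update's soft-thresholding would be replaced by a call to a denoiser embodying the chosen prior.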

3.
Article in English | MEDLINE | ID: mdl-37792650

ABSTRACT

Spectral computed tomography (CT) is an emerging technology that generates a multienergy attenuation map of the interior of an object, extending the traditional image volume into a 4-D form. Compared with traditional CT based on energy-integrating detectors, spectral CT can make full use of spectral information, resulting in high resolution and accurate material quantification. Numerous model-based iterative reconstruction methods have been proposed for spectral CT reconstruction. However, these methods usually suffer from difficulties such as laborious parameter selection and expensive computational costs. In addition, because of the image similarity across different energy bins, spectral CT usually implies a strong low-rank prior, which has been widely adopted in current iterative reconstruction models. Singular value thresholding (SVT) is an effective algorithm for solving low-rank constrained models. However, SVT requires a manual selection of thresholds, which may lead to suboptimal results. To relieve these problems, in this article we propose a sparse and low-rank unrolling network (SOUL-Net) for spectral CT image reconstruction that learns the parameters and thresholds in a data-driven manner. Furthermore, a Taylor expansion-based neural network backpropagation method is introduced to improve numerical stability. Qualitative and quantitative results demonstrate that the proposed method outperforms several representative state-of-the-art algorithms in terms of detail preservation and artifact reduction.
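The SVT operation criticized above for needing a manual threshold is simple to state: shrink each singular value by a fixed amount, flooring at zero. A minimal sketch (the function name and threshold are illustrative; SOUL-Net's contribution is learning the threshold rather than hand-picking it):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: the proximal operator of the
    nuclear norm. Each singular value of X is reduced by tau and
    negative results are clipped to zero, which lowers the rank."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt
```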

4.
Proc Natl Acad Sci U S A ; 120(26): e2303262120, 2023 Jun 27.
Article in English | MEDLINE | ID: mdl-37339215

ABSTRACT

Graphene nanoribbons (GNRs) are widely recognized as intriguing building blocks for high-performance electronics and catalysis owing to their unique width-dependent bandgap and the ample lone-pair electrons on both sides of the GNR, advantages over their graphene nanosheet counterpart. However, it remains challenging to mass-produce GNRs at the kilogram scale for practical applications. More importantly, the ability to intercalate nanofillers of interest within GNRs enables in-situ large-scale dispersion and retains the structural stability and properties of the nanofillers for enhanced energy conversion and storage; this, however, has yet to be largely explored. Herein, we report a rapid, low-cost freezing-rolling-capillary compression strategy to yield GNRs at a kilogram scale with tunable interlayer spacing for situating a set of functional nanomaterials for electrochemical energy conversion and storage. Specifically, GNRs are created by sequential freezing, rolling, and capillary compression of large-sized graphene oxide nanosheets in liquid nitrogen, followed by pyrolysis. The interlayer spacing of the GNRs can be conveniently regulated by tuning the amount of nanofillers of different dimensions added. As such, heteroatoms, metal single atoms, and 0D, 1D, and 2D nanomaterials can readily be intercalated in situ into the GNR matrix, producing a rich variety of functional nanofiller-dispersed GNR nanocomposites. These manifest promising performance in electrocatalysis, batteries, and supercapacitors owing to the excellent electronic conductivity, catalytic activity, and structural stability of the resulting GNR nanocomposites. The freezing-rolling-capillary compression strategy is facile, robust, and generalizable; it enables the creation of versatile GNR-derived nanocomposites with adjustable GNR interlayer spacing, thereby underpinning future advances in electronics and clean energy applications.

5.
Article in English | MEDLINE | ID: mdl-37027772

ABSTRACT

Learning generalizable feature representations is critical to few-shot image classification. While recent works exploited task-specific feature embedding using meta-tasks for few-shot learning, they are limited in many challenging tasks because they are distracted by excursive features such as the background, domain, and style of the image samples. In this work, we propose a novel disentangled feature representation (DFR) framework for few-shot learning applications. DFR can adaptively decouple the discriminative features, modeled by the classification branch, from the class-irrelevant component of the variation branch. In general, most popular deep few-shot learning methods can be plugged in as the classification branch, so DFR can boost their performance on various few-shot tasks. Furthermore, we propose a novel FS-DomainNet dataset, based on DomainNet, for benchmarking few-shot domain generalization (DG) tasks. We conducted extensive experiments to evaluate the proposed DFR on general, fine-grained, and cross-domain few-shot classification, as well as few-shot DG, using the corresponding four benchmarks, i.e., mini-ImageNet, tiered-ImageNet, Caltech-UCSD Birds 200-2011 (CUB), and the proposed FS-DomainNet. Thanks to the effective feature disentangling, the DFR-based few-shot classifiers achieved state-of-the-art results on all datasets.

6.
IEEE Trans Neural Netw Learn Syst ; 34(2): 867-881, 2023 02.
Article in English | MEDLINE | ID: mdl-34403349

ABSTRACT

Single sample per person face recognition (SSPP FR) is one of the most challenging problems in FR due to the extreme lack of enrolment data. To date, the most popular SSPP FR methods are the generic learning methods, which recognize query face images based on the so-called prototype plus variation (i.e., P+V) model. However, the classic P+V model suffers from two major limitations: 1) it linearly combines the prototype and variation images in the observational pixel-spatial space and cannot generalize to multiple nonlinear variations, e.g., poses, which are common in face images and 2) it would be severely impaired once the enrolment face images are contaminated by nuisance variations. To address the two limitations, it is desirable to disentangle the prototype and variation in a latent feature space and to manipulate the images in a semantic manner. To this end, we propose a novel disentangled prototype plus variation model, dubbed DisP+V, which consists of an encoder-decoder generator and two discriminators. The generator and discriminators play two adversarial games such that the generator nonlinearly encodes the images into a latent semantic space, where the more discriminative prototype feature and the less discriminative variation feature are disentangled. Meanwhile, the prototype and variation features can guide the generator to generate an identity-preserved prototype and the corresponding variation, respectively. Experiments on various real-world face datasets demonstrate the superiority of our DisP+V model over the classic P+V model for SSPP FR. Furthermore, DisP+V demonstrates its unique characteristics in both prototype recovery and face editing/interpolation.


Subjects
Algorithms; Neural Networks, Computer; Humans; Face; Pattern Recognition, Automated/methods
7.
IEEE Trans Neural Netw Learn Syst ; 34(10): 7593-7607, 2023 Oct.
Article in English | MEDLINE | ID: mdl-35130172

ABSTRACT

As a spotlighted nonlocal image representation model, group sparse representation (GSR) has demonstrated great potential in diverse image restoration tasks. Most existing GSR-based image restoration approaches exploit the nonlocal self-similarity (NSS) prior by clustering similar patches into groups and imposing sparsity on each group coefficient, which can effectively preserve image texture information. However, these methods impose only plain sparsity on each individual patch of the group while neglecting other beneficial image properties, e.g., low-rankness (LR), which leads to degraded image restoration results. In this article, we propose a novel low-rankness guided group sparse representation (LGSR) model for highly effective image restoration applications. The proposed LGSR jointly utilizes the sparsity and LR priors of each group of similar patches under a unified framework. The two priors serve as complementary priors in LGSR for effectively preserving the texture and structure information of natural images. Moreover, we apply an alternating minimization algorithm with an adaptively adjusted parameter scheme to solve the proposed LGSR-based image restoration problem. Extensive experiments demonstrate that the proposed LGSR achieves superior results compared with many popular or state-of-the-art algorithms in various image restoration tasks, including denoising, inpainting, and compressive sensing (CS).

8.
Micromachines (Basel) ; 13(7)2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35888847

ABSTRACT

Recent advances in machine learning, from large-scale optimization to building deep neural networks, are increasingly being applied in the emerging field of computational sensing and imaging [...].

9.
Article in English | MEDLINE | ID: mdl-35853066

ABSTRACT

While deep learning succeeds in a wide range of tasks, it depends heavily on massive collections of annotated data, which are expensive and time-consuming to obtain. To lower the cost of data annotation, active learning has been proposed to interactively query an oracle to annotate a small proportion of informative samples in an unlabeled dataset. Inspired by the fact that samples with higher loss are usually more informative to the model than samples with lower loss, in this article we present a novel deep active learning approach that queries the oracle for data annotation when an unlabeled sample is believed to incorporate high loss. The core of our approach is a measurement called temporal output discrepancy (TOD), which estimates the sample loss by evaluating the discrepancy between outputs given by models at different optimization steps. Our theoretical investigation shows that TOD lower-bounds the accumulated sample loss and thus can be used to select informative unlabeled samples. On the basis of TOD, we further develop an effective unlabeled data sampling strategy as well as an unsupervised learning criterion for active learning. Owing to the simplicity of TOD, our methods are efficient, flexible, and task-agnostic. Extensive experimental results demonstrate that our approach outperforms state-of-the-art active learning methods on image classification and semantic segmentation tasks. In addition, we show that TOD can be utilized to select, from a pool of candidate models, the model with potentially the highest testing accuracy.
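The TOD measurement can be sketched very simply, assuming the model's outputs at two optimization steps are available as arrays (function names and shapes here are illustrative, not the paper's API):

```python
import numpy as np

def temporal_output_discrepancy(out_t, out_t_later):
    """Per-sample TOD score: L2 distance between a model's outputs on the
    same samples at two different optimization steps. A larger discrepancy
    is taken as a proxy for higher sample loss."""
    return np.linalg.norm(out_t - out_t_later, axis=1)

def select_for_annotation(out_t, out_t_later, budget):
    """Query the `budget` unlabeled samples with the highest TOD."""
    scores = temporal_output_discrepancy(out_t, out_t_later)
    return np.argsort(scores)[::-1][:budget]
```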

10.
Micromachines (Basel) ; 13(4)2022 Mar 31.
Article in English | MEDLINE | ID: mdl-35457869

ABSTRACT

X-ray imaging machines are widely used at border control checkpoints and in public transportation for luggage scanning and inspection. Recent advances in deep learning have enabled automatic object detection of X-ray imaging results, largely reducing labor costs. Compared to tasks on natural images, object detection for X-ray inspection is typically more challenging, due to the varied sizes and aspect ratios of X-ray images, the random locations of small target objects within the redundant background region, etc. In practice, we show that directly applying off-the-shelf deep learning-based detection algorithms to X-ray imagery can be highly time-consuming and ineffective. To this end, we propose a Task-Driven Cropping scheme, dubbed TDC, for improving deep image detection algorithms towards efficient and effective luggage inspection via X-ray images. Instead of processing whole X-ray images for object detection, we propose a two-stage strategy that first adaptively crops X-ray images and preserves only the task-related regions, i.e., the luggage regions for security inspection. A task-specific deep feature extractor is used to rapidly identify the importance of each X-ray image pixel. Only the regions that are useful and related to the detection tasks are kept and passed to the follow-up deep detector. The varied-scale X-ray images are thus reduced to the same size and aspect ratio, which enables a more efficient deep detection pipeline. Besides, to benchmark the effectiveness of X-ray image detection algorithms, we propose a novel dataset for X-ray image detection, dubbed SIXray-D, based on the popular SIXray dataset. In SIXray-D, we provide complete and more accurate annotations of both object classes and bounding boxes, which enables model training for supervised X-ray detection methods. Our results show that the proposed TDC algorithm can effectively boost popular detection algorithms, achieving better detection mAPs or reduced run time.
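The cropping stage can be sketched as thresholding a pixel-importance map and keeping the tight bounding box of the salient region (a toy sketch under assumptions: `task_driven_crop` and the threshold are illustrative, and the real method derives the importance map from a task-specific deep feature extractor rather than taking it as given):

```python
import numpy as np

def task_driven_crop(importance, thresh=0.5):
    """Return (row0, row1, col0, col1) of the tight bounding box around
    pixels whose importance exceeds `thresh`, mimicking the idea of
    keeping only luggage regions before running the detector."""
    mask = importance > thresh
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    if rows.size == 0:          # nothing salient: keep the full frame
        return 0, importance.shape[0], 0, importance.shape[1]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1
```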

11.
IEEE Trans Med Imaging ; 41(8): 2144-2156, 2022 Aug.
Article in English | MEDLINE | ID: mdl-35235505

ABSTRACT

Spectral computed tomography (CT) reconstructs images from different spectral data through photon counting detectors (PCDs). However, due to the limited number of photons and the counting rate in the corresponding spectral segment, the reconstructed spectral images are usually affected by severe noise. In this paper, we propose a fourth-order nonlocal tensor decomposition model for spectral CT image reconstruction (FONT-SIR). To maintain the original spatial relationships among similar patches and improve the imaging quality, similar patches without vectorization are grouped in both spectral and spatial domains simultaneously to form the fourth-order processing tensor unit. The similarity of different patches is measured with the cosine similarity of latent features extracted using principal component analysis (PCA). By imposing the constraints of the weighted nuclear and total variation (TV) norms, each fourth-order tensor unit is decomposed into a low-rank component and a sparse component, which can efficiently remove noise and artifacts while preserving the structural details. Moreover, the alternating direction method of multipliers (ADMM) is employed to solve the decomposition model. Extensive experimental results on both simulated and real data sets demonstrate that the proposed FONT-SIR achieves superior qualitative and quantitative performance compared with several state-of-the-art methods.
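The patch-grouping step above, cosine similarity measured on PCA latent features, can be sketched in a simplified 2-D form (an illustration under assumptions: all names are hypothetical, and FONT-SIR actually groups fourth-order tensor units rather than flat patch vectors):

```python
import numpy as np

def pca_cosine_similarity(patches, k=3):
    """Pairwise cosine similarity between patches, computed on their
    projections onto the leading k principal components. Patches with
    similarity near 1 would be grouped into the same tensor unit."""
    X = patches.reshape(len(patches), -1).astype(float)
    X = X - X.mean(axis=0)                  # center before PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Z = X @ Vt[:k].T                        # latent features
    Zn = Z / (np.linalg.norm(Z, axis=1, keepdims=True) + 1e-12)
    return Zn @ Zn.T
```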

12.
IEEE Trans Image Process ; 31: 1311-1324, 2022.
Article in English | MEDLINE | ID: mdl-35020596

ABSTRACT

Constructing effective priors is critical to solving ill-posed inverse problems in image processing and computational imaging. Recent works focused on exploiting non-local similarity by grouping similar patches for image modeling, and demonstrated state-of-the-art results in many image restoration applications. However, compared to classic methods based on filtering or sparsity, non-local algorithms are more time-consuming, mainly due to the highly inefficient block matching step, i.e., the distance between every pair of overlapping patches needs to be computed. In this work, we propose a novel Self-Convolution operator to exploit image non-local properties in a unified framework. We prove that the proposed Self-Convolution-based formulation can generalize the commonly used non-local modeling methods, as well as produce results equivalent to standard methods, but with much cheaper computation. Furthermore, by applying Self-Convolution, we propose an effective multi-modality image restoration scheme, which is much more efficient than conventional block matching for non-local modeling. Experimental results demonstrate that (1) Self-Convolution with a fast Fourier transform implementation can significantly speed up most of the popular non-local image restoration algorithms, with two-fold to nine-fold faster block matching, and (2) the proposed online multi-modality image restoration scheme outperforms competing methods in both efficiency and effectiveness on RGB-NIR images. The code for this work is publicly available at https://github.com/GuoLanqing/Self-Convolution.
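The speedup claim can be illustrated with FFTs: since ||p - q||^2 = ||p||^2 + ||q||^2 - 2<p, q>, all patch distances follow from the inner products of one patch with every window of the image, which a single FFT-based cross-correlation delivers without a sliding-window loop. A sketch of that reformulation (not the paper's implementation):

```python
import numpy as np

def patch_inner_products_fft(image, patch):
    """Inner product of `patch` with every same-size window of `image`,
    computed via one FFT-based cross-correlation. Flipping the patch
    turns circular convolution into correlation; slicing keeps only the
    positions where the patch lies fully inside the image."""
    H, W = image.shape
    ph, pw = patch.shape
    F_img = np.fft.rfft2(image)
    F_ker = np.fft.rfft2(patch[::-1, ::-1], s=(H, W))
    full = np.fft.irfft2(F_img * F_ker, s=(H, W))
    return full[ph - 1:, pw - 1:]
```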

13.
IEEE Trans Neural Netw Learn Syst ; 33(9): 4451-4465, 2022 Sep.
Article in English | MEDLINE | ID: mdl-33625989

ABSTRACT

Recent works on structural sparse representation (SSR), which exploit the image nonlocal self-similarity (NSS) prior by grouping similar patches for processing, have demonstrated promising performance in various image restoration applications. However, conventional SSR-based image restoration methods directly fit dictionaries or transforms to the internal (corrupted) image data. The trained internal models inevitably suffer from overfitting to the data corruption, generating degraded restoration results. In this article, we propose a novel hybrid structural sparsification error (HSSE) model for image restoration, which jointly exploits the image NSS prior using both internal and external image data that provide complementary information. Furthermore, we propose a general image restoration scheme based on the HSSE model, and an alternating minimization algorithm for a range of image restoration applications, including image inpainting, image compressive sensing, and image deblocking. Extensive experiments demonstrate that the proposed HSSE-based scheme outperforms many popular or state-of-the-art image restoration methods in terms of both objective metrics and visual perception.

14.
IEEE Trans Cybern ; 52(11): 12440-12453, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34161250

ABSTRACT

This article proposes a novel nonconvex structural sparsity residual constraint (NSSRC) model for image restoration, which integrates structural sparse representation (SSR) with a nonconvex sparsity residual constraint (NC-SRC). Although SSR itself is powerful for image restoration, combining the local sparsity and nonlocal self-similarity of natural images, in this work we explicitly incorporate the novel NC-SRC prior into SSR. The proposed approach provides more effective sparse modeling for natural images by applying a more flexible sparse representation scheme, leading to high-quality restored images. Moreover, an alternating minimization framework is developed to solve the proposed NSSRC-based image restoration problems. Extensive experimental results on image denoising and image deblocking validate that the proposed NSSRC achieves better results than many popular or state-of-the-art methods on several publicly available datasets.

15.
IEEE Trans Image Process ; 30: 5819-5834, 2021.
Article in English | MEDLINE | ID: mdl-34133279

ABSTRACT

Recent works utilizing deep models have achieved superior results in various image restoration (IR) applications. Such approaches are typically supervised, requiring a corpus of training images with distributions similar to the images to be recovered. On the other hand, shallow methods, which are usually unsupervised, still deliver promising performance in many inverse problems, e.g., image deblurring and image compressive sensing (CS), as they can effectively leverage the nonlocal self-similarity priors of natural images. However, most such methods are patch-based, leading to restored images with various artifacts due to naive patch aggregation, in addition to slow speed. Using either approach alone usually limits performance and generalizability in IR tasks. In this paper, we propose a joint low-rank and deep (LRD) image model, which contains a pair of triply complementary priors, namely, internal and external, shallow and deep, and non-local and local priors. We then propose a novel hybrid plug-and-play (H-PnP) framework based on the LRD model for IR. Following this, a simple yet effective algorithm is developed to solve the proposed H-PnP-based IR problems. Extensive experimental results on several representative IR tasks, including image deblurring, image CS, and image deblocking, demonstrate that the proposed H-PnP algorithm achieves favorable performance compared to many popular or state-of-the-art IR methods in terms of both objective metrics and visual perception.

16.
IEEE Trans Image Process ; 30: 5223-5238, 2021.
Article in English | MEDLINE | ID: mdl-34010133

ABSTRACT

The image nonlocal self-similarity (NSS) property has been widely exploited via various sparsity models such as joint sparsity (JS) and group sparse coding (GSC). However, existing NSS-based sparsity models are either too restrictive, e.g., JS enforces the sparse codes to share the same support, or too general, e.g., GSC imposes only plain sparsity on the group coefficients, which limits their effectiveness for modeling real images. In this paper, we propose a novel NSS-based sparsity model, namely, low-rank regularized group sparse coding (LR-GSC), to bridge the gap between the popular GSC and JS. The proposed LR-GSC model simultaneously exploits the sparsity and low-rankness of the dictionary-domain coefficients for each group of similar patches. An alternating minimization method with an adaptively adjusted parameter strategy is developed to solve the proposed optimization problem for different image restoration tasks, including image denoising, image deblocking, image inpainting, and image compressive sensing. Extensive experimental results demonstrate that the proposed LR-GSC algorithm outperforms many popular or state-of-the-art methods in terms of objective and perceptual metrics.

17.
Cytometry A ; 99(11): 1123-1133, 2021 11.
Article in English | MEDLINE | ID: mdl-33550703

ABSTRACT

Imaging flow cytometry has become a popular technology for bioparticle image analysis because of its capability of capturing thousands of images per second. Nevertheless, the vast number of images generated by imaging flow cytometry poses great challenges for data analysis, especially when the species have similar morphologies. In this work, we report a deep learning-enabled high-throughput system for predicting Cryptosporidium and Giardia in drinking water. The system combines imaging flow cytometry with an efficient artificial neural network called MCellNet, which achieves a classification accuracy >99.6%. The system can detect Cryptosporidium and Giardia with a sensitivity of 97.37% and a specificity of 99.95%. The high-speed analysis reaches 346 frames per second, outperforming the state-of-the-art deep learning algorithm MobileNetV2 in speed (251 frames per second) with comparable classification accuracy. The reported system empowers rapid, accurate, and high-throughput bioparticle detection in clinical diagnostics, environmental monitoring, and other potential biosensing applications.
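For reference, the sensitivity and specificity figures reported above follow the standard confusion-matrix definitions (a generic sketch, not the authors' evaluation code; labels are assumed binary with 1 = target bioparticle present):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP / (TP + FN): fraction of true positives caught.
    Specificity = TN / (TN + FP): fraction of true negatives kept."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```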


Subjects
Cryptosporidiosis; Cryptosporidium; Deep Learning; Cryptosporidiosis/diagnostic imaging; Flow Cytometry; Giardia; Humans
18.
RSC Adv ; 11(29): 17603-17610, 2021 May 13.
Article in English | MEDLINE | ID: mdl-35480202

ABSTRACT

Recent deep neural networks have shown superb performance in analyzing bioimages for disease diagnosis and bioparticle classification. Conventional deep neural networks use simple classifiers such as SoftMax to obtain highly accurate results. However, they have limitations in many practical applications that require both a low false alarm rate and a high recovery rate, e.g., rare bioparticle detection, in which representative image data are hard to collect, the training data are imbalanced, and the input images at inference time may differ from the training images. Deep metric learning offers better generalizability by using distance information to model the similarity of the images and learning mappings from image pixels to a latent space, playing a vital role in rare object detection. In this paper, we propose a robust model based on a deep metric neural network for rare bioparticle (Cryptosporidium or Giardia) detection in drinking water. Experimental results showed that the deep metric neural network achieved a classification accuracy of 99.86%, a precision of 98.89%, a recall of 99.16%, and a zero false alarm rate. The reported model empowers imaging flow cytometry with capabilities for biomedical diagnosis, environmental monitoring, and other biosensing applications.

19.
Article in English | MEDLINE | ID: mdl-32903181

ABSTRACT

Group sparse representation (GSR) has made great strides in image restoration, producing superior performance by employing a powerful mechanism that integrates the local sparsity and nonlocal self-similarity of images. However, due to degradation (e.g., noise, down-sampling, or missing pixels), traditional GSR models may fail to faithfully estimate the sparsity of each group in an image, resulting in a distorted reconstruction of the original image. This motivates us to design a simple yet effective model to address this problem. Specifically, we propose a group sparsity residual constraint with nonlocal priors (GSRC-NLP) for image restoration. By introducing the group sparsity residual constraint, the image restoration problem is redefined and simplified as one of reducing the group sparsity residual. Towards this end, we first obtain a good estimate of the group sparse coefficient of each original image group by exploiting the image nonlocal self-similarity (NSS) prior along with a self-supervised learning scheme, and then the group sparse coefficient of the corresponding degraded image group is enforced to approximate this estimate. To make the proposed scheme tractable and robust, two algorithms, i.e., iterative shrinkage/thresholding (IST) and the alternating direction method of multipliers (ADMM), are employed to solve the proposed optimization problems for different image restoration tasks. Experimental results on image denoising, image inpainting, and image compressive sensing (CS) recovery demonstrate that the proposed GSRC-NLP-based image restoration algorithm is comparable to state-of-the-art denoising methods and outperforms several state-of-the-art image inpainting and image CS recovery methods in terms of both objective and perceptual quality metrics.
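The IST solver named above alternates a gradient step on the data-fidelity term with a soft-thresholding (shrinkage) step. A generic sketch for an l1 sparse-coding subproblem (illustrative names and parameters; not the paper's exact group formulation):

```python
import numpy as np

def ista(D, y, lam=0.05, n_iter=300):
    """Iterative shrinkage/thresholding for
    min_x 0.5||y - Dx||^2 + lam||x||_1."""
    L = np.linalg.norm(D, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x + (D.T @ (y - D @ x)) / L     # gradient step on the data term
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # shrinkage
    return x
```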

20.
Article in English | MEDLINE | ID: mdl-32822296

ABSTRACT

By exploiting the image nonlocal self-similarity (NSS) prior, clustering similar patches to construct patch groups, recent studies have revealed that structural sparse representation (SSR) models can achieve promising performance in various image restoration tasks. However, most existing SSR methods exploit the NSS prior only from the input degraded (internal) image, and few methods utilize the NSS prior from an external clean image corpus; how to jointly exploit the NSS priors of the internal image and an external clean image corpus remains an open problem. In this paper, we propose a novel approach for image restoration that simultaneously considers internal and external nonlocal self-similarity (SNSS) priors, which offer mutually complementary information. Specifically, we first group nonlocal similar patches from images of a training corpus. Then a group-based Gaussian mixture model (GMM) learning algorithm is applied to learn an external NSS prior. We exploit the SSR model by integrating the NSS priors of both internal and external image data. An alternating minimization with an adaptive parameter adjusting strategy is developed to solve the proposed SNSS-based image restoration problems, which makes the entire algorithm more stable and practical. Experimental results on three image restoration applications, namely image denoising, deblocking, and deblurring, demonstrate that the proposed SNSS produces superior results compared to many popular or state-of-the-art methods in both objective and perceptual quality measurements.
