1.
Sensors (Basel) ; 23(1)2022 Dec 30.
Article in English | MEDLINE | ID: mdl-36617016

ABSTRACT

To date, the best-performing blind super-resolution (SR) techniques follow one of two paradigms: (A) train standard SR networks on synthetic low-resolution/high-resolution (LR-HR) pairs or (B) predict the degradations of an LR image and then use these to inform a customised SR network. Despite significant progress, methods following the former paradigm miss out on useful degradation information, while those following the latter rely on weaker SR networks that are significantly outperformed by the latest architectural advancements. In this work, we present a framework for combining any blind SR prediction mechanism with any deep SR network. We show that a single lightweight metadata insertion block together with a degradation prediction mechanism can allow non-blind SR architectures to rival or outperform state-of-the-art dedicated blind SR networks. We implement various contrastive and iterative degradation prediction schemes and show they are readily compatible with high-performance SR networks such as RCAN and HAN within our framework. Furthermore, we demonstrate our framework's robustness by successfully performing blind SR on images degraded with blurring, noise and compression. This represents the first explicit combined blind prediction and SR of images degraded with such a complex pipeline, acting as a baseline for further advancements.


Subjects
Algorithms, Data Compression
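
A minimal PyTorch sketch of the general idea in the abstract above: a small "metadata insertion" block that conditions a standard SR network's features on a predicted degradation vector. The layer sizes and the FiLM-like affine modulation are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MetadataInsertionBlock(nn.Module):
    """Modulates SR feature maps with a degradation embedding (hypothetical sizes)."""
    def __init__(self, n_feats: int = 64, meta_dim: int = 32):
        super().__init__()
        # Map the degradation vector to per-channel scale and shift parameters.
        self.to_scale = nn.Sequential(nn.Linear(meta_dim, n_feats), nn.Sigmoid())
        self.to_shift = nn.Linear(meta_dim, n_feats)

    def forward(self, feats: torch.Tensor, meta: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) features from the SR backbone (e.g. an RCAN/HAN trunk)
        # meta:  (B, meta_dim) degradation embedding from any blind predictor
        scale = self.to_scale(meta).unsqueeze(-1).unsqueeze(-1)
        shift = self.to_shift(meta).unsqueeze(-1).unsqueeze(-1)
        return feats * scale + shift

# Usage: insert the block once after the backbone's shallow feature extractor.
block = MetadataInsertionBlock(n_feats=64, meta_dim=32)
feats = torch.randn(2, 64, 48, 48)   # LR feature maps
meta = torch.randn(2, 32)            # predicted degradation embedding
out = block(feats, meta)             # same shape as feats
```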
2.
IEEE Trans Pattern Anal Mach Intell ; 42(5): 1162-1175, 2020 May.
Article in English | MEDLINE | ID: mdl-30668462

ABSTRACT

Light field imaging has recently seen a resurgence of interest due to the availability of practical light field capturing systems that offer a wide range of applications in the field of computer vision. However, capturing high-resolution light fields remains technologically challenging, since the increase in angular resolution is often accompanied by a significant reduction in spatial resolution. This paper describes a learning-based spatial light field super-resolution method that allows the restoration of the entire light field with consistency across all angular views. The algorithm first uses optical flow to align the light field and then reduces its angular dimension using low-rank approximation. We then consider the linearly independent columns of the resulting low-rank model as an embedding, which is restored using a deep convolutional neural network (DCNN). The super-resolved embedding is then used to reconstruct the remaining views. The original disparities are restored using inverse warping, where missing pixels are approximated using a novel light field inpainting algorithm. Experimental results show that the proposed method outperforms existing light field super-resolution algorithms, achieving PSNR gains of 0.23 dB over the second-best-performing method. The performance is shown to be further improved using iterative back-projection as a post-processing step.
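
A minimal NumPy sketch of the low-rank step described in the abstract above: aligned light field views are flattened into the columns of a matrix, a rank-k approximation is taken, and only the k basis columns (the "embedding") would then be super-resolved by a CNN. The truncated-SVD factorisation and the chosen rank are illustrative assumptions, not the paper's exact optimisation.

```python
import numpy as np

def low_rank_embedding(views: np.ndarray, rank: int = 4):
    """views: (n_views, H, W) aligned light field views."""
    n, h, w = views.shape
    M = views.reshape(n, -1).T                 # (H*W, n_views), one view per column
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    basis = U[:, :rank] * s[:rank]             # embedding columns to be super-resolved
    coeffs = Vt[:rank, :]                      # per-view reconstruction weights
    return basis.reshape(h, w, rank), coeffs

def reconstruct_views(sr_basis: np.ndarray, coeffs: np.ndarray):
    """Rebuild all views from the (super-resolved) basis and the stored weights."""
    h, w, rank = sr_basis.shape
    M = sr_basis.reshape(-1, rank) @ coeffs    # (H*W, n_views)
    return M.T.reshape(coeffs.shape[1], h, w)

views = np.random.rand(25, 64, 64)             # a 5x5 aligned light field
basis, coeffs = low_rank_embedding(views, rank=4)
approx = reconstruct_views(basis, coeffs)       # low-rank approximation of the input
```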

3.
IEEE Trans Image Process ; 26(9): 4562-4577, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28641259

ABSTRACT

Most face super-resolution methods assume that low- and high-resolution manifolds have similar local geometrical structure, and hence learn local models on the low-resolution manifold (e.g., sparse or locally linear embedding models), which are then applied on the high-resolution manifold. However, the low-resolution manifold is distorted by the one-to-many relationship between low- and high-resolution patches. This paper presents the Linear Model of Coupled Sparse Support (LM-CSS) method, which learns linear models based on the local geometrical structure on the high-resolution manifold rather than on the low-resolution manifold. For this, in a first step, the low-resolution patch is used to derive a globally optimal estimate of the high-resolution patch. The approximated solution is shown to be close in Euclidean space to the ground truth, but is generally smooth and lacks the texture details needed by state-of-the-art face recognizers. Unlike existing methods, the sparse support that best estimates this first approximation is found on the high-resolution manifold. The derived support is then used to extract the atoms from the coupled low- and high-resolution dictionaries that are most suitable for learning an up-scaling function for every facial region. The proposed solution was also extended to compute face super-resolution of non-frontal images. Extensive experimental results conducted on a total of 1830 facial images show that the proposed method outperforms seven face super-resolution methods and a state-of-the-art cross-resolution face recognition method in terms of both quality and recognition.
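
A minimal NumPy sketch of the coupled-dictionary idea in the abstract above: given a first estimate of the HR patch, (i) find its sparse support on the HR dictionary and (ii) learn a local up-scaling map from the correspondingly coupled LR atoms. The greedy (OMP-style) support selection and the least-squares mapping are illustrative assumptions, not the exact LM-CSS steps.

```python
import numpy as np

def select_support(hr_estimate, D_hr, k=5):
    """Greedy selection of k HR-dictionary atoms that best represent the estimate."""
    residual, support = hr_estimate.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(D_hr.T @ residual)))   # most correlated atom
        support.append(j)
        A = D_hr[:, support]
        coef, *_ = np.linalg.lstsq(A, hr_estimate, rcond=None)
        residual = hr_estimate - A @ coef
    return support

def local_upscaling_map(support, D_lr, D_hr):
    """Least-squares map from coupled LR atoms to HR atoms on the chosen support."""
    A_lr, A_hr = D_lr[:, support], D_hr[:, support]
    # W maps an LR patch to an HR patch: W @ lr_patch ~ hr_patch
    return A_hr @ np.linalg.pinv(A_lr)

rng = np.random.default_rng(0)
D_lr = rng.standard_normal((36, 256))          # coupled LR dictionary (hypothetical sizes)
D_hr = rng.standard_normal((144, 256))         # coupled HR dictionary
hr_estimate = rng.standard_normal(144)         # first global estimate of the HR patch
support = select_support(hr_estimate, D_hr, k=5)
W = local_upscaling_map(support, D_lr, D_hr)
hr_patch = W @ rng.standard_normal(36)         # apply to the observed LR patch
```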
