Results 1 - 14 of 14
1.
Sensors (Basel) ; 21(7)2021 Mar 28.
Article in English | MEDLINE | ID: mdl-33800532

ABSTRACT

Hyperspectral image (HSI) super-resolution (SR) is a challenging task due to its ill-posed nature, and has attracted extensive attention from the research community. Previous methods concentrated on leveraging various hand-crafted image priors of a latent high-resolution hyperspectral (HR-HS) image to regularize the degradation model of the observed low-resolution hyperspectral (LR-HS) and HR-RGB images. Different optimization strategies for searching for a plausible solution, which usually lead to limited reconstruction performance, were also exploited. Recently, deep-learning-based methods have evolved to automatically learn the abundant image priors in a latent HR-HS image and have made great progress in HS image super-resolution. However, current deep-learning methods face difficulties in designing more complicated and deeper neural network architectures to boost performance, and they require large-scale training triplets, such as the LR-HS, HR-RGB, and their corresponding HR-HS images, for network training, which significantly limits their applicability to real scenarios. In this work, a deep unsupervised fusion-learning framework is proposed that generates a latent HR-HS image using only the observed LR-HS and HR-RGB images, without any previously prepared training triplets. Based on the fact that a convolutional neural network architecture is capable of capturing a large number of low-level statistics (priors) of images, the underlying priors of spatial structures and spectral attributes in a latent HR-HS image are learned automatically using only its corresponding degraded observations. Specifically, the parameter space of a generative neural network is investigated for learning the required HR-HS image that minimizes the reconstruction errors of the observations according to the mathematical relations between the data. Moreover, special convolutional layers for approximating the degradation operations between the observations and the latent HR-HS image are specifically designed to construct an end-to-end unsupervised learning framework for HS image super-resolution. Experiments on two benchmark HS datasets, CAVE and Harvard, demonstrate that the proposed method is capable of producing very promising results, even under a large upscaling factor, and that it outperforms other unsupervised state-of-the-art methods by a large margin, manifesting its superiority and efficiency.
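
As a rough illustration of the two degradation operations that such an unsupervised reconstruction loss relies on, the sketch below (an assumption-level example, not the authors' released code) models spatial degradation as a learnable per-band strided convolution and spectral degradation as a 1x1 convolution approximating the camera spectral response; the channel count and scale factor are placeholders.

```python
import torch.nn as nn
import torch.nn.functional as F

class Degradation(nn.Module):
    def __init__(self, n_bands=31, scale=8):
        super().__init__()
        # Spatial degradation: per-band blur + downsampling, producing the LR-HS estimate.
        self.blur = nn.Conv2d(n_bands, n_bands, kernel_size=scale, stride=scale,
                              groups=n_bands, bias=False)
        # Spectral degradation: 1x1 convolution approximating the camera spectral
        # response, producing the HR-RGB estimate.
        self.srf = nn.Conv2d(n_bands, 3, kernel_size=1, bias=False)

    def forward(self, hr_hs):
        return self.blur(hr_hs), self.srf(hr_hs)

def recon_loss(hr_hs, lr_hs_obs, hr_rgb_obs, degrade):
    # Unsupervised objective: the generated HR-HS image must reproduce both observations.
    lr_hs_est, hr_rgb_est = degrade(hr_hs)
    return F.l1_loss(lr_hs_est, lr_hs_obs) + F.l1_loss(hr_rgb_est, hr_rgb_obs)
```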

2.
Opt Express ; 27(21): 30502-30516, 2019 Oct 14.
Article in English | MEDLINE | ID: mdl-31684297

ABSTRACT

The spectral reflectance of objects provides intrinsic information on material properties that has proven beneficial in a diverse range of applications, such as remote sensing, agriculture, and diagnostic medicine. Existing methods for spectral reflectance recovery from RGB or monochromatic images either ignore the effect of the illumination or implement/optimize the illumination under a linear representation assumption of the spectral reflectance. In this paper, we present a simple and efficient convolutional neural network (CNN)-based spectral reflectance recovery method with optimal illuminations. Specifically, we design an illumination optimization layer to optimally multiplex illumination spectra in a given dataset or to design the optimal one under physical restrictions. Meanwhile, we develop a nonlinear representation for spectral reflectance in a data-driven way and jointly optimize the illuminations under this representation in a CNN-based end-to-end architecture. Experimental results on both synthetic and real data show that our method outperforms state-of-the-art methods and verify the advantages of deeply optimized illumination and nonlinear representation of the spectral reflectance.
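
The image-formation relation underlying illumination optimization can be sketched as follows (a minimal numpy illustration under simplified assumptions, with placeholder spectra, not the paper's network): the captured RGB value is the camera sensitivity applied to the illumination-weighted reflectance, and the illumination spectrum becomes a trainable quantity.

```python
import numpy as np

n_wl = 31                              # number of sampled wavelengths (assumed)
cam = np.random.rand(3, n_wl)          # camera spectral sensitivities (placeholder)
illum = np.random.rand(n_wl)           # illumination spectrum to be optimized
reflectance = np.random.rand(n_wl)     # per-pixel spectral reflectance

rgb = cam @ (illum * reflectance)      # simulated 3-channel measurement
# A recovery CNN learns the nonlinear inverse mapping rgb -> reflectance, while the
# illumination-optimization layer treats `illum` as trainable weights under physical
# restrictions (e.g., nonnegativity).
```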

3.
Opt Lett ; 44(22): 5646-5649, 2019 Nov 15.
Article in English | MEDLINE | ID: mdl-31730128

ABSTRACT

Due to the latest progress in image sensor manufacturing technology, the emergence of a sensor equipped with an RGGB Bayer filter and a directional polarizing filter has brought significant advantages to computer vision tasks where both RGB and polarization information are required. In this regard, joint chromatic and polarimetric image demosaicing is indispensable. However, as this is a new type of array pattern, there is no dedicated method for this challenging task. In this Letter, we collect, to the best of our knowledge, the first chromatic-polarization dataset and propose a chromatic-polarization demosaicing network (CPDNet) to address this joint chromatic and polarimetric image demosaicing problem. The proposed CPDNet is composed of residual blocks and a multi-task structure with a customized loss function. The experimental results show that our proposed method is capable of faithfully recovering full 12-channel chromatic and polarimetric information for each pixel from a single mosaic image, in terms of both quantitative measures and visual quality.
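
A multi-task loss in the spirit described above might look like the hedged sketch below; the 12-channel layout (4 polarization angles x RGB), the per-term weights, and the auxiliary intensity term are illustrative assumptions, not CPDNet's actual loss.

```python
import torch.nn.functional as F

def multitask_loss(pred, target, w_color=1.0, w_intensity=1.0):
    # pred/target: (N, 12, H, W), assumed ordered as 4 polarization angles x RGB.
    color_term = F.l1_loss(pred, target)
    # The average over the four angles is proportional to the total intensity S0;
    # penalizing it separately is one possible auxiliary task.
    s0_pred = pred.view(pred.size(0), 4, 3, *pred.shape[2:]).mean(dim=1)
    s0_tgt = target.view(target.size(0), 4, 3, *target.shape[2:]).mean(dim=1)
    return w_color * color_term + w_intensity * F.l1_loss(s0_pred, s0_tgt)
```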

4.
Sensors (Basel) ; 19(24)2019 Dec 07.
Article in English | MEDLINE | ID: mdl-31817912

ABSTRACT

Hyperspectral imaging is capable of acquiring rich spectral information about a scene and has great potential for understanding the characteristics of different materials in applications ranging from remote sensing to medical imaging. However, due to hardware limitations, existing hyper-/multi-spectral imaging devices usually cannot obtain high spatial resolution. This study aims to generate a high resolution hyperspectral image from the available low resolution hyperspectral and high resolution RGB images. We propose a novel hyperspectral image super-resolution method via non-negative sparse representation of reflectance spectra with a data-guided sparsity constraint. The proposed method first learns the hyperspectral dictionary from the low resolution hyperspectral image and then transforms it into an RGB dictionary with the camera response function, which is determined by the physical properties of the RGB camera. Given the RGB vector and the RGB dictionary, the sparse representation of each pixel in the high resolution image is calculated with the guidance of a sparsity map, which measures pixel material purity. The sparsity map is generated by analyzing the local content similarity of the pixel of interest in the available high resolution RGB image and quantifying the spectral mixing degree, motivated by the fact that the spectrum of a pure material should have a sparse representation in the spectral dictionary. Since the proposed method adaptively adjusts the sparsity of the spectral representation based on the local content of the available high resolution RGB image, it can produce a more robust spectral representation for recovering the target high resolution hyperspectral image. Comprehensive experiments on two public hyperspectral datasets and three real remote sensing images validate that the proposed method achieves promising performance compared to existing state-of-the-art methods.
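
Schematically, the data-guided sparse coding step can be written per pixel as (our notation, not the paper's exact objective)

\min_{\mathbf{a}_p \ge 0} \; \|\mathbf{y}_p - \mathbf{D}_{\mathrm{rgb}}\,\mathbf{a}_p\|_2^2 + \lambda(s_p)\,\|\mathbf{a}_p\|_1, \qquad \hat{\mathbf{z}}_p = \mathbf{D}_{\mathrm{hs}}\,\mathbf{a}_p,

where \mathbf{y}_p is the HR-RGB pixel, \mathbf{D}_{\mathrm{rgb}} the RGB-projected dictionary, s_p the sparsity-map value that scales the penalty according to the estimated material purity, \mathbf{D}_{\mathrm{hs}} the learned hyperspectral dictionary, and \hat{\mathbf{z}}_p the recovered high resolution spectrum.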

5.
IEEE Trans Pattern Anal Mach Intell ; 45(12): 15445-15461, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37651493

ABSTRACT

Spectral photoacoustic imaging (PAI) is a new technology that is able to provide the 3D geometric structure together with 1D wavelength-dependent absorption information of the interior of a target in a non-invasive manner. It has potentially broad applications in clinical and medical diagnosis. Unfortunately, the usability of spectral PAI is severely affected by a time-consuming data scanning process and complex noise. Therefore, in this study, we propose a reliability-aware restoration framework to recover clean 4D data from incomplete and noisy observations. To the best of our knowledge, this is the first attempt at the 4D spectral PA data restoration problem that solves data completion and denoising simultaneously. We first present a sequence of analyses, including modeling of data reliability in the depth and spectral domains, developing an adaptive correlation graph, and analyzing local patch orientation. On the basis of these analyses, we exploit global sparsity and local self-similarity for restoration. We demonstrate the effectiveness of our approach through experiments on real data captured from patients, where it outperforms state-of-the-art methods in both objective evaluation and subjective assessment.
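
As a hedged schematic of an objective that couples completion and denoising in this spirit (our notation, not the paper's formulation):

\min_{\mathcal{X}} \; \|\mathcal{W} \odot \mathcal{M} \odot (\mathcal{X} - \mathcal{Y})\|_F^2 + \lambda_1 \|\Psi(\mathcal{X})\|_1 + \lambda_2 \sum_{g} \|\mathcal{P}_g(\mathcal{X})\|_*,

where \mathcal{Y} is the incomplete, noisy 4D observation, \mathcal{M} the sampling mask from the accelerated scan, \mathcal{W} reliability weights in the depth and spectral domains, \Psi a sparsifying transform expressing global sparsity, and \mathcal{P}_g groups of similar local patches whose nuclear norm encodes local self-similarity.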

6.
IEEE Trans Pattern Anal Mach Intell ; 44(11): 8520-8537, 2022 Nov.
Article in English | MEDLINE | ID: mdl-34375279

ABSTRACT

Enhancing visibility in extreme low-light environments is a challenging task. Under nearly lightless conditions, existing image denoising methods can easily break down due to the significantly low SNR. In this paper, we systematically study the noise statistics in the imaging pipeline of CMOS photosensors and formulate a comprehensive noise model that can accurately characterize the real noise structures. Our model considers the noise sources caused by digital camera electronics, which are largely overlooked by existing methods yet have a significant influence on raw measurements in the dark. It provides a way to decouple the intricate noise structure into different statistical distributions with physical interpretations. Moreover, our noise model can be used to synthesize realistic training data for learning-based low-light denoising algorithms. In this regard, although promising results have been shown recently with deep convolutional neural networks, their success heavily depends on abundant noisy-clean image pairs for training, which are tremendously difficult to obtain in practice, and generalizing the trained models to images from new devices is also problematic. Extensive experiments on multiple low-light denoising datasets, including a newly collected one in this work covering various devices, show that a deep neural network trained with our proposed noise formation model can reach surprisingly high accuracy. The results are on par with, and sometimes even better than, those of training with paired real data, opening a new door to real-world extreme low-light photography.
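
The noise-synthesis idea can be illustrated with the rough sketch below (a generic physics-inspired example with placeholder parameters, not the paper's exact formulation): a clean raw image is degraded by signal-dependent shot noise plus several electronics-related terms.

```python
import numpy as np

def synthesize_noisy_raw(clean_raw, gain=4.0, read_std=2.0, row_std=0.5,
                         quant_step=1.0, rng=np.random.default_rng(0)):
    # clean_raw: 2D raw image in digital numbers, assumed nonnegative.
    photons = clean_raw / gain
    shot = rng.poisson(photons) * gain                        # photon shot noise
    read = rng.normal(0.0, read_std, clean_raw.shape)         # read (electronics) noise
    row = rng.normal(0.0, row_std, (clean_raw.shape[0], 1))   # banding noise, shared per row
    quant = rng.uniform(-quant_step / 2, quant_step / 2, clean_raw.shape)  # quantization
    return shot + read + row + quant
```

Pairs of (clean_raw, synthesize_noisy_raw(clean_raw)) can then stand in for captured noisy-clean pairs when training a denoiser.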

7.
IEEE Trans Pattern Anal Mach Intell ; 44(12): 9464-9476, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34818188

ABSTRACT

Optical flow estimation in low-light conditions is a challenging task for existing methods, and current optical flow datasets lack low-light samples. Even if dark images are enhanced before estimation, which can greatly improve visual perception, the optical flow results remain suboptimal because information such as motion consistency may be destroyed during enhancement. We propose a novel training policy to learn optical flow directly from new synthetic and real low-light images. Specifically, we first design a method to collect a new optical flow dataset in multiple exposures with shared optical flow pseudo labels. We then apply a two-step process to create a synthetic low-light optical flow dataset, based on an existing bright one, by simulating low-light raw features from the multi-exposure raw images we collected. To extend the data diversity, we also include published low-light raw videos without optical flow labels. In our training pipeline, with the three datasets, we create two teacher-student pairs to progressively obtain optical flow labels for all data. Finally, we apply a mix-up training policy with our diversified datasets to produce low-light-robust optical flow models for release. The experiments show that our method largely maintains optical flow accuracy as the image exposure decreases, and its generalization ability is tested with different cameras in multiple practical scenes.
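
The mixed training draw over the three data sources can be sketched as below; the dataset objects, the teacher interface, and the sampling probabilities are hypothetical placeholders, not the released pipeline.

```python
import random

def sample_batch(synthetic_ds, multi_exposure_ds, unlabeled_ds, teacher, p=(0.4, 0.4, 0.2)):
    source = random.choices(["synthetic", "multi_exposure", "unlabeled"], weights=p)[0]
    if source == "synthetic":
        frames, flow = synthetic_ds.sample()        # labels inherited from the bright dataset
    elif source == "multi_exposure":
        frames, flow = multi_exposure_ds.sample()   # pseudo labels shared across exposures
    else:
        frames = unlabeled_ds.sample()
        flow = teacher(frames)                      # teacher-student pseudo label
    return frames, flow
```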

8.
IEEE Trans Pattern Anal Mach Intell ; 44(1): 256-272, 2022 Jan.
Article in English | MEDLINE | ID: mdl-32750820

ABSTRACT

Hyperspectral image (HSI) recovery from a single RGB image has attracted much attention, and its performance has recently been shown to be sensitive to the camera spectral response (CSR). In this paper, we present an efficient convolutional neural network (CNN) based method, which can jointly select the optimal CSR from a candidate dataset and learn a mapping to recover the HSI from a single RGB image captured with this algorithmically selected camera under multi-chip or single-chip setups. Given a specific CSR, we first present an HSI recovery network, which accounts for the underlying characteristics of the HSI, including spectral nonlinear mapping and spatial similarity. We then append a CSR selection layer onto the recovery network, so that the optimal CSR under both multi-chip and single-chip setups can be automatically determined from the network weights under a nonnegative sparse constraint. Experimental results on three hyperspectral datasets and two camera spectral response datasets demonstrate that our HSI recovery network outperforms state-of-the-art methods in terms of both quantitative metrics and perceptual quality, and that the selection layer always returns a CSR consistent with the best one determined by exhaustive search. Finally, we show that our method also performs well in a real capture system, and we collect a hyperspectral flower dataset to evaluate the effect of HSI recovery on a classification task.
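
A CSR selection layer of this kind can be sketched as follows (an illustrative assumption-level example, not the paper's implementation): each candidate CSR projects the HSI to RGB, and a nonnegative weight vector, regularized toward sparsity, learns to pick one candidate.

```python
import torch
import torch.nn as nn

class CSRSelection(nn.Module):
    def __init__(self, csr_bank):                    # csr_bank: (n_candidates, 3, n_bands)
        super().__init__()
        self.register_buffer("csr_bank", csr_bank)
        self.logits = nn.Parameter(torch.zeros(csr_bank.size(0)))

    def forward(self, hsi):                          # hsi: (N, n_bands, H, W)
        w = torch.relu(self.logits)                  # nonnegative selection weights
        csr = torch.einsum("c,ckb->kb", w, self.csr_bank)    # weighted CSR, (3, n_bands)
        rgb = torch.einsum("kb,nbhw->nkhw", csr, hsi)        # projected RGB image
        return rgb, w.sum()                          # second output: L1 term for sparsity
```

Adding the returned L1 term to the recovery loss pushes the weights toward a single dominant candidate, which can be read off after training as the selected CSR.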

9.
IEEE Trans Pattern Anal Mach Intell ; 43(8): 2611-2622, 2021 Aug.
Article in English | MEDLINE | ID: mdl-32078532

ABSTRACT

This paper introduces a novel depth recovery method based on light absorption in water. Water absorbs light at almost all wavelengths, with an absorption coefficient that depends on the wavelength. Based on the Beer-Lambert model, we introduce a bispectral depth recovery method that leverages the difference in light absorption between two near-infrared wavelengths captured with a distant point source and orthographic cameras. Through extensive analysis, we show that accurate depth can be recovered irrespective of the surface texture and reflectance, and we introduce algorithms to correct for the nonidealities of a practical implementation, including tilted light-source and camera placement, nonideal bandpass filters, and the perspective effect of a camera with a diverging point light source. We construct a coaxial bispectral depth imaging system using low-cost off-the-shelf hardware and demonstrate its use for recovering the shapes of complex and dynamic objects in water. We also present a trispectral variant to further improve robustness to extremely challenging surface reflectance. Experimental results validate the theory and practical implementation of this novel depth recovery paradigm, which we refer to as shape from water.
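
Under the idealized setting (orthographic camera, distant source, and reflectance nearly equal at the two nearby near-infrared wavelengths), the Beer-Lambert model gives, in our notation,

I_{\lambda_i}(x) = s\,\rho(x)\,e^{-\alpha_{\lambda_i} d(x)}, \quad i = 1, 2 \qquad\Rightarrow\qquad d(x) = \frac{\ln I_{\lambda_1}(x) - \ln I_{\lambda_2}(x)}{\alpha_{\lambda_2} - \alpha_{\lambda_1}},

where s is the source intensity, \rho the surface reflectance, \alpha_{\lambda} the wavelength-dependent absorption coefficient of water, and d the water path length. The ratio cancels s and \rho, which is why depth can be recovered irrespective of texture and reflectance; the correction algorithms mentioned above handle departures from these idealized assumptions.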

10.
IEEE Trans Image Process ; 30: 7280-7291, 2021.
Article in English | MEDLINE | ID: mdl-34403344

ABSTRACT

Since specular reflection often exists in real captured images and causes a deviation between the recorded color and the intrinsic color, specular reflection separation can benefit multiple applications that require a consistent object surface appearance. However, because the color of an object is significantly influenced by the color of the illumination, existing research still suffers from the near-duplicate challenge, that is, the separation becomes unstable when the illumination color is close to the surface color. In this paper, we derive a polarization guided model that incorporates polarization information into a designed iterative optimization strategy to separate the specular reflection. Based on an analysis of polarization, this model generates a polarization chromaticity image, which is able to reveal the geometric profile of the input image in complex scenarios, e.g., under diverse illumination. The polarization chromaticity image can accurately cluster pixels with similar diffuse color. We further use the specular separation of all these clusters as an implicit prior to ensure that the diffuse component will not be mistakenly separated as the specular component. With the polarization guided model, we reformulate specular reflection separation as a unified optimization function that can be solved by an ADMM strategy, and the specular reflection is detected and separated jointly from RGB and polarimetric information. Both qualitative and quantitative experimental results show that our method can faithfully separate the specular reflection, especially in some challenging scenarios.
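
A minimal sketch of the polarimetric quantities such a method builds on (our notation, not the paper's exact definitions): the linear Stokes parameters and the degree of linear polarization computed from the four polarizer-angle images, from which a polarization-guided chromaticity image can be derived.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    # i0..i135: intensity images captured behind polarizers at 0, 45, 90, 135 degrees.
    s0 = 0.5 * (i0 + i45 + i90 + i135)                       # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-8)     # degree of linear polarization
    return s0, s1, s2, dolp
```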

11.
IEEE Trans Image Process ; 30: 4171-4182, 2021.
Article in English | MEDLINE | ID: mdl-33822723

ABSTRACT

The emergence of the single-chip polarized color sensor now allows chromatic and polarimetric information of a scene to be captured simultaneously on a monochromatic image plane. However, unlike the usual camera with an embedded demosaicing method, the latest polarized color camera is not delivered with an in-built demosaicing tool. For demosaicing, users have to down-sample the captured images or use traditional interpolation techniques, and neither performs well since polarization and color are interdependent. Therefore, joint chromatic and polarimetric demosaicing is the key to obtaining high-quality polarized color images. In this paper, we propose a joint chromatic and polarimetric demosaicing model to address this challenging problem. Instead of mechanically demosaicing the multi-channel polarized color image, we present a sparse representation-based optimization strategy that utilizes chromatic and polarimetric information to jointly optimize the model. To avoid interaction between color and polarization during demosaicing, we construct the corresponding dictionaries separately. We also build an optical data acquisition system to collect a dataset that contains various sources of polarization, such as illumination, reflectance, and birefringence. Both qualitative and quantitative experimental results show that our method is capable of faithfully recovering full RGB information at four polarization angles for each pixel from a single mosaic input image. Moreover, the proposed method performs well not only on synthetic data but also on real captured data.
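
Schematically, a joint demosaicing objective with separately constructed dictionaries can be written as (our notation, not the paper's exact formulation)

\min_{\mathbf{x},\,\boldsymbol{\alpha}_c,\,\boldsymbol{\alpha}_p} \; \|\mathbf{m} - \mathbf{S}\mathbf{x}\|_2^2 + \mu\big(\|\mathbf{x} - \mathbf{D}_c\boldsymbol{\alpha}_c\|_2^2 + \|\mathbf{x} - \mathbf{D}_p\boldsymbol{\alpha}_p\|_2^2\big) + \lambda_c\|\boldsymbol{\alpha}_c\|_1 + \lambda_p\|\boldsymbol{\alpha}_p\|_1,

where \mathbf{m} is the raw mosaic measurement, \mathbf{S} the sampling operator of the polarized color filter array, \mathbf{x} the full multi-channel polarized color image, and \mathbf{D}_c, \mathbf{D}_p the chromatic and polarimetric dictionaries constructed separately to avoid interaction between color and polarization.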

12.
IEEE Trans Pattern Anal Mach Intell ; 43(1): 48-61, 2021 Jan.
Article in English | MEDLINE | ID: mdl-31295106

ABSTRACT

This paper presents a precise, stable, and invertible reflectance model for photometric stereo. This microfacet-based model is applicable to all types of isotropic surface reflectance, covering cases from diffuse to specular reflection. We introduce a single variable to physically quantify the surface smoothness; by monotonically sliding this variable between 0 and 1, our model provides a versatile representation that smoothly transforms between an ellipsoid of revolution and the equation for Lambertian reflectance. In the inverse domain, the model offers a compact and physically interpretable formulation, for which we introduce a fast and lightweight solver that allows accurate estimation of both surface smoothness and surface shape. Finally, extensive experiments on the appearances of synthesized and real objects show that this model, combined with our lightweight solver, achieves state-of-the-art performance.
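
Schematically (our notation, not the paper's exact formulation), such an invertible isotropic reflectance model plugs into the photometric stereo observation equation

I_j(x) = \rho(x)\, f\big(\mathbf{n}(x), \mathbf{l}_j, \mathbf{v};\, \sigma\big)\, \max\big(\mathbf{n}(x)^{\top}\mathbf{l}_j,\, 0\big),

where I_j is the intensity under the j-th calibrated light direction \mathbf{l}_j, \mathbf{n} the surface normal, \mathbf{v} the viewing direction, \rho an albedo factor, and \sigma \in [0, 1] the single smoothness variable that slides f between specular and Lambertian behavior; inverting this relation per pixel yields estimates of both \mathbf{n} and \sigma.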

13.
Article in English | MEDLINE | ID: mdl-30010566

ABSTRACT

Fusing a low-resolution hyperspectral image with the corresponding high-resolution multispectral image to obtain a high-resolution hyperspectral image is an important technique for capturing comprehensive scene information in both the spatial and spectral domains. Existing approaches adopt a sparsity-promoting strategy and encode the spectral information of each pixel independently, which results in noisy sparse representations. We propose a novel hyperspectral image super-resolution method via a self-similarity constrained sparse representation. We explore similar patch structures across the whole image and pixels with close appearance in local regions to create global-structure groups and local-spectral super-pixels. By enforcing similarity of the sparse representations for pixels belonging to the same group and super-pixel, we alleviate the effect of outliers in the learned sparse coding. Experimental results on benchmark datasets validate that the proposed method outperforms state-of-the-art methods in both quantitative metrics and visual quality.
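
A schematic form of such a self-similarity constrained sparse coding objective is (our notation, not the paper's exact formulation)

\min_{\mathbf{A}} \; \|\mathbf{Y} - \mathbf{D}\mathbf{A}\|_F^2 + \lambda\|\mathbf{A}\|_1 + \eta \sum_{g}\sum_{p, q \in g} w_{pq}\,\|\boldsymbol{\alpha}_p - \boldsymbol{\alpha}_q\|_2^2,

where \mathbf{Y} collects the high-resolution multispectral pixels, \mathbf{D} is the spectral dictionary projected to the multispectral domain, \boldsymbol{\alpha}_p is the sparse code of pixel p (a column of \mathbf{A}), and the last term encourages similar codes within each global-structure group and local-spectral super-pixel g.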

14.
Article in English | MEDLINE | ID: mdl-30010563

ABSTRACT

Recently, many hyperspectral (HS) image super-resolution methods that merge a low spatial resolution HS image and a high spatial resolution three-channel RGB image have been proposed in spectral imaging. A largely ignored fact is that most existing commercial RGB cameras capture high resolution images with a single CCD/CMOS sensor equipped with a color filter array (CFA). In this paper, we account for this common imaging mechanism of commercial RGB cameras and propose to use a mosaic RGB image for HS image super-resolution, which prevents demosaicing error and its propagation into the HS image super-resolution results. We design a nonlocal low-rank regularization to exploit the intrinsic properties of HS images of natural scenes, namely rich self-repeating patterns and high correlation across spectra, and formulate the HS image super-resolution task as a variational optimization problem, which can be efficiently solved via the alternating direction method of multipliers (ADMM). The effectiveness of the proposed method has been evaluated on two benchmark datasets, demonstrating that it provides substantial improvement over current state-of-the-art HS image super-resolution methods, which do not consider the mosaicing effect. Finally, we show that our method also performs well in a real capture system.
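
A schematic variational form of this mosaic-based fusion (our notation, not the paper's exact formulation) is

\min_{\mathbf{Z}} \; \|\mathbf{y}_{\mathrm{mos}} - \mathcal{M}(\mathbf{R}\mathbf{Z})\|_2^2 + \|\mathbf{Y}_{\mathrm{lr}} - \mathbf{Z}\mathbf{B}\mathbf{S}\|_F^2 + \lambda \sum_{g}\|\mathcal{P}_g(\mathbf{Z})\|_*,

where \mathbf{Z} is the target HR-HS image (bands x pixels), \mathbf{R} the camera spectral response, \mathcal{M} the CFA sampling operator producing the mosaic measurement \mathbf{y}_{\mathrm{mos}}, \mathbf{B} and \mathbf{S} spatial blurring and downsampling yielding the LR-HS observation \mathbf{Y}_{\mathrm{lr}}, and \mathcal{P}_g groups of similar patches whose nuclear norm implements the nonlocal low-rank regularizer; ADMM alternates among these terms.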
