Results 1 - 9 of 9
1.
Sci Rep ; 13(1): 20051, 2023 Nov 16.
Article in English | MEDLINE | ID: mdl-37973995

ABSTRACT

Global warming and pollution could lead to the destruction of marine habitats and the loss of species. The anomalous behavior of underwater creatures can serve as a bio-indicator for assessing the health of our oceans. Advances in behavior recognition have been driven by the active application of deep learning methods, yet many of them achieve superior accuracy at the cost of high computational complexity and slow inference. This paper presents a real-time anomalous behavior recognition approach that combines a lightweight deep learning model (Lite3D), object detection, and multi-target tracking. Lite3D is characterized by three features: (1) image frames contain only the regions of interest (ROI) generated by an object detector; (2) no fully connected layers are needed; the prediction head itself is a flattening layer of 1 × N @ 1 × 1, where N is the number of categories; (3) all convolution kernels are 3D, except for the first layer, which degenerates to 2D. Through tracking, a sequence of ROI-only frames is fed to the 3D convolutions for stacked feature extraction. Compared to other 3D models, Lite3D is 50 times smaller in size and 57 times lighter in trainable parameters, and it achieves a 99% F1-score. Lite3D is ideal for mounting on an ROV or AUV to perform real-time edge computing.
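A minimal PyTorch sketch of the layout the abstract describes: a first layer that degenerates to 2D, 3D convolutions afterwards, and a 1 × 1 × 1 prediction head over N categories instead of fully connected layers. The channel widths, layer count, and all names are assumptions for illustration, not the published Lite3D configuration.

import torch
import torch.nn as nn

class Lite3DSketch(nn.Module):
    def __init__(self, num_classes: int, in_channels: int = 3):
        super().__init__()
        # First layer degenerates to 2D: convolve each ROI frame independently.
        self.conv2d = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm2d(16),
            nn.ReLU(inplace=True),
        )
        # Remaining layers use 3D kernels over the stacked ROI sequence.
        self.conv3d = nn.Sequential(
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.BatchNorm3d(64),
            nn.ReLU(inplace=True),
        )
        # Prediction head: 1x1x1 conv to num_classes channels, pooled and
        # flattened rather than passed through any fully connected layer.
        self.head = nn.Conv3d(64, num_classes, kernel_size=1)
        self.pool = nn.AdaptiveAvgPool3d(1)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, time, height, width) of tracked ROI-only frames.
        b, c, t, h, w = clip.shape
        x = self.conv2d(clip.transpose(1, 2).reshape(b * t, c, h, w))
        x = x.reshape(b, t, 16, h, w).transpose(1, 2)  # back to (b, c, t, h, w)
        x = self.conv3d(x)
        return self.pool(self.head(x)).flatten(1)      # (batch, num_classes)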

2.
IEEE Trans Image Process ; 31: 6789-6799, 2022.
Article in English | MEDLINE | ID: mdl-36288229

ABSTRACT

Image motion blur results from a combination of object motion and camera shake, and the blurring effect is generally directional and non-uniform. Previous research has attempted to handle non-uniform blur using self-recurrent multi-scale, multi-patch, or multi-temporal architectures with self-attention, obtaining decent results. However, self-recurrent frameworks typically lead to longer inference times, while inter-pixel or inter-channel self-attention can cause excessive memory usage. This paper proposes a Blur-aware Attention Network (BANet) that accomplishes accurate and efficient deblurring in a single forward pass. BANet uses region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different magnitudes and orientations, and cascaded parallel dilated convolution to aggregate multi-scale content features. Extensive experimental results on the GoPro and RealBlur benchmarks demonstrate that the proposed BANet performs favorably against state-of-the-art methods in blurred image restoration and can produce deblurred results in real time.
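A hedged sketch of the cascaded parallel dilated convolution idea named in the abstract for aggregating multi-scale content features. The dilation rates, channel width, residual connection, and fusion by concatenation are assumptions, and the region-based self-attention with multi-kernel strip pooling is not reproduced here.

import torch
import torch.nn as nn

class ParallelDilatedBlock(nn.Module):
    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # Parallel 3x3 branches with increasing dilation (growing receptive field).
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.act(self.fuse(y)) + x  # residual keeps the content features

# "Cascaded": stack several parallel-dilated blocks in sequence.
cascade = nn.Sequential(*[ParallelDilatedBlock(64) for _ in range(3)])
features = torch.randn(1, 64, 128, 128)
aggregated = cascade(features)  # same shape, multi-scale context aggregated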

3.
Sensors (Basel) ; 22(6)2022 Mar 10.
Article in English | MEDLINE | ID: mdl-35336336

ABSTRACT

This work develops an underwater image enhancement method based on histogram-equalization (HE) approximation using physics-based dichromatic modeling (PDM). Images captured underwater usually suffer from low contrast and color distortion due to light scattering and attenuation. The PDM describes the image formation process and can be used to restore naturally degraded images such as underwater images; however, it does not guarantee that the restored images have good contrast. We therefore propose approximating conventional HE under the PDM, via convex optimization, to correct the color distortion of underwater images and enhance their contrast. Experimental results demonstrate that the proposed method performs favorably against state-of-the-art underwater image restoration approaches.
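For reference, a small sketch of the conventional histogram equalization that the method approximates; this is plain per-channel HE on an 8-bit image, not the paper's PDM-constrained convex optimization.

import numpy as np

def equalize_channel(channel: np.ndarray) -> np.ndarray:
    # channel: 2-D uint8 array. Map gray levels through the normalized CDF.
    hist = np.bincount(channel.ravel(), minlength=256).astype(np.float64)
    cdf = hist.cumsum() / hist.sum()
    lut = np.round(255.0 * cdf).astype(np.uint8)  # equalization lookup table
    return lut[channel]

def equalize_rgb(image: np.ndarray) -> np.ndarray:
    # Apply HE independently to each color channel (a common, simple choice;
    # the paper instead couples the channels through the physics-based model).
    return np.dstack([equalize_channel(image[..., c]) for c in range(3)])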

4.
Sensors (Basel) ; 22(3)2022 Jan 25.
Article in English | MEDLINE | ID: mdl-35161677

ABSTRACT

Bilateral filtering (BF) is an effective edge-preserving smoothing technique in image processing. However, an inherent problem of BF for image denoising is that the range kernel struggles to differentiate image noise from details, so both noise and edges are often preserved during denoising. This letter proposes a novel Dual-Histogram BF (DHBF) method that exploits an edge-preserving, noise-reduced guidance image to compute the range kernel, removing isolated noisy pixels for better denoising results. Furthermore, we approximate the spatial kernel using mean filtering based on column-histogram construction to achieve constant-time filtering regardless of the kernel radius and to achieve better smoothing. Experimental results on multiple benchmark denoising datasets show that the proposed DHBF outperforms other state-of-the-art BF methods.
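A naive sketch of the guidance idea: a joint bilateral filter whose range kernel is evaluated on a noise-reduced guidance image rather than on the noisy input, so isolated noisy pixels do not dominate the weights. The brute-force per-pixel loop below ignores the paper's constant-time column-histogram construction, and the parameter names are assumptions.

import numpy as np

def joint_bilateral(noisy, guidance, radius=3, sigma_s=2.0, sigma_r=0.1):
    h, w = noisy.shape
    out = np.zeros_like(noisy, dtype=np.float64)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))  # spatial kernel
    pad_n = np.pad(noisy, radius, mode="reflect")
    pad_g = np.pad(guidance, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            patch_n = pad_n[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            patch_g = pad_g[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            # Range kernel from the guidance image, centered on its pixel value.
            rng = np.exp(-((patch_g - guidance[i, j]) ** 2) / (2 * sigma_r ** 2))
            weights = spatial * rng
            out[i, j] = (weights * patch_n).sum() / weights.sum()
    return out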

5.
IEEE Trans Neural Netw Learn Syst ; 33(12): 7853-7862, 2022 Dec.
Article in English | MEDLINE | ID: mdl-34181551

ABSTRACT

Fusing low-dynamic-range (LDR) images into high-dynamic-range (HDR) images has gained considerable attention, especially for real-world applications where hardware resources are too limited to capture images with many different exposure times. However, generating an HDR image by picking the best parts from each LDR image often yields unsatisfactory results owing to a lack of input images or of well-exposed content. To overcome this limitation, we model the HDR image generation process in two-exposure fusion as a deep reinforcement learning problem and learn an online compensating representation that is fused with the LDR inputs to generate the HDR image. Moreover, we build a two-exposure dataset with reference HDR images from a public multi-exposure dataset that has not yet been normalized, in order to train and evaluate the proposed model. Evaluations on the built dataset show that our reinforcement HDR image generation significantly outperforms competing methods under various challenging scenarios, even with limited well-exposed content. Further experimental results on a no-reference multi-exposure image dataset demonstrate the generality and effectiveness of the proposed model. To the best of our knowledge, this is the first work to use a reinforcement-learning-based framework for an online compensating representation in two-exposure image fusion.
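A heavily hedged sketch of the framing only: a small network proposes a compensating representation from the two LDR exposures, which is then fused with them. The network, the fusion rule, and the omitted reward and training loop are placeholders, not the paper's reinforcement learning method.

import torch
import torch.nn as nn

class CompensationPolicy(nn.Module):
    # Maps the stacked short/long exposures to a compensating image in [0, 1].
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, short_exp: torch.Tensor, long_exp: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([short_exp, long_exp], dim=1))

def fuse(short_exp, long_exp, compensation):
    # Placeholder fusion: plain averaging. In the paper the compensating
    # representation is learned online, with an HDR quality measure against
    # the reference images serving as the reinforcement signal.
    return (short_exp + long_exp + compensation) / 3.0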

6.
Sensors (Basel) ; 21(16)2021 Aug 10.
Article in English | MEDLINE | ID: mdl-34450832

ABSTRACT

Complementary metal-oxide-semiconductor (CMOS) image sensors can introduce noise into images collected or transmitted in unfavorable environments, especially low-illumination scenarios. Numerous approaches have been developed to remove image noise, yet producing natural, high-quality denoised images remains a crucial challenge. To meet this challenge, we introduce a novel image denoising approach with three main contributions. First, we devise a deep-image-prior-based module that produces both a noise-reduced image and a contrast-enhanced denoised image from a noisy input. Second, the produced images are passed through a proposed image fusion (IF) module based on Laplacian pyramid decomposition, which combines them while preventing noise amplification and color shift. Finally, we introduce a progressive refinement (PR) module that adopts summed-area tables to exploit spatially correlated information for edge and image quality enhancement. Qualitative and quantitative evaluations demonstrate the efficiency, superiority, and robustness of the proposed method.
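A small sketch of the summed-area table (integral image) that the progressive refinement module is said to adopt for exploiting spatially correlated information; only the table and an O(1) box-sum query are shown, not how the paper uses them for refinement.

import numpy as np

def summed_area_table(img: np.ndarray) -> np.ndarray:
    # sat[i, j] holds the sum of img[:i+1, :j+1].
    return img.astype(np.float64).cumsum(axis=0).cumsum(axis=1)

def box_sum(sat: np.ndarray, top: int, left: int, bottom: int, right: int) -> float:
    # Sum of img[top:bottom+1, left:right+1] from at most four table lookups.
    total = sat[bottom, right]
    if top > 0:
        total -= sat[top - 1, right]
    if left > 0:
        total -= sat[bottom, left - 1]
    if top > 0 and left > 0:
        total += sat[top - 1, left - 1]
    return total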


Subject(s)
Algorithms, Image Enhancement, Signal-To-Noise Ratio
7.
Sensors (Basel) ; 22(1)2021 Dec 22.
Article in English | MEDLINE | ID: mdl-35009567

ABSTRACT

In contrast to conventional digital images, high-dynamic-range (HDR) images have a broader intensity range between the darkest and brightest regions, capturing more details of a scene. Such images are produced by fusing images of the same scene taken with different exposure values (EVs). Most existing multi-scale exposure fusion (MEF) algorithms assume that the input images are multi-exposed with small EV intervals. However, with emerging spatially multiplexed exposure technology that can capture a short-exposure and a long-exposure image simultaneously, it becomes essential to handle two-exposure image fusion. To bring out more well-exposed content, we generate a more helpful intermediate virtual image for fusion using the proposed Optimized Adaptive Gamma Correction (OAGC), which yields better contrast, saturation, and well-exposedness. Fusing the input images with the enhanced virtual image works well even when both inputs are underexposed or overexposed, a case that other state-of-the-art fusion methods cannot handle. Experimental results show that our method performs favorably against other state-of-the-art image fusion methods in generating high-quality fusion results.
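A minimal sketch of plain gamma correction used to synthesize an intermediate virtual image from the two exposures; the heuristic choice of gamma below is an assumption, and the paper's optimization of gamma for contrast, saturation, and well-exposedness (OAGC) is not reproduced.

import numpy as np

def gamma_correct(img: np.ndarray, gamma: float) -> np.ndarray:
    # img scaled to [0, 1]; gamma < 1 brightens, gamma > 1 darkens.
    return np.clip(img, 0.0, 1.0) ** gamma

def virtual_image(short_exp: np.ndarray, long_exp: np.ndarray) -> np.ndarray:
    # Average the two exposures, then pick gamma so the result's mean luminance
    # is pushed toward mid-gray (a simple stand-in for the optimized gamma).
    base = 0.5 * (short_exp + long_exp)
    gamma = np.log(0.5) / np.log(np.clip(base.mean(), 1e-3, 1.0 - 1e-3))
    return gamma_correct(base, gamma)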


Subject(s)
Algorithms
8.
IEEE Trans Image Process ; 27(6): 2856-2868, 2018 Jun.
Article in English | MEDLINE | ID: mdl-29570087

ABSTRACT

Images degraded by light scattering and absorption, such as hazy, sandstorm, and underwater images, often suffer from color distortion and low contrast because light travels through turbid media. To enhance and restore such images, we first estimate the ambient light using the depth-dependent color change. Then, by calculating the difference between the observed intensity and the ambient light, which we call the scene ambient light differential, the scene transmission can be estimated. Additionally, adaptive color correction is incorporated into the image formation model (IFM) to remove color casts while restoring contrast. Experimental results on various degraded images demonstrate that the new method outperforms other IFM-based methods both subjectively and objectively. Our approach can be interpreted as a generalization of the common dark channel prior (DCP) approach to image restoration, and it reduces to several DCP variants for special cases of ambient lighting and turbid-medium conditions.
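A hedged sketch of the restoration step implied by the image formation model I(x) = J(x) * t(x) + A * (1 - t(x)), with the abstract's scene ambient light differential read as I(x) - A. How the paper estimates the ambient light A and the transmission t, and its adaptive color correction, are not reproduced here.

import numpy as np

def restore_ifm(observed: np.ndarray, ambient: np.ndarray,
                transmission: np.ndarray, t_min: float = 0.1) -> np.ndarray:
    # observed: (H, W, 3) image in [0, 1]; ambient: (3,) light color;
    # transmission: (H, W). Invert I = J*t + A*(1 - t) as J = (I - A)/t + A.
    differential = observed - ambient                 # scene ambient light differential
    t = np.maximum(transmission, t_min)[..., None]    # clip t to avoid division blow-up
    return np.clip(differential / t + ambient, 0.0, 1.0)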

9.
IEEE Trans Image Process ; 26(4): 1579-1594, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28182556

ABSTRACT

Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.
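A rough sketch of the blurriness cue named in the abstract: the difference between an image and Gaussian-blurred copies of it, averaged over several kernel sizes, is taken as a proxy for local sharpness. The kernel sizes and the averaging are assumptions rather than the paper's estimator, and the light-absorption cue is not included.

import cv2
import numpy as np

def sharpness_map(gray: np.ndarray, kernel_sizes=(9, 17, 33)) -> np.ndarray:
    gray = gray.astype(np.float32)
    diffs = [np.abs(gray - cv2.GaussianBlur(gray, (k, k), 0)) for k in kernel_sizes]
    # Larger values mean sharper regions; their complement serves as a
    # blurriness cue that grows with distance from the camera.
    return np.mean(diffs, axis=0)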
