Results 1 - 3 of 3
1.
Sensors (Basel); 23(18), 2023 Sep 08.
Article in English | MEDLINE | ID: mdl-37765817

ABSTRACT

Images captured under poor lighting conditions often suffer from low brightness, low contrast, color distortion, and noise. The goal of low-light image enhancement is to improve the visual quality of such images for subsequent processing. With the development of artificial intelligence, deep learning has been applied increasingly widely in image processing, and we provide a comprehensive review of deep-learning-based low-light image enhancement in terms of network structure, training data, and evaluation metrics. In this paper, we cover the field in four aspects: first, we introduce the related deep-learning-based enhancement methods; we then describe low-light image quality evaluation methods; next, we organize the available low-light image datasets; and finally, we compare and analyze the advantages and disadvantages of the related methods and give an outlook on future research directions.
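
[Editor's note] To illustrate the evaluation-metric side of this survey, here is a minimal, self-contained sketch (not taken from the paper) of PSNR, one of the full-reference metrics commonly used to score enhancement results against reference images. The gamma value and the synthetic test images are arbitrary assumptions for the example.

    # Hedged sketch: PSNR, a standard full-reference quality metric for
    # low-light enhancement results. The gamma "enhancer" and random test
    # images below are illustrative assumptions, not the surveyed methods.
    import numpy as np

    def psnr(reference: np.ndarray, enhanced: np.ndarray, data_range: float = 255.0) -> float:
        """Peak signal-to-noise ratio (dB) between a reference and an enhanced image."""
        mse = np.mean((reference.astype(np.float64) - enhanced.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10((data_range ** 2) / mse)

    # Example: score a naive gamma-correction baseline against a stand-in reference.
    low_light = (np.random.rand(256, 256, 3) * 60).astype(np.uint8)                   # dark input
    reference = np.clip(low_light.astype(np.float64) * 3.0, 0, 255).astype(np.uint8)  # fake ground truth
    enhanced = (255.0 * (low_light / 255.0) ** 0.45).astype(np.uint8)                 # gamma = 0.45
    print(f"PSNR: {psnr(reference, enhanced):.2f} dB")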

2.
Front Plant Sci; 14: 1304962, 2023.
Article in English | MEDLINE | ID: mdl-38186591

ABSTRACT

Introduction: Efficient and accurate varietal classification of wheat grains is crucial for maintaining varietal purity and reducing susceptibility to pests and diseases, thereby enhancing crop yield. Traditional manual and machine learning methods for wheat grain identification often suffer from inefficiency and reliance on large models. In this study, we propose a novel classification and recognition model called SCGNet, designed for rapid and efficient wheat grain classification. Methods: Specifically, our proposed model incorporates several modules that enhance information exchange and feature multiplexing between group convolutions. This mechanism enables the network to gather feature information from each subgroup of the previous layer, facilitating effective utilization of upper-layer features. Additionally, we introduce sparsity in the channel connections between groups to further reduce computational complexity without compromising accuracy. Furthermore, we design a novel classification output layer based on 3-D convolution, replacing the traditional max pooling and fully connected layers of conventional convolutional neural networks (CNNs), which yields more efficient classification output. Results: We conduct extensive experiments on a curated wheat grain dataset, demonstrating the superior performance of the proposed method. Our approach achieves an accuracy of 99.56%, precision of 99.59%, recall of 99.55%, and an F1-score of 99.57%. Discussion: Notably, our method also has the lowest number of floating-point operations (FLOPs) and parameters, making it a highly efficient solution for wheat grain classification.
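
[Editor's note] A rough illustration of the ideas mentioned in this abstract is sketched below in PyTorch: a grouped convolution block with a channel-shuffle step so each group receives features from every subgroup of the previous layer, and a 3-D-convolution classification head in place of max pooling plus fully connected layers. The layer widths, group count, and class count are assumptions for the example, not SCGNet's actual architecture.

    # Hedged sketch of the general ideas described in the abstract; sizes and
    # group counts are illustrative assumptions, not the authors' design.
    import torch
    import torch.nn as nn

    def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
        """Interleave channels so the next group convolution sees features from every group."""
        b, c, h, w = x.shape
        x = x.view(b, groups, c // groups, h, w).transpose(1, 2).contiguous()
        return x.view(b, c, h, w)

    class ShuffledGroupBlock(nn.Module):
        """Group convolution followed by a channel shuffle for cross-group information exchange."""
        def __init__(self, channels: int, groups: int = 4):
            super().__init__()
            self.conv = nn.Conv2d(channels, channels, 3, padding=1, groups=groups)
            self.bn = nn.BatchNorm2d(channels)
            self.act = nn.ReLU(inplace=True)
            self.groups = groups

        def forward(self, x):
            return channel_shuffle(self.act(self.bn(self.conv(x))), self.groups)

    class Conv3dHead(nn.Module):
        """Classification head that collapses the channel axis with a 3-D convolution
        instead of max pooling + fully connected layers."""
        def __init__(self, channels: int, num_classes: int):
            super().__init__()
            self.reduce = nn.Conv3d(1, num_classes, kernel_size=(channels, 1, 1))

        def forward(self, x):
            x = self.reduce(x.unsqueeze(1))   # (B, num_classes, 1, H, W)
            return x.mean(dim=(2, 3, 4))      # (B, num_classes) logits

    model = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
        ShuffledGroupBlock(64), ShuffledGroupBlock(64),
        Conv3dHead(64, num_classes=10),
    )
    logits = model(torch.randn(2, 3, 128, 128))   # -> shape (2, 10)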

3.
Article in English | MEDLINE | ID: mdl-35657839

ABSTRACT

Underwater images typically suffer from color deviations and low visibility due to wavelength-dependent light absorption and scattering. To deal with these degradation issues, we propose an efficient and robust underwater image enhancement method, called MLLE. Specifically, we first locally adjust the color and details of an input image according to a minimum color loss principle and a maximum attenuation map-guided fusion strategy. Afterward, we employ integral and squared integral maps to compute the mean and variance of local image blocks, which are used to adaptively adjust the contrast of the input image. Meanwhile, a color balance strategy is introduced to balance the color differences between the a channel and the b channel in the CIELAB color space. Our enhanced results are characterized by vivid color, improved contrast, and enhanced details. Extensive experiments on three underwater image enhancement datasets demonstrate that our method outperforms state-of-the-art methods. Our method is also appealing in its fast processing speed, taking under 1 s to process a 1024×1024×3 image on a single CPU. Experiments further suggest that our method can effectively improve the performance of underwater image segmentation, keypoint detection, and saliency detection. The project page is available at https://li-chongyi.github.io/proj_MMLE.html.
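
[Editor's note] The integral-map step described above can be illustrated with a short sketch: integral and squared-integral (summed-area) maps give the mean and variance of every local block in constant time per pixel. The window radius, target standard deviation, and the toy contrast step below are assumptions for the example, not the authors' MLLE implementation.

    # Hedged sketch: local mean/variance from integral and squared-integral maps,
    # used here for a toy adaptive contrast step. Parameters are illustrative.
    import numpy as np

    def box_sum(img: np.ndarray, r: int) -> np.ndarray:
        """Sum over a (2r+1) x (2r+1) window centred on each pixel, via an integral map."""
        p = np.pad(img.astype(np.float64), r, mode="edge")
        ii = np.zeros((p.shape[0] + 1, p.shape[1] + 1))
        ii[1:, 1:] = p.cumsum(0).cumsum(1)
        k = 2 * r + 1
        return ii[k:, k:] - ii[:-k, k:] - ii[k:, :-k] + ii[:-k, :-k]

    def local_mean_var(gray: np.ndarray, r: int = 7):
        """Per-pixel local mean and variance from integral and squared-integral maps."""
        n = float((2 * r + 1) ** 2)
        mean = box_sum(gray, r) / n
        mean_sq = box_sum(gray.astype(np.float64) ** 2, r) / n
        return mean, np.maximum(mean_sq - mean ** 2, 0.0)

    # Toy adaptive contrast step: rescale each pixel's deviation from its local mean
    # so that flat regions receive a larger gain than already-contrasty ones.
    img = (np.random.rand(256, 256) * 255).astype(np.uint8)
    mean, var = local_mean_var(img)
    gain = np.clip(60.0 / (np.sqrt(var) + 1e-6), 0.5, 3.0)   # target local std of 60 (arbitrary)
    enhanced = np.clip(mean + gain * (img - mean), 0, 255).astype(np.uint8)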
