Results 1 - 3 of 3

1.
Langmuir; 39(19): 6924-6931, 2023 May 16.
Article in English | MEDLINE | ID: mdl-37129080

ABSTRACT

High-performance carbon-based supercapacitors hold broad promise among energy storage devices. In this work, wood-based hollow carbon spheres (WHCS) were prepared from liquefied wood through emulsification, curing, carbonization, and activation. Nickel sulfide (NiS) was then introduced onto the surface of the microspheres by hydrodeposition, yielding NiS/WHCS as the supercapacitor electrode. The results show that the NiS/WHCS microspheres exhibit a core-shell structure and flower-like morphology with a specific surface area of 307.55 m2 g-1 and a large total pore volume of 0.14 cm3 g-1. The specific capacitance reached 1533.6 F g-1 at a current density of 1 A g-1, and after 1000 charge/discharge cycles at an initial current density of 5 A g-1 the electrode retained 72.8% of its specific capacitance. Hence, with its excellent durability and high specific capacitance, NiS/WHCS is a promising candidate electrode material.
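For reference, a gravimetric capacitance like the one quoted above is conventionally obtained from a galvanostatic (constant-current) discharge curve as C = I*Δt/(m*ΔV). A minimal Python sketch follows; the discharge trace is hypothetical, chosen only so that a 1 A g-1 test over a 1 V window reproduces the reported 1533.6 F g-1:

    import numpy as np

    def specific_capacitance(current_a, mass_g, t_s, v_v):
        # Gravimetric capacitance C = I * dt / (m * dV) from a
        # constant-current discharge curve (standard formula).
        dt = t_s[-1] - t_s[0]                  # discharge time (s)
        dv = v_v[0] - v_v[-1]                  # voltage window (V)
        return current_a * dt / (mass_g * dv)  # F g-1

    # Hypothetical trace: 1 g electrode at 1 A g-1 over a 1 V window.
    t = np.linspace(0.0, 1533.6, 100)  # seconds
    v = np.linspace(1.0, 0.0, 100)     # volts
    print(specific_capacitance(1.0, 1.0, t, v))  # -> 1533.6 F g-1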

2.
Sensors (Basel); 23(6), 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-36992005

ABSTRACT

Preserving image details during defogging remains a key challenge in deep learning. CycleGAN-based defogging networks use adversarial and cycle-consistency losses to ensure that the generated defogged image resembles the original, but they cannot retain the image's details. To this end, we propose a detail-enhanced image CycleGAN that retains detail information during defogging. Firstly, the algorithm uses the CycleGAN network as the basic framework and combines it with the U-Net idea, extracting visual features from different spaces of the image in multiple parallel branches, and introduces Dep residual blocks to learn deeper feature information. Secondly, a multi-head attention mechanism is introduced into the generator to strengthen the expressive ability of features and balance the deviations produced by a single attention mechanism. Finally, experiments are carried out on the public D-Hazy dataset. Compared with the CycleGAN network, the proposed network improves the SSIM and PSNR of the dehazing result by 12.2% and 8.1%, respectively, while retaining image details.
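As context for the two losses named above, here is a minimal PyTorch sketch of the adversarial and cycle-consistency terms in any CycleGAN-style dehazer. The one-layer generators and discriminator are toy placeholders, not the paper's detail-enhanced architecture with U-Net branches, Dep residual blocks, and multi-head attention; the cycle weight of 10 is the conventional CycleGAN default, not a value from the abstract:

    import torch
    import torch.nn as nn

    # Toy stand-ins for the two generators and one discriminator.
    G_dehaze = nn.Conv2d(3, 3, 3, padding=1)  # hazy  -> clear
    G_haze   = nn.Conv2d(3, 3, 3, padding=1)  # clear -> hazy
    D_clear  = nn.Conv2d(3, 1, 3, padding=1)  # patch discriminator

    adv = nn.MSELoss()  # least-squares adversarial loss
    cyc = nn.L1Loss()   # cycle-consistency loss

    hazy = torch.rand(1, 3, 64, 64)

    fake_clear = G_dehaze(hazy)
    pred = D_clear(fake_clear)
    loss_adv = adv(pred, torch.ones_like(pred))  # generator tries to fool D
    loss_cyc = cyc(G_haze(fake_clear), hazy)     # hazy -> clear -> hazy
    loss = loss_adv + 10.0 * loss_cyc            # lambda = 10 (CycleGAN default)
    loss.backward()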

3.
Sensors (Basel); 22(21), 2022 Oct 27.
Article in English | MEDLINE | ID: mdl-36365919

ABSTRACT

Small object detection is one of the key challenges in the current computer vision field because small objects carry little information and feature extraction loses part of it. You Only Look Once v5 (YOLOv5) adopts the Path Aggregation Network (PANet) to alleviate this information loss, but it cannot restore information that has already been lost. To this end, an auxiliary-information-enhanced YOLO is proposed to improve the sensitivity and detection performance of YOLOv5 for small objects. Firstly, a context enhancement module with a receptive field of 21×21 is proposed; it captures the global and local information of the image by fusing multi-scale receptive fields and introduces an attention branch to enhance the expressive ability of key features and suppress background noise. To further strengthen the feature expression of small objects, the high- and low-frequency components obtained by wavelet decomposition are fed into PANet to participate in multi-scale feature fusion, addressing the problem that small-object features gradually disappear after repeated downsampling and pooling operations. Experiments on the challenging Tsinghua-Tencent 100K dataset show that the mean average precision of the proposed model is 9.5% higher than that of the original YOLOv5 while maintaining real-time speed, outperforming mainstream object detection models.
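To make the receptive-field arithmetic concrete, here is a hedged PyTorch sketch of what a 21×21 context enhancement module could look like: parallel dilated 3×3 convolutions (dilation d spans a (2d+1)×(2d+1) window, so d = 10 reaches 21×21) fused by a 1×1 convolution and gated by a channel-attention branch. The layer choices are illustrative assumptions, not the paper's exact design:

    import torch
    import torch.nn as nn

    class ContextEnhancement(nn.Module):
        def __init__(self, ch):
            super().__init__()
            # Parallel dilated 3x3 convs: receptive fields 3x3, 11x11, 21x21.
            self.branches = nn.ModuleList(
                nn.Conv2d(ch, ch, 3, padding=d, dilation=d) for d in (1, 5, 10)
            )
            self.fuse = nn.Conv2d(3 * ch, ch, 1)  # merge the three scales
            # Attention branch: squeeze-and-excitation style channel gate.
            self.attn = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Conv2d(ch, ch, 1), nn.Sigmoid()
            )

        def forward(self, x):
            fused = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
            return fused * self.attn(fused) + x  # gate features, keep residual

    x = torch.rand(1, 64, 32, 32)
    print(ContextEnhancement(64)(x).shape)  # torch.Size([1, 64, 32, 32])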
