Results 1 - 3 of 3
1.
Sensors (Basel); 24(4), 2024 Feb 19.
Article in English | MEDLINE | ID: mdl-38400503

ABSTRACT

In Advanced Driver Assistance Systems (ADAS), Automated Driving Systems (ADS), and Driver Assistance Systems (DAS), RGB camera sensors are widely used for object detection, semantic segmentation, and object tracking. Although popular for their low cost, RGB cameras lack robustness in complex environments and underperform markedly in low-light conditions, which is a significant concern. Multi-sensor fusion systems and specialized low-light cameras have been proposed to address these challenges, but their high cost makes them unsuitable for widespread deployment. Improvements in post-processing algorithms offer a more economical and effective alternative. However, current research on low-light image enhancement still leaves substantial gaps in detail enhancement on nighttime driving datasets and carries high deployment costs, falling short of real-time inference and edge deployment. This paper therefore combines a Swin Vision Transformer with a gamma-transformation-integrated U-Net for decoupled enhancement of the initial low-light input, proposing a deep learning enhancement network named Vehicle-based Efficient Low-light Image Enhancement (VELIE). VELIE achieves state-of-the-art performance on various driving datasets with a processing time of only 0.19 s, significantly improving high-dimensional environmental perception tasks in low-light conditions.
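
As a rough illustration of the gamma-transformation component mentioned in this abstract (a minimal NumPy sketch, not the authors' VELIE code; the gamma value 0.4 and the synthetic frame are illustrative assumptions), classic power-law brightening of a dark image looks like this:

import numpy as np

def gamma_correct(image: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """Brighten a low-light image with a power-law (gamma) transform.

    image: uint8 array in [0, 255]; gamma < 1 brightens dark regions.
    """
    # Normalize to [0, 1], apply the power law, rescale to [0, 255].
    normalized = image.astype(np.float32) / 255.0
    corrected = np.power(normalized, gamma)
    return (corrected * 255.0).clip(0, 255).astype(np.uint8)

# Example: a synthetic dark frame becomes visibly brighter on average.
dark_frame = np.random.randint(0, 60, size=(480, 640, 3), dtype=np.uint8)
bright_frame = gamma_correct(dark_frame, gamma=0.4)
print(dark_frame.mean(), bright_frame.mean())

In VELIE this transform is only one ingredient; the learned Swin/U-Net stages handle the detail enhancement that a fixed gamma curve cannot.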

2.
Heliyon; 10(4): e25676, 2024 Feb 29.
Article in English | MEDLINE | ID: mdl-38404879

ABSTRACT

Under the emission reduction commitments of the Paris Agreement, countries are actively seeking new paths to energy conservation and emission reduction, trying to "bend downward" the global greenhouse gas emission curve. In light of China's targets of a carbon peak before 2030 and carbon neutrality before 2060, this paper explores whether FDI can reduce China's energy consumption and carbon emissions. From the new research perspective of FDI quality, it examines potential ways to improve regional energy-carbon emission performance (ECEP), applying a dynamic threshold model and two-stage least squares for validation. The results are as follows. Improvements in FDI quality have a significant positive impact on regional ECEP. The development level of renewable energy, the optimization of industrial structure, and the enhancement of green innovation capability positively moderate the impact of FDI on ECEP. The dynamic panel threshold model further shows that as the economic growth pressure on local governments decreases and fiscal decentralization increases, the role of FDI quality in promoting ECEP becomes stronger. The influence of FDI quality on ECEP is regionally heterogeneous: it is more significant in inland and midwestern regions than in coastal and eastern regions. This study provides evidence for building an FDI quality assessment system and formulating foreign investment policy.
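
The abstract names two-stage least squares as one validation strategy. As a generic sketch of that estimator on synthetic data (the variables z, x, y standing in for an instrument, "FDI quality", and "ECEP" are hypothetical, not the paper's data or specification), the two regression stages can be written out directly:

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: instrument z shifts the endogenous regressor x,
# while an unobserved confounder u affects both x and the outcome y.
z = rng.normal(size=n)
u = rng.normal(size=n)
x = 0.8 * z + 0.6 * u + rng.normal(size=n)   # endogenous regressor
y = 1.5 * x + 0.9 * u + rng.normal(size=n)   # outcome; true effect is 1.5

def ols(target, X):
    """Ordinary least squares coefficients via least-squares solve."""
    return np.linalg.lstsq(X, target, rcond=None)[0]

ones = np.ones(n)
# Stage 1: project the endogenous regressor onto the instrument.
Z = np.column_stack([ones, z])
x_hat = Z @ ols(x, Z)
# Stage 2: regress the outcome on the stage-1 fitted values.
beta = ols(y, np.column_stack([ones, x_hat]))
print("2SLS estimate:", beta[1])   # close to the true 1.5
naive = ols(y, np.column_stack([ones, x]))
print("Naive OLS estimate:", naive[1])   # biased upward by the confounder

The comparison line shows why the instrument matters: naive OLS absorbs the confounder and overstates the effect, while the two-stage estimate recovers it.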

3.
IEEE Trans Pattern Anal Mach Intell; 44(7): 3719-3732, 2022 Jul.
Article in English | MEDLINE | ID: mdl-33497325

ABSTRACT

We consider the problem of referring segmentation in images and videos with natural language. Given an input image (or video) and a referring expression, the goal is to segment the entity referred to by the expression in the image or video. In this paper, we propose a cross-modal self-attention (CMSA) module that exploits fine details of individual words and of the input image or video, effectively capturing the long-range dependencies between linguistic and visual features. Our model can adaptively focus on informative words in the referring expression and on important regions in the visual input. We further propose a gated multi-level fusion (GMLF) module to selectively integrate self-attentive cross-modal features corresponding to different levels of visual features; it controls the flow of information across feature levels using high-level and low-level semantic cues tied to different attentive words. In addition, we introduce a cross-frame self-attention (CFSA) module that effectively integrates temporal information across consecutive frames, extending our method to referring segmentation in videos. Experiments on four referring image segmentation benchmarks and two actor-and-action video segmentation datasets consistently demonstrate that our proposed approach outperforms existing state-of-the-art methods.
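
To make the cross-modal self-attention idea concrete, here is a minimal PyTorch sketch of joint attention over visual and word tokens. It is a generic illustration, not the paper's exact CMSA module; the projection dimensions, head count, and tensor shapes are assumptions chosen for the example:

import torch
import torch.nn as nn

class CrossModalSelfAttention(nn.Module):
    """Sketch: self-attention over concatenated visual and word tokens.

    Visual features are flattened to H*W tokens and word features to L
    tokens; attention runs jointly over both sequences, so every token
    can attend across modalities (hypothetical shapes, for illustration).
    """
    def __init__(self, vis_dim: int, lang_dim: int, dim: int = 256, heads: int = 8):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, dim)    # map both modalities
        self.lang_proj = nn.Linear(lang_dim, dim)  # into a shared space
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, vis, lang):
        # vis: (B, H*W, vis_dim), lang: (B, L, lang_dim)
        tokens = torch.cat([self.vis_proj(vis), self.lang_proj(lang)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)  # joint self-attention
        # Keep only the visual positions, now conditioned on the language.
        return fused[:, : vis.shape[1], :]

# Example: a 10x10 feature map fused with a 5-word referring expression.
module = CrossModalSelfAttention(vis_dim=512, lang_dim=300)
out = module(torch.randn(2, 100, 512), torch.randn(2, 5, 300))
print(out.shape)  # torch.Size([2, 100, 256])

Returning only the visual positions reflects the downstream use: the language-aware visual features feed a segmentation head, while the word tokens have already served their purpose inside the attention.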
