Results 1 - 6 of 6
1.
Sensors (Basel). 2024 May 18;24(10).
Article in English | MEDLINE | ID: mdl-38794069

ABSTRACT

The segmentation of abnormal regions is vital in smart manufacturing. The blurring sauce-packet leakage segmentation task (BSLST) is designed to distinguish, at the pixel level, the foreground and background of a sauce packet and its leakage. However, existing segmentation systems for detecting sauce-packet leakage on intelligent sensors suffer from image blurring caused by uneven illumination. This degrades segmentation performance, hindering measurement of the leakage area and impeding automated sauce-packet production. To alleviate this issue, we propose a two-stage illumination-aware sauce-packet leakage segmentation (ISLS) method for intelligent sensors. ISLS comprises two main stages: illumination-aware region enhancement and leakage region segmentation. In the first stage, YOLO-Fastestv2 captures the region of interest (ROI), which reduces redundant computation. We also propose an image-enhancement step that relieves the impact of uneven illumination and enhances the texture details of the ROI. In the second stage, we propose a novel feature-extraction network built around two components: a multi-scale feature fusion module (MFFM) and a sequential self-attention mechanism (SSAM), which together capture discriminative representations of leakage. The MFFM fuses multi-level features with few parameters, capturing leakage semantics at different scales, while the SSAM enhances valid features and suppresses invalid ones through adaptive weighting along the spatial and channel dimensions. Furthermore, we build a dataset of 606 sauce-packet images with various leakage areas. Comprehensive experiments demonstrate that ISLS outperforms several state-of-the-art methods, and additional performance analyses on intelligent sensors confirm its effectiveness.
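
As an illustration of the SSAM idea, here is a minimal PyTorch sketch of a sequential channel-then-spatial attention block; the class name, layer choices, and sizes are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of a sequential channel-then-spatial attention block
# in the spirit of the SSAM; layer choices are illustrative assumptions.
import torch
import torch.nn as nn

class SequentialSelfAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel branch: squeeze spatial dims, then re-weight each channel.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Spatial branch: squeeze channels, then re-weight each location.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x * self.channel_gate(x)  # adaptive channel weighting
        pooled = torch.cat(
            [x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1
        )
        return x * self.spatial_gate(pooled)  # adaptive spatial weighting
```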

2.
Entropy (Basel). 2023 Aug 04;25(8).
Article in English | MEDLINE | ID: mdl-37628197

ABSTRACT

Recently, end-to-end deep models for video compression have made steady advances, but they have produced lengthy, complex pipelines with numerous redundant parameters. Video-compression approaches based on implicit neural representation (INR) instead represent a video directly as a function approximated by a neural network, yielding a more lightweight model; however, their single feature-extraction pipeline limits the network's ability to fit the mapping function for video frames. Hence, we propose a neural representation approach for video compression with an implicit multiscale fusion network (NRVC), utilizing normalized residual networks to improve the effectiveness of INR in fitting the target function. We propose the multiscale representations for video compression (MSRVC) network, which effectively extracts features from the input video sequence so that the mapping function overfits it more closely. Additionally, we propose the feature extraction channel attention (FECA) block to capture interactions between different feature-extraction channels, further improving the effectiveness of feature extraction. The results show that, compared to the NeRV method at similar bits per pixel (BPP), NRVC achieves a 2.16% increase in decoded peak signal-to-noise ratio (PSNR). Moreover, NRVC outperforms conventional HEVC in terms of PSNR.
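
To make the INR idea concrete, the sketch below overfits a small network that maps a normalized frame index to the frame itself, so the trained weights act as the compressed video. The architecture, sizes, and training schedule are illustrative assumptions, not the NRVC design.

```python
# Minimal sketch of implicit neural representation (INR) for video:
# the network is deliberately overfit to one clip, so its weights
# become the compressed representation of that clip.
import torch
import torch.nn as nn

class VideoINR(nn.Module):
    def __init__(self, hidden: int = 256, h: int = 64, w: int = 64):
        super().__init__()
        self.h, self.w = h, w
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 3 * h * w), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (batch, 1) normalized frame indices in [0, 1]
        return self.net(t).view(-1, 3, self.h, self.w)

video = torch.rand(16, 3, 64, 64)          # stand-in for a real clip
t = torch.linspace(0, 1, 16).unsqueeze(1)  # one index per frame
model = VideoINR()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                       # real runs train far longer
    loss = nn.functional.mse_loss(model(t), video)
    opt.zero_grad()
    loss.backward()
    opt.step()
```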

3.
Sensors (Basel). 2022 Sep 05;22(17).
Article in English | MEDLINE | ID: mdl-36081166

ABSTRACT

Thermal imaging pedestrian-detection systems perform well under varied lighting, but they struggle with weak texture, object occlusion, and small objects, and large high-performance models incur high latency on edge devices with limited computing power. To address these problems, we propose a real-time thermal imaging pedestrian-detection method for edge computing devices. First, we use multi-scale mosaic data augmentation to enhance the diversity and texture of objects, which alleviates the impact of complex environments. Then, a parameter-free attention mechanism is introduced into the network to enhance features while barely increasing the network's computing cost. Finally, we accelerate multi-channel video detection through quantization and multi-threading on edge computing devices. We also create a high-quality thermal infrared dataset, YDTIP, to facilitate this research. Comparative experiments with other methods on our self-built YDTIP dataset and three public datasets show that our method offers certain advantages.
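
The abstract does not name the parameter-free attention mechanism; SimAM is one widely used example, sketched below for illustration. Whether this paper uses SimAM specifically is an assumption.

```python
# SimAM-style parameter-free attention: each activation is weighted by an
# energy score computed from channel statistics, adding no learnable weights.
import torch

def simam(x: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    # x: (B, C, H, W)
    b, c, h, w = x.shape
    n = h * w - 1
    d = (x - x.mean(dim=(2, 3), keepdim=True)).pow(2)  # deviation from channel mean
    v = d.sum(dim=(2, 3), keepdim=True) / n            # channel variance estimate
    e_inv = d / (4 * (v + eps)) + 0.5                  # inverse energy per activation
    return x * torch.sigmoid(e_inv)                    # distinctive activations get boosted
```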

4.
Entropy (Basel). 2022 Mar 23;24(4).
Article in English | MEDLINE | ID: mdl-35455106

ABSTRACT

Visible-thermal person re-identification (VT Re-ID) is the task of matching pedestrian images collected by thermal and visible-light cameras. Its two main challenges are the intra-class variation between pedestrian images and the cross-modality difference between visible and thermal images. Existing works have principally focused on local representations through cross-modality feature distributions, but they ignore the internal connections among the local features of pedestrian body parts. This paper therefore proposes a dual-path attention network model that establishes spatial dependencies between the local features of the pedestrian feature map and effectively enhances feature extraction. We also propose a cross-modality dual-constraint loss, which adds center and boundary constraints to each class distribution in the embedding space, promoting compactness within classes and separability between them. Experimental results show that our approach outperforms state-of-the-art methods on the two public datasets SYSU-MM01 and RegDB, reaching Rank-1/mAP of 57.74%/54.35% on SYSU-MM01 and 76.07%/69.43% on RegDB.
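
The abstract does not give the loss's exact form; the sketch below is one plausible reading, combining a center term (intra-class compactness) with a margin-based boundary term between class centers (inter-class separability). The function name and margin parameter are assumptions.

```python
# Hypothetical dual-constraint loss: a center-pulling term plus a
# margin ("boundary") term that pushes class centers apart.
import torch

def dual_constraint_loss(emb: torch.Tensor, labels: torch.Tensor,
                         centers: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    # emb: (B, D) embeddings; labels: (B,); centers: (num_classes, D),
    # typically a learnable nn.Parameter updated jointly with the network.
    center_term = (emb - centers[labels]).pow(2).sum(dim=1).mean()
    dists = torch.cdist(centers, centers)  # pairwise center distances
    off_diag = ~torch.eye(len(centers), dtype=torch.bool, device=emb.device)
    boundary_term = torch.relu(margin - dists[off_diag]).mean()
    return center_term + boundary_term
```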

5.
Molecules. 2021 Oct 25;26(21).
Article in English | MEDLINE | ID: mdl-34770841

ABSTRACT

MicroRNA160 plays a crucial role in plant development by negatively regulating auxin response factors (ARFs). In this manuscript, we design an automatic molecule machine (AMM) based on a dual catalytic hairpin assembly (D-CHA) strategy for signal-amplified detection of miRNA160. The detection system contains four hairpin-shaped DNA probes (HP1, HP2, HP3, and HP4). For HP1, the loop is designed to be complementary to miRNA160. A DNA fragment with the same sequence as miRNA160 is split into two pieces, connected to the 3' end of HP2 and the 5' end of HP3, respectively. In the presence of the target, the four HPs are successively opened by the first catalytic hairpin assembly (CHA1), forming a four-way DNA junction (F-DJ) that rearranges the separated DNA fragments at the ends of HP2 and HP3 into an integrated target analogue, which initiates the second CHA reaction and generates an enhanced fluorescence signal. Assay experiments demonstrate that D-CHA outperforms traditional CHA, achieving a detection limit as low as 10 pM for miRNA160, as deduced from its corresponding DNA surrogates. Moreover, non-target miRNAs, as well as single-base-mutation targets, can be discriminated. Overall, the D-CHA strategy provides a competitive method for plant miRNA detection.


Subjects
Biosensing Techniques , DNA Probes , DNA, Catalytic , Inverted Repeat Sequences , MicroRNAs/analysis , Transcription Factors , MicroRNAs/genetics , MicroRNAs/metabolism , Reproducibility of Results , Sensitivity and Specificity , Transcription Factors/metabolism
6.
Sensors (Basel). 2020 Dec 17;20(24).
Article in English | MEDLINE | ID: mdl-33348795

ABSTRACT

In this paper, we explore LIDAR-RGB fusion-based 3D object detection. The task remains challenging in two respects: (1) differences in data formats and sensor positions cause misalignment between the semantic features of images and the geometric features of point clouds; and (2) optimizing the traditional IoU is not equivalent to minimizing the bounding-box regression loss, resulting in biased back-propagation for non-overlapping cases. In this work, we propose a cascaded cross-modality fusion network (CCFNet), which includes a cascaded multi-scale fusion module (CMF) and a novel center 3D IoU loss to resolve these two issues. The CMF module reinforces the discriminative representation of objects by reasoning about the relation between an object's LIDAR geometric features and its RGB semantic features across the two modalities. Specifically, CMF is added in a cascaded way between the RGB and LIDAR streams, selecting salient points and transmitting multi-scale point-cloud features to each stage of the RGB stream. Moreover, our center 3D IoU loss incorporates the distance between anchor centers to avoid overly simple optimization for non-overlapping bounding boxes. Extensive experiments on the KITTI benchmark demonstrate that our approach outperforms the compared methods.
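
The center 3D IoU loss is reminiscent of DIoU-style losses, in which a normalized center-distance penalty keeps gradients informative even when boxes do not overlap. The sketch below assumes axis-aligned 3D boxes and omits rotation; it illustrates the general idea rather than the paper's exact formulation.

```python
# DIoU-style sketch for axis-aligned 3D boxes (cx, cy, cz, w, h, l):
# 1 - IoU plus a center-distance penalty normalized by the diagonal of
# the smallest enclosing box, so non-overlapping pairs still get gradients.
import torch

def center_iou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # pred, target: (N, 6) boxes as centers (first 3) and sizes (last 3)
    p_min, p_max = pred[:, :3] - pred[:, 3:] / 2, pred[:, :3] + pred[:, 3:] / 2
    t_min, t_max = target[:, :3] - target[:, 3:] / 2, target[:, :3] + target[:, 3:] / 2
    inter = (torch.min(p_max, t_max) - torch.max(p_min, t_min)).clamp(min=0)
    inter_vol = inter.prod(dim=1)
    union = pred[:, 3:].prod(dim=1) + target[:, 3:].prod(dim=1) - inter_vol
    iou = inter_vol / union.clamp(min=1e-7)
    enclose = torch.max(p_max, t_max) - torch.min(p_min, t_min)
    diag_sq = enclose.pow(2).sum(dim=1).clamp(min=1e-7)  # enclosing-box diagonal^2
    center_sq = (pred[:, :3] - target[:, :3]).pow(2).sum(dim=1)
    return (1 - iou + center_sq / diag_sq).mean()
```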
