ABSTRACT
Real-time quality monitoring through molten pool images is a critical focus in researching high-quality, intelligent automated welding. However, challenges such as the dynamic nature of the molten pool, changes in camera perspective, and variations in pool shape make defect detection using single-frame images difficult. We propose a multi-scale fusion method for defect monitoring based on molten pool videos to address these issues. This method analyzes the temporal changes in light spots on the molten pool surface, transferring features between frames to capture dynamic behavior. Our approach employs multi-scale feature fusion using row and column convolutions along with a gated fusion module to accommodate variations in pool size and position, enabling the detection of light spot changes of different sizes and directions from coarse to fine. Additionally, incorporating mixed attention with row and column features enables the model to capture the characteristics of the molten pool more efficiently. Our method achieves an accuracy of 97.416% on a molten pool video dataset, with a processing time of 16 ms per sample. Experimental results on the UCF101-24 and JHMDB datasets also demonstrate the method's generalization capability.
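The abstract does not disclose implementation details; purely as an illustration, a minimal PyTorch sketch of row/column (strip) convolutions combined with a gated fusion module might look as follows, where the module name, channel sizes, and kernel lengths are assumptions rather than the authors' configuration.

```python
import torch
import torch.nn as nn

class RowColGatedFusion(nn.Module):
    """Illustrative sketch: fuse row-wise and column-wise strip convolutions
    with a learned gate. Channel sizes and kernel lengths are assumptions,
    not the authors' exact configuration."""
    def __init__(self, channels: int, k: int = 7):
        super().__init__()
        # 1xk and kx1 "strip" convolutions respond to horizontally / vertically
        # elongated light-spot patterns.
        self.row_conv = nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2))
        self.col_conv = nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0))
        # Gate predicts a per-pixel, per-channel mixing weight in [0, 1].
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.row_conv(x)
        c = self.col_conv(x)
        g = self.gate(torch.cat([r, c], dim=1))
        return g * r + (1.0 - g) * c  # gated blend of the two directions

x = torch.randn(2, 64, 32, 32)
print(RowColGatedFusion(64)(x).shape)  # torch.Size([2, 64, 32, 32])
```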
ABSTRACT
Infrared small target detection technology plays a crucial role in various fields such as military reconnaissance, power patrol, medical diagnosis, and security. The advancement of deep learning has led to the success of convolutional neural networks in target segmentation. However, due to challenges like small target scales, weak signals, and strong background interference in infrared images, convolutional neural networks often suffer from missed detections and false detections in small target segmentation tasks. To address this, an enhanced U-Net method called MST-UNet is proposed, which combines multi-scale feature decomposition and fusion with attention mechanisms. The method uses the Haar wavelet transform instead of max pooling for downsampling in the encoder to minimize feature loss and enhance feature utilization. Additionally, a multi-scale residual unit is introduced to extract contextual information at different scales, enlarging the receptive field and improving feature expression. The inclusion of a triple attention mechanism in the encoder structure further enhances multidimensional information utilization and feature recovery by the decoder. Experimental analysis on the NUDT-SIRST dataset demonstrates that the proposed method significantly improves target contour accuracy and segmentation precision, achieving IoU and nIoU values of 80.09% and 80.19%, respectively.
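For readers unfamiliar with wavelet downsampling, the sketch below shows one common way to implement a 2x Haar decomposition that could replace max pooling in a U-Net encoder; the channel handling afterwards is an assumption, not MST-UNet's exact design.

```python
import torch
import torch.nn as nn

class HaarDownsample(nn.Module):
    """Sketch of a 2x Haar wavelet downsampling layer that could stand in for
    max pooling. All four sub-bands (LL, LH, HL, HH) are kept, so spatial
    detail is preserved rather than discarded."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the feature map into its four 2x2 phases (even/odd rows and columns).
        a = x[..., 0::2, 0::2]  # top-left
        b = x[..., 0::2, 1::2]  # top-right
        c = x[..., 1::2, 0::2]  # bottom-left
        d = x[..., 1::2, 1::2]  # bottom-right
        ll = (a + b + c + d) / 2  # low-frequency approximation
        lh = (a - b + c - d) / 2  # horizontal detail
        hl = (a + b - c - d) / 2  # vertical detail
        hh = (a - b - c + d) / 2  # diagonal detail
        # Stack the sub-bands along channels: (N, 4C, H/2, W/2).
        return torch.cat([ll, lh, hl, hh], dim=1)

x = torch.randn(1, 32, 64, 64)
print(HaarDownsample()(x).shape)  # torch.Size([1, 128, 32, 32])
```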
ABSTRACT
As deep learning technology has progressed, automated medical image analysis is becoming ever more crucial in clinical diagnosis. However, due to the diversity and complexity of blood cell images, traditional models still exhibit deficiencies in blood cell detection. To improve blood cell detection, we developed the TW-YOLO approach, leveraging multi-scale feature fusion techniques. Firstly, traditional CNN (Convolutional Neural Network) convolution has poor recognition capabilities for certain blood cell features, so the RFAConv (Receptive Field Attention Convolution) module was incorporated into the backbone of the model to enhance its capacity to extract geometric characteristics from blood cells. At the same time, utilizing the feature pyramid architecture of YOLO (You Only Look Once), we enhanced the fusion of features at different scales by incorporating the CBAM (Convolutional Block Attention Module) in the detection head and the EMA (Efficient Multi-Scale Attention) module in the neck, thereby improving the recognition ability of blood cells. Additionally, to meet the specific needs of blood cell detection, we designed the PGI-Ghost (Programmable Gradient Information-Ghost) strategy to finely describe the gradient flow throughout the process of extracting features, further improving the model's effectiveness. Experiments on blood cell detection datasets such as BloodCell-Detection-Dataset (BCD) reveal that TW-YOLO outperforms other models by 2%, demonstrating excellent performance in the task of blood cell detection. In addition to advancing blood cell image analysis research, this work offers strong technical support for future automated medical diagnostics.
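Of the attention modules named above, CBAM has a well-known published form (Woo et al., 2018); a minimal PyTorch sketch is given below, with the reduction ratio and spatial kernel size set to common defaults rather than TW-YOLO's specific settings.

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Minimal CBAM sketch: channel attention followed by spatial attention.
    Reduction ratio and kernel size are common defaults, not necessarily the
    authors' settings."""
    def __init__(self, channels: int, reduction: int = 16, kernel_size: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        self.spatial = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Channel attention: shared MLP over global average- and max-pooled features.
        avg = self.mlp(x.mean(dim=(2, 3), keepdim=True))
        mx = self.mlp(x.amax(dim=(2, 3), keepdim=True))
        x = x * torch.sigmoid(avg + mx)
        # Spatial attention: convolution over channel-wise average and max maps.
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))

print(CBAM(128)(torch.randn(1, 128, 40, 40)).shape)  # torch.Size([1, 128, 40, 40])
```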
Subjects
Blood Cells; Deep Learning; Neural Networks, Computer; Humans; Blood Cells/cytology; Image Processing, Computer-Assisted/methods; Algorithms
ABSTRACT
Two-dimensional human pose estimation aims to equip computers with the ability to accurately recognize human keypoints and comprehend their spatial contexts within media content. However, the accuracy of real-time human pose estimation diminishes when processing images with occluded body parts or overlapping individuals. To address these issues, we propose a method based on the YOLO framework. We integrate the convolutional concepts of Kolmogorov-Arnold Networks (KANs), introducing learnable non-linear activation functions to enhance the feature extraction capabilities of the convolutional kernels. Moreover, to improve the detection of small target keypoints, we integrate the cross-stage partial (CSP) approach and utilize the small object enhance pyramid (SOEP) module for feature integration. We also innovatively incorporate a layered shared convolution with batch normalization detection head (LSCB), consisting of multiple shared convolutional layers and batch normalization layers, to enable cross-stage feature fusion and address the low utilization of model parameters. Given the structure and purpose of the proposed model, we name it KSL-POSE. Compared to the baseline model YOLOv8l-POSE, KSL-POSE achieves significant improvements, increasing the average detection accuracy by 1.5% on the public MS COCO 2017 dataset. Furthermore, the model also demonstrates competitive performance on the CrowdPOSE dataset, thus validating its generalization ability.
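The abstract only outlines the LSCB head; the following hedged sketch shows one plausible reading, sharing convolution weights across pyramid levels while keeping a separate batch normalization per level. Layer counts, channels, and the activation are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn

class SharedConvBNHead(nn.Module):
    """Hedged sketch of a layered shared-convolution head: one set of conv weights
    reused across all pyramid levels, with a separate BatchNorm per level so each
    scale keeps its own statistics."""
    def __init__(self, channels: int, num_levels: int = 3, num_layers: int = 2):
        super().__init__()
        self.shared_convs = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1, bias=False) for _ in range(num_layers)]
        )
        # Per-level, per-layer BatchNorm (not shared across levels).
        self.bns = nn.ModuleList(
            [nn.ModuleList([nn.BatchNorm2d(channels) for _ in range(num_layers)])
             for _ in range(num_levels)]
        )
        self.act = nn.SiLU(inplace=True)

    def forward(self, feats):  # feats: list of (N, C, Hi, Wi), one per pyramid level
        outs = []
        for level, x in enumerate(feats):
            for layer, conv in enumerate(self.shared_convs):
                x = self.act(self.bns[level][layer](conv(x)))
            outs.append(x)
        return outs

feats = [torch.randn(1, 64, s, s) for s in (80, 40, 20)]
print([o.shape for o in SharedConvBNHead(64)(feats)])
```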
Subjects
Algorithms; Posture; Humans; Posture/physiology; Image Processing, Computer-Assisted/methods; Neural Networks, Computer
ABSTRACT
Precise building extraction from high-resolution remote sensing images has significant applications in urban planning, resource management, and environmental conservation. In recent years, deep neural networks (DNNs) have garnered substantial attention for their adeptness in learning and extracting features, becoming integral to building extraction methodologies and yielding noteworthy performance outcomes. Nonetheless, prevailing DNN-based models for building extraction often overlook spatial information during the feature extraction phase. Additionally, many existing models employ a simplistic and direct approach in the feature fusion stage, potentially leading to spurious target detection and the amplification of internal noise. To address these concerns, we present a multi-scale attention network (MSANet) tailored for building extraction from high-resolution remote sensing images. In our approach, we first extract multi-scale building feature information, leveraging a multi-scale channel attention mechanism and a multi-scale spatial attention mechanism. Subsequently, we apply adaptive hierarchical weighting to the extracted building features and introduce a gating mechanism to facilitate the effective fusion of multi-scale features. The efficacy of the proposed MSANet was evaluated using the WHU aerial image dataset and the WHU satellite image dataset. The experimental results demonstrate compelling performance, with F1 scores of 93.76% and 77.64% on the WHU aerial imagery dataset and WHU satellite dataset II, respectively. Furthermore, the intersection over union (IoU) values stood at 88.25% and 63.46%, surpassing benchmarks set by DeepLabV3 and GSMC.
ABSTRACT
Casting defects in turbine blades can significantly reduce an aero-engine's service life and cause secondary damage to the blades when exposed to harsh environments. Therefore, casting defect detection plays a crucial role in enhancing aircraft performance. Existing defect detection methods face challenges in effectively detecting multi-scale defects and handling imbalanced datasets, leading to unsatisfactory defect detection results. In this work, a novel blade defect detection method is proposed. This method is based on a detection transformer with a multi-scale fusion attention mechanism that considers comprehensive features. Firstly, a novel joint data augmentation (JDA) method is constructed to alleviate the imbalanced dataset issue by effectively increasing the number of samples. Then, an attention-based channel-adaptive weighting (ACAW) feature enhancement module is established to fully exploit complementary information among different feature channels and further refine feature representations. Next, a multi-scale feature fusion (MFF) module is proposed to integrate high-dimensional semantic information and low-level representation features, enhancing multi-scale defect detection precision. Moreover, an R-Focal loss is developed in the MFF attention-based DEtection TRansformer (DETR) to further address the imbalanced dataset issue and accelerate model convergence using a random hyper-parameter search strategy. An aero-engine turbine blade defect X-ray (ATBDX) image dataset is applied to validate the proposed method. The comparative results demonstrate that the proposed method can effectively integrate multi-scale image features and enhance multi-scale defect detection precision.
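The R-Focal loss itself is not specified in the abstract; for orientation, the standard focal loss it builds on can be written as below, with the random hyper-parameter search over alpha and gamma omitted.

```python
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor,
               alpha: float = 0.25, gamma: float = 2.0) -> torch.Tensor:
    """Standard binary focal loss (Lin et al., 2017). The paper's R-Focal variant
    reportedly searches the hyper-parameters randomly during training; that search
    loop is not reproduced here."""
    p = torch.sigmoid(logits)
    ce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p_t = p * targets + (1 - p) * (1 - targets)            # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    return (alpha_t * (1 - p_t) ** gamma * ce).mean()      # down-weights easy examples

logits = torch.randn(8, 20)
targets = torch.randint(0, 2, (8, 20)).float()
print(focal_loss(logits, targets))
```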
ABSTRACT
Traditional methods for pest recognition have certain limitations in addressing the challenges posed by diverse pest species, varying sizes, diverse morphologies, and complex field backgrounds, resulting in lower recognition accuracy. To overcome these limitations, this paper proposes a novel pest recognition method based on an attention mechanism and multi-scale feature fusion (AM-MSFF). By combining the advantages of the attention mechanism and multi-scale feature fusion, this method significantly improves the accuracy of pest recognition. Firstly, we introduce the relation-aware global attention (RGA) module to adaptively adjust the feature weights of each position, thereby focusing more on the regions relevant to pests and reducing background interference. Then, we propose the multi-scale feature fusion (MSFF) module to fuse feature maps from different scales, which better captures the subtle differences and overall shape features in pest images. Moreover, we introduce generalized-mean pooling (GeMP) to more accurately extract feature information from pest images and better distinguish different pest categories. In terms of the loss function, this study proposes an improved focal loss (FL), termed balanced focal loss (BFL), as a replacement for cross-entropy loss. This improvement addresses the common issue of class imbalance in pest datasets, thereby enhancing the recognition accuracy of pest identification models. To evaluate the performance of the AM-MSFF model, we conduct experiments on two publicly available pest datasets (IP102 and D0). Extensive experiments demonstrate that our proposed AM-MSFF outperforms most state-of-the-art methods. On the IP102 dataset, the accuracy reaches 72.64%, while on the D0 dataset, it reaches 99.05%.
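Generalized-mean (GeM) pooling has a standard closed form; a short PyTorch sketch follows, with the initial exponent p = 3 being a common default rather than the value used in AM-MSFF.

```python
import torch
import torch.nn as nn

class GeMPool(nn.Module):
    """Generalized-mean pooling: p = 1 gives average pooling, large p approaches
    max pooling; p is learned during training."""
    def __init__(self, p: float = 3.0, eps: float = 1e-6):
        super().__init__()
        self.p = nn.Parameter(torch.tensor(p))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Clamp keeps the power well defined on (post-ReLU) feature maps.
        x = x.clamp(min=self.eps).pow(self.p)
        return x.mean(dim=(2, 3)).pow(1.0 / self.p)  # (N, C) global descriptor

print(GeMPool()(torch.randn(2, 512, 7, 7)).shape)  # torch.Size([2, 512])
```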
ABSTRACT
Multi-scale feature fusion techniques and covariance pooling have been shown to benefit computer vision tasks, including fine-grained image classification. However, existing algorithms that use multi-scale feature fusion techniques for fine-grained classification tend to consider only the first-order information of the features, failing to capture more discriminative features. Likewise, existing fine-grained classification algorithms using covariance pooling tend to focus only on the correlation between feature channels without considering how to better capture the global and local features of the image. Therefore, this paper proposes a multi-scale covariance pooling network (MSCPN) that can capture and better fuse features at different scales to generate more representative features. Experimental results on the CUB200 and MIT indoor67 datasets achieve state-of-the-art performance (CUB200: 94.31% and MIT indoor67: 92.11%).
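A bare-bones covariance (second-order) pooling layer, without the matrix normalization that full covariance-pooling networks typically add, can be sketched as follows; it illustrates the idea rather than MSCPN's implementation.

```python
import torch

def covariance_pooling(features: torch.Tensor) -> torch.Tensor:
    """Second-order pooling sketch: treat each spatial position as a sample and
    compute the channel covariance matrix, capturing channel correlations that
    first-order (average/max) pooling discards."""
    n, c, h, w = features.shape
    x = features.reshape(n, c, h * w)                    # (N, C, HW)
    x = x - x.mean(dim=2, keepdim=True)                  # centre each channel
    cov = torch.bmm(x, x.transpose(1, 2)) / (h * w - 1)  # (N, C, C)
    return cov

print(covariance_pooling(torch.randn(2, 64, 14, 14)).shape)  # torch.Size([2, 64, 64])
```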
ABSTRACT
With the advent of autonomous vehicle applications, the importance of LiDAR point cloud 3D object detection cannot be overstated. Recent studies have demonstrated that methods for aggregating features from voxels can accurately and efficiently detect objects in large, complex 3D detection scenes. Nevertheless, most of these methods do not filter background points well and have inferior detection performance for small objects. To ameliorate this issue, this paper proposes an Attention-based and Multiscale Feature Fusion Network (AMFF-Net), which utilizes a Dual-Attention Voxel Feature Extractor (DA-VFE) and a Multi-scale Feature Fusion (MFF) Module to improve the precision and efficiency of 3D object detection. The DA-VFE considers point-wise and channel-wise attention and integrates them into the Voxel Feature Extractor (VFE) to enhance key point cloud information in voxels and refine more representative voxel features. The MFF Module consists of self-calibrated convolutions, a residual structure, and a coordinate attention mechanism, and acts as a 2D backbone to expand the receptive field and capture more contextual information, thus better capturing small object locations, enhancing the feature-extraction capability of the network, and reducing the computational overhead. We evaluated the proposed model on the nuScenes dataset, which covers a large number of driving scenarios. The experimental results showed that AMFF-Net achieved 62.8% mAP, significantly boosting small object detection performance over the baseline network and markedly reducing the computational overhead, while the inference speed remained essentially the same. AMFF-Net also achieved advanced performance on the KITTI dataset.
ABSTRACT
In the production process of metal industrial products, the deficiencies and limitations of existing technologies and working conditions can have adverse effects on the quality of the final products, making surface defect detection particularly crucial. However, collecting a sufficient number of samples of defective products can be challenging. Therefore, treating surface defect detection as a semi-supervised problem is appropriate. In this paper, we propose a method based on a Transformer with pruned and merged multi-scale masked feature fusion. This method learns the semantic context from normal samples. We incorporate the Vision Transformer (ViT) into a generative adversarial network to jointly learn the generation in the high-dimensional image space and the inference in the latent space. We use an encoder-decoder neural network with long skip connections to capture information between shallow and deep layers. During training and testing, we design block masks of different scales to obtain rich semantic context information. Additionally, we introduce token merging (ToMe) into the ViT to improve the training speed of the model without affecting the training results. In this paper, we focus on the problems of rust, scratches, and other defects on the metal surface. We conduct various experiments on five metal industrial product datasets and the MVTec AD dataset to demonstrate the superiority of our method.
ABSTRACT
Few-shot object detection (FSOD) is proposed to solve the application problem of traditional detectors in scenarios lacking training samples. Meta-learning methods have attracted researchers' attention for their excellent generalization performance. They usually select support features of the same class according to the query labels to weight the query features. However, the model cannot actively identify targets by relying only on same-category support features, and this feature selection causes difficulties during testing, when labels are unavailable. The single-scale features of the model also lead to poor performance in small object detection. In addition, hard samples in the support branch degrade the backbone's representation of the support features, thus impacting the feature weighting process. To overcome these problems, we propose a multi-scale feature fusion and attentive learning (MSFFAL) framework for few-shot object detection. We first design the backbone with multi-scale feature fusion and a channel attention mechanism to improve the model's detection accuracy on small objects and its representation of hard support samples. Based on this, we propose an attention loss to replace the feature weighting module. The loss encourages the model to represent objects of the same category consistently in the two branches and realizes active recognition. The model no longer depends on query labels to select features during testing, which simplifies the testing process. The experiments show that MSFFAL outperforms the state-of-the-art (SOTA) by 0.7-7.8% on Pascal VOC and achieves 1.61 times the baseline result on MS COCO small object detection.
ABSTRACT
The detection of traffic signs is easily affected by changes in the weather, partial occlusion, and light intensity, which increases the number of potential safety hazards in practical applications of autonomous driving. To address this issue, a new traffic sign dataset, namely the enhanced Tsinghua-Tencent 100K (TT100K) dataset, was constructed, which includes difficult samples generated using various data augmentation strategies such as fog, snow, noise, occlusion, and blur. Meanwhile, a small traffic sign detection network for complex environments based on the framework of YOLOv5 (STC-YOLO) was constructed to suit complex scenes. In this network, the down-sampling multiple was adjusted, and a small object detection layer was adopted to obtain and transmit richer and more discriminative small object features. Then, a feature extraction module combining a convolutional neural network (CNN) and multi-head attention was designed to overcome the limitations of ordinary convolution and obtain a larger receptive field. Finally, the normalized Gaussian Wasserstein distance (NWD) metric was introduced to compensate for the sensitivity of the intersection over union (IoU) loss to the location deviation of tiny objects in the regression loss function. More accurate anchor box sizes for small objects were obtained using the K-means++ clustering algorithm. Experiments on 45 types of sign detection on the enhanced TT100K dataset showed that the STC-YOLO algorithm outperformed YOLOv5 by 9.3% in mean average precision (mAP), and the performance of STC-YOLO was comparable with that of the state-of-the-art methods on the public TT100K dataset and the CSUST Chinese Traffic Sign Detection Benchmark (CCTSDB2021) dataset.
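The NWD metric has a published closed form (Wang et al., 2021): each box is modelled as a 2-D Gaussian and the 2-Wasserstein distance between the Gaussians is mapped to a similarity in (0, 1]. A sketch follows; the normalizing constant is the original paper's value and may differ from STC-YOLO's choice.

```python
import torch

def nwd(box1: torch.Tensor, box2: torch.Tensor, c: float = 12.8) -> torch.Tensor:
    """Normalized Gaussian Wasserstein Distance sketch. Boxes are (cx, cy, w, h);
    each box becomes a Gaussian with mean (cx, cy) and std (w/2, h/2), so the
    squared 2-Wasserstein distance has a simple closed form."""
    g1 = torch.stack([box1[..., 0], box1[..., 1], box1[..., 2] / 2, box1[..., 3] / 2], dim=-1)
    g2 = torch.stack([box2[..., 0], box2[..., 1], box2[..., 2] / 2, box2[..., 3] / 2], dim=-1)
    w2 = ((g1 - g2) ** 2).sum(dim=-1)        # squared 2-Wasserstein distance
    return torch.exp(-torch.sqrt(w2) / c)    # similarity in (0, 1]

a = torch.tensor([[10.0, 10.0, 4.0, 4.0]])
b = torch.tensor([[11.0, 10.0, 5.0, 3.0]])
print(nwd(a, b))
```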
Subjects
Algorithms; Asian People; Humans; Benchmarking; Cluster Analysis; Light
ABSTRACT
Object detection in unmanned aerial vehicle (UAV) images is an extremely challenging task and involves problems such as multi-scale objects, a high proportion of small objects, and high overlap between objects. To address these issues, first, we design a Vectorized Intersection Over Union (VIOU) loss based on YOLOv5s. This loss uses the width and height of the bounding box as a vector to construct a cosine function that reflects the box's size and aspect ratio, and it directly compares the boxes' center-point values to improve the accuracy of bounding box regression. Second, we propose a Progressive Feature Fusion Network (PFFN) that addresses the insufficient semantic extraction of shallow features by PANet. This allows each node of the network to fuse semantic information from deep layers with features from the current layer, thus significantly improving the detection of small objects in multi-scale scenes. Finally, we propose an Asymmetric Decoupled (AD) head, which separates the classification network from the regression network and improves the classification and regression capabilities of the network. Our proposed method yields significant improvements on two benchmark datasets compared to YOLOv5s. On the VisDrone 2019 dataset, performance increased by 9.7%, from 34.9% to 44.6%, and on the DOTA dataset, performance increased by 2.1%.
ABSTRACT
In the field of metallurgy, the timely and accurate detection of surface defects on metallic materials is a crucial quality control task. However, current defect detection approaches face challenges with large model parameters and low detection rates. To address these issues, this paper proposes a lightweight recognition model for surface damage on steel strips, named LSD-YOLOv5. First, we design a shallow feature enhancement module to replace the first Conv structure in the backbone network. Second, the Coordinate Attention mechanism is introduced into the MobileNetV2 bottleneck structure to maintain the lightweight nature of the model. Then, we propose a smaller bidirectional feature pyramid network (BiFPN-S) and combine it with the Concat operation for efficient bidirectional cross-scale connectivity and weighted feature fusion. Finally, the Soft-DIoU-NMS algorithm is employed to enhance the recognition efficiency in scenarios where targets overlap. Compared with the original YOLOv5s, the LSD-YOLOv5 model achieves a reduction of 61.5% in model parameters and a 28.7% improvement in detection speed, while improving recognition accuracy by 2.4%. This demonstrates that the model achieves an optimal balance between detection accuracy and speed, while maintaining a lightweight structure.
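BiFPN-S builds on BiFPN's weighted feature fusion; the "fast normalized fusion" rule from the original BiFPN (Tan et al., 2020) can be sketched as below, offered as background rather than as LSD-YOLOv5's exact module.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """BiFPN-style fast normalized fusion: each input feature map gets a learnable
    non-negative weight that is normalized before blending. Inputs must already
    share one shape (resized/projected beforehand)."""
    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.weights)          # keep weights non-negative
        w = w / (w.sum() + self.eps)          # normalize so they sum to ~1
        return sum(wi * fi for wi, fi in zip(w, feats))

f1, f2 = torch.randn(1, 64, 20, 20), torch.randn(1, 64, 20, 20)
print(WeightedFusion(2)([f1, f2]).shape)  # torch.Size([1, 64, 20, 20])
```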
ABSTRACT
This study aimed to address the problems of low detection accuracy and inaccurate positioning of small-object detection in remote sensing images. An improved architecture based on the Swin Transformer and YOLOv5 is proposed. First, Complete-IoU (CIoU) was introduced to improve the K-means clustering algorithm, and anchors of appropriate size for the dataset were then generated. Second, a modified CSPDarknet53 structure combined with the Swin Transformer was proposed to retain sufficient global context information and extract more differentiated features through multi-head self-attention. Regarding the path-aggregation neck, a simple and efficient weighted bidirectional feature pyramid network was proposed for effective cross-scale feature fusion. In addition, an extra prediction head and new feature fusion layers were added for small objects. Finally, Coordinate Attention (CA) was introduced into the YOLOv5 network to improve the accuracy of small-object features in remote sensing images. The effectiveness of the proposed method was demonstrated by several experiments on the DOTA (Dataset for Object detection in Aerial images). The mean average precision (mAP) on the DOTA dataset reached 74.7%. Compared with YOLOv5, the proposed method improved the mAP by 8.9%, achieving higher accuracy for small-object detection in remote sensing images.
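Anchor clustering of this kind is usually run on ground-truth box widths and heights; the sketch below uses the classic 1 - IoU distance for brevity, whereas the paper substitutes CIoU, which is not reproduced here.

```python
import numpy as np

def kmeans_anchors(wh: np.ndarray, k: int = 9, iters: int = 100, seed: int = 0):
    """Anchor clustering sketch on (w, h) pairs, assigning each box to the anchor
    with the highest IoU (equivalently, the smallest 1 - IoU distance)."""
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), k, replace=False)]
    for _ in range(iters):
        inter = np.minimum(wh[:, None, 0], centers[None, :, 0]) * \
                np.minimum(wh[:, None, 1], centers[None, :, 1])
        union = wh[:, 0:1] * wh[:, 1:2] + (centers[:, 0] * centers[:, 1])[None, :] - inter
        assign = np.argmax(inter / union, axis=1)           # highest-IoU cluster
        new = np.array([wh[assign == i].mean(axis=0) if np.any(assign == i) else centers[i]
                        for i in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return centers[np.argsort(centers.prod(axis=1))]        # sorted by anchor area

wh = np.abs(np.random.randn(500, 2)) * 50 + 10               # synthetic box sizes
print(kmeans_anchors(wh, k=6))
```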
ABSTRACT
In the context of COVID-19 pandemic prevention and control, it is of vital significance to realize accurate face mask detection via computer vision techniques. In this paper, a novel attention-improved Yolo (AI-Yolo) model is proposed, which can handle existing challenges in complicated real-world scenarios with dense distributions, small-size object detection, and interference from similar occlusions. In particular, a selective kernel (SK) module is used to achieve a convolution-domain soft attention mechanism with split, fuse, and select operations; a spatial pyramid pooling (SPP) module is applied to enhance the expression of local and global features, which enriches the receptive field information; and a feature fusion (FF) module is utilized to promote sufficient fusion of multi-scale features from each resolution branch, which adopts basic convolution operators without excessive computational complexity. In addition, the complete intersection over union (CIoU) loss function is adopted in the training stage for accurate positioning. Experiments are carried out on two challenging public face mask detection datasets, and the results demonstrate the superiority of the proposed AI-Yolo against seven other state-of-the-art object detection algorithms, achieving the best results in terms of mean average precision and F1 score on both datasets. Furthermore, the effectiveness of the meticulously designed modules in AI-Yolo is validated through extensive ablation studies. In summary, the proposed AI-Yolo is competent to accomplish face mask detection tasks under extremely complex situations with precise localization and accurate classification.
ABSTRACT
BACKGROUND: Low-dose CT (LDCT) images usually contain serious noise and artifacts, which weaken the readability of the image. OBJECTIVE: To solve this problem, we propose a compound feature attention network with edge enhancement for LDCT denoising (CFAN-Net), which consists of an edge-enhancement module and a compound feature attention block (CFAB). METHODS: The edge-enhancement module extracts edge details with a trainable Sobel convolution. CFAB consists of an interactive feature learning module (IFLM), a multi-scale feature fusion module (MFFM), and a joint attention module (JAB), which removes noise from LDCT images in a coarse-to-fine manner. First, in IFLM, the noise is initially removed by cross-latitude interactive judgment learning. Second, in MFFM, multi-scale and pixel attention are integrated to explore fine noise removal. Finally, in JAB, we focus on key information, extract useful features, and improve the efficiency of network learning. To construct a high-quality image, we repeat the above operation by cascading CFAB. RESULTS: By applying CFAN-Net to process the 2016 NIH AAPM-Mayo LDCT challenge test dataset, experiments show that the peak signal-to-noise ratio value is 33.9692 and the structural similarity value is 0.9198. CONCLUSIONS: Compared with several existing LDCT denoising algorithms, CFAN-Net effectively preserves the texture of CT images while removing noise and artifacts.
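A trainable Sobel convolution can be realized by initializing depthwise kernels to the Sobel operators and leaving them learnable; the sketch below illustrates this idea in PyTorch and is not CFAN-Net's exact layer.

```python
import torch
import torch.nn as nn

class SobelConv(nn.Module):
    """Sketch of a trainable Sobel edge-extraction layer: kernels start as the
    horizontal/vertical Sobel filters but remain learnable parameters."""
    def __init__(self, channels: int):
        super().__init__()
        gx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
        gy = gx.t()
        # One Gx and one Gy kernel per input channel: weight shape (2C, 1, 3, 3).
        weight = torch.stack([gx, gy]).unsqueeze(1).repeat(channels, 1, 1, 1)
        # Depthwise convolution so each channel is filtered independently.
        self.conv = nn.Conv2d(channels, 2 * channels, 3, padding=1,
                              groups=channels, bias=False)
        self.conv.weight = nn.Parameter(weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x)  # (N, 2C, H, W): horizontal and vertical edge maps

print(SobelConv(1)(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 2, 64, 64])
```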
Subjects
Algorithms; Tomography, X-Ray Computed; Tomography, X-Ray Computed/methods; Signal-to-Noise Ratio; Artifacts; Image Processing, Computer-Assisted
ABSTRACT
Microplastic particles produced by non-degradable waste plastic bottles have a critical impact on the environment. Proper recycling is a prerequisite for protecting the environment and improving economic benefits. In this paper, a multi-scale feature fusion method for RGB and hyperspectral images based on Segmenting Objects by Locations (RHFF-SOLOv1) is proposed, which uses multi-sensor fusion technology to improve the accuracy of identifying transparent polyethylene terephthalate (PET) bottles, blue PET bottles, and transparent polypropylene (PP) bottles on a black conveyor belt. A line-scan camera and a near-infrared (NIR) hyperspectral camera covering the spectral range from 935.9 nm to 1722.5 nm are used to obtain RGB and hyperspectral images synchronously. Moreover, we propose a hyperspectral feature band selection method that effectively reduces the dimensionality and selects the bands from 1087.6 nm to 1285.1 nm as the features of the hyperspectral image. The results show that the proposed fusion method improves the accuracy of plastic bottle classification compared with the SOLOv1 method, reaching an overall accuracy of 95.55%. Finally, compared with other spatial-spectral fusion methods, RHFF-SOLOv1 is superior to most of them and achieves the best accuracy (97.5%) in blue bottle classification.
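The band-selection step reduces the hypercube to the reported 1087.6-1285.1 nm window; a trivial NumPy sketch is shown below, with the (H, W, bands) cube layout and the example wavelength grid being assumptions.

```python
import numpy as np

def select_bands(cube: np.ndarray, wavelengths: np.ndarray,
                 low: float = 1087.6, high: float = 1285.1) -> np.ndarray:
    """Keep only the hyperspectral bands whose wavelengths fall inside the
    reported feature window; how the bands were chosen is described in the paper
    and not reproduced here."""
    mask = (wavelengths >= low) & (wavelengths <= high)
    return cube[..., mask]

wavelengths = np.linspace(935.9, 1722.5, 224)          # illustrative NIR sampling grid
cube = np.random.rand(100, 100, 224).astype(np.float32)
subset = select_bands(cube, wavelengths)
print(subset.shape)  # (100, 100, <number of selected bands>)
```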
ABSTRACT
Three-dimensional object detection in point clouds can provide more accurate object data for autonomous driving. In this paper, we propose a method named MA-MFFC that uses an attention mechanism and a multi-scale feature fusion network with a ConvNeXt module to improve the accuracy of object detection. The multi-attention (MA) module contains point-channel attention and voxel attention, which are used in voxelization and the 3D backbone. By considering point-wise and channel-wise information, the attention mechanism enhances the information of key points in voxels, suppresses background point clouds during voxelization, and improves the robustness of the network. The voxel attention module is used in the 3D backbone to obtain more robust and discriminative voxel features. The MFFC module contains the multi-scale feature fusion network and the ConvNeXt module; the multi-scale feature fusion network extracts rich feature information and improves detection accuracy, and the convolutional layer is replaced with the ConvNeXt module to enhance the feature extraction capability of the network. The experimental results show that the average accuracy is 64.60% for pedestrians and 80.92% for cyclists on the KITTI dataset, which is 1.33% and 2.1% higher, respectively, than the baseline network, enabling more accurate detection and localization of more difficult objects.
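The ConvNeXt block that replaces the plain convolutional layer has a standard published structure (Liu et al., 2022); a minimal sketch follows, using the published expansion ratio of 4 and omitting layer scale, which may differ from MA-MFFC's setting.

```python
import torch
import torch.nn as nn

class ConvNeXtBlock(nn.Module):
    """Standard ConvNeXt block: depthwise 7x7 conv, LayerNorm, inverted MLP with
    GELU, and a residual connection (layer scale omitted for brevity)."""
    def __init__(self, dim: int):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, 7, padding=3, groups=dim)  # depthwise 7x7
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, 4 * dim)   # pointwise expand
        self.act = nn.GELU()
        self.pwconv2 = nn.Linear(4 * dim, dim)   # pointwise project back

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        shortcut = x
        x = self.dwconv(x).permute(0, 2, 3, 1)   # (N, H, W, C) for LayerNorm/Linear
        x = self.pwconv2(self.act(self.pwconv1(self.norm(x))))
        return shortcut + x.permute(0, 3, 1, 2)  # residual connection

print(ConvNeXtBlock(96)(torch.randn(1, 96, 28, 28)).shape)  # torch.Size([1, 96, 28, 28])
```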
Subjects
Autonomous Vehicles; Humans; Pedestrians
ABSTRACT
In view of the poor performance of traditional feature point detection methods in low-texture situations, we design a new self-supervised feature extraction network, based on deep learning, that can be applied to the front-end feature extraction module of a visual odometry (VO) system. First, the network uses a feature pyramid structure to perform multi-scale feature fusion and obtain a feature map containing multi-scale information. Then, the feature map is passed through a position attention module and a channel attention module to obtain the feature dependencies of the spatial and channel dimensions, respectively, and the weighted spatial feature map and channel feature map are added element-wise to enhance the feature representation. Finally, the weighted feature maps are used to train the detector and descriptor, respectively. In addition, to improve the prediction accuracy of feature point locations and speed up network convergence, we add a confidence loss term and a tolerance loss term to the loss functions of the detector and descriptor, respectively. The experiments show that our network achieves satisfactory performance on the HPatches dataset and KITTI dataset, indicating the reliability of the network.