Results 1 - 4 of 4

1.
Sensors (Basel); 24(1), 2024 Jan 03.
Article in English | MEDLINE | ID: mdl-38203151

ABSTRACT

The accurate and efficient detection of defective insulators is an essential prerequisite for ensuring the safety of the power grid in the new generation of intelligent electrical system inspections. Traditional object detection algorithms for finding defective insulators in images suffer from excessive parameter counts, low accuracy, and slow detection speed. To address these issues, this article proposes an insulator defect detection model, Faster R-CNN-tiny, based on a lightweight Faster R-CNN (Faster Region-based Convolutional Neural Network). First, the backbone network of Faster R-CNN is made lightweight by substituting EfficientNet for ResNet (Residual Network), greatly reducing the number of model parameters while increasing detection accuracy. Second, a feature pyramid builds feature maps at several resolutions for feature fusion, enabling the detection of objects at different scales. In addition, replacing ordinary convolutions in the network with more efficient depth-wise separable convolutions increases detection speed at the cost of a slight drop in detection accuracy. Transfer learning is introduced, and a training schedule that freezes and then unfreezes the model is employed to enhance the network's ability to detect small target defects. The proposed model is validated on the insulator self-explosion defect dataset. The experimental results show that Faster R-CNN-tiny significantly outperforms the Faster R-CNN (ResNet) model in terms of mean average precision (mAP), frames per second (FPS), and number of parameters.
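To make the architectural changes concrete, here is a minimal PyTorch/torchvision sketch, not the authors' code: it builds a Faster R-CNN detector on an EfficientNet-B0 backbone, defines the kind of depthwise separable convolution block that can replace ordinary convolutions, and shows a freeze/unfreeze helper for the transfer-learning schedule. The backbone variant, anchor sizes, class count, and the use of a single feature map (rather than the feature pyramid described in the abstract) are illustrative assumptions.

```python
import torch.nn as nn
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign


class DepthwiseSeparableConv(nn.Module):
    """Depthwise separable convolution: a per-channel (depthwise) 3x3 conv
    followed by a 1x1 (pointwise) conv, cutting parameters and FLOPs
    relative to a standard convolution at a small cost in accuracy."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))


# Faster R-CNN on an EfficientNet-B0 backbone (illustrative settings).
backbone = torchvision.models.efficientnet_b0(weights="DEFAULT").features
backbone.out_channels = 1280  # channels of EfficientNet-B0's last feature map
model = FasterRCNN(
    backbone,
    num_classes=2,  # assumption: background + defective insulator
    rpn_anchor_generator=AnchorGenerator(sizes=((32, 64, 128, 256, 512),),
                                         aspect_ratios=((0.5, 1.0, 2.0),)),
    box_roi_pool=MultiScaleRoIAlign(featmap_names=["0"], output_size=7,
                                    sampling_ratio=2),
)


def set_backbone_trainable(detector, trainable):
    """Freeze/unfreeze schedule for transfer learning: train the detection
    head with the backbone frozen, then unfreeze to fine-tune end to end."""
    for p in detector.backbone.parameters():
        p.requires_grad = trainable
```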

2.
Sensors (Basel); 23(14), 2023 Jul 24.
Article in English | MEDLINE | ID: mdl-37514937

ABSTRACT

Meter reading is an important part of intelligent inspection, and current meter reading methods based on object detection suffer from low accuracy and large reading errors. To improve the accuracy of automatic meter reading, this paper proposes an automatic reading method for pointer-type meters based on the YOLOv5-Meter Reading (YOLOv5-MR) model. First, to improve the detection of small targets in the YOLOv5 framework, a multi-scale target detection layer is added and a set of anchors is designed for the lightning-rod dial dataset. Second, the loss function and up-sampling method are improved to speed up training convergence and to obtain the optimal up-sampling parameters. Finally, a new method for fitting the outer circle of the dial is proposed, and the dial reading is calculated with a central-angle algorithm. Experimental results on the self-built dataset show that the mean average precision (mAP) of the YOLOv5-MR detection model reaches 79%, which is 3% higher than the baseline YOLOv5 model, and that it outperforms other advanced pointer-type meter reading models.
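The abstract does not give the exact reading formula, but the central-angle idea can be sketched as follows: assuming the fitted dial centre, the zero-scale and full-scale tick positions, and the detected pointer tip are available as pixel coordinates (all hypothetical inputs), the reading is the pointer's angular sweep from the zero tick divided by the full-range sweep, mapped onto the meter range. A minimal Python sketch, assuming the scale increases clockwise:

```python
import math


def center_angle_reading(center, zero_tick, full_tick, pointer_tip,
                         min_value=0.0, max_value=100.0):
    """Estimate a pointer-meter reading by the central-angle method: measure
    the clockwise angle swept from the zero tick to the pointer around the
    fitted dial centre, and scale it by the angle of the full range.
    All points are (x, y) pixel coordinates in image space (y axis down)."""
    def angle(p):
        return math.atan2(p[1] - center[1], p[0] - center[0])

    def clockwise_sweep(a_from, a_to):
        # with y pointing down, a visually clockwise rotation increases atan2
        return (a_to - a_from) % (2 * math.pi)

    zero_a, full_a, ptr_a = angle(zero_tick), angle(full_tick), angle(pointer_tip)
    full_sweep = clockwise_sweep(zero_a, full_a)
    ptr_sweep = clockwise_sweep(zero_a, ptr_a)
    frac = min(max(ptr_sweep / full_sweep, 0.0), 1.0)  # clamp to the scale
    return min_value + frac * (max_value - min_value)


# Hypothetical example: a 0-10 meter with the pointer straight up reads ~5.0.
value = center_angle_reading(center=(200, 200), zero_tick=(80, 300),
                             full_tick=(320, 300), pointer_tip=(200, 60),
                             min_value=0.0, max_value=10.0)
```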

3.
Sensors (Basel); 23(16), 2023 Aug 08.
Article in English | MEDLINE | ID: mdl-37631571

ABSTRACT

Scene text recognition is a crucial area of research in computer vision. However, current mainstream scene text recognition models suffer from incomplete feature extraction: they use a small downsampling scale in order to retain more features, which limits their ability to extract complete features for each character in the image and lowers the accuracy of text recognition. To address this issue, this paper proposes a text recognition model based on multi-scale fusion and the convolutional recurrent neural network (CRNN). The proposed model has a convolutional layer, a feature fusion layer, a recurrent layer, and a transcription layer. The convolutional layer extracts features at two scales, yielding two distinct outputs for the input text image. The feature fusion layer fuses the features from the two scales into a new feature. The recurrent layer learns contextual features from the input feature sequence. The transcription layer outputs the final result. The proposed model not only enlarges the receptive field but also learns more image features at different scales; it therefore extracts a more complete set of features and achieves better text recognition. Experimental results demonstrate that the proposed model outperforms the CRNN model in text recognition accuracy on scene text datasets such as Street View Text, IIIT-5K, ICDAR2003, and ICDAR2013.
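As an illustration of the two-scale fusion idea (not the authors' architecture; the layer sizes and the pooling-based scale alignment are assumptions), here is a compact PyTorch sketch of a CRNN whose convolutional stage extracts features at two downsampling rates, fuses them, and feeds the fused sequence to a bidirectional LSTM and a linear transcription layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleCRNN(nn.Module):
    """Sketch of a CRNN with two-scale feature fusion: a fine branch (H/4)
    and a coarse branch (H/8) extract features, the fine branch is pooled to
    the coarse resolution, the two are concatenated, and a Bi-LSTM plus a
    linear layer produce per-timestep character logits (e.g. for CTC)."""
    def __init__(self, num_classes, img_h=32):
        super().__init__()
        self.fine = nn.Sequential(              # downsamples by 4
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.coarse = nn.Sequential(            # downsamples by 8
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        feat_h = img_h // 8
        self.rnn = nn.LSTM(input_size=(128 + 256) * feat_h, hidden_size=256,
                           num_layers=2, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(512, num_classes)   # transcription layer

    def forward(self, x):                        # x: (B, 1, H, W)
        f = self.fine(x)                         # (B, 128, H/4, W/4)
        c = self.coarse(x)                       # (B, 256, H/8, W/8)
        f = F.adaptive_avg_pool2d(f, c.shape[-2:])   # align fine to coarse size
        fused = torch.cat([f, c], dim=1)         # feature fusion: (B, 384, H/8, W/8)
        b, ch, h, w = fused.shape
        seq = fused.permute(0, 3, 1, 2).reshape(b, w, ch * h)  # width as time axis
        out, _ = self.rnn(seq)                   # contextual features
        return self.fc(out)                      # (B, W/8, num_classes) logits
```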

4.
PeerJ; 9: e11262, 2021.
Article in English | MEDLINE | ID: mdl-33986992

ABSTRACT

DNA-binding proteins (DBPs) play pivotal roles in many biological processes such as alternative splicing, RNA editing, and methylation. Many traditional machine learning (ML) and deep learning (DL) methods have been proposed to predict DBPs; however, these methods either rely on manual feature extraction or fail to capture long-term dependencies in the DNA sequence. This paper proposes a method, called PDBP-Fusion, that identifies DBPs based on the fusion of local features and long-term dependencies, using only primary sequences. A convolutional neural network (CNN) learns local features, and a bidirectional long short-term memory network (Bi-LSTM) captures critical long-term dependencies in context. In addition, feature extraction, model training, and model prediction are performed simultaneously. PDBP-Fusion predicts DBPs with 86.45% sensitivity, 79.13% specificity, 82.81% accuracy, and 0.661 MCC on the PDB14189 benchmark dataset; its MCC is at least 9.1% higher than that of other advanced prediction models. Moreover, PDBP-Fusion also achieves superior performance and robustness on the independent PDB2272 dataset. These results demonstrate that PDBP-Fusion can predict DBPs from sequences accurately and effectively; an online server is available at http://119.45.144.26:8080/PDBP-Fusion/.
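As a rough illustration of fusing local features with long-term dependencies (not the published PDBP-Fusion architecture; the alphabet size, filter counts, and pooling choices are assumptions), here is a minimal PyTorch sketch that combines a 1-D CNN with a Bi-LSTM over a one-hot encoded primary sequence:

```python
import torch
import torch.nn as nn


class CNNBiLSTMFusion(nn.Module):
    """Sketch of a CNN + Bi-LSTM binary classifier in the spirit of
    PDBP-Fusion: a 1-D CNN learns local motif features from a one-hot
    encoded primary sequence, a Bi-LSTM captures long-term dependencies,
    and a sigmoid head outputs the probability of being a DBP."""
    def __init__(self, alphabet_size=20, num_filters=64, lstm_hidden=64):
        # alphabet_size: e.g. 20 for amino acids, 4 for nucleotides (assumption)
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(alphabet_size, num_filters, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.bilstm = nn.LSTM(num_filters, lstm_hidden,
                              batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * lstm_hidden, 1)

    def forward(self, x):                 # x: (batch, alphabet_size, length)
        h = self.conv(x)                  # local features: (batch, filters, length/2)
        h = h.transpose(1, 2)             # to (batch, time, features) for the LSTM
        out, _ = self.bilstm(h)           # contextual features from both directions
        pooled = out.mean(dim=1)          # average over sequence positions
        return torch.sigmoid(self.head(pooled)).squeeze(-1)  # P(sequence is a DBP)
```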
