Study on the Recognition of Coal Miners' Unsafe Behavior and Status in the Hoist Cage Based on Machine Vision.
Yao, Wei; Wang, Aiming; Nie, Yifan; Lv, Zhengyan; Nie, Shuai; Huang, Congwei; Liu, Zhenyu.
  • Yao W; School of Mechanical and Electrical Engineering, China University of Mining & Technology, Beijing 100083, China.
  • Wang A; Digital Inteltech, CHN Energy, Beijing 100011, China.
  • Nie Y; School of Mechanical and Electrical Engineering, China University of Mining & Technology, Beijing 100083, China.
  • Lv Z; School of Mechanical and Electrical Engineering, China University of Mining & Technology, Beijing 100083, China.
  • Nie S; School of Mechanical and Electrical Engineering, China University of Mining & Technology, Beijing 100083, China.
  • Huang C; School of Mechanical and Electrical Engineering, China University of Mining & Technology, Beijing 100083, China.
  • Liu Z; School of Mechanical and Electrical Engineering, China University of Mining & Technology, Beijing 100083, China.
Sensors (Basel) ; 23(21)2023 Oct 28.
Article in English | MEDLINE | ID: mdl-37960492
The hoist cage is used to lift miners in a coal mine's auxiliary shaft. Monitoring miners' unsafe behaviors and their status in the hoist cage is crucial to production safety in coal mines. In this study, a visual detection model is proposed to estimate the number and categories of miners, and to identify whether the miners are wearing helmets and whether they have fallen in the hoist cage. A dataset with eight categories of miners' statuses in hoist cages was built for training and validating the model. Several classical models were trained on this dataset for comparison, from which the YOLOv5s model was selected as the baseline. Because of small-sized targets, poor lighting conditions, coal dust, and occlusion, the detection accuracy of the YOLOv5s model was only 89.2%. To obtain better detection accuracy, the k-means++ clustering algorithm, a BiFPN-based feature fusion network, the convolutional block attention module (CBAM), and the CIoU loss function were introduced to improve the YOLOv5s model, yielding an attentional multi-scale cascaded feature fusion-based YOLOv5s model (AMCFF-YOLOv5s). The training results on the self-built dataset indicate that its detection accuracy increased to 97.6%. Moreover, the AMCFF-YOLOv5s model was shown to be robust to noise and light.
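One of the components named above, k-means++ anchor clustering, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the dataset's bounding boxes are given as (width, height) pairs and uses the common 1 − IoU distance (boxes compared as if sharing a top-left corner) for both the k-means++ seeding and the refinement step; the function name and parameters are hypothetical.

```python
import random

def iou(box, cluster):
    # box, cluster: (w, h) pairs; IoU computed as if both share a top-left corner
    w = min(box[0], cluster[0])
    h = min(box[1], cluster[1])
    inter = w * h
    union = box[0] * box[1] + cluster[0] * cluster[1] - inter
    return inter / union

def kmeans_pp_anchors(boxes, k, iters=100, seed=0):
    rng = random.Random(seed)
    # k-means++ seeding: first center uniform at random, the rest sampled
    # with probability proportional to the distance (1 - IoU) to the
    # nearest center chosen so far
    centers = [rng.choice(boxes)]
    while len(centers) < k:
        dists = [min(1 - iou(b, c) for c in centers) for b in boxes]
        r = rng.uniform(0, sum(dists))
        acc = 0.0
        for b, d in zip(boxes, dists):
            acc += d
            if acc >= r:
                centers.append(b)
                break
    # standard k-means refinement: assign each box to the center with the
    # highest IoU, then recompute each center as the mean (w, h) of its group
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for b in boxes:
            idx = max(range(k), key=lambda i: iou(b, centers[i]))
            groups[idx].append(b)
        new_centers = []
        for i, g in enumerate(groups):
            if g:
                new_centers.append((sum(w for w, _ in g) / len(g),
                                    sum(h for _, h in g) / len(g)))
            else:
                new_centers.append(centers[i])  # keep an empty cluster's center
        if new_centers == centers:
            break
        centers = new_centers
    # return anchors sorted by area, as YOLO-style anchor lists conventionally are
    return sorted(centers, key=lambda c: c[0] * c[1])
```

The IoU-based distance is the standard choice for anchor clustering because Euclidean distance over (w, h) penalizes large boxes disproportionately; the k-means++ seeding spreads the initial anchors across box scales, which matters when small targets (such as distant miners in cage footage) dominate the dataset.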