Abstract:
Accurate, real-time detection of safety helmet wearing is key to reducing safety hazards on construction sites. However, construction site environments are complex, with dense personnel, occluded targets, and cluttered backgrounds; existing helmet detection algorithms adapt poorly to these conditions, producing false and missed detections of dense small targets and occluded targets, and demanding high computing power. To address this, a safety helmet detection algorithm based on YOLOv7-tiny, named DS-YOLO, is proposed. In the backbone network, a DS-ELAN module built on distribution shifting convolution is used and a lightweight attention mechanism is introduced, reducing floating-point operations while enhancing the extraction of key features. In the neck network, BiFPN is combined with an additional small-target detection layer to strengthen multi-scale feature fusion, improving the model's detection of small and dense targets. WIoU Loss is adopted as the bounding box regression loss function so that training focuses on anchor boxes of ordinary quality, further improving model performance. Experimental results show that, compared with YOLOv7-tiny, DS-YOLO reduces floating-point operations by 10.6%, raises mAP by 4.1% on all targets and by 3.2% on small targets, and achieves a detection speed of 36.6 frames/s. The model strikes a good balance between speed and accuracy and is well suited to deployment in real construction site environments with limited computing power.