yihong1120/Construction-Hazard-Detection

🇬🇧 English | 🇹🇼 繁體中文

AI-Driven Construction Safety Banner



"Construction-Hazard-Detection" is an AI-powered tool designed to enhance safety on construction sites. By leveraging the YOLO model for object detection, it identifies potential hazards such as:

  • Workers without helmets
  • Workers without safety vests
  • Workers near machinery or vehicles
  • Workers in restricted areas (restricted zones are generated automatically by clustering safety-cone coordinates with HDBSCAN)

Post-processing algorithms further improve detection accuracy. The system is built for real-time deployment, offering instant analysis and alerts for identified hazards.
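The restricted-zone idea can be pictured with a short, self-contained sketch: group cone coordinates into clusters, take each cluster's convex hull as a monitored zone, then test worker positions against it. For brevity, a simple distance-threshold grouping stands in here for HDBSCAN (the project's actual clustering method); all function names and thresholds are illustrative, not the project's API.

```python
from math import dist

def group_cones(cones, max_gap=150.0):
    """Greedy distance-threshold grouping (a stand-in for HDBSCAN)."""
    clusters = []
    for p in cones:
        for c in clusters:
            if any(dist(p, q) <= max_gap for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h[:-1]
    return half(pts) + half(list(reversed(pts)))

def inside(point, polygon):
    """Ray-casting point-in-polygon test."""
    x, y = point
    hit = False
    for i in range(len(polygon)):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % len(polygon)]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            hit = not hit
    return hit

# Four cones forming a square zone, plus one stray cone far away.
cones = [(0, 0), (100, 0), (100, 100), (0, 100), (900, 900)]
zones = [convex_hull(c) for c in group_cones(cones) if len(c) >= 3]
print(inside((50, 50), zones[0]))  # → True: worker is inside the zone
```

Clusters with fewer than three cones are discarded, since they cannot enclose an area.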

Additionally, the system integrates AI recognition results in real time via a web interface. It can send notifications and real-time on-site images through messaging apps such as LINE, Messenger, WeChat, and Telegram for prompt alerts and reminders. The system also supports multiple languages, enabling users to receive notifications and interact with the interface in their preferred language. Supported languages include:

  • 🇹🇼 Traditional Chinese (Taiwan)
  • 🇨🇳 Simplified Chinese (Mainland China)
  • 🇫🇷 French
  • 🇬🇧 English
  • 🇹🇭 Thai
  • 🇻🇳 Vietnamese
  • 🇮🇩 Indonesian

This multi-language support makes the system accessible to a global audience, improving usability across different regions.



Hazard Detection Diagram

Hazard Detection Examples

Below are examples of real-time hazard detection by the system:

Workers without helmets or safety vests

Workers near machinery or vehicles

Workers in restricted areas

Usage

Before running the application, you need to configure the system by specifying the details of the video streams and other parameters in a JSON configuration file. An example configuration file config/configuration.json should look like this:

[
  {
    "video_url": "https://cctv1.kctmc.nat.gov.tw/6e559e58/",
    "site": "Kaohsiung",
    "stream_name": "Test",
    "model_key": "yolo11n",
    "notifications": {
      "line_token_1": "language_1",
      "line_token_2": "language_2"
    },
    "detect_with_server": true,
    "expire_date": "2024-12-31T23:59:59",
    "detection_items": {
      "detect_no_safety_vest_or_helmet": true,
      "detect_near_machinery_or_vehicle": true,
      "detect_in_restricted_area": true
    }
  },
  {
    "video_url": "streaming URL",
    "site": "Factory_1",
    "stream_name": "camera_1",
    "model_key": "yolo11n",
    "notifications": {
      "line_token_3": "language_3",
      "line_token_4": "language_4"
    },
    "detect_with_server": false,
    "expire_date": "No Expire Date",
    "detection_items": {
      "detect_no_safety_vest_or_helmet": true,
      "detect_near_machinery_or_vehicle": false,
      "detect_in_restricted_area": true
    }
  }
]

Each object in the array represents a video stream configuration with the following fields:

  • video_url: The URL of the live video stream. This can include:

    • Surveillance streams
    • RTSP streams
    • Secondary streams
    • YouTube videos or live streams
    • Discord streams
  • site: The location of the monitoring system (e.g., construction site, factory).

  • stream_name: The name assigned to the camera or stream (e.g., "Front Gate", "Camera 1").

  • model_key: The key identifier for the machine learning model to use (e.g., "yolo11n").

  • notifications: A mapping of LINE Messaging API tokens to the language used for each token's notifications.

    • line_token_1, line_token_2, etc.: These are the LINE API tokens.
    • language_1, language_2, etc.: The languages for the notifications (e.g., "en" for English, "zh-TW" for Traditional Chinese).

    Supported languages for notifications include:

    • zh-TW: Traditional Chinese
    • zh-CN: Simplified Chinese
    • en: English
    • fr: French
    • vi: Vietnamese
    • id: Indonesian
    • th: Thai

    For information on how to obtain a LINE token, please refer to line_notify_guide_en.

  • detect_with_server: Boolean value indicating whether to run object detection through a server API. If true, the system uses the server for object detection; if false, object detection runs locally on the machine.

  • expire_date: The expiry date for the video stream configuration, in ISO 8601 format (e.g., "2024-12-31T23:59:59"). If the configuration should never expire, use a string such as "No Expire Date".

  • detection_items: Specifies which safety checks to run for this stream. Each item can be set to true to enable it or false to disable it. The available detection items are:

    • detect_no_safety_vest_or_helmet: Detects if a person is not wearing a safety vest or helmet. This is essential for monitoring compliance with safety gear requirements on sites where such equipment is mandatory for personnel protection.
    • detect_near_machinery_or_vehicle: Detects if a person is dangerously close to machinery or vehicles. This helps prevent accidents caused by close proximity to heavy equipment or moving vehicles, often encountered in construction sites or industrial areas.
    • detect_in_restricted_area: Detects if a person has entered a restricted or controlled area. Restricted areas may be dangerous for untrained personnel or may contain sensitive equipment, so this setting aids in controlling access to such zones.
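As a sketch of how such a configuration file might be read, the loader below parses the JSON array, drops entries whose expire_date has passed, and collects the enabled detection items per stream. This is illustrative only, not the project's actual loader; it assumes the field names shown in the example above.

```python
import json
from datetime import datetime

def load_stream_configs(path):
    """Parse the JSON config and drop streams whose expire_date has passed."""
    with open(path, encoding="utf-8") as f:
        configs = json.load(f)

    active = []
    for cfg in configs:
        expire = cfg.get("expire_date", "No Expire Date")
        if expire != "No Expire Date":
            if datetime.fromisoformat(expire) < datetime.now():
                continue  # skip expired stream configurations
        # Keep only the detection items explicitly set to true.
        enabled = [k for k, v in cfg.get("detection_items", {}).items() if v]
        active.append((cfg["site"], cfg["stream_name"], enabled))
    return active
```

Calling load_stream_configs("config/configuration.json") on the example above would return one tuple per non-expired stream, each listing its enabled checks.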

You can now launch the hazard detection system in either a Docker container or a Python environment:

Docker

Usage for Docker

To run the hazard detection system, you need to have Docker and Docker Compose installed on your machine. Follow these steps to get the system up and running:

  1. Clone the repository to your local machine.

    git clone https://github.com/yihong1120/Construction-Hazard-Detection.git
    
  2. Navigate to the cloned directory.

    cd Construction-Hazard-Detection
    
  3. Build the service images using Docker Compose:

    docker-compose build
  4. To run the application, use the following command:

    docker-compose up

    You can view the detection results at http://localhost

  5. To stop the services, use the following command:

    docker-compose down
Python

Usage for Python

To run the hazard detection system with Python, follow these steps:

  1. Clone the repository to your local machine:

    git clone https://github.com/yihong1120/Construction-Hazard-Detection.git
  2. Navigate to the cloned directory:

    cd Construction-Hazard-Detection
  3. Install required packages:

    pip install -r requirements.txt
  4. Install and launch MySQL service (if required):

    For Ubuntu users:

    sudo apt install mysql-server
    sudo systemctl start mysql.service

    For other operating systems, download and install the MySQL distribution that matches your platform from the official MySQL downloads page.

  5. Start user management API:

    gunicorn -w 1 -b 0.0.0.0:8000 "examples.user_management.app:user-managements-app"
  6. Run object detection API:

    uvicorn examples.YOLO_server.app:sio_app --host 0.0.0.0 --port 8001
  7. Run the main application with a specific configuration file:

    python3 main.py --config config/configuration.json

    Replace config/configuration.json with the actual path to your configuration file.

  8. Start the streaming web service:

    For Linux users:

    uvicorn examples.streaming_web.app:sio_app --host 0.0.0.0 --port 8002

    For Windows users:

    waitress-serve --host=127.0.0.1 --port=8002 "examples.streaming_web.app:streaming-web-app"
    

Additional Information

  • The system logs are available within the Docker container and can be accessed for debugging purposes.
  • The output images with detections (if enabled) will be saved to the specified output path.
  • Notifications will be sent through LINE messaging API during the specified hours if hazards are detected.

Notes

  • Ensure that the Dockerfile is present in the root directory of the project and is properly configured as per your application's requirements.

For more information on Docker usage and commands, refer to the Docker documentation.

Dataset Information

The primary dataset for training this model is the Construction Site Safety Image Dataset from Roboflow. We have enriched this dataset with additional annotations and made it openly accessible on Roboflow. The enhanced dataset can be found here: Construction Hazard Detection on Roboflow. This dataset includes the following labels:

  • 0: 'Hardhat'
  • 1: 'Mask'
  • 2: 'NO-Hardhat'
  • 3: 'NO-Mask'
  • 4: 'NO-Safety Vest'
  • 5: 'Person'
  • 6: 'Safety Cone'
  • 7: 'Safety Vest'
  • 8: 'Machinery'
  • 9: 'Vehicle'
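To illustrate how these label IDs might feed the post-processing step, here is a hypothetical rule pass over YOLO detections. The class-ID constants match the list above, but the helper names and the distance threshold are illustrative assumptions, not the project's actual API.

```python
# Class IDs taken from the dataset labels above.
NO_HARDHAT, NO_VEST, PERSON, MACHINERY, VEHICLE = 2, 4, 5, 8, 9

def center(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def overlaps(a, b):
    """True if two (x1, y1, x2, y2) boxes intersect."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def flag_hazards(detections, near_px=150):
    """detections: list of (class_id, (x1, y1, x2, y2)) tuples."""
    alerts = []
    people = [box for cid, box in detections if cid == PERSON]
    for cid, box in detections:
        # A missing-gear label overlapping a person box raises an alert.
        if cid in (NO_HARDHAT, NO_VEST) and any(overlaps(box, p) for p in people):
            alerts.append("missing safety gear")
        # A person whose centre is close to machinery/vehicles raises an alert.
        if cid in (MACHINERY, VEHICLE):
            for p in people:
                cx, cy = center(p)
                mx, my = center(box)
                if ((cx - mx) ** 2 + (cy - my) ** 2) ** 0.5 < near_px:
                    alerts.append("worker near machinery or vehicle")
    return alerts

dets = [(PERSON, (0, 0, 50, 100)),
        (NO_HARDHAT, (10, 0, 40, 30)),
        (VEHICLE, (60, 0, 160, 100))]
print(flag_hazards(dets))
# → ['missing safety gear', 'worker near machinery or vehicle']
```

A real pipeline would also de-duplicate alerts per person and track them across frames; this sketch only shows the per-frame rules.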

Models for detection

| Model   | Size (pixels) | mAP(val) 50 | mAP(val) 50-95 | Params (M) | FLOPs (B) |
|---------|---------------|-------------|----------------|------------|-----------|
| YOLO11n | 640           | 58.0        | 34.2           | 2.6        | 6.5       |
| YOLO11s | 640           | 70.1        | 44.8           | 9.4        | 21.6      |
| YOLO11m | 640           | 73.3        | 42.6           | 20.1       | 68.0      |
| YOLO11l | 640           | 77.3        | 54.6           | 25.3       | 86.9      |
| YOLO11x | 640           | 82.0        | 61.7           | 56.9       | 194.9     |

Our comprehensive dataset ensures that the model is well-equipped to identify a wide range of potential hazards commonly found in construction environments.

Contributing

We welcome contributions to this project. Please follow these steps:

  1. Fork the repository.
  2. Make your changes.
  3. Submit a pull request with a clear description of your improvements.

Development Roadmap

  • Data collection and preprocessing.
  • Training YOLO model with construction site data.
  • Developing post-processing techniques for enhanced accuracy.
  • Implementing real-time analysis and alert system.
  • Testing and validation in simulated environments.
  • Deployment in actual construction sites for field testing.
  • Ongoing maintenance and updates based on user feedback.

TODO

  • Add support for WhatsApp notifications.

License

This project is licensed under the AGPL-3.0 License.