SpeedCam is an advanced vehicle speed detection system that utilizes computer vision and machine learning techniques to accurately measure and record vehicle speeds in real-time. This system is designed for traffic monitoring, law enforcement, and road safety applications. AI wrote more than 90% of the code in this project.
- Object Detection: The system uses a YOLO-based object detection algorithm to identify vehicles in real-time video streams.
- Speed Estimation: Once vehicles are detected, their speed is estimated using Farneback optical flow. This estimation runs in a separate thread so that real-time processing is not impacted.
- Calibration: The system requires both camera calibration (dewarping of the captured image) and speed calibration (pixel-to-world speed conversion) to accurately convert pixel movements to real-world speeds.
- User Interface: A web-based frontend allows users to monitor detections, manage calibrations, and control the detection process.
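The detection/estimation split described above can be sketched with Python's standard threading tools. This is a hypothetical minimal sketch, not the project's actual code: the worker names and the placeholder speed math are illustrative only, standing in for the real Farneback optical-flow estimate.

```python
import queue
import threading

def estimation_worker(frames: queue.Queue, results: list) -> None:
    """Consume detection results and estimate speeds off the main thread."""
    while True:
        item = frames.get()
        if item is None:  # sentinel: the detector has finished
            break
        frame_id, pixel_displacement = item
        # Placeholder for the real optical-flow speed estimate: here we
        # just scale the pixel displacement by an arbitrary constant.
        results.append((frame_id, pixel_displacement * 2.0))

frames: queue.Queue = queue.Queue()
results: list = []
worker = threading.Thread(target=estimation_worker, args=(frames, results))
worker.start()

# The detection loop stays responsive: it only enqueues work.
for frame_id, displacement in [(0, 3.0), (1, 4.5)]:
    frames.put((frame_id, displacement))
frames.put(None)  # signal shutdown
worker.join()
print(results)  # speed estimates computed off the detection thread
```

Handing each detection to a queue-fed worker is what keeps the capture loop from stalling while estimation runs.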
- Real-time vehicle detection and speed estimation
- Separate processing threads for detection and estimation
- Camera calibration for lens distortion correction
- Speed calibration for accurate speed measurements
- Web-based user interface for easy monitoring and control
- MacBook Pro M series (for MLX and GPU capabilities)
- Python 3.11 or higher
- Node.js
- Install Homebrew (for macOS users):

  ```shell
  /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
  ```

- Install Python 3.11 or higher:

  ```shell
  brew install [email protected]
  ```

- Install pip (Python package installer):

  ```shell
  curl https://bootstrap.pypa.io/get-pip.py -o get-pip.py
  python3 get-pip.py
  ```

- Install Node.js:

  ```shell
  brew install node
  ```

- Clone the repository:

  ```shell
  git clone https://github.com/your-repo/speedcam.git
  cd speedcam
  ```

- Install frontend dependencies:

  ```shell
  cd frontend
  npm install
  cd ..
  ```

- Set up the Python environment and install backend dependencies:

  ```shell
  python3 -m venv venv
  source venv/bin/activate
  pip install --upgrade pip
  pip install -r requirements.txt
  ```

- Upgrade the database using Alembic:

  ```shell
  cd ..  # from the root directory
  alembic upgrade head
  ```

- Start the application:

  ```shell
  cd frontend
  npm run dev
  ```
This command will:
- Start the backend server
- Start the frontend development server
Navigate to http://localhost:3000 in your web browser to access the application.
Before using the system, you need to create a camera calibration:
- Click "Add Calibration" in the Camera Calibrations panel.
- Enter a camera name for this camera calibration.
- Enter the row and column count of the grid image you will be using. For the sample image provided in Checkerboard, the row count is 6 and the column count is 8.
- Adding Calibration Images:
  - Via Live Camera Source:
    - Select the camera source.
    - Enter the row and column count of the grid image you will be using.
    - Hold up the grid image at various angles in front of the camera.
    - Hit the capture button to capture a new calibration image. Capture at least 8 images of the checkerboard grid pattern from different angles.
  - Via Uploaded Images:
    - Click on "Switch to Upload".
    - Click on "Upload Images" and select your calibration images from your disk.
- As you add images, they will appear grayed out with an X if processing fails, and will show a checkmark if they are valid. If an image fails processing, the likely causes are:
  - The grid is not fully visible in the frame.
  - The grid is too far from the camera. It should take up at least 60% of the visible area.
  - The number of rows and columns selected does not match the grid used.
  - The lighting is too dim. Turn on an overhead light, or wait for a clear sky day.
- Once you have enough valid images, press "Save Calibration". Your images will be processed a final time and the calibration will be ready for use.
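Conceptually, the camera calibration estimates lens-distortion parameters from the checkerboard images so captured frames can be dewarped. The toy sketch below shows only the simplest piece of that model, a single radial-distortion coefficient `k1` on normalized coordinates, and is an illustrative assumption, not SpeedCam's actual calibration code (which computes full camera intrinsics):

```python
def distort(x: float, y: float, k1: float) -> tuple:
    """Apply a one-coefficient radial distortion to a normalized point."""
    r2 = x * x + y * y
    return x * (1.0 + k1 * r2), y * (1.0 + k1 * r2)

def undistort(xd: float, yd: float, k1: float, iters: int = 20) -> tuple:
    """Invert the distortion by fixed-point iteration (the 'dewarping' step)."""
    x, y = xd, yd  # initial guess: the distorted point itself
    for _ in range(iters):
        r2 = x * x + y * y
        x, y = xd / (1.0 + k1 * r2), yd / (1.0 + k1 * r2)
    return x, y

# Round-trip check: distorting then undistorting recovers the point.
xd, yd = distort(0.3, 0.4, k1=0.1)
x, y = undistort(xd, yd, k1=0.1)
print(x, y)  # close to the original (0.3, 0.4)
```

Calibrating from many checkerboard views is, in essence, solving for coefficients like `k1` (plus focal lengths and the principal point) that make this round trip consistent across the whole image.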
After camera calibration, you need to create a speed calibration:
- Click "Add Calibration" in the Speed Calibration panel.
- Enter a new name for this calibration.
- Select the associated camera calibration.
- Adding Detections:
  - Via Live Camera Source:
    - Select the input camera source you wish to use from the select camera option.
    - When ready, click "Start Calibration". The camera feed will be captured and vehicles detected.
    - When the vehicles you're interested in have been detected with known speeds, hit "Stop Calibration".
  - Via Recorded Videos:
    - Upload a video via the file selector. Video processing will begin automatically.
- For each detected vehicle, either enter a known speed or delete the vehicle. You must have at least two vehicles in each direction for the calibration to be accepted.
- Once you are satisfied, hit "Submit Calibration". The system will process the video and calculate the constants needed to convert pixel movement to real-world speeds.
Note: The red lines overlaid on the video are the crop lines for detection. They define the area of the video used to detect vehicles in each lane. If occlusions are present within the crop area, the same vehicle may be erroneously detected multiple times.
- These can be dragged to change the crop area. The crop area for each lane should surround one contiguous segment of the lane, and should be as large as possible.
- The crop area must be set prior to detecting any vehicles.
- Tip: Upload a video or start detection so that you can see where to put the crop lines, then delete any detections from the initial upload and start again.
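Under the hood, the speed calibration reduces to fitting a constant that converts observed pixel speed into real-world speed from the known-speed vehicles you entered. The sketch below is a hypothetical least-squares fit through the origin; the function name and the single-constant model are assumptions for illustration, and SpeedCam's actual per-direction constants may be computed differently:

```python
def fit_speed_constant(pixel_speeds: list, known_speeds: list) -> float:
    """Least-squares slope through the origin: speed ≈ k * pixel_speed."""
    numerator = sum(p * v for p, v in zip(pixel_speeds, known_speeds))
    denominator = sum(p * p for p in pixel_speeds)
    return numerator / denominator

# Two known-speed vehicles in one direction (the documented minimum).
k = fit_speed_constant([10.0, 20.0], [25.0, 50.0])
predicted = k * 30.0  # predicted speed for a new vehicle at 30 px/frame
print(k, predicted)
```

This is also why at least two vehicles per direction are required: a single sample gives the fit no redundancy to average out measurement noise.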
An example screen capture of an in-progress speed calibration setup:
Once calibrations are complete, you can start detecting vehicle speeds:
- Navigate to the top of the frontend UI.
- Use the status selector to choose your camera input and speed calibration.
- Optionally, define a speed limit (in your chosen speed unit), which is used to compute the high-level statistics. Hit Enter to submit a new speed limit.
- Click "Start Detector" to begin real-time speed detection.
Once the detector is running, you can monitor and analyze the results in real-time:
- Live Detection Feed: At the top of the UI, you'll see a live image feed showing the current detections. This allows you to visually confirm that the system is working correctly.
- Detection Statistics: Below the live feed, you'll find a summary of detection statistics for the last seven days:
  - Total number of vehicles detected
  - Average speed of detected vehicles
  - Number of speeding violations (based on the currently selected speed limit)
- Speed Graph: The speed graph displays detected speeds for the last seven days, providing a quick overview of speed trends and patterns.
- Detection Table: Further down, you'll find a scrollable table showing individual detections. Each row represents a single vehicle detection, including details such as timestamp, detected speed, and lane information.
- Data Export: The detection table includes an export feature that allows you to download your detection data:
  - Click the "Export" button to save the detection data as a CSV file.
  - You have the option to include images of each detection in the export.
- Custom Filters: In the vehicle chart section, you can customize the filters to view historical results for any time range:
  - Click on the "Filters" option in the vehicle chart section.
  - Set your preferred time range and any other relevant filters.
  - The chart and detection table will update to show results based on your selected filters.
This comprehensive view allows you to monitor real-time detections, analyze speed trends over time, export detailed data, and customize your view of historical data for in-depth analysis or reporting.
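The seven-day statistics shown in the UI can also be reproduced offline from the exported CSV. The sketch below uses only the standard library; the column names in the sample data are hypothetical, so check the header of your actual export before adapting it:

```python
import csv
import io
from statistics import mean

# Hypothetical export excerpt; the real column names may differ.
EXPORT = """timestamp,speed,lane
2024-05-01T10:00:00,42.0,1
2024-05-01T10:05:00,55.5,2
2024-05-01T10:09:00,38.0,1
"""

SPEED_LIMIT = 45.0  # the currently selected speed limit

rows = list(csv.DictReader(io.StringIO(EXPORT)))
speeds = [float(row["speed"]) for row in rows]
total = len(speeds)                                 # vehicles detected
average = mean(speeds)                              # average speed
violations = sum(1 for s in speeds if s > SPEED_LIMIT)  # speeding count
print(total, average, violations)
```

Swapping `io.StringIO(EXPORT)` for an `open("export.csv")` call lets the same three lines of aggregation run against a real downloaded export.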
Feel free to contribute to this project by submitting pull requests, opening issues, or forking the code for your own purposes. Just remember to reference this project in your code. Responses are not guaranteed. This is not a production-ready project; it is meant for educational purposes only.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.