This project shows how to run tiny YOLOv2 (20 classes) with AMD's neural network inference engine (Annie):
- A Python converter from YOLO to Caffe
- A C/C++ implementation and Python wrapper for the region layer of YOLOv2
- A sample for running YOLOv2 with Annie
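The region layer mentioned above turns the raw YOLOv2 output tensor into detection boxes. Below is a minimal NumPy sketch of that decoding, a simplified illustration of what ./src/CRegionLayer.cpp implements; the tensor layout, grid size, and anchor format here are assumptions for illustration, not the exact code in this repository:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_region_layer(output, anchors, num_classes=20, grid=13):
    """Decode a YOLOv2 region-layer tensor into (x, y, w, h, conf, cls) boxes.

    output:  array shaped (num_anchors, 5 + num_classes, grid, grid)
    anchors: list of (bias_w, bias_h) pairs, in grid-cell units
    Coordinates are returned normalized to [0, 1].
    """
    boxes = []
    for a, (bw, bh) in enumerate(anchors):
        for row in range(grid):
            for col in range(grid):
                tx, ty, tw, th, tc = output[a, :5, row, col]
                # Box center is an offset inside the cell; size scales the anchor bias.
                x = (col + sigmoid(tx)) / grid
                y = (row + sigmoid(ty)) / grid
                w = bw * np.exp(tw) / grid
                h = bh * np.exp(th) / grid
                # Class scores: softmax, then scaled by the objectness score.
                scores = output[a, 5:, row, col]
                probs = np.exp(scores - scores.max())
                probs /= probs.sum()
                cls = int(np.argmax(probs))
                conf = sigmoid(tc) * probs[cls]
                boxes.append((x, y, w, h, conf, cls))
    return boxes
```

In practice the resulting boxes are then thresholded on `conf` and filtered with non-maximum suppression before drawing.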
Step 1. Install the amdovx modules and Model Compiler from https://github.com/GPUOpen-ProfessionalCompute-Libraries/amdovx-modules.git, then build them:
make
Step 2. Convert the Caffe model to an Annie Python library using the NNIR Model Compiler (amdovx-modules/utils/model_compiler/).
First convert Caffe to the NNIR format, then compile NNIR into a deployment Python library:
% python caffe2nnir.py ./models/caffemodels/yoloV2Tiny20.caffemodel <nnirOutputFolder> --input-dims 1,3,416,416
% python nnir2openvx.py [OPTIONS] <nnirInputFolder> <outputFolder> (details are in ModelCompiler page of amdovx_modules git repository)
This produces libannpython.so (under the build folder) and weights.bin.
Step 3. Run the sample:
python ./detectionExample/Main.py --image ./data/dog.jpg --annpythonlib <libannpython.so> --weights <weights.bin>
python ./detectionExample/Main.py --capture 0 --annpythonlib <libannpython.so> --weights <weights.bin> (live capture)
python ./detectionExample/Main.py --video <video location> --annpythonlib <libannpython.so> --weights <weights.bin>
This runs inference and displays the detection results.
Use the following tool to resize the images:
https://github.com/kiritigowda/help/tree/master/classificationLabelGenerator
annie-capture demo: https://github.com/kiritigowda/annie-capture-demo
python -W ignore detectionExample/Main.py --imagefolder <images1_416X416/> --cascade <images1_1024X1024/> --weights <weights.bin> --annpythonlib <libannpython.so>
(use key presses to change modes)
(or)
python ./detectionExample/Main.py --capture 0 --annpythonlib <libannpython.so> --weights <weights.bin> (live capture)
(use key presses to change modes)
Install Caffe and configure the Python environment path, then run:
sh ./models/convertyo.sh
Tips:
Please ignore error messages similar to "Region layer is not supported".
The converted Caffe model files should end with ".prototxt" and ".caffemodel".
Please update the parameters (biases, object names, etc.) in ./src/CRegionLayer.cpp and the parameters (dim, blockwd, targetBlockwd, classe, etc.) in ./detectionExample/ObjectWrapper.py.
Please read ./src/CRegionLayer.cpp and ./detectionExample/ObjectWrapper.py for details.
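As an illustration of the kind of values those parameters hold, here is a sketch for a 20-class tiny YOLOv2 setup. The dictionary name is made up, and the anchor biases shown are the commonly used tiny-yolo-voc defaults, which are not necessarily what these files ship with; always check the actual values in CRegionLayer.cpp and ObjectWrapper.py:

```python
# Illustrative parameter set for a 20-class tiny YOLOv2 model.
# Any change here must be mirrored in both CRegionLayer.cpp and ObjectWrapper.py.
params = {
    "dim": (416, 416),   # network input width and height
    "blockwd": 13,       # output grid width: 416 / 32 (five stride-2 poolings)
    "classes": 20,       # number of object classes (PASCAL VOC)
    # Anchor biases come in (width, height) pairs, in grid-cell units;
    # these are the widely used tiny-yolo-voc defaults (assumed, not verified).
    "biases": [1.08, 1.19, 3.42, 4.41, 6.63, 11.38, 9.42, 5.11, 16.62, 10.52],
}

# Sanity checks that keep the values mutually consistent.
assert params["dim"][0] // 32 == params["blockwd"]
assert len(params["biases"]) % 2 == 0
```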
Press these keys to switch between modes (uses OpenCV):
- Keys '1' through 'n' - runs through the folder corresponding to that number once, then returns to live mode (currently supports folders 1 and 2)
- Key 'f' - runs through a folder until asked to change
- Key 'q' - quits the program
- Key 'Space Bar' - pauses the capture until the space bar is pressed again
- Key 'x' - runs cascaded classification on the bounding boxes obtained in each frame
Press these keys to switch between modes (uses OpenCV):
- Key 'c' - switches to camera capture mode until asked to change
- Key 'q' - quits the program
- Key 'Space Bar' - pauses the capture until the space bar is pressed again
- Key 'x' - runs cascaded classification on the bounding boxes obtained in each image
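The key handling above amounts to a small dispatch on the character that OpenCV's cv2.waitKey returns each frame. The sketch below shows that dispatch in isolation; the mode names are illustrative and the actual handling lives in ./detectionExample/Main.py:

```python
def next_mode(current_mode, key):
    """Map a pressed key (as a single character) to the next capture mode.

    Modes are illustrative strings; 'quit' exits the loop, and the space
    bar toggles between 'paused' and 'live'.
    """
    if key == 'q':
        return 'quit'
    if key == ' ':
        return 'live' if current_mode == 'paused' else 'paused'
    if key == 'c':
        return 'live'                    # back to camera capture
    if key == 'f':
        return 'folder'                  # loop over a folder until changed
    if key == 'x':
        return 'cascade'                 # cascaded classification on boxes
    if key.isdigit() and key != '0':
        return 'folder-once:' + key      # run folder <n> once, then live
    return current_mode                  # unrecognized keys change nothing
```

In the capture loop this would be driven by something like `key = chr(cv2.waitKey(1) & 0xFF)` before each frame is processed.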
Research Only