HOME
See the development workflow page for developer instructions.
Here are some useful tutorial documents:
Setup and run annInferenceServer
- Recommended server configuration: EPYC(tm) processor with multiple Vega-based GPUs
- Install Ubuntu 16.04 64-bit
- Install ROCm from AMD repositories with OpenCL development kit
- Install protobuf for inference_generator -- just check out https://github.com/google/protobuf and follow the C++ installation instructions
- Checkout, Build, and Install dependent projects
% git clone https://github.com/RadeonOpenCompute/rocm-cmake
% mkdir -p rocm-cmake/build
% cd rocm-cmake/build
% cmake -DCMAKE_BUILD_TYPE=Release ..
% sudo make install
% cd ../..
% git clone https://github.com/ROCmSoftwarePlatform/MIOpenGEMM
% mkdir -p MIOpenGEMM/build
% cd MIOpenGEMM/build
% cmake -DCMAKE_BUILD_TYPE=Release ..
% make -j4
% sudo make install
% cd ../..
% git clone https://github.com/ROCmSoftwarePlatform/MIOpen
% mkdir -p MIOpen/build
% cd MIOpen/build
% cmake -DMIOPEN_BACKEND=OpenCL -DCMAKE_BUILD_TYPE=Release ..
% make -j4 MIOpenDriver
% sudo make install
% cd ../..
- amdovx-modules: Checkout, Build, and Install -- optionally, you can pick the develop branch by adding `-b develop` to the `git clone` command line.
% git clone --recursive https://github.com/GPUOpen-ProfessionalCompute-Libraries/amdovx-modules
% mkdir -p amdovx-modules/build
% cd amdovx-modules/build
% cmake -DCMAKE_BUILD_TYPE=Release ..
% make -j4
% sudo make install
% cd ../..
- Add `/opt/rocm/bin` to the `PATH` environment variable
- Add `/opt/rocm/lib` to the `LD_LIBRARY_PATH` environment variable
- Make sure that the OpenCL library is in the `LD_LIBRARY_PATH` environment variable (e.g., /opt/rocm/opencl/lib/x86_64)
- Run `/opt/rocm/bin/annInferenceServer` (see the example below)
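For example, assuming the default install locations listed above, the environment can be set up and the server launched as follows:
% export PATH=$PATH:/opt/rocm/bin
% export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/rocm/lib:/opt/rocm/opencl/lib/x86_64
% annInferenceServer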
Setup and run annInferenceApp
- Use another workstation for annInferenceApp
- Build annInferenceApp
- Run annInferenceApp
- Connect to the server (use port 28282)
- Upload a CAFFE model (such as ResNet-50 for ImageNet)
  - Select the .prototxt file, the .caffemodel file, input dimensions, and other optional parameters
  - Click Upload & Compile
- Run inference
  - Select a synset text file with one output label name per line (see the example after this list)
  - Select an input image folder (such as ImageNet validation data)
  - Click Run
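For reference, a synset text file is a plain list of output label names, one per line, in the network's output order. For an ImageNet-trained model it typically begins like this:
```
tench
goldfish
great white shark
tiger shark
hammerhead
```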
First, you need to build a C++ library from your pre-trained CAFFE model.
- Use inference_generator to generate the C++ code
% caffe2openvx weights.caffemodel <batchSize> <inputChannels> <inputHeight> <inputWidth>
% caffe2openvx deploy.prototxt <batchSize> <inputChannels> <inputHeight> <inputWidth>
(this creates annmodule.h, annmodule.cpp, anntest.cpp, CMakeLists.txt, and the weights and bias folders)
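For example, for a network with a batch size of 1 and 224x224 RGB input (the dimensions here are illustrative; use your network's actual input dimensions):
% caffe2openvx weights.caffemodel 1 3 224 224
% caffe2openvx deploy.prototxt 1 3 224 224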
- Use cmake to build the C++ source code (above) to produce libannmodule.so (library) and anntest (test program). See inference_generator for further details.
- Use the anntest test program for sanity tests. This program optionally takes a folder containing the weights and bias sub-directories, followed by the filenames of input tensor data and output tensor data. The tensor data is a raw binary dump of float values.
Usage: anntest [<folder-containing-weights-and-bias> [<raw-input-tensor.dat> [<raw-output-tensor.dat>]]]
Example:
% ./anntest . input-f32.dat output-f32.dat
- Study anntest.cpp to understand application integration.
Integrate annmodule into your application
- Add annmodule.h and annmodule.cpp (or libannmodule.so) into your project space.
- All the dependencies, i.e., `openvx`, `vx_nn`, `MIOpenGEMM`, `MIOpen`, and `OpenCL`, will be available in the `/opt/rocm` folder.
- Use `annGetTensorDimensions()` in annmodule.h to query and check the neural network input and output dimensions.
- Use `annCreateGraph()` to create and initialize the neural network -- this function returns an OpenVX graph object that is ready to process (i.e., to run inference).
- Use the `vxProcessGraph()` or `vxScheduleGraph()` API to run inference.
To deploy your application, package the weights and bias folders with your application executable. Note that the application needs to pass the path of the installed folder to the `annCreateGraph()` function call, as shown in the sketch below.
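Below is a minimal sketch of this flow, assuming the `annCreateGraph()` signature generated by inference_generator (OpenVX context, input tensor, output tensor, and the path to the folder containing the weights and bias sub-directories); check your generated annmodule.h for the exact prototype.
```cpp
#include "annmodule.h"
#include <VX/vx.h>
#include <stdio.h>

int main(int argc, char * argv[])
{
    // path to the folder that contains the weights and bias sub-directories
    const char * dataFolder = (argc > 1) ? argv[1] : ".";

    // query the network's input and output tensor dimensions
    vx_size dimInput[4] = { 0 }, dimOutput[4] = { 0 };
    annGetTensorDimensions(dimInput, dimOutput);

    // create the OpenVX context and tensors with the queried dimensions
    vx_context context = vxCreateContext();
    vx_tensor input  = vxCreateTensor(context, 4, dimInput,  VX_TYPE_FLOAT32, 0);
    vx_tensor output = vxCreateTensor(context, 4, dimOutput, VX_TYPE_FLOAT32, 0);

    // create and initialize the neural network graph; the returned graph
    // is ready to process (assumed signature -- see your generated annmodule.h)
    vx_graph graph = annCreateGraph(context, input, output, dataFolder);
    if (vxGetStatus((vx_reference)graph) != VX_SUCCESS) {
        printf("ERROR: annCreateGraph() failed\n");
        return -1;
    }

    // ... copy preprocessed image data into the input tensor here ...

    // run inference
    if (vxProcessGraph(graph) != VX_SUCCESS) {
        printf("ERROR: vxProcessGraph() failed\n");
        return -1;
    }

    // ... read the classification results back from the output tensor here ...

    vxReleaseGraph(&graph);
    vxReleaseTensor(&input);
    vxReleaseTensor(&output);
    vxReleaseContext(&context);
    return 0;
}
```
The generated anntest.cpp implements this same sequence and is a good reference for details such as tensor data layout.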