Example Go Client
=================

This sample script assumes that the protobuf stubs are compiled locally to the client, under the ``nvidia_inferenceserver`` directory. This can easily be tweaked by uncommenting the alternative output path in ``gen_go_stubs.sh`` so the stubs are generated in ``${GOPATH}/src/nvidia_inferenceserver`` instead, making them available globally.
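
To illustrate what that layout means in the client source, a minimal sketch of importing the generated stubs and opening the gRPC connection is shown below. The ``nvidia_inferenceserver`` import path, the ``triton`` alias, and the ``GRPCInferenceService`` client constructor are assumptions based on the description above and on Triton's ``grpc_service.proto``, not a verbatim excerpt from ``grpc_simple_client.go``::

  package main

  import (
      "log"

      "google.golang.org/grpc"

      // Assumed import path: stubs generated by gen_go_stubs.sh, either in a
      // local nvidia_inferenceserver directory or under ${GOPATH}/src.
      triton "nvidia_inferenceserver"
  )

  func main() {
      // Port 8001 is Triton's gRPC endpoint (see the docker run command below).
      conn, err := grpc.Dial("localhost:8001", grpc.WithInsecure())
      if err != nil {
          log.Fatalf("couldn't connect to endpoint: %v", err)
      }
      defer conn.Close()

      // Client generated from the GRPCInferenceService definition in grpc_service.proto.
      client := triton.NewGRPCInferenceServiceClient(conn)
      _ = client
  }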

Usage::

  # Clone the repo
  git clone https://github.com/triton-inference-server/server.git

  # Set up the "simple" model from the example model_repository
  cd server/docs/examples
  ./fetch_models.sh

  # Launch Triton (detached)
  docker run -d -p8000:8000 -p8001:8001 -p8002:8002 -it -v $(pwd)/model_repository:/models nvcr.io/nvidia/tritonserver:20.11-py3 tritonserver --model-store=/models

  # Use the client
  cd ../../src/clients/go
  # Compile *.proto to *.pb.go
  ./gen_go_stubs.sh
  go run grpc_simple_client.go
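
The first lines of the sample output below come from the server liveness, readiness, and model-metadata calls the client makes before running inference. A rough sketch of those calls, reusing the ``triton`` alias from the sketch above and assuming the ``ServerLive``, ``ServerReady``, and ``ModelMetadata`` RPCs and their request messages from Triton's ``grpc_service.proto``, might look like this::

  package main

  import (
      "context"
      "fmt"
      "log"
      "time"

      "google.golang.org/grpc"

      triton "nvidia_inferenceserver" // assumed stub package, as above
  )

  func main() {
      conn, err := grpc.Dial("localhost:8001", grpc.WithInsecure())
      if err != nil {
          log.Fatalf("couldn't connect to endpoint: %v", err)
      }
      defer conn.Close()
      client := triton.NewGRPCInferenceServiceClient(conn)

      ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
      defer cancel()

      // Liveness: is the server process up?
      live, err := client.ServerLive(ctx, &triton.ServerLiveRequest{})
      if err != nil {
          log.Fatalf("ServerLive failed: %v", err)
      }
      fmt.Println("Triton Health - Live:", live.GetLive())

      // Readiness: is the server ready to accept inference requests?
      ready, err := client.ServerReady(ctx, &triton.ServerReadyRequest{})
      if err != nil {
          log.Fatalf("ServerReady failed: %v", err)
      }
      fmt.Println("Triton Health - Ready:", ready.GetReady())

      // Metadata for the "simple" model: input/output names, datatypes, shapes.
      meta, err := client.ModelMetadata(ctx, &triton.ModelMetadataRequest{Name: "simple", Version: "1"})
      if err != nil {
          log.Fatalf("ModelMetadata failed: %v", err)
      }
      fmt.Println(meta)
  }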

Sample Output::

  $ go run grpc_simple_client.go
  FLAGS: {simple  1 localhost:8001}
  Triton Health - Live: true
  Triton Health - Ready: true
  name:"simple"  versions:"1"  platform:"tensorflow_graphdef"  inputs:{name:"INPUT0"  datatype:"INT32"  shape:-1  shape:16}  inputs:{name:"INPUT1"  datatype:"INT32"  shape:-1  shape:16}  outputs:{name:"OUTPUT0"  datatype:"INT32"  shape:-1  shape:16}  outputs:{name:"OUTPUT1"  datatype:"INT32"  shape:-1  shape:16}

  Checking Inference Outputs
  --------------------------
  0 + 1 = 1
  0 - 1 = -1
  1 + 1 = 2
  1 - 1 = 0
  2 + 1 = 3
  2 - 1 = 1
  3 + 1 = 4
  3 - 1 = 2
  4 + 1 = 5
  4 - 1 = 3
  5 + 1 = 6
  5 - 1 = 4
  6 + 1 = 7
  6 - 1 = 5
  7 + 1 = 8
  7 - 1 = 6
  8 + 1 = 9
  8 - 1 = 7
  9 + 1 = 10
  9 - 1 = 8
  10 + 1 = 11
  10 - 1 = 9
  11 + 1 = 12
  11 - 1 = 10
  12 + 1 = 13
  12 - 1 = 11
  13 + 1 = 14
  13 - 1 = 12
  14 + 1 = 15
  14 - 1 = 13
  15 + 1 = 16
  15 - 1 = 14
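
The "Checking Inference Outputs" section reflects the behaviour of the ``simple`` example model: it takes two INT32 tensors of shape [1, 16] and, as the output above shows, returns their element-wise sum (OUTPUT0) and difference (OUTPUT1). A sketch of building that inference request over gRPC, again assuming the ``ModelInferRequest``/``ModelInferResponse`` messages and raw tensor contents from Triton's ``grpc_service.proto`` rather than quoting ``grpc_simple_client.go``, could look like this::

  package main

  import (
      "context"
      "encoding/binary"
      "fmt"
      "log"
      "time"

      "google.golang.org/grpc"

      triton "nvidia_inferenceserver" // assumed stub package, as above
  )

  func main() {
      conn, err := grpc.Dial("localhost:8001", grpc.WithInsecure())
      if err != nil {
          log.Fatalf("couldn't connect to endpoint: %v", err)
      }
      defer conn.Close()
      client := triton.NewGRPCInferenceServiceClient(conn)

      // INPUT0 = 0..15 and INPUT1 = all ones, serialized as little-endian int32
      // raw tensor contents, matching the values checked in the sample output.
      input0 := make([]byte, 16*4)
      input1 := make([]byte, 16*4)
      for i := 0; i < 16; i++ {
          binary.LittleEndian.PutUint32(input0[i*4:], uint32(i))
          binary.LittleEndian.PutUint32(input1[i*4:], 1)
      }

      req := &triton.ModelInferRequest{
          ModelName:    "simple",
          ModelVersion: "1",
          Inputs: []*triton.ModelInferRequest_InferInputTensor{
              {Name: "INPUT0", Datatype: "INT32", Shape: []int64{1, 16}},
              {Name: "INPUT1", Datatype: "INT32", Shape: []int64{1, 16}},
          },
          Outputs: []*triton.ModelInferRequest_InferRequestedOutputTensor{
              {Name: "OUTPUT0"},
              {Name: "OUTPUT1"},
          },
          RawInputContents: [][]byte{input0, input1},
      }

      ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
      defer cancel()
      resp, err := client.ModelInfer(ctx, req)
      if err != nil {
          log.Fatalf("ModelInfer failed: %v", err)
      }

      // OUTPUT0 = INPUT0 + INPUT1 and OUTPUT1 = INPUT0 - INPUT1 come back as raw bytes.
      out0, out1 := resp.RawOutputContents[0], resp.RawOutputContents[1]
      for i := 0; i < 16; i++ {
          sum := int32(binary.LittleEndian.Uint32(out0[i*4:]))
          diff := int32(binary.LittleEndian.Uint32(out1[i*4:]))
          fmt.Printf("%d + 1 = %d\n", i, sum)
          fmt.Printf("%d - 1 = %d\n", i, diff)
      }
  }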