
# ONNX Runtime Inference

## Introduction

An ONNX Runtime C++ inference example for running ONNX graph neural network (GNN) models on CUDA.
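
The core API flow is small. The sketch below is illustrative rather than this repository's actual source: it assumes a placeholder model file `model.onnx` and shows how the ONNX Runtime C++ API attaches the CUDA execution provider before loading the model.

```cpp
// Illustrative sketch: create an ONNX Runtime session on the CUDA
// execution provider. "model.onnx" is a hypothetical placeholder for
// the exported GNN model.
#include <onnxruntime_cxx_api.h>

#include <iostream>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "gnn_onnx");

  Ort::SessionOptions session_options;
  OrtCUDAProviderOptions cuda_options{};  // defaults target GPU device 0
  session_options.AppendExecutionProvider_CUDA(cuda_options);

  // Nodes supported by the CUDA provider run on the GPU; the rest
  // fall back to the default CPU provider.
  Ort::Session session(env, "model.onnx", session_options);

  std::cout << "Inference Execution Provider: CUDA" << std::endl;
  return 0;
}
```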

## Dependencies

### Modules

- `module load cmake/3.20.5`
- `module load cuda/11.0.3`
- `export CC=/usr/bin/gcc`
- `export CXX=/usr/bin/g++`

### Build Example

```bash
$ cd gnn_onnx/src/build
$ cmake ..
$ cmake --build .
```

### Run Example

```bash
$ ./build/inference --use_cuda
Inference Execution Provider: CUDA
```
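
For reference, the inference call itself with the C++ API generally follows the pattern below. This is an illustrative sketch, not this repository's code: the input/output names (`x`, `output`) and the 2x3 float shape are hypothetical stand-ins for the model's real node-feature and edge-index tensors.

```cpp
// Illustrative sketch: run one inference call on an existing Ort::Session.
// Tensor names and shapes are hypothetical placeholders.
#include <onnxruntime_cxx_api.h>

#include <array>
#include <vector>

std::vector<float> run_once(Ort::Session& session) {
  // Describe where the input buffer lives (host memory; ONNX Runtime
  // copies it to the GPU when the CUDA provider is active).
  Ort::MemoryInfo memory_info =
      Ort::MemoryInfo::CreateCpu(OrtArenaAllocator, OrtMemTypeDefault);

  std::array<float, 6> input_data{};         // zero-filled dummy input
  std::array<int64_t, 2> input_shape{2, 3};  // hypothetical 2x3 tensor

  Ort::Value input_tensor = Ort::Value::CreateTensor<float>(
      memory_info, input_data.data(), input_data.size(),
      input_shape.data(), input_shape.size());

  const char* input_names[] = {"x"};        // hypothetical input node name
  const char* output_names[] = {"output"};  // hypothetical output node name

  std::vector<Ort::Value> outputs =
      session.Run(Ort::RunOptions{nullptr}, input_names, &input_tensor, 1,
                  output_names, 1);

  // Copy the first output tensor back into a std::vector.
  float* data = outputs.front().GetTensorMutableData<float>();
  size_t count = outputs.front().GetTensorTypeAndShapeInfo().GetElementCount();
  return std::vector<float>(data, data + count);
}
```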
