-
You can just use tf2onnx; it is still a typical Keras model. Basically:

```python
import numpy as np
import tensorflow as tf
from tensorflow import keras

model = keras.models.load_model("your_own_basic_model.h5")
imgs = np.random.uniform(size=[1, 112, 112, 3]).astype("float32")
preds = model(imgs)

""" Convert """
import tf2onnx

spec = (tf.TensorSpec(model.input_shape, tf.float32, name="input"),)
output_path = model.name + ".onnx"
model_proto, _ = tf2onnx.convert.from_keras(
    model, input_signature=spec, opset=13, output_path=output_path
)

""" Run test """
import onnxruntime as rt

output_names = [n.name for n in model_proto.graph.output]
providers = ["CPUExecutionProvider"]
m = rt.InferenceSession(output_path, providers=providers)
onnx_pred = m.run(output_names, {"input": imgs})
print(np.allclose(preds, onnx_pred[0], rtol=1e-5))
```
-
I have a suggestion: it would be helpful to document how to convert a Keras .h5 model to ONNX in either channel-first (NCHW) or channel-last (NHWC) layout. Personally, I would suggest this because ONNX is faster in terms of inference speed and more portable.
Thanks.
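On the channel layout question: Keras models are typically channel-last (NHWC), while some ONNX runtimes and deployment targets expect channel-first (NCHW). tf2onnx offers an `inputs_as_nchw` argument on `from_keras` for this; independent of the converter, the layout change itself is just an axis permutation. A minimal NumPy sketch of the permutation and its inverse (the shapes here reuse the 1x112x112x3 example above):

```python
import numpy as np

# NHWC (channel-last) batch, as a Keras model would consume it: [N, H, W, C]
nhwc = np.random.uniform(size=[1, 112, 112, 3]).astype("float32")

# Permute to NCHW (channel-first): move the channel axis ahead of H and W
nchw = nhwc.transpose(0, 3, 1, 2)

print(nhwc.shape)  # (1, 112, 112, 3)
print(nchw.shape)  # (1, 3, 112, 112)

# The inverse permutation recovers the original layout exactly
assert np.array_equal(nchw.transpose(0, 2, 3, 1), nhwc)
```

This only rearranges the input tensor; converting the whole graph to consume NCHW natively is what the converter-level option is for.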