
convert float32 model to float16, but memory usage has not decreased #289

Open · kukugpt opened this issue May 16, 2024 · 5 comments

@titaiwangms

Hi @kukugpt, could you share a minimal repro, or your original code?

@kukugpt (Author) commented May 17, 2024

Of course, my code is listed below:

import numpy as np
from onnx import load_model, save_model
from onnxmltools.utils import float16_converter
from utils import check_model_consistency  # project-local helper

if __name__ == "__main__":
    onnx_model_path = "float32_optim.onnx"
    save_model_path = "_ptim_float16.onnx"
    # Load the float32 model and convert its float tensors to float16
    onnx_model = load_model(onnx_model_path)
    trans_model = float16_converter.convert_float_to_float16(onnx_model)
    save_model(trans_model, save_model_path)
    # Compare outputs of the two models within a tolerance of 1/255
    check_model_consistency(onnx_model_path, save_model_path,
                            input_type1=np.float32, input_type2=np.float16,
                            diff=1/255.)

My model has a UNet-like architecture.
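One way to confirm the conversion actually took effect is to inspect the initializer data types in the saved model: if the weights are still FLOAT, neither the file size nor the runtime memory would shrink. A minimal sketch, assuming the output path from the snippet above:

from collections import Counter

import onnx
from onnx import TensorProto

# Hypothetical path reused from the snippet above
model = onnx.load("_ptim_float16.onnx")

# Count initializer element types; after a successful conversion,
# most entries should be FLOAT16 rather than FLOAT.
counts = Counter(
    TensorProto.DataType.Name(init.data_type)
    for init in model.graph.initializer
)
print(counts)  # e.g. Counter({'FLOAT16': 123, 'INT64': 4})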

@kukugpt (Author) commented May 17, 2024

@titaiwangms

@titaiwangms commented May 17, 2024

This concerns the float16_converter tool.
cc @xiaowuhu @xadupre

@xadupre (Member) commented May 20, 2024

Can you be more specific about the memory usage? Does it occur when you run the model? What is the model size before and after the conversion?
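As a starting point for answering this, the on-disk sizes of the two files can be compared directly; roughly a halving is expected when the bulk of the weights moves from float32 to float16. A minimal sketch, assuming the file names from the earlier snippet:

import os

# Hypothetical paths taken from the earlier snippet
for path in ("float32_optim.onnx", "_ptim_float16.onnx"):
    size_mb = os.path.getsize(path) / (1024 * 1024)
    print(f"{path}: {size_mb:.1f} MiB")

Note that runtime memory is a separate question from file size: it also depends on the runtime and execution provider, which may keep intermediate buffers or upcast tensors internally.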
