CUDA-BEVFusion: difference in the result of onnx and pytorch model #283
Comments
This is normal. We evaluate the difference between the PyTorch and TensorRT outputs using mAP, not the absolute elementwise difference.
This is because of the nondeterministic implementations in the BEVFusion pipeline, such as voxelization and Spconv.
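The effect of nondeterminism can be seen in miniature: fp32 addition is not associative, so changing the accumulation order (as a nondeterministic scatter in voxelization can) changes the result. A minimal sketch with illustrative values, unrelated to any actual BEVFusion tensor:

```python
import numpy as np

# fp32 addition is not associative: the same three values summed in a
# different order produce different results, because intermediate sums
# round differently.
a, b, c = np.float32(1e8), np.float32(-1e8), np.float32(0.1)

left_to_right = (a + b) + c   # 0 + 0.1 -> 0.1
right_to_left = a + (b + c)   # -1e8 + 0.1 rounds back to -1e8 -> 0.0
print(left_to_right, right_to_left)
```

This is why elementwise equality between two correct runs of the same pipeline is not guaranteed, and why a task metric like mAP is used for the comparison instead.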
Hi. Thanks for the reply.
This is my forward function for ResNet. I found this problem for all onnx files, so I exported only the camera backbone to onnx and compared it with the torch output. Based on my previous experience, the outputs from torch and onnx should pass the allclose test, which is not the case here.
Could you share more on this based on your experience? I have skipped PTQ and only exported torch to onnx with opset version 13. My observation is that some numpy functions are optimized and not so accurate for float32, but I have also seen many people run the allclose test on onnx models and pass it, so I am quite confused.
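For reference, a common pattern is to compare the two outputs with explicit tolerances rather than the strict `allclose` defaults, which can fail even for a correct fp32 export. The sketch below uses a random array as a stand-in for the backbone output; the shape, tolerances, and injected drift are illustrative assumptions, not values from this model:

```python
import numpy as np

# Hypothetical stand-ins for the PyTorch and ONNX Runtime outputs of the
# camera backbone; in practice these would come from model(x) and
# session.run(None, {"input": x})[0].
torch_out = np.random.default_rng(0).standard_normal((1, 256, 32, 88)).astype(np.float32)
onnx_out = torch_out + np.float32(1e-6) * np.sign(torch_out)  # simulate tiny kernel-level drift

# Strict defaults (rtol=1e-5, atol=1e-8) can fail on near-zero activations...
strict_ok = np.allclose(torch_out, onnx_out)

# ...while tolerances appropriate for fp32 graph optimizations pass.
np.testing.assert_allclose(onnx_out, torch_out, rtol=1e-3, atol=1e-5)
print("strict:", strict_ok, "| relaxed: passed")
```

Suitable `rtol`/`atol` values depend on the model; the point is that a tolerance must be chosen deliberately rather than relying on the defaults.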
The difference is expected if you are running on fp16 (trtexec --onnx=model.onnx --fp16), because fp16 has a lower representation precision compared to fp32.
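The fp16 precision loss is easy to demonstrate in numpy alone; the value below is an arbitrary illustration, not an activation from this model:

```python
import numpy as np

x = np.float32(2049.1)      # a value fp32 represents closely
h = x.astype(np.float16)    # what a --fp16 TensorRT engine would carry

# fp16 has a 10-bit mantissa: above 2048 the spacing between representable
# values is 2.0, so everything past the integer part (and the last bit of
# the integer part) is lost.
print(float(h))                  # 2050.0
print(abs(float(h) - float(x)))  # ~0.9 absolute error from one cast
```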
Hi HopeJW. Thanks for your reply. Could you clarify the bug you mentioned for fp32?
Hi.
I have converted the BEVFusion model from torch to onnx, but it fails np.testing.assert_allclose.
Is this normal?