Convert the model to ONNX and TorchScript #77
Good afternoon, could anyone help me? I would really like to know how I could optimize this model. I'd like to use it on a Jetson AGX, so optimization is needed for faster inference.
Maybe you should convert the NMS function to Python code first. I can convert the model to ONNX after rewriting that function. My problem is that I get an error when I use onnxruntime to run inference on the ONNX model. I'm also new to ONNX, so I have no idea how to fix this.
@Lllllp93, could you please share the code for how you converted the NMS function and how you converted the model to ONNX? I've also found a website about converting complex models. It might be useful: https://www.fatalerrors.org/a/pytoch-to-tensorrt-transformation-of-complex-models.html
That function basically follows @lucastabelini's CUDA source code; I checked the output and the accuracy seems to be aligned.
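A minimal pure-Python sketch of such a lane NMS, assuming each proposal row holds two class scores, start_y, start_x, the lane length, and the x-offsets, and using the mean offset distance as the lane-to-lane distance (the layout, threshold, and names are assumptions, not the exact code shared here):

```python
import torch

def lane_nms(proposals, scores, nms_thres=50.0, top_k=4):
    """Greedy NMS over lane proposals (pure Python/PyTorch, no CUDA kernel).

    Assumed row layout: [cls0, cls1, start_y, start_x, length, x-offsets...].
    Returns the kept indices and how many of them to use.
    """
    keep = []
    order = scores.argsort(descending=True)
    while order.numel() > 0 and len(keep) < top_k:
        i = order[0].item()
        keep.append(i)
        if order.numel() == 1:
            break
        rest = order[1:]
        # Mean absolute difference of the x-offsets as a crude lane distance.
        dist = (proposals[i, 5:] - proposals[rest, 5:]).abs().mean(dim=1)
        order = rest[dist > nms_thres]  # suppress lanes closer than the threshold
    keep = torch.tensor(keep, dtype=torch.long)
    return keep, keep.numel()
```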
Note: this implementation is slower than the original implementation.
@Lllllp93, thanks for the code! But I noticed that one of your functions was missing a return statement, so I added it.
Here is the related part of the laneatt.py code:
Here is how I export the model and run it with ONNX:
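A hedged sketch of that export-and-inference step; the checkpoint path, input resolution, and tensor names below are assumptions, not the exact code:

```python
import numpy as np
import onnxruntime as ort
import torch

model = torch.load("laneatt_full.pt", map_location="cpu")  # hypothetical checkpoint
model.eval()

dummy = torch.randn(1, 3, 360, 640)  # assumed input resolution
torch.onnx.export(
    model, dummy, "laneatt.onnx",
    input_names=["image"], output_names=["proposals"],
    opset_version=11,
)

# Run the exported graph with onnxruntime on the same dummy input.
session = ort.InferenceSession("laneatt.onnx")
proposals = session.run(None, {"image": dummy.numpy().astype(np.float32)})[0]
```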
But the results differ from the original NMS implementation. This is the visualization of the model output with the original NMS: So now I'm investigating why the results are so different (although 3 of the lines are almost exactly the same). It would be great if you could point out the differences between your NMS implementation and @lucastabelini's. Also, it's 20 times slower than the original one (just for the record). Tested on my PC with an NVIDIA RTX 2080Ti 11 GB, Intel Core i9-9820X CPU @ 3.30GHz × 20, 62.5 GiB RAM.
@sevocrear, the return should be: As for the difference between the two implementations, I guess you may have visualized all lanes after NMS. Note that the keep index and num_to_keep in Lane_nms() can be used to limit the number of lanes; normally I keep only the top 4 lanes in my task. As I mentioned before, this NMS implementation doesn't take advantage of the GPU, so its time complexity is linear in the number of lanes. You may therefore want to switch back to the CUDA NMS for training. For inference on CPU or ARM, though, I guess there is no big difference; it depends on which hardware platform the network is deployed on. In addition, I just found that the post-processing (decode) was included in the ONNX model, and I can run it with onnxruntime after removing the post-processing part. A comparison of the two NMS implementations is shown below, FYI.
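For illustration, using the keep indices to cap the output at the top 4 lanes might look like this (the names mirror the earlier sketch, not the real Lane_nms()):

```python
# `proposals` and `scores` are the post-forward tensors from the sketch above.
keep, num_to_keep = lane_nms(proposals, scores, nms_thres=50.0, top_k=4)
lanes = proposals[keep[:num_to_keep]]  # at most the top 4 surviving lanes
```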
@Lllllp93 How did you do that — running decode with onnxruntime?
@sevocrear I mean that the reason I couldn't run it with onnxruntime before is that decode was also included in the ONNX model. I can run it with onnxruntime after removing decode from the ONNX model. Sorry for the misunderstanding.
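One way to realize this, sketched under the assumption that laneatt.py has separable feature-extraction and proposal stages (the `backbone`/`head` attribute names below are hypothetical; `model` and `dummy` are as in the earlier sketch):

```python
import torch
import torch.nn as nn

class LaneATTRaw(nn.Module):
    """Stops the exported graph at the raw proposals (no decode, no NMS)."""

    def __init__(self, model):
        super().__init__()
        self.model = model

    def forward(self, x):
        feats = self.model.backbone(x)  # hypothetical submodule name
        return self.model.head(feats)   # hypothetical: raw proposal tensor

dummy = torch.randn(1, 3, 360, 640)  # assumed input resolution
torch.onnx.export(LaneATTRaw(model), dummy, "laneatt_raw.onnx", opset_version=11)
```

Decode and NMS then run in plain Python on the onnxruntime output, keeping the exported graph free of the ops that broke the export.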
Hey @sevocrear, how exactly did you manage to export the model to ONNX? I've tried removing the NMS and exporting the model without it (I'd do it after computing the output), but I keep getting this error:
Hi @tersekmatija, have you removed the NMS entirely? In my implementation, I just changed the CUDA NMS to a Python NMS function (which, of course, is much slower), and then I could convert it to ONNX. But the inference speed was awful, so I use the original Python code with some changes for my project. @Lllllp93 and I described the changed parts and requirements above.
Have you solved this problem?
Hey @Lllllp93 @sevocrear, I used your code to try the conversion, but I got an error and I cannot find the reason. Please help!
Hey @lucastabelini, sorry to bother you. I have some difficulties converting the model to ONNX, so my idea is to remove the NMS code from the project, because my project only needs to detect one lane. Do you think this idea is OK?
Yes @sevocrear. I just tried removing the NMS in the forward pass; this is the snapshot:
But I am still getting the error. Do you know which versions of Python and Torch you used, in case it depends on that?
If you only need to detect one lane boundary (i.e., one line), then you can do that. Just get the one with the highest confidence score. |
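A tiny sketch of that single-lane shortcut, assuming the first two columns of the proposal tensor are the class scores as in the NMS sketch above:

```python
import torch

# Skip NMS entirely: keep only the proposal with the highest positive-class
# score (assumes columns 0-1 are the class logits, as assumed earlier).
scores = torch.softmax(proposals[:, :2], dim=1)[:, 1]
best_lane = proposals[scores.argmax()]
```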
@lucastabelini OK, I will try it in my project. Thanks!
@hwang12345 It seems that @sevocrear uses it in his forward method and he managed to export it. |
@tersekmatija I tried the code @sevocrear posted, but I did not succeed.
@hwang12345 It seems you are using torch.min() to compare tensors of different data types, as the error message shows; self.n_offsets in your code may be a long tensor. You can just convert it to float, or use torch.FloatTensor(self.n_offsets-1).
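For illustration, the dtype clash and the cast look roughly like this (variable names and values are hypothetical; on the PyTorch versions contemporary with this thread, mixing Float and Long in torch.min() raised an error):

```python
import torch

ends = torch.rand(8) * 71.0   # float tensor (hypothetical values)
n_offsets = torch.tensor(72)  # long tensor, standing in for self.n_offsets
# torch.min(ends, n_offsets - 1)  # raised a Float/Long mismatch on older PyTorch
clipped = torch.min(ends, (n_offsets - 1).float())  # cast first, then take min
```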
Hello. Can you provide the complete code? |
Hello, I'm having difficulties converting your model both to ONNX and to TorchScript. I've already read the closed issues, but there was no help there. Could you help me? Or maybe someone has succeeded in doing this. Below I'll show the code and the errors I get when I try to convert the model.
2nd_model.pt is the full model saved in PyTorch; it works fine after loading in Python.
Converting the model to ONNX
Error:
Error 2 (different):
Scripting the model
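A sketch of such a scripting attempt (the exact snippet is not shown above; 2nd_model.pt is the saved model mentioned earlier):

```python
import torch

model = torch.load("2nd_model.pt", map_location="cpu")
model.eval()
scripted = torch.jit.script(model)  # scripting often trips over data-dependent
                                    # control flow such as the NMS/decode loops
scripted.save("laneatt_scripted.pt")
```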
Error: