Loading torch script model failed #1438
Replies: 6 comments 11 replies
-
I don't know anything about torch, but it looks like you should be loading the module in the constructor or init function, and then using it in some way in the input handler.
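A minimal sketch of that lifecycle, using plain C++ with a hypothetical `Model` stand-in (with libtorch the load would be `torch::jit::load("model.pt")`, which returns a `torch::jit::Module`); the class and method names here are illustrative, not F´ API:

```cpp
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical stand-in for the TorchScript module; in a real component this
// would hold a torch::jit::Module obtained from torch::jit::load(path).
struct Model {
    explicit Model(const std::string& path) {
        if (path.empty()) throw std::runtime_error("bad model path");
        // torch::jit::Module m = torch::jit::load(path);  // real libtorch call
    }
    int infer(int x) const { return x + 1; }  // placeholder for module.forward(...)
};

class MyVehicleActor {
  public:
    // Load once, at initialization time -- not on every port invocation.
    void init(const std::string& modelPath) {
        m_model = std::make_unique<Model>(modelPath);
    }

    // The port handler only *uses* the already-loaded model.
    int stepIn_handler(int portNum) {
        if (!m_model) throw std::logic_error("model not initialized");
        return m_model->infer(portNum);
    }

  private:
    std::unique_ptr<Model> m_model;
};
```

The point of the split is that a failed or slow load surfaces once at startup, instead of inside every handler call.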
-
The problem still exists. I tried loading my model in the component's init function, and that worked. But when I use the following code, the whole deployment gets stuck again.

```cpp
void MyVehicleActor ::
```
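One way to make a failing load visible instead of silent: wrap it so the error surfaces as a value. The sketch below is stdlib-only C++ with a hypothetical `loadModel`; with libtorch the body would call `torch::jit::load(path)`, which throws `c10::Error` (a `std::exception` subclass) on failure, so `catch (const std::exception&)` covers it:

```cpp
#include <stdexcept>
#include <string>

// Hypothetical loader; with libtorch this would call torch::jit::load(path).
static void loadModel(const std::string& path) {
    if (path.empty()) throw std::runtime_error("cannot open model: " + path);
    // torch::jit::Module m = torch::jit::load(path);  // real call, may throw
}

// Wrap the load so a failure comes back as a string rather than wedging the
// component; in a real component you would also emit a WARNING_HI event and
// print to stderr, since event logs can be lost if the deployment hangs.
std::string tryLoad(const std::string& path) {
    try {
        loadModel(path);
        return "ok";
    } catch (const std::exception& e) {
        return std::string("load failed: ") + e.what();
    }
}
```

Note that try/catch only helps if the load actually throws; if it blocks (e.g. waiting on I/O), no exception will ever appear in the log.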
-
What operating system/hardware platform are you using? WSL-2? Linux/RPI?
-
Does anything appear in the running binary's logs? If you ran the binary using …
-
Are you allowed to share it as open source, to push the investigation a bit further? With a dummy model, if there is an IP issue? And maybe configure it to open in GitPod, to ease setup and be ready to code/investigate? PS: an example GitPod setup for Fprime is available here: #788
-
When I try to load a PyTorch model in my passive component, my input port handler stops working normally: the entire deployment is stuck at this input port handler when I run fprime-gds. I then added try/catch to catch exceptions, but still nothing shows up in my fprime log.
```cpp
void MyVehicleActor::stepIn_handler(const NATIVE_INT_TYPE portNum)
{
    this->log_ACTIVITY_LO_STEP_PORT_ACTIVATED();
}
```
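One common reason a deployment wedges at a handler like this: in F´, a *passive* component's port handlers run synchronously on the caller's thread, so any slow or blocking work inside the handler (such as model loading or inference) stalls the whole call chain. A stdlib-only sketch of keeping the handler cheap by handing work to a worker thread — the class and names are illustrative, not F´ API; in F´ the usual fix is to make the component *active* so handlers run on the component's own thread:

```cpp
#include <atomic>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Illustrative worker: the handler only enqueues; a background thread does the
// (potentially slow) inference. Not F' API -- just the threading pattern.
class InferenceWorker {
  public:
    InferenceWorker() : m_stop(false), m_processed(0) {
        m_thread = std::thread([this] { run(); });
    }
    ~InferenceWorker() {
        {
            std::lock_guard<std::mutex> lk(m_mutex);
            m_stop = true;
        }
        m_cv.notify_one();
        m_thread.join();
    }

    // What a passive handler should look like: O(1), never blocks on the model.
    void stepIn_handler(int portNum) {
        std::lock_guard<std::mutex> lk(m_mutex);
        m_queue.push(portNum);
        m_cv.notify_one();
    }

    int processed() const { return m_processed.load(); }

  private:
    void run() {
        std::unique_lock<std::mutex> lk(m_mutex);
        for (;;) {
            m_cv.wait(lk, [this] { return m_stop || !m_queue.empty(); });
            if (m_queue.empty()) return;  // stop requested, queue drained
            int req = m_queue.front();
            m_queue.pop();
            lk.unlock();
            (void)req;                    // here the worker would run module.forward(...)
            m_processed.fetch_add(1);
            lk.lock();
        }
    }

    std::thread m_thread;
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<int> m_queue;
    bool m_stop;
    std::atomic<int> m_processed;
};
```

If the load/inference instead happens inline in a passive handler, nothing downstream of the caller runs until it returns, which matches the "whole deployment is stuck" symptom.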