Hello,
Thank you for developing the open-source silent anti-spoofing project! I'm currently working on integrating it into my application and have successfully fine-tuned your base models (versions 2.7_80x80_MiniFASNetV2.pth and 4_0_0_80x80_MiniFASNetV1SE.pth) by modifying the code.
To deploy these models on Android and iOS devices, I need them in a binary format. While you offer pre-built binaries for the base models, I'd like to convert my fine-tuned weights for optimal performance. Unfortunately, converting them myself with Caffe seems impractical, as the framework hasn't received updates since 2020. I have tried the pth -> caffe -> ncnn route: it works for the first model (2.7_80x80_MiniFASNetV2.pth), but it fails for the second (4_0_0_80x80_MiniFASNetV1SE.pth), because that model uses an SE module, which the Pytorch2Caffe library does not support.
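For reference, one alternative I've been considering is skipping Caffe entirely and going pth -> ONNX -> ncnn (via onnx2ncnn or pnnx). Below is a rough, untested sketch of the export step; the import path and constructor arguments are my guesses based on the repo layout (src/model_lib/MiniFASNet.py), so please correct me if they're wrong:

```python
# Untested sketch: export a fine-tuned MiniFASNet checkpoint to ONNX,
# so it can then be converted to ncnn (onnx2ncnn / pnnx) instead of Caffe.
# The import path and constructor kwargs below are assumptions on my part.
import torch
from src.model_lib.MiniFASNet import MiniFASNetV1SE  # assumed module path

model = MiniFASNetV1SE(conv6_kernel=(5, 5))  # assumed kwargs for 80x80 input
state_dict = torch.load("4_0_0_80x80_MiniFASNetV1SE.pth", map_location="cpu")
# checkpoints saved with DataParallel may carry a "module." prefix on keys
state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
model.load_state_dict(state_dict)
model.eval()

dummy = torch.randn(1, 3, 80, 80)  # 80x80 RGB crop, matching the model name
torch.onnx.export(model, dummy, "minifasnet_v1se.onnx",
                  input_names=["input"], output_names=["output"],
                  opset_version=11)
# then, e.g.: onnx2ncnn minifasnet_v1se.onnx model.param model.bin
```

I'm not sure whether the SE module exports cleanly this way, which is why I'd value your input.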
I would greatly appreciate your assistance in converting my fine-tuned weights into a binary format suitable for mobile deployment. Since the conversion code isn't publicly available, any guidance or support would be extremely helpful.
@zhuyingSeu , @minivision-ailab