How do I train a very lightweight object detector? #37
The implementation you mentioned is probably faster, since it does the decoding with TensorFlow rather than NumPy. It should also work to use the decoder layer in my implementation. In your case, I would recommend a custom architecture. Separable convolution is a good starting point. Reducing the depth of the architecture should also make it faster, since depth cannot be parallelized. Reducing the number of features should be taken into consideration as well. Both may affect detection performance and require training from scratch. Which implementation you use is up to you...
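To illustrate why separable convolution is a good starting point, here is a rough parameter-count comparison (my own back-of-the-envelope sketch, not code from either repo): a standard k×k convolution versus a depthwise k×k convolution followed by a 1×1 pointwise convolution.

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (biases ignored)."""
    return k * k * c_in * c_out

def separable_conv_params(k, c_in, c_out):
    """Depthwise k x k conv plus 1x1 pointwise conv (biases ignored)."""
    return k * k * c_in + c_in * c_out

standard = conv_params(3, 64, 128)             # 73728 weights
separable = separable_conv_params(3, 64, 128)  # 8768 weights
print(standard, separable, round(standard / separable, 1))  # roughly 8.4x fewer
```

The saving grows with the number of channels, which is why it matters most in the deeper, wider layers.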
It seems that SSD7 from the other repo is lightweight enough that I can configure it to fit my needs. The only concern left is to test its speed on mobile.
I was curious and implemented the SSD7 model from keras_ssd7.py
That's all. ReLU and depthwise convolutions should be cheaper, and regularisation is done in the training notebook... If you want, add the layer names.
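A block in the spirit described above (depthwise separable convolution followed by ReLU) might look like the following. This is a hypothetical sketch: the function and layer names are illustrative and not taken from keras_ssd7.py.

```python
import tensorflow as tf

def light_block(x, filters, name=None):
    """Illustrative lightweight block: separable conv -> batch norm -> ReLU."""
    x = tf.keras.layers.SeparableConv2D(
        filters, 3, padding='same', use_bias=False, name=name)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)

# Tiny example backbone on a 300x300 input (size is an assumption).
inputs = tf.keras.Input(shape=(300, 300, 3))
x = light_block(inputs, 32, name='sepconv1')
x = tf.keras.layers.MaxPooling2D()(x)
x = light_block(x, 64, name='sepconv2')
model = tf.keras.Model(inputs, x)
```

Stacking fewer such blocks, each with modest filter counts, is one way to keep both depth and feature count down, as suggested earlier in the thread.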
For me, I want to train a very lightweight/fast object detection model for recognizing a single solid object, e.g. a PlayStation joystick. I tried transfer learning with the TensorFlow Object Detection API using SSDLiteMobileNetV2, but it's not fast enough, because the architecture was sized to predict multiple classes. I only want to predict one class, a rigid object that won't deform or change shape at all.
That's why I'm thinking of defining a slightly smaller MobileNetV2 and training the SSD from scratch (as I think it's not possible to reuse the weights from the bigger model), so that I can achieve faster inference on a mobile phone. Maybe later I will convert the model to TF Lite.
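For the TF Lite step mentioned above, the conversion itself is short. The tiny model below is just a stand-in for the trained detector; optional post-training quantization is shown as one way to shrink it further for mobile.

```python
import tensorflow as tf

# Stand-in for the trained detector model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(8, 3),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # post-training quantization
tflite_model = converter.convert()  # serialized FlatBuffer bytes

with open('detector.tflite', 'wb') as f:
    f.write(tflite_model)
```

Note that custom decoding layers may need to be expressed in ops TF Lite supports, so it is worth trying the conversion early rather than after training.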
For example, I want my model to run fast like this paper: https://arxiv.org/abs/1907.05047
@mvoelk