Why should parameters be fixed when training Mask R-CNN? #124
Comments
@gaosanyuan , I ran into a similar problem. Where is the parameter 'FIXED_PARAMS_SHARED' used, and what is its purpose? Any advice would be appreciated. Thank you very much.
Generally, once a parameter is fixed, it is not updated during training. In the MXNet framework, the fixed parameters are passed as an argument to the Module class. Please refer to mxnet.module.Module. @hdjsjyl
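For anyone landing here later, a minimal sketch of that mechanism, assuming a toy fully-connected symbol (the layer names 'fc1'/'fc2' are made up for illustration): MXNet freezes weights by listing their names in the `fixed_param_names` argument of `mxnet.module.Module`, so those parameters keep their initial (e.g. pretrained) values while everything else trains.

```python
import mxnet as mx

# Toy two-layer network; only 'fc2' will actually be trained.
data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data, name='fc1', num_hidden=64)
net = mx.sym.FullyConnected(net, name='fc2', num_hidden=10)
net = mx.sym.SoftmaxOutput(net, name='softmax')

# Any argument listed in fixed_param_names keeps its initial value:
# the optimizer update is skipped for it during fit()/update().
mod = mx.mod.Module(symbol=net,
                    data_names=('data',),
                    label_names=('softmax_label',),
                    fixed_param_names=['fc1_weight', 'fc1_bias'])
```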
@gaosanyuan , I get it. Thanks.
@gaosanyuan In the last train_maskrcnn stage, the model is initialized with rpn2+rcnn1. But I found it is effectively initialized with rcnn1 only, because rpn2's feature-extraction part was itself initialized with rcnn1 and kept fixed during training. Is that right?
In mx-maskrcnn, the config contains both FIXED_PARAMS and FIXED_PARAMS_SHARED.
It is obvious that FIXED_PARAMS_SHARED is needed, but why is FIXED_PARAMS needed as well?
I tried this in Faster R-CNN, and the result was better when FIXED_PARAMS was set to an empty list.
By default, FIXED_PARAMS is ['conv0', 'stage1', 'gamma', 'beta'].
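For context, a sketch of how a prefix list like FIXED_PARAMS is typically turned into the `fixed_param_names` passed to the Module. This is not the repository's exact code: the substring matching and the stand-in symbol below are assumptions, suggested by the fact that bare names like 'gamma' and 'beta' appear in the default list. With that default, the ResNet stem (conv0), the first residual stage (stage1), and all BatchNorm gamma/beta parameters stay at their pretrained values.

```python
import mxnet as mx

def get_fixed_param_names(symbol, fixed_prefixes):
    """Return the argument names of `symbol` matching any prefix (substring
    match, so every '..._gamma' / '..._beta' BatchNorm parameter is caught)."""
    return [name for name in symbol.list_arguments()
            if any(prefix in name for prefix in fixed_prefixes)]

# Tiny stand-in symbol; a real run would use the ResNet backbone symbol.
data = mx.sym.Variable('data')
body = mx.sym.Convolution(data, name='conv0', num_filter=8, kernel=(3, 3))
body = mx.sym.BatchNorm(body, name='stage1_unit1_bn1')

print(get_fixed_param_names(body, ['conv0', 'stage1', 'gamma', 'beta']))
# -> ['conv0_weight', 'conv0_bias',
#     'stage1_unit1_bn1_gamma', 'stage1_unit1_bn1_beta']
```

The resulting list would then be handed to `mx.mod.Module(..., fixed_param_names=...)`, so the early layers and the BatchNorm affine parameters are not updated during training.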