
Why should parameters be fixed when training Mask R-CNN? #124

Open
gaosanyuan opened this issue Feb 28, 2018 · 4 comments

Comments

@gaosanyuan

In mx-maskrcnn, there are FIXED_PARAMS and FIXED_PARAMS_SHARED in the config.
It is obvious that FIXED_PARAMS_SHARED is needed, but why is FIXED_PARAMS needed?
I tried this in Faster R-CNN, and the result was better when FIXED_PARAMS was set empty.
By default, FIXED_PARAMS is: ['conv0', 'stage1', 'gamma', 'beta'].
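As a rough sketch of how a prefix list like this is typically used, the config entries are matched against the network's parameter names, and any name containing one of the prefixes is collected as fixed. The parameter names below are hypothetical examples, not taken from the repo:

```python
import re

# Default prefixes from the config, as quoted in the question above
FIXED_PARAMS = ['conv0', 'stage1', 'gamma', 'beta']

def get_fixed_params(param_names, fixed_prefixes):
    """Return the parameter names that match any of the fixed prefixes."""
    if not fixed_prefixes:
        return []
    pattern = re.compile('|'.join(fixed_prefixes))
    return [name for name in param_names if pattern.search(name) is not None]

# Hypothetical ResNet-style parameter names, for illustration only
names = ['conv0_weight', 'stage1_unit1_conv1_weight',
         'bn0_gamma', 'bn0_beta',
         'stage4_unit1_conv1_weight', 'rpn_conv_3x3_weight']

print(get_fixed_params(names, FIXED_PARAMS))
# → ['conv0_weight', 'stage1_unit1_conv1_weight', 'bn0_gamma', 'bn0_beta']
```

Note how ['conv0', 'stage1', 'gamma', 'beta'] pins the stem, the first residual stage, and all batch-norm scale/shift parameters, while later stages and the RPN head remain trainable.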

@hdjsjyl

hdjsjyl commented Mar 15, 2018

@gaosanyuan, I met a similar problem. May I ask where the parameter 'FIXED_PARAMS_SHARED' is used, and what this part means? Any advice will be appreciated. Thank you very much.

@gaosanyuan
Author

gaosanyuan commented Mar 16, 2018

Generally, once a parameter is fixed, it is not updated during training. In the MXNet framework, fixed parameters are specified through the Module class (the fixed_param_names argument). Please refer to mxnet.module.Module. @hdjsjyl
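The mechanism described above can be simulated in plain Python: during the update step, any parameter whose name is in the fixed set is simply skipped, so its value never changes. This is a minimal sketch with hypothetical names and a hand-rolled SGD step, not the actual mxnet.module.Module internals:

```python
# Simulated parameter store and gradients (hypothetical names)
params = {'conv0_weight': 1.0, 'rpn_conv_weight': 1.0}
grads  = {'conv0_weight': 0.5, 'rpn_conv_weight': 0.5}

# Names collected from FIXED_PARAMS prefix matching
fixed_param_names = {'conv0_weight'}

lr = 0.1
for name in params:
    if name in fixed_param_names:
        continue  # fixed parameter: the gradient is never applied
    params[name] -= lr * grads[name]

print(params)
# → {'conv0_weight': 1.0, 'rpn_conv_weight': 0.95}
```

In real MXNet training, the same effect is achieved by passing the list of names as fixed_param_names when constructing the Module, and the optimizer then ignores those parameters.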

@hdjsjyl

hdjsjyl commented Mar 23, 2018

@gaosanyuan, I get it. Thanks.

@zl1994

zl1994 commented Apr 18, 2018

@gaosanyuan In the last train_maskrcnn stage, the model is initialized with rpn2 + rcnn1. But I found it is effectively initialized with rcnn1 alone, because rpn2's feature-extraction part was itself initialized from rcnn1 and is fixed during training. Is that right?
