Question about paper [# model parameters] #7
Comments
The prototypes are viewed as non-learnable parameters, since they are computed as the mean of a group of feature representations. The parameters of the feature extractor are learnable and are optimized by gradient descent.
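For illustration, here is a minimal PyTorch-style sketch of that setup. The class name `PrototypeHead`, the method `update_prototypes`, and the momentum-style mean update are assumptions made for this example, not the repository's actual API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PrototypeHead(nn.Module):
    """Sketch of a prototype classifier: C classes x K prototypes per class.

    The prototypes are stored in a buffer, so they are saved with the
    checkpoint but never returned by .parameters() and never touched by
    the optimizer; only the feature extractor that produces `feats`
    would be trained by gradient descent.
    """

    def __init__(self, num_classes: int, num_prototypes: int, feat_dim: int):
        super().__init__()
        # Non-learnable: a buffer, not an nn.Parameter.
        self.register_buffer(
            "prototypes",
            F.normalize(torch.randn(num_classes, num_prototypes, feat_dim), dim=-1),
        )

    @torch.no_grad()
    def update_prototypes(self, feats, cls_idx, slot_idx, momentum=0.999):
        """Hypothetical update rule: move each prototype toward the mean of
        the L2-normalized features currently assigned to it."""
        feats = F.normalize(feats, dim=-1)                      # (N, D)
        C, K, _ = self.prototypes.shape
        for c in range(C):
            for k in range(K):
                mask = (cls_idx == c) & (slot_idx == k)
                if mask.any():
                    mean_feat = feats[mask].mean(dim=0)
                    self.prototypes[c, k].mul_(momentum).add_(
                        mean_feat, alpha=1.0 - momentum
                    )
        self.prototypes.copy_(F.normalize(self.prototypes, dim=-1))

    def forward(self, feats):
        """Assign each pixel feature to its nearest prototype (cosine
        similarity); the class score is the max over that class's K slots."""
        feats = F.normalize(feats, dim=-1)                          # (N, D)
        protos = self.prototypes.flatten(0, 1)                      # (C*K, D)
        sim = feats @ protos.t()                                    # (N, C*K)
        sim = sim.view(feats.size(0), *self.prototypes.shape[:2])   # (N, C, K)
        return sim.max(dim=2).values                                # (N, C)
```

Because the prototypes sit in a buffer rather than an `nn.Parameter`, the optimizer never updates them; they change only through the mean/momentum update above, which is what "non-learnable" refers to here.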
@wenguanwang Thank you.
@wenguanwang https://youtu.be/Jm2wKObfES0?t=285. So the figure at this point in your presentation video only counts learnable parameters, right?
@chwoong Yes, here we mean learnable parameters :-)
@wenguanwang Thank you for the quick response. Have a good day :)
@wenguanwang @tfzhou Thank you for your excellent contributions. Could you please tell me whether the paper's code only provides configuration files for prototype learning on the Cityscapes dataset?
Dear author,
Thank you so much for your work and code.
I have a question about the number of model parameters.
As I understand the paper, at inference time each pixel is assigned to the closest of the C×K prototypes. In the end, we have to store the C×K prototypes, so I wonder why we don't count them as model parameters. Also, the number of prototypes to be stored is proportional to the number of classes. Is it just a convention?
Thank you.
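For reference, a minimal PyTorch sketch of the distinction being asked about: prototypes kept in a buffer are stored in the checkpoint (and their number grows with the number of classes), but they do not appear in `model.parameters()`, which is what is conventionally reported as the parameter count. The class names and sizes below are illustrative, not taken from the repository:

```python
import torch
import torch.nn as nn


class TinyProtoHead(nn.Module):
    # Prototypes live in a buffer: saved in the state_dict, but not
    # returned by .parameters() and not updated by the optimizer.
    def __init__(self, num_classes=19, k=10, dim=256):
        super().__init__()
        self.register_buffer("prototypes", torch.randn(num_classes * k, dim))


class TinyLinearHead(nn.Module):
    # Conventional classifier head: the class weights ARE learnable parameters.
    def __init__(self, num_classes=19, dim=256):
        super().__init__()
        self.fc = nn.Linear(dim, num_classes, bias=False)


proto_head, linear_head = TinyProtoHead(), TinyLinearHead()

print(sum(p.numel() for p in proto_head.parameters()))   # 0 learnable values
print(sum(b.numel() for b in proto_head.buffers()))      # 19*10*256 = 48640 stored values
print(sum(p.numel() for p in linear_head.parameters()))  # 19*256 = 4864 learnable values
```

So the prototypes do add to the stored model size, roughly in proportion to the number of classes, but under the usual counting convention only gradient-optimized tensors are reported as model parameters.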