I am not able to find where the class separation between the train and test data is enforced. Typically, in few-shot learning evaluation, some classes are never shown to the model during training, and at test time the model is evaluated by giving it N examples from those held-out classes. I am not sure this separation is maintained in the code.
In this code, the few-shot classes of the testing data are the same as those of the validation data when the random seed is specified. Therefore, the separation is not maintained in the code. This is indeed an issue, and I will address it as soon as possible. Thank you for pointing out this bug!
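For reference, a minimal sketch of how a class-disjoint split and episodic evaluation are typically set up. This is not the repository's code: the function names, the `dataset` layout (a dict mapping class label to a list of examples), and the N-way/K-shot parameters are all illustrative assumptions.

```python
import random

def split_classes(all_classes, n_test_classes, seed=0):
    """Hold out a fixed set of classes that the model never sees during training."""
    rng = random.Random(seed)
    classes = sorted(all_classes)  # deterministic order before shuffling
    rng.shuffle(classes)
    test_classes = set(classes[:n_test_classes])
    train_classes = set(classes[n_test_classes:])
    assert train_classes.isdisjoint(test_classes)  # no class overlap by construction
    return train_classes, test_classes

def sample_episode(dataset, test_classes, n_way=5, k_shot=1, n_query=15, seed=None):
    """Sample an N-way K-shot evaluation episode using only held-out classes.

    `dataset` is assumed to map class label -> list of examples.
    """
    rng = random.Random(seed)
    episode_classes = rng.sample(sorted(test_classes), n_way)
    support, query = [], []
    for label in episode_classes:
        examples = rng.sample(dataset[label], k_shot + n_query)
        support += [(x, label) for x in examples[:k_shot]]
        query += [(x, label) for x in examples[k_shot:]]
    return support, query
```

The key point is that the train/test class split is computed once from the full class list and asserted to be disjoint, so episodes sampled for evaluation can only ever draw from classes the model never saw during training, regardless of which seed is used for episode sampling.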