Label indices do not match class weights or paper notation #51
Comments
Same for me! May I ask if you've solved it?
Hey, I solved it by just using the corrected weights that I posted. I opened this issue so that other people would not take the wrong weights from here. So just use the corrected weights!
Thank you very much for your guidance! I'm not sure how to set the class weights; could you give me some advice?
Hi,
first of all thanks for your great work and providing this dataset!
Unfortunately I just realized that the labeling indices that you provide in the annotation files, i.e.
{'neutral': 0, 'surprise': 1, 'fear': 2, 'sadness': 3, 'joy': 4, 'disgust': 5, 'anger': 6},
do not match how you specify them in the paper and, more importantly, the order of the weights in the README.md: [4.0, 15.0, 15.0, 3.0, 1.0, 6.0, 3.0].
Given the occurrence counts, I'm assuming that the weight 1.0 belongs to neutral, which is index 0 in the annotation files but index 4 in the weights. Could you correct the weights so that users won't accidentally use the wrong assignments if they don't compute the weights themselves (like me)?
The correct weights are [1.0, 3.0, 15.0, 6.0, 3.0, 15.0, 4.0].
Best,
Florian
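
For anyone unsure how to wire this up (as asked in the comments above), here is a minimal sketch that pairs the annotation-file label indices with the corrected weights from this issue. The variable names (`label_to_index`, `weight_by_label`) are illustrative, not part of the repository's code:

```python
# Label indices as given in the annotation files (from this issue).
label_to_index = {'neutral': 0, 'surprise': 1, 'fear': 2, 'sadness': 3,
                  'joy': 4, 'disgust': 5, 'anger': 6}

# Corrected class weights, ordered by the annotation-file indices above.
corrected_weights = [1.0, 3.0, 15.0, 6.0, 3.0, 15.0, 4.0]

# Map each emotion label to its weight so the pairing is explicit.
weight_by_label = {label: corrected_weights[i]
                   for label, i in label_to_index.items()}

print(weight_by_label['neutral'])  # → 1.0 (index 0, the majority class)
```

The resulting `weight_by_label` dict can then be turned back into a tensor in index order for a weighted loss (e.g. the `weight` argument of PyTorch's `CrossEntropyLoss`), which avoids ever relying on a hand-ordered list.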