Mismatched results between your lib vs huggingface #15
Comments
Hello, we are aware of this and have raised the issue with huggingface before. Currently the huggingface text classification pipeline automatically runs a softmax over the outputs when there are multiple labels, which only allows for one high-scoring label. In our case (a multilabel model), a sigmoid is needed so that several labels can score highly at the same time. From their documentation:
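As a minimal illustrative sketch of the softmax vs sigmoid difference described above (this is not taken from the huggingface documentation, and the logits below are made up):

import torch

# Hypothetical logits for three labels, e.g. toxic, insult, threat
logits = torch.tensor([3.0, 2.5, -4.0])

softmax_scores = torch.softmax(logits, dim=-1)
sigmoid_scores = torch.sigmoid(logits)

print(softmax_scores)  # approx [0.62, 0.38, 0.00]: sums to 1, so only one label can dominate
print(sigmoid_scores)  # approx [0.95, 0.92, 0.02]: each label scored independently, several can be high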
I have now added a disclaimer to the model cards on huggingface; hopefully this helps!
Thank you for the answer. I also saw multiple outputs from their system, and for some examples they actually do better than your approach. I can give some examples if you want to discuss further.
Yes, it would help to see some examples; which model are you trying? Also, by multiple high-scoring outputs I meant that the outputs don't add up to 1, so you can have 2 or more outputs with scores > 0.9, which currently doesn't seem to happen on their hosted inference API when trying the models.
That is great, thank you. A tricky example would be the following:
Another input: You won't see the sun rise tomorrow.
In addition to that, the label names are also a bit different, e.g. toxicity vs toxic.
I see, thank you for the examples!

Firstly, we have only validated our models using a sigmoid. For example, "I will kill you" gives a toxic score of 0.514 on huggingface and a threat score of 0.458, when both of these should be high, whereas by using a sigmoid our version of the model gives a score of 0.907 for toxicity and 0.897 for threat.

Secondly, I wouldn't necessarily expect our models to give a high score on your chosen inputs, since they are quite subtle, and from what I have seen the models do struggle with more nuanced toxic examples. From the few examples I have tried, the results seem quite arbitrary on neutral or non-toxic examples, so I would advise against using it in its current state.

The label names correspond to the original label names used in the Jigsaw challenges. We changed them in our library to make them consistent across the 3 different models.
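For reference, here is a minimal sketch of reproducing the sigmoid-based scores with the detoxify package itself (the numbers in the comments are the ones quoted above, not freshly computed):

from detoxify import Detoxify

# 'original' is the model trained on the first Jigsaw toxic comment challenge
results = Detoxify('original').predict('I will kill you')
print(results)
# expected to show high scores for 'toxicity' (~0.907) and 'threat' (~0.897),
# unlike the softmax-based output on the huggingface hosted inference API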
Thank you so much for your clear description. I would like to contribute if you are actively working on or maintaining the library, with a view to making it production-ready.
Thank you for your interest in contributing, you can check out our current roadmap #26 and see if there's anything of interest!

Regarding this issue, the aforementioned HuggingFace PR that would allow a sigmoid over the outputs has now been merged into master. To test it, you can install the master version of the transformers library (or wait for future versions > 4.9.2) and get the expected outputs like so:

pip install 'git+https://github.com/huggingface/transformers.git'

from transformers import pipeline
detoxify_pipeline = pipeline(
'text-classification',
model='unitary/toxic-bert',
tokenizer='bert-base-uncased',
function_to_apply='sigmoid',
return_all_scores=True
)
detoxify_pipeline('shut up, you idiot!')
# [[{'label': 'toxic', 'score': 0.9950607419013977},
# {'label': 'severe_toxic', 'score': 0.07963108271360397},
# {'label': 'obscene', 'score': 0.8713390231132507},
# {'label': 'threat', 'score': 0.0019536688923835754},
# {'label': 'insult', 'score': 0.9586619138717651},
# {'label': 'identity_hate', 'score': 0.014700635336339474}]]
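One usage note: on more recent transformers releases, return_all_scores has been deprecated in favor of top_k=None, so if the snippet above emits a deprecation warning, passing top_k=None instead should produce the same per-label output.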
Hi team,
First of all, thank you very much for the library. But I need clarification: why are your results different from huggingface's results for the same input? Can you please help me with this?
Thanks