Visualizations for Interpretability #12
Hi,

I recently came across your paper while looking for multi-label classification techniques. It is very interesting work, and thank you very much for making your code publicly available. A major reason I am interested in this work is the claim of interpretability. I know it has been some time since this code was written, but I have a question about it, and it would be great if you could share some insights.

Do you remember how you generated the three visualisations mentioned in the paper? I noticed configuration options such as int_preds and attns_loss, but I am not sure how exactly you used them, and it would be great to get some insights on that.

Comments
Hi, this can be visualized using the
Thank you very much for the swift response. I have a couple of follow-up questions.
Hi, I have the same question. Did you manage to solve it? I can't find the nn.MultiheadAttention mentioned above. Could you please give me some help? Thanks.
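Since the reply above is cut off, here is a minimal, hypothetical sketch (not the authors' code from this repository) of how attention weights can be pulled out of torch.nn.MultiheadAttention and plotted as a heatmap. The label/token shapes, variable names, and the matplotlib plotting are illustrative assumptions.

```python
# Hypothetical sketch, not the authors' implementation: extract the attention
# matrix from torch.nn.MultiheadAttention and render it as a heatmap.
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

embed_dim, num_heads = 64, 4
mha = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

# Illustrative inputs: queries stand in for label embeddings,
# keys/values for encoded input tokens (shapes are assumptions).
labels = torch.randn(1, 10, embed_dim)   # (batch, num_labels, embed_dim)
tokens = torch.randn(1, 25, embed_dim)   # (batch, seq_len, embed_dim)

# need_weights=True returns attention weights averaged over heads,
# with shape (batch, num_labels, seq_len).
_, attn_weights = mha(labels, tokens, tokens, need_weights=True)

plt.imshow(attn_weights[0].detach().numpy(), aspect="auto", cmap="viridis")
plt.xlabel("input tokens")
plt.ylabel("labels")
plt.colorbar(label="attention weight")
plt.title("label-to-token attention (illustrative)")
plt.tight_layout()
plt.show()
```

In newer PyTorch versions, per-head weights can be obtained by additionally passing average_attn_weights=False to the forward call and indexing the head dimension instead of the averaged matrix.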