Thanks for your great work!
I want to run some experiments on the Wizard of Wikipedia benchmark. I used the DPR query encoder from http://dl.fbaipublicfiles.com/KILT/dpr_multi_set_hf_bert.0 and the passage index from http://dl.fbaipublicfiles.com/KILT/kilt_passages_2048_0.pkl, and got page-level recall-1 = 26.62 and recall-5 = 48.82 on the dev set, which is close to the numbers reported in your paper, KILT: a Benchmark for Knowledge Intensive Language Tasks.
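For context, this is a minimal sketch of how I compute page-level recall@k from the retrieval output; it is my own illustrative script (gold page IDs taken from the KILT provenance field, a ranked list of retrieved page IDs per query), not the official KILT evaluation code:

```python
# Illustrative page-level recall@k, assuming per-query gold Wikipedia page IDs
# (e.g. from the KILT provenance field) and a ranked list of retrieved page IDs.
def recall_at_k(gold_pages, retrieved_pages, k):
    """Fraction of queries whose top-k retrieved pages contain at least one gold page."""
    hits = sum(
        1 for gold, ranked in zip(gold_pages, retrieved_pages)
        if set(gold) & set(ranked[:k])
    )
    return hits / len(gold_pages)

# Toy usage with made-up page IDs:
gold = [["Page_A"], ["Page_B"]]
ranked = [["Page_A", "Page_X"], ["Page_Y", "Page_Z"]]
print(recall_at_k(gold, ranked, 1))  # 0.5
print(recall_at_k(gold, ranked, 5))  # 0.5
```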
I also noticed that your paper reports a Multi-task DPR that reaches about recall-1 = 41.07. Could you provide its model parameter files (the query-encoder and passage-encoder files)? That would help me a lot, because retraining DPR across several tasks is very time-consuming. Thanks a lot!