The results for the WIKI dataset are different from the paper #16

Open
lihuiliullh opened this issue May 10, 2023 · 3 comments

@lihuiliullh

In your paper, the Hit@10 for WIKI is 53.88. But when I run the code, I get 0.833 Hit@10, which is almost the same as for the YAGO dataset.
I am curious how you ran the experiments. Can the model even distinguish between YAGO and WIKI?

@Lee-zix
Owner

Lee-zix commented May 10, 2023

Please give more details about how you run the code, for example, a snapshot of your commands and results.

@lihuiliullh
Author

The command I use is:

```
main.py -d WIKI --gpu=2 --train-history-len 3 --test-history-len 3 --dilate-len 1 --lr 0.001 --n-layers 2 --evaluate-every 1 --n-hidden 200 --self-loop --decoder convtranse --encoder uvrgcn --layer-norm --weight 0.5 --entity-prediction --relation-prediction --angle 10 --discount 1 --task-weight 0.7 --gpu 0
```

[screenshot of results attached]

@Lee-zix
Owner

Lee-zix commented May 15, 2023

First, the results in the paper are under the raw setting. The code also prints the results under the filtered setting for convenience, so please make sure you are not comparing results from different settings. Second, the parameters in your command are not the optimal ones. Please see Section 5.1.4 (Implementation Details) of the paper for the optimal parameters for WIKI.
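
To illustrate the difference between the two settings, here is a minimal sketch (not the code in this repository) of how a raw versus filtered Hit@k can be computed; the score vector, the gold entity index, and the set of other known correct answers are assumed inputs.

```python
import numpy as np

def hits_at_k(scores, gold, known_answers, k=10, filtered=False):
    """Return 1 if the gold entity is ranked within the top k, else 0.

    scores        : 1-D array of model scores over all candidate entities
    gold          : index of the correct entity for this query
    known_answers : indices of *other* entities known to be correct
                    (only used in the filtered setting)
    """
    scores = scores.copy()
    if filtered:
        # Filtered setting: mask out other known correct answers so they
        # cannot outrank the gold entity.
        mask = [e for e in known_answers if e != gold]
        scores[mask] = -np.inf
    # Raw setting: rank the gold entity against every candidate, masking nothing.
    rank = 1 + np.sum(scores > scores[gold])
    return int(rank <= k)
```

Averaging this over all test queries gives Hit@k. Because the filtered setting removes competing true answers, it can only raise the metric, often by a large margin, so if the 0.833 above comes from the filtered output, it is not comparable to the raw 53.88 reported in the paper.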
