LPIPS distance issue (re: #24) #41

Open
jiaying96 opened this issue Dec 6, 2019 · 1 comment

Comments


jiaying96 commented Dec 6, 2019

Quoting @HsinYingLee from #24:

I think there are several possible settings that can be used:

1. If you would like to compare with the ground-truth set and other methods, say on a collection of N images, you can translate N target images from N source images. Then you can randomly sample M pairs out of these N images to calculate the diversity.
2. Another way to compare among different methods is: given N source images, for each image you translate M outputs and calculate a diversity score among these M target images. You then average the scores over all N images.

We use the first setting in our experiments. Since there's no standard setting for this kind of experiment yet, I believe any setting that makes sense should be okay.
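A minimal sketch of the two settings quoted above, assuming the `lpips` package (https://github.com/richzhang/PerceptualSimilarity) as the perceptual distance; the function names, the `translate` callable, and the sampling details are illustrative assumptions, not the authors' actual evaluation code:

```python
import random
import torch
import lpips  # pip install lpips

loss_fn = lpips.LPIPS(net='alex')  # LPIPS distance with an AlexNet backbone

def setting1_diversity(images, M):
    """Setting 1: `images` is a list of N tensors of shape (3, H, W) in
    [-1, 1] (either N translated images or N real images). Sample M random
    pairs and average the LPIPS distance over them."""
    dists = []
    for _ in range(M):
        a, b = random.sample(images, 2)  # two distinct images per pair
        with torch.no_grad():
            dists.append(loss_fn(a.unsqueeze(0), b.unsqueeze(0)).item())
    return sum(dists) / len(dists)

def setting2_diversity(translate, sources, M):
    """Setting 2: for each source image, draw M stochastic translations,
    average the pairwise LPIPS distance among those M outputs, then average
    the per-source scores over all N sources. `translate` is a hypothetical
    callable wrapping the model under test."""
    per_source = []
    for src in sources:
        outs = [translate(src) for _ in range(M)]
        with torch.no_grad():
            dists = [loss_fn(outs[i].unsqueeze(0), outs[j].unsqueeze(0)).item()
                     for i in range(M) for j in range(i + 1, M)]
        per_source.append(sum(dists) / len(dists))
    return sum(per_source) / len(per_source)
```

Under this reading of setting 1, the "real images" row of Table 2 would come from running the same pair sampling over the ground-truth set, and each method's row over its own translated set, but that interpretation is exactly what this issue asks the authors to confirm.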

[screenshot: Table 2 from the paper, LPIPS diversity scores]

I don't understand way 1, "randomly sample M pairs out of these N images to calculate the diversity". For the Table 2 row "real images .448 ± .012": in each pair, is there a ground-truth image A and a ground-truth image B? And for the Table 2 row "DRIT .424 ± .010": in each pair, is there a translated image A (produced by DRIT from ground-truth image A) and a translated image B (produced by DRIT from ground-truth image B)?
@HsinYingLee

@KevinMarkVine

I have the same problem. How did you solve it?
