
Problem about contextual score #20

Open
phchen04217 opened this issue Dec 9, 2020 · 0 comments
Hi @menyifang
Thanks for the great work. I want to know how to evaluate the contextual score. I ran the contextual_similarity.py in the "cx" folder and modified the path to the results from test.py. Also, I make sure that the return images of DualDataset are source images(ground truth) and generated images. However, the result of the contextual score does not seem to be the average of each data. It seems that line 180: cx += float(loss_layer(ref_single, fake_single)) sums up the contextual score of one batch and does not divide by the number of iteration(8570/48) later.
