
code to train #11

Open · jlim13 opened this issue Jan 26, 2021 · 5 comments

jlim13 commented Jan 26, 2021

Is there code to train the model and reproduce the published results?


cpshaheen commented Feb 12, 2021

@dchen236 @Bernardo1998 - can you guys please provide the following for experimental validity:

  1. Model parameters and architecture.
  2. Code showing the training process.
  3. Code showing the evaluation process.

@kylemcdonald has opened an issue detailing a discrepancy between his test results and the results published in the paper. I have seen the same discrepancy and would appreciate some clarity on it. The repo seems to have been updated within the last month, but neither this issue nor the one Kyle raised has been addressed. I don't mean to bother you two too much, but it would be great if you could follow up with the code used for training and testing, so that we can either benefit from your findings or help refine them if the reported metrics turn out to be significantly off for some unknown reason.

Thanks and best wishes!
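
Until official training code appears, here is a minimal sketch of what a training loop for this model might look like. It assumes a torchvision ResNet-34 with a single 18-way output layer (7 race + 2 gender + 9 age logits), which matches how the repo's inference code slices the outputs of the released res34_fair_align_multi_7.pt checkpoint; the optimizer, learning rate, epoch count, and equal loss weighting are all guesses, not the authors' settings.

```python
# Hypothetical training sketch -- NOT the authors' pipeline. Assumes a
# torchvision ResNet-34 with one 18-way head (7 race + 2 gender + 9 age
# logits), matching how the repo's inference code slices its outputs.
import torch
import torch.nn as nn
import torchvision

def build_model() -> nn.Module:
    model = torchvision.models.resnet34(weights="IMAGENET1K_V1")
    model.fc = nn.Linear(model.fc.in_features, 18)  # replace the 1000-way head
    return model

def multitask_loss(logits, race, gender, age):
    # Slice the shared logit vector into the three tasks; equal weighting
    # of the three cross-entropy terms is an assumption.
    ce = nn.functional.cross_entropy
    return (ce(logits[:, :7], race)
            + ce(logits[:, 7:9], gender)
            + ce(logits[:, 9:18], age))

def train(model, loader, epochs=20, lr=1e-4, device="cuda"):
    # `loader` is assumed to yield (image, race, gender, age) batches built
    # from the released training label CSV; preprocessing is the caller's job.
    model.to(device).train()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, race, gender, age in loader:
            images = images.to(device)
            race, gender, age = race.to(device), gender.to(device), age.to(device)
            opt.zero_grad()
            loss = multitask_loss(model(images), race, gender, age)
            loss.backward()
            opt.step()
```

The checkpoint's filename suggests it was trained on aligned face crops, so any loader should apply the same alignment the inference script expects.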


jlim13 commented Feb 19, 2021

Can we get some follow-up on this, please? It doesn't really make sense to release a dataset paper without scripts to reproduce your baselines.


joojs commented Feb 19, 2021

Our whole codebase is very big and uses other data that we can't release, so it will be hard to release the whole thing. We may be able to clean up that part and release it separately, but I'm not sure when. We have provided the dataset, the inference code, and pretrained models, which should be sufficient for most use cases.

On the evaluation accuracy raised in the other issue: the result in Table 6 of the arXiv version (Table 8 in the WACV paper) was measured on the "external validation datasets". The paper explains in detail how those were collected and evaluated. We are not able to release these datasets because they are not under a CC license. The pre-trained model is the one used in the experiments in the paper.
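
Since the external validation sets cannot be released, one way to sanity-check the released checkpoint is to run it over any labeled, aligned face set you have and measure per-task accuracy. A minimal sketch, assuming the same 18-way ResNet-34 layout as above and a loader yielding (image, race, gender, age) batches whose label indices match the repo's inference code:

```python
# Hypothetical evaluation sketch for the released res34_fair_align_multi_7.pt
# checkpoint; label encodings must match the repo's inference code, or the
# accuracies will be meaningless.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet34()
model.fc = nn.Linear(model.fc.in_features, 18)  # 7 race + 2 gender + 9 age
model.load_state_dict(torch.load("res34_fair_align_multi_7.pt", map_location="cpu"))

@torch.no_grad()
def evaluate(model, loader, device="cuda"):
    model.to(device).eval()
    correct = {"race": 0, "gender": 0, "age": 0}
    total = 0
    for images, race, gender, age in loader:
        logits = model(images.to(device)).cpu()
        correct["race"] += (logits[:, :7].argmax(1) == race).sum().item()
        correct["gender"] += (logits[:, 7:9].argmax(1) == gender).sum().item()
        correct["age"] += (logits[:, 9:18].argmax(1) == age).sum().item()
        total += images.size(0)
    return {task: n / total for task, n in correct.items()}
```

Any gap between these numbers and the paper's Table 6/8 could then be narrowed down to preprocessing, alignment, or label-encoding differences.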

cpshaheen commented

Appreciate the response. Thanks for getting back!

anderleich commented

Any updates on this?
