
Some questions. #8

Open
xiaocc612 opened this issue Mar 2, 2022 · 8 comments

Comments

@xiaocc612

I am very interested in your work. Can you provide the training code for the HMAR model and the annotation file for the AVA dataset?
Thanks.

@brjathu
Owner

brjathu commented Mar 4, 2022

Hi, the annotations for AVA will be out in a week. We will try to release the HMAR training code as soon as possible.

@xiaocc612
Author

xiaocc612 commented Mar 5, 2022

> Hi, the annotations for AVA will be out in a week. We will try to release the HMAR training code as soon as possible.

Sorry to bother you again. Approximately how many sequences with shot changes did you pick, and how was the ground truth generated? Thanks.

@somyagoel13

Hi, can the demo.py file be used on a random YouTube video for 3D tracking?

@xiaocc612
Author

> Hi, can the demo.py file be used on a random YouTube video for 3D tracking?

Hi, the demo works fine. Actually, this method focuses on tracking people in 2D with 3D representations, rather than on 3D tracking.

@brjathu
Owner

brjathu commented Mar 10, 2022

@xiaocc612, the evaluation on AVA includes ~1.3k examples with shot changes. These sequences come from the AVA validation set. Shot-change detection is done automatically, and the person bounding boxes are taken from the AVA annotations, but we manually curate this set to verify that the detected shot changes are correct and that bounding boxes belonging to the same person are associated correctly.
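
For illustration, here is a minimal sketch of one common way to detect shot changes automatically, using HSV histogram distances between consecutive frames in OpenCV. This is a generic example, not necessarily the detector used for the AVA curation; the threshold value is an assumption.

```python
# Generic shot-change detection sketch: flag frames whose color histogram
# differs sharply from the previous frame. Not the authors' exact detector.
import cv2

def detect_shot_changes(video_path, threshold=0.5):
    """Return frame indices where the Bhattacharyya distance exceeds `threshold`."""
    cap = cv2.VideoCapture(video_path)
    prev_hist, changes, idx = None, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1], None, [50, 60], [0, 180, 0, 256])
        cv2.normalize(hist, hist)
        if prev_hist is not None:
            # Distance is near 0 for similar frames and approaches 1 at hard cuts.
            dist = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if dist > threshold:
                changes.append(idx)
        prev_hist, idx = hist, idx + 1
    cap.release()
    return changes
```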

@brjathu
Owner

brjathu commented Mar 10, 2022

@somyagoel13 yes, demo.py works on any YouTube video. We have released a fast online version in the PHALP repo, which also uses the same 3D representations.
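
As a rough sketch, one way to feed a YouTube clip to the demo is to download it first with yt-dlp and then point demo.py at the resulting file. The `--video` argument below is an assumption; check the repo's README for the actual demo.py interface.

```python
# Hedged sketch: fetch a YouTube clip, then run the demo on the local file.
import subprocess
import yt_dlp

url = "https://www.youtube.com/watch?v=VIDEO_ID"  # placeholder URL
ydl_opts = {"outtmpl": "input_video.mp4", "format": "mp4"}
with yt_dlp.YoutubeDL(ydl_opts) as ydl:
    ydl.download([url])

# Hypothetical invocation; the real demo.py arguments may differ.
subprocess.run(["python", "demo.py", "--video", "input_video.mp4"], check=True)
```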

@brjathu
Owner

brjathu commented Mar 10, 2022

@xiaocc612 yes, our method works on monocular RGB images. We don't use any explicit depth information.

@xiaocc612
Author

> Hi, the annotations for AVA will be out in a week. We will try to release the HMAR training code as soon as possible.

How long until the AVA annotations are released? Very much looking forward to them.
