Questions about encoding #19
> If you already have it in 229-dim vector form, what is the case for reducing the dimensionality to 128?

Just to have control over model size; this way we can train models with 128, 256, or other larger dimensions.

> How were the SMPL predictions from PHALP encoded into a 229-dim vector in the first place in the dataset? I assume it was a simple concatenation, but wanted to know if any additional processing was done.

Yes, just simple concatenation: 207 (23 joints × 3×3 rotation matrices) + 9 for global orientation + 10 for shape.

> Why subtract the mean and divide by the standard deviation of the SMPL poses instead of directly using the PHALP predictions? Wouldn't that cause a skew in the image space?

It helped to train the pose model with zero-centered poses, probably because the weights are initialized to support zero-centered inputs. As for the skew in image space, I don't quite understand this part of the question.
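The concatenation described in the answer can be sketched as below. This is a minimal illustration, not the repository's actual code: the function names and array layouts are assumptions, and note that the three parts enumerated in the answer (207 + 9 + 10) sum to 226, while the thread refers to a 229-dim vector; the remaining components are not specified here.

```python
import numpy as np

def encode_smpl(body_pose, global_orient, betas):
    """Hypothetical sketch: flatten and concatenate SMPL parameters.

    body_pose:     (23, 3, 3) rotation matrices for the 23 SMPL body joints
    global_orient: (3, 3) root orientation rotation matrix
    betas:         (10,) shape coefficients
    """
    return np.concatenate([
        body_pose.reshape(-1),      # 23 * 9 = 207 values
        global_orient.reshape(-1),  # 9 values
        betas.reshape(-1),          # 10 values
    ])                              # 226 values total for these three parts

def normalize(vec, mean, std, eps=1e-8):
    """Zero-center with dataset statistics, as described for training."""
    return (vec - mean) / (std + eps)
```

A vector normalized this way is recovered exactly by the inverse map `vec = normalized * (std + eps) + mean`, so zero-centering during training does not discard any information from the original PHALP predictions.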
Thank you. By "skew" I mean that you're shifting the x, y coordinates on the image, right? That would impact the relationships between people in the camera frame.
Hi. I have some questions on how you encoded the SMPL parameters.