
About dataset annotation #127

Open
Followmeczx opened this issue Apr 24, 2024 · 2 comments

Comments

@Followmeczx

I looked at the annotation JSON files for the MSCOCO, MPII, and Human3.6M datasets. I noticed that none of the camera parameters include extrinsics (rotation and translation), only intrinsics (focal length and principal point). Why is that? What if I want the full camera calibration parameters? Can you give me some advice?

@linjing7
Contributor

Hi, we use the weak-perspective camera model. You can refer to this paper for more details.
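For readers unfamiliar with the model mentioned above: under a weak-perspective camera, the network predicts only a scale and a 2D translation, so no extrinsic rotation/translation or calibrated focal length is stored in the annotations. A minimal sketch of this convention, as commonly used in SMPL-based methods, is below; the function names, the fixed focal length of 5000, and the crop size of 224 are illustrative assumptions, not this repository's actual API.

```python
import numpy as np

def weak_perspective_project(points_3d, s, tx, ty):
    """Project Nx3 camera-space points with scale s and 2D translation (tx, ty).

    Weak perspective ignores per-point depth: every point is scaled by the
    same factor s instead of by focal / z.
    """
    return s * (points_3d[:, :2] + np.array([tx, ty]))

def to_full_perspective_translation(s, tx, ty, focal=5000.0, crop_size=224.0):
    """Convert weak-perspective params to an approximate perspective translation.

    The depth tz is chosen so that a pinhole camera with the assumed fixed
    focal length reproduces the predicted scale s in crop-pixel units.
    focal=5000 and crop_size=224 are common defaults in this line of work,
    not values taken from this dataset's files.
    """
    tz = 2.0 * focal / (crop_size * s + 1e-9)  # epsilon guards against s == 0
    return np.array([tx, ty, tz])
```

For example, a point at camera-space depth 3 projects to the same 2D location as a point at depth 30 under this model, since only `s`, `tx`, and `ty` enter the projection.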

@pasin-k

pasin-k commented Nov 18, 2024

Hi, I wonder: if I have my own images that I would like to add for training, how can I find out the intrinsic parameters?

@linjing7 The paper you mentioned assumes an initial focal length of 5000, but it does not explain how the focal length scales with the cropped image.

I checked the UBody dataset and still found no pattern in how each object gets its own focal length value. Any documentation on how the focal length is calculated would be much appreciated.
