
Background only images #21

Open
anilesec opened this issue Mar 21, 2022 · 5 comments

@anilesec

Hi @samarth-robo

Is it possible to release the background-only image for the ho3d sequences? I need it to extract only the foreground of each sequence.

Thanks in advance!

@samarth-robo
Collaborator

@anilesec I don't fully understand. Can you please elaborate?

@anilesec
Author

Sure :)
Basically, what I mean is: in each sequence you captured, there is a static background (the green-colored backdrop) and a foreground (the person manipulating the object). What I am asking for is a background-only image, where no person is present. I need this to compute the foreground image in each sequence.

@samarth-robo
Collaborator

@anilesec we don't have that, sorry. But background subtraction (or foreground extraction) should not be too difficult if you start from the example we have provided for cropping the area around the hand-object: https://github.com/facebookresearch/ContactPose/blob/main/docs/doc.md#image-preprocessing.
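Since the backdrop is a uniform green, a background-only image may not even be necessary: a simple chroma-key threshold can separate the person from the green screen. Below is a minimal sketch using only NumPy; the function name `green_screen_mask` and the margin value are my own assumptions, not part of the ContactPose code, and the margin will need tuning per camera.

```python
import numpy as np

def green_screen_mask(rgb, g_margin=30):
    """Foreground mask for an image shot against a green screen.

    A pixel is labelled background when its green channel dominates
    both red and blue by more than `g_margin` (a tunable assumption).
    rgb: HxWx3 uint8 image. Returns HxW bool array, True = foreground.
    """
    rgb = rgb.astype(np.int16)  # avoid uint8 wrap-around in subtraction
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    background = (g - r > g_margin) & (g - b > g_margin)
    return ~background
```

In practice you would clean the raw mask up with morphological opening/closing (e.g. `scipy.ndimage.binary_opening`) to remove speckle noise before using it.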

@anilesec
Author

anilesec commented Mar 22, 2022

Thanks for the pointer. But in this case, you use a 3D model (mesh) of the object to get the foreground masks. I am looking to extract the foreground mask using only images (without using the 3D model/mesh of the hand or object). Isn't this as good as using GT masks to extract the foreground image?

@samarth-robo
Collaborator

@anilesec it has been a while since I looked at the code, but if you look after this line https://github.com/facebookresearch/ContactPose/blob/main/scripts/preprocess_images.py#L101, the foreground mask is derived from colour and depth thresholding. Rendering is used only to decide the depth-threshold value. That value can be approximated using the object pose plus approximate object dimensions, so rendering of the object/hand meshes is then not needed.
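The depth-threshold idea above can be sketched as follows. This is my own illustrative code, not the ContactPose implementation: the function name, the `object_extent_m` default, and the slack margin are all assumptions standing in for the rendered-depth threshold the script computes.

```python
import numpy as np

def depth_mask(depth_m, object_z_m, object_extent_m=0.3, slack_m=0.1):
    """Keep pixels no farther than the back of the hand-object region.

    depth_m: HxW depth image in metres (0 = invalid measurement).
    object_z_m: z of the object pose in the camera frame (assumed known
        from the dataset annotations).
    object_extent_m, slack_m: rough object size and a safety margin,
        both assumptions replacing the rendered depth threshold.
    Returns HxW bool array, True = pixel closer than the far threshold.
    """
    far = object_z_m + object_extent_m / 2 + slack_m
    return (depth_m > 0) & (depth_m < far)
```

Intersecting this depth mask with the colour (green-screen) mask would approximate the foreground mask the preprocessing script produces, without rendering any mesh.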
