
How is this reading the depth image #307

Open
ArghyaChatterjee opened this issue Feb 25, 2025 · 1 comment

ArghyaChatterjee commented Feb 25, 2025

Hello,

I am trying to run this on my own dataset captured with a ZED Mini. The problem is that I cannot visually see the contents of my depth image, unlike the one you provided as part of the default test (mustard bottle from the YCB-Video dataset).

Here is the data from YCB-Video:

1. RGB image (640x360): [image]
2. Depth image (640x360, ??, size=49.9 kb): [image]

Here is the data from my custom dataset (mug):

1. RGB image (672x376): [image]
2. Depth image (672x376, 16 bit, size=149.9 kb): [image]

Will this pose any issue for pose estimation with FoundationPose?
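For reference, a quick way to check how each depth PNG is actually stored (bit depth and value range) is to load it unchanged with OpenCV; the path below is a placeholder:

```python
import cv2

# Load the depth image without converting it to 8-bit, so the original encoding is preserved.
depth = cv2.imread("path/to/depth.png", cv2.IMREAD_UNCHANGED)

print(depth.dtype, depth.shape)   # e.g. uint16 (376, 672) for a 16-bit depth map
print(depth.min(), depth.max())   # the value range hints at the unit (e.g. millimeters)
```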


QazyBi commented Feb 25, 2025

To visualize the depth image, you can convert it into a grayscale image.
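For example, a minimal sketch with OpenCV (assuming the depth is stored as a 16-bit PNG; the paths are placeholders) that rescales the depth values to 0–255 and writes them out as an 8-bit grayscale image:

```python
import cv2

# Read the depth map with its original bit depth (typically uint16).
depth = cv2.imread("path/to/depth.png", cv2.IMREAD_UNCHANGED)

# Rescale the observed depth range to 0-255 so it becomes visible as grayscale.
vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")

cv2.imwrite("depth_vis.png", vis)
```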
