
Value to determine camera angles #40

Open
SjoerdBraaksma opened this issue Sep 29, 2023 · 5 comments

Comments

@SjoerdBraaksma

Hello!
I would like to use this package to segment my own point cloud data. However, it does not contain RGB values, and I don't have a pre-segmented ground truth to evaluate the outcome. My question is twofold:

  1. Is it advisable to use the number of returns / intensity value as a substitute for RGB? Do I need to rescale the values to fall into RGB value ranges? (A rescaling sketch follows this list.)

  2. Is there a metric to determine that you have accumulated enough training images from different angles of your point cloud? I was thinking of something like a pointwise-contribution metric that counts how often a point has been captured in an image. Or, across multiple training sequences, stop when segments become more stable across predictions?
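For point 1, here is a minimal sketch of one way to map intensity (or number of returns) into a pseudo-RGB range, assuming the per-point values sit in a NumPy array; the function name and the percentile clipping are illustrative choices, not part of this repository:

```python
import numpy as np

def intensity_to_pseudo_rgb(intensity: np.ndarray) -> np.ndarray:
    """Min-max rescale a per-point scalar channel to 0-255 and replicate it
    into three channels so it can stand in for RGB colors."""
    lo, hi = np.percentile(intensity, [1, 99])             # clip extreme returns
    scaled = np.clip((intensity - lo) / (hi - lo + 1e-8), 0.0, 1.0)
    gray = (scaled * 255).astype(np.uint8)
    return np.stack([gray, gray, gray], axis=-1)           # shape (N, 3)

# usage (hypothetical array): colors = intensity_to_pseudo_rgb(intensity)
```

Whether SAM responds well to such gray renderings is something to verify empirically, as the reply below suggests.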

@yhyang-myron
Member

Hi!

  1. We use SAM to get the segments on RGB data. If you want to use some other kind of data, you should first check whether SAM still works well on it.
  2. Sorry, I didn't understand this question very well. Do you mean the number of RGB-D frames used to build the point clouds?

@SjoerdBraaksma
Author

Hi yhyang-myron!

  1. I got the model to work on non-RGB data as well (although it's not as good), so that point is resolved.

  2. Diving deeper into the model, I think this question is irrelevant. Sorry for asking!

I do have one further question though:
I am following this medium post: https://medium.com/@OttoYu/point-cloud-segmentation-with-sam-in-multi-angles-add5a5c61e67
and the end result is a segmentation classification for each point, from each different angle. As you can see, however, it segments sub-structures of objects as separate segments (for example, the chapel tower is an individual segment rather than part of the whole chapel).

How would you go about re-merging these sub-structures into the final objects when you don't have a ground truth? Can I use the merging method you use (bi-directional merging) for that?

@yhyang-myron
Member

Hi, indeed, SAM may produce segmentation results with different granularities when separating objects. After roughly reviewing the page, I think you could try bi-directional merging to re-merge these sub-structures. For example, set up a fusion strategy that integrates the relevant parts into the largest part. Or, when saving the SAM results, keep the mask with the largest coverage area as much as possible (if it is greater than a certain mIoU value). Just a small suggestion, which may not be accurate.
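This is not the repository's actual bi-directional merging code, just a rough sketch of the "absorb sub-structures into the largest overlapping segment" idea described above. It assumes per-point integer segment ids from two views aligned to the same point order; the function name and threshold are made up for illustration:

```python
import numpy as np

def absorb_substructures(labels_view1: np.ndarray,
                         labels_view2: np.ndarray,
                         overlap_thresh: float = 0.5) -> np.ndarray:
    """labels_view1 / labels_view2: one integer segment id per point, -1 = unlabeled.
    A view-1 segment is relabeled into the view-2 segment that covers most of its
    points when that coverage exceeds overlap_thresh and the view-2 segment is larger."""
    merged = labels_view1.copy()
    offset = labels_view1.max() + 1                  # keep absorbed ids distinct
    for seg in np.unique(labels_view1):
        if seg < 0:
            continue
        mask = labels_view1 == seg
        ids, counts = np.unique(labels_view2[mask], return_counts=True)
        valid = ids >= 0
        if not valid.any():
            continue
        best = ids[valid][np.argmax(counts[valid])]          # most overlapping view-2 segment
        coverage = counts[valid].max() / mask.sum()          # fraction of this segment it covers
        if coverage >= overlap_thresh and (labels_view2 == best).sum() > mask.sum():
            merged[mask] = offset + best                     # fold the sub-structure in
    return merged
```

Keeping only the mask with the largest coverage area at SAM time (the second suggestion above) would filter before a merging step like this instead.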

@SjoerdBraaksma
Author

Awesome! Thank you for the quick and in-depth reply; I'm going to try some things out. I'll report back in this thread if we find a nice combination of strategies. Amazing stuff you have made! I enjoy playing around with it.

@yhyang-myron
Member

Thanks for your interest in our work!
