Value to determine camera angles #40
Comments
Hi!
Hi yhyang-myron!
I do have one further question though: how would you go about re-merging these sub-structures into the final objects when you don't have a ground truth? Can I use the merging method you use (bi-directional merging) for that?
Hi, indeed, SAM may produce segmentation results with different coverage ranges when separating objects. After roughly reviewing the page, I think you could try bi-directional merging to re-merge these sub-structures. For example, define a fusion strategy that integrates the relevant parts into the largest part. Or, when saving the SAM results, try to keep the one with the largest coverage area as much as possible (if it is greater than a certain mIoU value). Just a small suggestion, which may not be accurate.
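A rough sketch of what I mean (not tested; the boolean-mask format, names, and the overlap criterion here are just assumptions, not how this repository stores segments):

```python
import numpy as np

def merge_substructures(masks, overlap_threshold=0.5):
    """Greedily fuse smaller point masks into the largest overlapping segment.

    masks: list of boolean arrays of length N (number of points), one per
           SAM-derived segment. Returns a shorter list of merged masks.
    """
    # Visit segments from largest to smallest coverage area.
    order = sorted(range(len(masks)), key=lambda i: int(masks[i].sum()), reverse=True)
    merged = []
    for idx in order:
        mask = masks[idx]
        size = int(mask.sum())
        best, best_ratio = None, 0.0
        for m in merged:
            # Fraction of this sub-structure already covered by an existing,
            # larger segment (an IoU test would work here as well).
            ratio = np.logical_and(mask, m).sum() / max(size, 1)
            if ratio > best_ratio:
                best, best_ratio = m, ratio
        if best is not None and best_ratio >= overlap_threshold:
            np.logical_or(best, mask, out=best)  # fuse into the larger part
        else:
            merged.append(mask.copy())
    return merged
```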
Awesome! Thank you for the quick and in-depth reply, I'm going to try some stuff out. I'll report back in this thread if we find a nice combination of strategies. Amazing stuff you have made! I enjoy playing around with it.
Thanks for your interest in our work!
Hello!
I would like to use this package to segment my own point cloud data. However, it does not contain RGB values, and I don't have a pre-segmented ground truth to evaluate the outcome. My question is twofold (a rough sketch of both ideas follows the list):

1. Is it advisable to use the number of returns / intensity value as a substitute for RGB? Do I need to rescale the values to fall into RGB value ranges?
2. Is there a metric to determine whether you have accumulated enough training images from different angles of your point cloud? I was thinking of something like a point-wise contribution metric, determining how often a point has been captured in an image. Or, across multiple training sequences, stop when segments become stable across predictions?
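To make the two ideas concrete, here is a minimal sketch, assuming the cloud comes with an (N,) intensity array and that per-view visibility is available as index arrays of the points seen in each rendered image; the function names are hypothetical and not part of this package:

```python
import numpy as np

def intensity_to_rgb(intensity, low_pct=1.0, high_pct=99.0):
    """Rescale scalar intensity (or number of returns) into the 0-255 RGB range."""
    lo, hi = np.percentile(intensity, [low_pct, high_pct])  # clip outliers
    scaled = np.clip((intensity - lo) / max(hi - lo, 1e-8), 0.0, 1.0)
    grey = (scaled * 255.0).astype(np.uint8)
    return np.stack([grey, grey, grey], axis=1)  # replicate grey value into R, G, B

def view_coverage(num_points, visible_indices_per_view, min_views=3):
    """Count how often each point is captured in a rendered view."""
    counts = np.zeros(num_points, dtype=np.int64)
    for idx in visible_indices_per_view:
        counts[idx] += 1
    covered = float(np.mean(counts >= min_views))  # fraction seen at least min_views times
    return counts, covered
```

One could keep adding camera angles until `covered` stops improving between rounds, or until segment assignments stabilize across repeated runs.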