
Problems with BCSS results #27

Open
qianyuli123 opened this issue Nov 2, 2024 · 7 comments

@qianyuli123

Hello, I recently reproduced the experiments from your paper. The results on CRAG are consistent with those reported in the paper, but the results on BCSS are relatively poor. I compared the inferred mask with the manually annotated mask and found a large difference in the predicted categories (as shown in the figure below). Is this normal, or is there a problem with my model training? Could you please clarify?
[image: side-by-side comparison of the inferred mask and the manually annotated mask]

@jingweizhang-xyz
Member

If the performance on the CRAG dataset matches ours while the performance on the BCSS dataset is poor, it might have something to do with the label processing. Did you use our code? Our ablations show that performance on BCSS should be very stable; check #6 and #3 to see whether your label processing is correct. Also, label 0 represents the unlabeled region and thus should not be taken into account when calculating metrics.
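
For illustration, here is a minimal sketch of what excluding label 0 from metric computation can look like; the function and variable names below are illustrative, not the actual code in this repo:

```python
import numpy as np

def dice_per_class(pred, gt, n_real_classes, ignore_label=0):
    """Per-class Dice that skips pixels whose ground-truth label is 0."""
    valid = gt != ignore_label               # mask out the unlabeled region
    scores = {}
    for c in range(1, n_real_classes + 1):   # class 0 is never evaluated
        p = (pred == c) & valid
        g = (gt == c) & valid
        denom = p.sum() + g.sum()
        scores[c] = 2.0 * (p & g).sum() / denom if denom else float("nan")
    return scores

# e.g. HxW integer masks with label 0 (unlabeled) and 1..5 (BCSS-style)
pred = np.random.randint(0, 6, size=(256, 256))
gt = np.random.randint(0, 6, size=(256, 256))
print(dice_per_class(pred, gt, n_real_classes=5))
```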

@qianyuli123
Author

Hello, I would now like to train and test your model on my own dataset. My mask annotations contain 8 classes, labeled 0 through 7. Do I need to change anything in your configuration for this, especially `num_classes` in the config file? And does the final inferred mask need to have 1 subtracted from it before evaluation? I don't quite understand these points. Could you please advise? Thank you very much.

@jingweizhang-xyz
Member

First the "num_classes" in the config file should be your number of classes + 1.

If you have some unlabeled region like BCSS that will not counted into loss calculation, set "ignored_classes": (0), like in the BCSS config. (Please only use class 0 for such region, we did not test other values.) Otherwise, set "ignored_classes": None. No need to change your actual labels, they will be added by 1 by our dataset.

If you have some background region that you want them in the loss calculation but not in the metric calculation, like CRAG. You should set ""ignored_classes_metric": 1, # if we do not count background, set to 1 (bg class)", so that this class will be ignored when calculating metrics.
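
For concreteness, a sketch of the relevant entries for an 8-class (0-7) dataset; the key names come from this thread, but the dict layout and values are illustrative:

```python
# Illustrative only: key names come from this thread; the surrounding
# dict layout is an assumption about the config format.
config = {
    "num_classes": 8 + 1,       # number of real classes + 1 (here: labels 0-7)

    # BCSS-style: class 0 marks an unlabeled region excluded from the loss;
    # set to None if every pixel carries a real class label.
    "ignored_classes": (0),

    # CRAG-style: set to 1 to keep the background class in the loss but
    # drop it from the metrics; None evaluates all classes.
    "ignored_classes_metric": None,
}
```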

The predicted classes always start from index 1: `pred_masks = torch.argmax(pred_masks[:, 1:, ...], dim=1) + 1`.
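
A small sketch of that index convention, assuming logits of shape (batch, num_classes, H, W); the shapes and names here are illustrative:

```python
import torch

# Channel 0 (the ignored/unlabeled slot) is dropped before the argmax,
# so predictions land in 1..num_classes-1, matching the labels after
# the dataset's +1 shift.
logits = torch.randn(2, 9, 256, 256)                      # num_classes = 8 + 1
pred_masks = torch.argmax(logits[:, 1:, ...], dim=1) + 1  # values in 1..8

# If you want predictions back in your original 0-7 labeling
# (e.g., for an external evaluation), subtract 1 afterwards.
pred_original = pred_masks - 1
```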

@qianyuli123
Author

OK, thank you for the explanation. I still have one small question, though: since I have already added 1 to `num_classes` in the config file, why is `num_classes` incremented by 1 again during model training and final evaluation (line 32 in main.py and line 161 in network/sam_network.py)? Is this really necessary?

@jingweizhang-xyz
Member

That's probably not necessary. You can give it a try.

@qianyuli123
Author

After I removed the "+1" from the two files above, the program failed to run and reported the error below, so it seems it cannot be removed. When I added the "+1" back, everything ran normally again.
[image: error traceback]

@jingweizhang-xyz
Member

It is probably connected with other parts of the program.
I suggest you keep it as it is; a redundant class usually does not affect performance.
Alternatively, you can check all the code related to this and debug it a bit.
