Problems with BCSS results #27
If the performance on the CRAG dataset is similar to ours while the performance on the BCSS dataset is poor, it might have something to do with the label processing. Did you use our code? Our ablations show that the performance on BCSS should be very stable; check #6 and #3 to see whether the label processing is correct. Also, label 0 represents the unlabeled region and thus should not be taken into account when calculating metrics.
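For illustration only, here is a minimal sketch of what ignoring label 0 in a metric looks like; the function name and signature are hypothetical, not the repository's actual code:

```python
import torch

def masked_pixel_accuracy(pred: torch.Tensor, target: torch.Tensor) -> float:
    """Pixel accuracy over labeled pixels only.

    pred and target are integer label maps of the same shape;
    label 0 in target marks the unlabeled region and is skipped.
    """
    valid = target != 0                      # exclude the unlabeled region
    if valid.sum() == 0:
        return float("nan")                  # image is entirely unlabeled
    correct = (pred[valid] == target[valid]).sum()
    return (correct / valid.sum()).item()
```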
Hello, I would like to train and test your model on my dataset. There are 8 classes in my mask annotations, labeled 0-7. Do I need to make any changes to your configuration for this, especially num_classes in the configuration file? And does the final inferred mask need to have 1 subtracted before evaluation? I don't quite understand these issues; could you please explain? Thank you very much.
First, "num_classes" in the config file should be your number of classes + 1. If you have an unlabeled region, like BCSS, that should not be counted in the loss calculation, set "ignored_classes": (0,), as in the BCSS config. (Please use only class 0 for such a region; we did not test other values.) Otherwise, set "ignored_classes": None. There is no need to change your actual labels; our dataset adds 1 to them. If you have a background region that you want included in the loss calculation but not in the metric calculation, like CRAG, set "ignored_classes_metric": 1 (the background class), so that this class is ignored when calculating metrics. The predicted classes always start from index 1: "pred_masks = torch.argmax(pred_masks[:, 1:, ...], dim=1) + 1"
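To make this concrete for the 8-class example above (labels 0-7, with 0 as a BCSS-style unlabeled region), a config sketch might look like the following. The keys are the ones quoted in this thread; the surrounding structure and dummy values are illustrative assumptions, not a verbatim file from the repository:

```python
import torch

# Hypothetical config sketch (keys as quoted above; values illustrative)
config = {
    "num_classes": 8 + 1,      # annotated classes + 1 (the dataset shifts labels up by 1)
    "ignored_classes": (0,),   # BCSS-style unlabeled region, excluded from the loss
    # CRAG-style alternative: background counts toward the loss but not the metrics
    # "ignored_classes": None,
    # "ignored_classes_metric": 1,  # 1 = the background class
}

# Predicted classes start from index 1, so channel 0 is dropped before the argmax.
logits = torch.randn(2, config["num_classes"], 64, 64)    # (N, C, H, W) dummy logits
pred_masks = torch.argmax(logits[:, 1:, ...], dim=1) + 1  # labels in 1..num_classes-1
```

In this 1-based scheme the predictions line up with the shifted ground-truth labels, which is consistent with the reply below that manually subtracting 1 is probably unnecessary.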
OK, thank you for your explanation. However, I still have a small question. Since I have already added 1 to …
That's probably not necessary, but you can give it a try.
It is probably connected with other parts of the program. |
Hello, I recently reproduced the results from your paper. The experimental results on CRAG are consistent with those you report, but the results on BCSS are relatively poor. I compared the inferred masks with the manually annotated masks and found a large difference in categories between them (as shown in the attached figure). Is this normal, or is there a problem with my model training? Could you please advise?