Have you generated mask files (.png format) corresponding to the segmentations in your JSON file? If not, you have to generate them; you can save them in the same directory as the JPEG image root.
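For example, a polygon from a COCO-style `segmentation` entry can be rasterized into a PNG mask roughly like this (a sketch assuming Pillow is available; the coordinates, image size, and filename are placeholders):

```python
# Sketch: rasterize one COCO-style polygon into a binary PNG mask.
# The polygon, image size, and output name below are placeholders.
from PIL import Image, ImageDraw

def polygon_to_mask(polygon, width, height):
    """polygon: flat [x0, y0, x1, y1, ...] list, as in a COCO 'segmentation' entry."""
    mask = Image.new("L", (width, height), 0)        # single channel, all background
    ImageDraw.Draw(mask).polygon(polygon, fill=255)  # foreground pixels = 255
    return mask

mask = polygon_to_mask([10, 10, 60, 10, 60, 40, 10, 40], width=100, height=50)
mask.save("image_0001_mask.png")  # store alongside the JPEG images
```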
Hey there!
I know there are threads about training a segmentation/keypoint model simultaneously. However, I haven't found a final answer yet on whether this is possible at all or whether I should simply stop trying :)...
I'm trying to use the pre-trained model mask_rcnn_X_101_32x8d_FPN_3x to identify roofs/ridges/obstacles simultaneously. I want to use segmentation annotations (for polygon shapes --> roofs/obstacles/some others) and keypoint annotations (for line shapes --> ridges/valleys) at the same time.
My training and validation datasets consequently contain both segmentation and keypoint annotations. I already added dummy keypoint values to the segmentation-only annotations (a keypoints array and num_keypoints = 2) so that all annotations share the same structure, but this didn't solve the problem. My annotations look like this:
Segmentation:
Keypoints:
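In outline, each annotation record combines both fields roughly like this (illustrative placeholder coordinates, following Detectron2's standard dataset-dict format):

```python
# Illustrative annotation record carrying both a polygon and keypoints.
# All coordinates are made-up placeholders; bbox_mode 0 = BoxMode.XYXY_ABS.
annotation = {
    "bbox": [100.0, 120.0, 300.0, 260.0],
    "bbox_mode": 0,
    "category_id": 0,
    # segmentation: list of flat [x, y, x, y, ...] polygons (>= 3 points each)
    "segmentation": [[100.0, 120.0, 300.0, 120.0, 300.0, 260.0, 100.0, 260.0]],
    # keypoints: flat [x, y, visibility] triplets; visibility 2 = labeled and visible
    "keypoints": [150.0, 130.0, 2, 250.0, 130.0, 2],
    "num_keypoints": 2,
}
```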
In my training script I activated both the mask head and the keypoint head. My config looks like this:
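In outline, the settings that have to agree look roughly like this (a sketch, not my exact file; the dataset name and keypoint names are placeholders):

```python
# Sketch of the config flags and metadata that must agree for joint
# mask + keypoint training in Detectron2. Names below are placeholders.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data import MetadataCatalog

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml"))
cfg.MODEL.MASK_ON = True        # mask head on: every instance needs gt_masks
cfg.MODEL.KEYPOINT_ON = True    # keypoint head on: every instance needs gt_keypoints
cfg.MODEL.ROI_KEYPOINT_HEAD.NUM_KEYPOINTS = 2
cfg.TEST.KEYPOINT_OKS_SIGMAS = [0.05, 0.05]   # one sigma per keypoint

# "metadata contains 1 points" means this list's length disagrees with the data:
MetadataCatalog.get("roof_train").keypoint_names = ["ridge_start", "ridge_end"]
MetadataCatalog.get("roof_train").keypoint_flip_map = []
```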
Unfortunately it is not working. Errors:
AttributeError: Cannot find field 'gt_masks' in the given Instances!
or
ValueError: expected sequence of length 2 at dim 1 (got 0)
or
ValueError: Keypoint data has 2 points, but metadata contains 1 points!
The last one occurs with a custom_dataset_mapper:
When I export only the segmentation annotations and deactivate the keypoint head (or vice versa: export only the keypoint annotations and deactivate the mask head), it works. Multi-task training, however, does not seem to work...
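For what it's worth, the "expected sequence of length 2 ... (got 0)" error suggests some records still carry an empty keypoints list. A minimal sketch of padding every annotation to a fixed triplet length (assuming 2 keypoints and COCO's visibility convention, where 0 = not labeled and excluded from the loss):

```python
# Sketch: ensure every annotation carries a well-formed keypoints array so
# the keypoint structure never sees an empty or ragged list.
# NUM_KEYPOINTS and the sample records are illustrative assumptions.
NUM_KEYPOINTS = 2

def pad_keypoints(annotations):
    for ann in annotations:
        kps = ann.get("keypoints") or []
        if len(kps) != 3 * NUM_KEYPOINTS:
            # visibility 0 = "not labeled": ignored by the keypoint loss
            ann["keypoints"] = [0.0, 0.0, 0] * NUM_KEYPOINTS
            ann["num_keypoints"] = 0
    return annotations

annos = [
    {"segmentation": [[0, 0, 10, 0, 10, 10]]},                  # polygon-only record
    {"keypoints": [5.0, 5.0, 2, 8.0, 8.0, 2], "num_keypoints": 2},
]
pad_keypoints(annos)
```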
Does anyone have an idea about this?
Thanks in advance!