Currently, the axon and myelin segmentation models are trained separately and independently. The two cascaded models should instead be trained jointly, which would also allow parameter sharing, such as a common image encoder for both models.
This month, the myelin segmentation should get good enough to move on to this "cascaded" training. I already expect a lot of autograd problems...
After this, however, the model should be ready for a public release.
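To make the "trained at once" idea concrete, here is a minimal PyTorch sketch of a single joint optimization step, assuming a toy shared encoder and two tiny heads (all module names and shapes here are hypothetical placeholders, not the actual AxonDeepSeg architecture). Both losses flow back through the shared encoder in one backward pass, which is where the anticipated autograd issues would surface:

```python
import torch
import torch.nn as nn

# Hypothetical shared encoder and two task heads (placeholders).
encoder = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
axon_head = nn.Conv2d(16, 1, 1)
myelin_head = nn.Conv2d(16, 1, 1)

params = (
    list(encoder.parameters())
    + list(axon_head.parameters())
    + list(myelin_head.parameters())
)
optimizer = torch.optim.Adam(params, lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

# One joint step on a dummy batch: both task losses are summed,
# so a single backward() updates the shared encoder from both.
image = torch.randn(2, 1, 64, 64)
axon_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()
myelin_gt = torch.randint(0, 2, (2, 1, 64, 64)).float()

features = encoder(image)  # computed once, used by both heads
loss = criterion(axon_head(features), axon_gt) + criterion(
    myelin_head(features), myelin_gt
)
optimizer.zero_grad()
loss.backward()  # one graph, one backward pass
optimizer.step()
```

Because the two losses share one computation graph, there is no need for `retain_graph=True` or separate backward calls, which avoids the most common class of autograd problems in cascaded setups.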
After thinking about it, I believe both the axon and the myelin models should be trained together at once. We could use region-based training, as discussed in axondeepseg/axondeepseg#773. A single image encoder would embed the input, and two separate mask decoders would segment the axon (or the inner myelin border) and the myelin (or the outer myelin border). We would still prompt the model with the original axon centroids. Alternatively, a single mask decoder could output two channels, but we will have to look into this.
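The shared-encoder / two-decoder layout described above could look roughly like this. This is only a structural sketch with made-up layer sizes (the real model would be SAM-style and prompted with axon centroids, which is omitted here for brevity):

```python
import torch
import torch.nn as nn


class TwoHeadSegmenter(nn.Module):
    """Sketch: one shared image encoder feeding two mask decoders.

    Layer choices are illustrative placeholders, not the real architecture.
    """

    def __init__(self, channels: int = 16):
        super().__init__()
        # Shared encoder: embeds the input image once for both tasks.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
        )
        # Separate decoders: axon (inner border) and myelin (outer border).
        self.axon_decoder = nn.Conv2d(channels, 1, 1)
        self.myelin_decoder = nn.Conv2d(channels, 1, 1)

    def forward(self, image: torch.Tensor):
        features = self.encoder(image)  # computed once, shared by both heads
        return self.axon_decoder(features), self.myelin_decoder(features)


model = TwoHeadSegmenter()
axon_logits, myelin_logits = model(torch.randn(1, 1, 64, 64))
```

The single-decoder alternative mentioned above would simply replace the two 1-channel heads with one `nn.Conv2d(channels, 2, 1)` and split the output channels; the trade-off is less per-task capacity against fewer parameters.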