When we test how well AlexNet pre-trained on ILSVRC performs on PASCAL-VOC, the label ordering is different, e.g. neuron 5 no longer represents a dog but a tree.
I can think of two ways to adjust the models to this labeling difference:
1. throw out the last layer and train an SVM on the second-to-last layer
2. freeze all layers but the last and re-train on the specific dataset
I guess it doesn't matter which approach we choose, right? @ncheney
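For concreteness, here's a minimal sketch of option 2 (freeze everything except the last layer) as I'd write it in PyTorch/torchvision -- the framework choice, the 20-class VOC head, and the single-label loss are all placeholder assumptions:

```python
# Option 2 sketch: freeze the pre-trained AlexNet and retrain only a new final layer.
# Assumes torchvision's AlexNet (classifier[6] is the final 1000-way Linear layer)
# and 20 PASCAL-VOC classes treated as single-label for simplicity.
import torch
import torch.nn as nn
from torchvision import models

NUM_VOC_CLASSES = 20  # placeholder

model = models.alexnet(pretrained=True)  # ILSVRC weights

# Freeze all pre-trained parameters (conv features and FC layers alike).
for param in model.parameters():
    param.requires_grad = False

# Swap the 1000-way ILSVRC head for a fresh VOC head; its parameters are
# trainable by default, so it is the only part that gets updated.
model.classifier[6] = nn.Linear(4096, NUM_VOC_CLASSES)

optimizer = torch.optim.SGD(model.classifier[6].parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()
```

Option 1 would reuse the same frozen features but feed the 4096-d activations of the second-to-last FC layer into an SVM (e.g. scikit-learn's LinearSVC) instead of training a new linear layer.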
I'm indifferent between the two approaches. I'm also not sure whether it's best to retrain just the last layer or all 3 of the fully connected layers (though I'd be interested in seeing how much better a result we get with more retraining -- or if the combination of convolutional features is robust to the different class sets).
Though I think this issue will only come up after we try the simpler case of training on only half of the ImageNet classes and then adding in the other half.
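If we do end up comparing last-layer-only retraining against retraining all three FC layers, the only change is which parameters stay frozen -- a rough sketch, again assuming torchvision's AlexNet, where classifier[1], classifier[4], and classifier[6] are the three fully connected layers:

```python
# Variant sketch: retrain all three fully connected layers and keep only the
# convolutional feature extractor frozen (same torchvision AlexNet assumption as above).
import torch
import torch.nn as nn
from torchvision import models

model = models.alexnet(pretrained=True)

# Freeze only the convolutional features.
for param in model.features.parameters():
    param.requires_grad = False

# Re-initialize the final layer for the new class set; classifier[1] and
# classifier[4] keep their ILSVRC weights but stay trainable.
model.classifier[6] = nn.Linear(4096, 20)  # 20 = placeholder VOC class count

# Optimize every parameter that is still trainable (i.e. the three FC layers).
trainable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable_params, lr=1e-3, momentum=0.9)
```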