Network's performance decreases after adopting WS #20
I am training an instance segmentation network. Before adopting WS, I could achieve mAP 35.66 with Conv+GN; after adopting WS, I can only achieve 35.27. Is there something wrong with my code? My code to convert the original network to WS is below. Note that my original network contains a ResNet101-FPN backbone with deformable convs, depthwise-separable convs, and the linear bottlenecks introduced in MobileNet-V2.
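(The conversion snippet itself isn't reproduced here, so purely as an illustration — not the poster's actual code, and `convert_to_ws` is a hypothetical helper — one way to swap in WS convs only where a conv is directly followed by a normalization layer might look like this, with `WSConv2d` defined as in the sketch after the first reply below:)

```python
import torch.nn as nn

def convert_to_ws(module, norm_types=(nn.BatchNorm2d, nn.GroupNorm)):
    """Illustrative only: replace each Conv2d that is immediately followed
    by a normalization layer (among the same parent's children) with a
    weight-standardized variant, copying its trained parameters over."""
    children = list(module.named_children())
    for (name, child), (_, nxt) in zip(children, children[1:] + [(None, None)]):
        if isinstance(child, nn.Conv2d) and isinstance(nxt, norm_types):
            ws = WSConv2d(child.in_channels, child.out_channels,
                          child.kernel_size, child.stride, child.padding,
                          child.dilation, child.groups, child.bias is not None)
            ws.load_state_dict(child.state_dict())  # reuse trained weights
            setattr(module, name, ws)
        else:
            convert_to_ws(child, norm_types)  # recurse into submodules
    return module
```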
Thanks for the question. Did you also use the backbones pre-trained with WS? Also, make sure every WS-Conv2d is followed by an activation normalization layer; otherwise, use a regular Conv2d.
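(For reference, the core of a weight-standardized conv is small — a minimal PyTorch sketch, following the idea from the paper though not necessarily the repo's exact code:)

```python
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Conv2d with Weight Standardization: each output channel's weights
    are centered and divided by their std before the convolution."""
    def forward(self, x):
        w = self.weight
        w = w - w.mean(dim=(1, 2, 3), keepdim=True)                # center per output channel
        std = w.view(w.size(0), -1).std(dim=1).view(-1, 1, 1, 1)   # per-channel std
        w = w / (std + 1e-5)                                       # eps avoids division by zero
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```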
Thanks for your reply. I tried both. The other network components are replaced with WS, and in both situations I saw performance decreases. I have verified the network's architecture: only convs directly followed by BN are replaced with WS. Here I have a doubt: for combined convs like a LinearBottleNeck followed by BN, i.e. 3x3+1x1+3x3+BN, should we replace only the last 3x3 conv with WS, or all three convs? In my code, I chose the former.
Sorry, it's hard for me to see where the problem might be given the details you provided. However, one thing I would recommend trying is removing `weight /= std`, i.e. only centering the weights. This would remove the benefit of dividing by the std but is more tolerant of different architecture designs. This strategy might also apply to the combined convolutions.
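(Concretely, that suggestion amounts to dropping the division in the sketch above — a hypothetical centering-only variant:)

```python
import torch.nn as nn
import torch.nn.functional as F

class WCConv2d(nn.Conv2d):
    """Weight centering only: subtract each output channel's mean,
    but keep the original scale (no division by std)."""
    def forward(self, x):
        w = self.weight - self.weight.mean(dim=(1, 2, 3), keepdim=True)
        return F.conv2d(x, w, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```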
@joe-siyuan-qiao Why is it important for WS to be followed by a normalization layer? From what I understand, WS aims to preserve the statistics of the tensors, so shouldn't it be useful even for layers without normalization?
Just for your reference, on my task, GN > GN+WC (weight centralization) >> GN+WS.