I tried to implement ResNeXt using the slim structure.
I implemented it by modifying the ResNet code.
I followed the Fig.3(b) structure in the ResNeXt paper. Since ResNeXt contains parallel sub-networks in each block, I used a for loop to build them.
But building and training the ResNeXt network take much more time than ResNet.
Why?
I am new to TF, but I guess I may not be using the right method to implement such parallel sub-network structures, and that's why it is slow.
Is there any better way to manage such parallel sub-networks in TF?
I mean, for example, is there any efficient way to implement the network architecture of Fig.3(a) directly, without using the equivalent forms in Fig.3(b) and Fig.3(c)? Any concrete example to learn from?
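For context, here is a minimal numpy sketch (my own illustration, not from any TF/slim API) of why a Python for loop over branches tends to be slow: the C parallel 1x1 transforms of Fig.3(b) are mathematically equal to one batched (grouped) contraction as in Fig.3(c), so the loop creates C small graph ops where a single fused op would do. All names and shapes below are assumptions for illustration.

```python
import numpy as np

# Hypothetical sizes: cardinality C branches, each a 1x1 conv
# (i.e., a per-pixel matmul) on its own d-channel slice of the input.
C, d, n = 32, 4, 100            # cardinality, per-branch width, number of pixels
rng = np.random.default_rng(0)
x = rng.standard_normal((n, C * d))  # input features, channels split across branches
W = rng.standard_normal((C, d, d))   # one d x d weight matrix per branch

# Fig.3(b)-style: Python loop over branches, then concatenate the outputs.
# In TF this builds C separate ops per block, which slows graph construction
# and execution.
loop_out = np.concatenate(
    [x[:, i * d:(i + 1) * d] @ W[i] for i in range(C)], axis=1)

# Fig.3(c)-style: one batched contraction over all branches at once
# (the numpy analogue of a grouped convolution).
grouped_out = np.einsum('ngd,gde->nge',
                        x.reshape(n, C, d), W).reshape(n, C * d)

# The two forms produce identical results.
assert np.allclose(loop_out, grouped_out)
```

Since the outputs match, replacing the per-branch loop with a single grouped/batched op should give the same network while avoiding the per-branch graph overhead.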
Thanks a lot.