
ResNeXt Much Slower than ResNet in TF? #4

Open
xb-chang opened this issue May 1, 2017 · 0 comments


xb-chang commented May 1, 2017

Hi,

I tried to implement ResNeXt using the slim structure.

I implemented it by modifying the ResNet code.

I followed the Fig. 3(b) structure in the ResNeXt paper. Since ResNeXt contains parallel sub-networks in each block, I used a for loop to build them.
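Roughly, the block I built looks like this minimal sketch (assuming TF 1.x with tf.contrib.slim; the function name and the cardinality/width values here are just placeholders, not my actual code):

```python
import tensorflow as tf

slim = tf.contrib.slim

def resnext_block_fig3b(inputs, cardinality=32, bottleneck_width=4, out_channels=256):
    """Fig. 3(b)-style block: `cardinality` parallel branches built in a
    Python for loop, concatenated, then fused by a 1x1 convolution."""
    branches = []
    for i in range(cardinality):
        with tf.variable_scope('branch_%d' % i):
            net = slim.conv2d(inputs, bottleneck_width, [1, 1])  # 1x1 reduce
            net = slim.conv2d(net, bottleneck_width, [3, 3])     # 3x3 transform
            branches.append(net)
    net = tf.concat(branches, axis=3)                                 # join along channels
    net = slim.conv2d(net, out_channels, [1, 1], activation_fn=None)  # 1x1 restore
    # Identity shortcut; assumes `inputs` already has `out_channels` channels.
    return tf.nn.relu(inputs + net)
```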

But building and training the ResNeXt network takes much more time than ResNet.
Why?

I am new to TF, but I guess I am not using the right method to implement such parallel sub-network structures, and that is why it is slow.

Is there a better way to manage such parallel sub-networks in TF?
For example, is there an efficient way to implement the network architecture of Fig. 3(a) directly, without relying on the equivalent forms in Fig. 3(b) and Fig. 3(c)? Any concrete example to learn from?

Thanks a lot.
