Adapted torch's new API to fix #179 #338
Conversation
When was this change made in loadcaffe? I'm just wondering whether the problem in #179, posted in March, could have been caused by this. I last installed loadcaffe as recently as August and did not have this problem then, and I see no major commits to loadcaffe recently. I wonder if something else is causing `#cnn` to fail (I think I have seen it once or twice a long time ago, though not in neural-style).
Oh, it should be due to changes in torch, sorry.
I don't immediately see why torch would break it either. I made a minimal test script that uses loadcaffe to load VGG-19 into `cnn` and prints the length of both `cnn` and `cnn.modules`. Both work on both of my machines, and the second at least has a fairly recent torch.
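The test script mentioned above isn't shown in the thread; a minimal sketch of what it likely looks like (the model file paths are placeholders, and it assumes loadcaffe and the VGG-19 prototxt/caffemodel are available locally) would be:

```lua
require 'loadcaffe'

-- Placeholder paths; substitute the actual VGG-19 files.
local proto = 'models/VGG_ILSVRC_19_layers_deploy.prototxt'
local caffemodel = 'models/VGG_ILSVRC_19_layers.caffemodel'

local cnn = loadcaffe.load(proto, caffemodel, 'nn')

-- On some torch versions the length operator on nn.Sequential
-- returns the layer count; on others it returns 0.
-- cnn.modules is a plain Lua array, so #cnn.modules is reliable.
print('#cnn         = ' .. #cnn)
print('#cnn.modules = ' .. #cnn.modules)
```

On an unaffected install both prints show the same layer count (46 in the comments below); on an affected install the first print shows 0.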
I get
here.
OK, it appears that your method is safer. I just wonder what could cause the difference, especially since this already happened to someone in March, so the real reason why `#cnn` fails is a mystery. Perhaps torch was built against a different Lua interpreter (my th uses LuaJIT).
I installed torch again via luarocks and am still getting 46 and 46.
My torch is installed from the Arch Linux User Repository (AUR).
I ran into the same issue and commented on the line `for i = 1, #cnn do` (f86babf#commitcomment-23341533). Changing it to `for i = 1, #cnn.modules do` worked.
`#cnn` will now return 0 instead of the number of layers.
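The safe iteration pattern can be shown without loadcaffe, using a plain `nn.Sequential` (a small self-contained sketch, assuming only the torch `nn` package is installed):

```lua
require 'nn'

-- Build a tiny container with two layers.
local cnn = nn.Sequential()
cnn:add(nn.Linear(10, 10))
cnn:add(nn.ReLU())

-- Depending on the torch version, #cnn may evaluate to 0,
-- but cnn.modules is an ordinary Lua array of layers,
-- so its length operator always gives the layer count.
print('#cnn.modules = ' .. #cnn.modules)

-- Iterate over layers the version-safe way.
for i = 1, #cnn.modules do
  print(i, torch.type(cnn.modules[i]))
end
```

This is why replacing `#cnn` with `#cnn.modules` in the loops fixes the issue on newer torch while remaining correct on older installs.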