Adapted torch's new API to fix #179 #338

Open

wants to merge 1 commit into master

Conversation

petronny

#cnn will return 0 instead of the number of layers now.

@htoyryla

htoyryla commented Oct 13, 2016

When was this change made in loadcaffe? I just wonder whether the problem in #179, posted in March, could have been caused by this. I last installed loadcaffe as recently as August and did not have this problem then, and I see no major commits to loadcaffe recently. I wonder if there is some other reason causing #cnn to fail (I think I have seen it once or twice a long time ago, though not in neural-style).

@petronny petronny changed the title Adapted loadcaffe's new API to fix #179 Adapted torch's new API to fix #179 Oct 13, 2016
@petronny
Author

Oh, it should be the changes in torch, sorry.
BTW, all the torch packages on my system are the latest.
The usage of for i,layer in ipairs(cnn.modules) can be found in:
https://github.com/torch/demos/blob/master/person-detector/model.lua
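For reference, here is that pattern applied to a loadcaffe-loaded network. This is only a sketch, not the exact diff in this PR; the model paths are the standard neural-style VGG-19 files and are assumed here for illustration.

require 'loadcaffe'
-- sketch only: walk the modules table directly instead of relying on #cnn
local cnn = loadcaffe.load('models/VGG_ILSVRC_19_layers_deploy.prototxt',
                           'models/VGG_ILSVRC_19_layers.caffemodel', 'nn'):float()
for i, layer in ipairs(cnn.modules) do
  print(i, torch.typename(layer))   -- e.g. 1  nn.SpatialConvolution
end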

@htoyryla

htoyryla commented Oct 13, 2016

I don't immediately see why torch would break it either. I made this minimal test script, which uses loadcaffe to load VGG-19 into cnn and gets the size of both cnn and cnn.modules. Both work on both of my machines; the second one, at least, has a quite recent torch.

require 'loadcaffe'
-- load VGG-19 with the nn backend, then compare the length of the container
-- itself with the length of its modules table
cnn = loadcaffe.load("models/VGG_ILSVRC_19_layers_deploy.prototxt", "models/VGG_ILSVRC_19_layers.caffemodel", "nn"):float()
print(#cnn)        -- relies on the container's length semantics
c = cnn.modules
print(#c)          -- plain Lua table length

@petronny
Author

I get

0
46

here.

@htoyryla

htoyryla commented Oct 13, 2016

OK, it appears that your method is safer. I just wonder what could cause the difference, especially since this already happened to someone in March, so the real reason why #cnn fails is a mystery. Perhaps torch was built to use a different Lua interpreter (my th uses LuaJIT).

@htoyryla

Installed torch again via luarocks; still getting 46 and 46.

@petronny
Author

My torch is installed from the Arch Linux User Repository.
Maybe I should take a look at the PKGBUILD.

@ptekchand

I ran into the same issue and commented on the line:

for i = 1, #cnn do 

f86babf#commitcomment-23341533
Changing it to

for i = 1, #cnn.modules do

worked.
If I understand correctly, one might run into this issue because of a Lua 5.2 compatibility/feature requirement, which wasn't enabled in the LuaJIT built for my installation (Windows 10 x64, MSVC 2015 x64, with the cunn backend).
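A minimal pure-Lua illustration of the interpreter difference described above, assuming nn containers report their size through a __len metamethod: Lua 5.1 semantics (plain LuaJIT) ignore __len on tables, while Lua 5.2 semantics, or LuaJIT built with LUAJIT_ENABLE_LUA52COMPAT, honour it.

-- no torch needed; the metamethod stands in for what an nn container provides
local t = setmetatable({}, { __len = function() return 46 end })
print(#t)   -- 46 when __len is honoured, 0 under plain LuaJIT / Lua 5.1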
