
Error in Pretrained Version #8

Open
wj926 opened this issue Aug 17, 2017 · 2 comments
wj926 commented Aug 17, 2017

I was running the pretrained version in PyTorch by downloading the files from Google Drive and then running python generate.py --load_path ./maxlen30.
CUDA is 8.0 and Python is 3.5.
However, the error message RuntimeError: invalid argument 2: dimension 2 out of range of 2D tensor at /pytorch/torch/lib/TH/generic/THTensor.c:24 came up, and I am not sure how to solve it.

Did you have similar problems?

I will upload the whole error message.

(tensorflow) slcf@slcf:~/ARAE/pytorch$ python3 generate.py --load_path ./maxlen30/
{'noprint': False, 'ngenerations': 10, 'temp': 1, 'ninterpolations': 5, 'seed': 1111, 'outf': './generated.txt', 'steps': 5, 'load_path': './maxlen30/', 'sample': False}
Loading models from./maxlen30/
Traceback (most recent call last):
  File "generate.py", line 135, in <module>
    main(args)
  File "generate.py", line 74, in main
    maxlen=model_args['maxlen'])
  File "/home/slcf/ARAE/pytorch/models.py", line 325, in generate
    sample=sample)
  File "/home/slcf/ARAE/pytorch/models.py", line 270, in generate
    inputs = torch.cat([embedding, hidden.unsqueeze(1)], 2)
  File "/usr/local/lib/python3.5/dist-packages/torch/autograd/variable.py", line 897, in cat
    return Concat.apply(dim, *iterable)
  File "/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/tensor.py", line 316, in forward
    ctx.input_sizes = [i.size(dim) for i in inputs]
  File "/usr/local/lib/python3.5/dist-packages/torch/autograd/_functions/tensor.py", line 316, in <listcomp>
    ctx.input_sizes = [i.size(dim) for i in inputs]
RuntimeError: invalid argument 2: dimension 2 out of range of 2D tensor at /pytorch/torch/lib/TH/generic/THTensor.c:24
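For reference, the failing pattern can be reproduced in isolation; this is a minimal sketch with made-up tensor shapes, showing that `torch.cat` with dim=2 rejects 2D inputs:

```python
import torch

# Two 2D tensors; shapes are made up for illustration.
embedding = torch.zeros(4, 8)
hidden = torch.zeros(4, 8)

try:
    # A 2D tensor only has dimensions 0 and 1, so dim=2 is out of range.
    inputs = torch.cat([embedding, hidden], 2)
except (RuntimeError, IndexError) as e:
    print("cat failed:", e)
```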

@kellywzhang
Collaborator

kellywzhang commented Aug 25, 2017

Hello, I replicated this issue when using the newest version of PyTorch. The issue seems to be just a difference in the default dimensions (whether reduced dimensions are kept or squeezed). I didn't try it with the pre-trained model, but changing the generate method in models.py to the following fixed it for me:

    def generate(self, hidden, maxlen, sample=True, temp=1.0):
        """Generate through decoder; no backprop"""

        batch_size = hidden.size(0)

        if self.hidden_init:
            # initialize decoder hidden state to encoder output
            state = (hidden.unsqueeze(0), self.init_state(batch_size))
        else:
            state = self.init_hidden(batch_size)

        # <sos>
        self.start_symbols.data.resize_(batch_size)
        self.start_symbols.data.fill_(1)

        embedding = self.embedding_decoder(self.start_symbols)
        inputs = torch.cat([embedding, hidden], 1).unsqueeze(1)

        # unroll
        all_indices = []
        for i in range(maxlen):
            output, state = self.decoder(inputs, state)
            overvocab = self.linear(output.squeeze(1))

            if not sample:
                vals, indices = torch.max(overvocab, 1)
            else:
                # sampling
                probs = F.softmax(overvocab/temp)
                indices = torch.multinomial(probs, 1).squeeze(1)

            all_indices.append(indices.unsqueeze(1))

            embedding = self.embedding_decoder(indices)
            inputs = torch.cat([embedding, hidden], 1).unsqueeze(1)

        max_indices = torch.cat(all_indices, 1)

        return max_indices
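To see why the revised concatenation produces what the decoder expects, here is a minimal shape check; the batch size, embedding size, and hidden size below are made-up values, not the model's actual hyperparameters:

```python
import torch

# Made-up sizes for illustration: batch of 4, 8-dim embeddings, 8-dim hidden state.
batch_size, emsize, nhidden = 4, 8, 8

embedding = torch.zeros(batch_size, emsize)  # 2D: (batch, emsize)
hidden = torch.zeros(batch_size, nhidden)    # 2D: (batch, nhidden)

# Concatenate along dim 1 while both tensors are 2D, then add the
# sequence-length dimension the decoder expects: (batch, 1, features).
inputs = torch.cat([embedding, hidden], 1).unsqueeze(1)
print(inputs.size())  # torch.Size([4, 1, 16])
```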

I'm thinking of updating the code at some point, but not yet, since I don't want to change something that would cause problems when running with the old version of PyTorch.

Let me know if you run into other issues. Best!
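For context on the "kept or squeezed" difference: in newer PyTorch, reductions like torch.max drop the reduced dimension by default, which is why the fix above unsqueezes the per-step indices before appending them. A small sketch (shapes are arbitrary):

```python
import torch

overvocab = torch.randn(4, 10)  # (batch, vocab), arbitrary values

# Newer PyTorch squeezes the reduced dimension by default, so indices is 1D...
vals, indices = torch.max(overvocab, 1)
print(indices.size())  # torch.Size([4])

# ...and must be unsqueezed back to (batch, 1) before being concatenated
# along the sequence dimension.
print(indices.unsqueeze(1).size())  # torch.Size([4, 1])
```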

@JulesGM

JulesGM commented May 16, 2018

Hello. I would propose that this fix (#8 (comment)) be pushed to the main version, if you so desire, since it's been a while.
Thanks for the help!
