AdaptiveAvgPool1d and BatchNorm1d raise an error in multi-step propagation mode #551

Open
1 of 4 tasks
Zihao0 opened this issue Jun 12, 2024 · 1 comment

Comments


Zihao0 commented Jun 12, 2024

Read before creating a new issue

  • Users who want to use SpikingJelly should first be familiar with the usage of PyTorch.
  • If you do not know much about PyTorch, we recommend learning from the basic PyTorch tutorials first.
  • Do not ask for help with basic PyTorch/Machine Learning concepts that are unrelated to SpikingJelly. For such questions, please refer to Google or the PyTorch Forums.

For faster response

You can @ the corresponding developers for your issue. Here is the division:

Features                            Developers
Neurons and Surrogate Functions     fangwei123456, Yanqi-Chen
CUDA Acceleration                   fangwei123456, Yanqi-Chen
Reinforcement Learning              lucifer2859
ANN to SNN Conversion               DingJianhao, Lyu6PosHao
Biological Learning (e.g., STDP)    AllenYolk
Others                              Grasshlw, lucifer2859, AllenYolk, Lyu6PosHao, DingJianhao, Yanqi-Chen, fangwei123456

We are glad to add new developers who are volunteering to help solve issues to the above table.

Issue type

  • [x] Bug Report
  • [ ] Feature Request
  • [ ] Help wanted
  • [ ] Other

SpikingJelly version

0.0.0.0.14

Description

...

Minimal code to reproduce the error/bug

import torch
import torch.nn as nn
from spikingjelly.activation_based import layer, functional


class CustomModel(nn.Module):
    def __init__(self, num_cls):
        super(CustomModel, self).__init__()
        self.T = 3
        # Layer 3
        self.fc1 = nn.Sequential(
            # nn.AdaptiveAvgPool1d(num_cls),
            layer.BatchNorm1d(20)
            # nn.Flatten(),  # add a flatten layer
            # nn.Linear(64 * 6 * 6, num_cls * 10),
        )

        functional.set_step_mode(self, step_mode='m')

    def forward(self, x):
        # encoder: repeat the input along a new time dimension -> [T, N, C]
        x = x.unsqueeze(0).repeat(self.T, 1, 1)
        print(f"Input shape: {x.shape}")
        x = self.fc1(x)
        print(f"After fc1 shape: {x.shape}")
        return x

model = CustomModel(num_cls=2)
input_tensor = torch.randn(256, 20)  # example input tensor
output = model(input_tensor)
print(output.shape)  # output shape

Error

ValueError: expected x with shape [T, N, C, L], but got x with shape torch.Size([3, 256, 20])!

Question

Strangely, in single-step mode the program runs fine with an input of shape [N, C]. In multi-step mode, after expanding the input to [T, N, C], the error above occurs, saying the expected input shape is [T, N, C, L]. Why is that? Shouldn't it be [T, N, C]? If the single-step input is [N, C], shouldn't the multi-step input be [T, N, C]?
The problem occurs with both AdaptiveAvgPool1d and BatchNorm1d: single-step works, but multi-step requires an extra L dimension. Why is this, and how can it be solved? Thanks!
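For reference, a likely explanation inferred from the error message (not an official answer from the maintainers): in multi-step mode, layer.BatchNorm1d flattens the input from [T, N, C, L] to [T * N, C, L] before calling nn.BatchNorm1d, and its forward explicitly raises if the input is not 4-D. nn.BatchNorm1d itself accepts both (N, C) and (N, C, L) inputs, which suggests two workarounds. The sketch below assumes the SpikingJelly 0.0.0.0.14 activation_based API (layer.BatchNorm1d's step_mode argument and layer.SeqToANNContainer); the same behavior is reported above for AdaptiveAvgPool1d, and the same workarounds should apply there.

import torch
import torch.nn as nn
from spikingjelly.activation_based import layer

T, N, C = 3, 256, 20
x = torch.randn(T, N, C)

# Workaround 1: add a dummy L dimension so the input matches [T, N, C, L],
# then drop it again after the layer.
bn = layer.BatchNorm1d(C, step_mode='m')
y = bn(x.unsqueeze(-1)).squeeze(-1)  # [T, N, C] -> [T, N, C, 1] -> [T, N, C]
print(y.shape)  # torch.Size([3, 256, 20])

# Workaround 2: wrap the plain PyTorch module in SeqToANNContainer, which
# flattens [T, N, C] to [T * N, C]; nn.BatchNorm1d accepts (N, C) directly.
bn2 = layer.SeqToANNContainer(nn.BatchNorm1d(C))
y2 = bn2(x)
print(y2.shape)  # torch.Size([3, 256, 20])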


Zihao0 commented Jun 18, 2024

The problem has been solved. It would be even better if the authors were willing to explain it.
