Questions about the details of the dataset preprocessing #121
-
Hi, fangwei! I find it quite handy and useful to use `dvs128_gesture.DVS128Gesture` to load the DVS128 Gesture dataset as demonstrated in the SpikingJelly tutorial, which is terrific. But I notice that most works don't use the whole DVS videos as samples; instead, they split them into clips of a specific time range, as illustrated in your paper [Incorporating Learnable Membrane Time Constant to Enhance Learning of Spiking Neural Networks, ICCV 2021]: "Specifically, they illustrate that the time resolution is reduced by accumulating the spike train within every 5 ms and the time range (us) of N-MNIST and CIFAR10-DVS are [290901, 315348] and [1149758, 1459301], respectively." So, I wonder if you have done the same thing, because I didn't find any code in SpikingJelly 0.0.0.0.6 that applies this preprocessing, nor suitable interfaces.
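For context, the preprocessing the quoted paper describes (accumulating the spike train within every 5 ms) can be sketched as follows. This is an illustrative example, not SpikingJelly's own implementation; the function name and the per-sample array layout (`t`, `x`, `y`, `p` as 1-D NumPy arrays) are assumptions.

```python
import numpy as np

def events_to_frames(t, x, y, p, bin_us=5000, H=128, W=128):
    """Accumulate an event stream into frames, one frame per `bin_us`
    microseconds (5 ms by default, as in the quoted paper).
    t: timestamps in us; x, y: pixel coordinates; p: polarity (0/1)."""
    t = t - t[0]                        # start the clock at the first event
    n_bins = int(t[-1] // bin_us) + 1   # number of 5 ms slices in this sample
    frames = np.zeros((n_bins, 2, H, W), dtype=np.float32)
    idx = (t // bin_us).astype(int)     # which time slice each event falls into
    np.add.at(frames, (idx, p, y, x), 1)  # count spikes per slice/polarity/pixel
    return frames
```

Note that `np.add.at` is used instead of plain fancy-indexed `+=` so that repeated events at the same pixel within a slice are all counted.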
Replies: 4 comments 4 replies
-
There are some bugs in the API docs. I will fix them next week.
-
In fact, I think you mean that we can add a mask.
-
SpikingJelly now supports user-defined integrating methods: 158235e
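To illustrate what a user-defined integrating method might look like, here is a minimal sketch that splits each sample's events into two halves by index and accumulates each half into one frame. The function name, the `events` dict keys (`'t'`, `'x'`, `'y'`, `'p'`), and the `(H, W)` sensor-size arguments are assumptions for illustration; the exact signature SpikingJelly expects is in the commit linked above.

```python
import numpy as np

def integrate_events_to_two_frames(events, H, W):
    """Illustrative custom integrating method: split a sample's events
    in half by index and accumulate each half into one frame.
    `events` is assumed to be a dict of 1-D arrays keyed by 't', 'x', 'y', 'p'."""
    x, y, p = events['x'], events['y'], events['p']
    n = x.shape[0]
    frames = np.zeros((2, 2, H, W), dtype=np.float32)
    for i, (lo, hi) in enumerate(((0, n // 2), (n // 2, n))):
        # count spikes per polarity/pixel within this half of the event stream
        np.add.at(frames[i], (p[lo:hi], y[lo:hi], x[lo:hi]), 1)
    return frames
```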
In fact,
"the time range (us) of N-MNIST and CIFAR10-DVS are [290901, 315348] and [1149758, 1459301]"
means the time range of each sample in N-MNIST and CIFAR10-DVS: the shortest sample in N-MNIST lasts 290901 us (t[-1] - t[0] = 290901), and the longest sample in N-MNIST lasts 315348 us. It does not mean selecting 290901 < t …