Add support for CNN and attention-lstm? #28
Comments
I am not sure what an attention-lstm is; could you send a link? I need to implement/find a good implementation of convolution to use. (I have written it myself before, but it was very slow.)
The Long Short Term Memory Fully Convolutional Network (LSTM-FCN) and Attention LSTM-FCN (ALSTM-FCN): https://www.sciencedirect.com/science/article/pii/S0893608019301200?via%3Dihub
Added CNN as of: It is very slow, and the user must calculate the output shape themselves. You can modify the example with this to test:
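(The original test snippet is not preserved in the thread. As a reference for "the user must calculate the output shape themselves", here is a minimal sketch of the standard valid-convolution output-size formula; it is not part of BlackCat Tensors, just an illustration.)

```cpp
#include <cstdio>

// out = (in - kernel + 2*padding) / stride + 1
int conv_output_size(int in, int kernel, int stride = 1, int padding = 0) {
    return (in - kernel + 2 * padding) / stride + 1;
}

int main() {
    // Example: 28x28 input, 5x5 kernel, stride 1, no padding -> 24x24 output
    std::printf("%d\n", conv_output_size(28, 5));  // prints 24
    return 0;
}
```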
TODO: Improve the implementation of CNN (most likely switch to an im2col implementation).
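(For context, im2col rewrites each kernel-sized patch of the input as a column, so the convolution becomes a single matrix multiply. A rough single-channel sketch, assuming stride 1 and no padding; this is not the library's implementation.)

```cpp
#include <vector>

// Returns a (k_h*k_w) x (out_h*out_w) matrix in row-major order,
// where each column holds one kernel-sized patch of the image.
std::vector<float> im2col(const std::vector<float>& img,
                          int height, int width,
                          int k_h, int k_w) {
    int out_h = height - k_h + 1;
    int out_w = width  - k_w + 1;
    std::vector<float> cols(static_cast<size_t>(k_h) * k_w * out_h * out_w);
    for (int y = 0; y < out_h; ++y)
        for (int x = 0; x < out_w; ++x)
            for (int ky = 0; ky < k_h; ++ky)
                for (int kx = 0; kx < k_w; ++kx) {
                    int row = ky * k_w + kx;   // index within the patch
                    int col = y * out_w + x;   // output pixel index
                    cols[row * (out_h * out_w) + col] =
                        img[(y + ky) * width + (x + kx)];
                }
    return cols;
}
```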
That's great, thank you. I am really looking forward to it.
I have not yet implemented max-pooling. I want to try to optimize CNN first, but max-pooling isn't particularly difficult to implement, so I can see if I can do that quickly.
Yes, I can work on that soon. For Convolution and Maxpooling I may borrow Caffe's implementation.
You can see that I have started working on max-pooling here: https://github.com/josephjaspers/blackcat_tensors/blob/master/include/neural_networks/functions/Max_Pooling.h
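(For anyone following along, the idea is straightforward; below is a rough 2x2 max-pooling forward pass over a single-channel, row-major image. It is only an illustration, not the code from Max_Pooling.h.)

```cpp
#include <algorithm>
#include <vector>

// 2x2 max-pooling with stride 2; odd trailing rows/columns are dropped.
std::vector<float> max_pool_2x2(const std::vector<float>& img,
                                int height, int width) {
    int out_h = height / 2, out_w = width / 2;
    std::vector<float> out(static_cast<size_t>(out_h) * out_w);
    for (int y = 0; y < out_h; ++y)
        for (int x = 0; x < out_w; ++x) {
            const float* p = &img[(2 * y) * width + 2 * x];
            out[y * out_w + x] = std::max(std::max(p[0], p[1]),
                                          std::max(p[width], p[width + 1]));
        }
    return out;
}
```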
Great, thank you. I had already seen Max_Pooling.h, which is why I asked. I think we should first focus on the LSTM single_predict (same input and output), and then implement Convolution and Maxpooling. Looking forward to it.
I saw a paper where that network structure was effective for my forecasting task. With only an LSTM layer the forecast results are not very good, so we should try another network structure or the attention-LSTM. But that can be the next thing to implement and test.
The order of things I will work on is:
Optimizing Convolution (it is too slow)
MaxPooling
AttentionLSTM
I have found Caffe's implementation of Convolution/MaxPooling, so I will most likely be importing their implementation into this project.
Thank you very much! But I hope you can first debug the single_predict function; how is it going now? It has been four days with no update.
I am working on improving Convolution; hopefully I will have it finished by today or tomorrow.
Hi, I just added a new version of Convolution. (It still needs testing and does not currently support single-predict.) However, it should be much faster than the current version. Then...
I also just fixed a bug where set_learning_rate wouldn't actually set the learning_rate of the Layer, so re-running may improve performance.
That's great. I am glad to see the update. I will test the new code right away, so let's proceed with the work as you planned.
Yes, it is better in performance, but it needs to train for more epochs:
Eventually I would like to add optimizers (like momentum and Adam), though I haven't begun to work on them yet.
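(For reference, the Adam update mentioned above looks roughly like the sketch below. Nothing like this exists in the library yet; the struct name and layout are made up for illustration.)

```cpp
#include <cmath>
#include <vector>

// Per-parameter Adam update: w are weights, g is the current gradient.
struct Adam {
    float lr = 0.001f, beta1 = 0.9f, beta2 = 0.999f, eps = 1e-8f;
    std::vector<float> m, v;   // first and second moment estimates
    int t = 0;                 // time step

    void step(std::vector<float>& w, const std::vector<float>& g) {
        if (m.empty()) { m.assign(w.size(), 0.f); v.assign(w.size(), 0.f); }
        ++t;
        for (size_t i = 0; i < w.size(); ++i) {
            m[i] = beta1 * m[i] + (1 - beta1) * g[i];
            v[i] = beta2 * v[i] + (1 - beta2) * g[i] * g[i];
            float m_hat = m[i] / (1 - std::pow(beta1, t));  // bias correction
            float v_hat = v[i] / (1 - std::pow(beta2, t));
            w[i] -= lr * m_hat / (std::sqrt(v_hat) + eps);
        }
    }
};
```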
I am glad to see your reply! I am really looking forward to this. I want to use the network in a practical setting, so please speed things up when you are not busy, including single_predict and the other functions mentioned above.
Convolution (the experimental version) is now the standard version.
Maxpooling branch. (Not complete)
Convolution and maxpooling have been added!
Attention-LSTM is better than LSTM, and CNN is needed so the forecast can combine a CNN with the attention-LSTM network.