failure while creating Trainer #4
Not sure this is your issue, but the first thing I noticed is that there is actually a special reduce-sum operation for sequences. As for your question on the learners, the R API should be able to do anything Python can. For whatever reason these didn't show up in the documentation (could you look into that, @akzaidi?), but all the same learners, including the ones you mentioned, are available. Until we get those showing up in the docs you can reference learners.R.
Hi,
Error in py_call_impl(callable, dots$args, dots$keywords) :
Detailed traceback:
Could you post the corresponding Python you're porting from? That may be helpful for spotting what's going wrong, since these R operations just call through to the Python interface.
Sure. I attach a simple Python program that I am trying to reproduce in R. |
I wanted to add one thing: each element of trainInputs_l is an array of size (<length_of_a_particular_time_series>, INPUT_SIZE), and each element of trainOutputs_l is an array of size (<length_of_this_particular_time_series>, OUTPUT_SIZE). The first dimension changes with every series.
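For readers following along, the data layout described above can be sketched in plain R. The concrete sizes and series lengths below are made up for illustration; only the list-of-matrices shape mirrors the description:

```r
INPUT_SIZE  <- 4   # illustrative values, not from the original post
OUTPUT_SIZE <- 2

# One matrix per series: the number of rows (time steps) varies
# from series to series, while the number of columns is fixed.
series_lengths <- c(10, 7, 23)
trainInputs_l  <- lapply(series_lengths, function(n)
  matrix(rnorm(n * INPUT_SIZE), nrow = n, ncol = INPUT_SIZE))
trainOutputs_l <- lapply(series_lengths, function(n)
  matrix(rnorm(n * OUTPUT_SIZE), nrow = n, ncol = OUTPUT_SIZE))

sapply(trainInputs_l, nrow)   # first dimension varies per series
sapply(trainInputs_l, ncol)   # second dimension is always INPUT_SIZE
```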
Hi,
I am trying to port my Python program for time series forecasting (which works in Python) to R. It is a regression-style setup where both the input and the output are vectors. There are many time series of varying lengths; every point of a series is preprocessed into an input vector (of INPUT_SIZE), and at each point we also have a vector forecast (of OUTPUT_SIZE).
I am using stacked LSTMs, but to simplify matters, only a single one here, created with this function:
createLSTMnet <- function(input_var, hidden_layer_dim1, output_dim) {
  r <- Recurrence(LSTM(hidden_layer_dim1,
                       use_peepholes = LSTM_USE_PEEPHOLES,
                       enable_self_stabilization = LSTM_USE_STABILIZATION,
                       name = "firstCell"))(input_var)
  r <- Dense(output_dim, bias = TRUE, name = "lastLayer")(r)
  r  # return the network explicitly
}
I also use a custom loss function:
sMAPELoss <- function(z, t) {
  # Where t < -0.9, substitute z for the target, so that point
  # contributes 0 to the numerator of the sum below.
  t1 <- op_element_select(op_less(t, -0.9), z, t, name = "t1")
  a  <- op_abs(op_minus(t1, z))
  b  <- op_plus(op_abs(t1), op_abs(z))
  op_reduce_sum(op_element_divide(a, b))
}
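As a numerical cross-check of the loss, here is the same computation in plain base R (no CNTK graph ops); the function name and the sample inputs are illustrative. It can be handy for verifying that the graph version produces the expected values on small vectors:

```r
# Plain-R equivalent of the sMAPE-style loss above:
# mask targets below -0.9 by the prediction, then sum(|t - z| / (|t| + |z|)).
smape_loss <- function(z, t) {
  t1 <- ifelse(t < -0.9, z, t)   # op_element_select(op_less(t, -0.9), z, t)
  a  <- abs(t1 - z)              # op_abs(op_minus(t1, z))
  b  <- abs(t1) + abs(z)         # op_plus(op_abs(t1), op_abs(z))
  sum(a / b)                     # op_reduce_sum(op_element_divide(a, b))
}

smape_loss(c(1, 2, 3), c(1.5, 2, 4))   # equals 0.5/2.5 + 0/4 + 1/7
smape_loss(1, -5)                      # masked target contributes 0
```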
The most important of the remaining lines of code:
input_var <- seq_input_variable(INPUT_SIZE, name = "input")
output_var <- seq_input_variable(OUTPUT_SIZE, name = "label")
And the last line fails with:
Error in py_call_impl(callable, dots$args, dots$keywords) :
TypeError: argument label's type Sequence[Tensor[2]] is incompatible with the type Sequence[np.float32] of the passed Variable
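Without seeing the full Trainer call it is hard to be certain, but errors of this shape usually indicate a mismatch between the shape declared on a variable and the shape of the data (or of another variable) bound to it: Sequence[Tensor[2]] is a sequence of length-2 vectors, while Sequence[np.float32] is a sequence of scalars. One quick sanity check on the R side (variable names assumed from the earlier comment) is that every label element really is a matrix with OUTPUT_SIZE columns, since a plain vector with no dim attribute may be interpreted as a scalar sequence:

```r
OUTPUT_SIZE <- 2   # illustrative; use the value from your script

# Hypothetical label data for the check; in practice use your trainOutputs_l.
trainOutputs_l <- list(matrix(0, nrow = 10, ncol = OUTPUT_SIZE),
                       matrix(0, nrow = 7,  ncol = OUTPUT_SIZE))

# Every element should be a matrix whose column count is OUTPUT_SIZE;
# a bare vector (is.matrix == FALSE) can surface as Sequence[np.float32].
ok <- vapply(trainOutputs_l,
             function(m) is.matrix(m) && ncol(m) == OUTPUT_SIZE,
             logical(1))
stopifnot(all(ok))
```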
Could some kind soul help me please?
BTW, there is only the SGD learner available, right? No Adam, Adagrad, etc.
Any plans for bringing the R functionality closer to Python's?
Regards,
Slawek