diff --git a/dev/.documenter-siteinfo.json b/dev/.documenter-siteinfo.json index 024b966a..2c631588 100644 --- a/dev/.documenter-siteinfo.json +++ b/dev/.documenter-siteinfo.json @@ -1 +1 @@ -{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2025-01-11T09:43:31","documenter_version":"1.8.0"}} \ No newline at end of file +{"documenter":{"julia_version":"1.11.2","generation_timestamp":"2025-01-13T13:56:40","documenter_version":"1.8.0"}} \ No newline at end of file diff --git a/dev/api/esn/index.html b/dev/api/esn/index.html index c4e1a3f5..ecfa0052 100644 --- a/dev/api/esn/index.html +++ b/dev/api/esn/index.html @@ -13,7 +13,7 @@ 0.987182 0.898593 0.295241 0.233098 0.789699 0.453692 0.759205 julia> esn = ESN(train_data, 10, 300; washout=10) -ESN(10 => 300)source

Training

To train an ESN model, you can use the train function. It takes the ESN model, the target training data, and other optional parameters as input and returns a trained output layer. Here's the documentation for the train function:

ReservoirComputing.trainFunction
train(esn::AbstractEchoStateNetwork, target_data, training_method = StandardRidge(0.0))

Trains an Echo State Network (ESN) using the provided target data and a specified training method.

Parameters

  • esn::AbstractEchoStateNetwork: The ESN instance to be trained.
  • target_data: Supervised training data for the ESN.
  • training_method: The method for training the ESN (default: StandardRidge(0.0)).

Example

julia> train_data = rand(Float32, 10, 100)  # 10 features, 100 time steps
+ESN(10 => 300)
source

Training

To train an ESN model, you can use the train function. It takes the ESN model, the target training data, and other optional parameters as input and returns a trained output layer. Here's the documentation for the train function:

ReservoirComputing.trainFunction
train(esn::AbstractEchoStateNetwork, target_data, training_method = StandardRidge(0.0))

Trains an Echo State Network (ESN) using the provided target data and a specified training method.

Parameters

  • esn::AbstractEchoStateNetwork: The ESN instance to be trained.
  • target_data: Supervised training data for the ESN.
  • training_method: The method for training the ESN (default: StandardRidge(0.0)).

Example

julia> train_data = rand(Float32, 10, 100)  # 10 features, 100 time steps
 10×100 Matrix{Float32}:
  0.11437   0.425367  0.585867   0.34078   …  0.0531493  0.761425  0.883164
  0.301373  0.497806  0.279603   0.802417     0.49873    0.270156  0.333333
@@ -30,4 +30,4 @@
 ESN(10 => 300)
 
 julia> output_layer = train(esn, rand(Float32, 3, 90))
-OutputLayer successfully trained with output size: 3
source

With these components and variations, you can configure and train ESN models for various time series and sequential data prediction tasks.

+OutputLayer successfully trained with output size: 3source

With these components and variations, you can configure and train ESN models for various time series and sequential data prediction tasks.
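For orientation, here is a minimal end-to-end sketch of that workflow. The data is random and purely illustrative, and the sizes, washout, and prediction length are arbitrary assumptions:

using ReservoirComputing

data = rand(Float32, 3, 101)               # 3 features, 101 time steps
input_data = data[:, 1:100]                # u(t)
target_data = data[:, 12:101]              # u(t+1), aligned with washout=10

esn = ESN(input_data, 3, 300; washout=10)  # 300-neuron reservoir
output_layer = train(esn, target_data)     # ridge regression by default
prediction = esn(Generative(50), output_layer)  # 50 autonomous steps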

diff --git a/dev/api/esn_drivers/index.html b/dev/api/esn_drivers/index.html index 48c71ebb..48ce6245 100644 --- a/dev/api/esn_drivers/index.html +++ b/dev/api/esn_drivers/index.html @@ -1,9 +1,9 @@ ESN Drivers · ReservoirComputing.jl

ESN Drivers

ReservoirComputing.RNNType
RNN(activation_function, leaky_coefficient)
-RNN(;activation_function=tanh, leaky_coefficient=1.0)

Returns a Recurrent Neural Network (RNN) initializer for echo state networks (ESN).

Arguments

  • activation_function: The activation function used in the RNN.
  • leaky_coefficient: The leaky coefficient used in the RNN.

Keyword Arguments

  • activation_function: The activation function used in the RNN. Defaults to tanh_fast.
  • leaky_coefficient: The leaky coefficient used in the RNN. Defaults to 1.0.
source
ReservoirComputing.MRNNType
MRNN(activation_function, leaky_coefficient, scaling_factor)
+RNN(;activation_function=tanh, leaky_coefficient=1.0)

Returns a Recurrent Neural Network (RNN) initializer for echo state networks (ESN).

Arguments

  • activation_function: The activation function used in the RNN.
  • leaky_coefficient: The leaky coefficient used in the RNN.

Keyword Arguments

  • activation_function: The activation function used in the RNN. Defaults to tanh_fast.
  • leaky_coefficient: The leaky coefficient used in the RNN. Defaults to 1.0.
source
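As a short, hedged sketch of how this driver might be plugged into an ESN (data and sizes are illustrative assumptions):

using ReservoirComputing

train_data = rand(Float32, 1, 200)   # 1 feature, 200 time steps
esn = ESN(train_data, 1, 100;
    reservoir_driver=RNN(; activation_function=tanh, leaky_coefficient=0.9))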
ReservoirComputing.MRNNType
MRNN(activation_function, leaky_coefficient, scaling_factor)
 MRNN(;activation_function=[tanh, sigmoid], leaky_coefficient=1.0,
-    scaling_factor=fill(leaky_coefficient, length(activation_function)))

Returns a Multiple RNN (MRNN) initializer for the Echo State Network (ESN), introduced in [Lun2015].

Arguments

  • activation_function: A vector of activation functions used in the MRNN.
  • leaky_coefficient: The leaky coefficient used in the MRNN.
  • scaling_factor: A vector of scaling factors for combining activation functions.

Keyword Arguments

  • activation_function: A vector of activation functions used in the MRNN. Defaults to [tanh, sigmoid].
  • leaky_coefficient: The leaky coefficient used in the MRNN. Defaults to 1.0.
  • scaling_factor: A vector of scaling factors for combining activation functions. Defaults to an array of the same size as activation_function with all elements set to leaky_coefficient.

This function creates an MRNN object with the specified activation functions, leaky coefficient, and scaling factors, which can be used as a reservoir driver in the ESN.

source
ReservoirComputing.GRUType
GRU(;activation_function=[NNlib.sigmoid, NNlib.sigmoid, tanh],
+    scaling_factor=fill(leaky_coefficient, length(activation_function)))

Returns a Multiple RNN (MRNN) initializer for the Echo State Network (ESN), introduced in [Lun2015].

Arguments

  • activation_function: A vector of activation functions used in the MRNN.
  • leaky_coefficient: The leaky coefficient used in the MRNN.
  • scaling_factor: A vector of scaling factors for combining activation functions.

Keyword Arguments

  • activation_function: A vector of activation functions used in the MRNN. Defaults to [tanh, sigmoid].
  • leaky_coefficient: The leaky coefficient used in the MRNN. Defaults to 1.0.
  • scaling_factor: A vector of scaling factors for combining activation functions. Defaults to an array of the same size as activation_function with all elements set to leaky_coefficient.

This function creates an MRNN object with the specified activation functions, leaky coefficient, and scaling factors, which can be used as a reservoir driver in the ESN.

source
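A hedged sketch of constructing an MRNN driver and passing it to an ESN (the activation mix, scaling factors, data, and sizes are illustrative assumptions; sigmoid comes from NNlib):

using ReservoirComputing, NNlib

mrnn = MRNN(; activation_function=[tanh, sigmoid],
    leaky_coefficient=1.0,
    scaling_factor=[0.8, 0.2])   # weights for combining the two activations

train_data = rand(Float32, 1, 200)
esn = ESN(train_data, 1, 100; reservoir_driver=mrnn)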
ReservoirComputing.GRUType
GRU(;activation_function=[NNlib.sigmoid, NNlib.sigmoid, tanh],
     inner_layer = fill(DenseLayer(), 2),
     reservoir = fill(RandSparseReservoir(), 2),
     bias = fill(DenseLayer(), 2),
-    variant = FullyGated())

Returns a Gated Recurrent Unit (GRU) reservoir driver for Echo State Network (ESN). This driver is based on the GRU architecture [Cho2014].

Arguments

  • activation_function: An array of activation functions for the GRU layers. By default, it uses sigmoid activation functions for the update gate, reset gate, and tanh for the hidden state.
  • inner_layer: An array of inner layers used in the GRU architecture. By default, it uses two dense layers.
  • reservoir: An array of reservoir layers. By default, it uses two random sparse reservoirs.
  • bias: An array of bias layers for the GRU. By default, it uses two dense layers.
  • variant: The GRU variant to use. By default, it uses the "FullyGated" variant.
source

The GRU driver also lets the user choose among the following variants:

ReservoirComputing.FullyGatedType
FullyGated()

Returns a Fully Gated Recurrent Unit (FullyGated) initializer for the Echo State Network (ESN).

Returns the standard gated recurrent unit [Cho2014] as a driver for the echo state network (ESN).

source

Please refer to the original papers for more detail about these architectures.

  • Lun2015Lun, Shu-Xian, et al. "A novel model of leaky integrator echo state network for time-series prediction." Neurocomputing 159 (2015): 58-66.
  • Cho2014Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).
  • Cho2014Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014).
  • Zhou2016Zhou, Guo-Bing, et al. "Minimal gated unit for recurrent neural networks." International Journal of Automation and Computing 13.3 (2016): 226-234.
+ variant = FullyGated())

Returns a Gated Recurrent Unit (GRU) reservoir driver for Echo State Network (ESN). This driver is based on the GRU architecture [Cho2014].

Arguments

source

The GRU driver also lets the user choose among the following variants:

ReservoirComputing.FullyGatedType
FullyGated()

Returns a Fully Gated Recurrent Unit (FullyGated) initializer for the Echo State Network (ESN).

Returns the standard gated recurrent unit [Cho2014] as a driver for the echo state network (ESN).

source
ReservoirComputing.MinimalType
Minimal()

Returns a minimal GRU ESN initializer as described in [Zhou2016].

source
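A hedged sketch of selecting a GRU variant through the driver, following the pattern used in the drivers tutorial (data, sizes, and the inner reservoir initializers are illustrative assumptions):

using ReservoirComputing

train_data = rand(Float32, 1, 300)

gru = GRU(; variant=FullyGated(),
    reservoir=[rand_sparse(; radius=1.0, sparsity=0.5),
               rand_sparse(; radius=1.2, sparsity=0.1)])

esn = ESN(train_data, 1, 100; reservoir_driver=gru)

Swapping variant=FullyGated() for variant=Minimal() selects the minimal gated unit; the other keyword arguments may need to be adapted to that variant.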

Please refer to the original papers for more detail about these architectures.

diff --git a/dev/api/esn_variations/index.html b/dev/api/esn_variations/index.html index aa5cc865..93ce279a 100644 --- a/dev/api/esn_variations/index.html +++ b/dev/api/esn_variations/index.html @@ -2,4 +2,4 @@ ESN Variations · ReservoirComputing.jl

Echo State Networks variations

Deep ESN

ReservoirComputing.DeepESNType
DeepESN(train_data, in_size, res_size; kwargs...)

Constructs a Deep Echo State Network (ESN) model for processing sequential data through a layered architecture of reservoirs. This constructor allows for the creation of a deep learning model that benefits from the dynamic memory and temporal processing capabilities of ESNs, enhanced by the depth provided by multiple reservoir layers.

Parameters

  • train_data: The training dataset used for the ESN. This should be structured as sequential data where applicable.
  • in_size: The size of the input layer, i.e., the number of input units to the ESN.
  • res_size: The size of each reservoir, i.e., the number of neurons in each hidden layer of the ESN.

Optional Keyword Arguments

  • depth: The number of reservoir layers in the Deep ESN. Default is 2.
  • input_layer: A function or an array of functions to initialize the input matrices for each layer. Default is scaled_rand for each layer.
  • bias: A function or an array of functions to initialize the bias vectors for each layer. Default is zeros32 for each layer.
  • reservoir: A function or an array of functions to initialize the reservoir matrices for each layer. Default is rand_sparse for each layer.
  • reservoir_driver: The driving system for the reservoir. Default is an RNN model.
  • nla_type: The type of non-linear activation used in the reservoir. Default is NLADefault().
  • states_type: Defines the type of states used in the ESN (e.g., standard states). Default is StandardStates().
  • washout: The number of initial timesteps to be discarded in the ESN's training phase. Default is 0.
  • rng: Random number generator used for initializing weights. Default is Utils.default_rng().
  • matrix_type: The type of matrix used for storing the training data. Default is inferred from train_data.

Example

train_data = rand(Float32, 3, 100)
 
 # Create a DeepESN with specific parameters
-deepESN = DeepESN(train_data, 3, 100; depth=3, washout=100)
source

Hybrid ESN

ReservoirComputing.HybridESNType
HybridESN(model, train_data, in_size, res_size; kwargs...)

Construct a Hybrid Echo State Network (ESN) model that integrates traditional Echo State Networks with a predefined knowledge model [Pathak2018].

Parameters

  • model: A KnowledgeModel instance representing the knowledge-based model to be integrated with the ESN.
  • train_data: The training dataset used for the ESN. This data can be preprocessed or raw data depending on the nature of the problem and the preprocessing steps considered.
  • in_size: The size of the input layer, i.e., the number of input units to the ESN.
  • res_size: The size of the reservoir, i.e., the number of neurons in the hidden layer of the ESN.

Optional Keyword Arguments

  • input_layer: A function to initialize the input matrix. Default is scaled_rand.
  • reservoir: A function to initialize the reservoir matrix. Default is rand_sparse.
  • bias: A function to initialize the bias vector. Default is zeros32.
  • reservoir_driver: The driving system for the reservoir. Default is an RNN model.
  • nla_type: The type of non-linear activation used in the reservoir. Default is NLADefault().
  • states_type: Defines the type of states used in the ESN. Default is StandardStates().
  • washout: The number of initial timesteps to be discarded in the ESN's training phase. Default is 0.
  • rng: Random number generator used for initializing weights. Default is Utils.default_rng().
  • T: The data type for the matrices (e.g., Float32).
  • matrix_type: The type of matrix used for storing the training data. Default is inferred from train_data.
source
ReservoirComputing.KnowledgeModelType

KnowledgeModel(prior_model, u0, tspan, datasize)

Constructs a Hybrid variation of Echo State Networks (ESNs) [Pathak2018] integrating a knowledge-based model (prior_model) with ESNs.

Parameters

  • prior_model: A knowledge-based model function for integration with ESNs.
  • u0: Initial conditions for the model.
  • tspan: Time span as a tuple, indicating the duration for model operation.
  • datasize: The size of the data to be processed.
source
  • Pathak2018Jaideep Pathak et al. "Hybrid Forecasting of Chaotic Processes: Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).
  • Pathak2018Jaideep Pathak et al. "Hybrid Forecasting of Chaotic Processes: Using Machine Learning in Conjunction with a Knowledge-Based Model" (2018).
+deepESN = DeepESN(train_data, 3, 100; depth=3, washout=100)source
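Training and prediction then follow the usual ESN workflow; a hedged sketch (random data, arbitrary sizes and washout):

using ReservoirComputing

train_data = rand(Float32, 3, 100)
deep_esn = DeepESN(train_data, 3, 100; depth=3, washout=10)

target_data = rand(Float32, 3, 90)          # 100 steps minus washout=10
output_layer = train(deep_esn, target_data)
prediction = deep_esn(Generative(20), output_layer)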

Hybrid ESN

ReservoirComputing.HybridESNType
HybridESN(model, train_data, in_size, res_size; kwargs...)

Construct a Hybrid Echo State Network (ESN) model that integrates traditional Echo State Networks with a predefined knowledge model [Pathak2018].

Parameters

  • model: A KnowledgeModel instance representing the knowledge-based model to be integrated with the ESN.
  • train_data: The training dataset used for the ESN. This data can be preprocessed or raw data depending on the nature of the problem and the preprocessing steps considered.
  • in_size: The size of the input layer, i.e., the number of input units to the ESN.
  • res_size: The size of the reservoir, i.e., the number of neurons in the hidden layer of the ESN.

Optional Keyword Arguments

  • input_layer: A function to initialize the input matrix. Default is scaled_rand.
  • reservoir: A function to initialize the reservoir matrix. Default is rand_sparse.
  • bias: A function to initialize the bias vector. Default is zeros32.
  • reservoir_driver: The driving system for the reservoir. Default is an RNN model.
  • nla_type: The type of non-linear activation used in the reservoir. Default is NLADefault().
  • states_type: Defines the type of states used in the ESN. Default is StandardStates().
  • washout: The number of initial timesteps to be discarded in the ESN's training phase. Default is 0.
  • rng: Random number generator used for initializing weights. Default is Utils.default_rng().
  • T: The data type for the matrices (e.g., Float32).
  • matrix_type: The type of matrix used for storing the training data. Default is inferred from train_data.
source
ReservoirComputing.KnowledgeModelType

KnowledgeModel(prior_model, u0, tspan, datasize)

Constructs a Hybrid variation of Echo State Networks (ESNs) [Pathak2018] integrating a knowledge-based model (prior_model) with ESNs.

Parameters

  • prior_model: A knowledge-based model function for integration with ESNs.
  • u0: Initial conditions for the model.
  • tspan: Time span as a tuple, indicating the duration for model operation.
  • datasize: The size of the data to be processed.
source
diff --git a/dev/api/inits/index.html b/dev/api/inits/index.html index 0fc742b5..8484ac33 100644 --- a/dev/api/inits/index.html +++ b/dev/api/inits/index.html @@ -9,7 +9,7 @@ -0.0184405 0.0567368 0.0190222 0.0944272 0.0679244 0.0148647 -0.0799005 -0.0891089 -0.0444782 - -0.0970182 0.0934286 0.03553source
ReservoirComputing.weighted_initFunction
weighted_init([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
+ -0.0970182   0.0934286   0.03553
source
ReservoirComputing.weighted_initFunction
weighted_init([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
     scaling=0.1)

Create and return a matrix representing a weighted input layer. This initializer generates a weighted input matrix with random non-zero elements distributed uniformly within the range [-scaling, scaling] [Lu2017].

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the matrix. Should follow res_size x in_size.
  • scaling: The scaling factor for the weight distribution. Defaults to 0.1.

Examples

julia> res_input = weighted_init(8, 3)
 6×3 Matrix{Float32}:
   0.0452399   0.0          0.0
@@ -17,8 +17,8 @@
   0.0        -0.0386004    0.0
   0.0         0.00981022   0.0
   0.0         0.0          0.0577838
-  0.0         0.0         -0.0562827
source
ReservoirComputing.informed_initFunction
informed_init([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
-    scaling=0.1, model_in_size, gamma=0.5)

Create an input layer for informed echo state networks [Pathak2018].

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the matrix. Should follow res_size x in_size.
  • scaling: The scaling factor for the input matrix. Default is 0.1.
  • model_in_size: The size of the input model.
  • gamma: The gamma value. Default is 0.5.

Examples

source
ReservoirComputing.minimal_initFunction
minimal_init([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
+  0.0         0.0         -0.0562827
source
ReservoirComputing.informed_initFunction
informed_init([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
+    scaling=0.1, model_in_size, gamma=0.5)

Create an input layer for informed echo state networks [Pathak2018].

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the matrix. Should follow res_size x in_size.
  • scaling: The scaling factor for the input matrix. Default is 0.1.
  • model_in_size: The size of the input model.
  • gamma: The gamma value. Default is 0.5.

Examples

source
ReservoirComputing.minimal_initFunction
minimal_init([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
     sampling_type=:bernoulli, weight=0.1, irrational=pi, start=1, p=0.5)

Create a layer matrix with uniform weights determined by weight. The sign of each weight is determined randomly, according to the chosen sampling method.

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the matrix. Should follow res_size x in_size.
  • weight: The weight used to fill the layer matrix. Default is 0.1.
  • sampling_type: The sampling parameters used to generate the input matrix. Default is :bernoulli.
  • irrational: Irrational number chosen for sampling if sampling_type=:irrational. Default is pi.
  • start: Starting value for the irrational sample. Default is 1.
  • p: Probability for the Bernoulli sampling. A lower probability yields more negative values; a higher probability yields more positive values. Default is 0.5.

Examples

julia> res_input = minimal_init(8, 3)
 8×3 Matrix{Float32}:
   0.1  -0.1   0.1
@@ -61,14 +61,14 @@
   0.1   0.1  0.1
   0.1  -0.1  0.1
  -0.1   0.1  0.1
-  0.1   0.1  0.1
source

Reservoirs

ReservoirComputing.rand_sparseFunction
rand_sparse([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
+  0.1   0.1  0.1
source

Reservoirs

ReservoirComputing.rand_sparseFunction
rand_sparse([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
     radius=1.0, sparsity=0.1, std=1.0)

Create and return a random sparse reservoir matrix. The matrix will have the size specified by dims, the requested sparsity, and a spectral radius scaled according to radius.

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the reservoir matrix.
  • radius: The desired spectral radius of the reservoir. Defaults to 1.0.
  • sparsity: The sparsity level of the reservoir matrix, controlling the fraction of zero elements. Defaults to 0.1.

Examples

julia> res_matrix = rand_sparse(5, 5; sparsity=0.5)
 5×5 Matrix{Float32}:
  0.0        0.0        0.0        0.0      0.0
  0.0        0.794565   0.0        0.26164  0.0
  0.0        0.0       -0.931294   0.0      0.553706
  0.723235  -0.524727   0.0        0.0      0.0
- 1.23723    0.0        0.181824  -1.5478   0.465328
source
ReservoirComputing.delay_lineFunction
delay_line([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
+ 1.23723    0.0        0.181824  -1.5478   0.465328
source
ReservoirComputing.delay_lineFunction
delay_line([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
     weight=0.1)

Create and return a delay line reservoir matrix [Rodan2010].

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the reservoir matrix.
  • weight: Determines the value of all connections in the reservoir. Default is 0.1.

Examples

julia> res_matrix = delay_line(5, 5)
 5×5 Matrix{Float32}:
  0.0  0.0  0.0  0.0  0.0
@@ -83,7 +83,7 @@
  1.0  0.0  0.0  0.0  0.0
  0.0  1.0  0.0  0.0  0.0
  0.0  0.0  1.0  0.0  0.0
- 0.0  0.0  0.0  1.0  0.0
source
ReservoirComputing.delay_line_backwardFunction
delay_line_backward([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
+ 0.0  0.0  0.0  1.0  0.0
source
ReservoirComputing.delay_line_backwardFunction
delay_line_backward([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...;
     weight = 0.1, fb_weight = 0.2)

Create a delay line backward reservoir with the size specified by dims and the given weights. Creates a matrix with backward connections as described in [Rodan2010].

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the reservoir matrix.
  • weight: Determines the absolute value of forward connections in the reservoir. Default is 0.1.
  • fb_weight: Determines the absolute value of backward connections in the reservoir. Default is 0.2.

Examples

julia> res_matrix = delay_line_backward(5, 5)
 5×5 Matrix{Float32}:
  0.0  0.2  0.0  0.0  0.0
@@ -98,7 +98,7 @@
  0.1  0.0  0.2  0.0  0.0
  0.0  0.1  0.0  0.2  0.0
  0.0  0.0  0.1  0.0  0.2
- 0.0  0.0  0.0  0.1  0.0
source
ReservoirComputing.cycle_jumpsFunction
cycle_jumps([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...; 
+ 0.0  0.0  0.0  0.1  0.0
source
ReservoirComputing.cycle_jumpsFunction
cycle_jumps([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...; 
     cycle_weight = 0.1, jump_weight = 0.1, jump_size = 3)

Create a cycle jumps reservoir with the specified dimensions, cycle weight, jump weight, and jump size.

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the reservoir matrix.
  • cycle_weight: The weight of cycle connections. Default is 0.1.
  • jump_weight: The weight of jump connections. Default is 0.1.
  • jump_size: The number of steps between jump connections. Default is 3.

Examples

julia> res_matrix = cycle_jumps(5, 5)
 5×5 Matrix{Float32}:
  0.0  0.0  0.0  0.1  0.1
@@ -113,7 +113,7 @@
  0.1  0.0  0.0  0.0  0.0
  0.1  0.1  0.0  0.0  0.1
  0.0  0.0  0.1  0.0  0.0
- 0.0  0.0  0.1  0.1  0.0
source
ReservoirComputing.simple_cycleFunction
simple_cycle([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...; 
+ 0.0  0.0  0.1  0.1  0.0
source
ReservoirComputing.simple_cycleFunction
simple_cycle([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...; 
     weight = 0.1)

Create a simple cycle reservoir with the specified dimensions and weight.

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the reservoir matrix.
  • weight: Weight of the connections in the reservoir matrix. Default is 0.1.

Examples

julia> res_matrix = simple_cycle(5, 5)
 5×5 Matrix{Float32}:
  0.0  0.0  0.0  0.0  0.1
@@ -128,11 +128,11 @@
  11.0   0.0   0.0   0.0   0.0
   0.0  11.0   0.0   0.0   0.0
   0.0   0.0  11.0   0.0   0.0
-  0.0   0.0   0.0  11.0   0.0
source
ReservoirComputing.pseudo_svdFunction
pseudo_svd([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...; 
+  0.0   0.0   0.0  11.0   0.0
source
ReservoirComputing.pseudo_svdFunction
pseudo_svd([rng::AbstractRNG=Utils.default_rng()], [T=Float32], dims...; 
     max_value=1.0, sparsity=0.1, sorted = true, reverse_sort = false)

Returns an initializer to build a sparse reservoir matrix with the given sparsity by using a pseudo-SVD approach as described in [yang].

Arguments

  • rng: Random number generator. Default is Utils.default_rng() from WeightInitializers.
  • T: Type of the elements in the reservoir matrix. Default is Float32.
  • dims: Dimensions of the reservoir matrix.
  • max_value: The maximum absolute value of elements in the matrix. Default is 1.0.
  • sparsity: The desired sparsity level of the reservoir matrix. Default is 0.1.
  • sorted: A boolean indicating whether to sort the singular values before creating the diagonal matrix. Default is true.
  • reverse_sort: A boolean indicating whether to reverse the sorted singular values. Default is false.

Examples

julia> res_matrix = pseudo_svd(5, 5)
 5×5 Matrix{Float32}:
  0.306998  0.0       0.0       0.0       0.0
  0.0       0.325977  0.0       0.0       0.0
  0.0       0.0       0.549051  0.0       0.0
  0.0       0.0       0.0       0.726199  0.0
- 0.0       0.0       0.0       0.0       1.0
source
+ 0.0 0.0 0.0 0.0 1.0source
diff --git a/dev/api/predict/index.html b/dev/api/predict/index.html index 58c2989e..f4cda665 100644 --- a/dev/api/predict/index.html +++ b/dev/api/predict/index.html @@ -1,2 +1,2 @@ -Prediction Types · ReservoirComputing.jl

Prediction Types

ReservoirComputing.GenerativeType
Generative(prediction_len)

A prediction strategy that enables models to generate autonomous multi-step forecasts by recursively feeding their own outputs back as inputs for subsequent prediction steps.

Parameters

  • prediction_len: The number of future steps to predict.

Description

The Generative prediction method allows a model to perform multi-step forecasting by using its own previous predictions as inputs for future predictions.

At each step, the model takes the current input, generates a prediction, and then incorporates that prediction into the input for the next step. This recursive process continues until the specified number of prediction steps (prediction_len) is reached.

source
ReservoirComputing.PredictiveType
Predictive(prediction_data)

A prediction strategy for supervised learning tasks, where a model predicts labels based on a provided set of input features (prediction_data).

Parameters

  • prediction_data: The input data used for prediction, feature x sample

Description

The Predictive prediction method uses the provided input data (prediction_data) to produce corresponding labels or outputs based on the learned relationships in the model.

source
+Prediction Types · ReservoirComputing.jl

Prediction Types

ReservoirComputing.GenerativeType
Generative(prediction_len)

A prediction strategy that enables models to generate autonomous multi-step forecasts by recursively feeding their own outputs back as inputs for subsequent prediction steps.

Parameters

  • prediction_len: The number of future steps to predict.

Description

The Generative prediction method allows a model to perform multi-step forecasting by using its own previous predictions as inputs for future predictions.

At each step, the model takes the current input, generates a prediction, and then incorporates that prediction into the input for the next step. This recursive process continues until the specified number of prediction steps (prediction_len) is reached.

source
ReservoirComputing.PredictiveType
Predictive(prediction_data)

A prediction strategy for supervised learning tasks, where a model predicts labels based on a provided set of input features (prediction_data).

Parameters

  • prediction_data: The input data used for prediction, feature x sample

Description

The Predictive prediction method uses the provided input data (prediction_data) to produce corresponding labels or outputs based on the learned relationships in the model.

source
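A hedged sketch contrasting the two strategies on an already trained model (esn and output_layer are assumed to come from an earlier ESN/train call with three input features; test_features is illustrative):

# autonomous forecasting: predictions are fed back as inputs for 100 steps
forecast = esn(Generative(100), output_layer)

# supervised mapping: one output column per provided input column
test_features = rand(Float32, 3, 50)
labels = esn(Predictive(test_features), output_layer)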
diff --git a/dev/api/reca/index.html b/dev/api/reca/index.html index cf732093..c4775e3f 100644 --- a/dev/api/reca/index.html +++ b/dev/api/reca/index.html @@ -4,6 +4,6 @@ generations = 8, input_encoding=RandomMapping(), nla_type = NLADefault(), - states_type = StandardStates())

[1] Yilmaz, Ozgur. “Reservoir computing using cellular automata.” arXiv preprint arXiv:1410.0162 (2014).

[2] Nichele, Stefano, and Andreas Molund. “Deep reservoir computing using cellular automata.” arXiv preprint arXiv:1703.02806 (2017).

source

The input encodings are the equivalent of the input matrices of the ESNs. These are the available encodings:

ReservoirComputing.RandomMappingType
RandomMapping(permutations, expansion_size)
+    states_type = StandardStates())

[1] Yilmaz, Ozgur. “Reservoir computing using cellular automata.” arXiv preprint arXiv:1410.0162 (2014).

[2] Nichele, Stefano, and Andreas Molund. “Deep reservoir computing using cellular automata.” arXiv preprint arXiv:1703.02806 (2017).

source

The input encodings are the equivalent of the input matrices of the ESNs. These are the available encodings:

ReservoirComputing.RandomMappingType
RandomMapping(permutations, expansion_size)
 RandomMapping(permutations; expansion_size=40)
-RandomMapping(;permutations=8, expansion_size=40)

Random mapping of the input data directly in the reservoir. The expansion_size determines the dimension of the single reservoir, and permutations determines the number of total reservoirs that will be connected, each with a different mapping. The detail of this implementation can be found in [1].

[1] Nichele, Stefano, and Andreas Molund. “Deep reservoir computing using cellular automata.” arXiv preprint arXiv:1703.02806 (2017).

source

Training and prediction follow the same workflow as the ESN. Note that we have been unable to find any papers using these models with a Generative prediction approach, so full support is currently given only to the Predictive method.

+RandomMapping(;permutations=8, expansion_size=40)

Random mapping of the input data directly in the reservoir. The expansion_size determines the dimension of the single reservoir, and permutations determines the number of total reservoirs that will be connected, each with a different mapping. The detail of this implementation can be found in [1].

[1] Nichele, Stefano, and Andreas Molund. “Deep reservoir computing using cellular automata.” arXiv preprint arXiv:1703.02806 (2017).

source

Training and prediction follow the same workflow as the ESN. Note that we have been unable to find any papers using these models with a Generative prediction approach, so full support is currently given only to the Predictive method.
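A hedged sketch of that workflow (the binary data is random and purely illustrative; DCA(90) is an elementary cellular automaton rule assumed to come from CellularAutomata.jl, and the encoding sizes are arbitrary assumptions):

using ReservoirComputing, CellularAutomata

input_data  = Float32.(rand(Bool, 4, 100))   # 4 binary features, 100 steps
target_data = Float32.(rand(Bool, 2, 100))

reca = RECA(input_data, DCA(90);
    generations=16,
    input_encoding=RandomMapping(16, 40))

output_layer = train(reca, target_data)
prediction = reca(Predictive(input_data), output_layer)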

diff --git a/dev/api/states/index.html b/dev/api/states/index.html index 6b72d95b..f91c513a 100644 --- a/dev/api/states/index.html +++ b/dev/api/states/index.html @@ -32,7 +32,7 @@ 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 - 0.0 0.0 0.0 0.0 0.0source
ReservoirComputing.ExtendedStatesType
ExtendedStates()

The ExtendedStates struct is used to extend the reservoir states by vertically concatenating the input data (during training) and the prediction data (during the prediction phase).

Example

julia> states = ExtendedStates()
+ 0.0  0.0  0.0  0.0  0.0
source
ReservoirComputing.ExtendedStatesType
ExtendedStates()

The ExtendedStates struct is used to extend the reservoir states by vertically concatenating the input data (during training) and the prediction data (during the prediction phase).

Example

julia> states = ExtendedStates()
 ExtendedStates()
 
 julia> test_vec = zeros(Float32, 5)
@@ -71,7 +71,7 @@
  0.0  0.0  0.0  0.0  0.0
  3.0  3.0  3.0  3.0  3.0
  3.0  3.0  3.0  3.0  3.0
- 3.0  3.0  3.0  3.0  3.0
source
ReservoirComputing.PaddedStatesType
PaddedStates(padding)
+ 3.0  3.0  3.0  3.0  3.0
source
ReservoirComputing.PaddedStatesType
PaddedStates(padding)
 PaddedStates(;padding=1.0)

Creates an instance of the PaddedStates struct with specified padding value (default 1.0). The states of the reservoir are padded by vertically concatenating the padding value.

Example

julia> states = PaddedStates(1.0)
 PaddedStates{Float64}(1.0)
 
@@ -107,7 +107,7 @@
  0.0  0.0  0.0  0.0  0.0
  0.0  0.0  0.0  0.0  0.0
  0.0  0.0  0.0  0.0  0.0
- 1.0  1.0  1.0  1.0  1.0
source
ReservoirComputing.PaddedExtendedStatesType
PaddedExtendedStates(padding)
+ 1.0  1.0  1.0  1.0  1.0
source
ReservoirComputing.PaddedExtendedStatesType
PaddedExtendedStates(padding)
 PaddedExtendedStates(;padding=1.0)

Constructs a PaddedExtendedStates struct, which first extends the reservoir states with training or prediction data, then pads them with a specified value (defaulting to 1.0).

Example

julia> states = PaddedExtendedStates(1.0)
 PaddedExtendedStates{Float64}(1.0)
 
@@ -149,7 +149,7 @@
  1.0  1.0  1.0  1.0  1.0
  3.0  3.0  3.0  3.0  3.0
  3.0  3.0  3.0  3.0  3.0
- 3.0  3.0  3.0  3.0  3.0
source

Non Linear Transformations

ReservoirComputing.NLADefaultType
NLADefault()

NLADefault represents the default non-linear algorithm option. When used, it leaves the input array unchanged.

Example

julia> nlat = NLADefault()
+ 3.0  3.0  3.0  3.0  3.0
source
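A hedged sketch of how a state modifier is selected when building a model (data and sizes are illustrative; the keyword name follows the ESN-family docstrings). The non-linear transformations documented below are chosen analogously through the nla_type keyword:

using ReservoirComputing

train_data = rand(Float32, 3, 100)
esn = ESN(train_data, 3, 200;
    states_type=PaddedExtendedStates(1.0),   # extend with the input, then pad with 1.0
    washout=10)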

Non Linear Transformations

ReservoirComputing.NLADefaultType
NLADefault()

NLADefault represents the default non-linear algorithm option. When used, it leaves the input array unchanged.

Example

julia> nlat = NLADefault()
 NLADefault()
 
 julia> x_old = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
@@ -202,7 +202,7 @@
  10  11  12
  13  14  15
  16  17  18
- 19  20  21
source
ReservoirComputing.NLAT1Type
NLAT1()

NLAT1 implements the T₁ transformation algorithm introduced in [Chattopadhyay] and [Pathak]. The T₁ algorithm squares elements of the input array, targeting every second row.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j} \times r_{i,j}, & \text{if } j \text{ is odd}; \\ r_{i,j}, & \text{if } j \text{ is even}. \end{cases}\]

source
ReservoirComputing.NLAT1Type
NLAT1()

NLAT1 implements the T₁ transformation algorithm introduced in [Chattopadhyay] and [Pathak]. The T₁ algorithm squares elements of the input array, targeting every second row.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j} \times r_{i,j}, & \text{if } j \text{ is odd}; \\ r_{i,j}, & \text{if } j \text{ is even}. \end{cases}\]

source
ReservoirComputing.NLAT2Type
NLAT2()

NLAT2 implements the T₂ transformation algorithm as defined in [Chattopadhyay]. This transformation algorithm modifies the reservoir states by multiplying each odd-indexed row (starting from the second row) with the product of its two preceding rows.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j-1} \times r_{i,j-2}, & \text{if } j > 1 \text{ is odd}; \\ r_{i,j}, & \text{if } j \text{ is 1 or even}. \end{cases}\]

source
ReservoirComputing.NLAT2Type
NLAT2()

NLAT2 implements the T₂ transformation algorithm as defined in [Chattopadhyay]. This transformation algorithm modifies the reservoir states by multiplying each odd-indexed row (starting from the second row) with the product of its two preceding rows.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j-1} \times r_{i,j-2}, & \text{if } j > 1 \text{ is odd}; \\ r_{i,j}, & \text{if } j \text{ is 1 or even}. \end{cases}\]

source
ReservoirComputing.NLAT3Type
NLAT3()

Implements the T₃ transformation algorithm as detailed in [Chattopadhyay]. This algorithm modifies the reservoir's states by multiplying each odd-indexed row (beginning from the second row) with the product of the immediately preceding and the immediately following rows.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j-1} \times r_{i,j+1}, & \text{if } j > 1 \text{ is odd}; \\ r_{i,j}, & \text{if } j = 1 \text{ or even.} \end{cases}\]

source
ReservoirComputing.NLAT3Type
NLAT3()

Implements the T₃ transformation algorithm as detailed in [Chattopadhyay]. This algorithm modifies the reservoir's states by multiplying each odd-indexed row (beginning from the second row) with the product of the immediately preceding and the immediately following rows.

\[\tilde{r}_{i,j} = \begin{cases} r_{i,j-1} \times r_{i,j+1}, & \text{if } j > 1 \text{ is odd}; \\ r_{i,j}, & \text{if } j = 1 \text{ or even.} \end{cases}\]

source

Internals

ReservoirComputing.create_statesFunction
create_states(reservoir_driver::AbstractReservoirDriver, train_data, washout,
-    reservoir_matrix, input_matrix, bias_vector)

Create and return the trained Echo State Network (ESN) states according to the specified reservoir driver.

Arguments

  • reservoir_driver: The reservoir driver that determines how the ESN states evolve over time.
  • train_data: The training data used to train the ESN.
  • washout: The number of initial time steps to discard during training to allow the reservoir dynamics to wash out the initial conditions.
  • reservoir_matrix: The reservoir matrix representing the dynamic, recurrent part of the ESN.
  • input_matrix: The input matrix that defines the connections between input features and reservoir nodes.
  • bias_vector: The bias vector to be added at each time step during the reservoir update.
source
+source

Internals

ReservoirComputing.create_statesFunction
create_states(reservoir_driver::AbstractReservoirDriver, train_data, washout,
+    reservoir_matrix, input_matrix, bias_vector)

Create and return the trained Echo State Network (ESN) states according to the specified reservoir driver.

Arguments

  • reservoir_driver: The reservoir driver that determines how the ESN states evolve over time.
  • train_data: The training data used to train the ESN.
  • washout: The number of initial time steps to discard during training to allow the reservoir dynamics to wash out the initial conditions.
  • reservoir_matrix: The reservoir matrix representing the dynamic, recurrent part of the ESN.
  • input_matrix: The input matrix that defines the connections between input features and reservoir nodes.
  • bias_vector: The bias vector to be added at each time step during the reservoir update.
source
diff --git a/dev/api/training/index.html b/dev/api/training/index.html index e320387f..7648fa86 100644 --- a/dev/api/training/index.html +++ b/dev/api/training/index.html @@ -1,3 +1,3 @@ Training Algorithms · ReservoirComputing.jl

Training Algorithms

Linear Models

ReservoirComputing.StandardRidgeType
StandardRidge([Type], [reg])

Returns a training method for train based on ridge regression. The equations for ridge regression are as follows:

\[\mathbf{w} = (\mathbf{X}^\top \mathbf{X} + \lambda \mathbf{I})^{-1} \mathbf{X}^\top \mathbf{y}\]

Arguments

  • Type: type of the regularization argument. Default is inferred internally; there's usually no need to tweak this.
  • reg: regularization coefficient. Default is set to 0.0 (linear regression).

```

source

Gaussian Regression

Gaussian regression is currently unavailable in v0.10.

Support Vector Regression

Support Vector Regression is available through a direct call to LIBSVM regression methods; no wrapper is provided. Please refer to the use of LIBSVM.AbstractSVR in the original library.

+\lambda \mathbf{I})^{-1} \mathbf{X}^\top \mathbf{y}\]

Arguments

```

source
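A brief sketch of how the training method is passed to train (esn and target_data are assumed to exist from an earlier setup; the regularization value is an arbitrary assumption):

# default: StandardRidge(0.0), i.e. plain linear regression
output_layer = train(esn, target_data)

# explicit ridge regularization
output_layer = train(esn, target_data, StandardRidge(0.1))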

Gaussian Regression

Gaussian regression is currently unavailable in v0.10.

Support Vector Regression

Support Vector Regression is available through a direct call to LIBSVM regression methods; no wrapper is provided. Please refer to the use of LIBSVM.AbstractSVR in the original library.

diff --git a/dev/assets/Manifest.toml b/dev/assets/Manifest.toml index d5b910e4..0caa96d2 100644 --- a/dev/assets/Manifest.toml +++ b/dev/assets/Manifest.toml @@ -26,14 +26,13 @@ uuid = "1520ce14-60c1-5f80-bbc7-55ef81b5835c" version = "0.4.5" [[deps.Accessors]] -deps = ["CompositionsBase", "ConstructionBase", "InverseFunctions", "MacroTools"] -git-tree-sha1 = "e93c42e833e6d4bd28be7b3b56d8deb99fd51f25" +deps = ["CompositionsBase", "ConstructionBase", "Dates", "InverseFunctions", "MacroTools"] +git-tree-sha1 = "0ba8f4c1f06707985ffb4804fdad1bf97b233897" uuid = "7d9f7c33-5ae7-4f3b-8dc6-eff91059b697" -version = "0.1.40" +version = "0.1.41" [deps.Accessors.extensions] AxisKeysExt = "AxisKeys" - DatesExt = "Dates" IntervalSetsExt = "IntervalSets" LinearAlgebraExt = "LinearAlgebra" StaticArraysExt = "StaticArrays" @@ -43,7 +42,6 @@ version = "0.1.40" [deps.Accessors.weakdeps] AxisKeys = "94b1ba4f-4ee9-5380-92f1-94cde586c3c5" - Dates = "ade2ca70-3891-5945-98fb-dc099432e06a" IntervalSets = "8197267c-284f-5f27-9208-e0e47529a953" LinearAlgebra = "37e2e46d-f89d-539d-b4ee-838fcccc9c8e" Requires = "ae029012-a4dd-5104-9daa-d747884805df" @@ -202,21 +200,21 @@ version = "1.3.0" [[deps.BoundaryValueDiffEqFIRK]] deps = ["ADTypes", "Adapt", "ArrayInterface", "BandedMatrices", "BoundaryValueDiffEqCore", "ConcreteStructs", "DiffEqBase", "FastAlmostBandedMatrices", "FastClosures", "ForwardDiff", "LinearAlgebra", "LinearSolve", "Logging", "PreallocationTools", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "Setfield", "SparseArrays", "SparseDiffTools"] -git-tree-sha1 = "6305d58ba2f53faec7bf44dfba5b591b524254b0" +git-tree-sha1 = "feee7d8530e65c0ac38cc81321348c0a92c66f91" uuid = "85d9eb09-370e-4000-bb32-543851f73618" -version = "1.2.0" +version = "1.3.0" [[deps.BoundaryValueDiffEqMIRK]] deps = ["ADTypes", "Adapt", "ArrayInterface", "BandedMatrices", "BoundaryValueDiffEqCore", "ConcreteStructs", "DiffEqBase", "FastAlmostBandedMatrices", "FastClosures", "ForwardDiff", "LinearAlgebra", "LinearSolve", "Logging", "PreallocationTools", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "Setfield", "SparseArrays", "SparseDiffTools"] -git-tree-sha1 = "ed6802d8a97a0847060d25261b7561da83a4f044" +git-tree-sha1 = "b642f3b968efa51abba41aaf29d2ad869631cf0c" uuid = "1a22d4ce-7765-49ea-b6f2-13c8438986a6" -version = "1.2.0" +version = "1.3.0" [[deps.BoundaryValueDiffEqMIRKN]] deps = ["ADTypes", "Adapt", "ArrayInterface", "BandedMatrices", "BoundaryValueDiffEqCore", "ConcreteStructs", "DiffEqBase", "FastAlmostBandedMatrices", "FastClosures", "ForwardDiff", "LineSearch", "LinearAlgebra", "LinearSolve", "Logging", "PreallocationTools", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "Setfield", "SparseArrays", "SparseDiffTools"] -git-tree-sha1 = "7ed2fe617a23d25db64ba31ae0dd226663d03bad" +git-tree-sha1 = "8cb77e53a8695cef049e5d0556e6936f5a7503d6" uuid = "9255f1d6-53bf-473e-b6bd-23f1ff009da4" -version = "1.0.0" +version = "1.1.0" [[deps.BoundaryValueDiffEqShooting]] deps = ["ADTypes", "Adapt", "ArrayInterface", "BandedMatrices", "BoundaryValueDiffEqCore", "ConcreteStructs", "DiffEqBase", "FastAlmostBandedMatrices", "FastClosures", "ForwardDiff", "LinearAlgebra", "LinearSolve", "Logging", "OrdinaryDiffEq", "PreallocationTools", "PrecompileTools", "Preferences", "RecursiveArrayTools", "Reexport", "SciMLBase", "Setfield", "SparseArrays", "SparseDiffTools"] @@ -1032,9 +1030,9 @@ version = "18.1.7+0" [[deps.LZO_jll]] deps = ["Artifacts", 
"JLLWrappers", "Libdl"] -git-tree-sha1 = "854a9c268c43b77b0a27f22d7fab8d33cdb3a731" +git-tree-sha1 = "1c602b1127f4751facb671441ca72715cc95938a" uuid = "dd4b983a-f0e5-5f8d-a1b7-129d4a5fb1ac" -version = "2.10.2+3" +version = "2.10.3+0" [[deps.LaTeXStrings]] git-tree-sha1 = "dda21b8cbd6a6c40d9d02a73230f9d70fed6918c" @@ -1152,15 +1150,15 @@ version = "1.51.1+0" [[deps.Libiconv_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] -git-tree-sha1 = "61dfdba58e585066d8bce214c5a51eaa0539f269" +git-tree-sha1 = "be484f5c92fad0bd8acfef35fe017900b0b73809" uuid = "94ce4f54-9a6c-5748-9c1c-f9c7231a4531" -version = "1.17.0+1" +version = "1.18.0+0" [[deps.Libmount_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] -git-tree-sha1 = "84eef7acd508ee5b3e956a2ae51b05024181dee0" +git-tree-sha1 = "89211ea35d9df5831fca5d33552c02bd33878419" uuid = "4b2f31a3-9ecc-558c-b454-b3730dcb73e9" -version = "2.40.2+2" +version = "2.40.3+0" [[deps.Libtiff_jll]] deps = ["Artifacts", "JLLWrappers", "JpegTurbo_jll", "LERC_jll", "Libdl", "XZ_jll", "Zlib_jll", "Zstd_jll"] @@ -1170,9 +1168,9 @@ version = "4.7.1+0" [[deps.Libuuid_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] -git-tree-sha1 = "edbf5309f9ddf1cab25afc344b1e8150b7c832f9" +git-tree-sha1 = "e888ad02ce716b319e6bdb985d2ef300e7089889" uuid = "38a345b3-de98-5d2b-a5d3-14cd9215e700" -version = "2.40.2+2" +version = "2.40.3+0" [[deps.LineSearch]] deps = ["ADTypes", "CommonSolve", "ConcreteStructs", "FastClosures", "LinearAlgebra", "MaybeInplace", "SciMLBase", "SciMLJacobianOperators", "StaticArraysCore"] @@ -2021,7 +2019,7 @@ version = "1.3.0" [[deps.ReservoirComputing]] deps = ["Adapt", "CellularAutomata", "Compat", "LinearAlgebra", "NNlib", "Random", "Reexport", "StatsBase", "WeightInitializers"] -path = "/var/lib/buildkite-agent/builds/gpuci-8/julialang/reservoircomputing-dot-jl" +path = "/var/lib/buildkite-agent/builds/gpuci-16/julialang/reservoircomputing-dot-jl" uuid = "7c2d2b1e-3dd4-11ea-355a-8f6a8116e294" version = "0.10.6" @@ -2541,9 +2539,9 @@ version = "1.36.0+0" [[deps.WeightInitializers]] deps = ["ArgCheck", "ConcreteStructs", "GPUArraysCore", "LinearAlgebra", "Random", "SpecialFunctions", "Statistics"] -git-tree-sha1 = "f887eb7140a52db2487ea22f71f612a8caacf7d5" +git-tree-sha1 = "47b97ba36fcbbf9cc6707ed6ab60cf24fe50b981" uuid = "d49dbf32-c5c2-4618-8acc-27bb2598ef2d" -version = "1.1.0" +version = "1.1.1" [deps.WeightInitializers.extensions] WeightInitializersAMDGPUExt = ["AMDGPU", "GPUArrays"] @@ -2601,9 +2599,9 @@ version = "1.8.6+3" [[deps.Xorg_libXau_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] -git-tree-sha1 = "2b0e27d52ec9d8d483e2ca0b72b3cb1a8df5c27a" +git-tree-sha1 = "e9216fdcd8514b7072b43653874fd688e4c6c003" uuid = "0c0b7dd1-d40b-584c-a123-a41640f87eec" -version = "1.0.11+3" +version = "1.0.12+0" [[deps.Xorg_libXcursor_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Xorg_libXfixes_jll", "Xorg_libXrender_jll"] @@ -2613,9 +2611,9 @@ version = "1.2.3+0" [[deps.Xorg_libXdmcp_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] -git-tree-sha1 = "02054ee01980c90297412e4c809c8694d7323af3" +git-tree-sha1 = "89799ae67c17caa5b3b5a19b8469eeee474377db" uuid = "a3789734-cfe1-5b06-b2d0-1dd0d9d62d05" -version = "1.1.4+3" +version = "1.1.5+0" [[deps.Xorg_libXext_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "Xorg_libX11_jll"] @@ -2655,9 +2653,9 @@ version = "0.9.11+1" [[deps.Xorg_libpthread_stubs_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] -git-tree-sha1 = "fee57a273563e273f0f53275101cd41a8153517a" +git-tree-sha1 = 
"c57201109a9e4c0585b208bb408bc41d205ac4e9" uuid = "14d82f49-176c-5ed1-bb49-ad3f5cbd8c74" -version = "0.1.1+3" +version = "0.1.2+0" [[deps.Xorg_libxcb_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl", "XSLT_jll", "Xorg_libXau_jll", "Xorg_libXdmcp_jll", "Xorg_libpthread_stubs_jll"] @@ -2721,9 +2719,9 @@ version = "2.39.0+0" [[deps.Xorg_xtrans_jll]] deps = ["Artifacts", "JLLWrappers", "Libdl"] -git-tree-sha1 = "b9ead2d2bdb27330545eb14234a2e300da61232e" +git-tree-sha1 = "6dba04dbfb72ae3ebe5418ba33d087ba8aa8cb00" uuid = "c5fb5394-a638-5e4d-96e5-b29de1b5cf10" -version = "1.5.0+3" +version = "1.5.1+0" [[deps.Zlib_jll]] deps = ["Libdl"] diff --git a/dev/esn_tutorials/change_layers/index.html b/dev/esn_tutorials/change_layers/index.html index 5df074a7..565eb389 100644 --- a/dev/esn_tutorials/change_layers/index.html +++ b/dev/esn_tutorials/change_layers/index.html @@ -36,4 +36,4 @@ output = esn(Predictive(testing_input), wout) println(msd(testing_target, output)) end
0.00350414394710908
-0.003952170425937309

As shown, changing layers in ESN models is straightforward. Be sure to check the API documentation for a full list of reservoirs and layers.

Bibliography

+0.003952170425937309

As shown, changing layers in ESN models is straightforward. Be sure to check the API documentation for a full list of reservoirs and layers.
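For instance, a hedged sketch of swapping both initializers in one constructor call (data, sizes, and the keyword values are illustrative assumptions):

using ReservoirComputing

train_data = rand(Float32, 1, 300)
esn = ESN(train_data, 1, 100;
    reservoir=simple_cycle(; weight=0.7),
    input_layer=minimal_init(; weight=0.3),
    washout=10)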

Bibliography

diff --git a/dev/esn_tutorials/deep_esn/f1f92c44.svg b/dev/esn_tutorials/deep_esn/91144331.svg similarity index 94% rename from dev/esn_tutorials/deep_esn/f1f92c44.svg rename to dev/esn_tutorials/deep_esn/91144331.svg index 48badc9a..c8663c2e 100644 [regenerated example plot; SVG path data omitted] diff --git a/dev/esn_tutorials/deep_esn/index.html b/dev/esn_tutorials/deep_esn/index.html index b9209bd2..adefecee 100644 --- a/dev/esn_tutorials/deep_esn/index.html +++ b/dev/esn_tutorials/deep_esn/index.html @@ -57,4 +57,4 @@ plot(p1, p2, p3; plot_title="Lorenz System Coordinates", layout=(3, 1), xtickfontsize=12, ytickfontsize=12, xguidefontsize=15, yguidefontsize=15, - legendfontsize=12, titlefontsize=20)Example block output

Documentation

  • 1Gallicchio, Claudio, and Alessio Micheli. "Deep echo state network (deepesn): A brief survey." arXiv preprint arXiv:1712.04323 (2017).
+ legendfontsize=12, titlefontsize=20)Example block output

Documentation

  • 1Gallicchio, Claudio, and Alessio Micheli. "Deep echo state network (deepesn): A brief survey." arXiv preprint arXiv:1712.04323 (2017).
diff --git a/dev/esn_tutorials/different_drivers/a62c1b7a.svg b/dev/esn_tutorials/different_drivers/6e38d321.svg similarity index 94% rename from dev/esn_tutorials/different_drivers/a62c1b7a.svg rename to dev/esn_tutorials/different_drivers/6e38d321.svg index ef5a61e9..a25cdc9b 100644 [regenerated example plot; SVG path data omitted] diff --git a/dev/esn_tutorials/different_drivers/index.html b/dev/esn_tutorials/different_drivers/index.html index e64282bf..cb04994c 100644 --- a/dev/esn_tutorials/different_drivers/index.html +++ b/dev/esn_tutorials/different_drivers/index.html @@ -92,7 +92,7 @@ linewidth=2.5, xtickfontsize=12, ytickfontsize=12, - size=(1080, 720))Example block output

It is interesting to compare the GRU-driven ESN with the standard RNN-driven ESN. Using the same parameters defined before, it is possible to do the following:

using StatsBase
+    size=(1080, 720))
Example block output

It is interesting to compare the GRU-driven ESN with the standard RNN-driven ESN. Using the same parameters defined before, it is possible to do the following:

using StatsBase
 
 esn_rnn = ESN(training_input, 1, res_size;
     reservoir=rand_sparse(; radius=res_radius),
@@ -103,4 +103,4 @@
 
 println(msd(testing_target, output))
 println(msd(testing_target, output_rnn))
11.97119927926169
-6.816207758268541
  • 1Lun, Shu-Xian, et al. "A novel model of leaky integrator echo state network for time-series prediction." Neurocomputing 159 (2015): 58-66.
  • 2Cho, Kyunghyun, et al. “Learning phrase representations using RNN encoder-decoder for statistical machine translation.” arXiv preprint arXiv:1406.1078 (2014).
  • 3Wang, Xinjie, Yaochu Jin, and Kuangrong Hao. "A Gated Recurrent Unit based Echo State Network." 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020.
  • 4Di Sarli, Daniele, Claudio Gallicchio, and Alessio Micheli. "Gated Echo State Networks: a preliminary study." 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 2020.
  • 5Dey, Rahul, and Fathi M. Salem. "Gate-variants of gated recurrent unit (GRU) neural networks." 2017 IEEE 60th international midwest symposium on circuits and systems (MWSCAS). IEEE, 2017.
  • 6Zhou, Guo-Bing, et al. "Minimal gated unit for recurrent neural networks." International Journal of Automation and Computing 13.3 (2016): 226-234.
  • 7Hübner, Uwe, Nimmi B. Abraham, and Carlos O. Weiss. "Dimensions and entropies of chaotic intensity pulsations in a single-mode far-infrared NH 3 laser." Physical Review A 40.11 (1989): 6354.
+6.816207758268541
  • 1Lun, Shu-Xian, et al. "A novel model of leaky integrator echo state network for time-series prediction." Neurocomputing 159 (2015): 58-66.
  • 2Cho, Kyunghyun, et al. “Learning phrase representations using RNN encoder-decoder for statistical machine translation.” arXiv preprint arXiv:1406.1078 (2014).
  • 3Wang, Xinjie, Yaochu Jin, and Kuangrong Hao. "A Gated Recurrent Unit based Echo State Network." 2020 International Joint Conference on Neural Networks (IJCNN). IEEE, 2020.
  • 4Di Sarli, Daniele, Claudio Gallicchio, and Alessio Micheli. "Gated Echo State Networks: a preliminary study." 2020 International Conference on INnovations in Intelligent SysTems and Applications (INISTA). IEEE, 2020.
  • 5Dey, Rahul, and Fathi M. Salem. "Gate-variants of gated recurrent unit (GRU) neural networks." 2017 IEEE 60th international midwest symposium on circuits and systems (MWSCAS). IEEE, 2017.
  • 6Zhou, Guo-Bing, et al. "Minimal gated unit for recurrent neural networks." International Journal of Automation and Computing 13.3 (2016): 226-234.
  • 7Hübner, Uwe, Nimmi B. Abraham, and Carlos O. Weiss. "Dimensions and entropies of chaotic intensity pulsations in a single-mode far-infrared NH 3 laser." Physical Review A 40.11 (1989): 6354.
diff --git a/dev/esn_tutorials/different_training/index.html index c1605d28..1fe6ca7a 100644 --- a/dev/esn_tutorials/different_training/index.html +++ b/dev/esn_tutorials/different_training/index.html @@ -1,2 +1,2 @@ -Using Different Training Methods · ReservoirComputing.jl +Using Different Training Methods · ReservoirComputing.jl diff --git a/dev/esn_tutorials/hybrid/3fc7ec16.svg b/dev/esn_tutorials/hybrid/5d4edf2d.svg similarity index 93% rename from dev/esn_tutorials/hybrid/3fc7ec16.svg rename to dev/esn_tutorials/hybrid/5d4edf2d.svg index 396585b2..4a1dcf2b 100644 [regenerated example plot; SVG path data omitted] diff --git a/dev/esn_tutorials/hybrid/index.html b/dev/esn_tutorials/hybrid/index.html index 4954f0de..2a64a825 100644 --- a/dev/esn_tutorials/hybrid/index.html +++ b/dev/esn_tutorials/hybrid/index.html @@ -58,4 +58,4 @@ plot(p1, p2, p3; plot_title="Lorenz System Coordinates", layout=(3, 1), xtickfontsize=12, ytickfontsize=12, xguidefontsize=15, yguidefontsize=15, - legendfontsize=12, titlefontsize=20)Example block output

Bibliography

  • 1Pathak, Jaideep, et al. "Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model." Chaos: An Interdisciplinary Journal of Nonlinear Science 28.4 (2018): 041101.
+ legendfontsize=12, titlefontsize=20)Example block output

Bibliography

  • 1Pathak, Jaideep, et al. "Hybrid forecasting of chaotic processes: Using machine learning in conjunction with a knowledge-based model." Chaos: An Interdisciplinary Journal of Nonlinear Science 28.4 (2018): 041101.
diff --git a/dev/esn_tutorials/lorenz_basic/64e2ad7a.svg b/dev/esn_tutorials/lorenz_basic/ba9a7b1a.svg similarity index 94% rename from dev/esn_tutorials/lorenz_basic/64e2ad7a.svg rename to dev/esn_tutorials/lorenz_basic/ba9a7b1a.svg index 68414812..9cbfb496 100644 [regenerated example plot; SVG path data omitted] diff --git a/dev/esn_tutorials/lorenz_basic/index.html b/dev/esn_tutorials/lorenz_basic/index.html index f2be9c0d..7b5dec3c 100644 --- a/dev/esn_tutorials/lorenz_basic/index.html +++ b/dev/esn_tutorials/lorenz_basic/index.html @@ -64,4 +64,4 @@ plot(p1, p2, p3; plot_title="Lorenz System Coordinates", layout=(3, 1), xtickfontsize=12, ytickfontsize=12, xguidefontsize=15, yguidefontsize=15, - legendfontsize=12, titlefontsize=20)Example block output

Bibliography

  • 1Pathak, Jaideep, et al. "Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.12 (2017): 121102.
  • 2Lukoševičius, Mantas. "A practical guide to applying echo state networks." Neural networks: Tricks of the trade. Springer, Berlin, Heidelberg, 2012. 659-686.
  • 3Lu, Zhixin, et al. "Reservoir observers: Model-free inference of unmeasured variables in chaotic systems." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.4 (2017): 041102.
diff --git a/dev/general/different_training/index.html b/dev/general/different_training/index.html index 4864d79d..d22a0dd2 100644 --- a/dev/general/different_training/index.html +++ b/dev/general/different_training/index.html @@ -4,4 +4,4 @@
    regression::Any
    solver::Any
    regression_kwargs::Any
end

To perform ridge regression through the MLJLinearModels API, you can use LinearModel(;regression=LinearRegression). You can also choose a specific solver, for example LinearModel(regression=LinearRegression, solver=Analytical()). For all the available solvers, please refer to the MLJLinearModels documentation.

To change the regularization coefficient in the ridge example, for instance to lambda = 0.1, pass it through regression_kwargs, like so: LinearModel(;regression=LinearRegression, solver=Analytical(), regression_kwargs=(lambda=lambda,)). Note the trailing comma, which makes the argument a NamedTuple. The coefficient names must follow the MLJLinearModels API: lambda and gamma for LassoRegression, and delta, lambda, and gamma for HuberRegression. Again, please check the relevant documentation if in doubt. When using MLJLinearModels based regressors, do remember to specify using MLJLinearModels.
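
As a minimal sketch of the above (assuming an already built ESN esn and supervised target_data, and that regression_kwargs is forwarded to the MLJLinearModels constructor; everything except LinearModel, LinearRegression, Analytical, and train is a placeholder):

using ReservoirComputing, MLJLinearModels

# ridge-style training through MLJLinearModels with an explicit solver;
# the regularization coefficient is passed via regression_kwargs as a NamedTuple
training_method = LinearModel(; regression=LinearRegression,
                              solver=Analytical(),
                              regression_kwargs=(lambda=0.1,))

output_layer = train(esn, target_data, training_method)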

Support Vector Regression

Contrary to the LinearModel case, no wrappers are needed for support vector regression. By using LIBSVM.jl, the LIBSVM wrappers in Julia, it is possible to pass either EpsilonSVR() or NuSVR() directly to train(). For the full range of kernels provided and the parameters to call, we refer the user to the official documentation. Like before, if one intends to use LIBSVM regressors, it is necessary to specify using LIBSVM.
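
A similar hedged sketch for support vector regression (again assuming esn and target_data exist; constructor names and keyword arguments follow LIBSVM.jl and are illustrative):

using ReservoirComputing, LIBSVM

# a LIBSVM regressor is passed directly as the training method
svr = LIBSVM.EpsilonSVR()    # or LIBSVM.NuSVR(); kernels and parameters as in LIBSVM.jl
output_layer = train(esn, target_data, svr)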

diff --git a/dev/general/predictive_generative/index.html b/dev/general/predictive_generative/index.html index 0d168882..8728c3a8 100644 --- a/dev/general/predictive_generative/index.html +++ b/dev/general/predictive_generative/index.html @@ -1,2 +1,2 @@ -Generative vs Predictive · ReservoirComputing.jl

Generative vs Predictive

The library provides two different methods for prediction, denoted as Predictive() and Generative(). These methods correspond to the two major applications of Reservoir Computing models found in the literature. This section aims to clarify the differences between these two methods before providing further details on their usage in the library.

Predictive

In the first method, users can utilize Reservoir Computing models in a manner similar to standard Machine Learning models. This involves using a set of features as input and a set of labels as outputs. In this case, both the feature and label sets can consist of vectors of different dimensions. Specifically, let's denote the feature set as $X=\{x_1,...,x_n\}$ where $x_i \in \mathbb{R}^{N}$, and the label set as $Y=\{y_1,...,y_n\}$ where $y_i \in \mathbb{R}^{M}$.

To make predictions using this method, you need to provide the feature set that you want to predict the labels for. For example, you can call Predictive(X) using the feature set $X$ as input. This method allows for both one-step-ahead and multi-step-ahead predictions.

Generative

The generative method provides a different approach to forecasting with Reservoir Computing models. It enables you to extend the forecasting capabilities of the model by allowing predicted results to be fed back into the model to generate the next prediction. This autonomy allows the model to make predictions without the need for a feature dataset as input.

To use the generative method, you only need to specify the number of time steps that you intend to forecast. For instance, you can call Generative(100) to generate predictions for the next one hundred time steps.

The key distinction between these methods lies in how predictions are made. The predictive method relies on input feature sets to make predictions, while the generative method allows for autonomous forecasting by feeding predicted results back into the model.
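
As a brief illustration of the two calls (assuming an already trained ESN esn with its trained output_layer and a feature matrix test_features; the variable names are placeholders):

using ReservoirComputing

# Predictive: supply the features whose labels you want to predict
predicted_labels = esn(Predictive(test_features), output_layer)

# Generative: only the forecast length is given; each prediction is fed
# back into the model as the next input
forecast = esn(Generative(100), output_layer)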

diff --git a/dev/general/states_variation/index.html b/dev/general/states_variation/index.html index 34e66241..1a219f1e 100644 --- a/dev/general/states_variation/index.html +++ b/dev/general/states_variation/index.html @@ -2,4 +2,4 @@ Altering States · ReservoirComputing.jl

Altering States

In ReservoirComputing models, it's possible to perform alterations on the reservoir states during the training stage. These alterations can improve prediction results or replicate results found in the literature. Alterations are categorized into two possibilities: padding or extending the states, and applying non-linear algorithms to the states.

Padding and Extending States

Extending States

Extending the states involves appending the corresponding input values to the reservoir states. If $\textbf{x}(t)$ represents the reservoir state at time $t$ corresponding to the input $\textbf{u}(t)$, the extended state is represented as $[\textbf{x}(t); \textbf{u}(t)]$, where $[;]$ denotes vertical concatenation. This procedure is commonly used in Echo State Networks. You can extend the states in every ReservoirComputing.jl model by using the states_type keyword argument and calling the ExtendedStates() method. No additional arguments are needed.

Padding States

Padding the states involves appending a constant value, such as 1.0, to each state. In the notation introduced earlier, padded states can be represented as $[\textbf{x}(t); 1.0]$. This approach is detailed in "A practical guide to applying echo state networks." by Lukoševičius, Mantas. To pad the states, you can use the states_type keyword argument and call the PaddedStates(padding) method, where padding represents the value to be concatenated to the states. By default, the padding value is set to 1.0, so most of the time, calling PaddedStates() will suffice.

Additionally, you can pad the extended states by using the PaddedExtendedStates(padding) method, which also has a default padding value of 1.0.

You can choose not to apply any of these changes to the states by calling StandardStates(), which is the default choice for the states.
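
A short sketch of selecting these options at construction time (the data, input size, and reservoir size below are arbitrary placeholders):

using ReservoirComputing

train_data = rand(Float32, 3, 100)    # 3 features, 100 time steps

esn_standard = ESN(train_data, 3, 300)                                    # default: StandardStates()
esn_extended = ESN(train_data, 3, 300; states_type=ExtendedStates())     # [x(t); u(t)]
esn_padded   = ESN(train_data, 3, 300; states_type=PaddedStates(1.0))    # [x(t); 1.0]
esn_padext   = ESN(train_data, 3, 300; states_type=PaddedExtendedStates())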

Non-Linear Algorithms

First introduced in [1] and expanded in [2], non-linear algorithms are nonlinear combinations of the columns of the state matrix. There are three such algorithms implemented in ReservoirComputing.jl, and you can choose which one to use with the nla_type keyword argument. The default value is NLADefault(), which means no non-linear algorithm is applied.

The available non-linear algorithms are:

  • NLAT1()
  • NLAT2()
  • NLAT3()

These algorithms perform specific operations on the reservoir states. To provide a better understanding of what they do, let $\textbf{x}_{i, j}$ be the elements of the state matrix, with $i=1,...,T$ and $j=1,...,N$, where $T$ is the length of the training set and $N$ is the reservoir size.

NLAT1

\[\tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j} \times \textbf{x}_{i,j} \ \ \text{if \textit{j} is odd} \\ \tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j} \ \ \text{if \textit{j} is even}\]

NLAT2

\[\tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j-1} \times \textbf{x}_{i,j-2} \ \ \text{if \textit{j} > 1 is odd} \\ \tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j} \ \ \text{if \textit{j} is 1 or even}\]

NLAT3

\[\tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j-1} \times \textbf{x}_{i,j+1} \ \ \text{if \textit{j} > 1 is odd} \\ \tilde{\textbf{x}}_{i,j} = \textbf{x}_{i,j} \ \ \text{if \textit{j} is 1 or even}\]
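
For illustration only, a hedged sketch of selecting an algorithm via nla_type, together with a hand-written version of the T1 rule following the index convention above (rows i over time, columns j over reservoir units); this is not the library's internal implementation:

using ReservoirComputing

esn = ESN(rand(Float32, 3, 100), 3, 300; nla_type=NLAT1())

# hand-written T1 rule on a T×N state matrix: square the entries in odd columns
function nlat1_demo(states)
    new_states = copy(states)
    for j in axes(states, 2)
        if isodd(j)
            new_states[:, j] .= states[:, j] .^ 2
        end
    end
    return new_states
end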

  • 1Pathak, Jaideep, et al. "Using machine learning to replicate chaotic attractors and calculate Lyapunov exponents from data." Chaos: An Interdisciplinary Journal of Nonlinear Science 27.12 (2017): 121102.
  • 2Chattopadhyay, Ashesh, Pedram Hassanzadeh, and Devika Subramanian. "Data-driven predictions of a multiscale Lorenz 96 chaotic system using machine-learning methods: reservoir computing, artificial neural network, and long short-term memory network." Nonlinear Processes in Geophysics 27.3 (2020): 373-389.
diff --git a/dev/index.html b/dev/index.html index 8a24c935..c7dd56a8 100644 --- a/dev/index.html +++ b/dev/index.html @@ -11,14 +11,14 @@ number = {288}, pages = {1--8}, url = {http://jmlr.org/papers/v23/22-0611.html} -}

+}

Reproducibility

The documentation of this SciML package was built using these direct dependencies,
Status `/var/lib/buildkite-agent/builds/gpuci-16/julialang/reservoircomputing-dot-jl/docs/Project.toml`
  [878138dc] CellularAutomata v0.0.2
   [0c46a032] DifferentialEquations v7.15.0
   [e30172f5] Documenter v1.8.0
   [1dea7af3] OrdinaryDiffEq v6.90.1
   [91a5bcdd] Plots v1.40.9
   [31e2f376] PredefinedDynamicalSystems v1.3.0
-  [7c2d2b1e] ReservoirComputing v0.10.6 `/var/lib/buildkite-agent/builds/gpuci-8/julialang/reservoircomputing-dot-jl`
+  [7c2d2b1e] ReservoirComputing v0.10.6 `/var/lib/buildkite-agent/builds/gpuci-16/julialang/reservoircomputing-dot-jl`
   [2913bbd2] StatsBase v0.34.4
   [37e2e46d] LinearAlgebra v1.11.0
   [9a3f8284] Random v1.11.0
@@ -36,11 +36,11 @@
   JULIA_CPU_THREADS = 2
   JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01852978-cea0-41b9-93ac-ff3dc03e5dc5
   LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
+  JULIA_PKG_SERVER =
A more complete overview of all dependencies and their versions is also provided.
Status `/var/lib/buildkite-agent/builds/gpuci-16/julialang/reservoircomputing-dot-jl/docs/Manifest.toml`
   [47edcb42] ADTypes v1.11.0
   [a4c015fc] ANSIColoredPrinters v0.0.1
   [1520ce14] AbstractTrees v0.4.5
-  [7d9f7c33] Accessors v0.1.40
+  [7d9f7c33] Accessors v0.1.41
   [79e6a3ab] Adapt v4.1.1
   [66dad0bd] AliasTables v1.1.3
   [a95523ee] AlmostBlockDiagonals v0.1.10
@@ -55,9 +55,9 @@
   [764a87c0] BoundaryValueDiffEq v5.13.0
   [7227322d] BoundaryValueDiffEqAscher v1.2.0
   [56b672f2] BoundaryValueDiffEqCore v1.3.0
-  [85d9eb09] BoundaryValueDiffEqFIRK v1.2.0
-  [1a22d4ce] BoundaryValueDiffEqMIRK v1.2.0
-  [9255f1d6] BoundaryValueDiffEqMIRKN v1.0.0
+  [85d9eb09] BoundaryValueDiffEqFIRK v1.3.0
+  [1a22d4ce] BoundaryValueDiffEqMIRK v1.3.0
+  [9255f1d6] BoundaryValueDiffEqMIRKN v1.1.0
   [ed55bfe0] BoundaryValueDiffEqShooting v1.3.0
   [70df07ce] BracketingNonlinearSolve v1.1.0
   [fa961155] CEnum v0.5.0
@@ -236,7 +236,7 @@
   [2792f1a3] RegistryInstances v0.1.0
   [05181044] RelocatableFolders v1.0.1
   [ae029012] Requires v1.3.0
-  [7c2d2b1e] ReservoirComputing v0.10.6 `/var/lib/buildkite-agent/builds/gpuci-8/julialang/reservoircomputing-dot-jl`
+  [7c2d2b1e] ReservoirComputing v0.10.6 `/var/lib/buildkite-agent/builds/gpuci-16/julialang/reservoircomputing-dot-jl`
   [ae5879a3] ResettableStacks v1.1.1
   [79098fc4] Rmath v0.8.0
   [f2b01f46] Roots v2.2.4
@@ -291,7 +291,7 @@
   [41fe7b60] Unzip v0.2.0
   [3d5dd08c] VectorizationBase v0.21.71
   [19fa3120] VertexSafeGraphs v0.2.0
-  [d49dbf32] WeightInitializers v1.1.0
+  [d49dbf32] WeightInitializers v1.1.1
   [6e34b625] Bzip2_jll v1.0.8+4
   [83423d85] Cairo_jll v1.18.2+1
   [ee1fde0b] Dbus_jll v1.14.10+0
@@ -313,15 +313,15 @@
   [c1c5ebd0] LAME_jll v3.100.2+0
   [88015f11] LERC_jll v4.0.1+0
   [1d63c593] LLVMOpenMP_jll v18.1.7+0
-  [dd4b983a] LZO_jll v2.10.2+3
+  [dd4b983a] LZO_jll v2.10.3+0
  [e9f186c6] Libffi_jll v3.2.2+2
   [d4300ac3] Libgcrypt_jll v1.11.0+0
   [7e76a0d4] Libglvnd_jll v1.7.0+0
   [7add5ba3] Libgpg_error_jll v1.51.1+0
-  [94ce4f54] Libiconv_jll v1.17.0+1
-  [4b2f31a3] Libmount_jll v2.40.2+2
+  [94ce4f54] Libiconv_jll v1.18.0+0
+  [4b2f31a3] Libmount_jll v2.40.3+0
   [89763e89] Libtiff_jll v4.7.1+0
-  [38a345b3] Libuuid_jll v2.40.2+2
+  [38a345b3] Libuuid_jll v2.40.3+0
   [856f044c] MKL_jll v2024.2.0+0
   [e7412a2a] Ogg_jll v1.3.5+1
   [458c3c95] OpenSSL_jll v3.0.15+3
@@ -344,16 +344,16 @@
   [f67eecfb] Xorg_libICE_jll v1.1.1+0
   [c834827a] Xorg_libSM_jll v1.2.4+0
   [4f6342f7] Xorg_libX11_jll v1.8.6+3
-  [0c0b7dd1] Xorg_libXau_jll v1.0.11+3
+  [0c0b7dd1] Xorg_libXau_jll v1.0.12+0
   [935fb764] Xorg_libXcursor_jll v1.2.3+0
-  [a3789734] Xorg_libXdmcp_jll v1.1.4+3
+  [a3789734] Xorg_libXdmcp_jll v1.1.5+0
   [1082639a] Xorg_libXext_jll v1.3.6+3
   [d091e8ba] Xorg_libXfixes_jll v6.0.0+0
   [a51aa0fd] Xorg_libXi_jll v1.8.2+0
   [d1454406] Xorg_libXinerama_jll v1.1.5+0
   [ec84b674] Xorg_libXrandr_jll v1.5.4+0
   [ea2f1a96] Xorg_libXrender_jll v0.9.11+1
-  [14d82f49] Xorg_libpthread_stubs_jll v0.1.1+3
+  [14d82f49] Xorg_libpthread_stubs_jll v0.1.2+0
   [c7cfdc94] Xorg_libxcb_jll v1.17.0+3
   [cc61e674] Xorg_libxkbfile_jll v1.1.2+1
   [e920d4aa] Xorg_xcb_util_cursor_jll v0.1.4+0
@@ -364,7 +364,7 @@
   [c22f9ab0] Xorg_xcb_util_wm_jll v0.4.1+1
   [35661453] Xorg_xkbcomp_jll v1.4.6+1
   [33bec58e] Xorg_xkeyboard_config_jll v2.39.0+0
-  [c5fb5394] Xorg_xtrans_jll v1.5.0+3
+  [c5fb5394] Xorg_xtrans_jll v1.5.1+0
   [3161d3a3] Zstd_jll v1.5.7+0
   [35ca27e7] eudev_jll v3.2.9+0
   [214eeab7] fzf_jll v0.56.3+0
@@ -430,4 +430,4 @@
   [8e850b90] libblastrampoline_jll v5.11.0+0
   [8e850ede] nghttp2_jll v1.59.0+0
   [3f19e933] p7zip_jll v17.4.0+2
+Info Packages marked with ⌅ have new versions available but compatibility constraints restrict them from upgrading. To see why use `status --outdated -m`

You can also download the manifest file and the project file.
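
A minimal sketch of recreating this environment locally (assuming the downloaded Project.toml and Manifest.toml are placed in a directory called docs-env; the directory name is a placeholder):

using Pkg

Pkg.activate("docs-env")   # directory holding the downloaded Project.toml / Manifest.toml
Pkg.instantiate()          # install the exact pinned versions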

diff --git a/dev/reca_tutorials/reca/index.html b/dev/reca_tutorials/reca/index.html index 78c94a36..db761bde 100644 --- a/dev/reca_tutorials/reca/index.html +++ b/dev/reca_tutorials/reca/index.html @@ -13,4 +13,4 @@ input_encoding=RandomMapping(16, 40))
RECA{Matrix{Float64}, CellularAutomata.DCA{Int64, Vector{Int64}, Int64}, RandomMaps{Int64, Int64, Int64, Matrix{Int64}, Int64}, Matrix{Float64}, StandardStates}([0.0 0.0 … 0.0 0.0; 1.0 1.0 … 0.0 0.0; 0.0 0.0 … 1.0 1.0; 0.0 0.0 … 0.0 0.0], CellularAutomata.DCA{Int64, Vector{Int64}, Int64}(90, [0, 1, 0, 1, 1, 0, 1, 0], 2, 1), RandomMaps{Int64, Int64, Int64, Matrix{Int64}, Int64}(16, 40, 16, [26 19 21 30; 27 8 26 28; … ; 4 34 39 12; 27 12 16 35], 10240, 640), NLADefault(), [0.0 0.0 … 0.0 1.0; 0.0 1.0 … 0.0 0.0; … ; 0.0 0.0 … 0.0 0.0; 0.0 0.0 … 0.0 0.0], StandardStates())

After this, the training can be performed with the chosen method.

output_layer = train(reca, output, StandardRidge(0.00001))
OutputLayer successfully trained with output size: 4

The prediction in this case will be a Predictive() one, with input data equal to the training data. In addition, to test the 5 bit memory task, the floating-point predictions have to be thresholded to 0/1 (and converted back to Float32) before comparing them with the target; at the moment, we are aware of a bug that doesn't allow boolean input data to the RECA models:

prediction = reca(Predictive(input), output_layer)
final_pred = convert(AbstractArray{Float32}, prediction .> 0.5)

final_pred == output
true
  • 1Yilmaz, Ozgur. "Reservoir computing using cellular automata." arXiv preprint arXiv:1410.0162 (2014).
  • 2Margem, Mrwan, and Ozgür Yilmaz. "An experimental study on cellular automata reservoir in pathological sequence learning tasks." (2017).
  • 3Nichele, Stefano, and Andreas Molund. "Deep reservoir computing using cellular automata." arXiv preprint arXiv:1703.02806 (2017).
  • 4Hochreiter, Sepp, and Jürgen Schmidhuber. "Long short-term memory." Neural computation 9.8 (1997): 1735-1780.