Commit bd2b9a0: format

MartinuzziFrancesco committed Feb 27, 2024
1 parent a56e043
Showing 5 changed files with 135 additions and 92 deletions.
1 change: 0 additions & 1 deletion src/esn/echostatenetwork.jl
@@ -276,4 +276,3 @@ end
function pad_esnstate!(variation, states_type, x_pad, x, args...)
x_pad = pad_state!(states_type, x_pad, x)
end

46 changes: 26 additions & 20 deletions src/esn/esn.jl
@@ -18,27 +18,30 @@ end
Creates an Echo State Network (ESN) using specified parameters and training data, suitable for various machine learning tasks.
# Parameters
- `train_data`: Matrix of training data (columns as time steps, rows as features).
- `variation`: Variation of ESN (default: `Default()`).
- `input_layer`: Input layer of ESN (default: `DenseLayer()`).
- `reservoir`: Reservoir of the ESN (default: `RandSparseReservoir(100)`).
- `bias`: Bias vector for each time step (default: `NullLayer()`).
- `reservoir_driver`: Mechanism for evolving reservoir states (default: `RNN()`).
- `nla_type`: Non-linear activation type (default: `NLADefault()`).
- `states_type`: Format for storing states (default: `StandardStates()`).
- `washout`: Initial time steps to discard (default: `0`).
- `matrix_type`: Type of matrices used internally (default: type of `train_data`).
# Returns
- An initialized ESN instance with specified parameters.
# Examples
```julia
using ReservoirComputing
train_data = rand(10, 100) # 10 features, 100 time steps
esn = ESN(train_data, reservoir = RandSparseReservoir(200), washout = 10)
```
"""
function ESN(train_data,
@@ -90,36 +93,39 @@ end
Trains an Echo State Network (ESN) using the provided target data and a specified training method.
# Parameters
- `esn::AbstractEchoStateNetwork`: The ESN instance to be trained.
- `target_data`: Supervised training data for the ESN.
- `training_method`: The method for training the ESN (default: `StandardRidge(0.0)`).
# Returns
- The trained ESN model. Its type and structure depend on `training_method` and the ESN's implementation.
# Example
```julia
using ReservoirComputing
# Initialize an ESN instance and target data
esn = ESN(train_data, reservoir = RandSparseReservoir(200), washout = 10)
target_data = rand(size(train_data, 2))
# Train the ESN using the default training method
trained_esn = train(esn, target_data)
# Train the ESN using a custom training method
trained_esn = train(esn, target_data, training_method = StandardRidge(1.0))
```
# Notes
- When using a `Hybrid` variation, the function extends the state matrix with data from the
  physical model included in the `variation`.
- The training is handled by a lower-level `_train` function which takes the new state matrix
  and performs the actual training using the specified `training_method`.
"""
function train(esn::AbstractEchoStateNetwork,
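Taken together, the `ESN` and `train` docstrings above suggest the following minimal workflow. This is a sketch: the data sizes and ridge coefficient are arbitrary, and the generative-prediction step is omitted because its exact call signature varies across versions of ReservoirComputing.jl.

```julia
using ReservoirComputing

# Toy task: one-step-ahead prediction of a 3-feature series.
train_data = rand(3, 200)
input_seq  = train_data[:, 1:(end - 1)]   # inputs x_t
target_seq = train_data[:, 2:end]         # targets x_{t+1}

esn = ESN(input_seq, reservoir = RandSparseReservoir(100))

# Ridge-regularized readout, as described in the `train` docstring above.
trained = train(esn, target_seq, training_method = StandardRidge(1.0e-6))
```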
78 changes: 48 additions & 30 deletions src/esn/esn_input_layers.jl
@@ -4,18 +4,22 @@
Create and return a matrix with random values, uniformly distributed within a range defined by `scaling`. This function is useful for initializing matrices, such as the layers of a neural network, with scaled random values.
# Arguments
- `rng`: An instance of `AbstractRNG` for random number generation.
- `T`: The data type for the elements of the matrix.
- `dims`: Dimensions of the matrix. It must be a 2-element tuple specifying the number of rows and columns (e.g., `(res_size, in_size)`).
- `scaling`: A scaling factor to define the range of the uniform distribution. The matrix elements will be randomly chosen from the range `[-scaling, scaling]`. Defaults to `T(0.1)`.
# Returns
A matrix of type `T` with dimensions specified by `dims`. Each element of the matrix is a random number uniformly distributed between `-scaling` and `scaling`.
# Example
```julia
rng = Random.default_rng()
matrix = scaled_rand(rng, Float64, (100, 50); scaling = 0.2)
```
"""
function scaled_rand(rng::AbstractRNG,
::Type{T},
@@ -32,20 +36,25 @@ end
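The body of `scaled_rand` is collapsed in this diff. Based only on the docstring, its uniform-range construction can be sketched as follows (`scaled_rand_sketch` is a hypothetical stand-in, not the library implementation):

```julia
using Random

# Map rand(rng) ∈ [0, 1) affinely onto [-scaling, scaling].
function scaled_rand_sketch(rng::AbstractRNG, ::Type{T}, dims::NTuple{2, Int};
                            scaling = T(0.1)) where {T <: Number}
    return T(2) .* scaling .* (rand(rng, T, dims...) .- T(0.5))
end

rng = Random.default_rng()
W_in = scaled_rand_sketch(rng, Float64, (100, 50); scaling = 0.2)
@assert all(-0.2 .<= W_in .<= 0.2)
```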
Create and return a matrix representing a weighted input layer for Echo State Networks (ESNs). This initializer generates a weighted input matrix with random non-zero elements distributed uniformly within the range [-`scaling`, `scaling`], inspired by the approach in [^Lu2017].
# Arguments
- `rng`: An instance of `AbstractRNG` for random number generation.
- `T`: The data type for the elements of the matrix.
- `dims`: A 2-element tuple specifying the approximate reservoir size and input size (e.g., `(approx_res_size, in_size)`).
- `scaling`: The scaling factor for the weight distribution. Defaults to `T(0.1)`.
# Returns
A matrix representing the weighted input layer as defined in [^Lu2017]. The matrix dimensions will be adjusted to ensure each input unit connects to an equal number of reservoir units.
# Example
```julia
rng = Random.default_rng()
input_layer = weighted_init(rng, Float64, (3, 300); scaling = 0.2)
```
# References
[^Lu2017]: Lu, Zhixin, et al.
"Reservoir observers: Model-free inference of unmeasured variables in chaotic systems."
Chaos: An Interdisciplinary Journal of Nonlinear Science 27.4 (2017): 041102.
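The construction described in [^Lu2017] can be sketched as a block-structured matrix in which each input unit drives its own contiguous block of reservoir units. The helper below is an assumption drawn from the docstring ("each input unit connects to an equal number of reservoir units"), not the library code:

```julia
using Random

# q = res_size ÷ in_size reservoir units per input; block weights uniform in
# [-scaling, scaling]; all other entries stay zero.
function weighted_sketch(rng::AbstractRNG, approx_res_size::Int, in_size::Int;
                         scaling = 0.1)
    q = approx_res_size ÷ in_size
    W_in = zeros(q * in_size, in_size)    # size adjusted to a multiple of in_size
    for j in 1:in_size
        rows = ((j - 1) * q + 1):(j * q)
        W_in[rows, j] .= 2scaling .* (rand(rng, q) .- 0.5)
    end
    return W_in
end

W_in = weighted_sketch(Random.default_rng(), 300, 3; scaling = 0.2)
@assert size(W_in) == (300, 3)
```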
@@ -76,20 +85,22 @@
Create and return a sparse layer matrix for use in neural network models.
The matrix will be of size specified by `dims`, with the specified `sparsity` and `scaling`.
# Arguments
- `rng`: An instance of `AbstractRNG` for random number generation.
- `T`: The data type for the elements of the matrix.
- `dims`: Dimensions of the resulting sparse layer matrix.
- `scaling`: The scaling factor for the sparse layer matrix. Defaults to 0.1.
- `sparsity`: The sparsity level of the sparse layer matrix, controlling the fraction of zero elements. Defaults to 0.1.
# Returns
A sparse layer matrix.
# Example
```julia
rng = Random.default_rng()
input_layer = sparse_init(rng, Float64, (3, 300); scaling = 0.2, sparsity = 0.1)
```
"""
function sparse_init(rng::AbstractRNG, ::Type{T}, dims::Integer...;
@@ -109,22 +120,25 @@ end
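`sparse_init`'s body is collapsed here; the behavior the docstring describes can be sketched with SparseArrays. The docstring reads `sparsity` as the fraction of zero elements, so the density passed to `sprand` is taken as `1 - sparsity`; the actual implementation may define it the other way around.

```julia
using Random, SparseArrays

function sparse_sketch(rng::AbstractRNG, dims::NTuple{2, Int};
                       scaling = 0.1, sparsity = 0.1)
    # sprand draws nonzeros uniformly in (0, 1); rescale them to [-scaling, scaling].
    layer = sprand(rng, Float64, dims..., 1 - sparsity)
    nonzeros(layer) .= 2scaling .* (nonzeros(layer) .- 0.5)
    return layer
end

layer = sparse_sketch(Random.default_rng(), (3, 300))
@assert all(abs.(nonzeros(layer)) .<= 0.1)
```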
Create an input layer for a model-informed network, where part of the input is supplied by a prior model.
# Arguments
- `rng::AbstractRNG`: The random number generator.
- `T::Type`: The data type.
- `dims::Integer...`: The dimensions of the layer.
- `scaling::T = T(0.1)`: The scaling factor for the input matrix.
- `model_in_size`: The size of the input model.
- `gamma::T = T(0.5)`: The gamma value.
# Returns
- `input_matrix`: The created input matrix for the layer.
# Example
```julia
rng = Random.default_rng()
dims = (100, 200)
model_in_size = 50
input_matrix = informed_init(rng, Float64, dims; model_in_size = model_in_size)
```
"""
function informed_init(rng::AbstractRNG, ::Type{T}, dims::Integer...;

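Since the docstring leaves `gamma` underspecified, here is one plausible reading, sketched as a hypothetical helper: `gamma` is taken as the share of reservoir rows wired to the raw inputs, with the remaining rows wired to the prior model's `model_in_size` outputs. This partitioning is an assumption, not the documented behavior.

```julia
using Random

function informed_sketch(rng::AbstractRNG, res_size::Int, in_size::Int,
                         model_in_size::Int; scaling = 0.1, gamma = 0.5)
    state_size = in_size - model_in_size      # columns carrying raw inputs
    W_in = zeros(res_size, in_size)
    n_state = floor(Int, gamma * res_size)    # rows fed by raw inputs (assumed)
    for i in 1:res_size
        cols = i <= n_state ? (1:state_size) : ((state_size + 1):in_size)
        W_in[i, rand(rng, cols)] = 2scaling * (rand(rng) - 0.5)
    end
    return W_in
end

W_in = informed_sketch(Random.default_rng(), 100, 200, 50)
@assert size(W_in) == (100, 200)
```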
@@ -169,21 +183,25 @@ end
Create a layer matrix using the provided random number generator and sampling parameters.
# Arguments
- `rng::AbstractRNG`: The random number generator used to generate random numbers.
- `dims::Integer...`: The dimensions of the layer matrix.
- `weight`: The weight used to fill the layer matrix. Default is 0.1.
- `sampling`: The sampling parameters used to generate the input matrix. Default is `IrrationalSample(irrational = pi, start = 1)`.
# Returns
The layer matrix generated using the provided random number generator and sampling parameters.
# Example
```julia
using Random
rng = Random.default_rng()
dims = (3, 2)
weight = 0.5
layer_matrix = minimal_init(rng, Float64, dims; weight = weight,
                            sampling = IrrationalSample(irrational = sqrt(2), start = 1))
```
"""
function minimal_init(rng::AbstractRNG, ::Type{T}, dims::Integer...;
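`minimal_init` fills the matrix with a single `weight` magnitude whose signs come from an `IrrationalSample`. The decoding rule below (odd digit maps to `+weight`, even digit to `-weight`) is an assumption for illustration; the library's actual digit-to-sign mapping may differ.

```julia
# Hypothetical sketch of sign sampling from the digits of an irrational number.
function minimal_sketch(dims::NTuple{2, Int}; weight = 0.1,
                        irrational = pi, start = 1)
    n = prod(dims)
    # Enough binary precision to recover start + n decimal digits.
    digits_str = string(BigFloat(irrational; precision = 8 * (start + n)))
    digs = [parse(Int, c) for c in digits_str if isdigit(c)]
    signs = [isodd(digs[start + k]) ? 1.0 : -1.0 for k in 1:n]
    return reshape(weight .* signs, dims)
end

layer = minimal_sketch((3, 2); weight = 0.5)
@assert all(abs.(layer) .== 0.5)
```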
