Merge pull request #232 from SciML/fm/qol
Documentation fixes
MartinuzziFrancesco authored Jan 1, 2025
2 parents 1bca743 + 2c05a8a commit bc6772f
Showing 67 changed files with 304 additions and 259 deletions.
8 changes: 7 additions & 1 deletion .JuliaFormatter.toml
100755 → 100644
Original file line number Diff line number Diff line change
@@ -1,3 +1,9 @@
style = "sciml"
format_markdown = true
format_docstrings = true
whitespace_in_kwargs = false
margin = 92
indent = 4
format_docstrings = true
separate_kwargs_with_semicolon = true
always_for_in = true
annotate_untyped_fields_with_any = false
Empty file modified .buildkite/documentation.yml
100755 → 100644
Empty file.
Empty file modified .buildkite/pipeline.yml
100755 → 100644
Empty file.
Empty file modified .github/dependabot.yml
100755 → 100644
Empty file.
Empty file modified .github/workflows/CompatHelper.yml
100755 → 100644
Empty file.
Empty file modified .github/workflows/Downgrade.yml
100755 → 100644
Empty file.
Empty file modified .github/workflows/FormatCheck.yml
100755 → 100644
Empty file.
Empty file modified .github/workflows/Invalidations.yml
100755 → 100644
Empty file.
Empty file modified .github/workflows/TagBot.yml
100755 → 100644
Empty file.
Empty file modified .github/workflows/Tests.yml
100755 → 100644
Empty file.
2 changes: 1 addition & 1 deletion .gitignore
100755 → 100644
@@ -2,6 +2,6 @@
*.jl.cov
*.jl.mem
.DS_Store
/Manifest.toml
Manifest.toml
/dev/
docs/build
Empty file modified .typos.toml
100755 → 100644
Empty file.
Empty file modified CITATION.bib
100755 → 100644
Empty file.
Empty file modified LICENSE
100755 → 100644
Empty file.
2 changes: 1 addition & 1 deletion Project.toml
100755 → 100644
@@ -1,7 +1,7 @@
name = "ReservoirComputing"
uuid = "7c2d2b1e-3dd4-11ea-355a-8f6a8116e294"
authors = ["Francesco Martinuzzi"]
version = "0.10.4"
version = "0.10.5"

[deps]
Adapt = "79e6a3ab-5dfb-504d-930d-738a2a938a0e"
20 changes: 10 additions & 10 deletions README.md
100755 → 100644
@@ -11,11 +11,11 @@
[![Build status](https://badge.buildkite.com/db8f91b89a10ad79bbd1d9fdb1340e6f6602a1c0ed9496d4d0.svg)](https://buildkite.com/julialang/reservoircomputing-dot-jl)
[![ColPrac: Contributor's Guide on Collaborative Practices for Community Packages](https://img.shields.io/badge/ColPrac-Contributor%27s%20Guide-blueviolet)](https://github.com/SciML/ColPrac)
[![SciML Code Style](https://img.shields.io/static/v1?label=code%20style&message=SciML&color=9558b2&labelColor=389826)](https://github.com/SciML/SciMLStyle)

</div>

# ReservoirComputing.jl
ReservoirComputing.jl provides an efficient, modular, and easy-to-use implementation of reservoir computing models such as Echo State Networks (ESNs). For information on using this package, please refer to the [stable documentation](https://docs.sciml.ai/ReservoirComputing/stable/). Use the [in-development documentation](https://docs.sciml.ai/ReservoirComputing/dev/) to take a look at features that have not yet been released.

## Quick Example

To illustrate the workflow of this library, we will showcase how to train an ESN to learn the dynamics of the Lorenz system. As a first step, we need to gather the data. For `Generative` prediction, the target data must be one step ahead of the training data:
@@ -36,7 +36,7 @@ function lorenz(du, u, p, t)
end
#solve and take data
prob = ODEProblem(lorenz, u0, tspan, p)
data = Array(solve(prob, ABM54(), dt = 0.02))
data = Array(solve(prob, ABM54(); dt=0.02))

shift = 300
train_len = 5000
@@ -55,9 +55,9 @@ Now that we have the data we can initialize the ESN with the chosen parameters.
input_size = 3
res_size = 300
esn = ESN(input_data, input_size, res_size;
reservoir = rand_sparse(; radius = 1.2, sparsity = 6 / res_size),
input_layer = weighted_init,
nla_type = NLAT2())
reservoir=rand_sparse(; radius=1.2, sparsity=6 / res_size),
input_layer=weighted_init,
nla_type=NLAT2())
```

The echo state network can now be trained and tested. Unless otherwise specified, training defaults to ordinary least squares regression. The full range of training methods is detailed in the documentation.
@@ -71,8 +71,8 @@ The data is returned as a matrix, `output` in the code above, that contains the

```julia
using Plots
plot(transpose(output), layout = (3, 1), label = "predicted")
plot!(transpose(test), layout = (3, 1), label = "actual")
plot(transpose(output); layout=(3, 1), label="predicted")
plot!(transpose(test); layout=(3, 1), label="actual")
```

![lorenz_basic](https://user-images.githubusercontent.com/10376688/166227371-8bffa318-5c49-401f-9c64-9c71980cb3f7.png)
@@ -82,9 +82,9 @@ One can also visualize the phase space of the attractor and the comparison with
```julia
plot(transpose(output)[:, 1],
transpose(output)[:, 2],
transpose(output)[:, 3],
label = "predicted")
plot!(transpose(test)[:, 1], transpose(test)[:, 2], transpose(test)[:, 3], label = "actual")
transpose(output)[:, 3];
label="predicted")
plot!(transpose(test)[:, 1], transpose(test)[:, 2], transpose(test)[:, 3]; label="actual")
```

![lorenz_attractor](https://user-images.githubusercontent.com/10376688/81470281-5a34b580-91ea-11ea-9eea-d2b266da19f4.png)
Empty file modified docs/Project.toml
100755 → 100644
Empty file.
18 changes: 9 additions & 9 deletions docs/make.jl
100755 → 100644
@@ -7,13 +7,13 @@ ENV["PLOTS_TEST"] = "true"
ENV["GKSwstype"] = "100"
include("pages.jl")

makedocs(modules = [ReservoirComputing],
sitename = "ReservoirComputing.jl",
clean = true, doctest = false, linkcheck = true,
warnonly = [:missing_docs],
format = Documenter.HTML(assets = ["assets/favicon.ico"],
canonical = "https://docs.sciml.ai/ReservoirComputing/stable/"),
pages = pages)
makedocs(; modules=[ReservoirComputing],
sitename="ReservoirComputing.jl",
clean=true, doctest=false, linkcheck=true,
warnonly=[:missing_docs],
format=Documenter.HTML(; assets=["assets/favicon.ico"],
canonical="https://docs.sciml.ai/ReservoirComputing/stable/"),
pages=pages)

deploydocs(repo = "github.com/SciML/ReservoirComputing.jl.git";
push_preview = true)
deploydocs(; repo="github.com/SciML/ReservoirComputing.jl.git",
push_preview=true)
3 changes: 2 additions & 1 deletion docs/pages.jl
100755 → 100644
@@ -17,7 +17,8 @@ pages = [
"States Modifications" => "api/states.md",
"Prediction Types" => "api/predict.md",
"Echo State Networks" => "api/esn.md",
#"ESN Layers" => "api/esn_layers.md",
"ESN Layers" => "api/inits.md",
"ESN Drivers" => "api/esn_drivers.md",
"ESN variations" => "api/esn_variations.md",
"ReCA" => "api/reca.md"]
]
Empty file modified docs/src/api/esn.md
100755 → 100644
Empty file.
Empty file modified docs/src/api/esn_drivers.md
100755 → 100644
Empty file.
14 changes: 14 additions & 0 deletions docs/src/api/esn_variations.md
@@ -0,0 +1,14 @@
# Echo State Network variations

## Deep ESN

```@docs
DeepESN
```

## Hybrid ESN

```@docs
HybridESN
KnowledgeModel
```
20 changes: 20 additions & 0 deletions docs/src/api/inits.md
@@ -0,0 +1,20 @@
# Echo State Networks Initializers
## Input layers

```@docs
scaled_rand
weighted_init
informed_init
minimal_init
```

## Reservoirs

```@docs
rand_sparse
delay_line
delay_line_backward
cycle_jumps
simple_cycle
pseudo_svd
```
Empty file modified docs/src/api/predict.md
100755 → 100644
Empty file.
Empty file modified docs/src/api/reca.md
100755 → 100644
Empty file.
6 changes: 6 additions & 0 deletions docs/src/api/states.md
100755 → 100644
@@ -17,3 +17,9 @@
NLAT2
NLAT3
```

## Internals

```@docs
ReservoirComputing.create_states
```
Empty file modified docs/src/api/training.md
100755 → 100644
Empty file.
Empty file modified docs/src/assets/favicon.ico
100755 → 100644
Empty file.
Empty file modified docs/src/assets/logo.png
100755 → 100644
Binary file not shown.
14 changes: 7 additions & 7 deletions docs/src/esn_tutorials/change_layers.md
100755 → 100644
@@ -1,6 +1,6 @@
# Using different layers

A great deal of effort in the ESN field is devoted to finding an ideal construction for the reservoir matrices. ReservoirComputing.jl offers multiple implementations of reservoir and input matrix initializations found in the literature. The API is standardized and follows [WeightInitializers.jl](https://github.com/LuxDL/WeightInitializers.jl):
A great deal of effort in the ESN field is devoted to finding an ideal construction for the reservoir matrices. ReservoirComputing.jl offers multiple implementations of reservoir and input matrix initializations found in the literature. The API is standardized and follows [WeightInitializers.jl](https://github.com/LuxDL/Lux.jl/tree/main/lib/WeightInitializers):

```julia
weights = init(rng, dims...)
@@ -49,15 +49,15 @@ Now it is possible to define the input layers and reservoirs we want to compare
using ReservoirComputing, StatsBase
res_size = 300
input_layer = [minimal_init(; weight = 0.85, sampling_type = :irrational),
minimal_init(; weight = 0.95, sampling_type = :irrational)]
reservoirs = [simple_cycle(; weight = 0.7),
cycle_jumps(; cycle_weight = 0.7, jump_weight = 0.2, jump_size = 5)]
input_layer = [minimal_init(; weight=0.85, sampling_type=:irrational),
minimal_init(; weight=0.95, sampling_type=:irrational)]
reservoirs = [simple_cycle(; weight=0.7),
cycle_jumps(; cycle_weight=0.7, jump_weight=0.2, jump_size=5)]
for i in 1:length(reservoirs)
esn = ESN(training_input, 2, res_size;
input_layer = input_layer[i],
reservoir = reservoirs[i])
input_layer=input_layer[i],
reservoir=reservoirs[i])
wout = train(esn, training_target, StandardRidge(0.001))
output = esn(Predictive(testing_input), wout)
println(msd(testing_target, output))
Empty file modified docs/src/esn_tutorials/data/santafe_laser.txt
100755 → 100644
Empty file.
40 changes: 20 additions & 20 deletions docs/src/esn_tutorials/deep_esn.md
100755 → 100644
@@ -2,7 +2,7 @@

Deep Echo State Network architectures have recently begun to gain traction. In this guide, we illustrate how to use ReservoirComputing.jl to build a deep ESN.

The network implemented in this library is taken from [^1]. It works by stacking reservoirs on top of each other, feeding the output from one into the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in-depth explanation, refer to the paper linked above. The full script for this example can be found [here](https://github.com/MartinuzziFrancesco/reservoir-computing-examples/blob/main/deep-esn/deepesn.jl). This example was run on Julia v1.7.2.
The network implemented in this library is taken from [^1]. It works by stacking reservoirs on top of each other, feeding the output from one into the next. The states are obtained by merging all the inner states of the stacked reservoirs. For a more in-depth explanation, refer to the paper linked above.

## Lorenz Example

@@ -20,7 +20,7 @@ end
#solve and take data
prob = ODEProblem(lorenz!, [1.0, 0.0, 0.0], (0.0, 200.0))
data = solve(prob, ABM54(), dt = 0.02)
data = solve(prob, ABM54(); dt=0.02)
data = reduce(hcat, data.u)
#determine shift length, training length and prediction length
@@ -41,15 +41,15 @@ The construction of the ESN is also really similar. The only difference is that
```@example deep_lorenz
using ReservoirComputing
reservoirs = [rand_sparse(; radius = 1.1, sparsity = 0.1),
rand_sparse(; radius = 1.2, sparsity = 0.1),
rand_sparse(; radius = 1.4, sparsity = 0.1)]
reservoirs = [rand_sparse(; radius=1.1, sparsity=0.1),
rand_sparse(; radius=1.2, sparsity=0.1),
rand_sparse(; radius=1.4, sparsity=0.1)]
esn = DeepESN(input_data, 3, 200;
reservoir = reservoirs,
reservoir_driver = RNN(),
nla_type = NLADefault(),
states_type = StandardStates())
reservoir=reservoirs,
reservoir_driver=RNN(),
nla_type=NLADefault(),
states_type=StandardStates())
```

The input layer and bias can also be given as vectors, but they must be the same size as the reservoirs vector. If they are not passed as vectors, the given value will be used for all the layers in the deep ESN.
@@ -75,17 +75,17 @@ lorenz_maxlyap = 0.9056
predict_ts = ts[(shift + train_len + 1):(shift + train_len + predict_len)]
lyap_time = (predict_ts .- predict_ts[1]) * (1 / lorenz_maxlyap)
p1 = plot(lyap_time, [test_data[1, :] output[1, :]], label = ["actual" "predicted"],
ylabel = "x(t)", linewidth = 2.5, xticks = false, yticks = -15:15:15);
p2 = plot(lyap_time, [test_data[2, :] output[2, :]], label = ["actual" "predicted"],
ylabel = "y(t)", linewidth = 2.5, xticks = false, yticks = -20:20:20);
p3 = plot(lyap_time, [test_data[3, :] output[3, :]], label = ["actual" "predicted"],
ylabel = "z(t)", linewidth = 2.5, xlabel = "max(λ)*t", yticks = 10:15:40);
plot(p1, p2, p3, plot_title = "Lorenz System Coordinates",
layout = (3, 1), xtickfontsize = 12, ytickfontsize = 12, xguidefontsize = 15,
yguidefontsize = 15,
legendfontsize = 12, titlefontsize = 20)
p1 = plot(lyap_time, [test_data[1, :] output[1, :]]; label=["actual" "predicted"],
ylabel="x(t)", linewidth=2.5, xticks=false, yticks=-15:15:15);
p2 = plot(lyap_time, [test_data[2, :] output[2, :]]; label=["actual" "predicted"],
ylabel="y(t)", linewidth=2.5, xticks=false, yticks=-20:20:20);
p3 = plot(lyap_time, [test_data[3, :] output[3, :]]; label=["actual" "predicted"],
ylabel="z(t)", linewidth=2.5, xlabel="max(λ)*t", yticks=10:15:40);
plot(p1, p2, p3; plot_title="Lorenz System Coordinates",
layout=(3, 1), xtickfontsize=12, ytickfontsize=12, xguidefontsize=15,
yguidefontsize=15,
legendfontsize=12, titlefontsize=20)
```

## Documentation