
About

This page contains some general information about this project and recommendations about contributing.

Contributing

If you like this package, consider contributing!

We appreciate it if you create an issue in the ControllerFormats GitHub issue tracker to report a bug, open a discussion about existing functionality, or suggest new functionality.

If you have written code and would like it to be peer reviewed and added to the library, you can fork the repository and send a pull request (see below).

You are also welcome to get in touch with us in the JuliaReach Zulip channel.

Below we give some general comments about contributing to this package. The JuliaReach development documentation describes coding guidelines; take a look when in doubt about the coding style that is expected for the code that is finally merged into the library.

Branches and pull requests (PR)

We use a standard pull-request policy: you work in a private branch and eventually open a pull request, which is then reviewed by other programmers and merged into the master branch.

Each pull request should be based on a branch whose name starts with the author's name followed by a descriptive name, e.g., mforets/my_feature. If the branch is associated with a previous discussion in an issue, we use the number of the issue for easier lookup, e.g., mforets/7.

Unit testing and continuous integration (CI)

This project uses GitHub Actions for continuous integration: each PR is tested before merging, and the build is automatically triggered after each new commit. For the maintainability of this project, it is important that all unit tests pass.

To run the unit tests locally, you can do:

julia> using Pkg
 
julia> Pkg.test("ControllerFormats")

We also advise adding new unit tests when adding new features to ensure long-term support of your contributions.

Contributing to the documentation

New functions and types should be documented according to the JuliaReach development documentation.

You can view the source-code documentation from inside the REPL by typing ? followed by the name of the type or function.

The documentation you are currently reading is written in Markdown, and it relies on the package Documenter.jl to produce the final layout. The sources for creating this documentation are found in docs/src. You can easily include the documentation that you wrote for your functions or types there (see the source code or Documenter's guide for examples).
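
For illustration, here is a minimal sketch of how a docstring can be included in a page under docs/src using Documenter's @docs block; MyNewType and my_new_function are hypothetical placeholders:

```@docs
MyNewType
my_new_function
```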

To generate the documentation locally, run docs/make.jl, e.g., by executing the following command in the terminal:

$ julia --color=yes docs/make.jl

Credits

Here we list the names of the maintainers of the ControllerFormats.jl library, as well as past and present contributors (in alphabetical order).

Core developers

Contributors


ControllerFormats.jl

This lightweight Julia library contains basic representations of controllers (currently deep neural networks) as well as functionality to parse them from various file formats such as MAT, YAML, and ONNX.

The library originated from the package ClosedLoopReachability, which performs formal analysis of a given trained neural network. This explains why ControllerFormats.jl does not support other typical tasks such as network training, and why some of the supported file formats are only used by similar analysis tools.
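
As a quick usage sketch (assuming the reader functions documented in the FileFormats module below are exported, and using a hypothetical file name):

using ControllerFormats

# parse a controller stored in NNet format; the result is a
# FeedforwardNetwork (see the Architecture module)
N = read_NNet("controller.nnet")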

Related packages:

  • Flux.jl is a comprehensive Julia framework for machine learning. It also offers a representation of neural networks.
  • MLJ.jl is a large Julia library of machine-learning models such as neural networks and decision trees.
  • NNet offers a representation of neural networks and a parser for the NNet format.

Architecture module

Neural networks

An artificial neural network can be used as a controller.

General interface

ControllerFormats.Architecture.AbstractNeuralNetwork – Type
AbstractNeuralNetwork

Abstract type for neural networks.

Notes

Subtypes should implement the following method:

  • layers(::AbstractNeuralNetwork) - return a list of the layers

The following standard methods are implemented:

  • length(::AbstractNeuralNetwork)
  • getindex(::AbstractNeuralNetwork, indices)
  • lastindex(::AbstractNeuralNetwork)
  • ==(::AbstractNeuralNetwork, ::AbstractNeuralNetwork)
source
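
As a minimal sketch of this interface (hypothetical type name; assuming layers is the function of the Architecture module mentioned above), a custom network type only needs to subtype AbstractNeuralNetwork and implement layers; the standard methods listed above then apply:

using ControllerFormats
const Arch = ControllerFormats.Architecture

# hypothetical wrapper that stores its layer operations in a plain vector
struct MyNetwork{L} <: Arch.AbstractNeuralNetwork
    ops::Vector{L}
end

# the one required method: return the list of layer operations
Arch.layers(N::MyNetwork) = N.ops

# length, getindex, lastindex, and == are now available via the generic implementations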

The following non-standard methods are implemented:

ControllerFormats.Architecture.dim – Method
dim(N::AbstractNeuralNetwork)

Return the input and output dimension of a neural network.

Input

  • N – neural network

Output

The pair $(i, o)$ where $i$ is the input dimension and $o$ is the output dimension of N.

Notes

This function is not exported due to name conflicts with other related packages.

source

Implementation

ControllerFormats.Architecture.FeedforwardNetwork – Type
FeedforwardNetwork{L} <: AbstractNeuralNetwork

Standard implementation of a feedforward neural network which stores the layer operations.

Fields

  • layers – vector of layer operations

Notes

The field layers contains the layer operations, so the number of layers is length(layers) + 1.

Conversion from a Flux.Chain is supported.

source
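
For instance, a small network with one hidden ReLU layer can be constructed from dense layer operations as sketched below (the constructor argument order weights, bias, activation and the activation constructors ReLU() and Id() are assumptions based on the field descriptions in this section):

using ControllerFormats

# 2 inputs -> 2 hidden neurons (ReLU) -> 1 output (identity)
L1 = DenseLayerOp([1.0 2.0; 3.0 4.0], [0.1, 0.2], ReLU())
L2 = DenseLayerOp([1.0 -1.0], [0.0], Id())

N = FeedforwardNetwork([L1, L2])

ControllerFormats.Architecture.dim(N)  # expected result: (2, 1)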

Layer operations

ControllerFormats.Architecture.AbstractLayerOp – Type
AbstractLayerOp

Abstract type for layer operations.

Notes

An AbstractLayerOp represents a layer operation. A classical example is a "dense layer operation" with an affine map followed by an activation function.

source

The following non-standard methods are useful to implement:

ControllerFormats.Architecture.dim – Method
dim(L::AbstractLayerOp)

Return the input and output dimension of a layer operation.

Input

  • L – layer operation

Output

The pair $(i, o)$ where $i$ is the input dimension and $o$ is the output dimension of L.

Notes

This function is not exported due to name conflicts with other related packages.

source

More specific layer interfaces

ControllerFormats.Architecture.AbstractPoolingLayerOp – Type
AbstractPoolingLayerOp <: AbstractLayerOp

Abstract type for pooling layer operations.

Notes

Pooling is an operation on a three-dimensional tensor that iterates over the first two dimensions in a window and aggregates the values, thus reducing the output dimension.

Implementation

The following (unexported) functions should be implemented:

  • window(::AbstractPoolingLayerOp) – return the pair $(p, q)$ representing the window size
  • aggregation(::AbstractPoolingLayerOp) – return the aggregation function (applied to a tensor)
source
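
A sketch of this interface with a hypothetical type (the pooling implementations shipped with the library may differ): a max-pooling operation over a p×q window could implement the two functions as follows.

using ControllerFormats

# hypothetical max-pooling operation over a p×q window
struct MyMaxPooling <: ControllerFormats.Architecture.AbstractPoolingLayerOp
    p::Int
    q::Int
end

ControllerFormats.Architecture.window(P::MyMaxPooling) = (P.p, P.q)
ControllerFormats.Architecture.aggregation(::MyMaxPooling) = maximum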

Implementation

ControllerFormats.Architecture.DenseLayerOp – Type
DenseLayerOp{F, M, B} <: AbstractLayerOp

A dense layer operation is an affine map followed by an activation function.

Fields

  • weights – weight matrix
  • bias – bias vector
  • activation – activation function

Notes

Conversion from a Flux.Dense is supported.

source
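
As a small sketch (again assuming the constructor argument order weights, bias, activation), a dense layer with ReLU activation represents the map x ↦ activation.(weights * x + bias):

W = [1.0 0.0; -1.0 1.0]
b = [0.0, 1.0]
L = DenseLayerOp(W, b, ReLU())

x = [2.0, 3.0]
max.(W * x .+ b, 0.0)  # what the layer computes on x, here [2.0, 2.0]
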
ControllerFormats.Architecture.ConvolutionalLayerOp – Type
ConvolutionalLayerOp{F, M, B} <: AbstractLayerOp

A convolutional layer operation is a series of filters, each of which computes a small affine map followed by an activation function.

Fields

  • weights – vector with one weight matrix for each filter
  • bias – vector with one bias value for each filter
  • activation – activation function

Notes

Conversion from a Flux.Conv is supported.

source
ControllerFormats.Architecture.FlattenLayerOp – Type
FlattenLayerOp <: AbstractLayerOp

A flattening layer operation converts a multidimensional tensor into a vector.

Notes

The implementation uses row-major ordering for consistency with the machine-learning literature.

julia> T = reshape([1, 3, 2, 4, 5, 7, 6, 8], (2, 2, 2))
2×2×2 Array{Int64, 3}:
[:, :, 1] =
 1  2
 3  4

[:, :, 2] =
 5  6
 7  8

julia> FlattenLayerOp()(T)
8-element Vector{Int64}:
 1
 2
 3
 4
 5
 6
 7
 8
source

Activation functions

The following strings can be parsed as activation functions:

ControllerFormats.FileFormats.available_activations
Dict{String, ActivationFunction} with 12 entries:
   "ReLU"    => ReLU
   "logsig"  => Sigmoid
   "relu"    => ReLU
   ⋮
   "Tanh"    => Tanh
   "linear"  => Id
   "tanh"    => Tanh
   "Linear"  => Id

ControllerFormats module

FileFormats module

Reading neural networks

ControllerFormats.FileFormats.read_MAT – Function
read_MAT(filename::String; act_key::String)

Read a neural network stored in MATLAB's MAT format. This function requires the MAT.jl library to be loaded.

Input

  • filename – name of the MAT file
  • act_key – key used for the activation functions
  • net_key – (optional; default: nothing) key used for the neural network

Output

A FeedforwardNetwork.

Notes

The MATLAB file encodes a dictionary. If net_key is given, the dictionary contains another dictionary under this key, which holds the entries listed below. Otherwise the outer dictionary directly contains the following:

  • A vector of weight matrices (under the name "W")
  • A vector of bias vectors (under the name "b")
  • A vector of strings for the activation functions (under the name passed via act_key)
source
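
A hypothetical usage sketch (the file name and the value of act_key depend on how the MAT file was produced):

using MAT, ControllerFormats

# the MAT file stores the activation strings under the key "act_fcns" (assumption)
N = read_MAT("controller.mat"; act_key="act_fcns")
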
ControllerFormats.FileFormats.read_NNet – Function
read_NNet(filename::String)

Read a neural network stored in NNet format.

Input

  • filename – name of the NNet file

Output

A FeedforwardNetwork.

Notes

The format assumes that all layers but the output layer use ReLU activation (the output layer uses the identity activation).

The format looks like this (each line may optionally be terminated by a comma; a small example file is sketched after the list):

  1. Header text, each line beginning with "//"
  2. Comma-separated line with four values: number of layer operations, number of inputs, number of outputs, maximum layer size
  3. Comma-separated line with the layer sizes
  4. Flag that is no longer used
  5. Minimum values of inputs
  6. Maximum values of inputs
  7. Mean values of inputs and one value for all outputs
  8. Range values of inputs and one value for all outputs
  9. Blocks of lines describing the weight matrix and bias vector for a layer; each matrix row is written as a comma-separated line, and each vector entry is written in its own line
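
For illustration, a hypothetical NNet file following the structure above, encoding a network with 2 inputs, one hidden layer of size 2, and 1 output (2 layer operations, maximum layer size 2):

// hypothetical example network
2, 2, 1, 2,
2, 2, 1,
0,
-1.0, -1.0,
1.0, 1.0,
0.0, 0.0, 0.0,
1.0, 1.0, 1.0,
1.0, 0.0,
0.0, 1.0,
0.0,
0.0,
1.0, -1.0,
0.0,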

The code follows this implementation.

source
ControllerFormats.FileFormats.read_ONNX – Function
read_ONNX(filename::String; [input_dimension=nothing])

Read a neural network stored in ONNX format. This function requires the ONNX.jl library to be loaded.

Input

  • filename – name of the ONNX file
  • input_dimension – (optional; default: nothing) input dimension (required by ONNX.jl parser); see the notes below

Output

A FeedforwardNetwork.

Notes

This implementation assumes the following structure:

  1. First comes the input vector (which is ignored).
  2. Next come the weight matrices W (transposed) and bias vectors b in pairs in the order in which they are applied.
  3. Next come the affine maps and the activation functions in the order in which they are applied. The last layer does not have an activation function.

Some of these assumptions are currently not validated. Hence this function may silently return an incorrect result.

If the argument input_dimension is not provided, the file is parsed an additional time to determine the correct value (which is inefficient).

source
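
A hypothetical usage sketch (file name and input dimension are placeholders; passing input_dimension avoids the additional parsing pass mentioned above):

using ONNX, ControllerFormats

N = read_ONNX("controller.onnx"; input_dimension=4)
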
ControllerFormats.FileFormats.read_POLAR – Function
read_POLAR(filename::String)

Read a neural network stored in POLAR format.

Input

  • filename – name of the POLAR file

Output

A FeedforwardNetwork.

Notes

The POLAR format uses the same parameter format as Sherlock (see read_Sherlock) but allows for general activation functions.

In addition, the last two lines are:

0.0
1.0

The reference parser and writer can be found here.

source

Writing neural networks

ControllerFormats.FileFormats.write_NNet – Function
write_NNet(N::FeedforwardNetwork, filename::String)

Write a neural network to a file in NNet format.

Input

  • N – feedforward neural network
  • filename – name of the output file

Output

nothing. The network is written to the output file.

Notes

The NNet format assumes that all layers but the output layer use ReLU activation (the output layer uses the identity activation).

Some unimportant parts of the output (such as the input domain) are not written correctly and are instead set to 0.

See read_NNet for the documentation of the format.

source
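
A round-trip sketch for a previously constructed FeedforwardNetwork N (hypothetical file name; equality should hold when all hidden layers use ReLU and the output layer uses the identity activation):

write_NNet(N, "controller.nnet")
N2 = read_NNet("controller.nnet")
N2 == N  # expected to hold under the activation assumptions above
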
ControllerFormats.FileFormats.write_POLAR – Function
write_POLAR(N::FeedforwardNetwork, filename::String)

Write a neural network to a file in POLAR format.

Input

  • N – feedforward neural network
  • filename – name of the output file

Output

nothing. The network is written to the output file.

source
ControllerFormats.FileFormats.write_Sherlock – Function
write_Sherlock(N::FeedforwardNetwork, filename::String)

Write a neural network to a file in Sherlock format.

Input

  • N – feedforward neural network
  • filename – name of the output file

Output

nothing. The network is written to the output file.

Notes

The Sherlock format requires that all activation functions are ReLU.

source