
Commit

Merge branch 'brainpy:master' into master
AkitsuFaye authored Sep 14, 2023
2 parents d062dca + 5430c11 commit 8b67a14
Showing 81 changed files with 3,566 additions and 1,251 deletions.
59 changes: 59 additions & 0 deletions .github/workflows/docker.yml
@@ -0,0 +1,59 @@
name: Docker

on:
  release:
    types: [published]
  pull_request:
    paths:
      - docker/**
      - .github/workflows/docker.yml


jobs:
  docker-build-push:
    if: |
      github.repository_owner == 'brainpy' ||
      github.event_name != 'release'
    runs-on: ubuntu-22.04
    strategy:
      matrix:
        include:
          - context: "docker/"
            base: "brainpy/brainpy"
    env:
      TARGET_PLATFORMS: linux/amd64
      REGISTRY: ghcr.io
      IMAGE_NAME: ${{ github.repository }}
      DOCKER_TAG_NAME: |
        ${{
          (github.event_name == 'release' && github.event.release.tag_name) ||
          'pull-request-test'
        }}
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Login to DockerHub
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Docker Build & Push (version tag)
        uses: docker/build-push-action@v4
        with:
          context: ${{ matrix.context }}
          tags: ${{ matrix.base }}:${{ env.DOCKER_TAG_NAME }}
          push: ${{ github.event_name != 'pull_request' }}
          platforms: ${{ env.TARGET_PLATFORMS }}

      - name: Docker Build & Push (latest tag)
        if: |
          (github.event_name == 'release' && ! github.event.release.prerelease)
        uses: docker/build-push-action@v4
        with:
          context: ${{ matrix.context }}
          tags: ${{ matrix.base }}:latest
          push: ${{ github.event_name != 'pull_request' }}
          platforms: ${{ env.TARGET_PLATFORMS }}
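
The `DOCKER_TAG_NAME` value above relies on the `a && b || c` idiom of GitHub Actions expressions: on a release it resolves to the release tag, otherwise it falls back to a fixed test tag. The snippet below is a small illustrative sketch of that fallback logic in Python (the function name and arguments are invented; GitHub evaluates the real expression itself):

```python
# Illustrative sketch of the DOCKER_TAG_NAME selection above.
# The function and its arguments are hypothetical; this only mirrors the
# `a && b || c` fallback idiom used in the workflow expression.
def docker_tag_name(event_name, release_tag=None):
    # `and`/`or` short-circuit on truthiness: a release with a non-empty tag
    # yields that tag, anything else falls back to the fixed test tag.
    return (event_name == "release" and release_tag) or "pull-request-test"

print(docker_tag_name("release", "v2.4.4.post4"))  # -> v2.4.4.post4
print(docker_tag_name("pull_request"))             # -> pull-request-test
```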
1 change: 1 addition & 0 deletions .gitignore
@@ -225,3 +225,4 @@ cython_debug/
/docs/tutorial_advanced/data/
/my_tests/
/examples/dynamics_simulation/Joglekar_2018_data/
/docs/apis/deprecated/generated/
13 changes: 13 additions & 0 deletions ACKNOWLEDGMENTS.md
@@ -0,0 +1,13 @@
# Acknowledgments

The development of BrainPy is being or has been supported by many organizations, programs, and individuals since 2020.
The following list of support received is therefore necessarily incomplete.


This project has received funding from Science and Technology Innovation 2030 (China Brain Project):

- Brain Science and Brain-inspired Intelligence Project (No. 2021ZD0200204).

Additionally, BrainPy gratefully acknowledges the support and funding received from:

- Beijing Academy of Artificial Intelligence.
27 changes: 21 additions & 6 deletions README.md
@@ -34,22 +34,37 @@ $ pip install brainpy brainpylib -U
For detailed installation instructions, please refer to the documentation: [Quickstart/Installation](https://brainpy.readthedocs.io/en/latest/quickstart/installation.html)


### Using BrainPy with Docker

We provide a Docker image for BrainPy. You can pull the image with the following command:
```bash
$ docker pull brainpy/brainpy:latest
```

Then, you can run the image with the following command:
```bash
$ docker run -it --platform linux/amd64 brainpy/brainpy:latest
```
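
Once inside the container, a quick smoke test confirms that BrainPy is importable and usable. The snippet below is a minimal sketch and assumes the image exposes `brainpy` and `brainpy.math` under their usual import names:

```python
# Minimal smoke test inside the container (a sketch; assumes the image
# ships brainpy with its math module importable as usual).
import brainpy as bp
import brainpy.math as bm

print(bp.__version__)      # report the installed BrainPy version
x = bm.random.rand(3, 3)   # build a small random array
print(bm.mean(x))          # run a trivial computation on it
```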

### Using BrainPy with Binder

We provide a Binder environment for BrainPy. You can use the following button to launch the environment:

[![Binder](https://mybinder.org/badge_logo.svg)](https://mybinder.org/v2/gh/brainpy/BrainPy-binder/main)

## Ecosystem

- **[BrainPy](https://github.com/brainpy/BrainPy)**: The solution for general-purpose brain dynamics programming.
- **[brainpy-examples](https://github.com/brainpy/examples)**: Comprehensive examples of BrainPy computation.
- **[brainpy-datasets](https://github.com/brainpy/datasets)**: Neuromorphic and Cognitive Datasets for Brain Dynamics Modeling.

## Citing and Funding

If you are using ``brainpy``, please consider citing [the corresponding papers](https://brainpy.readthedocs.io/en/latest/tutorial_FAQs/citing_and_publication.html).
## Citing

BrainPy is developed by a team in the Neural Information Processing Lab at Peking University, China.
Our team is committed to the long-term maintenance and development of the project.

Moreover, the development of BrainPy is being or has been supported by Science and Technology
Innovation 2030 - Brain Science and Brain-inspired Intelligence Project (China Brain Project),
and Beijing Academy of Artificial Intelligence.
If you are using ``brainpy``, please consider citing [the corresponding papers](https://brainpy.readthedocs.io/en/latest/tutorial_FAQs/citing_and_publication.html).



## Ongoing development plans
2 changes: 1 addition & 1 deletion brainpy/__init__.py
@@ -1,6 +1,6 @@
# -*- coding: utf-8 -*-

__version__ = "2.4.4.post3"
__version__ = "2.4.4.post4"

# fundamental supporting modules
from brainpy import errors, check, tools
4 changes: 2 additions & 2 deletions brainpy/_src/delay.py
@@ -327,7 +327,7 @@ def retrieve(self, delay_step, *indices):

if self.method == ROTATE_UPDATE:
i = share.load('i')
delay_idx = bm.as_jax((delay_step - i - 1) % self.max_length)
delay_idx = bm.as_jax((delay_step - i - 1) % self.max_length, dtype=jnp.int32)
delay_idx = jax.lax.stop_gradient(delay_idx)

elif self.method == CONCAT_UPDATE:
@@ -358,7 +358,7 @@ def update(
# update the delay data at the rotation index
if self.method == ROTATE_UPDATE:
i = share.load('i')
idx = bm.as_jax((-i - 1) % self.max_length)
idx = bm.as_jax((-i - 1) % self.max_length, dtype=jnp.int32)
self.data[idx] = latest_value

# update the delay data at the first position
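
The changed lines above cast the rotation index to `int32` before it is used to index the delay buffer. Below is a standalone sketch of the same ring-buffer arithmetic in plain JAX (illustrative only, not BrainPy's internal API): the write index `(-i - 1) % max_length` and the read index `(delay_step - i - 1) % max_length` always land on the slot written `delay_step` steps earlier, and the explicit `int32` cast mirrors the change above.

```python
# Standalone sketch of rotation-based delay indexing (an assumed
# simplification of the logic above; variable names are illustrative).
import jax.numpy as jnp

max_length = 5                       # the buffer keeps the last 5 steps
data = jnp.zeros((max_length,))      # the delay buffer
for i in range(8):                   # i plays the role of share.load('i')
    write_idx = jnp.asarray((-i - 1) % max_length, dtype=jnp.int32)
    data = data.at[write_idx].set(float(i))   # store the latest value
    delay_step = 2                             # read the value from 2 steps ago
    read_idx = jnp.asarray((delay_step - i - 1) % max_length, dtype=jnp.int32)
    if i >= delay_step:
        assert data[read_idx] == float(i - delay_step)
```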
125 changes: 105 additions & 20 deletions brainpy/_src/dnn/linear.py
@@ -10,12 +10,12 @@
from brainpy import math as bm
from brainpy._src import connect, initialize as init
from brainpy._src.context import share
from brainpy.algorithms import OnlineAlgorithm, OfflineAlgorithm
from brainpy.check import is_initializer
from brainpy.errors import MathError
from brainpy.initialize import XavierNormal, ZeroInit, Initializer, parameter
from brainpy.types import ArrayType, Sharding
from brainpy._src.dnn.base import Layer
from brainpy._src.mixin import SupportOnline, SupportOffline, SupportSTDP

__all__ = [
'Dense', 'Linear',
@@ -29,35 +29,29 @@
]


class Dense(Layer):
class Dense(Layer, SupportOnline, SupportOffline, SupportSTDP):
r"""A linear transformation applied over the last dimension of the input.
Mathematically, this node can be defined as:
.. math::
y = x \cdot W + b
y = x \cdot weight + b
Parameters
----------
num_in: int
The number of the input feature. A positive integer.
num_out: int
The number of the output features. A positive integer.
W_initializer: optional, Initializer
weight_initializer: optional, Initializer
The weight initialization.
b_initializer: optional, Initializer
The bias initialization.
mode: Mode
Enable training this node or not. (default True)
"""

online_fit_by: Optional[OnlineAlgorithm]
'''Online fitting method.'''

offline_fit_by: Optional[OfflineAlgorithm]
'''Offline fitting method.'''

def __init__(
self,
num_in: int,
@@ -80,13 +74,13 @@ def __init__(
f'a positive integer. Received: num_out={num_out}')

# weight initializer
self.weight_initializer = W_initializer
self.W_initializer = W_initializer
self.bias_initializer = b_initializer
is_initializer(W_initializer, 'weight_initializer')
is_initializer(b_initializer, 'bias_initializer', allow_none=True)

# parameter initialization
W = parameter(self.weight_initializer, (num_in, self.num_out))
W = parameter(self.W_initializer, (num_in, self.num_out))
b = parameter(self.bias_initializer, (self.num_out,))
if isinstance(self.mode, bm.TrainingMode):
W = bm.TrainVar(W)
@@ -95,8 +89,8 @@ def __init__(
self.b = b

# fitting parameters
self.online_fit_by = None
self.offline_fit_by = None
self.online_fit_by = None # support online training
self.offline_fit_by = None # support offline training
self.fit_record = dict()

def __repr__(self):
@@ -204,6 +198,20 @@ def offline_fit(self,
self.W.value = Wff
self.b.value = bias[0]

def update_STDP(self, dW, constraints=None):
if isinstance(self.W, float):
raise ValueError(f'Cannot update the weight of a constant node.')
if not isinstance(dW, (bm.ndarray, jnp.ndarray, np.ndarray)):
raise ValueError(f'"delta_weight" must be a array, but got {type(dW)}')
if self.W.shape != dW.shape:
raise ValueError(f'The shape of delta_weight {dW.shape} '
f'should be the same as the shape of weight {self.W.shape}.')
if not isinstance(self.W, bm.Variable):
self.tracing_variable('W', self.W, self.W.shape)
self.W += dW
if constraints is not None:
self.W.value = constraints(self.W)


Linear = Dense

@@ -219,7 +227,7 @@ def update(self, x):
return x


class AllToAll(Layer):
class AllToAll(Layer, SupportSTDP):
"""Synaptic matrix multiplication with All2All connections.
Args:
@@ -281,8 +289,23 @@ def update(self, pre_val):
post_val = pre_val @ self.weight
return post_val

def update_STDP(self, dW, constraints=None):
if isinstance(self.weight, float):
raise ValueError(f'Cannot update the weight of a constant node.')
if not isinstance(dW, (bm.ndarray, jnp.ndarray, np.ndarray)):
raise ValueError(f'"delta_weight" must be a array, but got {type(dW)}')
if self.weight.shape != dW.shape:
raise ValueError(f'The shape of delta_weight {dW.shape} '
f'should be the same as the shape of weight {self.weight.shape}.')
if not isinstance(self.weight, bm.Variable):
self.tracing_variable('weight', self.weight, self.weight.shape)
self.weight += dW
if constraints is not None:
self.weight.value = constraints(self.weight)



class OneToOne(Layer):
class OneToOne(Layer, SupportSTDP):
"""Synaptic matrix multiplication with One2One connection.
Args:
@@ -315,8 +338,23 @@ def __init__(
def update(self, pre_val):
return pre_val * self.weight


class MaskedLinear(Layer):
def update_STDP(self, dW, constraints=None):
if isinstance(self.weight, float):
raise ValueError(f'Cannot update the weight of a constant node.')
if not isinstance(dW, (bm.ndarray, jnp.ndarray, np.ndarray)):
raise ValueError(f'"delta_weight" must be a array, but got {type(dW)}')
dW = dW.sum(axis=0)
if self.weight.shape != dW.shape:
raise ValueError(f'The shape of delta_weight {dW.shape} '
f'should be the same as the shape of weight {self.weight.shape}.')
if not isinstance(self.weight, bm.Variable):
self.tracing_variable('weight', self.weight, self.weight.shape)
self.weight += dW
if constraints is not None:
self.weight.value = constraints(self.weight)


class MaskedLinear(Layer, SupportSTDP):
r"""Synaptic matrix multiplication with masked dense computation.
It performs the computation of:
@@ -369,8 +407,23 @@ def __init__(
def update(self, x):
return x @ self.mask_fun(self.weight * self.mask)

def update_STDP(self, dW, constraints=None):
if isinstance(self.weight, float):
raise ValueError(f'Cannot update the weight of a constant node.')
if not isinstance(dW, (bm.ndarray, jnp.ndarray, np.ndarray)):
raise ValueError(f'"delta_weight" must be a array, but got {type(dW)}')
if self.weight.shape != dW.shape:
raise ValueError(f'The shape of delta_weight {dW.shape} '
f'should be the same as the shape of weight {self.weight.shape}.')
if not isinstance(self.weight, bm.Variable):
self.tracing_variable('weight', self.weight, self.weight.shape)

self.weight += dW
if constraints is not None:
self.weight.value = constraints(self.weight)


class CSRLinear(Layer):
class CSRLinear(Layer, SupportSTDP):
r"""Synaptic matrix multiplication with CSR sparse computation.
It performs the computation of:
@@ -438,6 +491,22 @@ def _batch_csrmv(self, x):
transpose=self.transpose,
method=self.method)

def update_STDP(self, dW, constraints=None):
if isinstance(self.weight, float):
raise ValueError(f'Cannot update the weight of a constant node.')
if not isinstance(dW, (bm.ndarray, jnp.ndarray, np.ndarray)):
raise ValueError(f'"delta_weight" must be a array, but got {type(dW)}')
pre_ids, post_ids = bm.sparse.csr_to_coo(self.indices, self.indptr)
sparse_dW = dW[pre_ids, post_ids]
if self.weight.shape != sparse_dW.shape:
raise ValueError(f'The shape of sparse delta_weight {sparse_dW.shape} '
f'should be the same as the shape of sparse weight {self.weight.shape}.')
if not isinstance(self.weight, bm.Variable):
self.tracing_variable('weight', self.weight, self.weight.shape)
self.weight += sparse_dW
if constraints is not None:
self.weight.value = constraints(self.weight)


class CSCLinear(Layer):
r"""Synaptic matrix multiplication with CSC sparse computation.
@@ -474,7 +543,7 @@ def __init__(
self.sharding = sharding


class EventCSRLinear(Layer):
class EventCSRLinear(Layer, SupportSTDP):
r"""Synaptic matrix multiplication with event CSR sparse computation.
It performs the computation of:
@@ -538,6 +607,22 @@ def _batch_csrmv(self, x):
shape=(self.conn.pre_num, self.conn.post_num),
transpose=self.transpose)

def update_STDP(self, dW, constraints=None):
if isinstance(self.weight, float):
raise ValueError(f'Cannot update the weight of a constant node.')
if not isinstance(dW, (bm.ndarray, jnp.ndarray, np.ndarray)):
raise ValueError(f'"delta_weight" must be a array, but got {type(dW)}')
pre_ids, post_ids = bm.sparse.csr_to_coo(self.indices, self.indptr)
sparse_dW = dW[pre_ids, post_ids]
if self.weight.shape != sparse_dW.shape:
raise ValueError(f'The shape of sparse delta_weight {sparse_dW.shape} '
f'should be the same as the shape of sparse weight {self.weight.shape}.')
if not isinstance(self.weight, bm.Variable):
self.tracing_variable('weight', self.weight, self.weight.shape)
self.weight += sparse_dW
if constraints is not None:
self.weight.value = constraints(self.weight)


class BcsrMM(Layer):
r"""Synaptic matrix multiplication with BCSR sparse computation.
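
All of the new `update_STDP` implementations above follow one pattern: validate the delta-weight array, promote the stored weight to a `Variable` if it is not one yet, add the increment, and optionally apply a constraint function. For the sparse layers (`CSRLinear`, `EventCSRLinear`) the dense delta matrix is first gathered into the CSR weight layout via `bm.sparse.csr_to_coo` before being applied. A minimal usage sketch for the dense case is shown below; the constraint function and the delta-weight values are illustrative inventions, only the `update_STDP(dW, constraints=...)` call itself comes from this commit:

```python
# Hedged usage sketch for the new update_STDP hook on a Dense layer.
# `clip_nonnegative` and `dW` are made up for illustration; only the
# update_STDP(dW, constraints=...) signature comes from the diff above.
import brainpy as bp
import brainpy.math as bm

# Training mode makes W a TrainVar (a Variable), so update_STDP can write to it.
layer = bp.dnn.Dense(num_in=4, num_out=3, mode=bm.TrainingMode())

dW = bm.random.normal(size=(4, 3)) * 0.01   # a small STDP-style weight change

def clip_nonnegative(w):
    # Example constraint: keep synaptic weights non-negative after the update.
    return bm.maximum(w, 0.)

layer.update_STDP(dW, constraints=clip_nonnegative)
print(layer.W.shape)   # weights keep their (num_in, num_out) = (4, 3) shape
```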
