diff --git a/README.md b/README.md index 9578bbd42..6d2ee4bf4 100644 --- a/README.md +++ b/README.md @@ -104,4 +104,6 @@ We also welcome your contributions - [ ] pipeline parallelization on multiple devices for sparse spiking network models - [ ] multi-compartment modeling - [ ] measurements, analysis, and visualization methods for large-scale spiking data +- [ ] Online learning methods for large-scale spiking network models +- [ ] Classical plasticity rules for large-scale spiking network models diff --git a/brainpy/_src/dyn/_docs.py b/brainpy/_src/dyn/_docs.py index c2c75ffc9..d528d4266 100644 --- a/brainpy/_src/dyn/_docs.py +++ b/brainpy/_src/dyn/_docs.py @@ -40,3 +40,166 @@ ltc_doc = 'with liquid time-constant' + +dual_exp_syn_doc = r''' + + **Model Descriptions** + + The dual exponential synapse model [1]_, also named as *difference of two exponentials* model, + is given by: + + .. math:: + + g_{\mathrm{syn}}(t)=g_{\mathrm{max}} A \left(\exp \left(-\frac{t-t_{0}}{\tau_{1}}\right) + -\exp \left(-\frac{t-t_{0}}{\tau_{2}}\right)\right) + + where :math:`\tau_1` is the time constant of the decay phase, :math:`\tau_2` + is the time constant of the rise phase, :math:`t_0` is the time of the pre-synaptic + spike, :math:`g_{\mathrm{max}}` is the maximal conductance. + + However, in practice, this formula is hard to implement. The equivalent solution is + two coupled linear differential equations [2]_: + + .. math:: + + \begin{aligned} + &\frac{d g}{d t}=-\frac{g}{\tau_{\mathrm{decay}}}+h \\ + &\frac{d h}{d t}=-\frac{h}{\tau_{\text {rise }}}+ (\frac{1}{\tau_{\text{rise}}} - \frac{1}{\tau_{\text{decay}}}) A \delta\left(t_{0}-t\right), + \end{aligned} + + By default, :math:`A` has the following value: + + .. math:: + + A = \frac{{\tau }_{decay}}{{\tau }_{decay}-{\tau }_{rise}}{\left(\frac{{\tau }_{rise}}{{\tau }_{decay}}\right)}^{\frac{{\tau }_{rise}}{{\tau }_{rise}-{\tau }_{decay}}} + + .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. + "The Synapse." Principles of Computational Modelling in Neuroscience. + Cambridge: Cambridge UP, 2011. 172-95. Print. + .. [2] Roth, A., & Van Rossum, M. C. W. (2009). Modeling Synapses. Computational + Modeling Methods for Neuroscientists. + +''' + +dual_exp_args = ''' + tau_decay: float, ArrayArray, Callable. The time constant of the synaptic decay phase. [ms] + tau_rise: float, ArrayArray, Callable. The time constant of the synaptic rise phase. [ms] + A: float. The normalization factor. Default None. + +''' + + +alpha_syn_doc = r''' + + **Model Descriptions** + + The analytical expression of alpha synapse is given by: + + .. math:: + + g_{syn}(t)= g_{max} \frac{t-t_{s}}{\tau} \exp \left(-\frac{t-t_{s}}{\tau}\right). + + While, this equation is hard to implement. So, let's try to convert it into the + differential forms: + + .. math:: + + \begin{aligned} + &\frac{d g}{d t}=-\frac{g}{\tau}+\frac{h}{\tau} \\ + &\frac{d h}{d t}=-\frac{h}{\tau}+\delta\left(t_{0}-t\right) + \end{aligned} + + .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. + "The Synapse." Principles of Computational Modelling in Neuroscience. + Cambridge: Cambridge UP, 2011. 172-95. Print. + + +''' + + +exp_syn_doc = r''' + + **Model Descriptions** + + The single exponential decay synapse model assumes the release of neurotransmitter, + its diffusion across the cleft, the receptor binding, and channel opening all happen + very quickly, so that the channels instantaneously jump from the closed to the open state. 
+ Therefore, its expression is given by + + .. math:: + + g_{\mathrm{syn}}(t)=g_{\mathrm{max}} e^{-\left(t-t_{0}\right) / \tau} + + where :math:`\tau_{delay}` is the time constant of the synaptic state decay, + :math:`t_0` is the time of the pre-synaptic spike, + :math:`g_{\mathrm{max}}` is the maximal conductance. + + Accordingly, the differential form of the exponential synapse is given by + + .. math:: + + \begin{aligned} + & \frac{d g}{d t} = -\frac{g}{\tau_{decay}}+\sum_{k} \delta(t-t_{j}^{k}). + \end{aligned} + + .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. + "The Synapse." Principles of Computational Modelling in Neuroscience. + Cambridge: Cambridge UP, 2011. 172-95. Print. + +''' + + +std_doc = r''' + + This model filters the synaptic current by the following equation: + + .. math:: + + I_{syn}^+(t) = I_{syn}^-(t) * x + + where :math:`x` is the normalized variable between 0 and 1, and + :math:`I_{syn}^-(t)` and :math:`I_{syn}^+(t)` are the synaptic currents before + and after STD filtering. + + Moreover, :math:`x` is updated according to the dynamics of: + + .. math:: + + \frac{dx}{dt} = \frac{1-x}{\tau} - U * x * \delta(t-t_{spike}) + + where :math:`U` is the fraction of resources used per action potential, + :math:`\tau` is the time constant of recovery of the synaptic vesicles. + +''' + + +stp_doc = r''' + + This model filters the synaptic currents according to two variables: :math:`u` and :math:`x`. + + .. math:: + + I_{syn}^+(t) = I_{syn}^-(t) * x * u + + where :math:`I_{syn}^-(t)` and :math:`I_{syn}^+(t)` are the synaptic currents before + and after STP filtering, :math:`x` denotes the fraction of resources that remain available + after neurotransmitter depletion, and :math:`u` represents the fraction of available + resources ready for use (release probability). + + The dynamics of :math:`u` and :math:`x` are governed by + + .. math:: + + \begin{aligned} + \frac{du}{dt} & = & -\frac{u}{\tau_f}+U(1-u^-)\delta(t-t_{sp}), \\ + \frac{dx}{dt} & = & \frac{1-x}{\tau_d}-u^+x^-\delta(t-t_{sp}), \\ + \end{aligned} + + where :math:`t_{sp}` denotes the spike time and :math:`U` is the increment + of :math:`u` produced by a spike. :math:`u^-, x^-` are the corresponding + variables just before the arrival of the spike, and :math:`u^+` + refers to the moment just after the spike. + + +''' + diff --git a/brainpy/_src/dyn/synapses/abstract_models.py b/brainpy/_src/dyn/synapses/abstract_models.py index 4864b8d67..cdc1912d7 100644 --- a/brainpy/_src/dyn/synapses/abstract_models.py +++ b/brainpy/_src/dyn/synapses/abstract_models.py @@ -2,7 +2,8 @@ from brainpy import math as bm from brainpy._src.context import share -from brainpy._src.dyn._docs import pneu_doc +from brainpy._src.initialize import parameter +from brainpy._src.dyn import _docs from brainpy._src.dyn.base import SynDyn from brainpy._src.integrators.joint_eq import JointEq from brainpy._src.integrators.ode.generic import odeint @@ -23,28 +24,7 @@ class Expon(SynDyn, AlignPost): r"""Exponential decay synapse model. - **Model Descriptions** - - The single exponential decay synapse model assumes the release of neurotransmitter, - its diffusion across the cleft, the receptor binding, and channel opening all happen - very quickly, so that the channels instantaneously jump from the closed to the open state. - Therefore, its expression is given by - - .. 
math:: - - g_{\mathrm{syn}}(t)=g_{\mathrm{max}} e^{-\left(t-t_{0}\right) / \tau} - - where :math:`\tau_{delay}` is the time constant of the synaptic state decay, - :math:`t_0` is the time of the pre-synaptic spike, - :math:`g_{\mathrm{max}}` is the maximal conductance. - - Accordingly, the differential form of the exponential synapse is given by - - .. math:: - - \begin{aligned} - & \frac{d g}{d t} = -\frac{g}{\tau_{decay}}+\sum_{k} \delta(t-t_{j}^{k}). - \end{aligned} + %s This module can be used with interface ``brainpy.dyn.ProjAlignPreMg2``, as shown in the following example: @@ -106,11 +86,6 @@ def __init__(self, pre, post, delay, prob, g_max, tau, E): ) - - .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. - "The Synapse." Principles of Computational Modelling in Neuroscience. - Cambridge: Cambridge UP, 2011. 172-95. Print. - Args: tau: float. The time constant of decay. [ms] %s @@ -162,36 +137,21 @@ def return_info(self): return self.g -Expon.__doc__ = Expon.__doc__ % (pneu_doc,) - - -class DualExpon(SynDyn): - r"""Dual exponential synapse model. - - **Model Descriptions** - - The dual exponential synapse model [1]_, also named as *difference of two exponentials* model, - is given by: - - .. math:: +Expon.__doc__ = Expon.__doc__ % (_docs.exp_syn_doc, _docs.pneu_doc,) - g_{\mathrm{syn}}(t)=g_{\mathrm{max}} \frac{\tau_{1} \tau_{2}}{ - \tau_{1}-\tau_{2}}\left(\exp \left(-\frac{t-t_{0}}{\tau_{1}}\right) - -\exp \left(-\frac{t-t_{0}}{\tau_{2}}\right)\right) - where :math:`\tau_1` is the time constant of the decay phase, :math:`\tau_2` - is the time constant of the rise phase, :math:`t_0` is the time of the pre-synaptic - spike, :math:`g_{\mathrm{max}}` is the maximal conductance. +def _format_dual_exp_A(self, A): + A = parameter(A, sizes=self.varshape, allow_none=True, sharding=self.sharding) + if A is None: + A = (self.tau_decay / (self.tau_decay - self.tau_rise) * + bm.float_power(self.tau_rise / self.tau_decay, self.tau_rise / (self.tau_rise - self.tau_decay))) + return A - However, in practice, this formula is hard to implement. The equivalent solution is - two coupled linear differential equations [2]_: - .. math:: +class DualExpon(SynDyn): + r"""Dual exponential synapse model. - \begin{aligned} - &\frac{d g}{d t}=-\frac{g}{\tau_{\mathrm{decay}}}+h \\ - &\frac{d h}{d t}=-\frac{h}{\tau_{\text {rise }}}+ \delta\left(t_{0}-t\right), - \end{aligned} + %s This module can be used with interface ``brainpy.dyn.ProjAlignPreMg2``, as shown in the following example: @@ -203,11 +163,9 @@ class DualExpon(SynDyn): import matplotlib.pyplot as plt - class DualExpSparseCOBA(bp.Projection): def __init__(self, pre, post, delay, prob, g_max, tau_decay, tau_rise, E): super().__init__() - self.proj = bp.dyn.ProjAlignPreMg2( pre=pre, delay=delay, @@ -217,7 +175,6 @@ def __init__(self, pre, post, delay, prob, g_max, tau_decay, tau_rise, E): post=post, ) - class SimpleNet(bp.DynSysGroup): def __init__(self, syn_cls, E=0.): super().__init__() @@ -253,16 +210,16 @@ def update(self): plt.title('Post V') plt.show() - .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. - "The Synapse." Principles of Computational Modelling in Neuroscience. - Cambridge: Cambridge UP, 2011. 172-95. Print. - .. [2] Roth, A., & Van Rossum, M. C. W. (2009). Modeling Synapses. Computational - Modeling Methods for Neuroscientists. + See Also: + DualExponV2 + + .. note:: + + The implementation of this model can only be used in ``AlignPre`` projections. 
+ One the contrary, to seek the ``AlignPost`` projection, please use ``DualExponV2``. Args: - tau_decay: float, ArrayArray, Callable. The time constant of the synaptic decay phase. [ms] - tau_rise: float, ArrayArray, Callable. The time constant of the synaptic rise phase. [ms] - normalize: bool. Normalize the raise and decay time constants so that the maximum conductance is 1. Default False. + %s %s """ @@ -278,7 +235,7 @@ def __init__( # synapse parameters tau_decay: Union[float, ArrayType, Callable] = 10.0, tau_rise: Union[float, ArrayType, Callable] = 1., - normalize: bool = False, + A: Optional[Union[float, ArrayType, Callable]] = None, ): super().__init__(name=name, mode=mode, @@ -287,15 +244,10 @@ def __init__( sharding=sharding) # parameters - self.normalize = normalize self.tau_rise = self.init_param(tau_rise) self.tau_decay = self.init_param(tau_decay) - if normalize: - self.a = ((1 / self.tau_rise - 1 / self.tau_decay) / - (self.tau_decay / self.tau_rise * (bm.exp(-self.tau_rise / (self.tau_decay - self.tau_rise)) - - bm.exp(-self.tau_decay / (self.tau_decay - self.tau_rise))))) - else: - self.a = 1. + A = _format_dual_exp_A(self, A) + self.a = (self.tau_decay - self.tau_rise) / self.tau_rise / self.tau_decay * A # integrator self.integral = odeint(JointEq(self.dg, self.dh), method=method) @@ -313,6 +265,8 @@ def dg(self, g, t, h): return -g / self.tau_decay + h def update(self, x): + # x: the pre-synaptic spikes + # update synaptic variables self.g.value, self.h.value = self.integral(self.g.value, self.h.value, share['t'], dt=share['dt']) self.h += self.a * x @@ -322,24 +276,17 @@ def return_info(self): return self.g -DualExpon.__doc__ = DualExpon.__doc__ % (pneu_doc,) +DualExpon.__doc__ = DualExpon.__doc__ % (_docs.dual_exp_syn_doc, _docs.pneu_doc, _docs.dual_exp_args) class DualExponV2(SynDyn, AlignPost): r"""Dual exponential synapse model. - The dual exponential synapse model [1]_, also named as *difference of two exponentials* model, - is given by: + %s - .. math:: + .. note:: - g_{\mathrm{syn}}(t)=g_{\mathrm{max}} \frac{\tau_{1} \tau_{2}}{ - \tau_{1}-\tau_{2}}\left(\exp \left(-\frac{t-t_{0}}{\tau_{1}}\right) - -\exp \left(-\frac{t-t_{0}}{\tau_{2}}\right)\right) - - where :math:`\tau_1` is the time constant of the decay phase, :math:`\tau_2` - is the time constant of the rise phase, :math:`t_0` is the time of the pre-synaptic - spike, :math:`g_{\mathrm{max}}` is the maximal conductance. + Different from ``DualExpon``, this model can be used in both modes of ``AlignPre`` and ``AlignPost`` projections. This module can be used with interface ``brainpy.dyn.ProjAlignPreMg2``, as shown in the following example: @@ -383,9 +330,6 @@ def update(self): current = self.post.sum_inputs(self.post.V) return conductance, current, self.post.V - - - indices = np.arange(1000) # 100 ms, dt= 0.1 ms net = SimpleNet(DualExponV2SparseCOBAPost, E=0.) conductances, currents, potentials = bm.for_loop(net.step_run, indices, progress_bar=True) @@ -402,7 +346,6 @@ def update(self): plt.title('Post V') plt.show() - Moreover, it can also be used with interface ``ProjAlignPostMg2``: .. code-block:: python @@ -420,18 +363,11 @@ def __init__(self, pre, post, delay, prob, g_max, tau_decay, tau_rise, E): post=post, ) - - - .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. - "The Synapse." Principles of Computational Modelling in Neuroscience. - Cambridge: Cambridge UP, 2011. 172-95. Print. - .. [2] Roth, A., & Van Rossum, M. C. W. (2009). Modeling Synapses. 
Computational - Modeling Methods for Neuroscientists. + See Also: + DualExpon Args: - tau_decay: float, ArrayArray, Callable. The time constant of the synaptic decay phase. [ms] - tau_rise: float, ArrayArray, Callable. The time constant of the synaptic rise phase. [ms] - normalize: bool. Normalize the raise and decay time constants so that the maximum conductance is 1. Default True. + %s %s """ @@ -447,7 +383,7 @@ def __init__( # synapse parameters tau_decay: Union[float, ArrayType, Callable] = 10.0, tau_rise: Union[float, ArrayType, Callable] = 1., - normalize: bool = True, + A: Optional[Union[float, ArrayType, Callable]] = None, ): super().__init__(name=name, mode=mode, @@ -456,13 +392,9 @@ def __init__( sharding=sharding) # parameters - self.normalize = normalize self.tau_rise = self.init_param(tau_rise) self.tau_decay = self.init_param(tau_decay) - if normalize: - self.a = self.tau_rise * self.tau_decay / (self.tau_decay - self.tau_rise) - else: - self.a = 1. + self.a = _format_dual_exp_A(self, A) # integrator self.integral = odeint(lambda g, t, tau: -g / tau, method=method) @@ -489,29 +421,13 @@ def return_info(self): lambda shape: self.a * (self.g_decay - self.g_rise)) -DualExponV2.__doc__ = DualExponV2.__doc__ % (pneu_doc,) +DualExponV2.__doc__ = DualExponV2.__doc__ % (_docs.dual_exp_syn_doc, _docs.pneu_doc, _docs.dual_exp_args,) class Alpha(SynDyn): r"""Alpha synapse model. - **Model Descriptions** - - The analytical expression of alpha synapse is given by: - - .. math:: - - g_{syn}(t)= g_{max} \frac{t-t_{s}}{\tau} \exp \left(-\frac{t-t_{s}}{\tau}\right). - - While, this equation is hard to implement. So, let's try to convert it into the - differential forms: - - .. math:: - - \begin{aligned} - &\frac{d g}{d t}=-\frac{g}{\tau}+\frac{h}{\tau} \\ - &\frac{d h}{d t}=-\frac{h}{\tau}+\delta\left(t_{0}-t\right) - \end{aligned} + %s This module can be used with interface ``brainpy.dyn.ProjAlignPreMg2``, as shown in the following example: @@ -574,17 +490,9 @@ def update(self): plt.show() - - - - .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. - "The Synapse." Principles of Computational Modelling in Neuroscience. - Cambridge: Cambridge UP, 2011. 172-95. Print. - Args: %s tau_decay: float, ArrayType, Callable. The time constant [ms] of the synaptic decay phase. - The name of this synaptic projection. """ def __init__( @@ -635,8 +543,7 @@ def return_info(self): return self.g - -Alpha.__doc__ = Alpha.__doc__ % (pneu_doc,) +Alpha.__doc__ = Alpha.__doc__ % (_docs.alpha_syn_doc, _docs.pneu_doc,) class NMDA(SynDyn): @@ -821,30 +728,13 @@ def return_info(self): return self.g -NMDA.__doc__ = NMDA.__doc__ % (pneu_doc,) +NMDA.__doc__ = NMDA.__doc__ % (_docs.pneu_doc,) class STD(SynDyn): r"""Synaptic output with short-term depression. - This model filters the synaptic current by the following equation: - - .. math:: - - I_{syn}^+(t) = I_{syn}^-(t) * x - - where :math:`x` is the normalized variable between 0 and 1, and - :math:`I_{syn}^-(t)` and :math:`I_{syn}^+(t)` are the synaptic currents before - and after STD filtering. - - Moreover, :math:`x` is updated according to the dynamics of: - - .. math:: - - \frac{dx}{dt} = \frac{1-x}{\tau} - U * x * \delta(t-t_{spike}) - - where :math:`U` is the fraction of resources used per action potential, - :math:`\tau` is the time constant of recovery of the synaptic vesicles. + %s Args: tau: float, ArrayType, Callable. The time constant of recovery of the synaptic vesicles. 
@@ -900,36 +790,13 @@ def return_info(self): return self.x -STD.__doc__ = STD.__doc__ % (pneu_doc,) +STD.__doc__ = STD.__doc__ % (_docs.std_doc, _docs.pneu_doc,) class STP(SynDyn): r"""Synaptic output with short-term plasticity. - This model filters the synaptic currents according to two variables: :math:`u` and :math:`x`. - - .. math:: - - I_{syn}^+(t) = I_{syn}^-(t) * x * u - - where :math:`I_{syn}^-(t)` and :math:`I_{syn}^+(t)` are the synaptic currents before - and after STP filtering, :math:`x` denotes the fraction of resources that remain available - after neurotransmitter depletion, and :math:`u` represents the fraction of available - resources ready for use (release probability). - - The dynamics of :math:`u` and :math:`x` are governed by - - .. math:: - - \begin{aligned} - \frac{du}{dt} & = & -\frac{u}{\tau_f}+U(1-u^-)\delta(t-t_{sp}), \\ - \frac{dx}{dt} & = & \frac{1-x}{\tau_d}-u^+x^-\delta(t-t_{sp}), \\ - \tag{1}\end{aligned} - - where :math:`t_{sp}` denotes the spike time and :math:`U` is the increment - of :math:`u` produced by a spike. :math:`u^-, x^-` are the corresponding - variables just before the arrival of the spike, and :math:`u^+` - refers to the moment just after the spike. + %s Args: tau_f: float, ArrayType, Callable. The time constant of short-term facilitation. @@ -1006,4 +873,4 @@ def return_info(self): lambda shape: self.u * self.x) -STP.__doc__ = STP.__doc__ % (pneu_doc,) +STP.__doc__ = STP.__doc__ % (_docs.stp_doc, _docs.pneu_doc,) diff --git a/brainpy/_src/dyn/synapses/tests/test_abstract_models.py b/brainpy/_src/dyn/synapses/tests/test_abstract_models.py new file mode 100644 index 000000000..ca028e2e4 --- /dev/null +++ b/brainpy/_src/dyn/synapses/tests/test_abstract_models.py @@ -0,0 +1,87 @@ +import unittest + +import matplotlib.pyplot as plt + +import brainpy as bp +import brainpy.math as bm + +show = False + + +class TestDualExpon(unittest.TestCase): + def test_dual_expon(self): + # bm.set(dt=0.01) + + class Net(bp.DynSysGroup): + def __init__(self, tau_r, tau_d, n_spk): + super().__init__() + + self.inp = bp.dyn.SpikeTimeGroup(1, bm.zeros(n_spk, dtype=int), bm.linspace(2., 100., n_spk)) + self.proj = bp.dyn.DualExpon(1, tau_rise=tau_r, tau_decay=tau_d) + + def update(self): + self.proj(self.inp()) + return self.proj.h.value, self.proj.g.value + + for tau_r, tau_d in [(1., 10.), (10., 100.)]: + for n_spk in [1, 10, 100]: + net = Net(tau_r, tau_d, n_spk) + indices = bm.as_numpy(bm.arange(1000)) + hs, gs = bm.for_loop(net.step_run, indices, progress_bar=True) + + bp.visualize.line_plot(indices * bm.get_dt(), hs, legend='h') + bp.visualize.line_plot(indices * bm.get_dt(), gs, legend='g', show=show) + plt.close('all') + + + def test_dual_expon_v2(self): + class Net(bp.DynSysGroup): + def __init__(self, tau_r, tau_d, n_spk): + super().__init__() + + self.inp = bp.dyn.SpikeTimeGroup(1, bm.zeros(n_spk, dtype=int), bm.linspace(2., 100., n_spk)) + self.syn = bp.dyn.DualExponV2(1, tau_rise=tau_r, tau_decay=tau_d) + + def update(self): + return self.syn(self.inp()) + + for tau_r, tau_d in [(1., 10.), (5., 50.), (10., 100.)]: + for n_spk in [1, 10, 100]: + net = Net(tau_r, tau_d, n_spk) + indices = bm.as_numpy(bm.arange(1000)) + gs = bm.for_loop(net.step_run, indices, progress_bar=True) + + bp.visualize.line_plot(indices * bm.get_dt(), gs, legend='g', show=show) + + plt.close('all') + +class TestAlpha(unittest.TestCase): + + def test_v1(self): + class Net(bp.DynSysGroup): + def __init__(self, tau, n_spk): + super().__init__() + + self.inp = bp.dyn.SpikeTimeGroup(1, 
bm.zeros(n_spk, dtype=int), bm.linspace(2., 100., n_spk)) + self.neu = bp.dyn.LifRef(1) + self.proj = bp.dyn.FullProjAlignPreDS(self.inp, None, + bp.dyn.Alpha(1, tau_decay=tau), + bp.dnn.AllToAll(1, 1, 1.), + bp.dyn.CUBA(), self.neu) + + def update(self): + self.inp() + self.proj() + self.neu() + return self.proj.syn.h.value, self.proj.syn.g.value + + for tau in [10.]: + for n_spk in [1, 10, 50]: + net = Net(tau=tau, n_spk=n_spk) + indices = bm.as_numpy(bm.arange(1000)) + hs, gs = bm.for_loop(net.step_run, indices, progress_bar=True) + + bp.visualize.line_plot(indices * bm.get_dt(), hs, legend='h') + bp.visualize.line_plot(indices * bm.get_dt(), gs, legend='g', show=show) + + plt.close('all') diff --git a/brainpy/_src/dynold/synapses/abstract_models.py b/brainpy/_src/dynold/synapses/abstract_models.py index f345050c4..c7a902f01 100644 --- a/brainpy/_src/dynold/synapses/abstract_models.py +++ b/brainpy/_src/dynold/synapses/abstract_models.py @@ -7,6 +7,7 @@ import brainpy.math as bm from brainpy._src.connect import TwoEndConnector, All2All, One2One from brainpy._src.dnn import linear +from brainpy._src.dyn import _docs from brainpy._src.dyn import synapses from brainpy._src.dyn.base import NeuDyn from brainpy._src.dynold.synouts import MgBlock, CUBA @@ -175,32 +176,7 @@ def update(self, pre_spike=None): class Exponential(TwoEndConn): r"""Exponential decay synapse model. - **Model Descriptions** - - The single exponential decay synapse model assumes the release of neurotransmitter, - its diffusion across the cleft, the receptor binding, and channel opening all happen - very quickly, so that the channels instantaneously jump from the closed to the open state. - Therefore, its expression is given by - - .. math:: - - g_{\mathrm{syn}}(t)=g_{\mathrm{max}} e^{-\left(t-t_{0}\right) / \tau} - - where :math:`\tau_{delay}` is the time constant of the synaptic state decay, - :math:`t_0` is the time of the pre-synaptic spike, - :math:`g_{\mathrm{max}}` is the maximal conductance. - - Accordingly, the differential form of the exponential synapse is given by - - .. math:: - - \begin{aligned} - & g_{\mathrm{syn}}(t) = g_{max} g * \mathrm{STP} \\ - & \frac{d g}{d t} = -\frac{g}{\tau_{decay}}+\sum_{k} \delta(t-t_{j}^{k}). - \end{aligned} - - where :math:`\mathrm{STP}` is used to model the short-term plasticity effect. - + %s **Model Examples** @@ -256,12 +232,6 @@ class Exponential(TwoEndConn): method: str The numerical integration methods. - References - ---------- - - .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. - "The Synapse." Principles of Computational Modelling in Neuroscience. - Cambridge: Cambridge UP, 2011. 172-95. Print. """ @@ -346,36 +316,13 @@ def update(self, pre_spike=None): return self.output(g) -class DualExponential(_TwoEndConnAlignPre): - r"""Dual exponential synapse model. - - **Model Descriptions** - - The dual exponential synapse model [1]_, also named as *difference of two exponentials* model, - is given by: - - .. math:: - - g_{\mathrm{syn}}(t)=g_{\mathrm{max}} \frac{\tau_{1} \tau_{2}}{ - \tau_{1}-\tau_{2}}\left(\exp \left(-\frac{t-t_{0}}{\tau_{1}}\right) - -\exp \left(-\frac{t-t_{0}}{\tau_{2}}\right)\right) - - where :math:`\tau_1` is the time constant of the decay phase, :math:`\tau_2` - is the time constant of the rise phase, :math:`t_0` is the time of the pre-synaptic - spike, :math:`g_{\mathrm{max}}` is the maximal conductance. 
+Exponential.__doc__ = Exponential.__doc__ % (_docs.exp_syn_doc,) - However, in practice, this formula is hard to implement. The equivalent solution is - two coupled linear differential equations [2]_: - .. math:: - - \begin{aligned} - &g_{\mathrm{syn}}(t)=g_{\mathrm{max}} g * \mathrm{STP} \\ - &\frac{d g}{d t}=-\frac{g}{\tau_{\mathrm{decay}}}+h \\ - &\frac{d h}{d t}=-\frac{h}{\tau_{\text {rise }}}+ \delta\left(t_{0}-t\right), - \end{aligned} +class DualExponential(_TwoEndConnAlignPre): + r"""Dual exponential synapse model. - where :math:`\mathrm{STP}` is used to model the short-term plasticity effect of synapses. + %s **Model Examples** @@ -427,15 +374,6 @@ class DualExponential(_TwoEndConnAlignPre): method: str The numerical integration methods. - References - ---------- - - .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. - "The Synapse." Principles of Computational Modelling in Neuroscience. - Cambridge: Cambridge UP, 2011. 172-95. Print. - .. [2] Roth, A., & Van Rossum, M. C. W. (2009). Modeling Synapses. Computational - Modeling Methods for Neuroscientists. - """ def __init__( @@ -450,6 +388,7 @@ def __init__( tau_decay: Union[float, ArrayType] = 10.0, tau_rise: Union[float, ArrayType] = 1., delay_step: Union[int, ArrayType, Initializer, Callable] = None, + A: Optional[Union[float, ArrayType, Callable]] = None, method: str = 'exp_auto', # other parameters @@ -472,6 +411,7 @@ def __init__( syn = synapses.DualExpon(pre.size, pre.keep_size, + A=A, mode=mode, tau_decay=tau_decay, tau_rise=tau_rise, @@ -498,27 +438,13 @@ def update(self, pre_spike=None): return super().update(pre_spike, stop_spike_gradient=self.stop_spike_gradient) -class Alpha(_TwoEndConnAlignPre): - r"""Alpha synapse model. - - **Model Descriptions** +DualExponential.__doc__ = DualExponential.__doc__ % (_docs.dual_exp_syn_doc,) - The analytical expression of alpha synapse is given by: - .. math:: - - g_{syn}(t)= g_{max} \frac{t-t_{s}}{\tau} \exp \left(-\frac{t-t_{s}}{\tau}\right). - - While, this equation is hard to implement. So, let's try to convert it into the - differential forms: - - .. math:: +class Alpha(_TwoEndConnAlignPre): + r"""Alpha synapse model. - \begin{aligned} - &g_{\mathrm{syn}}(t)= g_{\mathrm{max}} g \\ - &\frac{d g}{d t}=-\frac{g}{\tau}+\frac{h}{\tau} \\ - &\frac{d h}{d t}=-\frac{h}{\tau}+\delta\left(t_{0}-t\right) - \end{aligned} + %s **Model Examples** @@ -567,12 +493,6 @@ class Alpha(_TwoEndConnAlignPre): method: str The numerical integration methods. - References - ---------- - - .. [1] Sterratt, David, Bruce Graham, Andrew Gillies, and David Willshaw. - "The Synapse." Principles of Computational Modelling in Neuroscience. - Cambridge: Cambridge UP, 2011. 172-95. Print. """ def __init__( @@ -617,7 +537,7 @@ def __init__( output=output, stp=stp, name=name, - mode=mode,) + mode=mode, ) self.check_post_attrs('input') # copy the references @@ -628,6 +548,8 @@ def update(self, pre_spike=None): return super().update(pre_spike, stop_spike_gradient=self.stop_spike_gradient) +Alpha.__doc__ = Alpha.__doc__ % (_docs.alpha_syn_doc,) + class NMDA(_TwoEndConnAlignPre): r"""NMDA synapse model. 
diff --git a/docs/tutorial_math/einops_in_brainpy.ipynb b/docs/tutorial_math/einops_in_brainpy.ipynb index 2489d6bae..a94301fd6 100644 --- a/docs/tutorial_math/einops_in_brainpy.ipynb +++ b/docs/tutorial_math/einops_in_brainpy.ipynb @@ -1435,39 +1435,6 @@ "bm.ein_reduce(ims, 'b (h h2) (w w2) c -> (h w2) (b w c)', 'mean', h2=3, w2=3).shape" ] }, - { - "cell_type": "markdown", - "metadata": {}, - "source": [ - "## Ok, numpy is fun, but how do I use einops with some other framework?\n", - "\n", - "If that's what you've done with `ims` being numpy array:\n", - "```python\n", - "bm.ein_rearrange(ims, 'b h w c -> w (b h) c')\n", - "```\n", - "That's how you adapt the code for other frameworks:\n", - "\n", - "```python\n", - "# pytorch:\n", - "bm.ein_rearrange(ims, 'b h w c -> w (b h) c')\n", - "# tensorflow:\n", - "bm.ein_rearrange(ims, 'b h w c -> w (b h) c')\n", - "# chainer:\n", - "bm.ein_rearrange(ims, 'b h w c -> w (b h) c')\n", - "# gluon:\n", - "bm.ein_rearrange(ims, 'b h w c -> w (b h) c')\n", - "# cupy:\n", - "bm.ein_rearrange(ims, 'b h w c -> w (b h) c')\n", - "# jax:\n", - "bm.ein_rearrange(ims, 'b h w c -> w (b h) c')\n", - "\n", - "...well, you got the idea.\n", - "```\n", - "\n", - "Einops allows backpropagation as if all operations were native to framework.\n", - "Operations do not change when moving to another framework - einops notation is universal" - ] - }, { "cell_type": "markdown", "metadata": { @@ -1476,7 +1443,7 @@ } }, "source": [ - "# Summary\n", + "## Summary\n", "\n", "- `rearrange` doesn't change number of elements and covers different numpy functions (like `transpose`, `reshape`, `stack`, `concatenate`, `squeeze` and `expand_dims`)\n", "- `reduce` combines same reordering syntax with reductions (`mean`, `min`, `max`, `sum`, `prod`, and any others)\n",
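As a quick sanity check on the default normalization factor introduced by ``_format_dual_exp_A`` above: a minimal numpy sketch, assuming only the formula quoted in ``dual_exp_syn_doc`` (the time constants below are arbitrary example values, not defaults from the library).

.. code-block:: python

    import numpy as np

    tau_rise, tau_decay = 1.0, 10.0  # example time constants [ms]

    # Default A used when the new ``A`` argument is left as None
    # (same expression as in ``_format_dual_exp_A``):
    A = (tau_decay / (tau_decay - tau_rise)
         * (tau_rise / tau_decay) ** (tau_rise / (tau_rise - tau_decay)))

    # Closed-form dual exponential conductance for a single spike at t = 0:
    t = np.linspace(0.0, 100.0, 100_001)
    g = A * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

    print(A)        # ~1.43 for these time constants
    print(g.max())  # ~1.0: the default A scales the kernel to a unit peak

Under this reading, passing an explicit ``A`` to ``brainpy.dyn.DualExpon`` / ``DualExponV2`` (or to the legacy ``DualExponential``) overrides this default peak normalization, which is what takes the place of the removed ``normalize`` flag.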