[Question] How to add non-uniform transmission delays? #690

Open · brendanjohnharris opened this issue Oct 22, 2024 · 2 comments
Labels: bug (Something isn't working)

Comments

brendanjohnharris commented Oct 22, 2024

Hi brainpy!

I am hoping to incorporate non-uniform transmission delays into a LIF network (i.e. each synapse has a different delay), but I can't find a straightforward way to implement this from the documentation.
For instance, in the example below I tried passing an initializer for the delay to the synaptic projection classes with reduction and merging, which fails (as expected).
Is there a recommended way to construct non-uniform delays, e.g. by drawing a delay for each synapse from an initializer?

import brainpy as bp
import brainpy.math as bm
class EINet(bp.DynSysGroup):
  def __init__(self):
    super().__init__()
    ne, ni = 3200, 800
    delay = bp.init.Uniform(0.0, 4.0)
    self.E = bp.dyn.LifRef(ne, V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.,
                           V_initializer=bp.init.Normal(-55., 2.))
    self.I = bp.dyn.LifRef(ni, V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.,
                           V_initializer=bp.init.Normal(-55., 2.))
    self.E2E = bp.dyn.FullProjAlignPreDSMg(pre=self.E,
                                           delay=delay,
                                           syn=bp.dyn.Expon.desc(size=ne, tau=5.),
                                           comm=bp.dnn.JitFPHomoLinear(ne, ne, prob=0.02, weight=0.6),
                                           out=bp.dyn.COBA(E=0.),
                                           post=self.E)
    self.E2I = bp.dyn.FullProjAlignPreDSMg(pre=self.E,
                                           delay=delay,
                                           syn=bp.dyn.Expon.desc(size=ne, tau=5.),
                                           comm=bp.dnn.JitFPHomoLinear(ne, ni, prob=0.02, weight=0.6),
                                           out=bp.dyn.COBA(E=0.),
                                           post=self.I)
    self.I2E = bp.dyn.FullProjAlignPreDSMg(pre=self.I,
                                           delay=delay,
                                           syn=bp.dyn.Expon.desc(size=ni, tau=10.),
                                           comm=bp.dnn.JitFPHomoLinear(ni, ne, prob=0.02, weight=6.7),
                                           out=bp.dyn.COBA(E=-80.),
                                           post=self.E)
    self.I2I = bp.dyn.FullProjAlignPreDSMg(pre=self.I,
                                           delay=delay,
                                           syn=bp.dyn.Expon.desc(size=ni, tau=10.),
                                           comm=bp.dnn.JitFPHomoLinear(ni, ni, prob=0.02, weight=6.7),
                                           out=bp.dyn.COBA(E=-80.),
                                           post=self.I)

  def update(self, inp):
    self.E2E()
    self.E2I()
    self.I2E()
    self.I2I()
    self.E(inp)
    self.I(inp)
    return self.E.spike

model = EINet()
indices = bm.arange(1000)
spks = bm.for_loop(lambda i: model.step_run(i, 20.), indices)
bp.visualize.raster_plot(indices, spks, show=True)
brendanjohnharris added the bug (Something isn't working) label on Oct 22, 2024
brendanjohnharris changed the title from "How to add non-uniform transmission delays?" to "[Question] How to add non-uniform transmission delays?" on Oct 22, 2024
@Routhleck (Collaborator) commented

Thank you for your question!

A contributor proposed a solution in a pull request (#595) to address non-uniform transmission delays. Although this PR hasn't been merged, it provides a useful approach:

This PR introduces a new class called HeteroLengthDelay in the file brainpy/_src/math/delayvars.py. It is a modification of the LengthDelay class with the following changes:

  1. The __init__() function requires the delay length of each synapse and the number of synapses per pre-synaptic neuron. The delay length array should be sorted by pre-synaptic neuron index.
  2. The retrieve() function outputs a 1D array of spikes delivered to each synapse, with the length of the array corresponding to the number of synapses, not post-synaptic neurons.
  3. Imports for numpy and brainpy.math have been added.

The class internally stores previous spikes in a matrix of dimensions [max_delay_length, num_pre_neurons], and it should work as long as memory limits are respected.
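
For intuition, here is a minimal NumPy sketch of that scheme; the class and variable names are illustrative, not the PR's actual code. A ring buffer of shape [max_delay_length, num_pre_neurons] is written once per step, and retrieval gathers one delayed spike per synapse:

import numpy as np

class HeteroDelaySketch:
  """Illustrative ring buffer for per-synapse delays (sketch, not PR #595's code)."""

  def __init__(self, num_pre, delay_steps, pre_ids):
    # delay_steps: one integer delay (in time steps) per synapse,
    #              sorted by pre-synaptic neuron index as the PR requires.
    # pre_ids: the pre-synaptic neuron index of each synapse.
    self.delay_steps = np.asarray(delay_steps, dtype=int)
    self.pre_ids = np.asarray(pre_ids, dtype=int)
    self.max_len = int(self.delay_steps.max()) + 1
    # history matrix of shape [max_delay_length, num_pre_neurons]
    self.buffer = np.zeros((self.max_len, num_pre))
    self.head = 0

  def update(self, spikes):
    # advance the ring and overwrite the oldest slot with the newest spikes
    self.head = (self.head + 1) % self.max_len
    self.buffer[self.head] = spikes

  def retrieve(self):
    # one delayed spike per synapse; output length = number of synapses
    rows = (self.head - self.delay_steps) % self.max_len
    return self.buffer[rows, self.pre_ids]

Calling update(spikes) once per time step and then retrieve() yields, for each synapse, the pre-synaptic spike from delay_steps[k] steps ago.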

However, we do not recommend this method for GPU implementations, because purely heterogeneous delay retrieval is very expensive there. Instead, we suggest using the Taichi custom-operator interface (brainpy.math.XLACustomOp) to merge delay retrieval with the sparse computation, minimizing the memory-indexing overhead.
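
To make that fusion concrete, here is a plain-NumPy sketch of what such a merged kernel computes, reusing the buffer layout from the sketch above; the function and argument names are illustrative, and a real Taichi/XLACustomOp kernel would parallelize the per-synapse loop on the GPU:

import numpy as np

def fused_delayed_update(buffer, head, pre_ids, post_ids, delay_steps, weights, num_post):
  # Gather each synapse's delayed pre-synaptic spike and accumulate its
  # weighted contribution into the post-synaptic current in a single pass,
  # so no per-synapse spike array is ever materialized in memory.
  max_len = buffer.shape[0]
  out = np.zeros(num_post)
  for k in range(len(pre_ids)):               # one iteration per synapse
    row = (head - delay_steps[k]) % max_len   # time slot delay_steps[k] steps ago
    out[post_ids[k]] += weights[k] * buffer[row, pre_ids[k]]
  return out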

Hope this helps! Feel free to reach out if you have further questions.

@ztqakita (Collaborator) commented

Hi Brendan! Here are some additional remarks.
Unfortunately, BrainPy currently does not have a straightforward way to implement non-uniform delays in a network. The AlignPre and AlignPost projections assume that all synapses in one projection share the same delay, so non-uniform delays cannot be expressed directly in our synaptic projection classes.
However, there is one alternative (relatively simple, but ugly) workaround: split the neuron groups into several subgroups and define a synaptic projection between each pair of subgroups, so that each projection can carry its own delay, as sketched below. I don't know if this approach sounds good to you.
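
For concreteness, here is a minimal sketch of that workaround for the excitatory-to-excitatory pathway, using the same projection classes as the example above; the subgroup sizes and delay values are illustrative:

import brainpy as bp

ne1, ne2 = 1600, 1600  # split the 3200 E cells into two subgroups
E1 = bp.dyn.LifRef(ne1, V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.)
E2 = bp.dyn.LifRef(ne2, V_rest=-60., V_th=-50., V_reset=-60., tau=20., tau_ref=5.)

def e_proj(pre, post, n_pre, n_post, delay):
  # one projection per (subgroup, subgroup) pair, each with its own scalar delay
  return bp.dyn.FullProjAlignPreDSMg(pre=pre,
                                     delay=delay,
                                     syn=bp.dyn.Expon.desc(size=n_pre, tau=5.),
                                     comm=bp.dnn.JitFPHomoLinear(n_pre, n_post, prob=0.02, weight=0.6),
                                     out=bp.dyn.COBA(E=0.),
                                     post=post)

E1_E1 = e_proj(E1, E1, ne1, ne1, delay=0.5)  # delays in ms; values are illustrative
E1_E2 = e_proj(E1, E2, ne1, ne2, delay=1.5)
E2_E1 = e_proj(E2, E1, ne2, ne1, delay=2.5)
E2_E2 = e_proj(E2, E2, ne2, ne2, delay=3.5)

With finer subgroup partitions you can approximate a delay distribution at the cost of more projection objects.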
