Human cortical neurons have tau_rc=0.01, not 0.02 #91

Open
tcstewar opened this issue Oct 17, 2017 · 2 comments
@tcstewar

Thanks to Dominic for bringing up this result:

https://elifesciences.org/articles/16553

From the abstract: "Here we show that layer 2/3 pyramidal neurons from human temporal cortex (HL2/3 PCs) have a specific membrane capacitance (Cm) of ~0.5 µF/cm2, half of the commonly accepted 'universal' value (~1 µF/cm2) for biological membranes."

It'd be interesting to take a look at trying to characterize this in terms of NEF. What sorts of functions are improved by this? Can we find anything like the horizontal eye control example (where neurons with high tau_rc are better for implementing an integrator)?

@arvoelke

arvoelke commented Apr 9, 2018

Not sure how much weight this one article carries, but their system identification analysis of the HH model revealed that it's closest to a LIF with tau_rc=0.005:

Principal Dynamic Mode Analysis of the Hodgkin–Huxley Equations (Eikenberry, S. E., & Marmarelis, V. Z., 2015)

"Examining the structure of the reduced and pruned PDM-based models shows that, within the subthreshold regime, the H–H membrane acts as a 'leaky integrator' with a memory of approximately 5 ms, in accordance with the widely posited leaky integrating characteristic of the H–H membrane."
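That ~5 ms "leaky integrator" memory can be sanity-checked with a few lines of NumPy (my own sketch, not from the paper): Euler-integrating the subthreshold membrane equation dV/dt = (J - V)/tau_rc with tau_rc = 0.005, the step response reaches ~63% of steady state after one time constant.

```python
import numpy as np

# Sketch (illustrative, not from the paper): subthreshold LIF membrane as a
# leaky integrator, dV/dt = (J - V) / tau_rc, integrated with Euler steps.
# A "memory" of ~5 ms corresponds to tau_rc = 0.005 s.
def membrane_response(J, tau_rc=0.005, dt=1e-5, t_end=0.025):
    n = int(t_end / dt)
    V = np.zeros(n)
    for i in range(1, n):
        V[i] = V[i - 1] + dt * (J - V[i - 1]) / tau_rc
    return V

V = membrane_response(J=1.0)
# After one time constant (5 ms), the response is ~63% of steady state:
print(round(V[int(0.005 / 1e-5)], 2))  # → 0.63
```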

I suspect a smaller tau_rc should help with quick reaction times from a resting (inhibited) state (V=0). Beyond that, I would be surprised if it ever improved the neural representation in the NEF. The larger tau_rc is, the closer the neuron gets to an IF neuron, which should encode higher-frequency stimuli more accurately, since the neurons should (?) be more uniformly "ready" to spike at any given moment in time. For LIF, I called this "spike dropping" when I discussed threshold crossing in my comp-II report. That said, this may still be important: we might want a smaller tau_rc in certain scenarios as a way to alter the computation at different frequencies for free. I also need to actually look into this instead of just guessing.
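The LIF-approaches-IF intuition can be made concrete with the closed-form interspike interval (my own sketch, with an illustrative current value): driven by a constant suprathreshold current I (in threshold units per second), a LIF spikes every T = -tau_rc * ln(1 - 1/(tau_rc * I)) seconds (ignoring the refractory period). As tau_rc grows, T approaches the pure integration time 1/I of an IF neuron:

```python
import numpy as np

# Sketch (illustrative numbers, not from the thread): interspike interval of
# a LIF neuron under constant current I, compared with the IF limit 1/I.
I = 500.0  # threshold units per second; needs tau_rc * I > 1 to spike at all
for tau_rc in [0.005, 0.02, 10000.0]:
    T = -tau_rc * np.log(1 - 1 / (tau_rc * I))
    print(f"tau_rc={tau_rc}: T={1000 * T:.3f} ms (IF limit: {1000 / I:.3f} ms)")
# tau_rc=10000 gives T ≈ 2.000 ms, essentially the IF value of 1/I.
```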

@arvoelke

arvoelke commented Apr 9, 2018

Surprisingly, the above guess was right.

[Figure: tau_rc_accuracy — RMSE vs. input frequency for several values of tau_rc]

There is a linear relationship between input frequency and RMSE, with a slope inversely proportional to tau_rc. Once tau_rc is large enough (approaching the IF neuron), the accuracy of the representation becomes independent of input frequency. See also Fig. 5 of Voelker et al. (2017) for a similar plot showing the same linear relationship over a greater number of trials.

I haven't checked this, but my guess is that the lower tau_rc cases are under-approximating due to spikes being "dropped". Again, this may be computationally relevant (as a way to dampen the frequency response from rapidly changing inputs for instance).

# Requires nengo, numpy, pandas, matplotlib, and seaborn.
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns

import nengo
import nengo.utils.numpy


# Decode a sine of the given frequency and return the RMSE of the
# representation for the given membrane time constant.
def go(freq, tau_rc, n_neurons=50, tau_probe=0.005, t=1.0, dt=0.001, seed=0):
    with nengo.Network() as model:
        u = nengo.Node(output=lambda t: np.sin(2*np.pi*freq*t))
        x = nengo.Ensemble(n_neurons, 1, seed=seed,
                           neuron_type=nengo.LIF(tau_rc=tau_rc))
        nengo.Connection(u, x, synapse=None)
        
        p_u = nengo.Probe(u, synapse=tau_probe)
        p_x = nengo.Probe(x, synapse=tau_probe)
        
    with nengo.Simulator(model, dt=dt, progress_bar=False) as sim:
        sim.run(t, progress_bar=False)
        
    return nengo.utils.numpy.rmse(sim.data[p_u], sim.data[p_x])

data = []
for seed in range(5):
    for freq in np.linspace(0, 50, 6):
        for tau_rc in [0.001, 0.005, 0.01, 0.02, 10000]:
            print(freq, tau_rc)
            data.append((freq, tau_rc, seed, go(freq, tau_rc, seed=seed)))
df = pd.DataFrame(data, columns=("Frequency", "tau_rc", "Seed", "RMSE"))

plt.figure()
for tau_rc in df.tau_rc.unique():
    sns.regplot(data=df[df['tau_rc'] == tau_rc], x_jitter=1.5,
                x="Frequency", y="RMSE", label=str(tau_rc))
plt.legend()
plt.show()
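To put numbers on the slope-inversely-proportional-to-tau_rc observation, one could fit a line per tau_rc group with np.polyfit. A minimal sketch on synthetic data (the real df comes from the run above, so the slopes and noise level here are made up):

```python
import numpy as np
import pandas as pd

# Sketch (synthetic data, not the simulation results above): fit
# RMSE ~ slope * Frequency + intercept for each tau_rc group.
rng = np.random.default_rng(0)
rows = []
for tau_rc, slope in [(0.005, 0.004), (0.02, 0.001)]:  # made-up slopes
    for freq in np.linspace(0, 50, 6):
        rmse = slope * freq + 0.05 + rng.normal(0, 1e-4)
        rows.append((freq, tau_rc, rmse))
df = pd.DataFrame(rows, columns=("Frequency", "tau_rc", "RMSE"))

slopes = {
    tau_rc: np.polyfit(g["Frequency"], g["RMSE"], 1)[0]
    for tau_rc, g in df.groupby("tau_rc")
}
print(slopes)  # fitted slopes recover ~0.004 and ~0.001
```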
