Neural Dynamics for ML Engineers – Part 2: The CUBA LIF Neuron

Overview

So it's been a little while getting to this, but I finally got around to writing a sequel to my first post about discretizing Leaky Integrate and Fire (LIF) neurons for simulation on a computer. The purpose of this series is to provide an easy-to-follow introduction to neuromorphic computing for ML engineers, starting from the basics. I hope to cover everything needed to start actually applying neuromorphics to real-world problems. So let's not wait any longer. Let's learn about the CUBA LIF!

The CUrrent BAsed (CUBA) Leaky Integrate and Fire (LIF) neuron is a second-order neuron model with increased recurrent capabilities compared to the plain LIF. In this tutorial we will start by simplifying the LIF model to make it easier to run on digital hardware. Then we will introduce the CUBA LIF model and explain, through a specific example, why it is preferable. We will end the tutorial with a simple implementation of a CUBA LIF in Lava, an open source framework for simulating Spiking Neural Networks (SNNs).

Review of the LIF

Recall the dynamics of the LIF neuron model:

\tau_{m}\frac{\mathrm{d}u}{\mathrm{d}t} = -[u(t) - u_{\text{rest}}] + R\,I(t)

where u(t) is the membrane potential, u_{\text{rest}} is the resting potential of the neuron, \tau_{m} is a positive constant controlling the leak speed, and R is the resistance, which modulates the impact of the input current I(t) on the membrane potential. We were then able to discretize the equation and transform it into an easily simulatable form:

u[t+\Delta t] = \frac{\tau_{m}\,u[t] - \Delta t\,u[t] + \Delta t\,u_{\text{rest}} + R\,\Delta t\,I(t)}{\tau_{m}}

where \Delta t represents the timestep of the simulation, which allows us to control the fidelity of the simulation. We also covered the spiking mechanism of the LIF, which is triggered by the voltage crossing some threshold v_\text{th}. A spiking event is followed by a reset, where the voltage is set back to some base value \leq u_\text{rest}.
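
Written out, the spike-and-reset rule looks like this (here u_\text{reset} is just my label for whatever base value the voltage is reset to):

\[
s(t) = \begin{cases} 1 & \text{if } u(t) > v_{\text{th}} \\ 0 & \text{otherwise} \end{cases}, \qquad u(t) \leftarrow u_{\text{reset}} \;\; \text{whenever } s(t) = 1
\]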

Simplifying the LIF

The first thing we are going to do is rewrite the equation, splitting it into the sum of three terms:

\[ u[t+\Delta t] = \left(1 - \frac{\Delta t}{\tau_{m}}\right)u[t] + \left(\frac{\Delta t}{\tau_{m}}\right)u_{\text{rest}} + \left(\frac{R\,\Delta t}{\tau_{m}}\right)I(t) \]

Let's just assume u_{\text{rest}} = 0 to further simplify things. This leaves us with:

\[ u[t+\Delta t] = \left(1 - \frac{\Delta t}{\tau_{m}}\right)u[t] + \left(\frac{R\,\Delta t}{\tau_{m}}\right)I(t) \]

Now let's look at the coefficients \left(1 - \frac{\Delta t}{\tau_m}\right) and \left(\frac{R\,\Delta t}{\tau_{m}}\right).

In order for the neuron voltage to actually decay over time, \left(1 - \frac{\Delta t}{\tau_m}\right) must be less than 1 and greater than 0. (If this term were negative, the voltage would flip sign and oscillate between negative and positive every timestep.) To simplify the math, let's just define a parameter du = \frac{\Delta t}{\tau_m}.
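
To get a feel for the magnitude (these particular numbers are just an illustrative assumption, not measured values): with a simulation timestep of \Delta t = 1\,\text{ms} and a membrane time constant of \tau_m \approx 33\,\text{ms}, we get

\[ du = \frac{\Delta t}{\tau_m} = \frac{1\,\text{ms}}{33\,\text{ms}} \approx 0.03, \]

which is the value we will plug into the simulation below.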

Furthermore, we can just set \left(\frac{R\,\Delta t}{\tau_{m}}\right) = 1 for our purposes. Recall that I(t) is the input current into the neuron. If we assume this neuron is part of an actual SNN implemented on a chip, I(t) is just a synaptically weighted sum of spikes feeding into that neuron. We can assume that this sum will feed the 'right' value into the neuron every timestep, obviating the need to scale it again.
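
Concretely, writing s_j(t) \in \{0, 1\} for the spikes arriving from presynaptic neurons j over synapses with weights w_{ij} (my notation, not from the original derivation), the input current to neuron i is simply

\[ I_i(t) = \sum_j w_{ij}\, s_j(t), \]

so any scaling we would otherwise apply through R\,\Delta t/\tau_m can be absorbed into the weights w_{ij} themselves.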

Lastly, we can enforce unitless timesteps where \Delta t = 1 for real-time applications. This idea of a unitless timestep simplifies the math. However, it now becomes the duty of the engineer to apply the neural dynamics at regular, known timestep durations during run-time to ensure correct neuron behavior during training and inference. This leaves us with the following simplified LIF dynamics:

\[ u[t+1] = (1 - du) \cdot u[t] + I(t) \]

This is a good point to discuss why this implementation is preferable for digital logic based hardware. By digital logic, I mean any binary chip with synchronous, clock-based logic, like a CPU, an FPGA, or Loihi2. First of all, fewer parameters means less memory usage and fewer computational resources dedicated to multiplications. And by removing the \Delta t, we no longer have to explicitly handle time in the dynamics as if we were running a physical simulation. Instead, timing is implicit: a fixed number of clock cycles on the hardware directly corresponds to one timestep in the neuron model.

At this point we can simulate our neuron just like last time. We set du = 0.03 and the voltage threshold v_\text{th} to 2. I recommend coding this yourself from scratch before continuing.

import numpy as np
import matplotlib.pyplot as plt

sim_timesteps = 100
input_current = np.zeros(sim_timesteps)
input_current[[10, 20, 30, 43]] = 1  # input spikes at timesteps 10, 20, 30, and 43
#input_current[[27, 29, 30, 43]] = 1  # burst pattern (uncomment to try instead)

# the neurons output spike train
output_spike_train = np.zeros(sim_timesteps)
neuron_voltages = np.zeros(sim_timesteps)

# neuron params
vth = 2
du = 0.03  # leak parameter; (1 - du) is the per-timestep decay factor
u_t = 0  # u(t)

# Simulation loop: integrate, record, threshold, reset
for timestep, I_t in enumerate(input_current):
    u_t = (1 - du) * u_t + I_t
    neuron_voltages[timestep] = u_t
    if u_t > vth:
        output_spike_train[timestep] = 1
        u_t = 0

# Plotting
def plot_neuron_simulation(times, neuron_voltages, vth, input_spike_times, output_spike_times, title='Neuron Simulation'):
    plt.figure(figsize=(10, 6))
    plt.plot(times, neuron_voltages, label='Membrane Potential')
    plt.axhline(y=vth, color='r', linestyle='--', label='Threshold')
    # Plot input and output spike times (scaling for visibility)
    plt.vlines(input_spike_times, -vth, -vth * 0.50, color='g', linestyle='--', label='Input Spikes (Not to scale)')
    plt.vlines(output_spike_times, vth, vth * 1.5, color='black', linestyle='--', label='Output Spikes (Not to scale)')
    plt.gca().set_yticks([tick for tick in plt.gca().get_yticks() if tick >= 0])
    plt.xlabel('Time (timesteps)')
    plt.ylabel('Membrane Potential')
    plt.title(title)
    plt.legend(loc='upper right')
    plt.tight_layout()
    plt.show()

# Plotting
times = np.arange(sim_timesteps)

# Identifying the exact spike times for input and output
input_spike_times = times[input_current == 1]
output_spike_times = times[output_spike_train == 1]
plot_neuron_simulation(times, neuron_voltages, vth, input_spike_times, output_spike_times)

CUBA LIF

The problem with our simplified LIF is that after the neuron fires, all recurrent memory is effectively wiped. For example, consider the same neuron which receives a burst pattern of input spikes instead:

Even though the input spike trains are quite different, the output spike trains and post-spike voltages are exactly the same. In other words, the neuron has no way of knowing whether it has just received a burst of spikes or a steady stream. This spike burst could have been carrying useful excitatory information to the neuron, but our neural dynamics were not sophisticated enough to exploit it. So, how do we give our LIF useful memory to increase its representational power?

First, we change the meaning of u from a voltage term to a current term and remove the reset mechanism from u. We then introduce a new voltage term v(t), which is driven by the current u(t). The updated CUBA dynamics are shown below:

\[
u[t+1] = (1 - du) \cdot u[t] + I(t) \\
v[t+1] = (1 - dv) \cdot v[t] + u[t] \\
s[t+1] = \begin{cases}
1 & \text{if } v[t+1] > v_{\text{th}} \\
0 & \text{otherwise}
\end{cases}
\]

Importantly, the voltage v still resets after a spike event (i.e. s[t] = 1), unlike the current u. This is what fundamentally allows the neuron to maintain state across a spiking event. Since v[t] takes u[t], the output of another first-order update, as its input, the overall system is second order.

Let's re-run the two spike patterns from before with these new dynamics. For this neuron, we set du = 0.85 and dv = 0.03.

With a steady spike stream every 10 timesteps, the first neuron simulation looks very similar to our original simple LIF implementation. (In fact, if we set du = 1, then they will be exactly the same.) However, a burst spike train causes a different output spiking pattern, indicating that the addition of a current term effectively increases the memory of the neuron.

Exercise: Try to update the simulation to incorporate the full CUBA dynamics with u(t) and v(t). One possible solution is sketched below.
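
Here is a minimal sketch of one way to do it, reusing the setup and the plot_neuron_simulation helper from the NumPy simulation earlier in this post. (Note that v is updated with the freshly computed u; that is a modeling choice, and feeding it the previous u instead only delays the current's effect by one timestep.)

import numpy as np

sim_timesteps = 100
input_current = np.zeros(sim_timesteps)
input_current[[27, 29, 30, 43]] = 1  # burst pattern

output_spike_train = np.zeros(sim_timesteps)
neuron_voltages = np.zeros(sim_timesteps)

# neuron params
vth = 2
du = 0.85  # current decay parameter
dv = 0.03  # voltage decay parameter
u_t = 0.0  # current u(t)
v_t = 0.0  # voltage v(t)

for timestep, I_t in enumerate(input_current):
    u_t = (1 - du) * u_t + I_t  # current integrates the input and decays quickly
    v_t = (1 - dv) * v_t + u_t  # voltage integrates the current and decays slowly
    neuron_voltages[timestep] = v_t
    if v_t > vth:
        output_spike_train[timestep] = 1
        v_t = 0  # only the voltage resets; the current u keeps its state

times = np.arange(sim_timesteps)
input_spike_times = times[input_current == 1]
output_spike_times = times[output_spike_train == 1]
plot_neuron_simulation(times, neuron_voltages, vth, input_spike_times,
                       output_spike_times, 'CUBA LIF (NumPy sketch, burst input)')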

Lava Simulation of the CUBA LIF

We won't get very far in our study of neural dynamics if we are forced to manually code our SNNs from scratch. Fortunately for us, there is an open source simulator for spiking neural networks called Lava, managed by Intel. The great thing about Lava is that it supports a wide variety of neuron models, can accurately simulate fixed-point behavior, and is Loihi2 compatible. Furthermore, there are a variety of useful tutorials available for Lava which you should definitely check out.

Lava functions through Processes and Process Models. Processes are top-level specifications of what parameters a given neuron model can take, like v_\text{th}, du, and dv in the case of CUBA neurons. Process Models are the actual implementations of the neuron models, designed for specific precision and/or hardware specifications. For example, SNNs with different weight precisions can all be simulated in Lava as if they were running directly on Intel's Loihi2 chip.
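
As a small illustration of how a run configuration picks which Process Model executes: the "floating_pt" tag is the one used later in this post, while "fixed_pt" is, to my knowledge, the tag Lava uses for its bit-accurate fixed-point models, so treat that line as an assumption to check against your Lava version.

from lava.magma.core.run_configs import Loihi2SimCfg

# The same LIF Process can be executed by different Process Models;
# the selected tag decides which implementation runs.
float_cfg = Loihi2SimCfg(select_tag="floating_pt")  # floating point reference behavior
fixed_cfg = Loihi2SimCfg(select_tag="fixed_pt")     # Loihi2-like fixed point behavior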

Lava also provides RingBuffer processes for injecting spike trains into SNNs and for reading them back out. Monitors can be defined which allow us to inspect a neuron's internal signal values like voltage or current. Data is passed between Processes in Lava via ports, which are similar to signals in VHDL (VHSIC Hardware Description Language).

So what are we waiting for? Let's try to re-create the plot above using Lava:

from lava.proc.io.source import RingBuffer as RingBufferSend
from lava.proc.io.sink import RingBuffer as RingBufferReceive
from lava.proc.lif.process import LIF
from lava.proc.monitor.process import Monitor
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi2SimCfg
import numpy as np
import matplotlib.pyplot as plt

sim_timesteps = 100
input_current = np.zeros(sim_timesteps)
input_current[[27, 29, 30, 43]] = 1  # burst pattern: spikes at timesteps 27, 29, 30, and 43
input_current_proc = RingBufferSend(data = input_current[None, ...])
vth = 2
lif = LIF(shape= (1, ), du = .85, dv = .03, vth = vth)

# Voltage Monitor
voltage_monitor = Monitor()
voltage_monitor.probe(target = lif.v, num_steps=sim_timesteps)

# Output Spike Recorder
output_spike_proc = RingBufferReceive(shape = (1, ), buffer = sim_timesteps)

# Connect graph together and run
input_current_proc.s_out.connect(lif.a_in)
lif.s_out.connect(output_spike_proc.a_in)

run_config = Loihi2SimCfg(select_tag="floating_pt")
lif.run(condition = RunSteps(num_steps = sim_timesteps, blocking = True), run_cfg=run_config)  # blocking=True waits for the run to finish before continuing
neuron_voltages = voltage_monitor.get_data()[lif.name]["v"].flatten()
output_spike_train = output_spike_proc.data.get().flatten()
lif.stop()

# Plotting
times = np.arange(sim_timesteps)

# Identifying the exact spike times for input and output
input_spike_times = times[input_current == 1]
output_spike_times = times[output_spike_train == 1]

# Reuse plotting function from before
plot_neuron_simulation(times, neuron_voltages, vth, input_spike_times, output_spike_times,
                       'CUBA LIF Simulation (Burst spike input)')

The only discrepancy between Lava and my own implementation is that I show the voltage crossing the voltage threshold for demonstration purposes, whereas Lava does not. To learn more about how Lava works and some of its more advanced features, please see the references below and the other end-to-end tutorials available in the Lava repository.

Conclusion

In this tutorial we explained the intuition behind the CUBA neuron. By simplifying the LIF model, we made it easier to implement on digital logic based hardware. By transitioning the LIF voltage formula from a first-order to a second-order equation, we gained recurrent memory capacity that is not erased by spiking behavior.

For future tutorials, I think I will focus on Hebbian Learning and Reward Modulated Spike Time Dependent Plasticity (R-STDP), although I do plan to make tutorials about Resonate and Fire (R&F) Neurons and Surrogate Gradient (SG) offline learning methods as well. If you have any suggestions for future articles, please feel free to leave a comment.

References

  1. Complete CUBA Neural Dynamics: https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2023.1244675/full
  2. Simplified CUBA Neural Dynamics: https://lava-nc.org/lava-lib-dl/slayer/notebooks/neuron_dynamics/dynamics.html
  3. Simulation of a CUBA LIF: https://youtu.be/43hsjEplkJ8