Overview
Are you an ML engineer interested in developing neuromorphic algorithms? Are you really good with backpropagation, PyTorch, and TensorFlow but a little rusty on physics and diff eq? If you answered ‘yes’ to both questions, then you have come to the right place. This tutorial is designed to give ML engineers a guide to understanding one of the most fundamental building blocks of neuromorphic computing: the Leaky Integrate and Fire (LIF) neuron. We will develop this understanding by walking through and experimenting with a simulation of a LIF neuron in Python. While this simulation will be relatively simple, the discretization method we use to model the neuron is highly relevant in neuromorphic computing, where it is used to simulate populations of interconnected neurons. Furthermore, manually implementing and experimenting with the neurons will give us greater understanding and intuition about how they behave in practice.
What are ‘Neural Dynamics’?
Neural dynamics refers to the equations that govern how neurons behave over time. Unlike the artificial neurons used in deep learning, spiking neurons are naturally recurrent, meaning that their internal state depends not only on the input to the neuron but also on the neuron’s previous state. In many cases these equations describe the dynamics of an actual electrical circuit (an RC circuit, specifically). Understanding neural dynamics is essential to developing neuromorphic algorithms. Without a solid knowledge base, we cannot understand more advanced concepts such as online learning in Spiking Neural Networks (SNNs). Attempting to use SNNs without understanding neural dynamics is like trying to debug ANNs without knowing gradient descent: when things get complicated or go wrong, we find ourselves unable to diagnose the issues. We will start with a review of differential equations and how to discretize them.
A Review of Differential Equations
A differential equation is a mathematical equation that relates a function to its derivatives. In simple terms, this means that the rates of change of the system (the derivatives) depend on the state of the system (the variables). This kind of relationship describes a diverse array of real-world behaviors, ranging from biological neurons to how quickly an oven cools off. Let’s start with the latter example of an oven cooling off in a home (we will return to neural dynamics later). From experience we know that if you heat an oven to a high temperature, open it, and then turn it off, the internal temperature of the oven (the dependent variable) will rapidly start to drop. The rate of temperature decrease (or derivative) will initially be very high, but as the oven begins to cool off, the cooling rate will slow down. In other words, the rate of change of the system (the derivative of temperature) depends on the current state of the system (the temperature). This behavior is captured by Newton’s Law of Cooling shown below:
\[
\frac{dT}{dt} = -k\,(T - T_{\text{house}})
\]
where $k$ is a positive cooling constant, $T$ is the current oven temperature, and $T_{\text{house}}$ is the house temperature. For simplicity we will refer to $T_{\text{house}}$ as simply $T_a$ (for ambient) from now on.
The first thing I typically do when I see an equation is check whether it matches up with my expectations about the real world. Let us ask ourselves some questions:
- What happens if the temperature of the oven matches the temperature of the house? We set $T = T_a$ and observe that the derivative of temperature with respect to time is $\frac{dT}{dt} = 0$. This matches our expectation that the temperature of the oven will not change in this case.
- What happens if the temperature of the oven is $n$ times higher than the ambient temperature? We set $T = nT_a$ and observe that $\frac{dT}{dt} = -k\,(n - 1)\,T_a$, which grows in magnitude with $n$. This matches our expectation that the higher the oven temperature is, the faster it will initially cool off.
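To make these sanity checks concrete, here is a minimal Python sketch (using the same values of $k$ and $T_a$ that appear in the simulation below):

# Evaluate the right-hand side of Newton's Law of Cooling
k = 0.05    # cooling constant in s^-1
T_a = 23.0  # ambient (house) temperature in degrees Celsius

def dT_dt(T):
    return -k * (T - T_a)

print(dT_dt(23.0))   # zero: no change when the oven matches the house
print(dT_dt(230.0))  # about -10.35 deg C/s: a much hotter oven cools quickly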
Discretizing and Simulating Differential Equations
To really test our understanding of this equation, let’s try discretizing it so we can simulate it on a computer. Discretization involves breaking down the continuous dynamics into small discrete time steps, allowing us to approximate the continuous analytical solution. Since a derivative is just an instantaneous rate of change (or slope), we can approximately solve a differential equation by choosing a tiny timestep and projecting the dependent variables into the future along this slope. (By the way, when someone refers to ‘solving’ a differential equation, they are often referring to predicting the variable states in the future.) This technique is particularly valuable in cases where the dynamics of a system are non-linear and cannot be solved analytically, such as with the three-body problem. Let’s start by unpacking the definition of a derivative:
\[
\frac{dT}{dt} = \lim_{\Delta t \to 0} \frac{T(t + \Delta t) - T(t)}{\Delta t}
\]
The important thing to note in this equation is that $\Delta t \to 0$ literally means “an infinitesimal change in time”. In engineer speak this means we can set this abstract variable to 1 ms and call it a day. 🙂 Now all we need to do is solve for $T(t + \Delta t)$ in terms of $T(t)$ to finish our discretization:
\[
T(t + \Delta t) \approx T(t) - k\,\big(T(t) - T_a\big)\,\Delta t
\]
Now let’s see if we can code up this equation in Python. Our oven’s starting temperature $T_0$ will be 230 °C, and the ambient temperature of the house $T_a$ will be 23 °C. We assume that $k$ is known and equal to 0.05 s⁻¹. We set our $\Delta t$ to 1 ms. Let’s attempt a simulation in Python:
import numpy as np
import matplotlib.pyplot as plt
# Constants
k = 0.05 # Cooling constant in s^-1
T_a = 23 # Ambient temperature in degrees Celsius
T_0 = 230 # Initial oven temperature in degrees Celsius
delta_t = 0.001 # Time step in seconds
total_time = 60 # Total simulation time in seconds
# Number of time steps
num_steps = int(total_time / delta_t)
T = np.zeros(num_steps) # Store temperatures here
T_t = T_0 # current temperature
# Discretizing and solving the equation
for t_idx in range(num_steps):
    # One Euler step: T(t + delta_t) = T(t) - k * (T(t) - T_a) * delta_t
    T_t = -k * (T_t - T_a) * delta_t + T_t
    T[t_idx] = T_t
# Plotting the results
plt.plot(np.arange(0, total_time, delta_t), T)
plt.xlabel('Time (s)')
plt.ylabel('Temperature (°C)')
plt.title('Cooling of an Oven')
plt.grid(True)
plt.show()
As an additional exercise I recommend solving the differential equation analytically and comparing the two solutions.
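If you want to check your work, the analytical solution is $T(t) = T_a + (T_0 - T_a)\,e^{-kt}$, and a few extra lines (reusing the variables from the script above) will quantify how close the Euler approximation gets:

# Compare the Euler simulation against the analytical solution
# T[i] holds the Euler estimate after step i + 1, i.e. at time (i + 1) * delta_t
t = (np.arange(num_steps) + 1) * delta_t
T_exact = T_a + (T_0 - T_a) * np.exp(-k * t)
print(np.max(np.abs(T - T_exact)))  # maximum error; shrinks as delta_t shrinks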
Leaky Integrate and Fire (LIF) Neuron
With the review material out of the way, we can return to our introduction to neural dynamics. Before diving into the equation, let’s refresh our memories about the fundamental idea of a biologically realistic neuron. Real neurons accumulate (or integrate) incoming synaptic current. This influx of current increases the membrane potential. When a certain membrane potential threshold is reached, the neuron will fire, or produce an output spike (also known as a post-synaptic action potential). These spike events are followed by a refractory (or recovery) period during which it is difficult for the neuron to spike again. This regulation prevents neurons from spiking too frequently and in rapid succession, a pattern that resembles seizure-like activity in the brain. This can be achieved in practice by resetting the voltage to a low base value after the neuron spikes. Another essential characteristic of neurons is their ‘leakiness,’ which refers to the gradual decrease in voltage in the absence of incoming signals. This property ensures that the membrane potential returns to a baseline level when the neuron is not activated. As stated earlier, this behavior can be exactly replicated by the electrical RC circuit shown and graphed below:
With that out of the way, I present the LIF first-order differential equation:
\[
\tau_m \frac{du}{dt} = -(u - u_{\text{rest}}) + R\,I(t)
\]
where $u$ is the membrane potential, $u_{\text{rest}}$ is the resting potential of the neuron, $\tau_m$ is a positive constant controlling the leak speed, and $R$ is the resistance, which modulates the impact of the input current $I(t)$ on the membrane potential. Let’s ask ourselves an intuitive question to make sure we understand this equation:
- In the absence of input current, what happens to the membrane potential of the neuron? We set $I(t) = 0$ and find that $\frac{du}{dt} = -\frac{u - u_{\text{rest}}}{\tau_m}$. This matches our expectation that the membrane potential should decay towards its resting potential over time. It also informs us that the smaller $\tau_m$ becomes, the faster the neuron will ‘leak’. Notice how Newton’s law of cooling shares this same form.
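In fact, solving this leak-only case gives the same exponential decay we saw with the cooling oven (easily verified by substituting it back into the equation):
\[
u(t) = u_{\text{rest}} + \big(u(0) - u_{\text{rest}}\big)\, e^{-t / \tau_m}
\]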
With that out of the way we will attempt to discretize the neural dynamics so we can write a simulation in python later. I recommend trying this on your own before continuing.
First we recall our classic definition of a derivative:
\[
\frac{du}{dt} = \lim_{\Delta t \to 0} \frac{u(t + \Delta t) - u(t)}{\Delta t}
\]
To complete our discretization we solve for $u[t + \Delta t]$:
\[
u[t + \Delta t] = u[t] + \frac{\Delta t}{\tau_m}\Big(-(u[t] - u_{\text{rest}}) + R\,I[t]\Big)
\]
A spike is transmitted, and the voltage is simultaneously reset, when $u[t]$ crosses a constant voltage threshold $v_{\text{th}}$. We define this binary spike transmission function below:
\[
s(t) = \begin{cases}
1 & \text{if } u[t] > v_{\text{th}} \\
0 & \text{otherwise}
\end{cases}
\]
To simulate our neuron we start by defining some initial conditions and running for a small amount of time. It’s important to note that the neuron parameters chosen in this example are not intended to reflect biologically realistic values or behavior. For example, in biological neurons a refractory period can be induced by resetting the voltage to a value below the resting potential instead of to 0.
Moreover, for neuromorphic computing applications it can be beneficial to treat neuron parameters as unitless and set $R = 1$. I plan to talk about this aspect more in future articles. For now, let’s just simulate our LIF as we have defined it above:
# Importing required libraries
import numpy as np
import matplotlib.pyplot as plt
# Given parameters
tau_m = 10e-3 # in seconds (10 ms)
u_rest = 0.0 # Resting potential in volts
threshold = 1.75e-3 # Threshold in volts
delta_t = 1e-5 # Time step in seconds (1/100 ms)
R = 1.52 # Resistance in ohms
T = 0.1 # Simulation time in seconds (100 ms)
# Time array and initialization
times = np.arange(0, T, delta_t)
u = np.zeros_like(times) # store voltages here
I = np.zeros_like(times) # store input currents here
# Input spike times and current injection
input_spike_times = [0.025, 0.045, 0.065] # in seconds
for spike_time in input_spike_times:
    I[int(spike_time / delta_t)] = 1  # 1 amp current input
# STOP HERE: TRY CODING THE NEURON SIMULATION YOURSELF
# Neuron simulation
output_spike_times = []
u_t = 0 # current voltage
for t_idx in range(len(times)):
    # Update membrane voltage (one forward Euler step of the LIF equation)
    u_t = (tau_m * u_t - delta_t * u_t + delta_t * u_rest + R * delta_t * I[t_idx]) / tau_m
    u[t_idx] = u_t
    # Spike and reset once the voltage crosses the threshold
    if u_t >= threshold:
        output_spike_times.append(times[t_idx])
        u_t = 0
# Plotting
plt.figure(figsize=(10, 6))
plt.plot(times, u, label='Membrane Potential (u)')
plt.axhline(y=threshold, color='r', linestyle='--', label='Threshold')
# Plot input, output spike times (Scaling makes it look better)
plt.vlines(input_spike_times, -threshold, -threshold * .50,
           color='g', linestyle='--', label='Input Spikes (Unitless)')
plt.vlines(output_spike_times, threshold, threshold * 1.5,
           color='black', linestyle='--', label='Output Spikes (Unitless)')
plt.gca().set_yticks([tick for tick in plt.gca().get_yticks() if tick >= 0])
plt.xlabel('Time (s)')
plt.ylabel('Membrane Potential (V)')
plt.title('Neuron Simulation')
plt.legend()
plt.tight_layout()
plt.show()
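As a sanity check on the resulting plot: with these parameters, each input spike bumps the membrane potential by $\frac{\Delta t}{\tau_m} R I = \frac{10^{-5}}{10^{-2}} \times 1.52 \times 1 = 1.52$ mV, which is below the 1.75 mV threshold, so no single input spike can fire the neuron. The input spikes are spaced $20\text{ ms} = 2\tau_m$ apart, so roughly $e^{-2} \approx 13.5\%$ of each earlier contribution survives until the next spike arrives; this residual voltage is what finally pushes the neuron over the threshold on the third input spike.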
Experimenting with the neuron parameters is a good way to gain additional intuition about how LIFs behave. (Try modifying the leak parameter, using a constant input current, or varying the input spike patterns.)
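For instance, here is a minimal sketch of the constant-input-current variation (it reuses the parameter definitions from the simulation above; the current value is an arbitrary example chosen so the steady-state voltage $R I = 7.6$ mV sits well above the threshold):

# Variation: drive the neuron with a constant current instead of discrete spikes
# (rerun the neuron simulation and plotting code after this line)
I = np.full_like(times, 0.005)  # constant 5 mA input, arbitrary example value

With a constant supra-threshold current the neuron should charge up, fire, reset, and repeat, producing a regular output spike train whose rate depends on $\tau_m$, $R$, and the threshold.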
Conclusion
In this tutorial we have covered the basics of the Leaky Integrate and Fire (LIF) neuron, the fundamental building block of many neuromorphic applications. I hope that you have come away with the impression that SNNs are fundamentally different from ANNs due to their time-sensitive neural dynamics. To reiterate, these differences mean that diagnosing neuromorphic algorithms requires a different set of insights than debugging ANNs. For instance, if an SNN is misbehaving, it may be necessary to visualize and track its spiking activity over time. Furthermore, changing a hyperparameter in an SNN (like the voltage decay) is easier if you know exactly what that hyperparameter controls.
For future tutorials, I plan to cover a larger variety of neuron models with more complex dynamics such as Resonate and Fire (R&F) neurons and CUrrent BAsed (CUBA) LIF neurons. It would also be great to cover learning mechanisms for SNNs as well as data encoding strategies. Please feel free to contact me or leave a comment if you have any questions or tutorial requests.
References
- Gerstner, Wulfram, et al. Neuronal dynamics: From single neurons to networks and models of cognition. Cambridge University Press, 2014.
- Image of the Leaky Integrate and Fire graphic was taken from Wikipedia (thank you, user Spiking16, for making this graphic).