Activation Function in Neural Networks

We always use a simple Hard Limit / Binary Step function with an adaptive threshold as our Activation Function, instead of the most commonly used Sigmoid Function.
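To make the term concrete, here is a minimal sketch of a Hard Limit / Binary Step activation. The function name `binary_step` and the sample values are illustrative assumptions, not taken from the original text:

```python
import numpy as np

def binary_step(v, threshold):
    """Hard Limit / Binary Step activation: output 1 (a spike) if the
    input potential v reaches the threshold, otherwise output 0."""
    return np.where(v >= threshold, 1.0, 0.0)

# The same inputs produce different spike patterns as the threshold changes.
v = np.array([0.2, 0.8, 1.3, 0.5])
print(binary_step(v, threshold=1.0))   # [0. 0. 1. 0.]
print(binary_step(v, threshold=0.4))   # [0. 1. 1. 1.]
```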

The adaptive threshold models a behaviour over time, which is fundamental to the functioning of a Spiking Neural Network and which can't be achieved with other methods.

(You may still want to use a Sigmoid Function to model the adaptive behaviour of the threshold, or anything else; it really doesn't matter that much.)
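One possible way to picture the adaptive threshold over time is sketched below: the threshold jumps after every spike and then relaxes back toward its resting value. The specific rule (additive bump plus exponential decay) and all constants are assumptions chosen for illustration; the text does not prescribe how the adaptation is modelled:

```python
def run_adaptive_neuron(inputs, base_threshold=1.0, bump=0.5, decay=0.9):
    """Simulate a single spiking neuron whose threshold adapts over time:
    it rises by `bump` whenever the neuron fires and decays back toward
    `base_threshold` on every time step. All constants are illustrative."""
    threshold = base_threshold
    potential = 0.0
    spikes = []
    for x in inputs:
        potential += x                       # accumulate input over time
        fired = potential >= threshold       # Hard Limit / Binary Step decision
        spikes.append(1 if fired else 0)
        if fired:
            potential = 0.0                  # reset after a spike
            threshold += bump                # raise the bar (adaptation)
        # relax the threshold back toward its resting value
        threshold = base_threshold + decay * (threshold - base_threshold)
    return spikes

# A constant input stream produces an irregular spike pattern,
# because the threshold keeps adapting between spikes.
print(run_adaptive_neuron([0.6, 0.6, 0.6, 0.6, 0.6, 0.6]))  # [0, 1, 0, 0, 1, 0]
```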

 

And here comes the Big Trick:

Activate sequences in sequence, and not at the same time!

Detlef Kroll

If you do not know what is meant by this, you will certainly find out later 😉