The subject of my research is simulated “systems of nerve cells” such as the human brain, but also those of insects such as ants, or of birds such as pigeons. The latter have a complexity that, in my assessment, should already be manageable on a commercially available computer.
I consider my research on plasticity (that is, the temporal course of the actions and reactions of individual neurons) to be concluded. What remains is the question of the overall structure, or topology, and whether it has a particular significance for the system. To investigate this, I use the method of reverse engineering.
Reverse engineering (“reverse development”) means extracting the structural elements from an existing, finished system [..] by examining its structures, states, and behavior. (→ Wikipedia)
The question in reverse engineering is always:
How must it be built to do what it can?
What can it do?
It can categorize and “form concepts”, which amounts to the same thing.
This is its essential basic function and property. All other functions and properties result from this ability.
How does it do that?
By building patterns of patterns, and after that patterns of patterns of patterns, and so on.
Each new pattern links what already exists and summarizes it into a new category at a higher level. At the same time, this is also a new superordinate concept, a new CONCEPT, an IDEA, or a REPRESENTATION, since all of these are only patterns of patterns, or patterns of patterns of patterns.
*) All of these assumptions are based on intensive considerations of the past few weeks.
Our science, our mathematics, our language are all patterns of patterns.
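To make the idea concrete, here is a minimal Python sketch of “patterns of patterns”. All names (`stripes`, `tiger`, `predator`) and the representation of a pattern as a set of lower-level patterns are my own illustration, not part of the author's model:

```python
# A minimal sketch of "patterns of patterns": a pattern is just a named
# set of lower-level patterns. (Illustrative only, not the author's model.)

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Pattern:
    name: str
    parts: frozenset = field(default_factory=frozenset)  # lower-level patterns

# First-order patterns, built directly from raw features.
stripes = Pattern("stripes")
four_legs = Pattern("four_legs")
tail = Pattern("tail")

# A second-order pattern links existing patterns into a new category,
# which is at the same time a new concept.
tiger = Pattern("tiger", frozenset({stripes, four_legs, tail}))

# A third-order pattern: a pattern of patterns of patterns.
predator = Pattern("predator", frozenset({tiger}))

def depth(p: Pattern) -> int:
    """Order of a pattern: 1 plus the deepest pattern it links."""
    return 1 if not p.parts else 1 + max(depth(q) for q in p.parts)

print(depth(stripes))   # 1
print(depth(tiger))     # 2
print(depth(predator))  # 3
```

Each new `Pattern` only references what already exists and gives it a name one level up, which is exactly the summarizing step described above.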
This also marks the special property of SNNs compared with other categorization techniques that we already know and use. In Bayesian methods, as used in spam filters, or in backpropagation networks, as used in OCR, “everything is linked with everything” on a single level. SNNs, however, have the additional ability to also link patterns of patterns.
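For contrast, here is what such a flat, single-level categorizer looks like, in the spirit of a Bayesian spam filter (a toy sketch with my own made-up training data; every word is linked directly to every category, with no hierarchy):

```python
# A flat, single-level categorizer in the spirit of a naive Bayesian spam
# filter: every feature links directly to every category.
# (Toy sketch with invented data, not a production filter.)

from collections import defaultdict
import math

counts = defaultdict(lambda: defaultdict(int))  # counts[category][word]
totals = defaultdict(int)                       # total words per category

def train(category: str, words: list):
    for w in words:
        counts[category][w] += 1
        totals[category] += 1

def score(category: str, words: list) -> float:
    # Log-likelihood with add-one smoothing.
    vocab = {w for c in counts.values() for w in c}
    return sum(
        math.log((counts[category][w] + 1) / (totals[category] + len(vocab)))
        for w in words
    )

train("spam", ["buy", "now", "cheap", "buy"])
train("ham", ["meeting", "tomorrow", "agenda"])

msg = ["buy", "cheap"]
best = max(("spam", "ham"), key=lambda c: score(c, msg))
print(best)  # spam
```

The model stays first-order: words map to categories, but the categories themselves are never combined into higher-level patterns.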
Topology of SNNs
Synapses are relatively short and connect only locally.
Axons, on the other hand, can grow very long (up to one meter).
Approximately 90% of axons cross-link locally; approximately 10% are longer and extend into distant layers, where synapses from that layer can attach.
Neurons connect with neighboring neurons on the same level (also through occasional spontaneous firing without stimulation), but not with distant neurons on the same level, because their axons and synapses cannot grow that far.
To describe this better, I will introduce two terms:
Assumption about structure:
The topology can be described in terms of planes or layers (onion model).
There are two types of connections:
- Horizontal linkage
- Vertical linkage
Horizontal linkage thus describes connections on the same level.
Vertical linkage, on the other hand, leads to higher or lower levels or layers.
We must also distinguish between local and distant cross-links. Distant cross-links are always, or at least predominantly, vertical.
Long axons would have to grow predominantly in the direction of a center, so that the synapses of neurons in lower layers can connect to them locally. Like the roots of trees, they grow far into the depths to pick up the impulses of lower neurons. Through local networking on their own level, they then represent an overarching concept of everything connected to them at a more elementary level.
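The layered topology with the two kinds of linkage can be sketched as a small graph generator. The layer sizes, the neighborhood radius, and the random choice of vertical targets are my own illustrative assumptions:

```python
# A sketch of the layered ("onion") topology described above: horizontal
# links only between nearby neurons on the same layer, vertical links
# reaching into the next layer up. All sizes are illustrative choices.

import random

random.seed(0)

LAYER_SIZES = [8, 4, 2]   # lower layers are wider (more elementary patterns)
HORIZONTAL_RADIUS = 1     # horizontal links only to immediate neighbors

def build_topology():
    edges = []
    for layer, size in enumerate(LAYER_SIZES):
        for i in range(size):
            # Horizontal linkage: same layer, only local neighbors.
            for j in range(i + 1, min(i + 1 + HORIZONTAL_RADIUS, size)):
                edges.append(((layer, i), (layer, j), "horizontal"))
            # Vertical linkage: a long axon reaches into the next layer up.
            if layer + 1 < len(LAYER_SIZES):
                target = random.randrange(LAYER_SIZES[layer + 1])
                edges.append(((layer, i), (layer + 1, target), "vertical"))
    return edges

edges = build_topology()
horizontal = [e for e in edges if e[2] == "horizontal"]
vertical = [e for e in edges if e[2] == "vertical"]
print(len(horizontal), len(vertical))
```

Distant links only ever appear as vertical edges here, matching the assumption that distant cross-links are predominantly vertical.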
Spontaneous firing without stimulation
Only the occasional spontaneous firing without stimulation (auto-fire) of individual neurons makes their vertical networking possible. Without it, they could never connect, because there would be no initial stimulation (a chicken-and-egg problem). Spontaneous firing, however, should not lead to additional horizontal networking, since that would create chaotic, meaningless links. In nature this is impossible anyway, because there is not enough space for the axons to extend widely on the same plane.
Spontaneous firing is essential to the operation.
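A minimal way to model this is a leaky integrate-and-fire neuron with a small auto-fire probability. All parameters below are illustrative, not taken from the author's model:

```python
# A minimal leaky integrate-and-fire neuron with occasional spontaneous
# firing ("auto-fire"). Parameters are illustrative assumptions.

import random

random.seed(1)

THRESHOLD = 1.0
LEAK = 0.9           # membrane potential decays each time step
AUTO_FIRE_P = 0.01   # small chance to fire without any input

def simulate(inputs, steps):
    """Return the time steps at which the neuron fired.

    inputs: dict mapping time step -> input current at that step.
    """
    v = 0.0
    spikes = []
    for t in range(steps):
        v = v * LEAK + inputs.get(t, 0.0)
        if v >= THRESHOLD or random.random() < AUTO_FIRE_P:
            spikes.append(t)
            v = 0.0  # reset after a spike
    return spikes

# Even with no input at all, the neuron still fires now and then,
# which is what bootstraps the vertical networking:
spontaneous = simulate({}, 1000)
print(len(spontaneous) > 0)
```

With `AUTO_FIRE_P = 0`, an unstimulated neuron would stay silent forever, which is exactly the chicken-and-egg problem described above.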
This contribution is part of my research and does not yet reflect any reliable knowledge.
My next task will be to continue modeling this topology in my snn-model.
My present model already shows clear tendencies toward concept formation. After a long computing time, however, one can also observe how everything becomes increasingly mixed together. So far everything happens on a single level, and this obviously allows only first-order category formation. Higher categories (second, third order, etc.) apparently cannot arise this way; that is my assumption based on my observations so far. This requires a special topology.
The present state of imaging research still does not allow precise conclusions about such topologies (or this knowledge simply has not reached me yet). I therefore try to approach the problem from this direction. The goal is a working computer model with abilities that we otherwise know only from nature.
Phineas and Ferb face the same problem in the animated series of the same name: What does it do? What do you do if you don’t know what it does? The answer in that episode is: reverse engineering! Don’t miss an episode and watch it together with your kids! 😉