The Optimization of Neural Representations through Natural Selection
A short reminder: by "Neural Representation" (NR) I mean the "picture" of a piece of information as it is expressed by neurons in the brain, for example through a pattern of electrical impulses between certain individual neurons.
I build on the theories of intuitive fragmentation, universal natural selection, and universal consciousness from these earlier posts:
Intuitive fragmentation
Universal natural selection
Awareness
According to these, the development of the brain is controlled by genes and experience.
The recorded stimuli, and the way they are processed, continuously transform a person's brain. Each input is reflected in a new NR. The more often an NR is used, the more the neural connections involved are reinforced; if it is used rarely or not at all, it wastes away.
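This use-it-or-lose-it dynamic can be sketched as a toy simulation. Everything here is an illustrative assumption of mine (the class name, the scalar "strength", and the reinforcement and decay factors), not a model proposed in the text:

```python
# Toy sketch of use-dependent reinforcement and decay.
# Names and constants are illustrative assumptions, not the author's model.

class NeuralRepresentation:
    """An NR reduced to a single scalar connection strength."""

    def __init__(self, name, strength=1.0):
        self.name = name
        self.strength = strength

    def use(self):
        # Each use reinforces the neural connections involved.
        self.strength *= 1.2

    def decay(self):
        # Without use, the representation gradually wastes away.
        self.strength *= 0.9

frequent = NeuralRepresentation("daily commute")
rare = NeuralRepresentation("one-off event")

for day in range(30):
    frequent.use()     # exercised every day...
    frequent.decay()   # ...so reinforcement outweighs the daily decay
    rare.decay()       # never used again, so it only decays

assert frequent.strength > rare.strength
```

After 30 steps the frequently used NR has grown stronger while the unused one has almost vanished, which is the competitive setup the following paragraphs build on.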
What effect does this functionality have on memories? Obviously there is a competition between different NRs, and the winners are distinguished by particularly frequent use, because frequent use is what keeps an NR alive.
But a particular experience, or the corresponding input, recurs with vanishingly small probability, and so does its NR. On the surface, then, almost nothing should remain in our memory. That would only be true if every NR were completely unique. In fact, it is easy to find content overlaps between different experiences, so an NR can be understood as a combination and/or modification of other NRs.
I will call such an overlap between NRs an "Integrated Neural Representation", or "INR" for short.
An INR is strong if it is integrated into relatively many, frequently used NRs; it is weak if the opposite is the case.
If these INRs also exist at the neuronal level, then using any one of the NRs that contain an INR means simultaneously using that INR itself.
If an INR is found in many NRs, the frequency of its use is correspondingly high, which leads to frequent reinforcement of its neural connections.
This in turn gives it an evolutionary advantage over weaker INRs, with the consequence that strong INRs in principle prevail over weak ones. Since the NR of an individual input almost never repeats, only INRs are viable in the brain in the long run. We can therefore remember things much better when a new NR can be translated completely into existing NRs: competition with existing NRs is avoided as far as possible, because existing neural structures are reused. The result of this recombination of NRs is of course itself stored as an NR, or rather as a combination of INRs. Again the principle applies: stronger INRs prevail.
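The core of this selection argument is that an INR shared by many NRs is exercised whenever any of its host NRs is used, so its effective use count is the sum over those hosts. A minimal sketch, in which the experiences, their components, and the use counts are all invented for illustration:

```python
# Sketch: an INR's effective use is the sum of its host NRs' use counts.
# All experiences, components, and counts below are invented examples.
from collections import Counter

# Each NR is modeled as a set of component sub-representations;
# components shared between NRs are the INR candidates.
nrs = {
    "breakfast_monday":  {"kitchen", "coffee", "toast"},
    "breakfast_tuesday": {"kitchen", "coffee", "cereal"},
    "dinner_party":      {"kitchen", "guests", "wine"},
}
use_count = {"breakfast_monday": 5, "breakfast_tuesday": 4, "dinner_party": 1}

inr_strength = Counter()
for nr, components in nrs.items():
    for c in components:
        # Using an NR simultaneously uses every INR it contains.
        inr_strength[c] += use_count[nr]

# "kitchen" appears in all three NRs, so it accumulates the most use
# and prevails over rarely exercised components like "wine".
print(inr_strength.most_common(3))
# → [('kitchen', 10), ('coffee', 9), ('toast', 5)]
```

The widely shared component ends up strongest even though no single experience was repeated often, which is exactly why only INRs remain viable.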
From now on, I will call NRs that are constructed at least in part from existing NRs "Reconstructed Neural Representations", or RNRs.
This mechanism is easy to observe in practice:
When you learn about a topic by simply memorizing information, it does not take long until you forget it. If, on the other hand, you understand what that information means, how its parts relate to each other, and perhaps even detect connections and parallels to other topics, you will retain it much better. What is going on here?
Any information that permits inferences about the correlations between the individual memorized NRs and existing INRs helps with an intelligent translation into an RNR.
Even if a new NR can be fully integrated, optimal efficiency is not necessarily reached, because there may be completely different alternative ways to integrate an NR. The stronger the average INRs into which the new NR is integrated, the more efficient the integration. And the better one remembers the individual components of a piece of information, the clearer the memory of the resulting RNR.
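The choice between alternative integrations can be sketched as a scoring problem: under the assumption stated above, the decomposition with the strongest average INRs wins. The strength values and component names here are made up for illustration:

```python
# Hypothetical sketch: among alternative decompositions of a new NR into
# existing INRs, the one with the strongest average INRs is most efficient.
# Strength values and component names are invented for illustration.

inr_strength = {"kitchen": 10.0, "coffee": 9.0, "candles": 1.5, "guests": 1.0}

# Two alternative decompositions of the same new experience.
alternatives = [
    ["kitchen", "coffee"],   # built from strong, familiar components
    ["candles", "guests"],   # built from weak, rarely used components
]

def integration_efficiency(components):
    # The stronger the average INR, the more efficient the integration.
    return sum(inr_strength[c] for c in components) / len(components)

best = max(alternatives, key=integration_efficiency)
print(best)  # → ['kitchen', 'coffee']
```

The resulting RNR built from strong components is, per the argument above, also the one that will be remembered most clearly.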
The natural selection of strong INRs inevitably leads to information in our brains being stored not as isolated packages, but as a hierarchical structure that works with as few basic INRs as possible, from which all other INRs and NRs can be reconstructed.