Evolutionary algorithms and brain functioning
A person's DNA has about 3 billion base pairs, each of which can take one of four possible values, so each pair carries two bits, for a total information capacity of about 6 billion bits. The number of neurons in the human brain is about 30 times the number of base pairs in the human genome, or roughly 10¹¹, and each neuron connects to about ten thousand other neurons, for a total of about 10¹⁵ neural connections, whose strengths can be varied. Each neural connection could conceivably serve as a bit, since it can be strong or weak. This huge number of neural connections is what enables our brains to store a large amount of information and perform such a wide variety of functions. However, the numbers tell us that the human genome does not have the capacity to store more than a tiny fraction of the information that is ultimately stored in the brain. How, then, can something relatively small and definable like a set of DNA molecules set in motion the processes that create a functioning human being with a highly complex brain?
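The back-of-the-envelope comparison above can be checked with a few lines of arithmetic (the figures are the rough estimates quoted in the text, not precise measurements):

```python
# Rough comparison of genome vs. brain information capacity,
# using the approximate figures quoted above.
base_pairs = 3e9             # base pairs in the human genome
bits_per_pair = 2            # four possible values -> log2(4) = 2 bits
genome_bits = base_pairs * bits_per_pair        # about 6e9 bits

neurons = 1e11               # neurons in the human brain
connections_per_neuron = 1e4
brain_connections = neurons * connections_per_neuron   # about 1e15

# Treating each connection as at least one bit, the brain's capacity
# exceeds the genome's by several orders of magnitude.
print(genome_bits)                      # 6000000000.0
print(brain_connections / genome_bits)  # ~1.7e5
```

Even under this crude one-bit-per-connection assumption, the gap is a factor of more than a hundred thousand, which is the motivation for the argument that follows.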
The answer lies in the fact that many of the brain's functions are encoded in response to the environment. This happens while the person is in the womb, in early childhood, and indeed throughout life. There must be some basic mechanism that enables the brain to write efficient "programs" for itself for the performance of basic tasks. This mechanism, then, could be encoded in the brain from the beginning, saving the space that would otherwise be taken up by the multitude of programs needed to function in daily life. One way to achieve, in essence, a program for writing programs is through the use of evolutionary algorithms.
When using an evolutionary algorithm to solve a problem, the programmer must supply a set of basic functions that the program should be able to use in order to accomplish its goal, as well as supply a definition of how close the program came to achieving its goal.
A relatively simple example is as follows: an object is placed somewhere within reach of a robotic arm. Given the position of the object, a program must be devised to cause the arm to reach out and touch the object in the shortest possible time. This is a non-trivial geometrical problem, since the commands sent to the mechanisms in the arm specify by what angle it should bend at the shoulder and at the elbow at each moment, rather than simply telling it where to place its hand. Furthermore, there are many roundabout ways that the arm could reach out to the object, whereas the desired program would cause it to reach in one fluid motion.
In this example, the allowed actions could include bending forward or backward at the elbow and forward or backward at the shoulder, and the allowed comparisons could include checking the vertical and horizontal distances between the hand and the object. In the initial generation, a number of tree-like programs are created at random. The structure of each program is as follows: internal nodes test whether the distance to the object satisfies certain conditions, the program follows one branch if the test is affirmative and another if it is negative, and the terminal nodes give commands as to how the arm should move. Each program is tested on a variety of object placements, and its fitness is judged by how long it took the arm to reach the object or, if the arm never reached the object, by how far the hand was from the object after a specified time limit. The fittest individuals are probabilistically chosen, and these programs "mate" with each other by randomly breaking off at an internal node and switching subtrees. The offspring then form a second generation of programs, and the process continues for as many generations as the programmer deems necessary.
How can an evolutionary algorithm of this sort help to explain how a brain's functions are encoded? The example of a robotic arm is a particularly helpful one for explaining the possible relation between human learning and evolutionary algorithms. A baby is born without very much hand-eye coordination, but after a time he or she is able to reach for and point at interesting objects. This could indicate that the baby has used some process similar to the one described above to learn how to convert visual input into the muscle contractions necessary to reach an object at a certain point in the visual field. Once a successful algorithm is found, it is no longer so difficult to perform the required task.
There is evidence that modifications to this "reaching" algorithm occur later in life as well. For one thing, as people grow, the dimensions of their bodies change, so the exact motions required to perform specific tasks are modified. If these exact motions were hard-wired into the genes, it would be necessary for modifications to these motions to be hard-wired as well, so that they still functioned as the person grew. It seems more efficient for the algorithms in the brain to undergo gradual updates as the person grows. This could happen by some process resembling the evolutionary algorithm, with modifications occurring to keep the algorithm at its most efficient as conditions changed. A more impressive example of the adaptation of the reaching algorithm is as follows: a person is placed in a spinning room, where there is a Coriolis force. Initially, when the person tries to lift his arm, this force causes the arm to go flying in unexpected directions. After a few tries, however, the person is able to lift his arm in a controlled manner without thinking about it. This indicates that, after testing out a few algorithms for moving the arm, the brain is able to find an algorithm such that the result matches the expectation as closely as possible, in other words, such that the arm goes where the person wants it to. The modification of the algorithm could take place by some means similar to the evolutionary algorithm discussed. Instead of starting with a population of random algorithms for reaching, however, one starts with a single algorithm that is nearly correct, namely the reaching mechanism that works in the absence of a Coriolis force.
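Seeding the search with a nearly correct solution rather than a random population can be illustrated with a small sketch. Everything here is a made-up toy model: the "plant" reduces the arm to a single commanded angle, and the Coriolis force is modeled as a deflection proportional to the movement. The search itself is a (1+1)-style evolution strategy, a minimal relative of the evolutionary algorithm in the text.

```python
import random

def arm_response(command, coriolis=0.0):
    """Toy plant model (an assumption, not physiology): the angle the arm
    actually reaches for a given motor command. The Coriolis term deflects
    the arm by an amount proportional to the size of the movement."""
    return command + coriolis * command

def error(command, desired, coriolis):
    """How far the arm ends up from where the person wanted it."""
    return abs(arm_response(command, coriolis) - desired)

def adapt(desired, coriolis, trials=200, mutation=0.05, seed=0):
    """(1+1) evolution strategy: start from the command that was correct
    WITHOUT the Coriolis force (the nearly correct seed), propose small
    random mutations, and keep each mutation only if it reduces the error."""
    rng = random.Random(seed)
    command = desired                  # seed: the old, pre-spin reaching rule
    best = error(command, desired, coriolis)
    for _ in range(trials):
        trial = command + rng.gauss(0.0, mutation)
        e = error(trial, desired, coriolis)
        if e < best:                   # greedy acceptance of improvements
            command, best = trial, e
    return command, best
```

Because the starting point is already close to correct, only a handful of accepted mutations are needed before the arm again goes where it is wanted, which mirrors the few tries the spinning-room subject needs.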
Another example of the brain coming up with better algorithms for doing things, thus showing that many basic brain functions are not hard-wired, involves the use of prism lenses. In an experiment, people are made to wear, for long periods of time, lenses that turn their field of vision upside down. After a while, the wearers report that things have become right side up again; taking the lenses off then makes everything appear upside down. It seems that even this basic fact of how we perceive what is around us is not hard-wired into the brain. Maybe we see right side up because it simplifies the calculations that we need to make in order to perform everyday tasks. Seeing upside down is actually the default, in a certain sense, because the lenses in our eyes project the received light as an upside-down image on the retina. It is the brain that causes the perceived objects to be right side up. The evidence that even this is not hard-wired into the brain is rather interesting, as it indicates that everyone's brain independently, and without our conscious knowledge, comes to the decision that seeing right side up is the most efficient way to allow performance of daily tasks. Again, the process by which our brain figures out that seeing things right side up is beneficial could be related to evolutionary algorithms.
The processes I have discussed are very basic brain functions. I chose these processes because they seem to indicate learning on an instinctive level. Evolutionary algorithms could also be used in developing higher brain functions, however. For instance, when confronted with a new kind of math problem, one may struggle to solve it, but after that one has a program set up in the brain which helps one to solve the problem more efficiently. The forging of the neural connections necessary to create this program may have been accomplished by a process similar to evolutionary algorithms.
The study of evolutionary algorithms provides insight into ways in which the brain might be able to develop new and efficient functions. The analogy leaves some questions unanswered, however. In the brain, there is of course no conscious "programmer" who decides upon a reasonable framework in which to solve the problem. That is, this model leaves unanswered certain questions such as how to choose the basic functions that will be combined into a program, and how to make initial guesses as to the program to be used. Nonetheless, especially in the example of reaching for an object, there are interesting parallels between the computer's learning process and that of the human.