
The Front-loading Fiction

Responding to an email about "front-loading" as a Deistic solution to the universe that does not require an interventionist (theist) God, I said that I have some philosophical problems with the phrase "front-loading". It is a concession to Deism that doesn't have to be made. Trying to describe a "front-loaded algorithm" highlights the problem with the philosophical solution.

Historically, the argument for front-loading came from Laplacian determinism based on a Newtonian or mechanical universe--if one could control all the initial conditions, then the outcome was predetermined. First quantum mechanics, and then chaos theory, basically destroyed that picture, since no finite amount of precision can control the outcome far into the future. (The precision required to predetermine the outcome grows exponentially with time, and eventually exceeds the information storage of the medium.)
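
A toy illustration of the precision problem (my own sketch, not part of the original email exchange): the logistic map is a completely deterministic rule, yet two starting values differing in the fifteenth decimal place disagree entirely within a few dozen steps, so each further step of prediction demands ever more digits of the initial condition.

```python
# Sketch: sensitivity to initial conditions in the chaotic logistic map x -> r*x*(1-x).
# The rule is fully deterministic, but a 1e-15 difference in the starting value
# grows roughly exponentially, swamping any prediction after a few dozen steps.
r = 4.0                      # parameter value in the chaotic regime
x, y = 0.2, 0.2 + 1e-15      # two nearly identical initial conditions

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: x={x:.6f}  y={y:.6f}  |x-y|={abs(x - y):.3e}")
```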

But "front-loading" permitted Deists to say that God designed the Universe, and then stepped back and let "natural" forces operate, thereby removing any "supernatural" interference of the sort that Lucretius fumed about in 50BC. So if Newtonian determinism was now impossible, perhaps there could be some sort of algorithmic determinism (which I'll call Turing determinism) which could step in and permit a Deist to avoid the supernatural. That is, God doesn't have to create the oak from the acorn anymore, but the biological program He inserted in the acorn can handle all the intermediate steps. So perhaps, God didn't have to create humans, but the biological program in the first living cell He created, started the ecosystem that eventually evolved humans.

This remains, of course, the principal argument of theistic evolutionists, and was Howard Van Till's favored method before he stopped teaching at Calvin College and gave up on theism.

But this argument assumes that one can separate algorithms from the machinery that executes them, the information from the storage medium, the supernaturally contingent from the naturally necessary. The Newtonian revolution was to view the universe as a complicated machine where "natural" laws were the function of the machinery, and "supernatural" interference was information not incorporated into the gears. The fact that a watch tells time was "natural", whereas the setting to Eastern Standard Time was "supernatural" because it was contingent.

ID (Intelligent Design) makes the argument that the gears are just as supernatural as the time zone, because they are designed to function in a certain way. But such an argument doesn't escape the TE (Theistic Evolutionist) defense that the time zone setting is just as "natural" as the gears, because no laws of nature were broken in setting it. This would all be semantics if it were not for the corollary: ID claims to probe the character of the designer by studying the design, whereas TE claims that front-loading is indistinguishable from chance, making the designer inscrutable. (Which keeps his faith transcendentally Kantian, and science a-theistically independent of God.)

But is it true that algorithmic front-loading can be naturalistic, independent of God, Turing-deterministic, and thus incapable of revealing anything about a living God?

I'd like to make the argument that Turing determinism is impossible for several reasons, and therefore front-loading is indistinguishable from the supernatural, from the actions of God intervening in history.

a) The Turing problem

Alan Turing himself addressed a number of algorithmic dilemmas with the thought experiment of the deterministic computer now called a Turing machine. He asked whether the outcome of such a computer can always be predicted, and showed that in general it cannot: there are programs whose behavior is completely unpredictable short of running them. Applying this to our biological example, it says that some organisms may act/evolve unpredictably, though perhaps not the ones God programmed.
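
A sketch of Turing's diagonal argument in modern dress (my paraphrase; the halts() predictor below is hypothetical, and Turing's proof is precisely that no such total predictor can exist): assume a universal predictor, build a program that does the opposite of whatever is predicted about itself, and the predictor contradicts itself.

```python
# Sketch of the halting-problem contradiction, using a hypothetical halts() oracle.
# Assume, for contradiction, that halts(f, arg) could always report correctly
# whether f(arg) eventually finishes.

def halts(f, arg):
    """Hypothetical universal predictor; Turing's proof shows it cannot exist."""
    raise NotImplementedError("no total halting predictor is possible")

def contrary(f):
    """Do the opposite of whatever the oracle predicts about f run on itself."""
    if halts(f, f):
        while True:          # predicted to halt, so loop forever
            pass
    else:
        return "halted"      # predicted to loop, so halt immediately

# contrary(contrary) would halt exactly when halts() says it does not,
# so the assumed predictor contradicts itself and cannot exist.
```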

But Turing went beyond this existence proof, and demonstrated necessity--a computing machine with feedback, where the output tape went into the input, was always unpredictable. In our biological example, we have to define the input and the output. TE tells us that the input is an organism, and the output is more organisms, and the computer is the organism too. In other words, the type of algorithmic determinism required by TE is not weakly, but strongly recursive, and therefore doubly unpredictable.

Even should God have infinite knowledge of the outcome of such a biological algorithm, the information regarding its outcome cannot be contained within the system itself. Therefore if the system is determined, it must be determined externally, with constraints outside of biology, which is exactly the definition of the supernatural that the TE "front-loading" was intended to remove!

b) The Functionalism Dilemma

Let us pretend for the moment that the feedback between the output and the input doesn't exist, that DNA is a code that is itself independent of the cell that houses it. This is similar to the argument that consciousness is just the program running on the computer of the brain, with no ability to change the brain's circuits. Also note the similarity to Darwinist evolution, where the evolving critter has no ability to modify his genes or plan his progeny (unlike, say, Lamarckian evolution). Why do we have to pretend there is no feedback when there is plenty of scientific evidence for it? Because otherwise the output becomes unpredictable, or more accurately, purposeful. That is to say, contingent, indeterminate things allow for will, for consciousness, for decisions in a way that necessary, determined things do not.

So ignoring the feedback, or better, disallowing the feedback, means that we must keep the software and the hardware distinct. But this is difficult for a cell, since the DNA software is a subset of the cellular hardware. In physics lingo, we might try to separate the contingent initial conditions from the determined physical laws. Or in computer science lingo, we might try to make a distinction between the arbitrary data and the fixed algorithmic code. The DNA is the contingent data, whereas the RNA machinery and ribosomes are the fixed hardware for transcribing and translating it. Then TE Turing-determinism would consist of God creating this cellular duplication hardware aeons ago and feeding it a long tape of DNA instructions that eventually result in us. Presto, theistic evolution with entirely natural development from a long-forgotten supernatural initial condition!

Now the problem with this scenario is not that the one-celled amoeba has little of the DNA that makes up humans (the storage problem), nor even that many inter-dependent organisms are needed for humans to exist (the irreducible complexity problem), but that it is impossible to keep the data and the code separate.

For example, many computer viruses operate on the principle of "buffer overflow", where a program expects data--say, a last name--and the hacker gives it a program instead, which overflows the memory space allocated for it and ends up in the part of memory allocated for code. Voila, the hacker has taken over the program. In just such a way, the DNA encodes not just copies of itself, but also instructions for changing the copying machinery. Viruses multiply in biology as they do in computers, by hijacking the system machinery. The only safe system is one that is never networked, has no USB port, no floppy or DVD slot, and does only what the factory loaded into it. Since that would be a pretty useless system, I cautiously network my laptop but spend countless CPU cycles on anti-virus programs, just as much of the cellular DNA machinery is dedicated to making sure it is not hijacked.
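
A toy model of that idea (purely illustrative; the "code" strings are made up, and no real exploit or cell works this simply): give a simulated machine one flat memory holding a small data buffer next to its code, and let an oversized input spill from one region into the other.

```python
# Toy model of a buffer overflow: data and "code" share one flat memory, so an
# oversized input written into the data buffer spills into the code region.
BUF_SIZE = 8
memory = [" "] * BUF_SIZE + list("RUN:copy_self;")   # data buffer, then code

def store_name(name):
    # Naive write with no bounds check: keeps writing past the end of the buffer.
    for i, ch in enumerate(name):
        memory[i] = ch

store_name("Smith")
print("".join(memory))    # buffer holds "Smith"; the code region is untouched

store_name("A" * BUF_SIZE + "RUN:hijack___;")
print("".join(memory))    # the overflow has overwritten the code region
```

The lesson carries over: as long as instructions and data share one medium, only a bounds check--an immune system--stands between the legitimate code and the hijacker.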

Which is to say, because there cannot be any clear separation of data and code, there must exist mechanisms for eliminating hackers. By definition, such systems span the interface between data and code, looking for "foreign" code and "bad" data. But who tells the machine what is "foreign" and what is "bad"? My anti-virus files get updated periodically, but who updates the DNA? How does the cellular immune response recognize itself? If it is not accomplished externally (supernaturally!), then we are acknowledging a process that "contaminates" the determinate code with the contingent data.

Now we begin to recognize the hatred Darwin felt toward Lamarck. For the slightest amount of contingent data (hacker) destroys all determined (legitimate) code. Determinacy is a fragile state, intolerant of any amount of contingency, whereas contingency is robust, absorbing large amounts of determinacy while still remaining contingent.

But the problem is bigger than the need for an immune system. There is information stored in the computer operation, in the motion of the cellular machinery. A computer with a robust immune system, impeccable data storage, and factory-fresh operating system can still hang, "blue-screen" or crash. The system is inoperable until it is "rebooted", which involves taking it back to a known state and starting forward again, and you may still have lost that document you were working on. Every level of hierarchy in a computer has information, from the code, to the data, to the state it was in just before it crashed. And if, God forbid, you were installing the latest operating system service pack from Microsoft when your colleague impatiently pulled the power cord out, you may never be able to recover the system without asking Microsoft to send the system disks and wiping the disk.

So not only is the code indistinguishable from the data in principle, but the operation of the computer, the dynamical state of the system is in principle indistinguishable from the code and the data. The only escape from viruses and incompetent colleagues is an external system fixer, which is precisely the "supernatural" interference that Turing-determinism was intended to avoid. It is even worse than this, for disallowing any external influence means that the code and data get progressively damaged, until they are inoperable; what John Sanford called "Genetic Entropy".  It would appear that without God, our race is doomed.

c) Intelligent Design

Having made the argument that Turing-determinism is not possible, have we fallen into the accursed "God-of-the-Gaps" pitfall which is so reviled by TE? Have we argued that Genetic Entropy will require God to continually tweak the genome to keep it on track? If so, have we not reduced God to a lowly grease-monkey, whose main job is running around defending ID from TE entropy? Surely this is too demeaning a job description for the Creator of the Universe!

Well actually, I think "God of the Gaps" is a very good way to describe God, if you take "gaps" to be fractal. Georg Cantor proposed the Cantor set, constructed by taking a line segment and removing its middle third. Then take the two remaining segments and remove their middle thirds. Repeat this process ad infinitum, and what do you have left? We can write a series for the lengths we have taken from the interval (0,1)--1/3 + (1/3)(2/3) + (1/3)(2/3)^2 + ...--and the sum is exactly 1. That is, measured by length, there is nothing left. Yet something remains, because the number 1/4 is still there! In fact, infinitely many points (indeed uncountably many) are still left in the interval. Cantor described the result as a "perfect set that is nowhere dense": a set where everything is a gap, yet something remains. It is often cited as an early example of a fractal, which I think of as mathematical foam, looking the same no matter how microscopically you examine it.
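
For the record, the removed lengths form an ordinary geometric series (standard textbook material, nothing original here):

```latex
% Total length removed from [0,1] in constructing the Cantor set:
\sum_{n=0}^{\infty} \frac{1}{3}\left(\frac{2}{3}\right)^{n} = \frac{1/3}{1 - 2/3} = 1.
% Yet points survive: 1/4 = 0.\overline{02} in base 3 has no digit 1, so it
% never falls in a removed middle third and remains in the Cantor set.
```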

So once we define a "God of the Gaps" as a "God at every time and spatial scale," then I am quite comfortable with this solution. For if a "gap" represents something we don't know, then we are saying our knowledge is 0% of the interval, while God's is 100%. Which is not to say that we have no knowledge, but rather that God's knowledge is infinite compared to ours. (Cantor would have said "transfinite", and he took a lot of grief for suggesting that some infinities are bigger than others, so that there are an infinite number of infinities.)

But we can take the idea of a fractal in another direction altogether. Something that is everywhere the same is homogeneous and boring, lacking information. Something everywhere the same but with repetitive structure, say a salt grain with its face-centered cubic crystal, has very little information. But if it has a complex structure it contains abundant information; and if that structure looks the same at every magnification, then not only does it have information, but the information is non-local. The only way that point A far from point B can show the same structure is if there is a global requirement imposed on both. Mathematically, a fractal is the solution to a non-local equation. In William Dembski's terminology, it is complex specified information, or CSI.
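
One concrete way to state such a global requirement (my illustration, again using the Cantor set): the set is pinned down not point by point but all at once, as the fixed point of a condition relating every scale to the whole.

```latex
% The Cantor set C is the unique non-empty compact set satisfying
C = \tfrac{1}{3}C \;\cup\; \left(\tfrac{1}{3}C + \tfrac{2}{3}\right),
% so the structure at any magnification is dictated by the set as a whole --
% a non-local constraint rather than a pointwise (local) rule.
```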

So paradoxically, when we find that microscopes with higher and higher magnification give us more of the same, we are witnessing something much bigger than our microscope. When whales and elephants and people and shrews and mites and rotifers and bacteria are all made out of cells, we are observing something bigger than the Earth. When stromatolites and trilobites and dinosaurs and mastodons and people are all made out of cells, we are observing something older than the Earth.

In contrast, TE argues that naturalist processes are purely local, indistinguishable from chance and law. But such processes can never produce fractals in either space or time, for like Democritus' atoms, they have no spatial or temporal memory, knowing only themselves and the violent blows of hitting others in their way. But the fractal properties of nature indicate that non-local or global processes are at work, which by definition are external to the individual ignorant participants. In other words, the naturalist understanding of fractal space-time requires a supernatural explanation.

Conclusion

"Front-loading" is the TE attempt to stretch a Newtonian concept of determinism into an algorithmic form to avoid the collapse of Laplacian determinism. We have tried to show that algorithmic or Turing-determinism is incapable of describing biological evolution, for at least three reasons: Turing's proof of the indeterminancy of feedback; the inability to keep data and code separate as required for Turing-determinancy; and the inexplicable existence of biological fractals within a Turing-determined system.