

Paul Pietsch

Originally published in the May, 1972 issue of Harper's Magazine (vol. 244, No. 1464), this article won the 1972 Medical Journalism Award of the American Medical Association and was featured on 60 Minutes in August, 1973. The author is now a Professor Emeritus at Indiana University, Bloomington.
PUNKY WAS A SALAMANDER. Or at least he had the body of a salamander. But his cranium housed the brains of a frog. I'd spent an entire season at the fringe of his clear-water world, asking who he was, with the neural juice of a totally different animal racing around inside, turning him on, tuning him in to his environment at a wave band beyond a normal salamander's spectrum. The answers, borne by his actions, flattened my scientific detachment, I confess.

Punky was only one in a long and varied series of brain transplants, experimental tests of the holographic theory, a theory about the language of the brain, a scientific treatment of nothing less than memory itself--the watering hole on the great subjective plain where thoughts and dreams, hopes and fears, pride and guilt, love and hate must drink to live, or else dry up, to vanish, like bone dust.

Years before, in Philadelphia, when I was first learning how to do operations like those on Punky, I was an instructor in a gross-anatomy dissecting lab. Class met in the afternoon. Insecure in my grip on what was then a newly acquired subject, I went in early each morning to do a dissection of my own. With class in session, the place roiled with the hurly-burly of people, alive and busy. But in the morning, when I arrived, it was silent, a room of death in the most complete sense of the word. Ugly gray light glared in through frosted windows and, without color, illuminated the rows of rag-swaddled, tarp-wrapped cadavers. It wasn't frightening; it was lonely, the loneliest place I'd ever seen. Its tables were the biers of the world's unwanted, unremembered, unclaimed--as people. And they'd been forgotten long before their corpses were hoisted up and flopped naked on the diener's soapstone prep table. Nameless now, serial-numbered metal-ring tag tied around big toe, dirt still under cracked nail or maybe half-peeled-away red or pink nail polish. Valuable, in death, as things. Valueless before, as people. They were the unloved dead. For to be loved is to be remembered. They were the unhated dead, for the same abstract reasons. The unremembered dead, the truly dead. For memory is our claim to identity, and when it stops, we are no more.

At the end, when we were finished, my department held funeral services for the bodies. I went. But I went with a generalized grief that I carried back whole because my memory found no place to assign any part of it.

Still, in time, I did forget the details. But Punky revived my memories of those mornings back in Philadelphia. That's probably why I gave him a name. For the Existent of Punky and his pals didn't stop with salamanders and frogs. It included my own species.


I will be talking here about the neural hologram, but I really should speak of brain information--a holologic principle, not only memory of past experiences. For the theory seeks to explain all the brain's stored programs, whether learned or wired in during embryonic life. It covers the mental yardgoods we unwrap to tailor "go: no-go" in reflexes. It supplies the cash for complex, reasoned associations. It works when the brain issues instructions to tune the A-string on a viola, or to make the baby cry because the milk is sour.

But holographic theory deals with the mode of neural messages, not specific molecules, mechanisms, or cells, as such. Like a multiplication or counting system, it commits grand polygamy with place and time and circumstance. It treats the how rather than the who--like gravity acting on the apple, instead of the meat, the freckles, or the worm.

The holographic theory had its crude origins in the 1920s when psychologist Karl Lashley began a lifelong search through the brain for the vaults containing memory. By then, students of behavior had been readied for angry debate by a paradox that had begun to emerge on the surgical tables of the nineteenth century. Clearly, the mental world had its biological base in the brain. Yet war, disease, and the stroke of the scalpel had robbed human brains of substance without necessarily expunging the mind. Lashley carried the problem to the laboratory and pursued it with precision tools, mazes, rats, controls, statistics.

Lashley also brought along the knife. With it, he found he could dull memory in proportion to the amount of cerebrum he cut out. But if he left a rat with any cerebrum at all, the animal could still remember. Not only did he fail to amputate memory, but one area of the cortex would serve it as well as another. He came to two controversial conclusions: intensity of recall depends on the mass of brain, but memory must be divvied up equally. "Mass action" and "equipotentiality" became his theme.

"Equibull!" a neuroanatomist friend of mine once declared. For the knives and battery poles of others had struck and dug into what seemed to be the specific loci of sight, scent, sound. Moreover, no clear and obvious physical precedent existed for equipotentiality. "I'm a scientist," my friend used to say, "not a goddamn Ouija board operator!"

But in 1948 physicist Dennis Gabor, trying to improve the electron microscope, accidentally stumbled over the optical hologram, a discovery that earned him the Nobel Prize in 1971. Lensless, 3-D photography was born. Within twenty years, the same principles had been extended to the brain.

Holograms take getting used to--like the idea that light can be both waves and particles, or that a curve gets you more quickly from star A to star B than Euclid's straight line. It's like getting accustomed to the notion that energy and mass are different ways of saying the same thing, or that time might shrink and expand. For holograms package information in a form disguised from our common sense, invisible behind the nominalistic curtains of our culture. But with patience, and a little open-mindedness, the intuition soon begins to drink up the principles--like relativity after Einstein or the shape of the earth after Columbus.

Familiar modes of information, even as complicated codes, reduce to bit parts, held, stored, according to the summum bonum of home economics and gross anatomy: "A place for everything, and everything in its place!" Not so a hologram (holo means whole). In it, the entire shtick of information, tamped down into a minuscule transcendental code, repeats itself, whole, throughout whatever the system happens to be. Trim a hologram down to a tiny chip and the message still survives, whole, waiting only to be decoded. One piece will work as well as another. But the fewer the parts used in decoding, the less intense the regenerated image. In other words, holograms work in precisely the same way that the memories in Lashley's rats did--mass action and equipotentiality.
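The distributed character the article describes can be glimpsed numerically with a Fourier transform, which spreads every point of a scene across the whole transform plane much as a Fourier hologram does. This is an idealized sketch, not a real hologram (it omits the reference beam); the array size and the cross-shaped scene are arbitrary choices for illustration:

```python
import numpy as np

n = 64
# A toy "scene": a bright cross on a dark field.
scene = np.zeros((n, n))
scene[n // 2, :] = 1.0
scene[:, n // 2] = 1.0

# The Fourier transform spreads every scene point over the whole
# record -- a crude stand-in for a hologram's distributed storage.
record = np.fft.fftshift(np.fft.fft2(scene))

# Trim the record down to a small central "chip"; discard the rest.
chip = np.zeros_like(record)
c, w = n // 2, 8
chip[c - w:c + w, c - w:c + w] = record[c - w:c + w, c - w:c + w]

# Decode both. The whole record gives back the scene exactly; the
# chip still regenerates the entire cross, only dimmer and blurrier.
full_image = np.abs(np.fft.ifft2(np.fft.ifftshift(record)))
chip_image = np.abs(np.fft.ifft2(np.fft.ifftshift(chip)))
```

The fewer the fragments kept, the fainter and fuzzier the decoded cross, but in this sketch no part of the scene ever disappears outright: the behavior of mass action and equipotentiality.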

Gabor's discovery was for years a scientific curiosity, unknown outside a small circle of physicists. It remained so until the advent of laser technology. Holograms in physical media depend on coherent, orderly waves. To do anything other than just look at holograms, the waves must be fairly powerful. Laser beams not only have this property but they can be made very coherent.

Holography itself has bloomed into a new technology. There are even such people as holographers nowadays. They construct physical holograms for a living, and they are paid well to do so because the hologram may be the method of sending and storing information in the future.

To construct a physical hologram, a holographer uses two sets of waves. He shines one set through an object. He angles the other to miss the object but to collide with the waves that have passed through. He then collects the results of the collision on film or a cathode ray tube. His record, the hologram, represents the reaction between the distorted and undistorted waves.
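In one idealized geometry (Fourier holography, shrunk here to one dimension and to numbers rather than light) those construction and decoding steps can be traced directly. The point positions, the brightnesses, and the reference tilt are arbitrary choices for this sketch:

```python
import numpy as np

n = 256
# The "object": two bright points in a dark field.
obj = np.zeros(n)
obj[20], obj[30] = 1.0, 0.7

# The waves that have passed through the object, as they arrive at
# the recording plane (a lens performs this transform optically).
O = np.fft.fft(obj)

# The second set of waves: an off-axis reference beam, angled to
# miss the object but to collide with the object waves at the record.
k = np.arange(n)
R = np.exp(2j * np.pi * 64 * k / n)

# The hologram is the recorded *intensity* of the collision; the
# object's phase is folded into the interference fringes.
hologram = np.abs(R + O) ** 2

# To regenerate the image, re-illuminate the record with the same
# reference wave and transform back.
image = np.abs(np.fft.ifft(hologram * R))
```

In `image`, the two object points reappear at their original positions with their original brightnesses; a twin image and a central glare term land elsewhere along the axis, separated from the true image by the reference tilt.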

In appearance, physical holograms resemble Platonic ideas of a shivering tiger or zebra or the signature of an artist suffering the shakes of a bad whiskey hangover. But the holographer can regenerate an image of the original object by retracing his construction procedures.

A hologram captures not a thingy thing. It captures rules--a harmonic syllogism, a holologic. And it is the stored record of Hegelian skid marks produced when points and counterpoints bang into each other, physical or numerical, concrete or abstract. Mathematics in reverse. Indeed, they take getting used to. But the glory of holograms glows through during decoding back to the original image, when they not only behave like Lashley's rats but reveal feature upon feature of human brain function.

Holographers can construct, say, acoustical holograms and call back the original, not with sound, but with light or waves in some other form. Thus, built into holographic grammar is the automatic mechanism to shift gears, instantly, from one modality to another-- how, for example, you can listen to someone and write what you hear him say as fast as you can work the muscles in your hand.

Such rapid, whole-scene shifts, involving forests of data, would be out of the question with the conventional message that must be translated bit by bit. In a hologram, it's all part and parcel of the principle. And the same thing shows up again in adding and modifying holograms. Holographers can construct multicolored, composite holograms, in steps, by adjusting wavelength, thus mimicking how we might anneal present and past into a totality. Or they can decode several holograms of the same thing into a multicolored original. In the process they can even change colors. When the brain does these things on its munificent scale, we talk in terms of abstract reasoning or imagination. And in this capacity the human brain outshines the largest digital computers. For computers digest bits. But the brain's motifs are informational wholes that can meld and blend without the go-between of a finger-counting bureaucrat.

The flexible rules of holography even allow, automatically, for a subconscious, a bad word in my own particular profession. But consider an optical hologram. In decoding, it's possible to select a wavelength invisible to the naked eye, yet of sufficient energy to burn a monogram permanently onto the retina of an unwary onlooker. As with the subconscious, you don't have to see its wounds to ache from them.

Holographic theory would also explain the chemical transfer of memory--how information from the brain of one worm, rat, mouse, or hamster might be extracted into a test tube and injected into another animal, there to mediate recall in the absence of the recipient's previous experience. Such reports from a dozen laboratories over the past few years have excited the press and reading public. But in conventional scientific circles, I've heard them called such things as "oozings from the stressed seams of cracked pots." Yet a hologram can write itself into anything, including a molecule. At the very same time, the theory in no way at all restricts the brain's programs to molecules, as such. There's no rule against using, say, molecules, voltages on cells, or groups of neurons to carry the information. The program might even be carried at many different levels simultaneously.

Just who deserves credit as the first to apply holographic principles to the brain I'm going to allow historians of science to fight out. Lashley, of course, saw them at work in his rats and had both the genius and the courage to describe what nature showed. Certain of Pavlov's conclusions look holological. Gabor's powerful mind must have snared the notion the moment he tripped on the optical effect. Years later, in fact, he published a mathematical scheme of reminiscing. Philip Westlake, a brilliant UCLA cyberneticist, has shown that equations of physical holograms match what the brain does with information. Karl Pribram and an army of colleagues at Stanford's medical school have invested a decade and a thousand monkeys, using the theory to work out details of how living brains remember.

Predictably, holographic talk provokes hot controversy. I recall not long ago delivering a lecture on the subject, when out of the audience jumped a neuropharmacologist, trembling with rage, demanding to know: "How can you account for something like Broca's area?" He was referring to a part of the cerebrum known for 100 years to be vulnerable to stroke accompanied by the loss of speech. I cleared my throat to answer. But before I had the chance, a young psychophysicist, sprawled in a front-row seat, whipped his shoulder-length mane around and fired back, "You can't draw beer out of a barrel without a bung!"

It was a perceptive reply. For in holographic theory, functional centers such as Broca's area represent processing stations rather than storage depots. Rage, fear, hunger centers, the visual cortex at the back of the brain, or auditory areas at the sides--these would act not to house specialized information but to pump it in or to call out programs in the form, say, of snarl, smile, utterance, equation, kiss, or thought. And sharp lines of distinction between innate and acquired information fade as far as storage itself is concerned. Still, the theory does not completely rule out uneven distribution of memory, particularly in the complex brains of higher animals. Indeed, it is not hard to make a case for different storage within the two hemispheres of the human cerebrum.

Michael Gazzaniga recently published an intriguing book on what has been known for almost twenty years as "split-brain" research. Begun in the early 1950s by Myers and Sperry at Cal Tech, the technique involves cutting the corpus callosum, a broad thick strap of nerve fibers between the hemispheres. Success in the lab with cats and monkeys prompted neurosurgeons to split the corpus callosum in the human brain. They did so to alleviate violent, prolonged, drug-resistant grand mal epileptic seizures, and they had remarkable success, medically. But the patients emerged from surgery with two permanently disconnected personalities. With more such operations, the left cerebral hemisphere emerged as the dominant, verbal, arithmetic side, while the right brain held recollections of form and texture. The tendencies appear to hold whether patients were left- or right-handed. Early in 1971, music was found among the repertoire of the right hemisphere. Yet the outcome of split-brain surgery has never been absolute, nor the individual patient's subsequent behavior totally predictable. Both hemispheres can generate music in some people, and the right may have a vocabulary.
In addition, a totally illiterate right hemisphere can learn to read and write in less than six months--as though it had a tremendous head start. On top of this, Gazzaniga's observations convince him that the consignment of memories to one side of the brain emerges with maturity. Children seem to employ both hemispheres. Thus it would seem that the brain can reshape its contents and make decisions about what will go where. But it is also quite possible that split-brain research identifies not unequal storage but unequal access. Like the reflected image of a written message, meaning would stay the same but translation would entail different steps. The cerebral hemispheres, after all, do mirror rather than carbon-copy each other.

At any rate, the brains of human beings and our close relatives seem to be many brains, orchestrated by virtue of connections like the corpus callosum. Moreover, our multisystem cranial contents seem to be in flux, physiologically. Different lights can flash off and on, moment to moment. Some of the switches lie under our direct control; others are no more within our deliberate, intellectual reach than the impulses driving a hungry shark or an amorous jackrabbit. Holographic theory does not deny conclusions of split-brain research. But it insists that, whatever the system used for storage, the information shall be layered in whole and repeated throughout. It denies that memory depends on minced-up and isolated bits filed in specific pigeonholes. Just what happens to be going on inside a brain when it's loading up with a particular hologram may determine which areas may and may not act as targets-- or how vivid the reconstructed scene becomes during some later translation into conscious form.


It's one thing, though, to use a theory to draw complex sets of data or weird collections of observations into a larger body of knowledge. It's quite another to subject a theory to logically valid, epistemologically sound laboratory experiments. Personally, I think it's legitimate to employ a theory without really bothering with formal tests and even to nurture belief in its truth based on its usefulness. For theories supply powerful intellectual tools. Those that don't work very well become ornaments if they're beautiful, junk otherwise.

However, the theory that holographic principles could account for neural-information storage was testable. Before getting into those tests, we need to talk some about theory and experiments. For they belong to very different realms.

Theories are perfectible and can be made ubiquitous within confines set down by their inventor. When they try to say something about physical things, they reach for the harmony and simplicity in nature, for a side of it the inveterate experimentalist believes he can comprehend by observation. To some biologists, for example, the Cell is a fiction. Only cells exist. At best, the theorist regards experience as a start on the road to truth, as Einstein did. At worst, the theorist might tell you that God contrived experience to pollute man's view of the truth. Whatever the experimenter concludes, the theorist seeks an ever-larger synthesis. For the particulate, nominalistic character of an experiment means that it cannot extend far enough and spread out wide enough to cover the expanse of a theory. Thus explanation demands theory.

Even so, because experiments take place in experience, they keep the experimenter alive to a side of nature that theory misses-- its variety, its individuals. Theory would turn nature into a peneplain, smooth, unenriched, simpler as the theory reaches higher and higher abstraction until even a speck of dust would become something to cherish. Experience returns hair, lips, smiles, surprises. Experience is where doves coo, horses snort, and robins lay little blue eggs. We spend most of our time, mentally, where experiments go on. And if there is some harmonious thread weaving through the universe, we still have a right to want to connect the abstract world and the world we call real.

How is this done? By poking around in theoretical constructs for testable predictions: if such and such a theory is "true," then such and such an outcome will happen. This is how experiments come in; they are ways of setting a trap for the predictable elements of a theory--the parts of it that make the rules credible to the human mind.

My purpose in working with Punky and his pals was to make or break my faith in the holographic theory of neural storage. And I was a skeptic, at the outset.

When I began this work the only prima facie experimental evidence to link the general theory involving holographic principles to brains had come from ablation studies--subtracting from brain substance. Subtraction is an incomplete test. To see the incompleteness is to see how the salamanders relate to the theory. Thus, let's spend a little time doing a few imaginary experiments.

Imagine several hundred Xerox copies of this unholographic page, but reproduced on transparent plastic sheets. Now stack the sheets so that each letter, word, and line forms a perfect overlay with its replicates below. Now subtract a sheet--two, three, or any number, for that matter--only keeping the stack straight. What happens? Loss or unevenness in density, perhaps. But as long as we keep the equivalence of one page, we preserve the message. The reasons are obvious. First, we're working with a system containing a redundant message. Secondly, when we eliminated some parts, we merely allowed what was beneath to shine through. But we certainly don't have a holographic system. This is how I viewed the results of ablation studies.

Let's try another series of experiments with the transparencies. Let's throw the pile up in the air, arrange some of the sheets in a new order, cut some of the sheets into pieces and reglue the pieces randomly--reshuffle, in other words. Now we would distort the message and know it very quickly. Why? Meaning in a conventional message (or pattern) depends on relationships among parts and subparts--sets and subsets. When we scrambled relationships, when we messed up the system's anatomy, we wrenched the carriers of meaning. We might also have done this by adding a transparency with a different message. But when we merely took away parts from our redundant system, we created empty sets and voided rather than distorted relationships.
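Both imaginary experiments are easy to run on an ordinary, unholographic message. The message text, the size of the stack, and the five-character cut are arbitrary choices for this sketch:

```python
import random

message = "MEANING LIVES IN THE ARRANGEMENT OF ITS PARTS"

# A redundant "stack" of identical transparent sheets.
stack = [message] * 300

# Subtraction: throw away almost all of the sheets. Any sheet that
# survives still carries the whole message -- redundancy, nothing more.
survivors = stack[:3]

# Reshuffling: cut one sheet into pieces and reglue them at random.
pieces = [message[i:i + 5] for i in range(0, len(message), 5)]
random.Random(0).shuffle(pieces)
scrambled = "".join(pieces)

# Every letter is still present, and in the same quantity, but the
# relationships among the parts are scrambled -- and with them, almost
# surely, the meaning.
print(survivors[0])
print(scrambled)
```

Subtraction leaves the message legible; reshuffling, which touches nothing but the arrangement, is what destroys a conventional message.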

But suppose the linotype operator had set a hologram? Then our reshuffling experiments would have produced far different results. We would not have introduced changes in the meaning of the message. For in a hologram meaning lies within--not among--any sets we might produce by simple physical means. And in reshuffling we would be shifting whole messages around, exchanging their positions without really getting at components. Trying to dissect out a hologram's subunits is like trying to slice a point, or stretch that infinitesimally small domain by an amount no larger than itself. No, a knife won't reach inside the heart of a hologram. Of course, in practice we might trim a system to such small proportions that the image upon decoding would be too dim to register. Or in a physical experiment we could destroy or distort the medium and make it technically impossible to decode. That's why we opted for imagination--to bypass engineering details.

But look at the implications of our imaginary experiments. Look at the predictions. If we really want to test holography against redundancy, we ought to shuffle the brain. If it houses conventional messages, we would find out very quickly. But if programs exist in the brain according to holographic principles, scramble though we may, we won't distort their meanings. And that is where salamanders come in.


A peaceful, quiet world, the salamander's--unless you happen to be a dainty little daphnia or a cockeyed mosquito larva whiplashing to the surface for a gulp of air. Or even worse, the crimson thread of a tubifex worm. For it is the destiny of the salamander to detect, pursue, and devour all moving morsels of meat small enough to fit inside his mouth. He eats only what moves. And he adjusts his attack to fit the motion of his fated quarry. When he sees the tubifex worm, or picks it up on sonar with his lateral line organs, he lets you know with a turn of his head. Position fixed, half-swimming, half-walking, he glides slowly, deliberately, along the bottom of his dish, careful not to create turbulence that, in the wilds, would send the worm burrowing deep into the safety of the mud. Reaching his victim, he coasts around it, moving his head back and forth, up and down, to catch swelling and shrinking shadows and vibrations and permit his tiny brain to compute the tensor calculus of the worm's ever-changing size.

The size of a four-year-old's little finger, salamanders sustain injury and recuperate like few other creatures on earth. Consider, for example, what I call the Rip Van Winkle paradigm. Remove a salamander's brain. The behaviorally inert body continues to live, indefinitely. Transplant the brain to the animal's broad, jelly-filled tail fin for storage. After a month or two, slide the brain out of the fin and return it to the empty cranium. In a couple of weeks, after the replant takes, the animal behaves as if the operation had never occurred. He's awake again, a free-living, prowling organism, like his normal brothers and sisters.

That same tail fin will accommodate hunks of brain pooled ad hoc from several different salamanders. The pieces quickly send out thousands of microscopic nerve fibers that weave a confluent network. Does such a mass of brain tissue work? Communicate impulses? Splice a length of spinal cord on each end of the mass as a conduit to the skin. Then, on one side, graft an eye, pressing the cut optic nerve against the piece of spinal cord. On the other side transplant a leg, making sure that it touches the conduit. Wait a couple of weeks to allow the optic nerve to invade the spinal cord on the one side and the cord on the other to sprout fibers into the leg to reinnervate its muscles. Now aim a spotlight at the tail and focus on the grafted eye. If you can hit the light switch at the correct tempo, you can make the transplanted leg stomp a tarantella.

Yet if my experiments were to be a fair test of the holographic theory, I'd have to insure two things. First, the experimental salamander would have to be capable of sensing a tubifex worm. Secondly, he'd have to be able to command his body and jaw muscles into action. I was sure this could be done with salamanders by preserving the medulla, the transitional region between spinal cord and the rest of the brain. In the medulla lie input stations for touch from the head, the salamander's efficient sonar system, and the sense of balance from a carpenter's level-like internal ear. Also, impulses that bring jaw muscles snapping to life are issued directly from the medulla. It does for head muscles what the spinal cord does for, say, the biceps or muscles in the thigh. And in salamanders the medulla serves as a relay station for information to and from spinal cord and brain. Higher animals have such stations too. But evolution added long tracts that function like neural expressways.

There are actually five main parts of the brain common to all vertebrates, including man. The cerebral hemispheres that predominate within our own heads are small lobes on the tip end of a salamander's brain. But during embryonic life our own cerebral hemispheres pass through a salamander stage.

The next region back, known as the diencephalon, is where the optic nerves enter the brain. Distorting this region would and did create blindness in certain experiments. A so-called mesencephalon or mid-brain connects diencephalon to medulla. These were the parts I would shuffle.

Amputating brain in front of the medulla turned off the salamander's conscious behavior and, of course, feeding along with it. But, if I stayed out in front of the medulla, I'd be leaving sufficient input and output intact for whatever programs surgery might deliver up.

This is not surgery in the nurse-mask-sutures-and-blood sense. It goes on under a stereoscopic microscope. Very little bleeding. No stitches. Just press the sticky, cut tissues together and permit armies of mobilized cells to swarm over and obscure the injured boundary line. There is only room in the field of operation for a single pair of human hands. The animals sleep peacefully in anesthetic dissolved in the water. Trussed lightly against cream-colored marble clay, magnified, they look like the prehistoric giants of their ancestry. A strong heart thrusts battalions of red blood corpuscles through a vascular maze of transparent tissues. No bones to saw. Under fluid your instruments coax like a sable-hair brush.

In more than 700 operations, I rotated, reversed, added, subtracted, and scrambled brain parts. I shuffled. I reshuffled. I sliced, lengthened, deviated, shortened, apposed, transposed, juxtaposed, and flipped. I spliced front to back with lengths of spinal cord, of medulla, with other pieces of brain turned inside out. But nothing short of dispatching the brain to the slop bucket--nothing expunged feeding!

Some operations created permanent blindness, forcing animals to rely on their sonar systems to tell them what was going on outside. But the optic nerves of salamanders can regenerate. Still, for normal vision to return, regenerating optic nerves need a suitable target, as Roger Sperry showed many years ago. I was able to arrange for this, surgically. And when I did, eyesight recovered completely in about two weeks--even when the brains came from a totally different species of salamander and contained extra parts. As far as feeding was concerned, nature continued to smile on holography. Not one single thing about the behavior of this group of animals suggested the drastic surgery they had undergone.

The experiments had subjected the holographic theory to a severe test. As the theory predicted, scrambling the brain's anatomy did not scramble its programs. Meaning was contained within the parts, not spread out among their relationships. If I wanted to change behavior, I had to supply not a new anatomy but new information.

Suppose, though, that parts of a salamander brain in front of the medulla really have no direct relationship to what a salamander does with a worm? Suppose feeding stations exist in the medulla or spinal cord (or left leg), awaiting only consciousness to ignite them? If this were true, the attack response on worms-- the principal criterion in the study--would be irrelevant, and shuffle brain experiments would say very little about the holographic theory. A purist might have taken care of this issue at the outset.

"New experiments required," I scribbled in my notes. "Must have following features. Host: salamander minus brain anterior to medulla. Donor: try a vegetarian, maybe young Rana pipiens tadpole. But, first, make damn sure donor brain won't actively shut off salamander's attack on worms."

My working hunch was that the very young leopard frog tadpole would make a near-perfect donor. His taste for flies comes much later on in development. While he's little, he'll mimp-mouth algae from the flanks of a tubifex and harm nothing but a little vermigrade pride. Then, too, from experiments I'd carried out years before, I knew frog tissues wouldn't manifestly offend salamander rejection mechanisms, not to the extent that they would be destroyed. Thus, if grafted brains didn't perish in transit across the operating dish, they would become permanent fixtures in their new heads.

Whether a tadpole brain would or would not actively shut off worm-recognition programs in salamanders I had to settle experimentally before calling Punky into the game. Here, I transplanted tadpole brain parts but left varying amounts of host salamander brain in place. These animals ate normally, thus showing that tadpole brain, per se, would not overrule existing attack programs. As I had guessed, it was like adding a zero to a string of integers as far as feeding was concerned.

Now the scene was ready for Punky, the first of his kind through the run. He would surrender his own cranial contents in front of the medulla to the entire brain of a frog. If his new brain restored consciousness but gave him a tadpole's attitude about worms, he'd vindicate the shuffle brain experiments.

For controls, I carried out identical operations but used other salamanders as donors. Also, to assure myself that frog tissue itself would not affect appetite, I inserted diced tadpole in the fins and body cavities of still other salamanders. This procedure had no effect on feeding. Moreover, I had a hunch that Punky would remain blind. So I removed eyes from other salamanders to get fresh data on feeding via sonar.

Punky awoke on the seventeenth day. Very quickly, he became one of the liveliest, most curious-acting animals in the lab. He did remain blind but his sonar more than compensated. A fresh worm dropped into his bowl soon brought him over. He'd nose around the worm for several minutes. He lacked the tadpole's sucker mouth. And I couldn't decide whether he wanted algae, or what. But he spent a lot of time with the worms. In the beginning, he had me watching him, wondering in a pool of clammy sweat if he'd uncork and devour the holographic theory in a single chomp. Yet, during three months, with a fresh worm in his bowl at all times, in more than 1,800 direct encounters, Punky never made so much as a single angry pass at a tubifex. Nor did any of his kind in the months that followed. The herbivorous brain had changed the worms' role in the paradigm. They were to play with now, not to ravage.

I kept Punky's group nourished by force-feeding them fresh fillets of salamander once a week. This meant the same thing had to be done with each and every control animal too. While the extra food did not blunt control appetites, the added work left me looking groggily toward pickling time when I could preserve the specimens on microscopic slides.

I routinely examine microscopic slides as a final ritual. But Punky's slides weren't routine. And on the very first section I brought into sharp focus, the truth formed a fully closed circle in the barrel of my microscope. His tadpole brain, indeed, had survived. It stood still in terms of development, but it was a nice, healthy organ. And from its hind end emerged a neural cable. The cable penetrated Punky's medulla, there to plunge new holographic ideas into his salamander readout, and into the deepest core of my own beliefs. #


The editor of 'Shuffle Brain' was Nelson W. Aldrich, Jr., then of Harper's magazine. I've yet to see his equal as an editor.