Happy E.T. Jaynes Day Thursday, Jul 5 2007 

Today is the birthday of Edwin Thompson Jaynes, a pioneer in probability theory, pictured above from his time at Berkeley in 1946. If Jaynes were alive today, he would be 85 years old. A world-class genius and devoted man of science, Jaynes made serious contributions to statistical mechanics, quantum physics, probability theory, philosophy of science, and even the physiology and mechanics of piano playing. His amusing and straightforward writing style makes his works a pleasure to read.

Jaynes is primarily known for advancing the maximum entropy interpretation of thermodynamics, or MaxEnt approach, which, along with Bayesian inference, gives a mathematically optimal way of analyzing large amounts of input data, extracting patterns, and predicting future input. Maximum entropy methods are very popular (Google returns over a million results for the term) and are used for automated data analysis in dozens of disciplines, including medicine, economics, physics, chemistry, astronomy, and more. These methods are widely used in machine learning and can be considered a form of AI.
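To give a flavor of the method, here is a minimal Python sketch of Jaynes' famous "Brandeis dice" problem: told only that a die's long-run average is 4.5 rather than the fair value of 3.5, MaxEnt picks the least-committal distribution consistent with that constraint, which takes the form p_i proportional to exp(lambda*i). The bisection solver below is my own illustrative construction, not Jaynes' derivation:

```python
import math

def maxent_die(target_mean, faces=6, tol=1e-10):
    """Maximum-entropy distribution over faces 1..faces with a fixed mean.

    The constrained-entropy maximum has the form p_i ~ exp(lam * i);
    we find the Lagrange multiplier lam by bisection, since the mean
    of this family increases monotonically with lam.
    """
    def mean_for(lam):
        weights = [math.exp(lam * i) for i in range(1, faces + 1)]
        total = sum(weights)
        return sum(i * w for i, w in zip(range(1, faces + 1), weights)) / total

    lo, hi = -50.0, 50.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = (lo + hi) / 2
    weights = [math.exp(lam * i) for i in range(1, faces + 1)]
    total = sum(weights)
    return [w / total for w in weights]

probs = maxent_die(4.5)
```

Note how the result leans toward high faces just enough to honor the constraint, while staying as spread out (high-entropy) as possible; that "maximally noncommittal" character is the whole point of the method.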

Most fascinating of all is Jaynes’ interpretation of probability theory. He realized that probability theory is a generalization of Aristotelian logic, and that by introducing degrees of belief, this logic can be made much more flexible, as well as capable of dealing with uncertainty. This view is explained at length in his last work, Probability Theory: The Logic of Science. Although some parts of the book are fairly math-heavy, you can still get a lot out of the first few chapters with basic arithmetic.
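Jaynes' "weak syllogisms" make this concrete: if A implies B and we then observe B, deductive logic says nothing, but plausible reasoning says A has become more credible. A tiny Python sketch of the Bayesian update behind this (the particular numbers are illustrative, not from Jaynes):

```python
def bayes_update(p_a, p_b_given_a, p_b_given_not_a):
    """Posterior P(A|B) from the prior P(A) and the two likelihoods."""
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1.0 - p_a)
    return p_b_given_a * p_a / p_b

# A implies B, so P(B|A) = 1; suppose B happens half the time anyway.
prior = 0.3
posterior = bayes_update(prior, 1.0, 0.5)
# Observing B raises our degree of belief in A above the prior,
# without ever asserting A outright -- logic with shades of gray.
```

In the deductive limit the same formula recovers classical logic: with P(B|A) = 1, observing not-B drives P(A) to zero, which is exactly modus tollens.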

For shorter pieces by Edwin Jaynes, see his page of unpublished works, which includes papers and lectures such as “How Does the Brain Do Plausible Reasoning?”, and his page of published works, including the fascinating “Prior Probabilities”.

Jaynes’ bio can be found here.

Brown’s Human Universals Wednesday, Jun 20 2007 

Anthropologist Donald E. Brown’s landmark book Human Universals identifies over 200 behavioral and cognitive features suspected to be common to all human beings. The list is very instructive for thinking about the species we happen to have been born into, and how it might differ from future species we engineer or otherwise create. Here are a few of the more interesting ones:

  • tabooed foods
  • childhood fear of loud noises
  • husband older than wife on average
  • anthropomorphization
  • reciprocal exchanges (of labor, goods, or services)
  • dreams, interpretation of
  • statuses on other than sex, age, or kinship bases
  • onomatopoeia
  • magic to win love
  • language, prestige from proficient use of

See the full list here. Human cognitive biases may also be universal. Also related is the search for a list of inductive biases.

The Longest Word in the English Language Monday, Jun 18 2007 

The following, the name of a protein coat used by a certain strain of Tobacco Mosaic Virus, is the longest word used in a serious context in the English language, i.e., published for reasons other than the length of the word itself:

    acetylseryltyrosylserylisoleucylthreonylserylprolylserylglutaminyl-
    phenylalanylvalylphenylalanylleucylserylserylvalyltryptophylalanyl-
    aspartylprolylisoleucylglutamylleucylleucylasparaginylvalylcysteinyl-
    threonylserylserylleucylglycylasparaginylglutaminylphenylalanyl-
    glutaminylthreonylglutaminylglutaminylalanylarginylthreonylthreonyl-
    glutaminylvalylglutaminylglutaminylphenylalanylserylglutaminylvalyl-
    tryptophyllysylprolylphenylalanylprolylglutaminylserylthreonylvalyl-
    arginylphenylalanylprolylglycylaspartylvalyltyrosyllysylvalyltyrosyl-
    arginyltyrosylasparaginylalanylvalylleucylaspartylprolylleucylisoleucyl-
    threonylalanylleucylleucylglycylthreonylphenylalanylaspartylthreonyl-
    arginylasparaginylarginylisoleucylisoleucylglutamylvalylglutamyl-
    asparaginylglutaminylglutaminylserylprolylthreonylthreonylalanylglutamyl-
    threonylleucylaspartylalanylthreonylarginylarginylvalylaspartylaspartyl-
    alanylthreonylvalylalanylisoleucylarginylserylalanylasparaginylisoleucyl-
    asparaginylleucylvalylasparaginylglutamylleucylvalylarginylglycyl-
    threonylglycylleucyltyrosylasparaginylglutaminylasparaginylthreonyl-
    phenylalanylglutamylserylmethionylserylglycylleucylvalyltryptophyl-
    threonylserylalanylprolylalanylserine

The Wikipedia entry is here. The word contains 1185 letters. A much longer word is the full chemical name for titin, the longest known protein, weighing in at 189,819 letters. Thanks to our wonderful computer technology, this word could probably be stored on a hard drive the size of a microbe.
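A quick back-of-the-envelope check of the storage claim, using the letter counts given above:

```python
# Letter counts as stated in the text above.
tmv_word_letters = 1_185    # the tobacco mosaic virus protein word
titin_letters = 189_819     # full chemical name of titin

# At one byte per letter, even the record-holder is tiny by storage standards.
titin_kib = titin_letters / 1024  # roughly 185 KiB
```

So the full name of titin fits in well under a fifth of a megabyte, which on any modern drive really is a microbe-scale sliver of platter.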

I look forward to a day when superintelligent agents will toss words like these back and forth in microseconds, comprehending their full significance and cross-referencing them effortlessly. I’m excited about this not merely for the sake of grandiosity or hubris, but in anticipation of the new ideas that would become accessible through engaging in discourse on the superhuman level.

It’s interesting that humans usually find long words humorous. We like to laugh off things we don’t understand very well.

Intelligence Augmentation vs. Artificial Intelligence Friday, Jun 8 2007 

To some, it seems “obvious” that significant human intelligence augmentation will come before human-level AI. To others, it’s the reverse that’s obvious. I don’t think either is obvious, but I believe there’s a strong likelihood AI will come first.

In the IA camp, one of the arguments goes, [Brain+Computer] will always be more intelligent than [Computer] alone. But this is untrue, as the I/O channels between brain and computer make all the difference, and with today’s technology, these channels are quite limited. Even if we had million-electrode brain-computer interfaces, it would be a cybernetics problem to ask which outputs to plug into which inputs, and what changes might need to be made to the central executive to handle the new cognitive architecture without information overload or psychosis. Reprogramming the executive center of the human brain would require advanced neurosurgery and extensive knowledge of the brain, knowledge that could take decades of research and advanced experimental techniques to uncover.

Other cons for IA, in my view:

  • Experimentation on the human brain is likely to be made illegal globally
  • The design-and-test cycle is on the order of weeks or months
  • Lack of human volunteers willing to die for the cause of IA research
  • Someone left out the line notes for the brain’s code
  • Experimenting on the deep brain is difficult because neocortex is in the way
  • All that medical hardware is really expensive
  • The human brain was not designed to be upgraded
  • Gene therapies not likely to give enough improvement for takeoff speed

A remark on that last one… the issue of takeoff speed. It’s not enough to create an Einstein with IA. You have to create an Einstein that can go immediately to work on new intelligence augmentation techniques, and actually come up with something of use in a reasonable amount of time, before AI is developed. It seems more likely to me that an intelligence-enhanced human would just go into the business of creating AI. Smarter-than-human intelligence cannot just be a really smart human being - it has to be something qualitatively off the scale. Manipulating the genes associated with genius, as James Miller suggested, would likely produce “only” human geniuses at first. You’d need to go an extra level of theory and genetic engineering to get something genuinely smarter-than-human in a human-like package. I’m not saying it couldn’t be done, but that the whole process could drag on for a number of years.

Benefits of IA:

  • Evolution has already done a lot of work for us
  • Some might think a human seed is more predictable
  • Sparks human-centric patriotism in ways AI doesn’t

On to the cons of AI:

  • Present-day computers might not be fast enough to implement AI
  • You have to build everything from scratch yourself
  • Everyone is working on narrow AI, but AGI is unpopular
  • Requires strong theory of general intelligence, difficulty unknown
  • Stigma of excessive past claims

And the benefits of AI:

  • Design-and-test cycle can be very rapid
  • All aspects of the AI are read/write friendly
  • Line notes are included with the code
  • Cognitive features can be optimized for self-improvement
  • Computational power can be expanded as funds allow
  • Virtual worlds are available as a flexible training zone
  • Hardware can be used to “overclock” beneficial functions
  • Probabilistically realistic, flexible learning can be implemented
  • Nascent AIs can share information with each other rapidly
  • Much larger regions of the mind configuration space can be tested
  • AIs can be copied indefinitely, allowing for commercial spin-offs
  • Substantial advances in AI, but not IA, have already been achieved
  • The hardware itself is inherently cheaper
  • Little to no legal concerns

Comment away. Whether IA or AI reaches smarter-than-human intelligence first is pretty important, as the step into this new domain could spark a runaway self-improvement process, something I.J. Good called an “intelligence explosion”. This is normally what we think of when we hear the word superintelligence.

Denying Superintelligence Friday, May 25 2007 

There are quite a few individuals who react to the idea of qualitatively smarter-than-human intelligence, AI or otherwise, with extreme skepticism and derision. My guess is that there are four possible reasons for this, which different people display in different combinations and intensities.

The first is the folk theory that intelligence is a light bulb: either it’s on, or it’s off, with nothing in between. Among those who have it, it varies only in degree, never qualitatively. Humans have intelligence and animals don’t, which is why it’s okay to raise animals for food, for instance. Intelligence and subjective consciousness go hand in hand.

The second is the argument from divine privilege. Man, being made in God’s image, has been given the gift of reason. We cannot magnify this gift on our own any more than we can engineer a machine that turns us into angels. This “gift of reason” argument is what I was taught by my parents as a child.

The third is technological skepticism. For example, my grandfather, who is an atheist, believes it will be centuries before we understand the brain in enough detail to manipulate it significantly. This skepticism derives partially from a linear intuitive view of technological progress, and partly from a pseudo-spiritual worship of brain complexity.

The fourth is outright denial based on fear. Some people associate superintelligence with heartlessness, boring rationality, ruining all the fun, threatening to replace us, and so on. This is primarily based on fictional portrayals. There are dozens of films and books in which superintelligences are the bad guys. Astonishingly, the dumber good guys always seem to triumph in the end.

Can you think of any others?

What Smartness Means Tuesday, May 22 2007 

Bacterial cells have little organelles in them called mesosomes. According to the Wikipedia article, “Mesosomes may play a role in cell wall formation during cell division and/or chromosome replication and distribution and/or electron transfer systems of respiration. Electron transport chains are found within the mesosome producing 32-34 ATP. They act as an anchor to bind and pull apart daughter chromosomes during cell division.” Various articles, some subscription-required and some free, go on and on about the possible functions of these small organelles in bacterial division, respiration, and so on. Mesosomes were originally discovered in 1960.

Small problem. Sometime in the mid-70s, scientists realized that mesosomes weren’t even real. Freeze-fracture studies showed they were just artifacts of the chemical fixation process used to prepare cells for electron microscopy: little intrusions produced where the plasma membrane and cell wall came apart under the stress of fixation. So much for that idea.

If you figure that biologists get paid something like $60,000 per year, and it takes a couple months to do research and write a paper, and maybe something like 500 papers were published on mesosomes before they realized that what they were studying was pure bunk, then the biology community as a whole burned through ~$5 million chasing a ghost.
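Spelling out that back-of-the-envelope arithmetic, with all figures being the rough guesses above:

```python
# Rough figures from the estimate above.
annual_salary = 60_000   # a biologist's yearly pay, in dollars
months_per_paper = 2     # "a couple months" of research and writing
papers = 500             # rough count of mesosome papers published

# Cost of one paper is two months of salary; multiply by the paper count.
cost = papers * months_per_paper * (annual_salary / 12)
# 500 papers * 2 months * $5,000/month = $5,000,000
```

Of course every input here is a guess, but the order of magnitude is the point: a seven-figure sum spent studying an artifact of sample preparation.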

What does this have to do with the subject matter of this site? I often talk about intelligence enhancement and the recursive snowballing effect that I and many others predict would occur soon after its development. If a sufficiently intelligent biologist were on the research team that first discovered “mesosomes” in 1960, they could have discovered these were just artifacts by replacing the water used in the fixation process with an inorganic solvent, and all this confusion would have been avoided. Our society has a bias against being too hard on people for these little mistakes, because at least they tried. People would be pointing fingers non-stop if we always judged past events with the knowledge of hindsight. And we’re only human, right?

The magical difference that increased intelligence produces is getting it right the first time. It’s very tough for us to imagine a slightly-smarter-than-human intelligence that constantly solves difficult problems right off the bat, because we’ve never seen one. If the smartest human we can throw at the problem is just about as good as anyone else, then we project the quality of hardness onto the problem - not onto the abstract recognition that “human intelligence isn’t good enough”. This is the mind projection fallacy. But what we naively label “impossible” might be “easy” even to a mild version of superintelligence, say a human being with an artificially expanded neocortex. We may say, “this problem inherently requires five years of research!”, but a superintelligence walks along, says, “no it doesn’t”, and solves it in five minutes. We’re too quick to label things extremely difficult or impossible, but if we don’t, we lose our self-respect as a species, so many would argue we have to.

It seems like only transhumanists are capable of really stepping outside of that box of Homo sapiens and saying, “what if we were really and truly fundamentally smarter?” If more people could do this, then pursuing intelligence enhancement technology might become a national or even global priority.

