THE TRANSHUMANIST FAQ

Nick Bostrom et al. [See endnote.]

Version of May 13, 1999

 

CONTENTS

GENERAL QUESTIONS ABOUT TRANSHUMANISM
  • What is transhumanism?
  • What is a transhuman?
  • What is a posthuman?

TRANSHUMAN TECHNOLOGIES AND PROJECTIONS
  • What is nanotechnology?
  • What is superintelligence?
  • What is virtual reality?
  • What is uploading?
  • What is the singularity?

SOCIETY AND POLITICS
  • Won’t new technologies only benefit the rich and powerful? What happens to the rest?
  • Might transhuman technologies be dangerous?
  • Shouldn’t we concentrate on current problems like improving the condition of the poor or solving international conflicts, instead of putting effort into foreseeing the "far" future?
  • Won’t extended life worsen overpopulation problems?
  • Is there any ethical standard by which transhumanists judge "improvement of the human condition"?
  • What kind of society would posthumans live in?
  • What happens if these new technologies are used in war? Might they cause our extinction?
  • How will posthumans or superintelligent machines treat humans who aren’t augmented?
  • Do transhumanists think technology will solve all problems?

TRANSHUMANISM AND NATURE
  • Why do transhumanists want to live longer?
  • Isn’t transhumanism tampering with nature?
  • Won’t transhuman technologies make us inhuman?
  • Isn't death part of the natural order of things?
  • Are transhumanist technologies environmentally sound?

TRANSHUMANISM AS A PHILOSOPHICAL AND CULTURAL VIEWPOINT
  • What are transhumanism’s philosophical and cultural antecedents?
  • Is extropianism the same as transhumanism?
  • What currents are there within transhumanism?
  • Is transhumanism a cult/religion?
  • Won’t things like uploading, cryonics and AI fail because they can’t preserve or create the soul?
  • Is there transhumanist art?

TRANSHUMAN PRACTICALITIES
  • What evidence is there that it will happen?
  • Won't these transhumanist developments take thousands or millions of years?
  • What if it doesn’t work?
  • How can I use transhumanism in my own life?
  • How could I become a posthuman?
  • Isn’t the possibility of success in cryonics too small?
  • Won't it be boring to live forever in the perfect world?
  • How can I become involved in transhumanism?

    [NOTE: You can place comments on this document, and read others' comments, by viewing it through CritLink: http://crit.org/http://www.transhumanist.org. This might increase the downloading time though.]

     

    GENERAL QUESTIONS ABOUT TRANSHUMANISM

    What is transhumanism?

    Transhumanism represents a radical new approach to future-oriented thinking that is based on the premise that the human species does not represent the end of our evolution but, rather, its beginning. We formally define it as follows:

    (1) The study of the ramifications, promises and potential dangers of the use of science, technology, creativity, and other means to overcome fundamental human limitations.

    (2) The intellectual and cultural movement that affirms the possibility and desirability of fundamentally altering the human condition through applied reason, especially by using technology to eliminate aging and greatly enhance human intellectual, physical, and psychological capacities.

    Transhumanism can be described as an extension of humanism, from which it is partially derived. Humanists believe that humans matter, that individuals matter. We might not be perfect, but we can make things better and promote rational thinking, freedom, tolerance and democracy. Transhumanists agree with this but they also emphasize what we have the potential to become. Not only can we use rational means to improve the human condition and the external world; we can also use them to improve ourselves, the human organism. And we are not limited only to the methods, such as education, which humanism normally espouses. We can use technological means that will eventually enable us to move beyond what most would describe as human.

    Transhumanists think that through the accelerating pace of technological development and scientific understanding, we are entering a whole new stage in the history of the human species. In the near future, we will face the prospect of real artificial intelligence. New kinds of cognitive tools will be built that combine artificial intelligence with new interface technology. Molecular nanotechnology has the potential to create abundant resources for everybody and to give us complete control over the biochemical reactions in our bodies, thereby allowing us to eliminate disease. Through the redesign or pharmacological enrichment of our pleasure-centers we may enjoy a richer diversity of emotions, life-long happiness and exhilarating peak experiences every day. On the darker side of the spectrum, transhumanists recognize that some of these coming technologies could potentially cause great harm to human life; even the survival of our species could be at risk. Although these are extreme possibilities, they are taken seriously by an increasing number of scientists and scientifically literate philosophers and social thinkers.

    Transhumanism has experienced exponential growth worldwide over recent years. Presently two international transhumanist organizations exist, Extropy Institute and the World Transhumanist Association, both of which publish online journals and organize conferences. There are local transhumanist groups in many countries, and in the US one can find discussion groups in almost every major city. A growing body of transhumanist thinking is being published on the web as well as in books and journal articles. Transhumanists also conduct discussions on several open-subscription Internet mailing lists.

    References:
    Extropy Institute. http://www.extropy.org

    World Transhumanist Association. http://www.transhumanism.com

    Transhumanist mailing lists: http://www.transhumanism.com/lists.htm

     

    What is a transhuman?

    'Transhuman' is a shorthand term used to refer to a 'transitional human', a sentient being first described at length by the futurist FM-2030 as a potential step towards evolution into a posthuman [See "What is a posthuman?"]. Calling transhumans the 'earliest manifestation of new evolutionary beings,' FM suggests that some signs of transhumanity include bodily augmentation with implants, androgyny, asexual reproduction, and distributed identity.

    In FM's original formulation, transhumans are not necessarily the most future-oriented or technologically adept persons, nor would they necessarily be aware of their 'bridging role in evolution'. As FM's ideas spread and more humans began to consider themselves transhumanists, however, the concept of the transhuman has taken on aspects of self-identification and proaction, as shown in this definition from the Transhuman Terminology SubPage:

    TRANSHUMAN: Someone actively preparing for becoming posthuman. Someone who is informed enough to see the radical possibilities and plans ahead for them, and who takes every current option for self-enhancement.

    Many transhumanists already consider themselves transhuman, because our use of tools has greatly expanded the capabilities of the human body and mind. The trend is one of continuing progress in the development and use of global communications, body modification, and use of life extension techniques. Any human who takes advantage of this trend can achieve transhuman status within a lifetime.

    References:
    FM-2030. 1989. Are You a Transhuman? Warner Books, New York.

    Transhumanist Lexicon: http://www.transhumanism.com/lexicon/

     

    What is a posthuman?

    A posthuman is a human descendant who has been augmented to such a degree as to be no longer a human. Many transhumanists want to become posthuman.

    As a posthuman, your mental and physical abilities would far surpass those of any unaugmented human. You would be smarter than any human genius and able to remember things much more easily. Your body would not be susceptible to disease and would not deteriorate with age, giving you indefinite youth and vigor. You might have a greatly expanded capacity to feel emotions and to experience pleasure, love and artistic beauty. You would not need to feel tired, bored or irritated about petty things.

    The means by which transhumanists hope to achieve posthuman status include, but are not limited to, the following: molecular nanotechnology, genetic engineering, artificial intelligence (some think artificial intelligences will be the first posthumans), mood drugs, anti-aging therapies, neurological interfaces, advanced information management tools, memory enhancing drugs, wearable computers, economic inventions (such as Idea Futures, Collaborative Information Filtering etc.), and cognitive techniques. [More detailed explanations of how these things could make us posthumans are given in later sections of this FAQ.] In general, technological or social inventions that improve overall economic efficiency tend to benefit transhumanist aims.

    Posthumans could be completely synthetic (based on artificial intelligence) or they could be the result of making many partial augmentations of a biological human or a transhuman. Some posthumans may even find it advantageous to get rid of their bodies and live as information patterns on large super-fast computer networks. It is sometimes said that it is impossible for us humans to imagine what it would be like to be a posthuman. They may have activities and aspirations that we can’t even begin to fathom, much as an ape could never hope to understand the complexities of a human life.

     

     

    TRANSHUMAN TECHNOLOGIES AND PROJECTIONS

    What is nanotechnology?

    Nanotechnology is an anticipated manufacturing technology giving thorough, inexpensive control of the structure of matter.

    (The terms "nanotechnology" and "molecular nanotechnology" are sometimes used to refer to any technology able to work at a submicron scale, but the concept we have in mind here implies the ability to accurately place individual atoms. The more recent term "molecular manufacturing" is sometimes used to avoid this ambiguity.)

    Nanotechnology will enable the construction of giga-ops computers smaller than a cubic micron; cell repair machines; personal manufacturing and recycling appliances; inexpensive space-colonization equipment; and much more.

    Broadly speaking, the central thesis of nanotechnology is that almost any chemically stable structure that can be specified can in fact be built. Some aspects of the idea can be traced back to a talk by Richard Feynman in 1959, but it was only after Eric Drexler's substantive analyses in the early eighties that molecular nanotechnology became a research area and a long-term engineering project. In the last few years, the field has seen an explosion of interest and investment.

    Drexler has proposed the "assembler", a device having a submicroscopic robotic arm under computer control. It will be capable of holding and positioning reactive compounds in order to control the precise location at which chemical reactions take place. This general approach should allow the construction of large atomically precise objects by a sequence of precisely controlled chemical reactions, building objects molecule by molecule. If designed to do so, assemblers will be able to build copies of themselves, that is, to replicate.

    Because they will be able to copy themselves, assemblers will be inexpensive. We can see this by recalling that many other products of molecular machines---firewood, hay, potatoes---cost very little. By working in large teams, assemblers and more specialized nanomachines will be able to build objects cheaply. By ensuring that each atom is properly placed, they will manufacture products of high quality and reliability. Left-over molecules would be subject to this strict control as well, making the manufacturing process extremely clean.

    The plausibility of this approach can be illustrated by the ribosome. Ribosomes manufacture all the proteins used in all living things on this planet. A typical ribosome is relatively small (a few thousand cubic nanometers) and is capable of building almost any protein by stringing together amino acids (the building blocks of proteins) in a precise linear sequence. To do this, the ribosome has a means of grasping a specific amino acid (more precisely, it has a means of selectively grasping a specific transfer RNA, which in turn is chemically bonded by a specific enzyme to a specific amino acid), of grasping the growing polypeptide, and of causing the specific amino acid to react with and be added to the end of the polypeptide.

    In an analogous fashion, an assembler will build an arbitrary molecular structure following a sequence of instructions. The assembler, however, will provide three-dimensional positional and full orientational control over the molecular component (analogous to the individual amino acid) being added to a growing complex molecular structure (analogous to the growing polypeptide). In addition, the assembler will be able to form any one of several different kinds of chemical bonds, not just the single kind (the peptide bond) that the ribosome makes.

    One consequence of the existence of assemblers is that they are cheap. Because an assembler can be programmed to build almost any structure, it can in particular be programmed to build another assembler. Thus, self-reproducing assemblers should be feasible and in consequence the manufacturing costs of assemblers would be primarily the cost of the raw materials and energy required in their construction.
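
    As a rough illustration of why self-replication drives the unit cost down toward raw materials and energy, consider the following back-of-the-envelope sketch (the prototype cost and the one-hour replication time are made-up figures for illustration, not projections from the nanotechnology literature):

        # Toy figures: one expensive, hand-built prototype assembler that copies
        # itself (and whose copies copy themselves) once per hour.
        prototype_cost = 1e9        # assumed development cost of the first assembler
        hours = 30                  # assumed one doubling per hour, for 30 hours
        population = 2.0 ** hours   # ~1.07e9 assemblers after 30 doublings
        print(f"after {hours} hours: {population:.2e} assemblers, amortized "
              f"prototype cost ${prototype_cost / population:.2f} each")
        # After about 30 hours the prototype's cost is spread over roughly a billion
        # copies (under a dollar apiece); what remains per unit is essentially the
        # cost of raw materials and energy.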

    A main difficulty with nanotechnology is the bootstrap problem---how to build the first assembler. There are several promising routes. One is to improve a scanning tunneling microscope or an atomic force microscope, giving it the requisite positional flexibility and gripping ability to allow us to position atoms and molecules with sufficient precision in a 3-d grid. Progress is being made on this front; the IBM trademark, spelt out on a surface with 35 precisely positioned xenon atoms, was front-page news a few years back.

    Another route to the first assembler is through synthetic chemistry. One can imagine synthesizing cleverly designed chemical building blocks that can self-assemble in solution phase.

    Yet another route is through biochemistry. Ribosomes are special-purpose assemblers and we could use them to make assemblers of more generic capabilities. A serious obstacle on this route is the protein folding problem---predicting the solution-phase shape of a given sequence of amino acids. While the general solution to this problem may be computationally intractable, it may be possible to predict the shape of the resulting protein in certain special cases, and these predictable proteins may form a rich enough set of structures that we can use them to build a more general-purpose assembler.

    That general assemblers are consistent with the laws of chemistry was shown in Drexler's Nanosystems (1992). The book also showed that general assemblers could build a very wide range of useful structures, including ultra-powerful computers. In fact, virtually any structure that is specified in atomic detail and that is consistent with the laws of chemistry could be built by molecular assemblers, cheaply and with almost no waste. It is widely believed that mature nanotechnology would also enable the reanimation of cryogenically suspended persons and make uploading possible [see "What is uploading?"].

    While it seems fairly well established that molecular nanotechnology is in principle possible, it is harder to determine how long it will take to develop. A common guess among the cognoscenti is that the first general assembler will be built around the year 2017, give or take a decade, but there is large scope for disagreement about that.

    Because the ramifications of nanotechnology are so immense, it is imperative that people begin thinking seriously about this topic now. If nanotechnology is abused, it could have devastating consequences; society needs to develop ways of minimizing that risk. [See also "What happens if these new technologies are used in war?"]

    References:
    Drexler, E. 1986. Engines of Creation: The Coming Era of Nanotechnology. http://www.foresight.org/EOC/index.html

    Drexler, E. 1992. Nanosystems, John Wiley & Sons, Inc., NY.

    Foresight Institute. http://www.foresight.org

     

    What is superintelligence?

    A superintelligence is any intellect that greatly outperforms the best human brains in practically every field, including scientific creativity, general wisdom and social skills.

    Sometimes a distinction is made between weak and strong superintelligence. Weak superintelligence is what you would get if you could run a human-like brain at an accelerated clock speed, perhaps by uploading a human mind onto a computer [see "What is uploading?"]. If the upload’s clock-rate were a thousand times that of a biological human brain, it would perceive external reality as slowed down by a factor of a thousand, and it could think a thousand times more thoughts in a given time than its natural counterpart (a subjective year would pass in roughly nine hours of external time).

    Strong superintelligence refers to an intellect that is not only faster than a human brain but also qualitatively superior. No matter how much you sped up a dog brain, you would not get a human-equivalent brain. Similarly, some people think that there could be strong superintelligence that no human brain could match no matter how fast it runs. (However, the distinction between weak and strong superintelligence may not be clear-cut. A sufficiently accelerated human brain that didn't make any errors and had enough memory capacity (or scrap paper) could in principle compute any Turing-computable function. According to Church's thesis, the set of Turing-computable functions is identical to the set of mechanically computable functions.)

    Many (but not all) transhumanists think that superintelligence will be created in the first half of the next century. This requires two things: hardware and software.

    When chip-manufacturers design the next generation of chips, they rely on a regularity called "Moore’s law", which states that processor speed doubles about every eighteen months. Moore’s law has held for all computers, even going back to the old mechanical calculators. If it continues to hold for a few more decades, then human-equivalent hardware will have been achieved. Moore’s law is mere extrapolation, but the conclusion is supported more directly by looking at the physical limits and at what is being developed in laboratories today. Massively parallel computers may also be a way to achieve human-level computing power even without faster processors.
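
    As a back-of-the-envelope illustration of this extrapolation (the brain-equivalence figure of roughly 10^14 operations per second is only in the vicinity of Moravec's estimate, and the 10^9 figure for a late-1990s desktop computer is likewise a rough assumption chosen to make the arithmetic concrete):

        import math

        baseline_ops = 1e9           # assumed ops/sec of a late-1990s desktop computer
        brain_ops = 1e14             # assumed ops/sec for human-equivalent hardware
        doubling_time_years = 1.5    # Moore's law as stated above

        doublings = math.log2(brain_ops / baseline_ops)
        years = doublings * doubling_time_years
        print(f"~{doublings:.1f} doublings, i.e. roughly {years:.0f} years "
              f"if the trend continues to hold")    # ~16.6 doublings, ~25 years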

    As for the software problem, progress in computational neuroscience will teach us about the computational architecture of the human brain and what learning rules it uses. We can then implement the same algorithms on a computer. By using a neural network approach, we would not have to program the superintelligence: we could make it learn from experience exactly like a human child does. A possible alternative to this route might be to use genetic algorithms and methods from classical AI to create a superintelligence that may not bear a close resemblance to human brains.
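
    To make the phrase "learning rule" concrete, here is a deliberately tiny, textbook-style example (a Hebbian-type weight update); it is meant only to illustrate the kind of algorithm computational neuroscience tries to identify, not as a claim about the brain's actual learning rules:

        # Hebbian-style update: connections whose input is active when the output is
        # active get strengthened ("neurons that fire together wire together").
        learning_rate = 0.1
        weights = [0.0, 0.0, 0.0]
        # Each example pairs the activity of three input neurons with an output activity.
        examples = [([1, 0, 1], 1), ([0, 1, 0], 0), ([1, 1, 1], 1)]
        for inputs, output in examples:
            for i, x in enumerate(inputs):
                weights[i] += learning_rate * x * output
        print(weights)   # [0.2, 0.1, 0.2]: co-active connections have been strengthened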

    The arrival of superintelligence will clearly deal a heavy philosophical blow to any anthropocentric world-view. Much more important, however, are the practical ramifications. Creating superintelligence is the last invention that humans will ever need to make, since superintelligences could themselves take care of further scientific and technological development more efficiently than could humans. The human species will no longer be the smartest life-form in the known universe.

    The prospect of superintelligence raises many big issues and concerns that need to be thought hard about now, before the actual developments occur. The big question is: What can be done to maximize the chances that the arrival of superintelligences will benefit humans rather than harm us? The range of expertise needed to address this question extends far beyond that of AI researchers. Neuroscientists, economists, cognitive scientists, computer scientists, philosophers, sociologists, science-fiction writers, military strategists, politicians and legislators and many others will have to pool their insights in order to deal wisely with what may be the most important task the human species will ever have to tackle.

    Transhumanists tend to want to grow into and become superintelligences themselves. There are two ways in which they hope to do this: (1) Through gradual augmentation of their biological brains, perhaps using nootropics ("smart-drugs"), cognitive techniques, IT tools (e.g. wearable computers, smart agents, information filtering systems, visualization software etc.), neurological interfaces and bionic brain implants. (2) Through mind uploading.

    References:
    Moravec, H. 1998. "When will computer hardware match the human brain?" Journal of Transhumanism. Vol. 1. http://www.transhumanist.com/volume1/moravec.htm

    Bostrom, N. 1998. "How Long Before Superintelligence?". International Journal of Futures Studies. Vol. 2. Also at http://www.hedweb.com/nickb/superintelligence.htm

    Kurzweil, R. 1999. The Age of Spiritual Machines. Viking Press.

    What is virtual reality?

    A virtual reality is an environment you experience without being physically situated in it. Theatre, opera, cinema and television are all primitive precursors to virtual reality. Some of these (precursors to) virtual realities are modeled on physical realities. For example, when you are watching the Olympics on TV, you may sit in your living room and hear and see more or less the same things that you would have heard and seen had you been present at the event. In other cases, you experience environments that have no counterpart in physical reality, as for instance when you are watching a Tom & Jerry cartoon. Virtual realities of the latter kind are also called artificial realities.

    The degree of immersion that you experience watching television is quite limited---watching the Olympics on TV doesn’t really compare to being there---for several reasons. First, the resolution is poor: an ordinary TV has too few pixels to give you the illusion of real perception. High-resolution TV improves on this, but even with a very large screen there are still large peripheral areas of your retina that aren’t stimulated; 3-D vision is also missing. These problems could be solved by using a head-mounted display that writes directly on your retina with a laser beam. One would also like to involve more sensory modalities---headphones for stereo sound, and perhaps a haptic interface for tactile stimulation. A crucial element is interactivity: watching TV is a passive experience, but a full-blown virtual reality would allow you to manipulate the objects you perceive. For this, you need sensors that measure your responses so that the virtual reality simulation can be updated accordingly.
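
    A quick calculation suggests the scale of the pixel problem (the field-of-view and acuity figures are rough textbook values, and assuming full acuity across the entire visual field is a deliberate over-simplification):

        acuity_arcmin = 1.0                    # ~1 arcminute resolvable detail (20/20 vision)
        fov_h_deg, fov_v_deg = 200.0, 130.0    # approximate binocular field of view
        pixels_needed = (fov_h_deg * 60 / acuity_arcmin) * (fov_v_deg * 60 / acuity_arcmin)
        tv_pixels = 640 * 480                  # roughly a standard-definition TV picture
        print(f"~{pixels_needed / 1e6:.0f} million pixels to cover the visual field, "
              f"versus ~{tv_pixels / 1e6:.1f} million on an ordinary TV set")
        # ~94 million versus ~0.3 million: part of why watching TV is nothing like being there.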

    Primitive virtual (and artificial) realities have been around for some time. The earliest applications were as training modules for pilots and military personnel. Increasingly, they are also being used in arcade games for entertainment. Because VR is computationally very intensive, simulations are still very crude. As computational power increases, and as sensors, effectors and displays improve, VR will begin to approximate physical reality in terms of fidelity and interactivity.

    VR will open unlimited possibilities for human creativity. Humans will construct artificial experiential worlds that are not bound by the laws of physics but which will appear as real as physical reality to the participants. People will visit these worlds for entertainment and to work or socialize (or have sex) with other people who may be physically situated on a different continent.

     

    What is uploading?

    Uploading (sometimes called "mind uploading" or "brain reconstruction") is the hypothetical process of transferring a mind from a biological brain to a computer.

    The idea is that after scanning the synaptic structure of a brain, we could implement on an electronic medium the same computations that would normally take place in the brain's neural network. A brain scan of sufficient resolution could be produced by disassembling the brain atom by atom by means of nanotechnology. Other approaches, such as analyzing the brain slice by slice in an electron microscope with automatic image processing, have also been proposed.

    A distinction is sometimes made between destructive uploading, in which the original brain is destroyed in the process, and non-destructive uploading, in which the original brain is preserved intact alongside the uploaded copy.

    It is a matter of debate under what conditions personal identity would be preserved in destructive uploading. Most philosophers who have analyzed the problem think that at least under some conditions, an upload of your brain would be you. The idea is that you survive as long as certain information patterns are conserved, such as your memories, values, attitudes and emotions; it matters less whether they are implemented on a computer or in that gray, cheesy lump inside your skull.

    Tricky cases arise, however, if we imagine that several similar copies are made of your uploaded mind. Which one is you? Are they all you, or is none of them you? Who has the right to your property? Who is married to your wife/husband? Philosophical, legal and ethical challenges abound. Maybe these will be hotly debated political issues in the next century.

    Some facts about uploading:

    • Uploading should work for cryonics patients provided their brains are frozen in a sufficiently intact state.
    • Uploads could live in an artificial reality (i.e. constructed computer-simulated environment). An option would be to have robot bodies and sensors so they can resume their lives in physical reality.
    • The subjective time of uploads would depend on how fast the computers are on which they are running.
    • Uploads could be distributed over vast computer networks and they could make frequent backup copies of themselves. This should make it possible for uploads to have indefinite life spans.
    • Uploads could subsist on a very small amount of resources compared to a biological human, since they don’t need physical food or shelter or transportation.
    • Uploads could reproduce extremely quickly (simply by making copies of themselves). This implies that resources could quickly become scarce unless reproduction is regulated.

     

    What is the singularity?

    The technological singularity is a hypothetical point in the future where the progress curve becomes nearly vertical, i.e. where the pace of technological development becomes extremely rapid. The concept was introduced by Vernor Vinge, who thinks that, provided we manage to avoid destroying civilization beforehand, a singularity will happen because of advances in artificial intelligence, computer-human integration or other forms of intelligence amplification. Enhancing intelligence will, according to Vinge, at some point lead to a positive feedback loop: more intelligent systems can design even more intelligent systems, and can do so more quickly than the original human designers. This positive feedback is assumed to be powerful enough that within a very short time (months, days, or even just hours) the world is transformed beyond recognition and is suddenly inhabited by superintelligent beings.
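
    A toy model shows how such a feedback loop can make the progress curve effectively vertical (the four-year figure, and the assumption that each doubling of capability halves the time needed for the next one, are arbitrary illustrative choices, not Vinge's own numbers):

        first_doubling_years = 4.0   # assumed time for the first doubling of capability
        elapsed = 0.0
        for n in range(1, 31):       # 30 successive doublings
            elapsed += first_doubling_years / 2 ** (n - 1)   # each one takes half as long
        print(f"capability up ~{2.0 ** 30:.1e}-fold after only {elapsed:.2f} years")
        # The elapsed time converges to 8 years however many doublings occur, so on a
        # capability-versus-time graph the curve becomes, for practical purposes, vertical.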

    Often associated with the singularity is the idea that it is impossible to predict what comes after it. The resulting posthuman world may be so alien that we can know absolutely nothing about it. One exception might be the basic laws of physics, but even there it is sometimes speculated that there may be undiscovered laws (we don't yet have a theory of quantum gravity) or poorly understood consequences of the known laws (traversable wormholes, spawning basement universes, time travel etc.) which posthumans could utilize to do things we would normally think of as physically impossible.

    It has been pointed out that what's unpredictable at one point may become predictable as you move closer to the event. A person living in the 1950s could predict more features of today's world than could a Renaissance person, who in turn could predict more than somebody from the stone age. Since the predictability horizon recedes as we move forward in time, maybe there will never be a leap totally into the dark. At each step you could foresee much of what was going to happen at the next step, even though the endpoint might be completely invisible from the starting point.

    The issue of predictability is important because without the ability to predict at least some of the consequences of your actions, there is no point in trying to steer the development in a desirable direction.

    Transhumanists differ widely in the probability they assign to Vinge's scenario. Almost all of those who do think that there will be a singularity think it will happen in the next century, and many think it is most likely to happen within a few decades.

    References:
    Vinge, V. 1993. "The Coming Technological Singularity". http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

    Hanson, R. (ed.) 1998. "A Critical Discussion of Vinge's Singularity Concept". Extropy Online. http://www.extropy.com/eo/articles/vi.html

     

     

    SOCIETY AND POLITICS

    Won’t new technologies only benefit the rich and powerful? What happens to the rest?

    One could make the case that the average American today has a higher standard of living than any king had five hundred years ago. The king might have had a court orchestra, but you can afford a CD player that lets you listen to the best musicians any time you want. If the king got pneumonia he might well die, but you can take antibiotics. The king might have had a carriage with six white horses, but you can have a car that goes faster and is more comfortable. And you have a television, Internet access, Coca-Cola, a shower; you can speak to relatives on a different continent over the phone; and you know more about the Earth, nature and the cosmos than the king ever did.

    The typical pattern with new technologies is that they become cheaper as time goes by. In the medical field, for example, experimental procedures are usually only available to research subjects and the very rich. As these procedures become routine, their cost decreases and more people can afford them. Even in the poorest countries, millions of people have benefited from vaccines and penicillin. In the field of consumer electronics, the price of advanced computers and calculators drops as more complicated models are designed.

    It is clear that everybody can benefit greatly from improved technology. In the beginning, however, the greatest advantage will go to those who have the resources, the knowledge and especially the willingness to learn to use new tools. One can speculate that some technologies may cause social inequalities to widen. For example, if some form of intelligence amplification becomes available, it may at first be so expensive that only the richest can afford it. The same could happen when we learn how to genetically augment our children. Wealthy people would become smarter and make even more money. This phenomenon is not entirely new: rich people can give their kids a better education, and they may use tools, such as information technology and well-placed personal contacts, that are not accessible to the less privileged.

    Trying to ban technological innovations on these grounds would be misguided. If a society judges these inequalities to be unacceptable, it would be wiser for that society to increase wealth redistribution, for example by means of taxation and the provision of free services (education vouchers, IT access in public libraries, genetic enhancements covered by social security etc.). For economic and technological progress is not a zero-sum game; it's a positive-sum game. It doesn't solve the old political problem of what degree of income redistribution is desirable, but it can make the pie that is to be divided enormously greater.

     

    Might transhuman technologies be dangerous?

    Yes, and this implies the need to analyze and discuss the problems before they become real. Biotechnology, nanotechnology and AI all have the potential to create major and complex dangers if used carelessly or maliciously [see "What happens if these new technologies are used in war?"]. Transhumanists urge that it is of the greatest importance that we begin to take these issues seriously. Now.

    There are huge ethical, social, cultural, philosophical and scientific questions that need to be thought through in detail. Research is needed, as well as the widest possible public debate. We also need to create institutions and an international framework that will enable responsible policies and well-considered regulations to be implemented. All this will take time, and the sooner we begin, the better our chances of steering clear of the worst pitfalls.

    A good example is the Foresight Institute, which for several years has been promoting research into, and public understanding of, emerging transhumanist technologies, focusing especially on molecular nanotechnology.

    References:
    The Foresight Institute: http://www.foresight.org

     

    Shouldn’t we concentrate on current problems like improving the condition of the poor or solving international conflicts, instead of putting effort into foreseeing the "far" future?

    We should do both. Concentrating only on current problems and applying only current solutions doesn't work: we would be unprepared for the new problems, and our current methods are often inadequate.

    Many of the transhuman technologies or trends already exist and have become part of current debate. Biotechnology is already a reality. Information technology has transformed large sectors of our economies. As far as transhumanism is concerned, the future happens all the time.

    Most of the transhuman technologies work well together, creating synergistic effects with other parts of human society. One important factor in life expectancy is access to good medical care: improvements in medical care will extend life, and work on life extension is likely to benefit ordinary care. Work on amplifying intelligence has obvious applications in education, rational management and improved communications. Improvements in communications, rational thinking, trade and education are a very powerful means of promoting peaceful solutions to international conflicts. Nanotechnological manufacturing promises to be both economically profitable and environmentally sound.

    Working towards a world order characterized by peace, international cooperation and respect for human rights would much improve the odds that the dangerous applications of certain future technologies will not be used irresponsibly or in warfare. It would also free up resources currently spent on military armaments, and possibly channel them to improve the condition of the poor.

    Transhumanists do not have a ready-made solution that would achieve this outcome any more than anybody else does, but no doubt technology has a part to play. For example, improved communications may build more understanding between people. As more and more people get access to the Internet and are able to receive satellite radio and television broadcasts, dictators and totalitarian regimes will find it harder to silence voices of dissent and to control the information flow to their populations. And as many users of the Internet are discovering, the World Wide Web gives you friends, acquaintances and business partners from all over the world. This can only be a good thing.

     

    Won’t extended life worsen overpopulation problems?

    Population increase is an issue we would ultimately have to come to grips with even if life extension were not to happen. Some people blame technology for having given rise to the problem of overpopulation. Another way of looking at it is to consider that were it not for technology, most people alive today would not have existed---including the ones who are complaining about overpopulation! Were we to stop using modern agriculture, most humans would soon die of starvation and the diseases that follow in its trail. Were it not for antibiotics and medical intervention, especially at childbirth, many of us would have died in infancy. It’s worth thinking twice before calling something a "problem" when we owe our very existence to it.

    This is not to deny that too rapid population growth causes crowding, poverty and depletion of natural resources. In this sense there is a real problem. Programs to provide contraception and family planning, especially to couples in the poorer countries where population growth is fastest, should be supported. The constant lobbying by some religious pressure groups in the United States to block these humanitarian efforts is seriously misguided, in the opinion of transhumanists.

    How many people the Earth can sustain at a comfortable standard of living, and without damage to the environment, is a function of technological development. New technologies, from simple improvements in irrigation and management to current breakthroughs in genetic engineering, should continue to improve world food output (while reducing animal suffering).

    One thing that the environmentalists are right about is that the status quo is unsustainable. Things cannot, as a matter of physical necessity, remain the way they are today indefinitely or even for very long. If we continue to use up resources at the current pace then we will run into serious shortages sometime in the first half of the next century. The deep greens have an answer to this: they suggest we turn back the clock and return to an idyllic pre-industrial age that was in harmony with nature. The problem is that the pre-industrial age was anything but idyllic---poverty, misery, disease, heavy manual toil from dawn to dusk, superstitious fear and cultural parochialism (and it wasn't environmentally sound either---witness the deforestation of England and the Mediterranean, the desertification of large parts of the Middle East, and soil depletion by the Anasazi). We don’t want that. Also, it’s hard to see how more than a few hundred million people could be maintained at a reasonable standard of living with pre-industrial production methods, so 90% of the world population would somehow have to be got rid of.

    Transhumanists propose a much more realistic alternative: not to go backward but to push ahead as hard as we can. The environmental problems that technology creates are problems of intermediary, inefficient technology. Technologically less advanced industries in the former Soviet bloc pollute much more than do their Western counterparts. High-tech industry is relatively benign. When we develop molecular nanotechnology, we will not only have perfectly clean and efficient production of almost any commodity, but we will also be able to clean up the mess created by today’s crude production methods. This sets a standard for a clean environment that transhumanists challenge any traditional environmentalist to try to match.

    Nanotechnology will also make it cheap to colonize space. From a cosmic point of view, Earth is a totally insignificant little speck. It has been suggested that we ought to preserve space in its pristine glory and leave it untouched. This view is hard to take seriously. Every hour, through entirely natural processes, vast amounts of resources---thousands of times more than the total of what the human species has used throughout its career---are transformed into radioactive substances or wasted as radiation escaping into intergalactic space. One has a very limited imagination if one cannot think of a more creative way of using all this matter and energy.
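
    A rough sense of the scale involved (the solar luminosity is a standard figure; the total for humanity's historical energy use is a deliberately generous guess, so treat the result as an order of magnitude only):

        solar_luminosity_watts = 3.8e26   # energy the Sun radiates per second
        seconds_per_hour = 3600
        one_hour_of_sunlight = solar_luminosity_watts * seconds_per_hour
        humanity_total_joules = 1e22      # assumed generous total for all energy humanity has ever used
        print(f"one hour of the Sun's output / all human energy use ever: "
              f"~{one_hour_of_sunlight / humanity_total_joules:.0e}")
        # ~1e+08: a single ordinary star radiates away, in one hour, vastly more energy
        # than our species has used in its entire history.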

    Even with full-blown space colonization, however, population growth could continue to be a problem (even if we assume that an unlimited number of people could be transported from Earth into space). If the expansion speed is limited by the speed of light, then the amount of resources under human control will grow only polynomially (roughly as t^3). Population, on the other hand, can easily grow exponentially (as e^t). If that happens then, since a factor that grows exponentially will eventually overtake any factor that grows polynomially, average income will ultimately drop to the Malthusian subsistence level, forcing population growth to slow. How soon this would happen depends primarily on reproduction rates; an increase in average life span does not have a big effect. Even vastly improved technology can only postpone the inevitable for a relatively brief time. The only long-term solution is population control, restricting the number of new persons created per year. This does not mean that the population could not grow, only that the growth would have to be polynomial rather than exponential.
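
    The polynomial-versus-exponential point can be made concrete with a minimal numerical sketch (the 1% growth rate and the head-start constant are arbitrary illustrative assumptions, not projections):

        import math

        growth_rate = 0.01      # assumed 1% population growth per year
        head_start = 1000.0     # assumed initial resource advantage (arbitrary units)

        for t in range(1, 20001):
            resources = head_start * t ** 3             # polynomial growth, ~ t^3
            population = math.exp(growth_rate * t)      # exponential growth, ~ e^t
            if population > resources:
                print(f"population overtakes resources after ~{t} years")
                break
        # Whatever constants are chosen, e^(kt) eventually exceeds any polynomial in t,
        # so per-capita resources must eventually fall unless reproduction is limited.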

    A few more points to consider:

    • In technologically advanced countries, couples tend to have fewer children---below the replacement rate. The only cause of population growth in much of the West is immigration. As a matter of empirical fact, giving people increased rational control over their lives (and especially female education and equality) causes them to have fewer children.
    • If one took seriously the idea of limiting life span to control population, why not be more active about it? Why not encourage suicide? Why not execute anyone reaching the age of 75?---That is clearly absurd.
    • Extending the human life span would not need to worsen the overpopulation problem any more than would improving automobile safety or worker safety, or reducing violent crime.
    • When transhumanists say they want to extend life spans, what they mean is that they want to extend health spans. There is no point in living an extra ten years in a state of dementia. This means that the extra man-years would be productive and would add economic value to society.
    • The population growth rate has been decreasing for several decades. It reached its peak in 1970 at 2.07%. In 1998, the rate was about 1.33%, and it is expected to drop below 1% in 2016 [UN report, 1998]. The doomsday predictions of the Club of Rome from the early 1970s have consistently turned out to be wrong.
    • The more humans there are, the more brains there will be working to invent new ideas and solutions.
    • If people can look forward to a longer life, they will have a personal stake in the future and will hopefully be more concerned about the long-term consequences of their actions.
    References:
    United Nations. World Population Prospects: The 1998 Revision (United Nations, New York). http://www.popin.org/pop1998/

     

    Is there any ethical standard by which transhumanists judge "improvement of the human condition"?

    Transhumanism is compatible with a variety of ethical systems, and transhumanists themselves hold many different views. Nonetheless, the following seems to constitute a common core of agreement:

    According to transhumanists, the human condition has been improved if the conditions of individual humans have been improved. In practice, the individual is usually the judge of what is good for himself or herself. Therefore, transhumanists advocate individual freedom, especially the moral right of those who so wish to use technology to extend their mental and physical capacities and to improve their control over their own lives.

    From this perspective, an improvement to the human condition is a change that gives individuals greater opportunity to shape themselves and their lives according to their informed wishes. Notice the word "informed": it is important that people be aware of what they are choosing between. Education, freedom of information, information technology, idea futures and, potentially, intelligence amplification can all help people make more informed choices. (Idea futures is a proposed market in which people would place bets on uncertain scientific hypotheses or predictions about the future, thus encouraging an honest consensus; see Hanson (1990).)
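
    A minimal illustration of how an idea futures market would elicit an honest consensus (the prices and probabilities below are hypothetical): a claim pays $1 per share if it turns out to be true, so the going price of a "yes" share can be read as the market's consensus probability, and anyone who thinks that probability is wrong has a financial incentive to trade and thereby correct it.

        price = 0.70              # assumed current price of a "yes" share, in dollars
        my_probability = 0.85     # a trader's own estimate that the claim is true
        shares = 100
        expected_profit = shares * (my_probability * 1.00 - price)
        print(f"expected profit from buying {shares} shares: ${expected_profit:.2f}")
        # Traders who believe the consensus is too low profit, in expectation, by buying,
        # which pushes the price (the consensus probability) toward their information.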

    References:
    Hanson, R. 1990. "Could Gambling Save Science?". Proc. Eighth Intl. Conf. on Risk and Gambling, London. http://hanson.berkeley.edu/gamble.html

     

    What kind of society would posthumans live in?

    Not enough information is available at this time to provide a full answer to this question. The type of society that posthumans would live in depends on the type of posthuman that evolves from present-day humans. Right now, transhumanists can project various paths of possible posthuman evolution [See "What is a posthuman?"]. Some of these paths may result in a single posthuman, but only time will reveal which of these paths, if any, results in an entire society of posthumans.

    Transhumanists can speculate about how a posthuman might interact with humans---provided that a posthuman would want to interact with humans at all---but it is difficult to imagine how a society of posthumans might conduct their lives. Any construction of a posthuman society at this point would be based on current experiences and desires of humans or transhumans, concerns that may have no relevance at all to posthumans. Posthumans will most likely invent entirely new forms of societal living. As the seeds of a posthuman society develop, some of us hope to have the opportunity to observe their interactions with humans, transhumans, and other posthumans, from which one might formulate an idea of what kind of posthuman society could develop.

     

    What happens if these new technologies are used in war? Might they cause our extinction?

    Some of the technologies that will be developed in the next century will be very, very powerful. If used for the wrong purposes, they could inflict great harm on humans and the environment. Some could even, in a worst-case scenario, bring about the extinction of intelligent life. This is the worst possible outcome, and it must be avoided at any cost.

    Here are some species-destroying scenarios that transhumanists have discussed:

    Gray goo.---Self-replicating nanomachines [See "What is nanotechnology?"] accidentally get out of control and consume the whole biosphere, turning it into "gray goo". Since molecular nanotechnology will open up new chemical reaction pathways, there is no reason to suppose that the ecological balances that limit the growth of organic self-replicators would pose an obstacle to the nano-replicators.

    In principle, it would be relatively easy to build in multiple safeguards that would make it impossible for this scenario to happen. For example, one could make the self-replicating machines dependent on some "vitamin"---a rare chemical that they need in order to function. Or one could make adaptive mutations arbitrarily unlikely through the right kind of design. Experiments with self-replicators could be confined to "sealed labs", small chambers that automatically explode, evaporating their contents, if anything attempts to penetrate their walls (either from the inside or from the outside). Consequently, provided the development of nanotechnology is done by responsible people and with stringent safeguards, the gray goo scenario could be avoided.

    Black goo.---The consensus is that "black goo"---deliberately manufactured destructive nanomachines---is a much bigger problem.

    One way of meeting the threat of black goo is to develop "active shields"---automated defense systems with built-in constraints to limit or prevent their offensive use. One can imagine a global immune system consisting of nanomachines roaming the surface of the Earth in search of dangerous replicators. A problem with this approach is that even though it may ultimately be possible to build a reliable global immune system, it might be much more difficult to do so than to build destructive nanomachines. If so, there will be a time interval during which the world would be unprotected. It is essential that anti-proliferation treaties and global regulation prevent aggressors from abusing nanotechnology during this period.

    Another way to reduce the risk of extinction would be to create distributed space colonies. Again, the problem is that it might take too long before this becomes possible on a large scale.

    How long the critical period is (from the development of dangerous nanomachines to the development of adequate defenses) depends on the rate of technological progress during this interval. People who think there will be a singularity [See "What is the singularity?"] believe this period may be very brief.

    Superintelligence.---While transhumanists in general want superintelligence, some worry that a badly programmed superintelligence might decide to annihilate human beings or even all intelligent life including itself. What fuels this concern is the idea that a superintelligence would be intellectually so alien and superior to the human mind that it would be hard for us to anticipate or regulate its motivations and impossible for us to control it against its will. [See "How will posthumans or superintelligent machines treat humans who aren’t augmented?"]

    Nuclear and biological weapons.---Nuclear and biological weapons continue to be a threat. Today’s arsenals seem too small to put an end to our species. However, it is not implausible that biological agents even deadlier than the current ones will be produced through genetic engineering. Hopefully, the development of vaccines and antidotes will keep up with the development of toxins and plagues, but there is no way of knowing.

    Counteracting the proliferation of weapons of mass destruction should be a top priority of every responsible nation. Even apart from a species-destroying major war, it is all too easy to imagine a rogue country or a terrorist group using weapons of mass destruction to inflict great civilian casualties and to disrupt civilization, perhaps in a blackmail scenario.

    Other doomsday scenarios.---A runaway greenhouse effect, in which warming releases more and more methane, a powerful greenhouse gas (very unlikely to cause our extinction in the opinion of most transhumanists); naturally occurring pandemics, spread rapidly through intercontinental travel (unlikely to kill us all but should be taken seriously); comet and asteroid strikes (highly unlikely); causing the decay of a metastable vacuum through high-energy accelerator experiments (the energies we reach today are much less than those occurring all the time in natural cosmic ray collisions, but more powerful ways of accelerating particles in the future might possibly pose a danger). No doubt there are other dangers that we haven’t yet thought of. Important in this context is the controversial Carter-Leslie Doomsday argument, which purports to derive from Bayesian probability theory and a few trivial empirical assumptions that the risk of human extinction has hitherto been systematically underestimated [see reference].
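
    To see how the Doomsday argument's probability shift works, here is a toy version of the standard "doom soon" versus "doom late" calculation; the priors, population totals and birth rank below are illustrative assumptions rather than figures taken from the cited sources:

        prior_doom_soon = 0.05            # assumed prior: ~200 billion humans ever live
        prior_doom_late = 0.95            # assumed prior: ~200 trillion humans ever live
        total_soon, total_late = 200e9, 200e12
        my_birth_rank = 60e9              # roughly how many humans have been born so far

        # Under the self-sampling assumption you reason as if you were a random sample
        # from all humans who will ever have lived, so the likelihood of observing your
        # birth rank under each hypothesis is 1 / (total number of humans ever).
        like_soon = 1.0 / total_soon if my_birth_rank <= total_soon else 0.0
        like_late = 1.0 / total_late if my_birth_rank <= total_late else 0.0

        posterior_soon = (prior_doom_soon * like_soon) / (
            prior_doom_soon * like_soon + prior_doom_late * like_late)
        print(f"posterior probability of 'doom soon': {posterior_soon:.2f}")
        # The 5% prior is pushed to about 0.98: a low birth rank shifts probability toward
        # hypotheses on which comparatively few humans ever live.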

    References:
    Drexler, E. 1986. Engines of Creation: The Coming Era of Nanotechnology, chapters 11-15. http://www.foresight.org/EOC/index.html

    Leslie, J. 1996. The End of the World: The Ethics and Science of Human Extinction. Routledge.

    Bostrom, N. 1996. "Investigations into the Doomsday argument" http://www.anthropic-principle.com/preprints.html

     

    How will posthumans or superintelligent machines treat humans who aren’t augmented?

    That depends on the motivations of posthumans, and nobody knows the precise answer. Let’s examine three possible scenarios:

    (a) It is possible that a future society will encompass both humans and posthumans, as well as many different kinds of transhumans. Especially if posthumans develop gradually, one can easily imagine that there will be a period during which very different life forms coexist peacefully. Maybe humans will initially dominate because of their vast numbers, but the influence of posthumans will gradually increase.

    When the posthumans become much more powerful than the humans (and this could happen quickly or it could take decades), it is likely that the relationship changes from one between equals to something else. Here we can distinguish two possibilities, one optimistic and one pessimistic.

    (b) The optimistic outcome is if posthumans continue to respect and tolerate humans. Posthumans could live as benevolent demigods among humans and help them out when they are in trouble, for example by taking care of the environment or ensuring that every human will have enough to eat. Any human who would like to become a posthuman would be given the opportunity, but those who preferred to remain human would do so and they would be able to continue to live traditional human lives. If unaugmented humans preferred not to have posthumans living among them, the posthumans could find ample Lebensraum ("space to live in") on other planets and in other solar systems.

    (c) The pessimistic outcome (at least from a human perspective) is if the posthumans decide that human beings represent hopelessly inefficient ways of using matter and energy that could be put to better uses. If the posthumans are not bound by human-friendly laws and do not have a moral code that says it would be wrong, they might then decide to take actions that would entail the extinction of the human species. Maybe they would transform our planet into a giant computer or into space probes that would be sent out to speed up the process of colonizing the universe.

    Humans and transhumans can be proactive in making (b) more likely than (c). For even though posthumans will ultimately become much more powerful than humans, they will either be artificial intelligences originally constructed by humans, or they will actually be humans who have transcended. In the first case, we could make sure that the values of tolerance and respect for human well-being are incorporated as core elements of the programming, making them part of an inviolable moral code. In the second case, we could improve the odds by fostering those same values among humans today, so that the humans who ultimately transcend will have high ethical standards. And in both cases, it could help if we continue to build stable democratic traditions and constitutions, ideally expanding the rule of law to the international plane as well as the national.

     

    Do transhumanists think technology will solve all problems?

    Technology will not solve any problem by itself. What technology will do is give us increasingly powerful tools that humans can use to solve almost any material problem (including giving abundant wealth to everybody)---provided we have the foresight to take the necessary safety measures and provided we are cooperative enough that we don’t use the new technologies to wage war against each other.

    These are big ifs, and they indicate that the greatest difficulty we will face is not technological or scientific. Hard as the technical obstacles are, they will almost certainly be overcome sooner or later. Technological development is pretty much going in a transhumanist direction by its own momentum.

    The really tricky part will be political. Can people around the world and their leaders muster enough foresight and cooperation to pass and enforce international agreements that will prevent hostile military applications? Or at least delay them until effective defense systems have been developed? Nobody knows, but our survival might depend on it.

     

     

    TRANSHUMANISM AND NATURE

    Why do transhumanists want to live longer?

    Have you ever been so happy you almost wanted to scream? Was there a moment in your life when you felt something so deep and sublime that it seemed like all your everyday life was but a dull gray slumber?

    It is so easy to forget how good things can be when they are at their best. But on those rare occasions when you do remember---whether it’s through being totally absorbed in creative work, or it’s the sense of achievement, or the ecstasy of romantic love---you realize just how valuable every single minute of existence can be. And you may have said to yourself: "It ought to be like this always. Why can’t this last forever?"

    Well, what if it could?

    When transhumanists seek to extend human life span, they are not trying to add a couple of extra years of senility and sickness in an old people's home. That would be pointless. No, what they want is to create more healthy, happy, productive years. Ideally, everybody should have the right to choose when and how they want to die---or not to die at all. Transhumanists want to live longer because they want to do, learn and experience more than they can in a normal human life span. They want to continue to grow and mature and develop for much longer than the meager eight decades allotted to us by our evolutionary past. As the sales pitch for one cryonics organization goes:

    "The conduct of life and the wisdom of the heart are based upon time; in the last quartets of Beethoven, the last words and works of 'old men' like Sophocles and Russell and Shaw, we see glimpses of a maturity and substance, an experience and understanding, a grace and a humanity, that isn’t present in children or in teenagers. They attained it because they lived long; because they had time to experience and develop and reflect; time that we might all have. Imagine such individuals---a Benjamin Franklin, a Lincoln, a Newton, a Shakespeare, a Goethe, an Einstein---enriching our world not for a few decades but for centuries. Imagine a world made of such individuals. It would truly be what Arthur C. Clarke called ‘Childhood's End’---the beginning of the adulthood of humanity. You could be a part of this. And you should be. Join us. Choose life." (The Cryonics Institute)

     

    Isn’t transhumanism tampering with nature?

    This question goes to the very heart of transhumanism. Transhumanists say it is right to tamper with nature. It is nothing to be ashamed of. There is absolutely no moral or ethical reason why we should not interfere with nature and improve it if we can, whether it’s by eliminating diseases, improving the efficiency of agriculture to feed a growing world population, or putting communication satellites up into orbit to provide homes with satellite news and entertainment.

    In many particular cases there are of course good practical reasons why we do best to rely on "natural" processes. The point is, you can’t decide whether something is good or bad simply by asking whether it’s natural or not. Some natural things are bad, like starvation, tuberculosis, or being eaten alive by a tiger. Some synthetic things are bad, like DDT-poisoning, car accidents, and nuclear weapons.

    To take an example, consider the debate about human cloning. Some argued that cloning humans was not unnatural because human clones are essentially just identical twins. They were right. But the more fundamental point to make is that it doesn’t matter whether human clones are natural or not. When we are discussing whether we should clone humans, we have to compare the various possible desirable consequences with the various possible undesirable consequences. We then have to try to estimate how likely each of these consequences is. This debate is much harder than simply dismissing cloning as unnatural, but it’s also more likely to result in good decisions.

    Does all this seem trivial? Well, it should! Yet, it’s amazing how people can still get away with arguments that are basically (thinly disguised) ways of saying, "It’s good because it’s the way it has always been!" or "It’s good because that’s the way Nature made it!".

     

    Won’t transhuman technologies make us inhuman?

    This question is based on confusing "human" with "humane". Human means "Belonging to man or mankind; having the qualities or attributes of a man; of or pertaining to man or to the race of man" (Webster's Revised Unabridged Dictionary 1913). Transhumans will change many of these attributes and qualities. Many human attributes are inconvenient or destructive; most transhumanists want to promote the positive sides of humanity (such as being "humane"---kind or compassionate) and get rid of (or at least control) the bad.

    There is no intrinsic value in being human, just as there is no intrinsic value in being a rock, a frog or a posthuman. The value resides in who we are as individuals, and what we do with our lives.

     

    Isn't death part of the natural order of things?

    Transhumanists insist that whether something's natural or not isn't relevant to whether it's good or desirable [see "Isn't transhumanism tampering with nature?" and also "Won't extended life worsen overpopulation problems?"].

    The quest for immortality is one of the most ancient and deep-rooted of human aspirations. It has been a key theme in human literature from the earliest known narrative poem, The Epic of Gilgamesh, through innumerable myths and poetic stories since. It underlies the teachings of the world religions about spiritual immortality and the hope of an afterlife. If death is part of the natural order, so is the human craving to overcome death.

    Before transhumanism, the only hope of evading death was through reincarnation or otherworldly resurrection. People who saw such religious doctrines as figments of human imagination were left with no alternative but to accept that death was inevitable. Secular worldviews, including traditional humanism, would typically include some sort of explanation of why death was not such a bad thing after all. Some existentialists even maintained that death was necessary to give life meaning!

    It is understandable that people make excuses for death. Until recently there was absolutely nothing we could do about death, and it made some degree of sense to create these comforting philosophies (transhumanists call them "deathism") according to which dying of old age is natural and good. Such beliefs used to be relatively harmless. But they have outlived their purpose. Today, we can begin to foresee the possibility of eventually abolishing aging and we have the option of taking active measures to stay alive until then, through life extension techniques or cryonics. This makes such comforting illusions dangerous, indeed deadly, since they teach us helplessness and encourage passivity.

    A common myth, held especially among the young, is that old people get "sated" with life. In reality, many older people enjoy being alive as much as ever. Some do feel tired of life when they get very old, but that is usually because they are sick without hope of improvement; they sense their minds and bodies wasting away; their best friends are dead or dying. Under such circumstances, death can come as a welcome relief. But imagine that you could be given a new shot of life, that it would be possible to restore your mind and body to full youthful vigor (while retaining all the knowledge and experiences of a lifetime) and maybe bring some of your old friends back to life. Would you reject such an offer? Even if you now think you would, chances are you would change your mind if you ever faced this choice as a concrete reality.

    A few people might still choose death. That's fine too, as long as it is an informed choice. The rest of us could look forward to an indefinite life-span in the posthuman era.

    The transhumanist position is clear about the ethics of death. According to transhumanists, death should be voluntary. This means that everybody should be free to extend their life spans and to arrange for cryonic suspension of their bodies. It also means that voluntary euthanasia should be regarded as a basic human right.

     

    Are transhumanist technologies environmentally sound?

    Transhumanist technologies are generally environmentally sound. Intermediate technologies typically pollute far more than advanced ones. The industrial sector of the former Soviet Union, for example, is far more polluting than its more sophisticated counterparts in the West. Information technology, medical procedures and high-tech in general are relatively clean.

    Transhumanists can make a stronger claim regarding the environment---current technologies are not sustainable. We are using up essential resources (oil, metals, atmospheric pollution capacity) more quickly than they can regenerate. At the present rate of consumption, we will exhaust these resources some time in the next century. Realistic alternatives that have been proposed involve the transhumanist recommendation: to take technology to a more advanced level. Not only are transhumanist technologies ecologically sound, they may be the only environmentally viable option for the long term.

    With mature molecular nanotechnology we will have a way of producing almost any commodity with no waste or pollution whatsoever. What’s more, it will enable us to clean up the mess created by the rather primitive technology we have today. That sets a standard that other approaches to the environment cannot hope to match. Nanotechnology would also make it economically feasible to build space-based solar plants, to mine extraterrestrial bodies for ore and minerals and to move heavy industries off-earth. The only true long-term solution to resource shortage is space colonization.

    It should also be noted that from a transhumanist point of view, humanity and its artifacts and actions are part of the extended biosphere, and human intervention is a legitimate part of it.

     

     

    TRANSHUMANISM AS A PHILOSOPHICAL AND CULTURAL VIEWPOINT

    What are transhumanism’s philosophical and cultural antecedents?

    The human desire to acquire godlike attributes is presumably as ancient as the human species itself. Humans have always sought to expand the boundaries of their existence, be it geographically, ecologically or mentally. There is a tendency in at least some individuals to always try to find a way around every limitation or obstacle.

    Ceremonial burial and preserved fragments of religious writings show that prehistoric humans were deeply disturbed by the death of their loved ones and sought to reduce the cognitive dissonance by postulating an afterlife. Yet, despite the idea of an afterlife, people still strove for extended life. In the Sumerian story of Gilgamesh (approx. 2000 B.C.), a king sets out on a quest to find an herb that can make him immortal. It is worth noting both that mortality was assumed not to be inescapable in principle and that there was thought to exist (at least a mythological) means of overcoming it. That people really strove to live longer and richer lives can be seen in the development of the various systems of magic and alchemy; lacking practical means, they turned to magical ones. A typical example is the various schools of esoteric Taoism in China, which sought physical immortality and control of, or harmony with, the forces of nature.

    The Greeks were ambivalent about humans transgressing their natural confines. On the one hand, they were fascinated by the idea. We see it in the myth of Prometheus, who stole fire from Zeus and gave it to the humans, thereby permanently improving the human condition. In the Daedalus myth, the gods are repeatedly challenged, quite successfully, by the clever engineer and artist Daedalus, who applies non-magical means to extend human capabilities. On the other hand, there is also the concept of hubris: that some ambitions are off-limits and will backfire if pursued. In the end, Daedalus' enterprise ends in disaster (which, however, was not a punishment by the gods but was entirely due to natural causes).

    Greek philosophers made the first attempts to create systems of thought that were based not purely on belief but on logical reasoning. Socrates and the sophists extended the application of critical thinking from metaphysics and cosmology to include the study of ethics and questions about human society and human psychology. From this inquiry arose cultural humanism, a very important current throughout the history of Western science, political theory, ethics and law.

    The Renaissance meant an awakening from the mediaeval way of reasoning, and the human being and the natural world again became legitimate objects of study. Renaissance humanism encouraged people to rely on their own observations and their own judgement rather than defer in everything to religious authorities. Renaissance humanism also created as an ideal the well-rounded personality, one that is highly developed scientifically, morally, culturally and spiritually. A milestone is Giovanni Pico della Mirandola's "Oration on the Dignity of Man" (1486), where he explicitly says that man does not have a ready form but that it is man's task to form himself into something. Modern science began to take form during this period, through the work of Copernicus, Kepler and Galileo.

    The Age of Enlightenment can be said to have started with the publication of Francis Bacon's Novum Organum, "the new tool" (1620), in which he proposes a new scientific methodology based on empirical investigation rather than a priori reasoning. Bacon advocated the project of "effecting all things possible", by which he meant achieving mastery over nature in order to improve the condition of human beings. The heritage from the Renaissance combines with the influences of Columbus, Isaac Newton, Thomas Hobbes, John Locke, Immanuel Kant and others to form the basis for rational humanism, which emphasizes science and critical reasoning---rather than revelation and religious authority---as means of finding out about the natural world and the destiny and nature of man, and of giving a grounding for morality. Rational humanism is a direct predecessor of transhumanism.

    In the eighteenth and nineteenth centuries we begin to see glimpses of the idea that even humans themselves can be developed through the application of science. Benjamin Franklin and Voltaire speculated about extending human life span through medical science. Especially after Darwin's theory of evolution, atheism or agnosticism came to be seen as increasingly attractive alternatives to Christianity. However, the optimism of the late nineteenth century often degenerated into positivism and the belief that progress was inevitable. When this view collided with reality, it caused a reaction, and many turned to irrationalism, making the mistake of concluding that since reason was not sufficient, it was worthless. This resulted in the anti-technological, anti-intellectual attitudes that are still with us today, for example in the New Age movement.

    An important stimulus in the formation of transhumanism was the essay "Daedalus: Science and the Future" (1923) by the British biochemist J. B. S. Haldane, where he discusses how scientific and technological findings may come to affect society and improve the human condition. This essay set off a chain-reaction of future-oriented discussions, including "The World, the Flesh and the Devil" by J. D. Bernal (1929), which speculates about space colonization and bionic implants as well as mental improvements through advanced social science and psychology; the works of Olaf Stapledon; and the essay "Icarus: the Future of Science" (1924) by Bertrand Russell, who took a more pessimistic view, arguing that without more kindliness in the world, technological power will mainly serve to increase men's capacity to inflict harm on one another. These ideas, which were developed further in Aldous Huxley's novels and later by many science fiction writers, have all been influential in transhumanist thinking and in futures studies.

    The second world war changed the direction of many of those currents that today have led up to transhumanism. The earlier eugenics movement had been seriously discredited, and the idea of creating a new and better world became taboo and passé. (Even today's transhumanists remain deeply suspicious of collective change; the goal is rather to redesign oneself and maybe one's own descendants.) Instead, optimistic futurists directed their attention more toward technological progress, such as space travel, electronics and computers. Science began to catch up with speculation.

    Transhumanist thoughts during this period were mostly discussed and analyzed in the literary genre of science fiction. Authors such as Arthur C. Clarke, Isaac Asimov, Robert A. Heinlein, Stanislaw Lem, and later Bruce Sterling, Greg Egan, Vernor Vinge and many others explored various aspects of transhumanism and contributed to its proliferation.

    Robert Ettinger played an important role in giving transhumanism its modern form. He started the cryonics movement with the publication of his book The Prospect of Immortality (1964). He argued that since medical technology seems to be constantly progressing, and since chemical activity comes to a halt at sufficiently low temperatures, it should be possible to freeze a person today and preserve her until such time as technology is advanced enough to repair the freezing damage and cure whatever diseases she might have had. In 1972, Ettinger published Man into Superman, where he discussed a number of conceivable improvements to the human being, continuing the tradition started by Haldane and Bernal.

    Another influential early transhumanist is F. M. Esfandiary, who later changed his name to FM-2030. One of the first professors of future studies, FM taught at the New School for Social Research in New York in the 1960s and formed a school of optimistic futurists known as the UpWingers. In his 1989 book Are You a Transhuman?, one can find the first description of the concept of the transhuman as an evolutionary bridge towards posthumanity. (A note on terminology: FM also referred to transhumans as ‘trans’. The word ‘transhuman’ was first used in a science fiction story by Damien Broderick in 1976, although it stood for a somewhat different concept there. The word ‘transhumanism’ was coined by Julian Huxley in New Bottles for New Wine (1957).)

    In the seventies and eighties, many organizations appeared for life extension, cryonics, space colonization or futurism. They were generally isolated from one another even though many of them shared similar views and values. One prominent voice for a transhumanist standpoint during this epoch was Marvin Minsky.

    In 1988, the first issue of the Extropy Magazine was published by Max More and T.O. Morrow, and in 1992 they founded the Extropy Institute. The magazine and the institute served as catalysts for bringing together many of the earlier separate groups. Max More wrote the first definition of the word 'transhumanism' in its modern sense. If one wants to put a date and a place on the birth of modern transhumanism, it happened in America in the late eighties. The transhumanist arts movement also became self-aware around this time through the works of Natasha Vita-More.

    Eric Drexler's Engines of Creation (1986) was the first book-length treatment of molecular nanotechnology, its potential uses and abuses, and the strategic issues raised by its development. This groundbreaking book had a huge and lasting impact on transhumanist thought. Also influential was robotics researcher Hans Moravec's Mind Children (1988), and his more recent Robot (1999). Drexler and Moravec remain at the cutting edge of transhumanist thinking today. Two other influential contemporary transhumanists are Anders Sandberg and the American economist and polymath Robin Hanson.

    Many transhumanists do not agree with all the political views of the Extropy Institute. The World Transhumanist Association was therefore founded in 1998 by Nick Bostrom and David Pearce to complement the Institute and act as an umbrella organization for all transhumanist-related groups and interests. Focusing on supporting transhumanism as a rigorous academic and scientific discipline, the WTA publishes the Journal of Transhumanism, the first peer-reviewed scholarly journal for transhumanist research.

    References on the web:
    Giovanni Pico della Mirandola. 1486. Oration on the Dignity of Man. http://www.physics.wisc.edu/~shalizi/Mirandola/

    Haldane, J. B. S. 1923. Daedalus: Science and the Future. http://www.physics.wisc.edu/~shalizi/Daedalus.html

    Russell, B. 1924. Icarus: The Future of Science. http://www.physics.wisc.edu/~shalizi/Icarus.html

    Bernal, J. D. 1929. The World, the Flesh & the Devil. http://www.physics.wisc.edu/~shalizi/Bernal/

    Ettinger, R. 1964. The Prospect of Immortality. http://www.cryonics.org/book1.html

    Ettinger, R. 1972. Man into Superman. http://www.cryonics.org/book2.html

    Drexler, E. 1986. Engines of Creation: The Coming Era of Nanotechnology, chapters 11-15. http://www.foresight.org/EOC/index.html

    Journal of Transhumanism. http://www.transhumanist.com

     

    Is extropianism the same as transhumanism?

    Extropianism represents one distinctive subset of transhumanist thought (so all extropians are transhumanists but not vice versa). Extropians take their name from the concept of "extropy", developed by Max More and Tom Morrow, that refers to a system's growth and vitality.

    Extropianism is defined by the Extropian Principles, a document authored by the founders and members of the Extropy Institute. Version 3.0 of the Principles lists seven principles that are important for extropians in the development of their thinking: Perpetual Progress, Self-Transformation, Practical Optimism, Intelligent Technology, Open Society, Self-Direction, and Rational Thinking.

    Politically, the extropians oppose authoritarian social control and favor the rule of law and decentralization of power. Transhumanism as such does not advocate any particular political viewpoint, although it does have political consequences. Transhumanists themselves hold a wide range of political opinions (there are liberals, social democrats, libertarians, green party members etc.), and some transhumanists have elected to remain apolitical.

    References:
    More, M. 1998. The Extropian Principles, v. 3.0. http://www.maxmore.com/extprn3.htm

     

    What currents are there within transhumanism?

    A rich variety of opinion exists within transhumanist thought, and many subgroups have formed based on their specific interests, views, values, or geographical locations.

    Groups distinguished by their interests include: cryonicists, life extensionists, nanotechnology specialists, the Wired community, space enthusiasts, transhumanist artists and performers, science fiction aficionados, cypherpunks and people experimenting with alternative societal groupings.

    The extropians constitute a prominent transhumanist group that places a high value on self-ownership, self-transformation, individual freedom, and freedom from state coercion [see "Is extropianism the same as transhumanism?"].

    Another transhumanist current is represented by advocates of the sort of "paradise-engineering" outlined in David Pearce's The Hedonistic Imperative. Pearce argues on ethical grounds for a biological program to abolish all forms of cruelty, suffering and malaise. In the short run, our emotional lives can be enriched by designer mood-drugs (i.e. not street-drugs). In the long term, Pearce argues, it will be technically feasible to rewrite the vertebrate genome, and biotechnology can abolish suffering throughout the living world. "Post-Darwinian superminds", on this view, will be animated purely by gradients of genetically pre-programmed well-being.

    Transhumanists hold differing opinions about the time scale of future changes, as well as how radical those changes might be. The singularians, persons who predict the occurrence of a singularity [see "What is the singularity?"], represent one end of the spectrum, while other transhumanists make predictions based on a more gradual progress of evolution.

    Local transhumanist discussion groups have formed in major American cities and in European countries. Although transhumanism is cosmopolitan, these groups have each established a distinct character, possibly because of local memetic conditions.

    Leading transhumanist thinkers often defy labeling. Each represents a distinct flavor of transhumanism, holding complex and subtle views that are under constant revision and development.

    References:
    Links to individual home pages and transhumanist-related special interest groups: http://www.transhumanism.com/hotlinks.html

     

    Is transhumanism a cult/religion?

    Transhumanism is definitely not a cult; it does not fulfill any of the criteria for a cult as established by the Cult Awareness Network and other organizations. Transhumanism is not a religion either, although it serves some of the functions for which people have traditionally relied on religion. Transhumanism offers a sense of direction and purpose, and a vision that humans can achieve something greater than our present condition. Unlike most religious believers, transhumanists seek to make their dreams come true in this world, by relying not on supernatural powers but on rational thinking and empiricism, through continued scientific, technological, economic and human development. Even things that used to be the exclusive thunder of the Churches---immortality, constant bliss, and a godlike intelligence---are now being discussed by transhumanists as hypothetical engineering achievements!

    Transhumanism is a naturalistic philosophy. At the moment, there is no hard evidence for supernatural forces or irreducible spiritual phenomena, and transhumanists prefer to rely on rational methods, especially the scientific method, to understand and intervene in the world. Although science forms the basis of much transhumanist endeavor, transhumanists realize that the scientific method has its own fallibilities and imperfections.

    Religious fanaticism, superstition and intolerance are not acceptable among transhumanists. They think many biases can be overcome through a scientific and humanistic education, training in critical thinking, and interaction with people from different cultures.

    It's worth emphasizing that transhumanism is not a fixed set of dogmas. It's an evolving world-view. Or rather, it's a family of evolving world-views---for transhumanists tend to disagree with one another on many issues. The transhumanist philosophy, still in its formative stages, is meant to keep developing in the light of new experiences and new opportunities. Transhumanists want to find out where they are wrong and to change their views accordingly.

     

    Won’t things like uploading, cryonics and AI fail because they can’t preserve or create the soul?

    While the concept of a soul is not a very useful or coherent one for a naturalistic philosophy such as transhumanism, many transhumanists do take an interest in the related problems concerning personal identity and consciousness. These problems have been the subject of lively debate among contemporary analytic philosophers, and though some progress has been made (e.g. in Derek Parfit's work on personal identity) they have still not been resolved to general satisfaction. An easily accessible introduction to the mind-body problem is Churchland (1988).

    If one believes that there is a soul and that it enters the body at conception, then cryonics may be able to work, since human embryos have been successfully frozen, stored for extended periods and then implanted into their mothers, resulting in healthy and ordinary children (who presumably have souls). Uploading would in many ways be an empirical test of many views on the soul. If uploading turns out to work, certain views on the soul must be revised. The same holds for machine intelligence. (It is interesting to note that the Dalai Lama has not ruled out the possibility of reincarnating into computers.)

    References:
    Churchland, P. 1988. Matter and Consciousness. MIT Press, MA.

    Parfit, D. 1984. Reasons and Persons. Oxford Univ. Press, Oxford.

    Interview with the Dalai Lama: http://www.aleph.se/Trans/Global/Uploading/lama_upload.txt

     

    Is there transhumanist art?

    Yes. Emotions are an essential tool in sensing and understanding life. Transhumanist artists seek to intuitively grasp and interpret the transhuman condition and the world-picture that science reveals. In transhumanist art, the merging of human culture with science and technology is often characteristic of both content and medium. Transhumanist art expresses transhumanist values such as extending life, increasing vitality and creativity, exploring the world, pursuing limitless self-transformation and increased sensory experiences. Some transhumans use art as a way of living out their philosophy.

    Transhumanist art is created by transhumans from many disciplines. It encompasses the established arts of literature and music, the visual, electronic, robotic and performing arts, as well as modes of expression yet to be designed. Transhumanist art also includes works by scientists, engineers, technicians, philosophers, athletes, educators, and mathematicians, for example. Ideas and visions about evolution, transhumans, biotechnology, A-Life, extropy, and immortality have become part of the art world.

    Some of the subgenres include: extropic art, automorph art (an individualistic approach to extropic self-transformation encompassing mind and body---transhuman being as art), and exoterra art (a fusion of art and the universe).

    References:
    http://www.transhuman.org

    http://www.extropic-art.com

     

     

    TRANSHUMAN PRACTICALITIES

    What evidence is there that it will happen?

    Take a look around in today's world. Compare what you see now with what you would have seen only sixty years ago. It's not an especially bold conjecture that sixty years from now the state of technology and the way people conduct their lives will be pretty wondrous by our present standards. Even the conservative projection, assuming only that the world continues to develop gradually as it has done since the seventeenth century, implies that you can expect to see dramatic developments over the coming decades.

    This expectation is reinforced when one considers that many crucial areas seem poised for critical breakthroughs. The World Wide Web is beginning to link up the world's people, adding a new layer to human society, a layer where information is supreme. We are completing the mapping of the human genome and are developing the genetic engineering techniques to use this information to intervene in the adult human organism or to give our offspring desirable traits. The performance of computers doubles every eighteen months and will fairly soon be approaching human-level computational power. Pharmaceutical companies are refining drugs that allow us to regulate human mood and aspects of human personality with few side-effects. Many transhumanist aims can be pursued with present technologies. Can there be any doubt (barring a civilization-destroying cataclysm) that technological progress will give us much more radical options in the future?
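
    As a rough illustration of the kind of extrapolation involved (not an official transhumanist forecast), one can ask how long an eighteen-month doubling time would take to close the gap between a late-1990s desktop computer and a commonly quoted estimate of the raw processing power of the human brain. Both figures in the sketch below are illustrative assumptions only; the target is roughly the order of magnitude that Moravec has suggested.

        # A minimal back-of-the-envelope sketch, in Python, of the doubling
        # argument above. The 18-month doubling time comes from the text;
        # the starting capacity and the "human-level" target are assumptions
        # made only for illustration.
        import math

        def years_until(target_ops, current_ops, doubling_time_years=1.5):
            # Years needed for capacity to grow from current_ops to
            # target_ops, assuming one doubling every doubling_time_years.
            doublings = math.log2(target_ops / current_ops)
            return doublings * doubling_time_years

        current = 1e9   # assumed operations/second of a 1999 desktop machine
        target = 1e14   # assumed "human-level" figure (order of magnitude only)
        print("Roughly %.0f years at one doubling per 18 months"
              % years_until(target, current))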

    Molecular manufacturing has the potential to totally transform the human condition. Is it a feasible technology? Eric Drexler and others have shown in detail how nanotechnology is consistent with the laws of chemistry and have outlined several routes by which it could be developed [see "What is nanotechnology?"]. Nanotechnological development may seem far-fetched, maybe because the consequences seem too overwhelming, but nanotechnology specialists point out that there currently exists no published critique of Drexler's technical arguments. Nobody has been able to point to any error in his calculations. Meanwhile, investment in the field (already billions of dollars) is growing rapidly, and at least the less visionary aspects of molecular manufacturing are already mainstream.

    There are many independent methods and technologies that can enable humans to become posthuman. There is uncertainty about which technology will be perfected first, and we have a choice about which methods to use. But provided civilization continues to flourish, then it seems inevitable that we will have the option of becoming posthumans. And, unless suppressed and prevented by force, many will choose to explore that option.

    References:
    Drexler, E. 1992. Nanosystems, John Wiley & Sons, Inc., NY.

     

    Won't these transhumanist developments take thousands or millions of years?

    It is often very hard to predict how long a certain technological development will take. The moon landing happened sooner than most people had expected, but fusion energy is still eluding us after half a century of anticipation. The reason why it is difficult to make accurate estimates of the timeframe lies partly in the possibility of unexpected technical obstacles, and partly in the fact that the rate of progress depends on levels of funding, which in turn depend on hard-to-predict economic and political factors. Therefore, while one can in many cases give good grounds for thinking that a technology will be developed sooner or later, one can usually only make informed guesses about how long it will take.

    The vast majority of transhumanists think that superintelligence and nanotechnology will both happen in less than a hundred years, and many predict that they will happen well within the first third of the next century. [The reasons are outlined in the sections about these two technologies, respectively.] Once there is both nanotechnology and superintelligence, a wide range of special applications will swiftly follow.

    It would be possible to give a long list of examples where people in the past have solemnly declared that something was technologically absolutely impossible:

    "The secrets of flight will not be mastered within our lifetime---not within a thousand years." (Wilbur Wright, 1901)

    or socially irrelevant:

    "There is no reason why anyone would want a computer in their home." (Ken Olsen: President, Chairman and founder of Digital, 1977)

    ---only to see it happen a few years later. However, one could give an equally long list where people have predicted that some breakthrough would occur and it didn’t. The question cannot be settled by enumerating historical parallels.

    A better strategy is to look directly at what a careful analysis of the underlying physical constraints and design-problems might reveal. In the case of the most crucial future technologies---superintelligence and nanotechnology---such an analysis has been carried out and many experts think that these will likely be achieved within the first few decades of the next century. Other experts think it will take much longer.

    Another way of forming an opinion of where we are heading is to look at past trends. At least since the late nineteenth century, science and technology (as measured by a wide range of indicators) have doubled about every 15 years. Extrapolating this exponential progress, one is led to expect to see even more dramatic changes in the near future. It would require a total break of current trends, an unexpected deceleration, in order for the changes that transhumanists foresee not to happen within the next century.
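
    To make the arithmetic of that extrapolation explicit (with the 15-year doubling time taken from the paragraph above and the time horizons chosen purely for illustration), the implied growth factor over a horizon of t years is simply 2 raised to the power t/15:

        # A hedged illustration of the extrapolation above: if an indicator of
        # scientific and technological output doubles every 15 years, the
        # growth over t years is 2 ** (t / 15). Horizons are illustrative.
        for years in (30, 60, 100):
            factor = 2 ** (years / 15)
            print("%3d years -> roughly %.0f-fold increase" % (years, factor))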

    References:
    Erroneous Predictions: http://www.foresight.org/News/negativeComments.html

    Drexler, E. 1992. Nanosystems, John Wiley & Sons, Inc., NY.

    Moravec, H. 1998. Robot: Mere Machine to Transcendent Mind. Oxford Univ. Press.

    Kurzweil, R. 1999. The Age of Spiritual Machines. Viking Press.

     

    What if it doesn’t work?

    Then, presumably, we would find ourselves back at the status quo, but also enriched by many new insights and techniques discovered in the attempt. But the issue is not so much whether it will work as what will work and when. With many potentially transforming technologies already available and others indisputably in the pipeline, it’s clear that there will be large scope for human self-augmentation. The more superlative transhuman technologies, such as nanotechnology and superintelligence, can be reached through several independent paths. If one path turns out to be blocked, another can be tried, adding to the likelihood of success.

    If, for some unforeseen reason, scientists cannot develop molecular nanotechnology and superhuman artificial intelligence, along with the technologies these would support---uploading, cryonics, indefinite life extension---transhumanists would consider that immensely tragic. You would then perhaps never see a world free from suffering, disease and death; never experience the higher reaches of intellectual creativity and understanding that are accessible to better information processors than the human nervous system; never have the opportunity to try out the emotions and deep states of consciousness that your unaugmented brain cannot reach or sustain; you would never know what level of personal development you could attain by living with youthful vigor for 120, or 400, or even 50,000 years. Sure, humans would find some solace in all the useful tools that would no doubt be discovered on the journey---new tools for genetic engineering, mood-drugs, information technology, faster computers, new useful chemicals, new medical treatments, organ transplantation techniques, better computer memories---but transhumanists have bigger ambitions.

     

    How can I use transhumanism in my own life?

    Transhumanism is a practical philosophy that can be very down-to-earth. There are consequently innumerable ways in which you can apply it to your own life, ranging from the use of diet and exercise to improve health and life-expectancy; to signing up for cryonic suspension; making money from investing in technology stocks; using clinical drugs to adjust parameters of your mood and personality, or nootropics to improve cognitive performance; applying various cognitive or psychological self-improvement techniques (study techniques, NLP, time management, mnemonics, meditation, critical thinking); learning to take advantage of new information technologies; taking nutritional supplements (vitamins, minerals, essential fatty acids, hormones) to decrease your risk of heart disease and cancer and hopefully slow aging; creating transhumanist art; and in general taking steps to live a richer and more responsible life. An empowering mind-set that is common among transhumanists is dynamic optimism (also called practical optimism): the attitude that desirable results can in general be accomplished, but only through hard effort and smart choices (More 1998).

    You may also want to get involved in transhumanist-related research or organization building [see "How can I become involved in transhumanism?"].

    References:
    More, M. 1998. Dynamic Optimism. http://www.maxmore.com/optimism.htm

     

    How could I become a posthuman?

    At present, there is no way for any human to become a posthuman; this is a primary reason for the strong interest in life extension and cryonics among transhumanists. Those of us who live long enough to see the results of long-term technological development may get the chance to become posthumans.

    Each of us has the potential to achieve transhuman status within our lifetime, however, and that in itself represents an exciting milestone in human evolution. We live in an era in which (at least in the democratic world) we are free to adopt an outlook that is shaped less by national boundaries, family loyalties, and political partisanship. In this era, human minds have been expanded and rewired by extended educations, multiple careers, increasing global networks of personal relationships, and computer communications. Human bodies are augmented through improved childhood nutrition, implants, replacement body parts, and life extension programs. We have combined our physical bodies and our minds with biological science and technology to break the barriers that kept our ancestors from living boundless lives.

     

    Isn’t the possibility of success in cryonics too small?

    Cryonics, the freezing of people who are legally "dead", can be regarded as an experimental medical procedure. It is in the nature of cryonics that it cannot presently be subjected to clinical trials to determine its effectiveness. What we know is that it is possible to stabilize a patient's condition by cooling them in liquid nitrogen (-196 °C). A considerable amount of cell damage is caused by the freezing process, but once frozen the patient can be stored for millennia with virtually no additional tissue degradation. The hypothesis on which cryonics rests is that at some point in the future the technology will be developed that will enable us to revive the cryonics patient, reversing both the freezing damage and the original cause of deanimation.

    In order to show that cryonics will not work, it is necessary to show that no future technology, no matter how advanced, will ever be able to revive the suspended patient. When we consider what is routine today and how it might have been viewed in (say) the 1700's, we can begin to see how difficult it is to make a well-founded argument that future medical technology will never be able to reverse the injuries that occur during cryonic suspension.

    Seen in this light, signing up for cryonics (which is usually done by making the cryonics firm the beneficiary of your life insurance) can look like a reasonable insurance policy. If it doesn't work you would be dead anyway; if it works it can save your life. (And your saved life would then likely be a very long and healthy one, given how advanced medicine must be to reanimate you.)

    Experts in molecular nanotechnology generally believe that in its mature stage, nanotechnology will enable the reanimation of cryonics patients. Thus, it is possible that the suspended patients can be revived in as little as a few decades from now. The uncertainty about the ultimate technical feasibility of reanimation may very well be smaller than other uncertainty factors, such as the possibility that you deanimate in the wrong kind of way (lost at sea; information content in your brain erased by Alzheimer's disease), that your cryonics company goes bust, that civilization collapses, or that the people in the future won't be interested in reviving you. So a cryonics contract is of course far from a 100% guarantee of survival. As the cryonicists say, being cryonically suspended is the second worst thing that can happen to you.
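
    The way these uncertainty factors combine can be made concrete with a toy calculation. The probabilities below are entirely made up for the sake of illustration---they are not estimates endorsed by any cryonics organization---but they show how a chain of conditions that must each hold multiplies into a modest, though not negligible, overall chance:

        # A toy Python sketch of chained uncertainty, with hypothetical numbers.
        # Each condition must hold for revival to happen; if the conditions are
        # roughly independent, the overall estimate is just their product.
        factors = {
            "suspended under good conditions": 0.8,
            "reanimation technically feasible": 0.5,
            "cryonics organization endures": 0.6,
            "civilization endures": 0.8,
            "someone chooses to revive you": 0.7,
        }
        overall = 1.0
        for condition, p in factors.items():
            overall *= p
        print("Overall chance under these made-up numbers: %.0f%%" % (overall * 100))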

    By no means all transhumanists are signed up for cryonics, but a significant fraction find that for them the cost-benefit analysis justifies the expense.

    References:
    Merkle, R. 1994. "The Molecular Repair of the Brain". Cryonics magazine, Vol. 15 No's 1 & 2. http://www.merkle.com/cryo/techFeas.html

    Cryonics FAQ: http://www.cs.cmu.edu/afs/cs/user/tsf/Public-Mail/cryonics/html/0018.1.html#index

    Alcor: http://www.alcor.org/

     

    Won't it be boring to live forever in the perfect world?

    Transhumanism does not promise a perfect future world. If all goes well, however, the future will be one in which each individual has limitless freedom to develop their potential and to express their creativity and live out their dreams.

    In comparison to other transhumanist projects, the abolition of boredom will probably prove quite easy. Indeed, we already have reliable (though potentially toxic) means to dispel boredom for limited periods, e.g. by taking psychostimulants such as amphetamines. Present clinical mood-drugs can enhance interest and enthusiasm for life in some people, including those who are not suffering from depression. (These examples may be misleading, however. They conjure up just the crudest foretaste of what's in store.) Only by compartmentalizing our thinking to a high degree can we imagine a world where there is mature molecular nanotechnology and superhuman artificial intelligence, yet the means are still lacking to control the brain circuitry of boredom.

    It may be useful to retain some functional analog of boredom, since boredom can serve to prevent us from wasting too much time on monotonous and meaningless activities. Perhaps an affective gradient is needed to motivate us to strive for improvements. Thus, some chores in tomorrow's world may come to be avoided because they are perceived as merely intriguing rather than exhilarating.

    Ed Regis (1990, p. 97) suggests the following points also be considered:

    1. Ordinary life is sometimes boring. So what?

    2. Eternal life will be as boring or as exciting as you make it.

    3. Is being dead more exciting?

    4. If eternal life becomes boring, you will have the option of ending it at any time.

    References:
    Pearce, D. 1998. The Hedonistic Imperative. http://www.hedweb.com

    Regis, E. 1990. Great Mambo Chicken and the Transhuman Condition. Penguin Books.

     

    How can I become involved in transhumanism?

    There are a growing number of organizations that have been formed to explore and develop transhumanist technologies and contemplate the issues on the road to posthumanity.

    The World Transhumanist Association was formed in 1998 as an umbrella organization to publicize transhumanist ideas and to seek academic acceptance of transhumanism as a philosophical and cultural movement.

    Local transhumanist organizations exist in several European countries and American cities. In America there is also the Extropy Institute which promotes extropian transhumanism. The Foresight Institute and the Institute for Molecular Manufacturing work toward the development and understanding of nanotechnology and its peaceful application to human affairs. Alcor is one non-profit organization offering cryonic suspension to its customers. The Life Extension Foundation, another non-profit organization, provides information about nutritional supplements and offers a range of products.

    All these organizations offer opportunities to learn more about transhumanism and the various technologies and ideas that transhumanists seek to apply to human life. They organize conferences and meetings and sponsor various electronic fora where you can network with other people interested in furthering the transhumanist agenda. Their members are constantly exploring new business ideas, and opportunities for employment in work that is actively and explicitly aimed at developing a transhuman future are beginning to appear. There are also innumerable companies, university departments and other institutions that do work directly relevant to transhumanism.

    References:
    World Transhumanist Association. http://www.transhumanism.com (from which links to the other organizations can be found).
     

    CO-AUTHORS AND CONTRIBUTORS TO THIS DOCUMENT

    The nanotechnology section is based on an introduction by John Storrs Hall, which is in turn based on texts by Eric Drexler and Ralph Merkle. The cryonics section is based on the writings of Ralph Merkle, from which many phrases are directly borrowed. The definition of transhumanism used in this FAQ is based on contributions by many people, especially Kathryn Aegis and Max More. The answers to the questions about the soul, about the inhuman/inhumane distinction, and about historical antecedents are mostly Anders’, and several other answers are partly based on input from him and other members of the Swedish transhumanist organization Aleph. Kathryn wrote the answers to the question of what society posthumans would live in and to "How could I become a posthuman?", and most of the answer to "What is a transhuman?". The answer to the question about transhumanist art is based on a reply by Natasha Vita More. Greg Burch contributed editorial comments at an early stage, and David Pearce and especially Kathryn Aegis and Anders Sandberg offered very extensive editorial comments at a later stage. Ideas, criticisms, questions, phrases and sentences have been contributed by (in no particular order):

    Henri Kluytmans, John S. Novak III, Allen Smith, Thom Quinn, Harmony Baldwin, J. R. Molloy, Greg Burch, Max More, Harvey Newstrom, Brent Allsop, John K Clark, Randy Smith, Daniel Faublich, Scott Badger, mark@unicorn.com, Anders Sandberg, Dan Clemmensen, Kathryn Aegis, Shakehip@aol.com, Natasha Vita More, Michael Nielsen, Geoff Smith, Eugene Leitl, William John, pilgrim@cyberdude.com, Joe Jenkins, Damien Broderick, David Pearce, Michael Lorrey, Bryan Moss, Derek Strong, Wesley R. Schwein, Peter C. McCluskey, Tony Hollick, zebo@pro-ns.net, Michelle Jones, Dennis Stevens, Damon Davis, Jeff Dee, Andrew Hennessey, Doug Bailey, Brian Atkins, Erik Moeller, Alex (intech@intsar.com), David Cary, EvMick@aol.com, Arjen Kamphius, Remi Sussan, Dalibor van den Otter, Robin Hanson, Eliezer Yudkowsky, Michael Wiik, Dylan Evans, Jean-Michell Delhotel.

    I would like to thank you all for helping to create this FAQ and for making transhumanism possible.
    Nick Bostrom