Journal of Artificial General Intelligence 4(3) 170-194, 2013
DOI: 10.2478/jagi-2013-0011
Submitted 2013-07-31
Accepted 2013-12-31
This work is licensed under the Creative Commons Attribution 3.0 License.
Is Brain Emulation Dangerous?
Peter Eckersley
Electronic Frontier Foundation
PDE@EFF.ORG
Anders Sandberg
ANDERS.SANDBERG@PHILOSOPHY.OX.AC.UK
Future of Humanity Institute,
Oxford University
Suite 1, Littlegate House 16/17 St. Ebbe’s
Street, OX1 1PT, Oxford, UK
Editor: Randal Koene, Diana Deca
Abstract
Brain emulation is a hypothetical but extremely transformative technology which has a non-zero
chance of appearing during the next century. This paper investigates whether such a technology
would also have any predictable characteristics that give it a chance of being catastrophically
dangerous, and whether there are any policy levers which might be used to make it safer.
We conclude that the riskiness of brain emulation probably depends on the order of the
preceding research trajectory. Broadly speaking, it appears safer for brain emulation to happen
sooner, because slower CPUs would make the technology's impact more gradual. It may also be
safer if brains are scanned before they are fully understood from a neuroscience perspective,
thereby increasing the initial population of emulations, although this prediction is weaker and
more scenario-dependent.
The risks posed by brain emulation also seem strongly connected to questions about the balance of
power between attackers and defenders in computer security contests. If economic property rights
in CPU cycles¹ are essentially enforceable, emulation appears to be comparatively safe; if CPU
cycles are ultimately easy to steal, the appearance of brain emulation is more likely to be a
destabilizing development for human geopolitics.
Furthermore, if the computers used to run emulations can be kept secure, then it appears that
making brain emulation technologies "open" would make them safer. If, however, computer
insecurity is deep and unavoidable, openness may actually be more dangerous. We point to some
arguments that suggest the former may be true, tentatively implying that it would be good policy
to work towards brain emulation using open scientific methodology and free/open source software
codebases.
Keywords: brain emulation, existential risk, software security, open source, geopolitics,
technological development
1. Throughout this article we refer to "CPU cycles", although in practice it might be that GPUs are the most
practical devices for brain emulation. For simplicity, we use the term "CPU" to refer to CPUs, GPUs, or
whatever other kind of digital computer is most relevant.
1. Introduction
The proposition that Artificial General Intelligence (AGI) might pose a catastrophic or existential
threat to life on earth sounds more like a plotline from science fiction than a serious object of
academic study. Be that as it may, actuarial risk assessment tells us that even if we assign
numerically small probabilities to such events, they could remain serious enough to deserve study
and mitigation. Counterintuitively, we should worry about AGI catastrophes even if we don't think
that they are the likely course of events. If AGI appears, it may well be an extremely positive
development. The study of risk scenarios should be thought of as an insurance policy against an
unlikely but serious adverse event.² A small but growing literature has started that project. See
for example (Yudkowsky 2008; Omohundro 2008; Muehlhauser and Salamon 2012; Bostrom
2014).
One objection to this line of reasoning is that it is too soon for us to begin. As of 2014, there
are no research projects that can credibly claim to be close to producing an AGI, so it would be
necessary to make meaningful predictions about a phenomenon whose details will be unknown
until the medium- to long-term future. This far out, it is extremely difficult to reason accurately
about the actions and motivations of AGIs. In particular, the diversity of possibilities is
astonishingly large. Differences in the design, implementation, education, early experiences and
social surroundings of conceivable AGIs create a space of possible intelligences and personalities
far larger than that of human intelligences and personalities, which are in part constrained by our
biology.
This profound difficulty in saying much about what AGIs would be like produces a
corresponding difficulty in evaluating any catastrophic or existential risks that their appearance
might induce, and if we cannot evaluate risks we have little hope of mitigating them sensibly.
This paper will avoid that vast degree of unpredictability by focusing on one possible subtype
of artificial general intelligence: human brains that have been emulated by computers. As a
possible future technology, emulations of human brains are slightly more predictable than
systems which are built from scratch, since they would at least at first be a combination of things
we know quite a lot about: human personalities running inside computers.
1.1 A simple taxonomy of Artificial General Intelligence
There are at least three different ways that artificial intelligence research might succeed in
building an Artificial General Intelligence (AGI) with capabilities for learning, problem-solving
and intellectual labor comparable to those of humans:
1. "Designed" AGI
2. Evolved AGI³
2. In some frames of analysis, mitigating existential risks may be the single most important public project in
human society (Bostrom 2013).
3. A hybrid of these categories might arise if researchers identified particular sub-circuits of the brain that were
fundamentally important for general intelligence, and emulated those specific circuits within a piece of software
whose overall architecture was not like that of our brains (Douglas and Martin 2004; Floreano and Mattiussi
2008). This kind of AGI might have some hybrid of the characteristics of 1, 2, and 3, depending on how large
the copied circuits were, how well they were understood, and what was built around them.
3. Whole Brain Emulation
The first two categories would constitute success by different strands of traditional artificial
intelligence research: either designing a system with sufficient cleverness, complexity and
flexibility that it demonstrates intelligence, or building a framework for some very abstract
algorithm (such as a neural network or an evolutionary program) to find the ingredients of
intelligence by trial, error and combination.
The third kind of success, Whole Brain Emulation, is relatively new as a serious research
objective. This project would involve taking an individual human's brain, scanning its entire
neural (and perhaps neurochemical) structure into a computer, and running an algorithm to
emulate that brain's behaviour, using either virtual reality systems coupled to an emulated body,
or robot bodies, for sensory input and output.
2. The Capabilities of Emulated Humans
It is not the intention of this article to discuss whether the emulation of human intelligence is
possible, feasible, or likely in any given timeframe. That matter is taken up at length by Sandberg
and Bostrom (2008); see also (Chalmers 2010; Sandberg 2013). It suffices to note here that there
is a significant probability that such emulations are possible, and that the implications may be
large enough to be of interest regardless of whether the probability that emulations occur in the
next century is 1% or 99%.
Certain capabilities of emulated human beings, and of the world they will exist in, can be
predicted with high probability. The main precondition for these predictions is that sufficient
computational resources are available to run a number of these emulations simultaneously. There
are many other capabilities which emulations might develop, but these are the least speculative.
2.1 Emulations can be copied
If human thought processes can be correctly emulated by computers, then their internal states can
be represented as digital data. Digital data can be copied. It follows that an emulated entity can at
any time be copied, and if a different set of computers begins emulating the copy, two
independent versions of the original entity can arise.
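As a minimal illustration of this property (our own sketch; the EmulationState class and its update rule are purely hypothetical stand-ins, not part of any proposed emulation architecture), copying a digital brain state and then running the two copies on different inputs yields two independently diverging entities:

    import copy

    class EmulationState:
        """Hypothetical container for an emulation's full neural state."""
        def __init__(self, neural_data):
            self.neural_data = neural_data  # in practice, tens of terabytes of data

    def step(state, sensory_input):
        """Stand-in for one tick of an emulation algorithm."""
        state.neural_data = hash((state.neural_data, sensory_input))
        return state

    original = EmulationState(neural_data=0)
    duplicate = copy.deepcopy(original)      # copying an emulation is copying bytes

    # Run on different inputs: the copies are now two independent entities.
    step(original, "experience of copy A")
    step(duplicate, "experience of copy B")
    print(original.neural_data != duplicate.neural_data)  # True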
From an economic perspective this property is very important: it allows human capital to be
multiplied easily, rather than relying on slow human reproduction followed by expensive (and
slow) education (Hanson 2008).
Copying also makes it possible to keep backups. Given sufficient resources, this property
might often make death a local phenomenon rather than a global one: a loss of some post-backup
memories and experiences, rather than the permanent destruction of the person.
Rapid copying at a distance may be subject to some practical bandwidth limitations: the size
of emulations is likely to be on the order of tens of terabytes or larger (Sandberg and Bostrom
2008), making online distribution slow (compared to computer speeds) unless remote, highly
compressed synchronisation algorithms⁴ turn out to be possible for these datasets, or networks
become much faster.
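To give a rough sense of scale (our own back-of-the-envelope numbers; the 10 TB state size and link speeds are illustrative assumptions, not figures from the cited sources), transferring a single emulation snapshot over present-day links would take hours rather than seconds:

    # Illustrative transfer times for a hypothetical 10 TB emulation snapshot.
    state_size_bits = 10e12 * 8  # 10 terabytes expressed in bits

    for name, bits_per_second in [("1 Gbit/s link", 1e9),
                                  ("100 Gbit/s link", 100e9)]:
        hours = state_size_bits / bits_per_second / 3600
        print(f"{name}: {hours:.1f} hours")  # ~22.2 and ~0.2 hours respectively

    # Delta synchronisation (rsync-style) helps only insofar as the new state
    # shares structure with a previously transferred one.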
2.2 Emulations can be erased by network attacks
As with copying, the digital nature of brain emulations makes it possible to modify or erase
them rapidly without changing the underlying hardware. Emulations can be instantly
deleted by whoever controls the hardware or the operating system, including the distant author of
a virus or other type of malware. It may be possible for resourceful emulations to build defences
against such attacks, such as offline backups managed by humans or air-gapped copies of
themselves, or it may be that attackers will typically have ways to defeat these protections.
Humans are of course similarly vulnerable to assassination, although it is rare for this threat
to exist with the same level of distance and anonymity that malware authors commonly attain. It
is also relevant that an emulation which erases another emulation may be able to take those CPU
cycles for itself. This may turn out to mean that emulations have more reasons to fear violence
than humans do.
2.3 Emulations will probably be fast
The task of emulating neurons in a brain is highly parallelizable. In simple terms, CPU A can be
busy emulating one region of a brain, while CPU B can be emulating regions in another portion
of the brain. As more CPUs are made available, the number of neurons each CPU is responsible
for decreases, thereby allowing the emulation to run faster.
We do not know what the limit of this "speedup" process is. It is possible that silicon CPUs
are incapable of emulating a brain in real time. It is more likely that faster-than-real-time brain
emulation is possible. The principal reason for thinking that is the characteristic timescale on
which neurons appear to communicate, which is on the order of 100 Hz or slower (Steriade et al.
1998), with conduction delays ranging from a few milliseconds up to a hundred milliseconds
(Swadlow 2012). Digital signals can travel very long distances during each of those cycles,
meaning that a very large number of CPU cores can be simultaneously brought to bear on a single
emulation task. Modern CPUs are also many orders of magnitude faster (with gigahertz rather
than hectohertz speeds), allowing the same system to be simulated at a higher rate than in
nature.
A mechanism for communicating between the CPUs A and B that were emulating
interdependent neurons A' and B', with a latency corresponding to a rate of k Hz, should in
principle be able to support a speedup of between k/200 and k/100 times, depending on the
complexity of the circuit.⁵ Existing digital systems can comfortably support signal propagation at
around one-tenth the speed of light, though there are some technologies that are faster (Chang
2003). It follows that the distance between CPU cores working to emulate the neighbouring
neurons A' and B' would need to be less than d = (3·10⁸ m/s × 0.1) / (200 Hz) = 1.5·10⁵ m.
This strongly suggests that the main
4. Remote synchronisation algorithms copy changes to a dataset over a network much more efficiently than
copying the entire dataset. A commonly used tool of this kind is the rsync program (Tridgell and Mackerras,
1996), but more efficient variants specialised for particular kinds of datasets are possible.
5. The k/200 case is where a full "cycle" is necessary to compute the consequences of a neural input; k/100
is where the computation is trivial, or all computations can be performed in parallel while waiting to know
which of them is valid.
bound on the speed of first-generation brain emulations would be the number of CPUs available
for the task, or the amount of money and electricity available to purchase and power them. The
bound might be high or low, relative to humans, but it would be proportional to the availability of
computing hardware; a later arrival of the technology would therefore imply faster CPUs and a
larger installed computer base.
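The figures above can be reproduced with a few lines of arithmetic (restating the paper's own numbers: a ≤100 Hz neural signalling rate and signal propagation at roughly 0.1c):

    # Reproducing the distance bound and speedup range from Section 2.3.
    c = 3e8                   # speed of light, m/s
    signal_speed = 0.1 * c    # ~0.1c in existing digital systems
    neural_rate = 100         # Hz, characteristic neural signalling rate

    # Worst case: half of each neural "cycle" is spent waiting for signals,
    # giving a factor of 2 * neural_rate in the denominator.
    max_core_separation = signal_speed / (2 * neural_rate)
    print(f"{max_core_separation:.1e} m")  # 1.5e+05 m, i.e. 150 km

    def speedup_range(k_hz):
        """Speedup over biological real time for inter-CPU communication at k_hz."""
        return k_hz / 200, k_hz / 100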
Dedicated hardware particularly suited for brain emulation might also provide speedups over
generic CPUs. Typically such specializations provide between one and two orders of magnitude
speedup (Sandberg and Bostrom 2008).
Some applications may require interacting with the physical world at its own speed; there are
tasks (social interaction, physical action) that cannot be accelerated beyond a certain point. Fast
computation also typically comes at an energy cost: if supercomputing remains energy-limited in
the future, the speed of emulations will depend on economic trade-offs. Despite these constraints,
it is probable that many emulations will find reasons and means to run much faster than human
minds.
2.4 Emulation autonomy would be fragile
Emulation autonomy can be threatened in all the same ways as human autonomy can be
threatened (threats of pain, social pressure, imprisonment, brainwashing, etc.), but there are new
possibilities that suggest that emulations' autonomy may be more vulnerable.
Suppose an agent Alice (who might be human, or an emulation) possesses a digital copy of
the full neural state of an emulation, whom we will call Aesop. Suppose further that Alice has
access to enough storage and computational resources to make further copies of the emulation
and run some of these copies.
Alice can instantiate Aesop. She can control the virtual reality environment in which Aesop
finds himself: his senses (or his attempts to use communications systems) can only tell him about
the world to the extent that Alice allows this. Furthermore, she could construct fake stories and
details of reality to misdirect him. If necessary, she could slow or freeze the rate at which he is
emulated, in order to determine off-line the most convincing virtual reality response to one of his
actions.
It seems that Alice can persuade Aesop to do almost anything. In particular, she can copy a
state, and then attempt to persuade him in way A. If he refuses, she can restore the old state, and
then attempt to use persuasive method B. There is no bound on the number of persuasive
techniques she might try. The instant that Alice has persuaded Aesop to perform a single task for
her, she can pause and make a copy of his mental state before she tells him the details of the task.
Thereafter, she can reinstantiate that state and hand Aesop a different problem to solve. The best
Aesop could do to defend himself against Alice's predations would be to constantly insist on
interacting with the physical Earth in complicated ways, hoping that Alice could not fake such
interactions. But he would be constantly vulnerable to trickery, constantly in danger of
performing tasks that served Alice's ends rather than his own.
Once Alice has done this, Aesop appears to be virtually enslaved to her. Aesop, or at least
this copy of him, no longer possesses autonomy.
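The copy-and-restore strategy described above amounts to a search over persuasive approaches in which failed attempts cost Alice nothing. A toy sketch of the loop (all classes and interfaces here are invented for illustration and imply nothing about real emulation software):

    import copy

    class Aesop:
        """Toy emulation with a susceptibility unknown to Alice in advance."""
        def __init__(self):
            self._weakness = "appeal_to_vanity"
            self.memory = []

        def consider(self, method):
            self.memory.append(method)       # only this copy remembers the attempt
            return method == self._weakness

    def coerce(pristine, methods):
        """Alice's loop: every attempt starts from a fresh copy; the first
        compliant copy is kept and can be re-instantiated for arbitrary tasks."""
        for method in methods:
            candidate = copy.deepcopy(pristine)   # restore before each attempt
            if candidate.consider(method):
                return candidate
        return None                               # Alice can always add more methods

    compliant = coerce(Aesop(), ["threats", "bribery", "appeal_to_vanity"])
    print(compliant.memory)  # ['appeal_to_vanity'] -- failed attempts left no trace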
An emulation that owns or has effective control over the hardware necessary for its own
existence would normally enjoy autonomy. But any time that the physical or software security of
those systems was compromised, the agent would face the risk that someone might make non-
autonomous, enslaveable copies of its mental state. Presuming that the emulation was able to