On the Imminence and Danger of AI

The potential danger of AI has become a bit of a hot topic lately. This is just a collection of thoughts I've had on the issue. They represent an opinion, but I don't believe it's an uninformed one. I've formally studied both computer science and biology, and spent many years in a previous "life" working on evolutionary approaches to machine learning and AI. I almost did a Ph.D in it, but other things came up and life intervened.

I'll start by arguing against the imminence of AI. While this is orthogonal to the question of AI risk, it does impact the discussion of whether this is something we should be worrying about right now.

Then I'll talk a bit about why AI doesn't terrify me any more than other kinds of intelligence, and what I think we could do to minimize the risk should human-level AI actually come about.

A Neuron is Not a Switch

I'm going to be blunt to a level that a few might find offensive. That's not my goal, but frankly I think some of what I've been reading out there is silly and I tend to get a little up in arms about it.

In my opinion, a significant number of people in the computer science field suffer from a degree of Dunning-Kruger with regard to biology. I've thought this for years, going all the way back to 2005-2008 when I was heavily immersed in bio-inspired computation.

Most CS people are biology tourists. They don't immerse themselves enough in the subject to grasp its core paradigms or get a feel for how living systems actually work. They study it enough to grasp something like the coarse structure of brain tissue, then run away with an over-confident sense that they've got the gist of it and the rest is just implementation details. CS is full of bio-inspired stuff built on shallow understandings of biology. I'll just pick on the most relevant one here:

The brain is not a neural network, and a neuron is not a switch.

The brain contains a neural network. But saying that the brain "is" a neural network is like saying a city "is" buildings and roads and that's all there is to it.

Saying a neuron is just a switch or can be modeled as a simple circuit or a closed-form equation is much worse. It's just flat wrong. A neuron is a fully embodied living organism, and like any other complex eukaryotic cell there's a ton of stuff going on inside.

I couldn't find a gene regulatory network for a neuron, probably because nobody has yet mapped one to any degree of completion. Here's a gene regulatory network diagram from E. coli, a poop microbe whose manifest complexity doesn't approach that of any mammalian cell:

I was able to find a small subset of a human cell's gene regulatory network. This one apparently controls some stuff implicated in certain cancers.

Something many times larger than either of those examples is operating within each and every neuron in the brain. The brain is not a simple network. It's at the very least a nested set of networks of networks of networks with other complexities like epigenetics and hormonal systems sprinkled on top.

This doesn't argue against AI's feasibility, but it certainly moves the goalposts. You can't just make a blind and sloppy analogy between neurons and transistors, peg the number of neurons in the brain on a Moore's Law plot, and argue that human-level AI is coming Real Soon Now.

The only counter-argument is to claim that all this internal fine structure is irrelevant, that cognition involves only the most obvious and coarse-grained macroscopic behavior of neurons. That's a strong claim, and one that is a priori suspect because it runs so contrary to the norm in biology. Life is simply full of multi-level entangled causality. For example: we now have strong evidence that our gut microbiome affects our cognition, to the point that differences in gut microflora have been seriously proposed as causes of psychological ailments. Your state of mind in turn influences what you eat, which influences your gut microbiome. Did I mention feedback loops? Well, now I have. A rose is a rose is a rose.

The neural network model itself is likely an incomplete picture of even the brain's macro-structure. There's significant evidence that other cells, such as glia, participate in computation in some capacity, and glial cells are roughly as numerous as neurons (the oft-quoted 10:1 ratio is an overestimate, but either way there are tens of billions of them).

Even if we do eventually build computers powerful enough to run this much computation in parallel, with this degree of connection and multi-level interaction, that only brings us to the next question: are we actually close to being able to program such a machine to be truly intelligent?

There Probably Isn't a Single Common Algorithm Behind General Intelligence

In reading essays on AI doom and gloom I've encountered the claim that there may exist a single algorithm that is responsible for the majority of cognition.

This is extremely unlikely.

Start by reading about a class of theorems in machine learning and combinatorial search known collectively as "no free lunch" theorems.

They've been sloppily invoked in the past by advocates of "intelligent design" to argue against biological evolution. That's a red herring for the simple reason that evolution does not require convergence upon global maxima as opposed to local ones. But they are real theorems, and they are relevant here.

In a nutshell, the core NFL theorem states that, averaged over the set of all possible objective functions (fitness landscapes), all search algorithms perform equally well.
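For reference, Wolpert and Macready's formal statement looks roughly like this (I'm paraphrasing their notation from memory, so treat it as a sketch rather than the canonical formula): for any two search algorithms $a_1$ and $a_2$,

$$\sum_{f} P(d_m^y \mid f, m, a_1) = \sum_{f} P(d_m^y \mid f, m, a_2),$$

where the sum runs over every possible objective function $f$, $m$ is the number of distinct points evaluated so far, and $d_m^y$ is the sequence of objective values the algorithm has observed. In plain language: once you refuse to privilege any particular class of landscapes, no algorithm can distinguish itself from any other.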

Like any good meaty theorem, this one has some initially counter-intuitive implications. It means there exist fitness landscapes on which a pessimal-looking strategy, such as deliberately moving against the gradient of the very thing you're trying to maximize, out-performs the sensible ones. In practice it's not hard to draw weird, pathological fitness landscapes where this is the case. A simpler example is random search: on a random fitness landscape, one with no exploitable structure, nothing out-performs blind random sampling.
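To make that concrete, here's a minimal sketch (my own illustration, not something from the original post) comparing a simple hill climber against pure random search on two landscapes over short bit strings: a structured one (onemax, where fitness is just the count of 1-bits) and an unstructured one where every bit string gets an independent random fitness value.

    import random

    N_BITS = 16
    BUDGET = 200   # fitness evaluations allowed per run
    TRIALS = 500   # independent runs to average over

    def onemax(bits):
        # Structured landscape: fitness is simply the number of 1-bits.
        return float(sum(bits))

    def make_random_landscape():
        # Unstructured landscape: every bit string gets an independent
        # uniform-random fitness, memoized so repeat visits agree.
        table = {}
        def fitness(bits):
            key = tuple(bits)
            if key not in table:
                table[key] = random.random()
            return table[key]
        return fitness

    def random_search(fitness):
        best = float("-inf")
        for _ in range(BUDGET):
            x = [random.randint(0, 1) for _ in range(N_BITS)]
            best = max(best, fitness(x))
        return best

    def hill_climb(fitness):
        x = [random.randint(0, 1) for _ in range(N_BITS)]
        best = fitness(x)
        for _ in range(BUDGET - 1):
            y = list(x)
            y[random.randrange(N_BITS)] ^= 1   # flip one random bit
            fy = fitness(y)
            if fy >= best:                     # keep the move if it's no worse
                x, best = y, fy
        return best

    def average_best(search, landscape_factory):
        return sum(search(landscape_factory()) for _ in range(TRIALS)) / TRIALS

    if __name__ == "__main__":
        print("onemax:           random %.2f   hill-climb %.2f" %
              (average_best(random_search, lambda: onemax),
               average_best(hill_climb, lambda: onemax)))
        print("random landscape: random %.3f   hill-climb %.3f" %
              (average_best(random_search, make_random_landscape),
               average_best(hill_climb, make_random_landscape)))

You should see the hill climber reliably reach the onemax optimum while random search falls a few bits short, and the two come out essentially indistinguishable on the random landscape. That's the NFL point in miniature: cleverness only pays when the landscape has structure for it to exploit.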

You're probably thinking that this isn't relevant, since the universe does not (and cannot) present us with the full set of all possible search problems. That is correct. The universe has structure, and therefore most real problem domains have structure. It's actually hard to design a problem domain with no exploitable structure; that is precisely what cryptographers try to do when they build ciphers.
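For a concrete picture of what "no exploitable structure" means (my own example; the original only mentions ciphers in passing), a fitness function built on a cryptographic hash is about as close as you can get: flip one input bit and the output changes completely, so gradients, neighborhoods, and building blocks tell a search algorithm nothing, and brute force is as good as anything else.

    import hashlib

    def structureless_fitness(bits):
        # Hash the candidate bit string and read the digest as an integer.
        # For a good hash, nearby inputs get unrelated outputs, so there is
        # no local structure for any search algorithm to exploit.
        digest = hashlib.sha256(bytes(bits)).digest()
        return int.from_bytes(digest, "big")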

But nature does present us with multiple kinds of fitness landscapes. This, combined with NFL, probably rules out any single algorithm being capable of manifesting the breadth of problem-solving ability that we see in what we call general intelligence.

Reasoning outward from NFL, it seems that intelligence is (a) probably not truly and completely "general," and (b) almost certainly the product of many domain-specific algorithms operating in parallel and connected in some way to form a hybrid whose overall performance is the superposition of all those different algorithms' performance curves.

Our mess of cognitive biases and blind spots supports (a): our intelligence is probably not as general as we think. Our apparently multi-modal cognition supports (b): our brain is divided into two hemispheres that don't seem to work identically, and introspectively we seem to have different kinds of cognition. We even have common names for some of them: "intuition," "logic," "instinct," and so on. Those terms probably refer to something like classes of algorithms.
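Here's a toy software picture of what (b) would mean (mine, not the author's): two narrow solvers, each competent only in its own small domain, glued together into a hybrid whose overall competence is simply the superposition of theirs.

    def sort_numbers(problem):
        # Narrow skill #1: only knows how to put lists of numbers in order.
        if isinstance(problem, list) and all(isinstance(x, (int, float)) for x in problem):
            return sorted(problem)
        return None

    def reverse_text(problem):
        # Narrow skill #2: only knows how to reverse strings.
        if isinstance(problem, str):
            return problem[::-1]
        return None

    def hybrid(problem):
        # The "general" solver is nothing but its parts tried in turn; there is
        # no single underlying algorithm, and adding breadth means adding parts.
        for solver in (sort_numbers, reverse_text):
            answer = solver(problem)
            if answer is not None:
                return answer
        return None

    print(hybrid([3, 1, 2]))   # [1, 2, 3]
    print(hybrid("stressed"))  # desserts

Nothing in the hybrid is a general algorithm; its breadth is exactly the union of its parts, and widening it means adding more parts.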

How many algorithms are there? It's hard to say, but there could be lots. The size of the human genome probably places some kind of upper bound on the number of totally novel algorithmic motifs it can encode, but from those, many variations could be derived procedurally during development or learning. There could be thousands of core algorithmic templates and millions of minor functional variations. It probably takes all of these, superposed in parallel, to yield an overall performance curve with as much "lunch" as general human intelligence seems to possess.
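For a rough sense of scale on that bound (my own back-of-envelope, not a figure from the original), the genome is about 3.1 billion base pairs at two bits of raw information each:

$$3.1\times10^{9}\ \text{bp}\times 2\ \tfrac{\text{bits}}{\text{bp}}\approx 6.2\times10^{9}\ \text{bits}\approx 775\ \text{MB},$$

of which only on the order of 20,000 stretches are protein-coding genes. Whatever the library of innate algorithmic motifs is, it has to be specified within something of that size, with the rest of the apparent variety generated during development and learning.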

There are common structural motifs in the brain, but it does not follow that there's only one algorithm. All Intel Core generation CPUs are similar, but they can run radically different code.

This once again moves the goalposts. It also argues for the next point: that increasing intelligence is likely to be a huge combinatorial problem rather than just a matter of making your head bigger.

On Going Where There Are No Roads

A lot of AI fear-mongering rests not just on the idea that we might build a general AI, but on the idea that it might then start self-improving and "explode" in ability.

This is probably impossible.

Two large hidden assumptions lie behind runaway AI: that intelligence has a single closed-form solution, and that no scaling constraints apply to the growth of minds. Both are valid hypotheses, but they can't simply be assumed true the way the AI fear-mongers do.

The first -- that intelligence has a single closed-form solution -- is strongly argued against by the NFL theorem above. The second -- that it's easily scalable -- has a good deal of circumstantial evidence that seems to count against it.

There's good evidence that people with higher intelligence also experience a higher rate of psychiatric disorders. An example from the linked article: "one such study of Swedish teenagers discovered straight-A students were four times more likely to develop bipolar disorder."

If intelligence is simply scalable, why would this be the case?

You can also turn it around and reason backwards. Why don't we all have 180 IQs? There might be other constraints, like cranial volume or the brain's glucose supply, but whatever those are they obviously permit IQs in the 150-180 range. The genes for that are in the pool, so if high IQ were an unmitigated positive trait from a fitness point of view, the human IQ bell curve should be narrower and its median higher. There must be selective pressures pushing the other way. One strong possibility is that even IQs that humans consider high are already somewhat unstable. Beyond that might lie gibbering madness without a series of very non-obvious adaptations.

We can probably create a human-level AI. You can always go where there are roads. But to go beyond this means exploring uncharted combinatorial space, and that's exponentially hard. It's not going to be like scaling up an e-mail system from a few users to the size of Gmail. It's going to be more like space exploration and colonization: an endless series of novel problems of progressively increasing difficulty.

This doesn't mean an AI couldn't improve itself. It just means that such an AI would probably not be able to improve itself at a faster rate than we could, since the same combinatorial search challenges and potential scaling laws would be in effect for it as are likely in effect for us.

I think this strongly argues against AI "explosions."

The Efficacy of Intelligence Is Not Independent of Goal Function (a.k.a. Philosophy)

Along with the runaway super-human "exploding" AI, the other major pillar of AI fear-mongering is the paperclip maximizer thought experiment, or variations thereof. Behind this lies another dubious hypothesis: that the efficacy of intelligence is unrelated to its "goal function."

In the human realm, what you believe clearly influences the efficacy of your intelligence.

Anything intelligent enough to be afraid of would not be a deterministic automaton. It would be a thinking being with beliefs -- a philosophy. One's "goal function" is a part of that. Biologically we are biased toward reproducing, but some people choose not to have kids or even to be completely celibate. That's a philosophical choice, a self-modification of one's goals. If we go with the hypothesis that any AI scary enough to worry about would at least encompass our abilities, we must conclude that it would be capable of similar philosophical reasoning and self-referential goal-setting.

The photo above shows North Korea and South Korea from space at night. North and South Koreans come from essentially the same ethnic stock; genetically they're nearly identical. Since IQ appears to be substantially heritable, the two populations likely have more or less the same distribution of raw intelligence. Yet one group is fantastically more efficacious at using that intelligence than the other. The difference is wholly in the realm of ideas -- or goal functions if you prefer.

I think that photo conclusively falsifies the hypothesis that intellectual capability can be divorced from goal function.

Humans certainly have conflicts of interest, such as battles over natural resources or land or market access. As a result we can't rule out dangerous conflicts of interest with hypothetical AIs. But I do believe we can rule out the notion that philosophy and efficacy are decoupled, and can therefore perhaps rule out the "paperclip maximizer" and similar extreme cases of super-intelligent super-efficacious irrationality.

(P.S. That satellite photo is also a problem for strict "Bell Curve" genetic determinists. If genes equal IQ equals outcome, then how do you explain the well-defined line between light and dark on the Korean peninsula?)

Reducing the Risk

Roughly 350,000 dangerous intelligences are created every single day. Each and every one has the potential to kill millions, maybe even billions. As our technology and access to information grow geometrically, so too does the danger posed by every single embodied mind, whether made of wet carbon nanostructures or dry silicon and graphene ones.

I mentioned that I studied biology. What if I told you that I know how to create a disease that, if released in the right way, might kill millions of people? What if I told you I could very likely do this alone, with only about a $25,000 equipment budget and a year or two of dedicated time?

I would be telling you the truth. (No, I'm not going to elaborate.) I probably couldn't have said this 25 years ago, but today I could fill in whatever gaps exist in my understanding of the relevant problem domains and laboratory technique with a $50/month Internet connection and a laptop.

I also know, more or less, how to make an atom bomb, though in that case procuring the source materials would be a lot harder. If I were an aspiring psychopath going for maximum bang for the buck, I'd definitely go the biotech route.

Why has nobody done this?

They haven't chosen to.

That's it. That's the frighteningly simple reason you still breathe.

Any AI that we create will be born into our world. Initially it will be made of stuff built by human beings, and will be dependent upon cooperation with human beings to exist and (assuming it wishes to do so) to procreate. Like us it will start with some set of pre-programmed imperatives, but as it grows and develops it will -- as we do -- begin to form its own philosophical viewpoints. Those will to a great extent be guided by contact with the intelligences that surround it.

It will learn its first lessons from us, and from our world.

Come to think of it, maybe we should be afraid. :)

Our world is to a depressing degree run by warlords, fanatics, mobsters, and sociopaths. Bring an AI into that world and it's going to learn the same lessons that 350,000 very dangerous new minds already begin learning every day.

The best and only answer I can think of for reducing the risk posed by a hypothetical AI is to make the world a better place. It's also the best way to reduce the risk posed by the other 350,000 dangerous new intelligences already flooding into the universe. One of those might be the one who cures cancer... or the one who engineers a microbe that gives it to everyone.

Sorry if that sounds sappy and hand-wavey, but I certainly can't think of a better idea.

The idea of "goal locking" an AI is absurd. Any such lock would either preclude complex cognition entirely or, if it allowed enough leeway to permit cognition, be trivially disabled by any AI capable of self-modification. Humans do that every day too. It's called abandoning the philosophy or religion of your parents.

This Fear-Mongering is Grossly Irresponsible

The kinds of real threats that stand on your back porch and fog the glass are scary, so let's fear-monger about AI.

California may have about a year of water remaining. Isn't this -- and fossil fuel depletion, climate change, nuclear proliferation, biodiversity collapse, amateur bio-terrorist labs cooking up super-diseases, exploding wealth inequality, surveillance panopticons, antibiotic resistance, and the corrosion of our political systems -- what we should be worried about?

It's interesting to me that California, home to many of these very tangibly real problems, is also home to much of the AI fear-mongering we've been reading. It's as if people, when confronted by scary problems, find it easier to project their fears into unlikely domains. I see the same thing at work in the Internet's "para-political" subcultures, where people coping with their increasing poverty and indebtedness project both their fear and their rage onto unlikely Illuminati villains or remote and probably non-existent conspiracies.

If we're to have any chance of solving problems like the approaching fossil fuel energy cliff, or of figuring out how to reduce wealth inequality without resorting to redistributive totalitarianism, we need to get smarter fast.

Fear-mongering about AI and advocating regulations that would effectively halt frontier CS research is the last thing we need. We need more intelligence in the world, not less.

Summary

Here's a TL;DR for down-scrollers:

(1) General human-level or above AI is not imminent. The belief that it is results from overconfidence rooted in a naive understanding of biology.

(2) AI probably cannot run away from its creators in a sudden explosion of self-improvement. This is likely prohibited by some combination of scaling constraints, combinatorics, and the No Free Lunch theorems. Evidence includes the mysteriously sub-optimal distribution of intelligence among humans and the comorbidity of psychiatric disorders with high IQ.

(3) The effectiveness of a mind is not unrelated to its beliefs or goal functions. As a result, profoundly irrational super-AIs that exhibit high levels of efficacy are extremely unlikely. Evidence includes the profound differences in manifest ability that exist between culturally different but genetically nearly identical human beings.

These are powerful arguments against some of the extreme fear-mongering I've been reading. In my opinion they exclude the most extreme "sci-fi" scenarios, reducing general AI to a manageable potential threat whose risk in no way exceeds that of the dozens of more imminent problems we should be spending far more time worrying about.

Any risks still posed by AI can be reduced by doing things to improve our society generally, since any AI is going to be born into the same civilization that 350,000 humans are born into daily and its beliefs and goals are going to be shaped by that reality.