gwern comments on Now I Appreciate Agency - Less Wrong

Post author: ShannonFriedman, 29 October 2012 05:31PM

Comment author: gwern, 31 October 2012 08:49:49PM, 5 points

An even stronger criticism of AGI, in both its agent and tool forms, is that a general intelligence is unlikely to be developed for economic reasons: specialized AIs will always be more competitive.

Economic reasoning cuts many ways. Consider the trivial point known as Amdahl's law: a system's overall speedup is bounded by the slowest serial component, the part you cannot speed up. (I've pointed this out before, but less explicitly.)

Humans do not get any faster even as specialized AIs speed up arbitrarily. A human+specialized-AI system's performance therefore asymptotically approaches the limit where the specialized-AI part takes zero time and the human part takes 100% of the time. And the moment an AGI even slightly outperforms a human at using the specialized AI, the same economic reasons you were counting on as your salvation suddenly turn on you and drive the replacement of any humans in the loop.
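A minimal numeric sketch of that ceiling, using purely illustrative numbers (one fixed hour of human work alongside a hundred hours of specialized-AI work; neither figure is from the comment):

```python
# Amdahl-style ceiling: the human stage never speeds up, so the combined
# human+specialized-AI system can never take less time than the human stage alone.
human_time = 1.0    # hours the human part takes (fixed)
ai_time = 100.0     # hours the specialized-AI part takes today

for speedup in (1, 10, 100, 10_000, 1_000_000):
    total = human_time + ai_time / speedup
    overall = (human_time + ai_time) / total
    print(f"AI speedup {speedup:>9,}x -> total {total:8.3f} h, overall speedup {overall:6.1f}x")

# Even with an infinitely fast specialized AI, total time only approaches 1.0 h,
# so the overall speedup is capped at (1 + 100) / 1 = 101x by the human in the loop.
```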

Since humans are a known fixed quantity, if an AGI can be improved - even if at all times it is strictly inferior to a specialized AI at the latter's specialization - then eventually an AGI+specialized-AI system will outperform a human+specialized-AI system barring exotic unproven assumptions about asymptotic limits.
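A similarly hedged sketch of that crossover: if the AGI's overhead at operating the specialized AI can be improved at all while the human's stays fixed, the AGI-driven system eventually wins, even though the AGI never beats the specialized AI at its own specialization (the starting overheads and the 10% yearly improvement are arbitrary assumptions for illustration):

```python
# Crossover sketch: a fixed human operator vs. an improvable AGI operator,
# both driving the same specialized AI on the same task.
human_overhead = 1.0   # hours of human time per task - a known fixed quantity
agi_overhead = 5.0     # hours of AGI time per task today - initially much worse
ai_core_time = 0.1     # hours the specialized AI itself needs either way

years = 0
while agi_overhead >= human_overhead:
    agi_overhead *= 0.9    # assume the AGI's overhead falls 10% per year
    years += 1

print(f"AGI+specialized-AI overtakes human+specialized-AI after {years} years "
      f"({agi_overhead + ai_core_time:.2f} h vs {human_overhead + ai_core_time:.2f} h per task).")
```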

(What human is in the loop on high-frequency trading? Who was in the loop when Knight Capital's market maker was losing hundreds of millions of dollars? The answer is that no one was in the loop, because humans in the loop would not have been economically competitive. That's fine when it's 'just' hundreds of millions of dollars at stake and companies can decide to take the risk for themselves or not - but the stakes can change, and externalities can increase.)

Comment deleted 31 October 2012 09:15:50PM
Comment author: gwern, 31 October 2012 09:20:18PM, 1 point

Wow, way to miss the point and not respond to the argument - you know, the stuff that is not in parentheses.

(And anyway, how exactly am I supposed to give an example where AGI use is driven by economic pressures to surpass human performance, when AGI doesn't yet exist?)