
March 16, 2005

James Surowiecki on the Unwisdom of Crowds

I'm still at the O'Reilly Emerging Technologies conference in San Diego. James Surowiecki's talk today, Independent Individuals and Wise Crowds, or Is It Possible to Be Too Connected?, was the highlight of the day, in my opinion. I took near-verbatim notes, published below. Note: by near-verbatim, I mean that I captured almost every thought Surowiecki expressed, using his language for the most part, with a few interpolations and omissions that don't affect the meaning. 

Surowiecki:

This talk won’t end in a coherent answer. But I wanted to raise certain problems or issues around questions of collective intelligence, collaboration, collective action, a whole host of themes that you’ve been hearing about here.

One of the major themes of the last decade: a lot of interest in various ideas about collective action and collaboration. Slashdot, Google, prediction markets, flash mobs, wikis, Linux, and del.icio.us are all examples of projects that bring together large groups of people to work together, explicitly or implicitly. Google, for example, harvests group intelligence about the Web. All these things have something in common in the way we experience them. In the same vein, there’s been a lot of writing about network effects, emergent behavior (Steven Johnson), smart mobs (Howard Rheingold), and the wisdom of crowds (the title of my book). There’s an affinity between all these ideas. But I want to talk about the differences between these things – it’s useful to think about how they are not alike. Ontology may be overrated, but classification still has uses, in particular because different problems require different solutions. Josh Schachter talked about this. Not all forms of collective action are created equal. If you use the wrong kind of collective action, you can end up with worse problems than the ones you set out to solve. There’s been a lot of fuzzy thinking about what we mean when we talk about collective intelligence, networks, and interaction. I want to parse these distinctions.

In The Wisdom of Crowds, I wrote about the power of groups, under certain circumstances, to be remarkably intelligent. A model of collective intelligence: a large group of people reflecting diverse opinions, offering judgments independently, with some mechanism to aggregate those judgments, collectively ends up with an intelligent outcome. The book opens with the example of Francis Galton’s observations of a contest to guess the weight of an ox after it had been slaughtered and dressed. The crowd contained a lot of experts, but many were merchants, family members. Galton collected the guesses and took the average. The group had guessed that the ox would weigh 1,197 pounds. In fact it weighed 1,198. The group’s judgment was essentially perfect. The argument in the book is that this is not coincidence, nor is it confined to livestock. It can be seen at work in far more complex phenomena.
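To make the aggregation mechanism concrete, here is a minimal sketch (not Galton’s actual data – the crowd size and the noise level are invented): each guesser is individually unreliable, but because the errors are diverse and roughly independent, the simple average lands far closer to the true weight than a typical individual does.

import random

rng = random.Random(1)
true_weight = 1198   # pounds; the dressed weight of the ox in Galton's story

# A hypothetical crowd: each guess is individually unreliable, but the errors
# are diverse and roughly independent of one another.
guesses = [true_weight + rng.gauss(0, 75) for _ in range(800)]

crowd_estimate = sum(guesses) / len(guesses)
typical_individual_error = sum(abs(g - true_weight) for g in guesses) / len(guesses)

print("crowd estimate:", round(crowd_estimate))
print("crowd error:", round(abs(crowd_estimate - true_weight), 1))
print("typical individual error:", round(typical_individual_error, 1))

The only thing doing the work here is the independence of the errors; make the guessers copy one another and the averaging advantage disappears, which is exactly where the talk is headed.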

At the racetrack, the odds on horses predict almost perfectly how likely a horse is to win. Horses with 3:1 odds win about a quarter of the time. The favorite wins most often, the second favorite wins second most often, and so on. Odds are determined collectively, through everyone’s bets. These probabilistic judgments, it turns out, are excellent. Corporations have experimented with this model. Eli Lilly has an internal stock market to predict which drug candidates are most likely to make it through Phase III clinical trials. Their whole business is built on this question. It’s open to 100 “semi-experts,” and collectively they can recognize which candidates are viable and which are not, well in advance. 
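As a quick check on the arithmetic behind “odds predict how often horses win”: fractional odds of a:b imply, if you ignore the track’s take (a simplification), a win probability of b / (a + b), which is the sense in which a 3:1 horse should win about a quarter of the time.

def implied_win_probability(a, b=1):
    """Fractional odds of a:b pay a units of profit per b staked; ignoring
    the track's take, they imply a win probability of b / (a + b)."""
    return b / (a + b)

for a, b in [(3, 1), (1, 1), (9, 1)]:
    print(f"{a}:{b} odds -> win probability {implied_win_probability(a, b):.2f}")

In a pari-mutuel pool the odds are set by how the money is bet, so the implied probabilities are the crowd’s aggregated judgment, updated with every wager.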

The wisdom of crowds works well when there is a true answer, and as long as some choices are better than others. The key is that people are mostly working on their private information, which may not be good, may be fragmented, but it is diverse. Collective wisdom does not emerge out of consensus. The goal is not to get everyone to agree – it’s to tap into people who disagree, into the diverse information everybody has. It works best when people are not paying too much attention to what everyone else is doing. They have some sense – like feedback in the form of odds at the racetrack – but there isn’t a lot of personal interaction.

Contrast this to Linux. It’s a large group working on problems, but ultimately one individual writes the piece of code that gets incorporated. The decision-making process is really centralized: in the end a few people, or even just one person, decide what goes into the kernel. This is a different kind of collaboration from collective judgment.

Contrast this to the anthill as a metaphor for human behavior (Steven Johnson uses this example in Emergence). Ants don’t know anything. Remember the scene from Antz (or maybe it was A Bug’s Life): a leaf falls in the middle of the long line of ants and an ant panics, doesn’t know what to do. Though no individual ant knows much, their interactions produce quite stunningly intelligent results. Ants are remarkably good at finding food with the least amount of energy. The way they do this is by following very simple rules, similar to the way birds flock, and by paying enormous attention to those around them. The interaction is the essence of the intelligence. 
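Those simple rules amount to laying and following chemical trails. Here is a cartoon of how that produces efficient foraging – two fixed trails and made-up deposit and evaporation rates, nothing like real ant chemistry: more pheromone attracts more ants, and more ants lay down more pheromone, so the shorter trail wins because trips on it pay off faster. No individual ant ever compares the two routes.

import random

rng = random.Random(3)

# Two trails from nest to food; pheromone evaporates a little each tick, and
# each ant deposits less per tick on the longer trail because its trip takes longer.
lengths = {"short": 5, "long": 9}
pheromone = {"short": 1.0, "long": 1.0}
EVAPORATION = 0.02
ANTS_PER_TICK = 20

for tick in range(200):
    for trail in pheromone:
        pheromone[trail] *= 1 - EVAPORATION
    total = pheromone["short"] + pheromone["long"]
    for _ in range(ANTS_PER_TICK):
        # Each ant picks a trail with probability proportional to its pheromone.
        trail = "short" if rng.random() < pheromone["short"] / total else "long"
        pheromone[trail] += 1.0 / lengths[trail]

print({trail: round(level, 1) for trail, level in pheromone.items()})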

Human beings are not ants. We don’t have the biological programming or tools that ants have. The way ants find food has to do with their formic acid secretions: the more ants follow a trail, the stronger the signal, and the entire colony can find its way to the food source. We have no equivalent to this. For us, interaction is incredibly problematic, especially when it comes to group behavior. If there is too much interaction among human beings, groups end up being less intelligent than they would otherwise be. The more we talk to each other, the dumber it is possible for us to become. The book has quite a bit about small groups: put a bunch of smart people into a room and they emerge dumber than when they went in.

Why can interaction have such negative consequences? First, human beings herd. They tend to stick with what others are saying. “It is better to fail conventionally than to succeed unconventionally.” – Keynes. Humans like the comfort of the crowd. Mutual fund managers herd, even though their whole business is predicated on doing better than those around them. It’s a way to appear reasonable: if you want to look like you have a pretty good idea of what you’re doing, do what those around you are doing. 

Second, humans imitate. We are imitation machines. The example I use in the book: social scientists put a few guys on a street corner and had them look up at the sky. Pretty soon about a third of the people passing by stopped and started looking up at the sky too. When the scientists had five or six guys looking at the sky, 60 percent of the passersby looked. When it was 10 or 12 people, 85 percent looked. People do this because we assume that if a lot of people are doing something or think it’s valuable, it very likely is valuable. That’s a tremendously valuable assumption. The problem is that when human beings imitate slavishly, without thinking about what they’re doing, you get a bunch of people standing on a street corner looking up at an empty sky. 

Scientists call this an information cascade. You’ve read The Tipping Point. It’s the notion that once an information cascade gets going, it becomes very hard for people making decisions later in the process not to do what everyone else has done. Say you have two restaurants, both empty, and there’s no reason to think that one is better than the other. You go to the street corner, look in, and decide you’ll go to this one. The next couple comes along and has the same problem. They see you’re in one restaurant, and they say, we’ll go there. Pretty soon everyone assumes there is some value to the fact that everybody is in one restaurant, even though there wasn’t. It can be proved mathematically that after a certain point it becomes rational to do what everyone else is doing, even if you have information that suggests the opposite is true – as long as you assume that everyone else is rational. That’s what The Tipping Point is all about: people no longer making decisions on their own, but simply doing what those in front of them have done. Quality has little to do with what ends up getting chosen. Collective decisions may not be in any sense tied to quality. The result: the group as a whole becomes less intelligent. On the web, the key factor in a site getting more links is how many links it already has. In that model there is no guarantee that the group as a whole is intelligent. The wisdom of crowds does not emerge.
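The mathematical result he is alluding to comes from the sequential-choice cascade models usually credited to Bikhchandani, Hirshleifer, and Welch. Here is a toy version of that setup, with my own simplifications (a 60-percent-accurate private signal and a “follow the visible majority once it leads by two” rule): after the first few choosers happen to agree, everyone downstream rationally ignores their own signal, and a run can lock onto the worse option whenever the first few signals happen to be wrong.

import random

rng = random.Random(7)

def cascade(n_people=20, signal_accuracy=0.6, truth="A"):
    """Sequential choosers in the spirit of the standard cascade model: each
    person gets a noisy private signal, sees all earlier choices, follows the
    visible majority once it leads by two, and otherwise follows the signal."""
    other = "B" if truth == "A" else "A"
    choices = []
    for _ in range(n_people):
        signal = truth if rng.random() < signal_accuracy else other
        lead = choices.count("A") - choices.count("B")
        if lead >= 2:
            choice = "A"      # the crowd swamps the private signal
        elif lead <= -2:
            choice = "B"
        else:
            choice = signal   # early on, private information still matters
        choices.append(choice)
    return "".join(choices)

for _ in range(5):
    print(cascade())   # "A" is the better option, but early agreement decides the rest

Notice that the quality of the options never enters the rule once the cascade starts, which is exactly the sense in which popularity and quality come apart.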

Pascal said all the problems in the world arise from one simple fact: a man cannot stay in his room and think quietly by himself. That’s not what I think we should do. Interacting has enormous value, for a variety of reasons. You may have information that would be valuable to me. I may be able to provide a different spin on it. Our exchange may lead to a more diverse and intelligent group forecast. Sometimes feedback is useful: your judgment may sound crazy, but perhaps I’ve overestimated the odds on my horse? Finally, some problems just need to be worked on collectively – in team sports, for example.

The question for all of us is: how can you have interaction without information cascades, without losing the independence that’s such a key factor in group intelligence? I’m not going to come to a final answer, but there are a few things worth thinking about. First: the best thing to do is to keep your ties loose. You’re better off, and the group is better off, if the ties are looser, because loose ties minimize the influence of those around you. I don’t think Duncan Watts’ model of the information cascade is quite true – I don’t think people are as subject to the influences around them as Duncan thinks – but we are clearly shaped by those influences. One way around that: limit the power of the influences.

Second, keep yourself exposed to as much information as possible. Injecting some level of randomness into the system is a good thing. Diversity is a good thing. In computer science experiments at the University of Michigan, the researcher Scott Page had his agents compete until they differentiated into three groups – Dumb, Intelligent, and Random – and then had them solve problems as groups. The Intelligent group outperforms the Dumb group, but not by very much. But the Random group almost always outperforms the Intelligent group. Page’s theory is that the reason for this is that even if the less intelligent groups know less, what they know is different.
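As a rough illustration of why the random group can hold its own, here is a toy reconstruction in the spirit of Page’s agents, not his actual experiment – the ring landscape, the step-size heuristics, and the team sizes are all invented. A team assembled from the individually best hill-climbers tends to share the same blind spots, while a randomly drawn team brings different step sizes and can escape more of the local optima; which team wins depends on the landscape, which is the point about diversity versus ability.

import random
from itertools import permutations

rng = random.Random(42)

N = 200
values = [rng.random() for _ in range(N)]   # a rugged "ring" landscape

# An agent's heuristic is an ordered tuple of step sizes it tries from the
# current position; different tuples get stuck at different local optima.
pool = list(permutations(range(1, 13), 3))
rng.shuffle(pool)
pool = pool[:60]

def search(team, start):
    """Agents take turns improving a shared current point until no one can."""
    best = start
    improved = True
    while improved:
        improved = False
        for steps in team:
            for s in steps:
                cand = (best + s) % N
                if values[cand] > values[best]:
                    best, improved = cand, True
    return values[best]

def avg_score(team):
    return sum(search(team, s) for s in range(N)) / N

ranked = sorted(pool, key=lambda h: avg_score([h]), reverse=True)
best_team = ranked[:10]                  # the ten individually strongest agents
random_team = rng.sample(pool, 10)       # ten agents drawn at random

print("team of best agents:", round(avg_score(best_team), 4))
print("team of random agents:", round(avg_score(random_team), 4))

Because a team’s effective toolkit is the union of its members’ step sizes, what matters is how much new ground each added member covers, not how strong that member is on its own.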

This has important implications for the way decision making works inside organizations: make groups that range across hierarchies. The conclusion is that you actually can be too connected, if the connections are of the wrong kind and if they’re reinforcing your existing prejudices rather than altering them. You can pay too much attention to those around you, even if they’re really smart. The flip side of Pascal’s isolation is the cacophony you find on the net; it bombards you with many voices. Isolation and cacophony, interestingly, allow you to arrive at the same place: independence.

March 16, 2005 at 05:13 PM | Permalink

Comments

"It can be proved mathematically that after a certain point it becomes rational to do what everyone else is doing, even if you have information that suggests the opposite is true."

bullsh*t.

i've seen genealogical psychoanalysts, random pop psychotherapists, and other cult gurus 'prove' their theories in much the same manner.

this is just conflation, generalization, and proof by repetition. this guy is not worth half the attention he is getting.

Posted by: Denis de Bernardy | March 17, 2005 03:21 AM
