
Coding from Scratch
A Conversation with Virtual Reality Pioneer Jaron Lanier, Part One

by Janice J. Heiss
January 23, 2003


Jaron Lanier is well known for his work on "virtual reality," a term he coined in the 1980s. Renowned as a composer, musician, and artist, he has taught at many university computer science departments around the country, including Yale, Dartmouth, Columbia, and Penn. He recently served as the lead scientist for the National Tele-Immersion Initiative, which is devoted, among other things, to using computers to enable people in different cities to experience the illusion that they are physically together.

Currently, he is working on something he calls phenotropic computing, in which the current model of software as "protocol adherence" is replaced by "pattern recognition" as a way of connecting components of software systems.

In this first of a two-part interview, we met with him to talk about his vision of the fundamental changes that are needed in software development.


"I think the whole way we write and think about software is wrong. If you look at how things work right now, it's strange -- nobody -- and I mean nobody -- can really create big programs in a reliable way."

- Virtual Reality Pioneer
Jaron Lanier


Question: What's wrong with the way we create software today?

Answer: I think the whole way we write and think about software is wrong. If you look at how things work right now, it's strange -- nobody -- and I mean nobody -- can really create big programs in a reliable way. If we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code, no matter how fast our processors become. [After publication of this interview, Jaron Lanier realized that his sentence should read: "bigger than about 20 to 30 million lines of code...".]

This current lack of scalability is a universal burden. There are monopolies in our industry because it's so difficult for anyone to even enter the competition; it's so hard to write large software applications. And that's strange to me. If you look at other things that people build, like oil refineries, or commercial aircraft, we can deal with complexity much more effectively than we can with software. The problem with software is that we've never learned how to control the side effects of choices, which we call bugs. We shouldn't be complacent about that. I still believe that there are ideas waiting to be created, and that someday we will have new ways of writing software that will overcome these problems. And that's my principal professional interest. I want to make a contribution to making bugs go away.


Jaron Lanier and Sun Microsystems
In February of 1998, Sun acquired the patent portfolio of VPL Research Inc., Jaron Lanier's former company that pioneered the development of virtual reality. At the time Lanier remarked, "I'm delighted that Sun has acquired VPL's assets. Sun's commitment to open systems and the Java paradigm will provide a superb context for the formulation of competitive strategies by both VR users and developers. The next generation of applications will have to deal with a level of complexity that other leading platforms cannot address. Virtual reality-based applications will be needed in order to manage giant databases and networks, advanced medical imaging, and fast turn-around mechanical design. And all of these mega-applications will have to support real time collaboration over the net. Sun is in an ideal position to enable this new level of productivity."

Question: Aren't bugs just a limitation of human minds?

Answer: No, no, they're not. What's the difference between a bug and a variation or an imperfection? If you think about it, if you make a small change to a program, it can result in an enormous change in what the program does. If nature worked that way, the universe would crash all the time. Certainly there wouldn't be any evolution or life. There's something about the way complexity builds up in nature so that if you have a small change, it results in sufficiently small results; it's possible to have incremental evolution. Right now, we have a little bit -- not total -- but a little bit of linearity in the connection between genotype and phenotype, if you want to speak in those terms. But in software, there's a chaotic relationship between the source code (the "genotype") and the observed effects of programs -- what you might call the "phenotype" of a program.

And that chaos is really what gets us. I don't know if I'll ever have a good idea about how to fix that. I'm working on some things, but you know, what most concerns me is what amounts to a lack of faith among programmers that the problem can even be addressed. There's been a sort of slumping into complacency over the last couple of decades. More and more, as new generations of programmers come up, there's an acceptance that this is the way things are and will always be. Perhaps that's true. Perhaps there's no avoiding it, but that's not a given. To me, this complacency about bugs is a dark cloud over all programming work.

Rethinking the History of Computing

Question: Maybe we need to go back and start all over again?

Answer: That's what I've been thinking lately. Tracing the history of programming, we can see places where it went wrong, based on the limited experiences and metaphors that were available at the time. It's possible to imagine a different history. Let's go back to the middle of the 20th century, to a very brilliant, first generation of serious hackers that included people like Alan Turing, John von Neumann, and Claude Shannon. Their primary source of coding experience involved coding information that could be sent over a wire. They were familiar with encoded messages on the telegraph and telephone. Everything was formulated in terms of a message being sent from point A to point B, with some advance knowledge at point B about the nature of the message. Or if not that, at least an attempt by point B to recreate that knowledge, in the case of hacking.

Of course, Turing was the first hacker who used a computer to break a secret code. There's a notion that you have some scheme for encoding information in time. So, you have some kind of transport layer that involves a pattern in time, and then you have variations on top of it that actually carry your signal. That's the part that varies. And computer science and basic information theory are both based on this notion. Computer architectures as we know them were first designed around simulated wires. Source code is a simulation of pulses that can be sent sequentially down a wire, as are passed variables or messages.

Now, I think that's a notion that can be extended very, very, far. And if you really want to, you can even use it to describe all of nature, but it's not the only idea that could be extended that far. And it does get kind of awkward because this is not the way that natural systems work.

For example, if you want to describe the connection between a rock and the ground that the rock is resting on, as if it were information being sent on a wire, it's possible to do that, but it's not the best way. It's not an elegant way of doing it. If you look at nature at large, probably a better way to describe how things connect together is that there's a surface between any two things that displays patterns. At any given instant, it might be possible to recognize those patterns.


"If we don't find a different way of thinking about and creating software, we will not be writing programs bigger than about 10 million lines of code no matter how fast our processors become." [After publication of this interview, Jaron Lanier realized that his sentence should read: "bigger than about 20 to 30 million lines of code...".]

- Virtual Reality Pioneer
Jaron Lanier


And so you could think about information science in terms of how different pieces of the universe connect together by recognizing each other's patterns, instead of by adhering to protocol. The difference between two different pieces adhering to a protocol and sending information down a wire, and two different pieces of information having a surface between them, and recognizing patterns from each other, is one of degree. The change in emphasis relates to the kind of information about the other thing that's stored. And, in particular, it relates to temporal encoding, and whether you need a sort of a stack of the past that you use to decode what's coming down the pike.

If you look at the way we write software, the metaphor of the telegraph wire sending pulses like Morse code has profoundly influenced everything we do. For instance, a variable passed to a function is a simulation of a wire. If you send a message to an object, that's a simulation of a wire. And it's not that there's anything wrong with this, but I can make an empirical observation: If you have a time-based protocol to send codes on a wire, it's inefficient to make that kind of coding error-tolerant. It's much more efficient to make it error-intolerant. So errors tend to be catastrophic. You tend to create a situation where, if you get one bit wrong in passing a variable to a function, the function does something that's not just wrong, but chaotically wrong, arbitrarily wrong, terribly wrong.
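[Editor's note: a toy Java sketch, our own and not drawn from the interview, makes the point concrete. The opcode-dispatching function below is purely hypothetical; flipping a single bit of the value passed into it produces an answer that is not slightly wrong but arbitrarily wrong.]

    // Toy illustration of "wire-style" error intolerance: one flipped bit in a
    // passed value yields a chaotically different result, not a nearby one.
    public class BitFlipSketch {
        // Hypothetical function that branches on an opcode passed "down the wire".
        static int apply(int opcode, int x, int y) {
            switch (opcode) {
                case 0:  return x + y;
                case 1:  return x - y;
                case 2:  return x * y;
                default: throw new IllegalArgumentException("unknown opcode " + opcode);
            }
        }

        public static void main(String[] args) {
            int opcode = 0;                              // intended: addition
            System.out.println(apply(opcode, 7, 3));     // prints 10
            int corrupted = opcode ^ 0b10;               // one bit flipped in transit
            System.out.println(apply(corrupted, 7, 3));  // prints 21, not "almost 10"
        }
    }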

Question: It's like a whole bunch of blocks falling down.

Answer: Right. And it results in a type of error that doesn't teach you anything. You have chaotic errors where all you can say is, "Boy, this was really screwed up, and I guess I need to go in and go through the whole thing and fix it." You don't have errors that are proportionate to the source of the error. And that means you can never have any sense of gradual evolution or approximate systems. So, the real difference between the current idea of software, which is protocol adherence, and the idea I'm discussing, pattern recognition, has to do with the kinds of errors we're creating. We need a system in which errors are more often proportional to the source of the error.

Creating Pattern Recognition Software

Question: How would you do that with pattern recognition software?

Answer: In the last five years, particularly in the last year, significant advances in some specific pattern recognition problems have occurred. For instance, I was working for quite a while with a neuroscientist named Christoph von der Malsburg and his student Hartmut Neven, who were based at the University of Southern California, on facial feature tracking, which should be distinguished from facial recognition. We were dynamically tracking features in a human face to create avatars that automatically track people's faces. What was intriguing is just how well it started to work. And there are really two factors that contributed to that. One is that processors are getting faster. We seem to have hit a threshold where a lot of recognition tasks are becoming possible.


"The problem with software is that we've never learned how to control the side effects of choices, which we call bugs."

- Virtual Reality Pioneer
Jaron Lanier


The other factor is that some of the mathematical techniques available to us have been improving. An example would be wavelets, which provide a way of breaking up a signal into frequency and time-based components. It's a sort of fancier version of a more familiar technique that most programmers will know about, called the Fourier transform, or FFT for Fast Fourier Transform. That's one example of a new pattern recognition technique, but there are others as well. So, there's a body of applied mathematics that's becoming better and better. There are also some empirical results from neuroscience that are giving us useful ideas, and that's a particularly delightful development. So just recently, we've seen better face recognizers, better face trackers, better handwriting recognizers, better gait trackers -- all kinds of things.
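[Editor's note: for readers who want something runnable, here is a minimal Java sketch of a single Haar step, the simplest member of the wavelet family Lanier mentions: it splits a signal into coarse averages and local differences. The sample values and class name are our own illustration, not taken from his work.]

    // One-level Haar wavelet step: decomposes an even-length signal into
    // coarse averages (a rough "frequency" view) and local differences
    // (a "time" view of where the signal changes).
    public class HaarSketch {
        static double[][] haarStep(double[] signal) {
            int half = signal.length / 2;
            double[] averages = new double[half];
            double[] details  = new double[half];
            for (int i = 0; i < half; i++) {
                averages[i] = (signal[2 * i] + signal[2 * i + 1]) / Math.sqrt(2);
                details[i]  = (signal[2 * i] - signal[2 * i + 1]) / Math.sqrt(2);
            }
            return new double[][] { averages, details };
        }

        public static void main(String[] args) {
            double[] signal = { 4, 6, 10, 12, 8, 6, 5, 5 };  // made-up sample values
            double[][] out = haarStep(signal);
            System.out.println("averages: " + java.util.Arrays.toString(out[0]));
            System.out.println("details:  " + java.util.Arrays.toString(out[1]));
        }
    }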

So pattern recognition is really starting to come into its own. Sadly, a lot of that's driven by security and defense requirements, but for whatever reason, it's becoming viable. And we're at the point where computers can recognize similarities instead of perfect identities, which is essentially what pattern recognition is about. If we can move from perfection to similarity, then we can start to reexamine the way we build software. So instead of requiring protocol adherence in which each component has to be perfectly matched to other components down to the bit, we can begin to have similarity. Then a form of very graceful error tolerance, with a predictable overhead, becomes possible. The big bet I want to make as a computer scientist is that that's the secret missing ingredient that we need to create a new kind of software.

"Phenotropic" is the catchword I'm proposing for this new kind of software. "Pheno" refers to "phenotype," the outward appearance of something. "Tropic" means interaction. I first published the basic ideas in a book, The Next 50 Years: Science in the First Half of the Twenty-First Century published in 2002 by Vintage Books and edited by John Brockman. In phenotropic computing, components of software would connect to each other through a gracefully error-tolerant means that's statistical and soft and fuzzy and based on pattern recognition in the way I've described.

Advice to Developers

Question: What do you want to say directly to developers?

Answer: What's most important is to keep an optimistic mindset. There's almost a hypnosis that comes about in the computer world, because such detailed consideration is required to do anything with computers that it becomes all-consuming. Computer scientists sometimes say that you fall in love with what you have to struggle for. If you're working on some big programming project, just getting it into your head so that you can effectively deal with it is an all-consuming process for everybody. It's not a bad reflection on you as a programmer that it consumes you. It's a good reflection. To be effective at any large software project, you have to become so committed to it. You have to incorporate so much of it into your brain. I used to dream in code at night when I was in the middle of some big project.


"The real difference between the current idea of software, which is protocol adherence, and the idea I'm discussing, pattern recognition, has to do with the kinds of errors we're creating."

- Virtual Reality Pioneer
Jaron Lanier


And at that point, there's a danger that you lose the faith that used to exist in prior generations: that computing could get better at a fundamental level. And since almost everybody in the whole profession goes through the academic world and out into the industrial world, everyone gets consumed in this way. So, I'm afraid we have lost our greater ambitions, and that deeply concerns me.

If you talk to people who want academic careers and want to get degrees, an example of a hot topic might be quantum computing, which seems exciting. And I think it is exciting, but to me, what's much more important is how we fundamentally conceive of software. If we can't solve the problem of how to write big programs, it won't matter whether they're quantum or conventional. We're still going to have the same complexity barrier. And this overwhelmingly important issue that's at the center of everything isn't receiving the attention it deserves. I suppose we are all so emotionally committed to the particular problems under our noses that we lack the long-range faith that software could be fundamentally better. That's the long-range faith that drives fields like medicine, and I want to try to bring that faith back to computer science.

Question: What advice do you have for developers just starting out?

Answer: There's a lot I would say. If you're interested in user interfaces, there's a wonderful opportunity these days to push what a user interface can be. If a user interface gives a user some degree of power, try to figure out if you can give the user more power, while still keeping it inspiring and easy to use. Can you do it? For instance, could you design a search engine that would encourage people to do more complex searches than they can do on a service like Google today, but still do them easily? I haven't seen a really good visual interface, for instance, for setting up searches on Google. Could you do that? Could you suddenly make masses of people do much more specific and effective searches than they currently are doing just by making a better user interface?

There are hundreds of challenges, and any one of them, if done well, could really improve the lives of millions of people very quickly. So, there's a lot of ripe territory. I'm going to call it "ripe," but I'm not going to call it "low hanging fruit," because I think these problems are genuinely difficult problems, and it's important not to pretend that they're easy. I think that anyone who makes progress on a problem like that deserves an enormous amount of credit.

I would recommend that developers read the history of computer science very skeptically. Read some of the earlier writings by people like Turing, Shannon, and von Neumann and try to think through how these guys were wrong. What would they think about differently if they were starting out today?

Here's the problem with computers: it's just so much work to think about programs that people treat the details of software as if they were acts of God. When you go to school and learn how to program, you are taught about an idea like a computer file as if it were some law of nature. But if you go back in history, files used to be controversial. The first version of the Macintosh, before it was released, didn't have files. Instead, they had the idea of a giant global soup of little tiny primitives like letters. There were never going to be files, because that way, you wouldn't have incompatible file formats -- right?

The important thing to look at is how files became the standard. It just happened that UNIX had them, IBM mainframes had them, DOS had them, and then Windows. And then Macintosh came out with them. And with the Internet, because of the UNIX heritage, we ended up thinking in terms of moving files around and file-oriented protocols like FTP. And what happened is that the file just became a universal idea, even though it didn't start out as one.

So, now, when you learn about computer science, you learn about the file as if it were an element of nature, like a photon. That's a dangerous mentality. Even if you really can't do anything about it, and you really can't practically write software without files right now, it's still important not to let your brain be bamboozled. You have to remember what's a human invention and what isn't. And you have to think about files in the same way you think about grocery carts. They are a particular invention with positive and negative elements. It's very important to keep that sense of skepticism alive. If you do that, it will really have an influence on the quality of code that you create today.

See Also

Jaron Lanier's Home Page
(http://www.jaronlanier.com/)

Brief Biography of Jaron Lanier
(http://people.advanced.org/~jaron/general.html)

One Half of a Manifesto
(http://www.edge.org/3rd_culture/lanier/lanier_index.html)
