Today, a descendant of that Cold War mechanism is used to track seismological phenomena, transmit breaking news bulletins, and send email to mom. Does this signal a complete shift in priorities? In part, yes; more accurately, though, it is an example of a technology with more uses than anybody ever imagined.
The Internet we use today is one of the few positive legacies of Cold War paranoia, providing efficient and inexpensive communications between people around the world. As the Iraqis proved during the Gulf War, commercially available Internet technologies were indeed resistant to enemy fire. But as ``Information Superhighway'' becomes the most over-used phrase of the 1990s, huge numbers of people are signing up and trying to become part of the Internet community. By understanding the motives, methods, and technologies behind the Internet's development, we can get a sense of the power and importance of this project gone happily amok.
On January 2, 1969, designers began working on an experiment to determine whether computers at different universities could communicate with each other without a central system. The corporation Bolt, Beranek and Newman had been awarded the contract to develop the Interface Message Processor (IMP), the basis of the new communications system. IMPs were small, dedicated machines attached to each host, responsible for forming the network between computers [1]. IMPs would use a technology called packet-switching, which split large sections of data into small parts called packets, each labeled with its destination address. Packets could be sent in any order and through different routes which all led to the same destination [2]. Upon arrival at the destination computer, the packets could be reassembled. (While the term has died out, the IMP's descendants, today's routers, still form the backbone of packet-switching networks.)
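The split-label-reassemble idea can be sketched in a few lines of Python. This is a toy illustration only: the packet size, field names, and address are invented for the example (real IMPs spoke BBN's 1822 host-interface protocol, not anything like this).

```python
import random

PACKET_SIZE = 8  # bytes of payload per packet (illustrative only)

def packetize(message: bytes, destination: str):
    """Split a message into packets, each labeled with its destination
    address and a sequence number so it can be reassembled later."""
    chunks = [message[i:i + PACKET_SIZE]
              for i in range(0, len(message), PACKET_SIZE)]
    return [{"dest": destination, "seq": n, "payload": c}
            for n, c in enumerate(chunks)]

def reassemble(packets):
    """Packets may arrive in any order; sort by sequence number and
    concatenate the payloads to recover the original message."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    return b"".join(p["payload"] for p in ordered)

packets = packetize(b"Cold War networks, peacetime mail.", "host-ucla")
random.shuffle(packets)        # simulate out-of-order delivery
print(reassemble(packets))     # the original message is restored
```

Because each packet carries its own address and position, no single path — and no central switch — needs to survive intact for the message to get through.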
The advantage of the packet-switching system was very clear. Under a traditional central system, all information had to be channelled through one source, processed, and sent off somewhere else. Packet-switching, though, allowed for another method: information could first be sent to one place, and if that site was not working or was processing too slowly, it could be routed, on-the-fly, somewhere else. This concept, called dynamic re-routing, would allow all hosts to be ``equal.'' With every computer having the same routing abilities, an enemy would have to destroy nearly all computers on the network to be sure that communication lines were dead.
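Dynamic re-routing among equal hosts can be sketched with a small graph search. The four-node topology and host names below are hypothetical, and real networks use distributed routing protocols rather than a single breadth-first search; the point is only that traffic flows as long as *some* path survives.

```python
from collections import deque

# A hypothetical network of equal hosts; every host can forward packets.
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C"},
}

def route(src, dst, down=frozenset()):
    """Breadth-first search for any working path from src to dst,
    skipping hosts that are down."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - seen:
            if nxt not in down:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # only a near-total outage severs communication

print(route("A", "D"))              # a two-hop path via B or C
print(route("A", "D", down={"B"}))  # re-routed around the dead host
```

Knocking out host B does not isolate A from D; the packets simply take the surviving path through C. That resilience is exactly what made the design attractive to Cold War planners.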
While these developments were looking quite positive, the designers soon ran into trouble. The original systems only supported client-server applications like telnet and FTP, and couldn't handle host-to-host relationships [1]. This limit would impair the functionality of the network. A new protocol to take care of this went into development soon afterwards; called Network Control Protocol (NCP), it became the primary protocol for host-to-host communication on the network. Armed with these tools, researchers were ready to unveil their creation: ARPAnet.
While the technology was growing quickly, the number of hosts hooked up to ARPAnet was still growing slowly. Between 1969 and early 1977, ARPAnet only added 107 hosts. (In contrast, more than one million hosts were added to the Internet between January and August 1994 [4].) Even so, engineers at DARPA and RAND recognized that this new communications network was going to grow into something far larger than they had ever imagined, and needed to develop a design suitable for a large network.
During 1983, to provide operational separation, the military broke off from ARPAnet and formed MILnet. The Department of Defense continued to run and fund both networks. Further, more networks were popping up; educational and commercial organizations that didn't fall into ARPA's original charter wanted to use the same packet-switching technologies.
In the early 1980s, two large networks sprang up: CSnet (Computer Science Network), for members of the computer science academic and industrial community, and BITNET (Because It's Time Network), for the general academic community. Other small networks, like ones for space scientists and high-energy physicists grew for specific needs [5]. (The latter also helped develop the foundation of the World Wide Web in 1989.) While these networks existed separately from ARPAnet, there was a need for interconnection between all of them. In 1983, CSnet and ARPAnet negotiated an agreement which allowed members of the two networks to exchange electronic mail. Further agreements followed, and the networks began building gateways between one another.
The planners envisioned a three-tiered system. Instead of user organizations (like universities and manufacturers) connecting directly to the backbone linking the five supercomputer centers, they developed a mid-level tier, where regional networks would connect the two levels together [2]. Starting in 1987, the NSF funded research organizations at IBM, MCI, and the Merit Computer Network. Originally, the NSF wanted to incorporate its network into ARPAnet, but a number of political and technical difficulties caused it to build its own network.
The original supercomputer centers turned out to be unsuccessful; few of them worked, and still fewer were cost-efficient enough to maintain. The NSF kept up its network, though, adding more than a dozen backbones and more large regional networks. By 1989, ARPAnet had been co-opted; it folded, having provided the impetus for technologies that far exceeded its capabilities.
New ideas continue to pop up. For example, when he was a Tennessee Senator, Vice President Albert Gore proposed the National Research and Education Network, which (building off of NSFnet) would provide top computing facilities to research communities and schools. This program went into development in 1991 and continues today. While many people are frightened by the prospect of the government having a larger role in Internet policy (even as it divests itself from hardware), others are pleased that the ``information-poor'' may be given a chance to become part of this expanding world [2],[3].
Scientists developing networking technology in the 1960s knew that what they were building would be far bigger than themselves; nobody, however, could have predicted the explosion in Internet access and interest in the past several years. The original designers didn't even think email would be something people would want! Commercial networks, students, and even Internet cafes are scrambling to sign up and be part of a technological revolution. It is important for us to remember that the real revolution took place two decades ago -- today's technology just rides on the wave of yesteryear.
There are also a number of on-line documents that discuss the Internet's history. A column from Bruce Sterling, originally published in The Magazine of Fantasy and Science Fiction in 1993, and found at gopher://gopher.isoc.org:70/00/internet/history/short.history.of.internet provided the impetus for this article; it is well-written, interesting, and suitably provocative. Ronda and Michael Hauben (at University of Michigan and Columbia University respectively) have developed an on-line manual (http://www.cs.columbia.edu/~hauben/netbook) with some history and an excellent set of references.
There are a number of first-hand accounts -- from the developers in the 1960s and 1970s -- that can be found with a bit of searching. Still, the primary source for Internet history has to be the Internet Requests for Comments (RFCs). Anything you need (or want) to know can be found there.
Copyright 1995 by Scott Ruthfield
Location: www.acm.org/crossroads/xrds2-1/inet-history.html