First Monday

The Internet and Public Discourse by Phil Agre

Many legal systems, for example in the United States, have had difficulty comprehending the Internet because incompatible precedents based on so many existing media (post, telephone, newspaper, street corner, etc.) seem to apply. The Internet frustrates these traditional analogies because it is really a meta-medium: a set of layered services that make it easy to construct new media with almost any properties one likes. Despite this great flexibility, however, the dynamics of technical standards are emerging as a potentially conservative force. To help in mapping afresh the legal and political concerns that the Internet has raised, this article sketches a series of four models of the interaction between Internet architecture and public discourse.

Contents

The Internet as a communications medium
The Internet as a computer system
The Internet as discourse
The Internet as a set of standards

For all of its precision in the technical realm, as a social phenomenon the Internet still seems inchoate [ 1 ]. Analyses of law and policy have found it remarkably hard to fit familiar models to the Internet. I want to explore why this is, and to take a shot at remedying the situation. When we talk about "the Internet", of course, we could mean a lot of different things. We could be talking about the TCP/IP protocols and the computers that use them; on this view the Internet is a big electric circuit that happens to cover the earth, or at least the relatively affluent parts of it. But for the most part, when we talk about the Internet in the context of important public issues, we mean to refer to something larger, and to give shape to the intuition that the Internet is increasingly bound up with the conditions and practices of public discourse.

I propose, therefore, to sketch a series of models of the Internet -- a series of analyses of the relationship between the Internet and public discourse. In doing so, I hope to provide rational reconstructions of several of the most widely publicized and broadly contested controversies that surround the Internet, and perhaps supply a vocabulary for addressing those controversies more systematically. I will describe four models.

The Internet as a communications medium

The first of these models suggests, very simply, that the Internet is a communications medium. I take this to be the dominant model, and the dominant terms in which a whole host of controversies surrounding the Internet have been debated. When we consider the Internet in this way in the context of specific disputes, however, the question immediately arises of which medium the Internet is. Is it the telephone? Newspaper? Television? Lecture hall? Street corner? The problem is that the Internet, in its many real and envisioned applications, seems to afford all of these analyses, either separately or in monstrous combination. Arguments over constitutional matters such as the Communications Decency Act, or business matters such as the so-called push technologies, or policy matters such as Internet telephony, are effectively debates over which precedent shall apply. Given that every party to these debates typically finds one of the precedents more congenial in its consequences than the others, we are often too busy fighting to take in the awesome extent to which the answers to our questions are indeterminate. The Internet, considered in this way, is very nearly whatever you want it to be.

This would not seem like a good situation. From the point of view of technical people, however, the situation is not at all paradoxical. That you can make the Internet into whatever you want is, for them, precisely the point. The Internet is a kind of meta-medium; the strategy of TCP/IP is to interpose a new service layer between transport and applications, so that developers can choose their metaphors with little concern for how the stuff gets moved around. Digitalization is of course the first key to this strategy, but there is more to it. Nor, I might mention in passing, is the Internet the only example of the strategy. The so-called software radio will shortly permit designers to decouple the formats and protocols of wireless data exchange from the technically horrid details of their analog hardware implementation [ 2 ].
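To make the layering concrete, consider a minimal sketch in Python (purely illustrative; the host names are hypothetical). Two quite different "media" ride on the identical transport service, and only the application-level metaphor differs:

    import socket

    def fetch_web_page(host: str) -> bytes:
        # Speak HTTP: the ontology of requests and pages.
        with socket.create_connection((host, 80)) as s:
            s.sendall(b"GET / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return s.recv(4096)

    def greet_mail_server(host: str) -> bytes:
        # Speak SMTP: the ontology of letters and mailboxes.
        with socket.create_connection((host, 25)) as s:
            banner = s.recv(1024)   # the server announces itself
            s.sendall(b"QUIT\r\n")  # hang up politely
            return banner

The transport layer neither knows nor cares which metaphor is in play; that choice belongs entirely to the application designer.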

The Internet as a computer system

Let us, therefore, try again. The second model treats the Internet as a computer system, a product of a more general set of practices of system design. Andrew Feenberg [ 3 ], among others, has observed that computers have a dual character - specifically, that computers are representational machines that represent the world in at least two different ways. One of these is as a medium; the test, roughly, is that the machine does not analyze the representational stuff at the level at which it is meaningful to people. WordPerfect doesn't know the genre of your document, and Photoshop doesn't know what is going on in your images.

At another, more basic level, however, a computer operates on the basis of a systematic analysis of the world to which its computations are supposed to refer. The first step in designing a computer system is the construction of a data model, or what philosophers call an ontology - an enumeration of the types of things that the designer supposes the world to contain [ 4 ]. These categories, so-called entities, might include people, cars, bank accounts, products, documents, or computers. They might also include entities within the machine or network, such as printer jobs. The idea is that the operation of the machine presupposes, and depends upon, the maintenance of an accurate one-to-one correspondence between the data records in the machine and the real things in the world that the data records are supposed to represent.
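By way of a hedged illustration, here is what such an ontology looks like when rendered as code; the entities are the stock examples from the text, not any actual system's schema:

    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str
        person_id: int        # the machine's handle on a real individual

    @dataclass
    class BankAccount:
        owner: Person
        balance_cents: int    # assumes money is countable in one currency

The system functions only so long as each Person record stands in one-to-one correspondence with an actual person in the world.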

This fact, seemingly simple enough, has vast consequences; it directs our attention to the tremendous variety of material arrangements by which the internal workings of machines are tied to the rest of the world. A machine can only compute with what it can capture, and so the world must be instrumented accordingly, whether through paperwork or tracking devices or ID cards or heaven knows what [ 5 ]. Even beyond this, consider the consequences of a simple computational operation such as the addition of two numbers. If a machine contains one number that originated in New Jersey and another number that originated in Idaho, the sum of those two numbers is only meaningful if the numbers are commensurable, that is, if the same sorts of things exist to be measured in both places, and if both measurements were conducted in the same way. If computers are to perform a great diversity of meaningful computations, as they do every day, then the world must be standardized in a great diversity of ways [ 6 ].
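The point about commensurability can be stated in a few lines of code. The following toy sketch, with invented names and figures, refuses to add two measurements unless they were taken in the same way:

    from dataclasses import dataclass

    @dataclass
    class Measurement:
        value: float
        unit: str    # e.g., "bushels", "acres"

    def add(a: Measurement, b: Measurement) -> Measurement:
        if a.unit != b.unit:
            raise ValueError(f"incommensurable: {a.unit} vs {b.unit}")
        return Measurement(a.value + b.value, a.unit)

    nj = Measurement(120.0, "bushels")    # a number that originated in New Jersey
    idaho = Measurement(85.0, "acres")    # a number that originated in Idaho
    # add(nj, idaho) raises ValueError: the sum would be meaningless.

Real systems rarely carry their units around so explicitly, which is precisely why the world itself must be standardized before the arithmetic can be trusted.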

My main concern here is not with numbers, however, but with the Internet and its place in public discourse. And for that purpose the ontologies that matter most are precisely ontologies of discourse, that is, the elements that computer system developers have imagined discourse to comprise. So far as narrow matters of technical practice are concerned, designers enjoy a vast freedom to choose whatever categories they like. This is the sense in which the Internet is a meta-medium: Internet-based applications can be designed using ontologies derived from many spheres of life, including the various media industries and other conventionalized forms of communication. Of course, the Internet functions as a newspaper, or as a telephone, or as a lecture hall, et cetera, to the extent that the software is coupled to an institutional field - the one within which the ontology of the newspaper or phone system or university already functions. The couplings between most Internet applications and their institutional surroundings have thus far been relatively weak, and this has contributed to a sense of the Internet as a wholly separate sphere. This situation, however, is changing rapidly as the Internet is integrated into the workings of institutions.

We should be concerned with this coupling in many ways. The designer's creative freedom, for example, is a kind of power: the power to define an ontology in one way rather than another, and to implement that ontology on a new kind of hardware that comes with manifold institutional couplings of its own. For one thing, every datum that is captured in a digital medium can in principle be stored indefinitely and reused easily for any purpose. Communication that might otherwise have been bounded by four walls, or the expense of photocopying, or the vagaries of human memory, now exists in Platonic perfection as a digital record that can potentially be submitted to a wide variety of other purposes. As a result, the regulation of those purposes arises as a systematic problem that had formerly been kept within relatively manageable bounds by the enabling and constraining limits of the physical world, or of previous, less generalized media.

The Internet as discourse

To reckon with at least certain aspects of the seemingly wide-open design of new digital media, it will help to sketch out a third model of the Internet. Our starting point is once again the main tradition of computer system design practices, but now under a different aspect. From this perspective, what system developers do is to transform social discourse into machinery. Paradoxical as this description may sound, it is only a mild inflection of system developers' own understanding of systems analysis: one starts with a corpus of discourse, namely someone's explanation of what the system is supposed to do, and one performs grammatical analyses on this discourse. The nouns - car, person, bank account - become entities in the aforementioned data model, the verbs - register, hire, open - become the names of procedures and methods, and so on. The question that is not a part of system developers' sphere of professional concern is where the discourse comes from. In even referring to it as a discourse, I intend to point to its social origins: the institutional processes, with all of their strengths and limitations, through which the discourse arises.
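A small invented example may make the grammatical analysis vivid. Starting from the hypothetical requirement "a customer opens an account", the nouns become classes and the verb becomes a method:

    class Account:
        def __init__(self, owner: str):
            self.owner = owner
            self.balance = 0

    class Customer:
        def __init__(self, name: str):
            self.name = name

        def open(self) -> "Account":    # the verb "open" becomes a method
            return Account(owner=self.name)

    alice = Customer("Alice")
    acct = alice.open()    # the discourse, now executable machinery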

The Internet makes a fine example. The Internet's predecessor, the ARPANET, was the implementation of a particular discourse - the Advanced Research Projects Agency's discourse about the American scientific community and its infrastructural needs. We can see this whole discourse as a discourse by looking at the spectacular career of the Internet in subsequent years. The original ARPANET discourse, like any discourse, made a series of unarticulated or partly articulated assumptions, and these assumptions were, so to speak, built into the protocols. One assumption was that the user community had a strong capacity for collective self-regulation, so that the network need not be terribly secure. As the Internet's use has spread beyond the scientific community, all manner of holes have become visible in the Internet protocols. Peer pressure in the scientific community is sufficiently effective that one would not even think of scientists sending spam, at least not routinely or on a massive scale. As a result, a variety of weaknesses in the Internet's electronic mail protocols have only become evident as spammers have begun to exploit them in the last couple of years.
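The trust assumption is visible in the protocol itself. In classic SMTP, the envelope sender is simply asserted and never verified. The following sketch, using Python's standard smtplib against a hypothetical host, suggests how little the original protocol demands (a modern, hardened server may refuse these commands; the original design did not):

    import smtplib

    # Hypothetical relay; run this only against a local test server.
    server = smtplib.SMTP("mail.example.org")
    server.helo("any-name-at-all")            # identity is self-declared
    server.mail("anyone@anywhere.example")    # envelope sender: asserted, not checked
    server.rcpt("recipient@example.org")      # the protocol simply proceeds
    server.quit()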

This example, and others like it, point to a process of social discovery that is part and parcel of all technology adoption, and particularly the adoption of distributed computer technologies. It is a hermeneutic process: as the technology is used in new ways, we gain a deeper understanding of the ideas that motivated it. Those ideas, and the discourses that convey them, have their own historicity, their own metaphors, their own depths of unarticulated assumptions, and as we hit ourselves on the head in the adoption and adaptation of new technologies, we create the conditions for bringing those depths somewhat more fully into consciousness.

Moreover, because of the aforementioned ability of digital media to repeal the frequently useful limitations of the physical world, as disputes arise in the new context we are frequently forced to conceptualize more deeply the moral bases of our rules [ 7 ]. So long as walls functioned as walls, we could make laws about privacy and property by making laws about walls. As electronic media increasingly breach physical walls, we are compelled to articulate more fully the moral basis for privacy and property, without so much reference to the architectural basis. And inasmuch as the walls of digital environments are simply discursive constructs like any others, walls are increasingly located precisely where the law says they are, and not just where custom and engineering practicality have placed them. This shift can be overemphasized (law has always had opinions about where walls should go, walls had already been breached by other technologies before the Internet came along, and so on), but its direction can hardly be denied.

The Internet as a set of standards

It is evident, therefore, that the discourse-made-machinery that constitutes the Internet has a political significance that is almost frighteningly profound. Computer systems are the products of discourse, among other things, and they are, among other things, important media for such discourse. To comprehend this reciprocal relationship between the Internet and social discourse, it will be helpful to articulate a fourth and final model of the Internet. Our focus here is on standards. Ted Nelson [ 8 ] accurately asserts that the software industry is about the politics of standardization. And as we have seen, both here and in Larry Lessig's [ 9 ] analysis of content filtering software, it also works the other way: software design, at least some of the time, sets the standards of politics. Put another way, the antitrust concern with the control of standards is a dialectical complement of the free expression concern with the standards of control. We care about standards because of the fantastically complicated economic question of who captures the often considerable value that is created through the establishment of a standard [ 10 ]. And we also care about standards because, as we have seen, they arise through the condensation of processes of social discourse. Social discourses are not neutral or innocent; to the contrary, to at least the extent that our discourses about discourse take substantive positions about the nature of society and social relationships, the standards of emerging media of social discourse tend to embody these positions as well. This is a rough and simple statement of something that requires considerably more analysis, but I think it at least accurately captures one of our concerns.

Underlying each of these concerns are the economic dynamics of standards, particularly of the technical compatibility standards where the issues arise most sharply. The work of Paul David and many others suggests that standards are path-dependent, and that because of network effects they tend to have a winner-take-all quality, with one standard becoming dominant and devotees of other standards becoming stranded [ 11 ]. Neoclassical economists have mounted a sophisticated counterattack on these models of market failure [ 12 ], and the matter is anything but settled. My purpose here, however, is not so much to settle it as to delineate the specifically political reasons why we care about it. Roughly put, to the extent that Internet standards shape public discourse, their rule-setting function is a matter of political concern. And to the extent that the Internet serves as a medium for the agenda-setting from which a wide variety of technical standards emerge, the properties of that medium and the larger technical public sphere of which it is a part are likewise matters of political concern [ 13 ].
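The winner-take-all tendency is easy to exhibit in a toy simulation, offered here only in the spirit of the cited models and not as a reproduction of any of them. Each new adopter chooses a standard with a probability that rises faster than its market share, a crude stand-in for network effects:

    import random

    def adopt(n_adopters: int = 10000, seed: int = 0):
        random.seed(seed)
        a, b = 1, 1    # one early adopter for each standard
        for _ in range(n_adopters):
            # Increasing returns: the attraction of a standard grows
            # faster than its installed base.
            if random.random() < a ** 2 / (a ** 2 + b ** 2):
                a += 1
            else:
                b += 1
        return a, b

    print(adopt(seed=1))    # almost always ends lopsided: one standard locks in

Early, essentially accidental adoptions decide which standard wins; rerunning with different seeds changes the winner but rarely the lopsidedness.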

This political perspective illuminates both the economic and the technical dynamics of standardization. One way that standards create social value is through what we might call economies of generality. To the extent that activities in a series of sites can be fitted to a common framework, many types of information and knowledge work achieve greater economies of scale. In enterprise computing, for example, the trend is away from custom-built systems that reflect the ontologies and discourses of particular organizations and toward the adoption of standardized software modules, bought off the shelf and configured for each organization's needs. The price here is the work of conforming the organization to the software package; one benefit among many is that the cost of developing the package can be distributed across many organizational users. As Nathan Myhrvold puts it, with personal computer software you can get $100 million worth of software for $100, and so it goes with increasingly many de facto standard software packages as well [ 14 ].
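Myhrvold's remark is, at bottom, arithmetic about fixed costs. A quick calculation, with figures invented to match the quote, shows how per-user cost collapses as the user base grows:

    fixed_cost = 100_000_000    # development cost: nearly all of the total
    marginal_cost = 1           # cost of one more copy: almost nothing

    for users in (1_000, 100_000, 1_000_000):
        per_user = fixed_cost / users + marginal_cost
        print(f"{users:>9,} users -> ${per_user:,.2f} per copy")

    # 1,000,000 users -> $101.00 per copy: $100 million of software for $100.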

The concern here is not precisely the imposition of bland homogeneity and uniformity upon the whole world; the establishment of standards on one layer frequently creates the conditions for an explosion of creativity on the layer above. The concern, rather, is that this burst of creativity, too, becomes subject to the same path-dependent, winner-take-all kind of standardization that made it temporarily possible. If it does not seem substantively crucial which personal computer operating system takes over the world, or which internetworking protocol, consider the emerging tornado of activity, one or two layers up, to build new infrastructure for digital universities [ 15 ]. Such projects may bring new efficiencies, but also the danger of a greater degree of ontological standardization, not to mention a potentially greater capacity for the regulation of content, as lectures and class discussions are captured digitally for the ideologically motivated to peruse [ 16 ].

None of this is inevitable. My analysis describes concerns and dangers, forces and patterns, not essences and predictions. Nonetheless, if this model-building exercise accomplishes anything, may it provide an emphatic counterpoint to the romantic millennialism that portrays the Internet as the end of politics and the guarantor of decentralization. It is neither. To the contrary, the economics and the politics of the Internet are as one, and the institutional transformations that the Internet is already facilitating are political processes in the deepest possible sense - a near-total renegotiation of the mechanisms and mediations of our lives together here on earth.

About the Author

Philip E. Agre is an associate professor of communication at the University of California, San Diego. He received his PhD in computer science from MIT in 1989 and taught at the University of Chicago and the University of Sussex before arriving at UCSD in 1991. He is the author of Computation and Human Experience (Cambridge: Cambridge University Press, 1997) and the coeditor of Computational Theories of Interaction and Agency (with Stanley J. Rosenschein; Cambridge: MIT Press, 1996), Technology and Privacy: The New Landscape (with Marc Rotenberg; Cambridge: MIT Press, 1997), and Reinventing Technology, Rediscovering Community: Critical Studies in Computing as a Social Practice (with Douglas Schuler; Ablex, 1997). He also edits the Red Rock Eater News Service, an Internet mailing list that distributes useful information on the social and political aspects of networking and computing to 4000 people in 60 countries. His home page is located at http://communication.ucsd.edu/pagre
E-mail: pagre@ucsd.edu

Notes

1. This article is a revised version of a paper presented at the 1998 Annual Meeting of the American Association of Law Schools, San Francisco. I wish to thank Jon Weinberg for organizing the panel on "Fitting Models to the Internet" of which the paper was a part. I also appreciate the comments of the anonymous referees and the bibliographic assistance of Paul Jonusaitis and Dave McArthur.

2. Joe Mitola, 1995. "The software radio architecture", IEEE Communications Magazine, Volume 33, number 5, pp. 26-38.

3. Andrew Feenberg, 1991. Critical Theory of Technology. New York: Oxford University Press.

4. On data models see Graeme C. Simsion, 1994. Data Modeling Essentials: Analysis, Design, and Innovation. New York: Van Nostrand Reinhold. On their significance for the present argument see Philip E. Agre, 1997. "Beyond the mirror world: Privacy and the representational practices of computing", In: Philip E. Agre and Marc Rotenberg (eds.), Technology and Privacy: The New Landscape, Cambridge: MIT Press.

For a more sophisticated approach to the ontology of computation, see Brian Smith, 1996. On the Origin of Objects. Cambridge: MIT Press.

5. Philip E. Agre, 1994. "Surveillance and capture: Two models of privacy", Information Society, Volume 10, number 2, pp. 101-127.

6. Geoffrey Bowker, 1994. "Information mythology: The world of/as information", In: Lisa Bud-Frierman (ed.), Information Acumen: The Understanding and Use of Knowledge in Modern Business, London: Routledge.

7. For one example of this pattern, see the analysis of the obsolescence of the common-law concept of negotiability in Raymond T. Nimmer and Patricia Krauthouse, 1995. "Electronic commerce: New paradigms in information law", Idaho Law Review, Volume 31, pp. 937-966.

8. Personal communication.

9. Larry Lessig, "What things regulate speech", available through Cyberspace Law Abstracts, at http://www.ssrn.com/update/lsn/cyberspace/csl_papers.html

10. Michael L. Katz and Carl Shapiro, 1994. "Systems competition and network effects", Journal of Economic Perspectives, Volume 8, number 2, pp. 93-115.

11. Paul A. David, 1985. "Clio and the economics of QWERTY", American Economic Review, Volume 75, number 2, pp. 332-337. See also W. Brian Arthur, 1989. "Competing technologies, increasing returns, and lock-in by historical events", Economic Journal, Volume 99, pp. 116-131.

On Internet standards in particular, see Mark A. Lemley, 1996. "Antitrust and the Internet standardization problem", Connecticut Law Review, Volume 28, pp. 1041-1094.

12. S. J. Liebowitz and Stephen E. Margolis, 1994. "Network externality: An uncommon tragedy", Journal of Economic Perspectives, Volume 8, number 2, pp. 133-150.

13. Timothy Schoechle, 1995. "The Emerging role of standards bodies in the formation of public policy", IEEE Standards Bearer, Volume 9, number 2, pp. 1, 10. See also Richard W. Hawkins, 1995. "Standards-making as technological diplomacy: Assessing objectives and methodologies in standards institutions", In: Richard Hawkins, Robin Mansell, and Jim Skea (eds.), Standards, Innovation and Competitiveness: The Politics and Economics of Standards in Natural and Technical Environments. Hants (England): Edward Elgar.

On the rule-setting function of Internet standards, see Joel Reidenberg, forthcoming. "Lex informatica: The formulation of information policy rules through technology", Texas Law Review, Volume 76.

14. Myhrvold's comments appeared in an interview in Wired, Volume 3, number 9, pp. 152-155, 198, and at http://www.wired.com/wired/3.09/features/myhrvold.html. It should be noted that many products exhibit substantial economies of scale, for example because of large fixed costs of production. Information commodities such as software are distinctive in that their costs of production are almost all fixed. For the vast consequences of this fact in the case of media content, see C. Edwin Baker, 1997. "Giving the audience what it wants", Ohio State Law Journal, Volume 58, number 2, pp. 311-417. Just as network effects are sometimes analyzed as demand-side economies of scale because they consist of the mutually reinforcing benefits enjoyed by the putatively homogeneous consumers of a product or service, economies of generality might be understood as demand-side economies of scope: they depend on the possibility of applying the same software package to a large number of organizations despite their inevitable heterogeneity.

On the role of economies of scope in the development of the computer industry, see Alfred D. Chandler, Jr., 1997. "The computer industry: The first half-century", In: David B. Yoffie (Ed.), Competing in the Age of Digital Convergence. Boston: Harvard Business School Press, pp. 99-100; and Kenneth Flamm, 1988. Creating the Computer: Government, Industry, and High Technology. Washington: Brookings Institution, pp. 210-214.

15. See, for example, the ambitious and detailed ontology of discourse that is being developed in the educational domain by the IEEE P1484.9 Task Ontology Working Group, http://www.manta.ieee.org/p1484. More generally, see Educom's Instructional Management Systems Project, at http://www.imsproject.org/

16. Thomas Sowell, 1994. "Letting in the light", Forbes (12 September), p. 98.



Copyright © 1998, First Monday