End to ends

April 7, 2004

Andrew Otwell (drawing on Abe Burmeister, Clay Shirky, Christopher Alexander, and it looks like one or two others) has posted his provocative thoughts on the metastatic logic of scalability in urban and interface design. His point, among others, is that practices appropriate to the design of systems for small groups of people - the groups, in other words, in which most of us spend the majority of our lives, certainly the majority of personally meaningful moments - are simply different from those useful in the design of "enterprise-scale" systems.

One implication of this insight is that the hierarchical tree so familiar from org charts and site maps gets improperly imposed on these intimate systems, very much to their disadvantage. Let's take a closer look to see why I wind up only partially agreeing.

the plane of a foolish consistency

In the case of large-scale systems, I can tell you from daily personal experience, the working information architect is presented with challenges having to do with consistency of experience, with smoothness. We're faced, often enough, with a situation in which the heterogeneous needs of tens of thousands of people inside an organisation communicating with potentially millions of people outside (and vice versa!) are forced through an aperture maybe seven, ten, twelve options wide: the front page of the corporate Web site.

For reasons having primarily to do with economics - that is, with the managed scarcity of resources - we're willy-nilly forced to regularize and rationalize this situation. We devise personae and use cases to account for the majority of anticipated interactions with the site. We design structures - hierarchical trees, generally - to accommodate those interactions. We discount edge cases. The result is repeatable, consistent, relatively easy to reconfigure on the fly. In a word: modular.

Most of the time, it must be said, this arborescent modularity works pretty well. In contrast to the creative ferment that characterized Web design circa 1997-1999, this more-or-less standard interface and deep structure paradigm has become normative precisely because it does treat the majority of interactions efficiently. (It also happens to play nicely to the predilections of software developers used to the nested hierarchical logic imposed by code, whose preferences play a significant role in the outcome of any large-scale Web project. But that, my friends, is a different story.)

It should go without saying, though, that not every Web site, and certainly not every physical space or system of same, faces the same pressures as the ones on which enterprise-scale information architecture is predicated. As I understand it, part of Alexander's argument with tree structures - and, by inference, Andrew's - is that they tend to cut across the relations that mean the most to our conceptions of the good life.

No problem there. If the critique is that a modularity imposed from an inapposite desire for scalability is detrimental to the organic growth of smaller systems, systems neither required nor destined to scale arbitrarily, I'm right there with you. But even this is too simplistic a view: it's not merely the shape of an architecture that determines what sort and richness of activities it can support, but its intelligence as well.

stupid is as stupid does

Intelligence? I'm tempted - OK, I'll surrender to the temptation - to introduce yet another metaphor from network architecture, the "end-to-end network," as a way to understand what may be going wrong in these structures. Because I think it's not necessarily a hierarchical, arborescent, functional modularity that does most to compromise quality of life or experience in designed systems, but their overspecification.

The term arises out of a tension in the theory of network design, going back some twenty-five years now. At that point in time, the programmability of a network, then a novel technical possibility, appeared to offer an attractive solution to seemingly intractable issues of congestion or underuse. A traffic-management function, for example, to allow packets of information moving through the network to change their priority dynamically - to let them redefine themselves as ambulances and fire engines should traffic thicken, as it were - is the kind of elaboration engineers thought might prove useful.
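
To make that concrete, here is a minimal sketch in Python of the sort of in-network intelligence those engineers contemplated: a router that inspects traffic and quietly promotes a favored class of packet once congestion passes a threshold. Every name, class, and number here is my own invention for illustration's sake; no real protocol is implied.

```python
import heapq

# A hypothetical "smart" router: the network itself inspects traffic
# and re-prioritizes packets once congestion crosses a threshold.

CONGESTION_THRESHOLD = 100  # queued packets before in-network triage kicks in

class SmartRouter:
    def __init__(self):
        self.queue = []    # min-heap of (priority, tiebreaker, packet)
        self.counter = 0   # tiebreaker so packets never compare directly

    def enqueue(self, packet):
        priority = packet["priority"]
        # Under congestion, the network redefines a favored class of
        # traffic as "ambulances and fire engines" and waves it through.
        if len(self.queue) > CONGESTION_THRESHOLD and packet["kind"] == "voice":
            priority = 0
        self.counter += 1
        heapq.heappush(self.queue, (priority, self.counter, packet))

    def forward(self):
        """Send along the highest-priority packet, if any."""
        if self.queue:
            return heapq.heappop(self.queue)[2]
        return None
```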

Computer scientists David Reed, Jerome Saltzer, and David Clark advanced a countervailing theory, one whose adoption in time proved essential to the rapid spread and easy extensibility of the Internet. They held that "building complex function into a network implicitly optimizes the network for one set of uses while substantially increasing the cost of a set of potentially valuable uses that may be unknown or unpredictable at design time."

This, ultimately, is the argument behind adopting, instead, a "small pieces loosely joined" architecture. Designing function into the network itself freezes a moment in time, with all its arrangements and priorities and valuations intact. The trouble is, of course, that all of those things change over time, in unpredictable directions. As Reed, Saltzer and Clark rather dryly put it, "enthusiasm for the benefits of optimizing current application needs by making the network more complex may be misplaced."

The model they offered, the "stupid" or "end-to-end" network, pushes functionality outward, to the objects connected to the network. The network itself is intentionally agnostic regarding - even ignorant of - the needs imposed by any given functionality. The result is a relatively unrestrictive specification set for what can be attached to the network. It's this freedom that has let the hundred flowers of the Internet bloom, especially the World Wide Web built on it and through which you are presumably reading these words.
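
Here, under the same invented names, is the end-to-end counterpart: the router degenerates into a dumb queue that inspects nothing, while function - a toy acknowledge-and-retransmit reliability scheme, in this sketch - lives entirely in the endpoints.

```python
from collections import deque

class DumbRouter:
    """Forwards packets in arrival order; inspects and optimizes nothing."""
    def __init__(self):
        self.queue = deque()

    def enqueue(self, packet):
        self.queue.append(packet)

    def forward(self):
        return self.queue.popleft() if self.queue else None

class Endpoint:
    """All the intelligence lives here, at the edge of the network."""
    def __init__(self, name):
        self.name = name
        self.unacked = {}  # seq -> payload, still awaiting acknowledgment

    def send(self, router, seq, payload):
        self.unacked[seq] = payload
        router.enqueue({"src": self.name, "seq": seq, "payload": payload})

    def receive_ack(self, seq):
        self.unacked.pop(seq, None)

    def retransmit(self, router):
        # Reliability is the endpoints' problem, not the network's.
        for seq, payload in list(self.unacked.items()):
            router.enqueue({"src": self.name, "seq": seq, "payload": payload})
```

Swap in an endpoint with entirely different needs and the network needn't know or care - which is precisely the point.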

the white zone is for loading and unloading only

Let's jump back across the gulf of metaphor, and think about Modernist-era urban planning practice in this light. City planners used to step right into the pitfall Reed, Saltzer and Clark described, by overdesigning and overspecifying; in architectural terms, one might say they collapsed program and circulation, with "loops" and "zones" dedicated to given functions. (Alexander correctly torpedoes Paolo Soleri's ostensibly organic arcologies as fraudulent in this regard. Drawing a beguilingly biomorphic shape and labeling the voids within "Cultural Center" or "Factories and Utilities" has as much to do with helping a community find its own appropriate shape as Levittown does.)

By contrast, in the end-to-end model, "[m]oving functions and services upward in a layered design, closer to the application(s) that use them, increases the flexibility and autonomy of the application designer to apply those functions and services to the specific needs of the application."

Education, that is, is something that might happen inside a school (or library, or home, or wherever, but at any rate through the interaction of people taking on the roles of student and teacher, respectively), not in a Learning Zone hardwired into the city plan. And why should rest and recuperation not be possible throughout an urban complex, wherever and whenever required? Why force people to haul their weary bones to the Relaxation Zone?

When framed this way, the natural thing is to say, sure, that degree of overspecification and functional differentiation is silly and counterproductive; urban infrastructure should exist solely to facilitate activity ("application") by permitting the free circulation of people within it.

when you get to the bottom you go back to the top

All of which brings us back to that problematic modularity. But in this light, we can see that modularity, and maybe even the apparition of an arborescent framework behind the individual modules, emerges to some extent from below. The repeating, soulless grid of franchises that Abe's talking about here exists simply because people don't want to travel half an hour to the Services Core to fulfill the basic requirements of everyday life. It's an emergent property of the maximum distance people will travel for a given commodity, a quantity which changes from (sub)culture to culture and from commodity to commodity, but in any event rarely exceeds a few kilometers.
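
A back-of-the-envelope model makes the point; the numbers are assumptions of mine, not measurements. On a linear commercial strip, the density of outlets falls straight out of the distance people are willing to travel:

```python
import math

def outlets_needed(strip_km, max_travel_km):
    """Minimum outlets so that nobody on a linear strip has to travel
    farther than max_travel_km; each outlet covers 2 * max_travel_km."""
    return math.ceil(strip_km / (2 * max_travel_km))

# A 20 km strip with a 2 km tolerance yields an outlet every 4 km or so;
# shrink the tolerance to 500 m and the grid thickens accordingly.
print(outlets_needed(20, 2))    # -> 5
print(outlets_needed(20, 0.5))  # -> 20
```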

In the case of Manhattanites on foot and in search of coffee, say, the maximum displacement may be as short as a single city block, for that is the distance by which many a Starbucks is separated from another hereabouts. They serve different microcommunities, draw from different catchment basins: the traffic from a cluster of two or three office buildings during business hours is probably enough to keep any single outlet operating in the black.

Not only would it appear that this modularity is itself an organic thing, then, and not necessarily something imposed, but it certainly seems superior as an organising principle for the city to others proffered by post-Modern urban planning.

induced anxiety

In the years since the eclipse of High Modernism, and especially since cheap computing power made complexity theory a topic of immediate relevance, the possibility that pleasing forms might be generated algorithmically has seduced many a designer. (Greg Lynn, discussed on v-2 some three years ago, is notable in this regard, though one would surely be remiss in not mentioning the antecedent work of John Cage and Brian Eno in a different realm.)

It was inevitable that someone would turn these methods of production to the design of urban space; Makoto Sei Watanabe's Induction Cities is one such attempt, a long-term research project into the possibilities of generating urban form from algorithms.
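
For the flavor of the genre - and this is emphatically not Watanabe's actual method, just a toy of my own devising - a few lines of Python will accrete an "organic"-looking urban figure from a single seed cell:

```python
import random

random.seed(42)
SIZE, STEPS = 21, 400
grid = [[" "] * SIZE for _ in range(SIZE)]
grid[SIZE // 2][SIZE // 2] = "#"  # the founding cell

for _ in range(STEPS):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    # Accretion, not planning: a cell fills only next to existing fabric.
    if any(0 <= x + dx < SIZE and 0 <= y + dy < SIZE
           and grid[y + dy][x + dx] == "#"
           for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
        grid[y][x] = "#"

print("\n".join("".join(row) for row in grid))
```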

The trouble with the Induction City model is that it's just as intellectually bankrupt, just as irrelevant to how life is actually lived by human beings in human communities, as the hyperrationalized Brasilia. As a "tool for the visualization of concepts," fine. But I surely hope nobody is seriously suggesting that these generative maps be seen as any kind of blueprint for cities in which human beings are intended to live.

Far from thinking the status quo is too rigid, Watanabe apparently has a hard time with the sheer unreliability of architecture and urban planning as they're currently practiced:

Roughly speaking, the method will work to substantiate "inspiration." It will be able to reduce the probability of "inspiration" turning out to be a failure. The scientific method will come nearer to engineering, and the great master's art will come closer to science.

My problems with this line of thought are several. Not to set the hapless Watanabe up as a strawman for all those working in parallel endeavors, but how can someone sophisticated enough to understand that "a city cannot be designed" and that "[l]iving organisms are not governed by a grand designer" possibly believe that simply setting a few parameters and letting a computer draft a pleasingly fractal shape has anything to do with the real methods by which human communities organize themselves?

"What we need is not critical perceptions of what a city is, but a methodology for creating a city." Really? In a bizarre inversion, what Watanabe is doing here - despite wielding eminently buzzword-compliant, zeitgeisty toys like "a neural network, genetic algorithms and artificial intelligence" - is nothing other than imposing control from above. If anything, the bizarre flows set up by his Induction Cities seem bound to create moments of human isolation and disconnection, without even the justification from efficiency that an imposed arborescent form offers.

The problem, again, isn't the tree per se, and we should resist the temptation to equate it with relations inimical to the development of community or affinity. No, the problem is the imposition of form - any form. It seems to me that humans already know how to create cities just fine by ourselves, without the help of any algorithms other than the ones we're already carrying around upstairs. After all, we do it every day, and have been since before we began recording our efforts and calling it "history."

beyond the will to a system

I mistrust all systematizers and I avoid them.
The will to a system is a lack of integrity.
- Friedrich Nietzsche

Here, finally, is one of the tensions I live with, or in. The above lines from Twilight of the Idols are ones that Anne Galloway uses as an epigraph of sorts on her site, and they happen to reflect a sentiment that I've long accepted, even reveled in.

Life, living things, organic things: they're messy, they continually flow and leak and fold back on themselves. It takes a certain maturity to accept this, to find beauty in it, especially for those of us who (have been trained to) associate harmony first and foremost with order. It's not easy to let go of the idea - the introduction of which into my own life I associate with eighth-grade biology's unit on Linnaean classification - that the universe of phenomenal objects can be comprehensively named, ordered, and understood.

Nietzsche is here talking primarily about intellectual systems, about ways of organizing knowledge such that anomalies or edge cases must be sawn off to fit. The "lack of integrity" he's talking about I understand to mean the inability to face up to the fact that the world is endlessly varied, and to the implications of this heterogeneity. Socially, politically, personally, this mistrust of systematization holds true. Where it breaks down for me is when we begin talking about metadesign, about frameworks and arrangements that provide for communication between those smaller-scale chunks of meaning and being.

To return to the original question implied in Andrew's post: what is the language of form appropriate to the design of systems for small groups of people? I think most of the people reading this would agree that, almost without exception, it would be their own language, whatever that would happen to be and whether it produced maps resembling "semilattices," pyramids, Lorenz butterflies, or anything else.

But there does come a place where a systematic approach is called for, and that place is the network that connects these local, heterogeneous, wildly and delightfully variable moments with each other and that facilitates movement between and among them.

I wonder, in a world where many, many small groups of people must coexist with each other, if it couldn't be the case that a degree of systematization, applied with care, actually provides for greater diversity overall? I'm thinking of signage programs, or protocols for the design of thoroughfares: the provisions of a relatively "stupid" end-to-end urbanity, just enough and no more.

Such systematization is all about providing a stable platform for the emergence of what are, I trust, the more interesting sorts of complexity and diversity. Put concretely: would you rather live in a city with a hundred different, locally varying practices for the labeling and coloring and shape and placement of street signs, or one that imposed a single standard on its constituent parts? All of the interesting, complicated stuff still exists in the things connected to the network, but the network is left to do its job.
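
If you'll permit one more sketch (every name in it invented), the thought looks something like this in code: the citywide standard fixes only shape, color, and mounting height - the "stupid" layer - while each neighborhood supplies whatever text it pleases:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SignStandard:
    """The shared, 'stupid' layer: fixed citywide, interesting to no one."""
    shape: str = "rectangle"
    color: str = "white-on-green"
    height_m: float = 2.5  # mounting height

@dataclass
class StreetSign:
    """The endpoint: local content riding on the common platform."""
    standard: SignStandard
    text: str

STANDARD = SignStandard()
signs = [StreetSign(STANDARD, "Rue de l'Espérance"),
         StreetSign(STANDARD, "Mott St")]
```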

Maybe this is just a different way of agreeing with Andrew, with Clay Shirky and Christopher Alexander, with everyone who's ever felt that arrangements which make perfect sense at the scale of a city or a global site break down badly on the neighborhood or blog level. Maybe it's yet another misplaced technical metaphor destined to founder on the rocks of lived reality. All I know is that the relatively small situations that matter most to me in life - comfortable cafes and well-provisioned magazine stands, decent repertory cinemas, above all great conversations - are accounted for neither in hierarchical directory structures nor in putative "programs of flow."

It seems strange to me that a quarter-century-old paper from network systems theory could provide the most convincing explanation of just how to support the emergence of these things that mean so much to me, but there you have it. I guess I'm just an end-to-end guy.
