The World Wide Web is a powerful medium with many applications beyond just publishing static documents. It is certainly an interface to the space of "documents." But already, with established features such as input forms and server-side scripting, we see that the web is increasingly becoming an interface to the space of what is traditionally called "applications."
And, to support all the ever more inventive and encompassing uses of the web, browsers are called upon to have ever more features. But, however judiciously browser designers choose the features to implement, a consequence is that many browsers can do many things, yet tend to be fairly shallow in each area.
Designing for the lowest common denominator is fine for the first generation of the software, but we're starting to rely on and demand more from the WWW, and are stretching the current design.
One goal is not just to standardize on the specific features that we want in browsers. Also important, I think, is to standardize on an extensible architecture so that web software can grow incrementally.
In this talk I'll describe a few possible approaches for a browser to gain more flexibility, and briefly describe one particular approach as implemented by a system known as ViolaWWW.
Here are some observations which make a dynamically extensible architecture look like a good thing to have.
One, creeping featurism leads to really big programs. Not good. It would be nice to have an architecture such that people with special needs can plug in an additional, or replacement, module that solves their particular problems.
Otherwise, programs get really big, and 90% of all users use maybe 10% of the features.
Two, with server-side scripts, we're seeing lots of documents that are really front-ends to what are traditionally called "applications." This basically goes with the trend of the blurring line between applications and information, and generally the merging of various technologies.
Three, without an easily extensible architecture, special interest groups might end up needing to make special modifications to the software, and then we would end up with different versions of the same browser.
There are other problems which argue for the idea of extensible software, but we won't go into them here.
We already do "extend" browsers with things like "external viewers." But they're not very well integrated with the browser. Ideally those external viewers should render in place inside the document, work together with the browser, and be tightly integrated with the browser and other parts...
So, a solution is what's been touted under the name "component software." The basic idea is that, rather than building one single monolithic application that does everything from day one, we should be building a framework or architecture that can be dynamic in its ability to have functionality added or deleted on the fly.
Those component parts can be many different things. For example, such parts could be special navigation controls, visualization controls, self-guiding slide show presentation tools, and so on.
What are some tradeoffs?
With external programs, you get good performance, but typically you'd have to manually install them. So they're kind of troublesome, and can be insecure if they come as binary executables and you don't know exactly what they do. GIF viewers are usually quite safe, but you'd think twice if it's just any program from anywhere that's "designed to work with your browser" in some non-obvious ways.
With embedded software, or "component software," they're basically programs which get installed automatically. And there are different ways of going about it.
One approach is to use natively linkable executable objects. These have good performance but can be insecure. And this approach is platform-dependent.
Another approach is the objects-with-interpreted/compiled-scripts model. The scripting language can have performance problems, but a lot of this is a matter of better interpreter or even compiler design. This approach can also be much safer than the executable objects approach, because the interpreter has a chance to catch dangerous operations.
Yet another approach is to have some kind of interface protocol such that the computation can be happening remotely, and have things rendered locally. This can be quite secure, but could also come with a high bandwidth penalty.
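To make that remote-computation approach concrete, here is a hypothetical sketch (in Python, not any actual Viola protocol; all names here are made up for illustration): the remote side does the computation and emits simple, portable drawing commands, and the local side just interprets and renders them.

```python
# Hypothetical display-list protocol: heavy computation happens remotely,
# and only compact rendering commands travel over the network.

def server_compute():
    """Pretend this does expensive work remotely; it returns draw commands."""
    return [("text", 10, 10, "result: 42"),
            ("line", 0, 0, 100, 100)]

def render_locally(commands):
    """Interpret the commands on the local side (here we just describe them)."""
    out = []
    for cmd, *args in commands:
        out.append(f"draw {cmd} {args}")
    return out

for step in render_locally(server_compute()):
    print(step)
```

The trade-off the talk mentions shows up directly: the local side stays simple and safe because it only ever interprets a fixed command vocabulary, but every interaction has to cross the network.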
So, all these approaches have pros and cons, but the various approaches should be complementary and non-exclusive. In fact, the best solution will probably be a mixture of all of the above.
Now I'll describe a particular research system which uses the scripting objects approach.
This is the Viola system that is being developed at O'Reilly and Associates. This system has the following interesting characteristics:
One, it has a relatively small core engine. A toolkit with the primitives coded in C.
Two, an extension scripting language for implementing and gluing together applications. For example, the ViolaWWW browser application is built using the Viola toolkit.
Three, program objects can be embedded into documents and the toolbar. For example, this is a little bookmark tool embedded into ViolaWWW's toolbar. This tool is linked to the document, and so comes and goes with the document.
This is a little mini chess board application that is published on the web server and instantiated locally by the interpreter. A nice property is that you can have high interactivity without incurring a lot of bandwidth -- you can move pieces around, the board can do simple checks for illegal moves, and it transmits only the essential movement information... As opposed to using the ISMAP feature and transmitting a new chess board picture for every move.
This way, you're minimizing the client-server traffic, and maximizing user interactivity.
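As a rough illustration of that saving (a hypothetical Python sketch, not the Viola chess object itself): the client checks a move locally before sending anything, and what it finally transmits is a few bytes of move notation rather than a fresh board image.

```python
# Hypothetical client-side move handling: validate locally, transmit only
# a tiny move message instead of a new board picture.

FILES = "abcdefgh"
RANKS = "12345678"

def is_on_board(square):
    """Cheap local sanity check the embedded object can run itself."""
    return len(square) == 2 and square[0] in FILES and square[1] in RANKS

def encode_move(frm, to):
    """Encode a move as a tiny text message, e.g. 'e2-e4'."""
    if not (is_on_board(frm) and is_on_board(to)):
        raise ValueError("illegal square; nothing transmitted")
    return f"{frm}-{to}"

message = encode_move("e2", "e4")
print(message, "->", len(message), "bytes on the wire")  # e2-e4 -> 5 bytes
```

Five bytes per move, versus re-sending an entire board image on every click as the ISMAP approach would require.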
This next example is a front-end application to a backend. And, the back-end is what actually does the computation and the drawing.
It's important to point out that no special modification was made to the browser to make these applications. This is possible because all the basic primitives are already in the interpreter engine, and the rest is put together using the scripting language.
The current security policy is pretty straightforward and somewhat limiting for the objects. It basically goes this way: all imported objects are marked as untrusted, and as such these imported objects have no system privileges, have no access to sub-interpreters, and cannot coerce other objects to execute scripts arbitrarily.
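That policy could be sketched like this (an illustrative Python model, not Viola's actual implementation; the operation names are invented): objects that arrived over the network default to untrusted, and the interpreter refuses privileged operations on their behalf.

```python
# Hypothetical model of the stated policy: imported objects are untrusted,
# so the interpreter denies them privileged operations.

PRIVILEGED = {"open_file", "spawn_sub_interpreter", "exec_in_other_object"}

class ScriptObject:
    def __init__(self, name, trusted=False):
        self.name = name
        self.trusted = trusted  # anything imported defaults to untrusted

    def request(self, operation):
        """Gatekeeper the interpreter runs before performing an operation."""
        if operation in PRIVILEGED and not self.trusted:
            raise PermissionError(f"{self.name}: '{operation}' denied (untrusted)")
        return f"{self.name}: '{operation}' allowed"

local_tool = ScriptObject("toolbar-tool", trusted=True)
web_applet = ScriptObject("web-applet")  # came in over the network

print(local_tool.request("open_file"))
try:
    web_applet.request("open_file")
except PermissionError as err:
    print(err)
```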
For efficiency, the "scripting" language is compiled to bytecode. Ideally, we want to make the language and interpreter fast enough such that we practically do not need to ship around platform dependent executables.
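The compile-once, interpret-bytecode idea can be shown with a toy sketch (again hypothetical -- this is not Viola's actual bytecode format): a tiny postfix expression is compiled to (opcode, argument) pairs once, and a small stack machine then executes the bytecode without re-parsing the source.

```python
# Toy bytecode pipeline: compile source tokens once, run the bytecode many
# times. Opcodes here are invented for illustration.

def compile_expr(tokens):
    """Compile postfix tokens like ['2', '3', '+'] into (opcode, arg) pairs."""
    code = []
    for t in tokens:
        if t.isdigit():
            code.append(("PUSH", int(t)))
        elif t == "+":
            code.append(("ADD", None))
        elif t == "*":
            code.append(("MUL", None))
        else:
            raise ValueError(f"unknown token {t!r}")
    return code

def run(code):
    """A minimal stack machine that evaluates the compiled bytecode."""
    stack = []
    for op, arg in code:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

bytecode = compile_expr(["2", "3", "+", "4", "*"])  # (2 + 3) * 4
print(run(bytecode))  # prints 20
```

The point of the design is that parsing cost is paid once at compile time, and the platform-independent bytecode is what gets interpreted, which is the same reason the talk suggests we may not need to ship platform-dependent executables at all.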
Another direction is adopting a componentware standard. That is, doing more research into industry standards, and seeing how to fit them into the web. For example, tweaking the Viola scripting language to be compliant with OpenDoc's Open Scripting Architecture.
This 'system resource manager and accounting' issue is basically more of the nitty-gritty of striking the balance between having security and still providing enough resources for the objects to be as powerful as possible. Resources like CPU usage, permission for objects to make network connections, things like that...
A vision is that the WWW will evolve into a system where the information coming through the network can be more than just document data, but may also include programs which help to view and manipulate the data.
And to make this transition as smooth as possible, one thing that we should be looking into is a better operating model than we have right now: a more extensible, non-monolithic, component-based model.