
Semantic Web Technologies

Semantic Web in the news

Submitted by timbl on Thu, 2008-03-27 16:43.

Well, the Semantic Web has been in the news a bit recently.

There was the buzz about Twine, a "Semantic Web company", getting another round of funding. Then Yahoo announced that it will pick up Semantic Web information from the Web and use it to enhance search. And now the Times online mis-states that I think "Google could be superseded". Sigh. The discussion was otherwise useful, largely about what the Semantic Web is and how it will affect people, but this misunderstanding ended up as the title of the blog post. In fact, as I recall, the conversation started with a question: if search engines were the killer app for the familiar Web of documents, what will be the killer app for the Semantic Web?

Text search engines are of course good for searching the text in documents, but the Semantic Web isn't text documents, it is data. It isn't obvious what the killer apps will be - there are many contenders. We know that the sort of query you do on data is different: the SPARQL standard defines a query protocol which allows application builders to query remote data stores. So that is one sort of query on data which is different from text search.
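(For the curious, here is a minimal sketch of what a query over the SPARQL protocol can look like from JavaScript. The endpoint URL is made up for the example; the FOAF terms are real, and the results come back in the standard SPARQL JSON results format.)

// A minimal sketch of the SPARQL protocol: send a query to an HTTP endpoint,
// get variable bindings back as JSON. The endpoint URL here is hypothetical.
const endpoint = "https://example.org/sparql";
const query = `
  PREFIX foaf: <http://xmlns.com/foaf/0.1/>
  SELECT ?name ?homepage WHERE {
    ?person foaf:name ?name ;
            foaf:homepage ?homepage .
  } LIMIT 10`;

async function runQuery() {
  const response = await fetch(endpoint + "?query=" + encodeURIComponent(query), {
    headers: { Accept: "application/sparql-results+json" }
  });
  const results = await response.json();
  // Each row of bindings maps variable names to RDF terms.
  for (const row of results.results.bindings) {
    console.log(row.name.value, row.homepage.value);
  }
}

runQuery();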

One thing to always remember is that the Web of the future will have BOTH documents and data. The Semantic Web will not supersede the current Web. They will coexist. The techniques for searching and surfing the different aspects will be different but will connect. Text search engines don't have to go out of fashion.

The "Google will be superseded" headline is an unfortunate misunderstanding. I didn't say it. (We have, by the way, asked it to be fixed. One can, after all, update a blog to fix errors, and this should be appropriate. Ian Jacobs wrote an email, left voice mail, and tried to post a reply to the blog, but the reply did not appear on the blog - moderated out? So we tried.)

Now of course, as the name of The Times was once associated with a creditable and independent newspaper :-), the headline was picked up and elaborated on by various well-meaning bloggers. So the blogosphere, which one might hope to be the great safety net under the conventional press, in this case just amplified the error.

I note that here the blogosphere was misled by an online version of a conventional organ. There are many who worry about the inverse, that decent material from established sources will be drowned beneath a tide of low-quality information from less creditable sources.

The Media Standards Trust is a group which has been working with the Web Science Research Initiative (I'm a director of WSRI) to develop ways of encoding the standards of reporting that a piece of information purports to meet: "This is an eye-witness report"; or "This photo has not been massaged apart from: cropping"; or "The author of the report has no commercial connection with any products described"; and so on. Like Creative Commons, which lets you mark your work with a licence, the project involves representing social dimensions of information. And it is another Semantic Web application.

In all this Semantic Web news, though, the proof of the pudding is in the eating. The benefit of the Semantic Web is that data may be re-used in ways unexpected by the original publisher. That is the value added. So when a Semantic Web start-up either feeds data to others who reuse it in interesting ways, or itself uses data produced by others, then we start to see the value of each bit increased through the network effect.

So if you are a VC funder or a journalist and some project is being sold to you as a Semantic Web project, ask how it gets extra re-use of data, by people who would not normally have access to it, or in ways for which it was not originally designed. Does it use standards? Is it available in RDF? Is there a SPARQL server?

A great example of Semantic Web data which works this way is Linked Data. There is a growing mass of interlinked public data, much of it promoted by the Linked Open Data project. There is an upcoming Linked Data workshop on this at the WWW 2008 Conference in April in Beijing, and another on June 17-18 in New York at the Linked Data Planet Conference. Linked data comes alive when you explore it with a generic data browser like the Tabulator. It also comes alive when you make mashups out of it. (See Playing with Linked Data, Jamendo, Geonames, Slashfacet and Songbird; Using Wikipedia as a database). It should be easier to make those mashups by just pulling RDF (maybe using RDFa or GRDDL) or using SPARQL, rather than having to learn a new set of APIs for each site and each application area.
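(As a rough illustration of "just pulling RDF", the sketch below dereferences a thing's URI with an Accept header asking for RDF, instead of learning a per-site API. DBpedia's URI for Berlin is used as the example; any Linked Data URI works the same way, subject to the browser's cross-domain rules.)

// Linked Data style: ask the thing's own URI for machine-readable data.
// Content negotiation does the rest; no site-specific API is needed.
async function fetchRDF(uri) {
  const response = await fetch(uri, {
    headers: { Accept: "application/rdf+xml" } // or text/turtle
  });
  return response.text(); // RDF ready to hand to a parser or data browser
}

fetchRDF("http://dbpedia.org/resource/Berlin")
  .then(rdf => console.log(rdf.slice(0, 500)));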

I think there is an important "double bus" architecture here, in which there are separate markets for the raw data and for the mashed up data. Data publishers (e.g., government departments) just produce raw data now, and consumer-facing sites (e.g., soccer sites) mash up data from many sources. I might talk about this a bit at WWW 2008.

So in scanning new Semantic Web news, I'll be looking out for re-use of data. The momentum around Linked Open Data is great and exciting -- let us also make sure we make good use of the data.

Accountability Appliances: What Lawyers Expect to See - Part III (User Interface)

I've written in the last two blogs about how lawyers operate in a very structured environment. This will have a tremendous impact on what they'll consider acceptable in a user interface. They might accept something which seems a bit like an outline or a form, but years of experience tell me that they will rail at anything code-like.

For example, we see

:MList a rdf:List .

and automatically read

"MList" is the name of a list written in rdf

Or,

air:pattern {
    :MEMBER air:in :MEMBERLIST.
}


and know that we are asking our system to look for a pattern in the data in which a particular "member" is in a particular list of members. Perhaps because learning the law already means learning to read, speak, and think in another language, most lawyers look at lines like those above and see no meaning.

Our current work-in-progress produces output that includes:


bjb reject bs non compliant with S9Policy 1

Because

phone record 2892 category HealthInformation

Justify

bs request instruction bs request content
type Request
bs request content intended beneficiary customer351
type Benefit Action Instruction
customer351 location MA
xphone record 2892 about customer351



Nearly every output item is a hotlink to something which provides definition, explanation, or derivation. Much of it is in "Tabulator", the cool tool that aggregates just the bits of data we want to know.

From a user-interface-for-lawyers perspective, this version of output is an improvement over our earlier ones because it removes a lot of things programmers do to solve computation challenges. It removes colons and semi-colons from places they're not commonly used in English (i.e., at the beginning of a term) and mostly uses words that are known to the general population. It also parses "humpbacks" - the programmers' traditional concatenation of a string of words - back into separate words. And it replaces hyphens and underscores - also used for concatenation - with blank spaces.

At last week's meeting, we talked about the possibility of generating output which simulates short English sentences. These might be stilted but would be most easily read by lawyers. Here's my first attempt at the top-level template:

 

Issue: Whether the transactions in [TransactionLogFilePopularName] {about [VariableName] [VariableValue]} comply with [MasterPolicyPopularName]?

Rule: To be compliant, [SubPolicyPopularName] of [MasterPolicyPopularName] requires [PatternVariableName] of an event to be [PatternValue1].

Fact: In transaction [TransactionNumber] [PatternVariableName] of the event was [PatternValue2].

Analysis: [PatternValue2] is not [PatternValue1].

Conclusion: The transactions appear to be non-compliant with [SubPolicyPopularName] of [MasterPolicyPopularName].



This seems to me approximately correct in the context of requests for the appliance to reason over millions of transactions with many sub-rules. A person seeking an answer from the system would create the Issue question. The Issue question is almost always going to ask whether some series of transactions violated a super-rule and often will have a scope limiter (e.g., in regards to a particular person or within a date scope or by one entity), denoted here by {}.

From the lawyer perspective, the interesting part of the result is the finding of non-compliance or possible non-compliance. So, the remainder of the output would be generated to describe only the failure(s) in a pattern-matching for one or more sub-rules. If there's more than one violation, the interface would display the Issue once and then the Rule to Conclusion steps for each non-compliant result.

I tried this out on a lawyer I know. He insisted it was unintelligible when the []'s were left in but said it was manageable when he saw the same text without them.


For our Scenario 9, Transaction 15, an idealized top level display would say:


Issue: Whether the transactions in Xphone's Customer Service Log about Person Bob Same comply with MA Disability Discrimination Law?

Rule: To be compliant, Denial of Service Rule of MA Disability Discrimination Law requires reason of an event to be other than disability.

Fact: In transaction Xphone Record 2892 reason of the event was Infectious Disease.

Analysis: Infectious disease is not other than disability.

Conclusion: The transactions appear to be non-compliant with Denial of Service Rule of MA Disability Discrimination Law.



Each one of the bound values should have a hotlink to a Tabulator display that provides background or details.
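(As a very rough sketch of how such a display might be assembled, the fragment below fills the Issue/Rule/Fact/Analysis/Conclusion template from a set of bound values. The object and field names are invented for illustration; they are not the appliance's actual data model.)

// Hypothetical bound values for one non-compliant transaction.
const finding = {
  logName: "Xphone's Customer Service Log",
  scope: "Person Bob Same",
  masterPolicy: "MA Disability Discrimination Law",
  subPolicy: "Denial of Service Rule",
  patternVariable: "reason",
  requiredValue: "other than disability",
  transaction: "Xphone Record 2892",
  actualValue: "Infectious Disease"
};

// The Issue line is shown once; one Rule-to-Conclusion block appears per violation.
function renderIssue(f) {
  return `Issue: Whether the transactions in ${f.logName} about ${f.scope} comply with ${f.masterPolicy}?`;
}

function renderViolation(f) {
  return [
    `Rule: To be compliant, ${f.subPolicy} of ${f.masterPolicy} requires ${f.patternVariable} of an event to be ${f.requiredValue}.`,
    `Fact: In transaction ${f.transaction} ${f.patternVariable} of the event was ${f.actualValue}.`,
    `Analysis: ${f.actualValue} is not ${f.requiredValue}.`,
    `Conclusion: The transactions appear to be non-compliant with ${f.subPolicy} of ${f.masterPolicy}.`
  ].join("\n");
}

console.log(renderIssue(finding));
console.log(renderViolation(finding));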



Right now, we might be able to produce:


Issue: Whether the transactions in Xphone's Customer Service Log about Betty JB reject Bob Same comply with MA Disability Discrimination Law?

Rule: To be non-compliant, Denial of Service Rule of MA Disability Discrimination Law requires REASON of an event to be category Health Information.

Fact: In transaction Xphone Record 2892 REASON of the event was category Health Information.

Analysis: category Health Information is category Health Information.

Conclusion: The transactions appear to be non-compliant with Denial of Service Rule of MA Disability Discrimination Law.




This example highlights a few challenges.

1) It's possible that only failures of policies containing comparative matches (e.g., :v1 sameAs :v2; :v9 greaterThan :v3; :v12 withinDateRange :v4) are legally relevant. This needs more thought.

2) We'd need to name every sub-policy or have a default called UnnamedSubPolicy.

3) We'd need to be able to translate statute numbers to popular names and have a default instruction to include the statute number when no popular name exists.

4) We'd need some taxonomies (e.g., infectious disease is a sub-class of disability).

5) In a perfect world, we'd have some way to trigger a couple of alternative displays. For example, it would be nice to be able to trigger one of two rule structures: either one that says a rule requires a match or one that says a rule requires a non-match. The reason for this is that if we always have to use the same structure, about half of the outputs will be very stilted and cause the lawyers to struggle to understand.

6) We need some way to deal with cases the system can't reason about. If the law requires the reason to be disability and the system doesn't know whether health information is the same as or different from disability, then it ought to be able to produce an analysis that says something along the lines of "The relationship between Health Information and disability is unknown" and produce a conclusion that says "Whether the transaction is compliant is unknown." If we're reasoning over millions of transactions there are likely to be quite a few of these, and they ought to be presented after the non-compliant ones, as sketched below.
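(Here is a minimal sketch of that three-way outcome. The compare() function is a stand-in for the reasoner and whatever taxonomies it has loaded; it answers true, false, or null when the relationship is unknown.)

// Toy comparison for the sketch: exact equality matches, one known mismatch,
// and everything else is unknown to the system.
function compare(actual, required) {
  if (actual === required) return true;
  if (actual === "Infectious Disease" && required === "other than disability") return false;
  return null; // the system cannot reason about this pair
}

// Classify one transaction as compliant, non-compliant, or unknown.
function classify(requiredValue, actualValue, compare) {
  const match = compare(actualValue, requiredValue);
  if (match === null) {
    return { status: "unknown",
             analysis: `The relationship between ${actualValue} and ${requiredValue} is unknown.`,
             conclusion: "Whether the transaction is compliant is unknown." };
  }
  return match
    ? { status: "compliant",
        analysis: `${actualValue} is ${requiredValue}.`,
        conclusion: "The transaction appears to be compliant." }
    : { status: "non-compliant",
        analysis: `${actualValue} is not ${requiredValue}.`,
        conclusion: "The transaction appears to be non-compliant." };
}

// Non-compliant results first, then the unknowns, then the compliant ones.
const order = { "non-compliant": 0, unknown: 1, compliant: 2 };
const results = [
  classify("other than disability", "Infectious Disease", compare),
  classify("other than disability", "Health Information", compare)
].sort((a, b) => order[a.status] - order[b.status]);

console.log(results);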

 

 

Accountability Appliances: What Lawyers Expect to See - Part I

Submitted by kkw on Wed, 2008-01-02 12:59.

Just before the holidays, Tim suggested I blog about "what lawyers expect to see" in the context of our accountability appliances projects. Unfortunately, being half-lawyer, my first response is that maddening answer of all lawyers - "it depends." And, worse, my second answer is - "it depends upon what you mean by 'see'". Having had a couple of weeks to let this percolate, I think I can offer some useful answers.

Conceptually, what does the lawyer expect to see? The practice of law has a fundamental dichotomy. The law is a world of intense structure -- the minutiae of sub-sub-sub-parts of legal code, the precise tracking of precedents through hundreds of years of court decisions, and so on. But the lawyers valued most highly are not those who are most structured. Instead, it is those who are most creative at manipulating the structure -- conjuring compelling arguments for extending a concept or reading existing law in just enough of a different light to convince others that something unexpected supersedes something expected. In our discussions, we have concluded that an accountability appliance we build now should address the former and not the latter.

For example, a lawyer could ask our accountability appliance if a single sub-rule had been complied with: "Whether the federal Centers for Disease Control was allowed to pass John Doe's medical history from its Epidemic Investigations Case Records system to a private hospital under the Privacy Act Routine Use rules for that system?" Or, he could ask a question which requires reasoning over many rules. Asking "Whether the NSA's data mining of telephone records is compliant with the Privacy Act?" would require reasoning over the nearly thirty sub-rules contained within the Privacy Act and would be a significant technical accomplishment. Huge numbers of hours are spent to answer these sorts of questions and the automation of the more linear analysis would make it possible to audit vastly higher numbers of transactions and to do so in a consistent manner.

If the accountability appliance determined that a particular use was non-compliant, the lawyer could not ask the system to find a plausible exception somewhere in all of law. That would require reasoning, prioritizing, and de-conflicting over possibly millions of rules -- presenting challenges that range from transcribing all the rules into a processable structure to creating reasoning technology that can efficiently handle such a volume. Perhaps the biggest challenge, though, is the ability to analogize. The great lawyer draws on everything he's ever seen or heard and assimilates it into the new situation to his client's benefit. I believe that some of the greatest potential of the semantic web is in the ability to make comparisons -- I've been thinking about a "what's it like?" engine -- but this sort of conceptual analogizing seems still a ways in the future.

 

Stay tuned for two additional blogs:

Structurally, what does the lawyer expect to see? The common law (used in the UK, most of its former colonies including the US federal system, and most US states) follows a standard structure for communicating. Whether a lawyer is writing a motion or a judge is writing a decision, there is a structure embedded within all of the verbiage. Each well-formed discussion includes five parts: issue, rule, fact, analysis, and conclusion.

Physically, what does the lawyer expect to see? At the simplest level, lawyers are expecting to see things in terms they recognize and without unfamiliar distractions; even the presence of things like curly brackets or metatags will cause most to insist that the output is unreadable. Because there is so much information, visualization tools present opportunities for presentations that will be intuitively understood.

And:

The 1st Lawyer to Programmer/Programmer to Lawyer Dictionary! Compliance, auditing, privacy, and a host of other topics now have lawyers and system developers interacting regularly. As we've worked on DIG, I've noticed how the same words (e.g., rules, binding, fact) have different meanings.

Giant Global Graph

Submitted by timbl on Wed, 2007-11-21 18:45.


Well, it has been a long time since my last post here. So many topics, so little time. Some talks, a couple of Design Issues articles, but no blog posts. To dissipate the worry of expectation of quality, I resolve to lower the bar. More about what I had for breakfast.

So the Graph word has been creeping in. BradFitz talks of the Social Graph, as does Alex Iskold, who discusses social graphs and network theory in general and points out that users want to own their own social graphs. He also points out that the Internet and the Web are examples of graphs. So what's with the Graph word?

Maybe it is because Net and Web have been used. For perfectly good things .. but different things.

The Net we normally use as short for Internet, which is the International Information Infrastructure. Al Gore promoted the National Information Infrastructure (NII) presumably as a political pragma at the time, but clearly it became International. So let's call it III. Let's think about the Net now as an invention which made life simpler and more powerful. It made it simpler because, instead of having to navigate phone lines from one computer to the next, you could write programs as though the net were just one big cloud, where messages went in at your computer and came out at the destination one. The realization was, "It isn't the cables, it is the computers which are interesting". The Net was designed to allow the computers to be seen without having to see the cables.

Simpler, more powerful. Obvious, really.

Programmers could write at a more abstract level. Also, there was re-use of the connections, in that, as the packets flowed, a cable which may have been laid for one purpose now got co-opted for all kinds of uses which the original users didn't dream of. And users of the Net, the III, found that they could connect to all kinds of computers which had been hooked up for various reasons, sometimes now forgotten. So the new abstraction gave us more power, and added value by enabling re-use.

The word Web we normally use as short for World Wide Web. The WWW increases the power we have as users again. The realization was "It isn't the computers, but the documents which are interesting". Now you could browse around a sea of documents without having to worry about which computer they were stored on. Simpler, more powerful. Obvious, really.

Also, it allowed unexpected re-use. People would put a document on the web for one reason, but it would end up being found by people using it in completely different ways. Two delights drove the Web: one of being told by a stranger your Web page has saved their day, and the other of discovering just the information you need and for which you couldn't imagine someone having actually had the motivation to provide it.

So the Net and the Web may both be shaped as something mathematicians call a Graph, but they are at different levels. The Net links computers, the Web links documents.

Now, people are making another mental move. There is realization now, "It's not the documents, it is the things they are about which are important". Obvious, really.

Biologists are interested in proteins, drugs, genes. Businesspeople are interested in customers, products, sales. We are all interested in friends, family, colleagues, and acquaintances. There is a lot of blogging about the strain, and total frustration that, while you have a set of friends, the Web is providing you with separate documents about your friends. One in facebook, one on linkedin, one in livejournal, one on advogato, and so on. The frustration that, when you join a photo site or a movie site or a travel site, you name it, you have to tell it who your friends are all over again. The separate Web sites, separate documents, are in fact about the same thing -- but the system doesn't know it.

There are cries from the heart (e.g. The Open Social Web Bill of Rights) for my friendship, that relationship to another person, to transcend documents and sites. There is a "Social Network Portability" community. It's not the Social Network Sites that are interesting -- it is the Social Network itself. The Social Graph. The way I am connected, not the way my Web pages are connected.

We can use the word Graph, now, to distinguish from Web.

I called this graph the Semantic Web, but maybe it should have been the Giant Global Graph! Any worse than WWWW? ;-) Now, the "Semantic Web" term has been established for a long time, so I'm not proposing to change it. But let's think about the graph which it is. (Footnote: "Graph" also happens to be the word the RDF specifications use, but that is by the way. While an XML parser creates a DOM tree, an RDF parser creates an RDF graph in memory.)

So, if only we could express these relationships, such as my social graph, in a way that is above the level of documents, then we would get re-use. That's just what the graph does for us. We have the technology -- it is Semantic Web technology, starting with RDF, OWL, and SPARQL. Not magic bullets, but the tools which allow us to break free of the document layer. If a social network site uses a common format for expressing that I know Dan Brickley, then any other site or program (when access is allowed) can use that information to give me a better service. Un-manacled to specific documents.

I express my network in a FOAF file, and that is a start of the revolution. I blogged on FOAF files earlier, before the major open SNS angst started. The data in a FOAF file can be read by other applications. Photo-sharing, travel sites, sites which accept your input because you are a part of the graph.

The less inviting side of sharing is losing some control. Indeed, at each layer --- Net, Web, or Graph --- we have ceded some control for greater benefits.

People running Internet systems had to let their computer be used for forwarding other people's packets, and connecting new applications they had no control over. People making web sites sometimes tried to legally prevent others from linking into the site, as they wanted complete control of the user experience, and they would not link out as they did not want people to escape. Until after a few months they realized how the web works. And the re-use kicked in. And the payoff started blowing people's minds.

Letting your data connect to other people's data is a bit about letting go in that sense. It is still not about giving to people data which they don't have a right to. It is about letting it be connected to data from peer sites. It is about letting it be joined to data from other applications.

It is about getting excited about connections, rather than nervous.


In the short, what-can-I-code-up-this-afternoon-to-fix-this term, it is about other sites following the lead of my.opera.com, livejournal, advogato, and so on (list) also exporting a public RDF URI for their members, with whatever information the person would like to share. Right now, this blog re-uses the FOAF data linked to us to fight spam.

In the long term vision, thinking in terms of the graph rather than the web is critical to us making best use of the mobile web, the zoo of wildly differing devices which will give us access to the system. Then, when I book a flight it is the flight that interests me. Not the flight page on the travel site, or the flight page on the airline site, but the URI (issued by the airlines) of the flight itself. That's what I will bookmark. And whichever device I use to look up the bookmark, phone or office wall, it will access a situation-appropriate view of an integration of everything I know about that flight from different sources. The task of booking and taking the flight will involve many interactions. And all throughout them, that task and the flight will be primary things in my awareness, the websites involved will be secondary things, and the network and the devices tertiary.


I'll be thinking in the graph. My flights. My friends. Things in my life. My breakfast. What was that? Oh, yogourt, granola, nuts, and fresh fruit, since you ask.

tabulator maps in Argentina

Submitted by connolly on Mon, 2006-08-07 11:39.

My Spanish is a little rusty, but it looks like inktel is having fun with the tabulator's map support too.

tags pending: geo, tabulator

An Introduction and a JavaScript RDF/XML Parser

Submitted by dsheets on Mon, 2006-07-17 15:02.

My name is David Sheets. I will be a sophomore at MIT this fall. I like to be at the intersection of theory and practice.

This summer, I am working as a student developer on the Tabulator Project in the Decentralized Information Group at MIT's CSAIL. My charge has been to develop a new RDF/XML parser in JavaScript with a view to a JavaScript RDF library. I am pleased to report that I have finished the first version of the new RDF/XML parser.

Before this release, the only available RDF/XML parser in JavaScript was Jim Ley's parser.js. This parser served the community well for quite a while but fell short of the needs of the Tabulator Project. Most notably, it didn't parse all valid RDF/XML resources.

To rectify this, work on a new parser was begun. The result that is being released today is a JavaScript class that weighs in at under 400 source lines of code and 2.8K gzip compressed (12K uncompressed). For maximum utility, a parser should be small, standards-compliant, widely portable, and fast.

To the best of my knowledge, RDFParser is fully compliant with the RDF/XML specification. The parser passes all of the positive parser test cases from the W3C. This was tested using jsUnit -- a unit testing framework similar to jUnit but for JavaScript. To run the automated tests against RDFParser, you can follow the steps here. This means the parser supports features such as xml:base, xml:lang, RDF Collections, XML literals, and so forth. If it's in the specification, it should be supported. An important point to note is that this parser, due to speed concerns, is non-validating. Additionally, RDFParser has been speed-optimized, resulting in code that is slightly less readable.

The new parser is not as portable as the old parser at this time. It has only been tested in Firefox 1.5 but should work in any browser that supports the DOM Level 2 specification.

RDFParser runs at a speed similar to Jim Ley's parser. One can easily construct example RDF/XML files that run faster on one parser or another. I took five files that the tabulator might come across in day-to-day use and I ran head-to-head benchmarks between the two parsers.
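(For readers who want to reproduce that kind of comparison, here is a rough outline of a timing harness. The parse functions in the commented-out calls are placeholders, since the two parsers are invoked slightly differently; plug in whatever entry points you use.)

// Rough benchmark outline: average wall-clock time over several parse runs.
function timeParse(label, parseFn, doc, runs) {
  const start = Date.now();
  for (let i = 0; i < runs; i++) {
    parseFn(doc); // parse the same RDF/XML DOM document each time
  }
  const elapsed = Date.now() - start;
  console.log(label + ": " + (elapsed / runs).toFixed(1) + " ms per parse");
}

// Example usage (placeholder function names):
// timeParse("new RDFParser", parseWithNewParser, someRdfXmlDocument, 20);
// timeParse("Jim Ley's parser", parseWithOldParser, someRdfXmlDocument, 20);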

Parse time is highly influenced by how compact the serialization is. The more nested the RDF/XML serialization, the more scope frames must be created to track features from the specification. The less nested it is, the fewer steps are needed to traverse the DOM and the more triples are produced per DOM element.

Planned in the next release of RDFParser is a callback/continuation system so that the parser can yield in the middle of a parse run and allow other important page features to run.

API documentation for RDFParser included in the Tabulator 0.7 release is available.

Finally, I'd be happy to hear from you if you have questions, comments, or ideas regarding the RDFParser or related technologies.

Links on the Semantic Web

Submitted by timbl on Fri, 2005-12-30 15:04.

On the web of [x]HTML documents, the links are critical. Links are references to 'anchors' in other documents, and they use URIs which are formed by taking the URI of the document and adding a # sign and the local name of the anchor. This way, local anchors get a global name.

On the Semantic Web, links are also critical. Here, the local name, and the URI formed using the hash, refer to arbitrary things. When a semantic web document gives information about something, and uses a URI formed from the name of a different document, like foo.rdf#bar, then that's an invitation to look up that document if you want more information about it. I'd like people to use them more, and I think we need to develop algorithms for deciding when to follow Semantic Web links as a function of what we are looking for.

To play with semantic web links, I made a toy semantic web browser, Tabulator. Toy, because it is hacked up in Javascript (a change from my usual Python) to experiment with these ideas. It is AJAR - Asynchronous Javascript and RDF. I started off with Jim Ley's RDF Parser and added a little data store. The store understands the minimal OWL ([inverse] functional properties, sameAs) needed to smush nodes representing the same thing together, so it doesn't matter if people use many different URIs for the same thing, which of course they can. It has a simple index and supports simple query. The API is more or less the one which cwm had been tending toward in Python.
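(To illustrate what smushing means, here is a toy version of the idea, not the Tabulator's actual store code: when two URIs are declared owl:sameAs, or share an inverse functional property value, the store maps them to one canonical node so their statements merge. Prefixed names are used as plain strings to keep the sketch short, and the example.org URI is made up.)

// Toy smushing sketch, not the real Tabulator store.
// A lookup table maps every URI to a canonical representative.
const canon = {};
function find(uri) {
  while (canon[uri] && canon[uri] !== uri) uri = canon[uri];
  return uri;
}
function sameAs(a, b) {
  // Declare that two URIs denote the same thing; merge their representatives.
  canon[find(a)] = find(b);
}

const triples = [];
function add(s, p, o) {
  triples.push([find(s), p, find(o)]); // store canonical nodes only
}

sameAs("http://example.org/people#timbl",
       "http://www.w3.org/People/Berners-Lee/card#i");
add("http://example.org/people#timbl", "foaf:name", "Tim Berners-Lee");
add("http://www.w3.org/People/Berners-Lee/card#i", "foaf:homepage", "http://www.w3.org/");

// Both statements now hang off the same node, whichever URI was used.
// (A real store would also re-canonicalize statements added before the sameAs.)
console.log(triples);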

Then, with the DOM and CSS and Ecmascript standards bookmarked, the rest was just learning the difference between Javascript and Python. Fun, anyway.

The result .. insert a million disclaimers... experimental, work in progress, only runs on Firefox for no serious reason, not accessible, too slow, etc ... at least is a platform for looking at Semantic Web data in a fairly normal way, but also following links. A blue dot indicates something which could be downloaded. Download some data before exploring the data in it. Note that as you download multiple FOAF files for example the data from them merges into the unified view. (You may have to collapse and re-expand an outline).

Here is the current snag, though. Firefox security does not allow a script from a given domain to access data from any other domain, unless the scripts are signed, or made into an extension. And looking for script signing tools (for OS X?) led me to dead ends. So if anyone knows how to do that, let me know. Until I find a fix for that, the power of following links -- which is that they can potentially go anywhere -- is alas not evident!
