Athan Services
Univ. Bologna
Freie Uni. Berlin
Uni. Aberdeen

Legal Norms Modelling with LegalRuleML (OASIS)

This tutorial presents the principles of OASIS LegalRuleML applied to the legal domain and discusses why, how, and when LegalRuleML is well-suited for modelling norms. To provide a frame of reference, we present a comprehensive list of requirements for devising rule interchange languages that capture the peculiarities of legal rule modelling in support of legal reasoning. The tutorial comprises syntactic, semantic, and pragmatic foundations, a LegalRuleML primer, a comparison with other related approaches, as well as use case examples from the legal domain.

The tutorial includes the following topics:

  • defeasibility of rules and defeasible logic;
  • deontic operators (e.g., obligations, permissions, prohibitions, rights);
  • temporal management of the rules and temporal expressions within the rules;
  • qualification of norms (constitutive, prescriptive, etc.);
  • jurisdiction of norms;
  • isomorphism between rules and natural language normative provisions;
  • identification of constituent parts of the norm;
  • authorial tracking of the rules;
  • how to model alternative formalizations of norms.

The various concepts and constructs will be illustrated with examples taken from concrete, real-life use cases.
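The defeasibility of rules listed among the topics above can be pictured with a tiny interpreter. This is only an illustrative sketch (LegalRuleML itself is an XML interchange format, not an inference engine), and the park-entry norms are hypothetical:

```python
def applicable(rule, facts):
    """A rule is applicable when all of its premises hold."""
    return all(p in facts for p in rule["premises"])

def negate(c):
    """Negation as a tagged tuple: negate('p') == ('not', 'p') and vice versa."""
    return c[1] if isinstance(c, tuple) else ("not", c)

def derive(rules, facts):
    """Add each applicable rule's conclusion unless a conflicting
    higher-priority rule is also applicable (it then defeats this one)."""
    fired = [r for r in rules if applicable(r, facts)]
    conclusions = set(facts)
    for r in fired:
        defeated = any(s["conclusion"] == negate(r["conclusion"])
                       and s["priority"] > r["priority"] for s in fired)
        if not defeated:
            conclusions.add(r["conclusion"])
    return conclusions

# Hypothetical norms: a general permission defeated by a more specific prohibition.
rules = [
    {"premises": ["vehicle"], "conclusion": "may_enter_park", "priority": 1},
    {"premises": ["vehicle", "motorized"],
     "conclusion": ("not", "may_enter_park"), "priority": 2},
]
print(derive(rules, {"vehicle", "motorized"}))
```

For a motorized vehicle, the specific prohibition defeats the general permission; for a plain vehicle, the permission stands.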

[Book chapter] [Slides]

Freie Universität Berlin,
Vienna University of Technology, Austria

Higher-Order Modal Logics: Automation and Applications

After defining the syntax and (possible worlds) semantics of some higher-order modal logics, we show that they can be embedded into classical higher-order logic by systematically lifting the types of propositions, making them depend on a new atomic type for possible worlds. This approach allows several well-established automated and interactive reasoning tools for classical higher-order logic to be applied also to modal higher-order logic problems. Moreover, meta-reasoning about the embedded modal logics also becomes possible. Finally, we illustrate how our approach can be useful for reasoning on the Web and with expressive ontologies. We also illustrate reasoning about prominent Semantic Web logics and sketch a possible solution for the handling of inconsistent data.
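The lifting can be sketched in its standard form (not a quote from the chapter): with $\iota$ the new atomic type of possible worlds, $o$ the type of booleans, $\sigma \equiv \iota \to o$ the lifted type of propositions, and $R$ the accessibility relation, the connectives and modalities become:

```latex
\begin{align*}
  \neg_\sigma\, \varphi        &= \lambda w.\ \neg (\varphi\, w) \\
  \varphi \wedge_\sigma \psi   &= \lambda w.\ \varphi\, w \wedge \psi\, w \\
  \Box\, \varphi               &= \lambda w.\ \forall v.\ R\, w\, v \rightarrow \varphi\, v \\
  \Diamond\, \varphi           &= \lambda w.\ \exists v.\ R\, w\, v \wedge \varphi\, v \\
  \mathrm{valid}\, \varphi     &= \forall w.\ \varphi\, w
\end{align*}
```

A modal formula is then valid exactly when its lifted counterpart holds at all worlds, which is a plain higher-order statement that classical provers can attack directly.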

[Book chapter] [Slides]

Université Paris Sud, France
Vienna University of Technology, Austria

Ontology-Mediated Query Answering With Data-Tractable Description Logics

Recent years have seen an increasing interest in ontology-mediated query answering, in which the semantic knowledge provided by an ontology is exploited when querying data. Adding an ontology has several advantages (e.g. simplifying query formulation, integrating data from different sources, providing more complete answers to queries), but it also makes the query answering task more difficult. In this tutorial, we will give a brief introduction to ontology-mediated query answering using description logic (DL) ontologies. Our focus will be on DLs for which query answering scales polynomially in the size of the data, as these are best suited for applications requiring large amounts of data. We will describe the challenges that arise when evaluating different natural types of queries in the presence of such ontologies, and we will present algorithmic solutions based upon two key concepts, namely, query rewriting and saturation. The lecture will conclude with an overview of recent results and active areas of ongoing research.
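The query-rewriting concept mentioned above can be illustrated on a toy DL-Lite-style example (the concept and role names, and the axioms, are hypothetical): an atomic query is rewritten into a union of queries using the ontology's inclusions, and the union is then evaluated directly on the unmodified data.

```python
# Inclusions A ⊑ B written as (sub, sup); ("exists", "teaches") denotes ∃teaches.
axioms = [("Professor", "Teacher"), (("exists", "teaches"), "Teacher")]

data = {"Professor": {"ann"}, "teaches": {("bob", "cs101")}}

def rewrite(concept, axioms):
    """All concepts whose instances the ontology entails to be `concept`s."""
    result = {concept}
    changed = True
    while changed:
        changed = False
        for sub, sup in axioms:
            if sup in result and sub not in result:
                result.add(sub)
                changed = True
    return result

def answer(concept, axioms, data):
    answers = set()
    for c in rewrite(concept, axioms):
        if isinstance(c, tuple):          # ∃role: any subject of the role
            answers |= {s for (s, _) in data.get(c[1], set())}
        else:
            answers |= data.get(c, set())
    return answers

print(answer("Teacher", axioms, data))    # ann via Professor, bob via teaches
```

Because only the query is rewritten, the data can stay in an ordinary database; this is the key to polynomial scaling in data size.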

[Book chapter] [Slides]

University of New Brunswick, Canada

PSOA RuleML: Integrated Object-Relational Data and Rules

Suppose you are working on a project using SQL queries over relational data and then proceeding to SPARQL queries over graph data to be used as a metadata repository. Or, vice versa, on a project complementing SPARQL with SQL for querying an evolving mass-data store. Or, on a project using SQL and SPARQL from the beginning. In all of these projects, object-relational interoperability issues may arise.

Indeed, both on intranets and on the Internet, most data is stored in one of two paradigms: as relations (predicate-centered), e.g. in the SQL-queried Deep Web, or as graphs (object-centered), e.g. in the SPARQL-queried Semantic Web. This divide has also led to separate relational vs. object-centered rule paradigms for processing the data (e.g., for inferencing/reasoning with it). Projects involving both relations and graphs are thus impeded by the paradigm boundaries, from modeling to implementation. These boundaries can be bridged or even dissolved by languages combining the relational and object-centered paradigms for data as well as rules.

A heterogeneous combination (an amalgamation), as in F-logic and W3C RIF, allows atomic formulas in both the relational and object-centered language paradigms for data atoms as well as rules, possibly mixed within the same rule.

A homogeneous combination (an integration), as in Positional-Slotted, Object-Applicative (PSOA) RuleML, blends the relational and object-centered atomic formulas themselves into a uniform kind of atom, allowing paradigm-internal transformation of data as well as rules. Data, i.e. ground facts, include (table-row-like) relational atoms with positional arguments and (graph-node-like) object-centered atoms with an Object IDentifier (OID) and slotted arguments (for the node's outgoing labeled edges). Rules can use non-ground (variable-containing) versions of all of these atoms anywhere in their premises and conclusions.

Generally, the object-relational integration is achieved by permitting a relation application to have an OID ‒ typed by the relation as its class ‒ and, orthogonally, the relation's arguments to be positional or slotted. The resulting positional-slotted, object-applicative (psoa) atoms can be employed as

  1. predicate-centered, positional atoms without an OID and with an ‒ ordered ‒ sequence of arguments,
  2. object-centered, positional atoms (shelves) with an OID and with a sequence of arguments,
  3. predicate-centered, slotted atoms without an OID and with an ‒ unordered ‒ multi-set of slots (each being a pair of a name and a filler),
  4. object-centered, slotted atoms (frames) with an OID and with a multi-set of slots.

In the Family Example, the psoa atoms applied as binary predicates in the premise (1.) derive a psoa atom used as a typed frame in the conclusion (4.), whose OID can be generated on-the-fly for each invocation of the conclusion-existential rule.
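The four kinds of atoms enumerated above can be pictured as one uniform structure with an optional OID, an ordered argument sequence, and an unordered set of slots. The encoding below is a hypothetical Python illustration, not PSOA RuleML syntax:

```python
def psoa(pred, oid=None, args=(), slots=None):
    """A uniform psoa-style atom: predicate/class, optional OID,
    positional arguments (ordered), and slots (unordered name-filler pairs)."""
    return {"pred": pred, "oid": oid, "args": tuple(args),
            "slots": frozenset((slots or {}).items())}

rel   = psoa("teaches", args=("ann", "cs101"))                  # 1. relational atom
shelf = psoa("course", oid="c1", args=("cs101", "fall"))        # 2. shelf
slotd = psoa("teaches", slots={"who": "ann", "what": "cs101"})  # 3. slotted atom
frame = psoa("course", oid="c1",                                # 4. frame
             slots={"code": "cs101", "term": "fall"})
```

All four values are instances of the same structure, which is the point of the integration: rules can pattern-match any of them uniformly.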

PSOA RuleML is a Horn-logic language (optionally, with equality) that reduces the number of RIF-BLD terms by generalizing its positional and slotted (named-argument) terms as well as its frame terms and class memberships. It can be extended in various ways, e.g. with Negation As Failure (NAF), augmenting RuleML's MYNG configurator for the syntax and adapting the RIF-FLD-specified NAF dialects for the semantics. Conversely, PSOA RuleML is being developed as a module that is pluggable into larger (RuleML) logic languages, thus making them likewise object-relational.

This tutorial first reviews object-relational combinations with a focus on the PSOA RuleML integration. It then explores the integration semantics with systematic examples in the presentation syntaxes of F-logic, RIF, and RuleML/POSL, supported by Grailog visualization and serialized in RuleML/XML. Next, it presents a use case of bidirectional SQL-PSOA-SPARQL transformation (schema/ontology mapping). The tutorial then formalizes the first-order model-theoretic semantics, blending (OID-over-)slot distribution, as in RIF, with integrated psoa terms, as in RuleML. Finally, it surveys the PSOATransRun implementations spearheaded by Gen Zou, translating PSOA RuleML knowledge bases and queries to TPTP (PSOA2TPTP) and Prolog (PSOA2Prolog).

[Book chapter] [Slides] [Introducing RuleML]

Polytechnic University of Bari, Italy

Recommender Systems and Linked Open Data

The World Wide Web is moving from a Web of hyper-linked documents to a Web of linked data. Thanks to the Semantic Web technological stack and to the more recent Linked Open Data (LOD) initiative, a vast amount of RDF data has been published in freely accessible datasets connected with each other to form the so-called LOD cloud. As of today, tons of RDF data are available in the Web of Data, but only a few applications really exploit their potential power. The availability of such data is certainly an opportunity to feed personalized information access tools such as recommender systems. We will show how to plug Linked Open Data into a recommendation engine in order to build a new generation of LOD-enabled applications.
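A minimal sketch of the idea: item features harvested from a LOD source (the movie names and property values below are hypothetical stand-ins for RDF property values one might fetch from, say, DBpedia) feed a content-based recommender that ranks unseen items by feature overlap.

```python
def jaccard(a, b):
    """Set overlap in [0, 1]: |a ∩ b| / |a ∪ b|."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical feature sets, as if extracted from LOD property values.
item_features = {
    "MovieA": {"genre:SciFi", "director:X"},
    "MovieB": {"genre:SciFi", "director:Y"},
    "MovieC": {"genre:Drama", "director:Z"},
}

def recommend(liked, items, k=1):
    """Rank the other items by feature similarity to the liked item."""
    scores = {i: jaccard(items[liked], f)
              for i, f in items.items() if i != liked}
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("MovieA", item_features))
```

Richer LOD-enabled recommenders replace the hand-written feature sets with SPARQL queries and the Jaccard measure with graph-aware similarities, but the plumbing is the same.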

[Book chapter] [Slides]

Stanford University, US

FOL Herbrand Semantics / Herbrand Manifesto

The traditional semantics for First Order Logic (sometimes called Tarskian semantics) is based on the notion of interpretations of constants. Herbrand semantics is an alternative semantics based directly on truth assignments for ground sentences rather than interpretations of constants. Herbrand semantics is simpler and more intuitive than Tarskian semantics; consequently, it is easier to teach and learn. Moreover, it is more expressive. For example, while it is not possible to finitely axiomatize integer arithmetic with Tarskian semantics, this can be done easily with Herbrand semantics. The downside is a loss of some common logical properties, such as compactness and completeness. However, there is no loss of inferential power: anything that can be proved according to Tarskian semantics can also be proved according to Herbrand semantics. In this presentation, we define Herbrand semantics, look at the implications for research on logic and rule systems and automated reasoning, and assess the potential for popularizing logic.
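The "truth assignments for ground sentences" view can be made concrete: a model is just a set of true ground atoms, and quantifiers range over the Herbrand universe of ground terms (finite here). The tuple-based formula encoding is a hypothetical illustration:

```python
def atom(pred, *args): return ("atom", (pred,) + args)
def forall(body):      return ("forall", body)   # body: ground term -> formula
def implies(f, g):     return ("implies", f, g)

universe = {"a", "b"}                          # Herbrand universe of ground terms
truth = {("p", "a"), ("q", "a"), ("q", "b")}   # the true ground atoms

def holds(f, truth, universe):
    """Evaluate a formula against a truth assignment on ground atoms."""
    tag = f[0]
    if tag == "atom":
        return f[1] in truth
    if tag == "implies":
        return (not holds(f[1], truth, universe)) or holds(f[2], truth, universe)
    if tag == "forall":                        # quantify over the Herbrand universe
        return all(holds(f[1](t), truth, universe) for t in universe)

# ∀x. p(x) → q(x) under the assignment above:
print(holds(forall(lambda x: implies(atom("p", x), atom("q", x))), truth, universe))
```

No interpretation of constants is ever consulted: the assignment on ground atoms settles every sentence.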

Georg Gottlob, Michael Morak and Andreas Pieris
University of Oxford

Recent Advances in Datalog +/-

This tutorial, which is a continuation of the tutorial “Datalog and Its Extensions for Semantic Web Databases” presented at the Reasoning Web 2012 Summer School, discusses recent advances in the Datalog+/- family of languages for knowledge representation and reasoning. These languages extend plain Datalog with key modeling features such as existential quantification (signified by the “+” symbol), and at the same time apply syntactic restrictions to achieve decidability of ontological reasoning and, in some relevant cases, also tractability (signified by the “-” symbol). In this tutorial, we first introduce the main Datalog+/- languages that are based on the well-known notion of guardedness. Then, we discuss how these languages can be extended with important features such as disjunction and default negation.
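The existential quantification that the "+" adds is usually explained via the chase: a rule with an existential head is applied by inventing a fresh labelled null. The sketch below uses a hypothetical rule person(X) -> ∃Y father(X, Y), person(Y), and bounds the rounds because the chase need not terminate:

```python
import itertools

def chase(facts, max_rounds=3):
    """Naive bounded chase for the single hypothetical existential rule above."""
    facts = set(facts)
    fresh = (f"_n{i}" for i in itertools.count())   # labelled nulls _n0, _n1, ...
    for _ in range(max_rounds):
        new = set()
        for (pred, *args) in facts:
            if pred == "person":
                x = args[0]
                # Apply the rule only if no father is known yet for x.
                if not any(f[0] == "father" and f[1] == x for f in facts | new):
                    y = next(fresh)
                    new |= {("father", x, y), ("person", y)}
        if not new:
            break
        facts |= new
    return facts

result = chase({("person", "ann")})
```

Syntactic restrictions such as guardedness are exactly what tame this potentially infinite process enough to make reasoning decidable.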

[Book chapter] [Slides part 1] [Slides part 2]

Coherent Knowledge Systems, USA
Michael Kifer
Stony Brook University, USA
Paul Fodor
Stony Brook University, USA

Powerful Practical Semantic Rules in Rulelog: Fundamentals and Recent Progress

In this tutorial, we cover the fundamental concepts and recent progress in the area of Rulelog, a leading approach to knowledge representation and reasoning with semantic rules. Rulelog is expressively powerful and computationally affordable, and has capable, efficient implementations. A large subset of Rulelog is in draft as an industry standard to be submitted to RuleML and W3C as a dialect of the Rule Interchange Format (RIF).

"Textual" Rulelog, in which Rulelog is closely combined with natural language processing by using Rulelog to interpret and generate English, is a key area of ongoing research and development (R&D).

Rulelog extends well-founded declarative logic programs (LP) with:

  • strong meta-reasoning, including higher-order syntax (HiLog), reification, and rule IDs (within the logical language)
  • explanations of inferences
  • efficient higher-order defaults, including "argumentation rules"
  • flexible probabilistic reasoning, including evidential probabilities and tight integration with inductive machine learning ‒ a key area of recent technology progress and ongoing R&D
  • bounded rationality, including restraint ‒ a "control knob" to ensure that the computational complexity of inference is worst-case polynomial time
  • "omni-directional" disjunction in the head (of a rule)
  • existential quantifiers (mixed with universal quantifiers) in the head
  • sound tight integration of first-order-logic ontologies including OWL
  • frame syntax, similar to RDF triples and object-orientation
  • and several other lesser features, including aggregation operators and integrity constraints

Implementation techniques for Rulelog inferencing include transformational compilations and extensions of LP "tabling" algorithms. "Tabling" here includes smart caching of conclusions, and incremental revision of the cached conclusions when rules are dynamically added or deleted. "Tabling" is thus a mixture of backward-direction and forward-direction inferencing. There are both open-source and commercial tools for Rulelog that vary in their range of expressive completeness and user convenience. They are interoperable with databases and spreadsheets, and complement inductive machine learning and natural language processing techniques.
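The caching half of tabling can be sketched in a few lines: backward-direction proof search over propositional Horn rules whose conclusions are stored in a table, so repeated subgoals are looked up rather than re-derived. The rules are hypothetical:

```python
rules = {                      # head: list of alternative bodies
    "a": [["b", "c"]],
    "b": [["d"]],
    "c": [["d"]],
    "d": [[]],                 # a fact: empty body
}

table = {}                     # the table of already-proved (or failed) goals

def prove(goal):
    """Backward chaining with memoization: each goal is derived at most once."""
    if goal in table:
        return table[goal]     # served from the table, no re-derivation
    result = any(all(prove(g) for g in body) for body in rules.get(goal, []))
    table[goal] = result       # cache the conclusion for later subgoals
    return result

print(prove("a"))              # d is derived once for b, then reused for c
```

The incremental-revision half of tabling (updating the cache when rules change) requires dependency tracking on top of this and is omitted here.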

The most complete system today for Rulelog is Ergo from Coherent Knowledge Systems. Using Ergo, we will illustrate that Rulelog technology has applications in a wide range of tasks and domains in business, government, and science. We will tour areas of recent application progress that include: legal/policy compliance, e.g., in financial services; education/tutoring; and e-commerce marketing. This tutorial will provide a comprehensive and up-to-date introduction to these developments and to the fundamentals of the key technologies and outstanding research issues involved.

[Book chapter] [Slides]

University of Calabria, Italy
Francesco Ricca
University of Calabria, Italy

Answer Set Programming: A Tour from the Basics to Advanced Development Tools and Industrial Applications

Answer Set Programming (ASP) is a powerful rule-based language for knowledge representation and reasoning that has been developed in the field of logic programming and non-monotonic reasoning.
More than twenty years after the introduction of ASP, the theoretical properties of the language are well understood and the solving technology has become mature for practical applications.
In this tutorial, we first present the basics of the ASP language, and we then concentrate on its usage for knowledge representation and reasoning in real-world contexts. In particular, we report on the development of some industry-level applications with the ASP system DLV, and we illustrate two advanced development tools for ASP, namely ASPIDE and JDLV, which speed up and simplify the implementation of applications.
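The answer set semantics itself fits in a short brute-force check for a tiny propositional program (illustration only; real systems like DLV ground and solve far more cleverly). A candidate set is an answer set when it equals the minimal model of the program's reduct with respect to that candidate:

```python
from itertools import combinations

atoms = {"a", "b"}
program = [("a", [], ["b"]),    # a :- not b.   (head, positive body, negative body)
           ("b", [], ["a"])]    # b :- not a.

def minimal_model(reduct):
    """Least fixpoint of a negation-free program given as (head, pos_body) pairs."""
    m = set()
    while True:
        new = {h for (h, pos) in reduct if all(p in m for p in pos)} - m
        if not new:
            return m
        m |= new

def answer_sets(program, atoms):
    found = []
    for r in range(len(atoms) + 1):
        for cand in map(set, combinations(sorted(atoms), r)):
            # Reduct: drop rules whose negative body is contradicted by cand.
            reduct = [(h, pos) for (h, pos, neg) in program
                      if not any(n in cand for n in neg)]
            if minimal_model(reduct) == cand:
                found.append(cand)
    return found

print(answer_sets(program, atoms))
```

The two-rule program above has exactly the two answer sets {a} and {b}, which is the classic even-loop-through-negation example.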

[Book chapter] [Slides]

Insight @ NUI Galway, Ireland

IoT-Intelligence: From Data Streams to Actionable Knowledge

A fast-growing torrent of data is being created by companies, social networks, mobile phones, smart homes, public transport vehicles, healthcare devices, and other modern infrastructures. Unlocking the potential hidden in this torrent of data would open unprecedented opportunities to improve our daily lives.
Advances in the Internet of Things (IoT), Semantic Web and Linked Data research and standardization have already established formats and technologies for representing, sharing and re-using knowledge on the Web, but key challenges to go from data to actionable knowledge include the ability to synthesize knowledge (and therefore answers) with expressive inference that can capture decision processes in a scalable way. In this talk I will cover formal and practical aspects of stream reasoning for the Web of Data, focusing on the interplay between semantic query processing and rule-based inference, and I will showcase how these techniques can be applied in a smart city scenario.
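A minimal sketch of windowed stream reasoning in the smart-city spirit (the event schema and the alert rule are hypothetical): a rule fires over a sliding window of the most recent events, here raising an alert when the window holds two congestion readings for the same road.

```python
from collections import deque

WINDOW = 3                          # sliding window over the last 3 events

def process(stream):
    """Alert when the window contains two 'congested' readings for the
    road of the current event."""
    window, alerts = deque(maxlen=WINDOW), []
    for event, road in stream:
        window.append((event, road))
        congested = [r for (e, r) in window if e == "congested"]
        if event == "congested" and congested.count(road) >= 2:
            alerts.append(road)
    return alerts

stream = [("congested", "A1"), ("clear", "B2"),
          ("congested", "A1"), ("congested", "B2")]
print(process(stream))
```

Real stream reasoners combine such windowing with semantic query processing (e.g. continuous SPARQL dialects) and rule-based inference over background knowledge, but the window-then-match loop is the core.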

[Book chapter] [Slides part 1] [Slides part 2]

University of Potsdam, Germany

Towards Embedded Answer Set Solving

This tutorial introduces advanced problem solving techniques addressing the growing range of applications of Answer Set Programming (ASP) in practice; its particular focus lies on recent techniques needed for embedding ASP in complex software environments. The tutorial starts with an introduction to the essential formal concepts of ASP, needed for understanding its semantics and solving technology. In fact, ASP solving rests on two major components: a grounder turning specifications in ASP's modeling language into propositional logic programs, and a solver computing a requested number of answer sets of the program. We illustrate ASP's grounding techniques and describe the major algorithms used in the ASP grounder gringo 4. This is accompanied by an introduction to the new ASP language standard. The remainder of the tutorial is dedicated to using ASP in conjunction with Python for modeling complex reasoning scenarios. This involves an introduction to the API of clingo 4, an ASP system extending clasp and gringo with control capabilities expressible in Python (and Lua). We illustrate this by developing a sample board game and sketch more sophisticated usages in robotics and preference handling. All ASP systems involved are freely available from http://potassco.sourceforge.net.
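The grounding step can be illustrated in miniature (real grounders such as gringo are vastly more refined, instantiating only derivable atoms): a rule with variables is expanded over the known constants into propositional instances. The rule reach(X,Y) :- edge(X,Y) and the facts are hypothetical:

```python
from itertools import product

facts = {("edge", "a", "b"), ("edge", "b", "c")}
constants = {c for (_, *args) in facts for c in args}   # {"a", "b", "c"}

def ground_rule(head, body, variables):
    """Naively enumerate every substitution of constants for the variables."""
    grounded = []
    for values in product(sorted(constants), repeat=len(variables)):
        sub = dict(zip(variables, values))
        inst = lambda atom: (atom[0],) + tuple(sub.get(t, t) for t in atom[1:])
        grounded.append((inst(head), [inst(a) for a in body]))
    return grounded

rule = (("reach", "X", "Y"), [("edge", "X", "Y")], ["X", "Y"])
grounded = ground_rule(*rule)
print(len(grounded))   # 3 constants, 2 variables -> 9 ground instances
```

The propositional program produced here is what a solver like clasp then searches for answer sets; smart grounding keeps this instantiation from exploding.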

[Book chapter - starts on page 4] [Slides]


All About Fuzzy Description Logics and Applications

The aim of this talk is to present a detailed, self-contained and comprehensive account of the state of the art in representing and reasoning with structured fuzzy knowledge. Fuzzy knowledge comes into play whenever one has to deal with concepts for which membership is a matter of degree (e.g., the degree of illness is a function of, among others, the body temperature). Specifically, we address the case of the fuzzy variants of conceptual languages of the OWL 2 family and discuss some implementation-related issues. We conclude by illustrating some applications and their use in the area of symbolic machine learning.
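Graded membership and two standard t-norms used in fuzzy DLs (Gödel and Łukasiewicz) can be shown in a few lines; the fever membership function below is a hypothetical piecewise-linear stand-in mirroring the body-temperature example above:

```python
def fever_degree(temp_c):
    """Piecewise-linear membership: no fever below 37 °C, full fever at 39 °C."""
    return max(0.0, min(1.0, (temp_c - 37.0) / 2.0))

def goedel_and(x, y):
    """Gödel t-norm: conjunction as the minimum of the degrees."""
    return min(x, y)

def lukasiewicz_and(x, y):
    """Łukasiewicz t-norm: conjunction as max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

d = fever_degree(38.0)
print(goedel_and(d, 0.8), lukasiewicz_and(d, 0.8))
```

Fuzzy DL reasoners differ precisely in which t-norm family they fix, since the choice changes which subsumptions hold and to what degree.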

[Book chapter] [Slides]

University of Miami, USA

The TPTP World of Automated Theorem Proving

The TPTP World is a well-known and established infrastructure that supports research, development, and deployment of Automated Theorem Proving (ATP) systems for classical logics. The data, standards, and services provided by the TPTP World have made it increasingly easy to build, test, and apply ATP technology. This tutorial reviews the core features of the TPTP World, describes key service components of the TPTP World and how to use them, presents some successful applications, and gives an overview of planned developments.
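The standard input format of the TPTP World is easy to emit programmatically. The sketch below builds a stock syllogism in TPTP FOF syntax (the annotated-formula form fof(name, role, formula)., with ! [X] : for universal quantification and => for implication); any TPTP-compliant ATP system could be run on the resulting string:

```python
def fof(name, role, formula):
    """One TPTP annotated formula: fof(name, role, formula)."""
    return f"fof({name}, {role}, {formula})."

problem = "\n".join([
    fof("all_men_mortal", "axiom", "! [X] : (man(X) => mortal(X))"),
    fof("socrates_man", "axiom", "man(socrates)"),
    fof("socrates_mortal", "conjecture", "mortal(socrates)"),
])
print(problem)
```

Generating problems this way is how many applications drive ATP systems through TPTP World services such as SystemOnTPTP.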

[Book chapter] [Slides]