Available courses

  • MODULARITY, as studied for many years in software engineering, provides mechanisms for easy and flexible reuse, generalization, structuring, maintenance, design patterns, and comprehension. Applied to ontology engineering, modularity is central not only to reducing the complexity of understanding ontologies, but also to facilitating ontology maintenance and ontology reasoning.
    Recent research on ontology modularity shows substantial progress in foundations of modularity, techniques of modularization and modular development, distributed reasoning and empirical evaluation. These results provide a foundation for further research and development.
    The workshop follows a series of successful events that have been an excellent venue for practitioners and researchers to discuss the latest work and current problems, and is this time organised as a satellite workshop of ESSLLI 2011 (week 2), following an introductory ESSLLI course on notions of modularity in ontologies (week 1).
    For more information, see the workshop homepage.
  • This ESSLLI 2011 course presents and discusses five problems of deontic logic and normative reasoning in computer science. The five problems were distilled from the 10 problems of the preceding ESSLLI 2010 course. The problems are (1) the shortcomings of standard deontic logic, (2) contrary-to-duty reasoning, (3) norms on time and action, (4) the dynamics of normative systems and normative states, and (5) deontic modalities in relation to BDI and other modalities.

  • In this course we present a framework for fully automatically generating deduction calculi from the specification of a propositional logic. The logic of interest is assumed to be defined by a high-level specification of its formal semantics. The aim is to turn this into a set of inference rules which forms a sound and complete deduction calculus for the logic. Ideally we also want to be able to guarantee termination if the logic is decidable. Automated synthesis of calculi is a very challenging problem, and in general it is of course not possible to turn every specification of a logic into a sound, complete and terminating deduction calculus. It is, however, possible to solve the problem for a large number of logics.

    This course will focus on the tableau calculus synthesis framework we introduced earlier. The calculi generated are ground semantic tableau calculi that operate on labelled formulae, where the labels represent individuals or states in the underlying models.
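
    For a flavour of what such synthesized rules look like, here is a minimal illustration of our own (not material from the course): the semantic clause for the diamond modality -- w satisfies <>phi iff some R-successor v of w satisfies phi -- yields a labelled rule that, from w : <>phi, introduces a fresh label v together with R(w, v) and v : phi. A Python sketch of one rule application, with representation choices of our own:

        # Minimal sketch: applying a labelled diamond rule of the kind a synthesized
        # tableau calculus would contain. Names such as expand_diamond are illustrative.
        from itertools import count

        _fresh = count(1)

        def expand_diamond(labelled_formula):
            """From  w : <>phi  derive  R(w, v)  and  v : phi  for a fresh label v."""
            w, (op, phi) = labelled_formula
            assert op == 'dia'
            v = 'v%d' % next(_fresh)            # fresh label = a new state in the model
            return [('R', w, v), (v, phi)]      # accessibility fact + labelled subformula

        print(expand_diamond(('w0', ('dia', 'p'))))   # [('R', 'w0', 'v1'), ('v1', 'p')]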

    The course is an introductory course in logic and computation. It requires basic knowledge of:

    • modal logic, description logic, or other non-classical logics, and
    • first-order logic

    Knowledge of tableau-based reasoning is not essential but will be an advantage.

    Website of the course

  • In philosophy there exists an enormous body of expertise on the meaning of conditionals. However, these theories generally do not explain how the meaning of these sentences is related to their form. At present we can observe a growing interest in this linguistic question. But due to the complexity of the topic, the theoretical landscape looks very heterogeneous: there is a lot of variation with respect to the particular problems addressed and the formalisms used.

    The goal of this course is to structure and analyze the landscape of compositional approaches to the semantics of conditionals. We will start by developing a clear idea of what such an approach should achieve. We will formulate concrete constraints for the syntactic input of such a theory, as well as for the semantic output that it should produce. With these constraints at hand, we will discuss and evaluate the recent literature in the field.

    Keywords: conditional sentences, syntax-semantics interface, compositional semantics, formal semantics, language typology

  • This course offers an introduction to the art and science of computing meanings of natural language expressions. It covers the basics of formal semantics and supplements them with implementations in order to take you from the theory of meaning construction to actual meaning computation.
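
    To give a first taste of what "actual meaning computation" can look like (a toy sketch of our own, not the course's implementation), meanings can be encoded as Python objects and combined by function application:

        # Toy Montague-style composition: [[every]]([[student]])([[sleeps]]).
        # The tiny lexicon below is an illustrative assumption.
        student = {'alice', 'bob'}                 # [[student]] as a set of individuals
        sleeps  = {'alice', 'bob', 'carol'}        # [[sleeps]]  as a set of individuals

        def every(noun):                           # [[every]] = lambda P. lambda Q. P is a subset of Q
            return lambda vp: noun <= vp

        print(every(student)(sleeps))              # True: "Every student sleeps"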

  • Formal logic is widely regarded as a foundation for specification, verification and reasoning about multi-agent systems. In recent years, a new group of modal logics has emerged. These logics focus on the notion of strategy, and try to address the abilities of agents in environments that involve many autonomous entities acting in parallel.

    In this course, we present an overview of results and algorithms for decision problems in propositional strategic logics, with a special focus on Alternating-time Temporal Logic (ATL). We begin with an introduction to basic concepts and logics; then we present existing results for the satisfiability/validity problem; finally, we discuss model checking and show how its complexity changes depending on how the details of the problem are defined. The rest of the course is devoted to more sophisticated models of agent behavior, including rationality assumptions, bounded resources, and imperfect information scenarios.


    Website for this course: http://www.in.tu-clausthal.de/index.php?id=esslli2011

  • This will be an introductory, hands-on course in which students will (a) learn about the scope of reference resolution phenomena in the world's languages, focusing on those that are more difficult for machine processing, (b) learn how reference resolution has been treated in natural language processing systems, (c) be introduced to a new theory and methodology of configuring reference resolution engines, and (d) configure their own reference resolution engines for a language of their choice.

  • Linguists have increasingly become interested in experimental research as a supplement to traditional analytical methods. This course is a practical introduction to experimental methods for linguists. We will examine studies in experimental syntax, semantics, and pragmatics, to lay a foundation in the key issues: experimental design, methods, and data analysis. We will address the big questions (e.g., how can experiments further my research program?), the medium-sized questions (e.g., how do I design an experiment to test a theory?), and the small questions (e.g., what software should I use?). The goal is to provide sufficient background for attendees to (a) understand how to integrate experiments into their research, (b) know how to locate resources to help them develop those experiments, and (c) evaluate the rapidly growing body of experimental research being published in the field.

  • This course is a mild introduction to Formal Language Theory for students with little or no background in formal systems. The motivation is computational linguistics, and the presentation is geared towards NLP applications, with extensive linguistically motivated examples. Still, mathematical rigor is not compromised, and students are expected to have a formal grasp of the material by the end of the course.

    Tentative outline

    • Set theory: sets, relations, strings, languages, etc.
    • Regular languages and regular expressions
    • Languages vs. computational machinery
    • Finite state automata
    • Finite state transducers
    • Context free grammars and languages
    • The Chomsky hierarchy
    • Weak and strong generative capacity, grammar equivalence
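
    By way of illustration (a sketch of our own, not part of the official outline), a deterministic finite state automaton for the regular language of strings over {a, b} with an even number of a's takes only a few lines of Python:

        # A DFA: two states tracking the parity of a's seen so far; 'even' is accepting.
        transitions = {('even', 'a'): 'odd',  ('even', 'b'): 'even',
                       ('odd',  'a'): 'even', ('odd',  'b'): 'odd'}

        def accepts(word):
            state = 'even'                          # start state
            for symbol in word:
                state = transitions[(state, symbol)]
            return state == 'even'                  # accepting states

        print(accepts('abba'), accepts('ab'))       # True False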
  • Generalized quantifier theory studies the possible meanings and the inferential power of quantifier expressions by logical means. The classical version was developed in the 1980s, at the interface of linguistics, mathematics and philosophy. Until now, advances in "classical" generalized quantifier theory have mainly focused on definability questions and their applications to linguistics (see Peters and Westerståhl 2006 for an overview). However, generalized quantifiers have also been studied from a psychological perspective (see, e.g., Moxey and Sanford 1993; Clark 1976). The lectures will survey some of the recently established links between generalized quantifier theory and cognitive science. In particular, we will be concerned with extending generalized quantifier theory with computational aspects in order to draw and empirically test psycholinguistic predictions. One major focus will be computational complexity and its interplay with "difficulty" as experienced by subjects asked to verify quantifier sentences.
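
    To make the computational angle concrete, here is a toy sketch of our own (not material from the lectures): the Aristotelian quantifiers can be verified by a single pass over a finite model, whereas "most" requires comparing cardinalities -- one intuition behind linking quantifiers to different machine models and to differences in verification difficulty.

        # Verifying "Q A are B" on a finite model, with A and B given as sets.
        def some(A, B):  return len(A & B) > 0
        def every(A, B): return A <= B
        def most(A, B):  return len(A & B) > len(A - B)    # requires counting and comparison

        poets, smokers = {'ann', 'bea', 'cem'}, {'ann', 'bea'}
        print(some(poets, smokers), every(poets, smokers), most(poets, smokers))   # True False True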

    Website of the course

  • This course will provide an introduction to graphs in the context of Natural Language Processing (NLP). The aim of the course is two-fold: first, we introduce the audience to the concept of a graph and its basic algorithms; second, we survey the different NLP tasks and see how graphs and graph algorithms are developed for and applied to each task. Each area will be briefly introduced and its main graph techniques described and discussed. The course is proposed after several years of workshops organized on this theme, namely the TextGraph workshop series, where much novel research on this interdisciplinary topic has been presented.
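
    As a small taste of the kind of algorithm involved (our own sketch; the graph and the TextRank-style use case are illustrative assumptions, not course material), here is a plain PageRank computation over a word co-occurrence graph, the core of centrality-based keyword extraction:

        # PageRank over an adjacency-list graph of co-occurring words.
        graph = {'logic': ['language', 'information'],
                 'language': ['logic', 'information'],
                 'information': ['logic', 'language', 'course'],
                 'course': ['information']}

        def pagerank(graph, damping=0.85, iterations=50):
            rank = {node: 1.0 / len(graph) for node in graph}
            for _ in range(iterations):
                rank = {node: (1 - damping) / len(graph)
                              + damping * sum(rank[m] / len(graph[m])
                                              for m in graph if node in graph[m])
                        for node in graph}
            return rank

        ranks = pagerank(graph)
        print(max(ranks, key=ranks.get))            # 'information' -- the best-connected word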

  • This course presents an in-depth examination of the linguistic, logical and cognitive representation of imprecision (the lack of precision or exactness) and approximation (inexactness that is nonetheless close enough to be useful).

  • Inquisitive semantics develops a new notion of semantic meaning that directly reflects the use of language to exchange information. The meaning of a sentence is not identified with its informative content, but rather with a proposal to update the common ground of a conversation in one or more ways. If a sentence proposes two or more alternative updates, then it is inquisitive, inviting other participants to choose between these alternative updates. If certain possible worlds are eliminated from the common ground by each of the proposed updates, then the sentence is informative. In this way, informative and inquisitive content are captured in a unified way, as two aspects of a single core notion of meaning. The aim of the course is to familiarize students and researchers with the framework, and to engage them in the further development of the logical-theoretical foundations, and the linguistic and computational applications. This is the perfect time for such engagement, since the fundamental building blocks of the framework are in place, the central research questions are clear, and the wide applicability of the framework can be illustrated with several concrete case studies. At the same time, many open questions remain and there is much room and demand for contributions from students and researchers in logic, linguistics, and computer science.
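
    To fix ideas, here is a toy rendering of the informal characterization above (our own sketch; the framework's official definitions are more refined), with the common ground and the proposed updates modelled as sets of possible worlds:

        # Toy versions of "inquisitive" and "informative", following the gloss above.
        def inquisitive(updates):
            return len(updates) >= 2                # proposes a genuine choice between alternatives

        def informative(updates, common_ground):
            survivors = set().union(*updates)       # worlds kept by at least one proposed update
            return bool(common_ground - survivors)  # some worlds are eliminated whichever update is chosen

        cg = {1, 2, 3, 4}
        whether_p = [{1, 2}, {3}]                   # two alternatives; world 4 is eliminated either way
        print(inquisitive(whether_p), informative(whether_p, cg))   # True True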

    Course website: https://sites.google.com/site/inquisitivesemantics/courses/Esslli-2011

  • Formal semantics explains how complex meanings may be derived from simpler meanings by rules of composition. Formal semantics helps us understand predication (e.g. the composition of a verbal element with its arguments) and modification (e.g. of a noun by an attributive adjective). It provides us with a tool for the analysis of sentential connectives. And it predicts scope ambiguities in quantificational constructions. This course aims to introduce the elementary techniques used in formal semantics applied to natural language. It is a prerequisite for reading (and understanding) more advanced literature in formal semantics.

  • The aggregation of individual judgments on logically interconnected propositions into one collective judgment has recently drawn attention in economics, law, philosophy, logic and computer science. Examples are the aggregation of individual judgments in juries, expert panels, legal courts, boards, and councils. Despite the apparent simplicity of the problem, seemingly reasonable aggregation procedures, such as propositionwise majority voting, cannot ensure a consistent collective judgment as the result of aggregation. This is the so-called discursive dilemma.
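
    A minimal worked example of the dilemma (our own sketch): three judges vote on p, q and the conjunction p-and-q; each individual judgment set is consistent, yet propositionwise majority voting accepts p and q while rejecting p-and-q.

        # The discursive dilemma under propositionwise majority voting.
        judges = [                                   # each row is an individually consistent judgment set
            {'p': True,  'q': True,  'p_and_q': True},
            {'p': True,  'q': False, 'p_and_q': False},
            {'p': False, 'q': True,  'p_and_q': False},
        ]

        majority = {issue: sum(j[issue] for j in judges) > len(judges) / 2
                    for issue in ('p', 'q', 'p_and_q')}
        print(majority)                              # {'p': True, 'q': True, 'p_and_q': False}
        print(majority['p_and_q'] == (majority['p'] and majority['q']))   # False: collectively inconsistent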

    The literature on judgment aggregation has been shaped by earlier work in social choice theory. Just like preference aggregation in social choice theory, judgment aggregation studies aggregation functions under specific conditions. The bottom line is that it has been shown that no aggregation function can satisfy a number of desirable properties at the same time. Moreover, in the rapidly growing literature, the aggregation problem has been generalized in a number of ways and several impossibility and possibility results have been proved.

    Computer scientists also face the problem of combining different and potentially conflicting sources of information. Recently, methods that originated in computer science have been applied to judgment aggregation and, on the other hand, judgment aggregation has obtained attention from computer scientists as a fruitful paradigm for framing problems stemming from, in particular, distributed artificial intelligence.

    The course will introduce the students to the main results in judgment aggregation and will outline the most recent directions of research.

  • Geometric models of meaning have become increasingly popular in natural language semantics and cognitive science. In contrast to standard symbolic models of meaning (e.g. Montague), which give a qualitative treatment of differences in meaning, geometric models can also account for quantitative differences, expressing degrees of similarity between meanings and giving an account of typicality and vagueness for words and phrases. In this course we will present new developments in this exciting research field. It is not assumed that every student has the necessary basic background in linear algebra: the first two days are planned to introduce students to this important field of applied mathematics. Further, the course discusses (i) distributional semantics and the problem of compositionality; (ii) a new theory of questions & answers using the very same algebra that underlies distributional semantics; (iii) several puzzles of bounded rationality and their solutions in terms of geometric models.
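
    As a minimal illustration of the quantitative side (our own sketch; the vectors are made up, not course data), degrees of similarity between word meanings are standardly measured as the cosine of the angle between their vectors:

        # Cosine similarity between toy distributional word vectors.
        import math

        def cosine(u, v):
            dot  = sum(x * y for x, y in zip(u, v))
            norm = math.sqrt(sum(x * x for x in u)) * math.sqrt(sum(y * y for y in v))
            return dot / norm

        cat, dog, car = [2.0, 1.0, 0.1], [1.8, 1.2, 0.2], [0.1, 0.3, 2.5]
        print(round(cosine(cat, dog), 2), round(cosine(cat, car), 2))   # 0.99 0.13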

    All the course material is available at http://www.blutner.de/esslli/index.html, with the exception of a password you need for reading some PDF files. Our strictly confidential password is 'freibier'.

  • For further information, please check the course website:

    http://lumiere.ens.fr/~dbonnay/files/conference/logicalconstants.htm

  • To create a language for programming entities capable of intelligent behaviour (`agents'), researchers and developers must address deep questions such as: what are the basic constituent parts of an intelligent agent; how should the agent `think' (e.g., which deliberation strategy should it employ -- should it plan a precise sequence of actions in advance or should it adopt an abstract plan with gaps `to be filled in later'); what relationship should there be between the agent's beliefs and its goals, etc. In seeking to address these questions, researchers have drawn heavily on formal models of agents and on agent logics, including epistemic logics, logics of action, dynamic logic, coalition logics, etc. For example, the development of agent programming languages such as AgentSpeak was heavily influenced by the BDI (Beliefs, Desires and Intentions) logics developed to understand what an agent's behaviour should be. These interactions have resulted in an extremely fruitful cross-fertilisation between work in logic and computation, and the application of logical techniques to address key practical issues such as the verification of agent programs (i.e., will an agent program meet the specification set out by its developers?). This course will address key topics in logics of agent programs including: the Belief Desire Intention model; an overview of agent programming languages based on the BDI model; the relationship between the operational semantics of BDI-based agent programming languages and logics for reasoning about agents' beliefs and intentions; and verification of agent programs using model checking and theorem proving. The course assumes some exposure to modal logic, but no prior knowledge of agent programming languages is required.

  • Logical approaches and methods are becoming increasingly popular and important in the modeling and analysis of multi-agent systems (MAS). A variety of rich logical systems have been introduced for capturing various aspects of MAS, including knowledge, communication, and strategic abilities. Logical systems and methods can be applied to formal specification and verification of elaborate properties of MAS, as well as to synthesis -- via constructive satisfiability testing -- of multi-agent systems satisfying such properties.

    This course is intended for a wide audience with only basic knowledge of modal and temporal logics. I will introduce and discuss some of the most important and popular families of logics for multi-agent systems: epistemic, dynamic epistemic, temporal-epistemic, and logics of strategic abilities. Eventually, I will introduce a general logical framework for modeling the knowledge, communication, actions and strategic abilities of agents and coalitions in MAS, and the dynamics of their interaction, and will discuss possible applications of that framework to verification and synthesis of communication protocols.

    The emphasis of the course will be on understanding the languages and semantics of these logics, and on applying them to model and specify properties of MAS. I will illustrate these with several well-known scenarios, such as card games, epistemic puzzles, the coordinated attack problem, etc.

  • Nowadays, logical theories in the guise of ontologies are designed for applications in bioinformatics, medicine, geography, linguistics and other areas. They are often based on expressive description logics (DLs), which are fragments of first-order logic with well-understood and well-implemented reasoning problems and procedures. Given the size of existing ontologies and the problems of parsing, storing, or reasoning over them, modularity has become important. It is comparable with modularity in software engineering, and is on its way to becoming equally well understood. In this course, we will provide an overview of the state of the art in modularity for ontologies. The course is meant to be a guide to the wealth of module notions, their properties and uses, as well as related concepts.
  • Vague predicates are expressions such as “red”, “tall”, “heap” or “many”, whose meaning does not allow us to draw a fixed and determinate boundary between cases to which these expressions apply and cases to which they do not apply.

    The aim of this course is to give a systematic introduction to logics for vagueness and to recent advances on the semantic treatment of vague predicates. The focus of the lectures will be on non-classical logics for first-order languages and on the comparison between three main frameworks: partial two-valued logics (super- and sub-valuationism), three-valued logics (LP and K3), and similarity-based logics (tolerant and strict semantics, Cobreros, Egré, Ripley, van Rooij 2010).
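
    For orientation, a small illustration of our own of the shared machinery behind two of these frameworks: K3 and LP both use the strong Kleene truth tables over the values 0, 1/2, 1 and differ only in which values are designated (K3: only 1; LP: 1/2 and 1), which is why a borderline instance of "p or not p" fails to come out designated in K3 but does in LP.

        # Strong Kleene evaluation over {0, 0.5, 1}; K3 and LP differ only in designated values.
        NOT = lambda x: 1 - x
        AND = min
        OR  = max

        def designated_k3(value): return value == 1       # K3: designated value is 1
        def designated_lp(value): return value >= 0.5      # LP: designated values are 0.5 and 1

        p = 0.5                                            # a borderline case of a vague predicate
        excluded_middle = OR(p, NOT(p))                    # evaluates to 0.5
        print(designated_k3(excluded_middle), designated_lp(excluded_middle))   # False True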

    The main originality of the course will be the emphasis put on the duality between the main logics definable in each of these frameworks. Our ambition is to show systematic correspondences and differences between them, in particular concerning the ways in which they either preserve classical logic or depart from it. The last lecture will confront experimental evidence on vagueness (Ripley 2009, Alxatib and Pelletier 2010, Serchuk et al. 2010, Egré, Gardelle, Ripley, in progress) with each of the frameworks examined in the course.

    --

    The course will be co-taught with Paul Egré and Robert van Rooij.

    Webpage for the course: http://paulegre.free.fr/ESSLLI_2011_Vagueness/index.html

  • The aim of this course is to present an up-to-date introduction to propositional proof complexity with emphasis on game-theoretic techniques, connections to parameterized complexity and DPLL algorithms. We will cover important proof systems, current knowledge about proof lengths in these systems, and implications for the performance of satisfiability algorithms.

    Proof complexity is a lively field at the intersection of logic and complexity theory that investigates proofs with the main aim of understanding the complexity of theorem-proving procedures. Most research in proof complexity concerns proving lower bounds on the lengths of proofs. One motivation for this research comes from the Cook-Reckhow program, which aims to separate NP and coNP (and therefore also P and NP) by showing super-polynomial lower bounds on the lengths of proofs in increasingly strong propositional proof systems. Another motivation comes from satisfiability algorithms. Most modern SAT solvers use refinements of DPLL procedures based on Resolution, and therefore lower bounds on the lengths of proofs have direct implications for the running time of these procedures.

    The last 30 years have seen tremendous progress in proof complexity: starting with Haken's exponential lower bound for the pigeonhole principle in Resolution, lower bounds were shown for many proof systems such as the Nullstellensatz system, Cutting Planes, Polynomial Calculus, or bounded-depth Frege systems. For all these proof systems we know exponential lower bounds on the lengths of proofs for concrete sequences of tautologies arising from natural propositional encodings of combinatorial statements.
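
    For concreteness, here is a sketch of our own (not course material) of the standard propositional encoding behind Haken's result: the negation of the pigeonhole principle PHP^{n+1}_n says that n+1 pigeons each sit in one of n holes with no hole shared -- an unsatisfiable CNF whose Resolution refutations require exponential size.

        # Clauses of the (negated) pigeonhole principle PHP^{n+1}_n.
        # A literal is a triple (pigeon, hole, polarity) standing for the variable x_{pigeon,hole}.
        def php_clauses(n):
            clauses = []
            for i in range(n + 1):                                    # every pigeon sits in some hole
                clauses.append([(i, j, True) for j in range(n)])
            for j in range(n):                                        # no hole holds two pigeons
                for i in range(n + 1):
                    for k in range(i + 1, n + 1):
                        clauses.append([(i, j, False), (k, j, False)])
            return clauses

        print(len(php_clauses(3)))    # 4 pigeon clauses + 3 * 6 hole clauses = 22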

    For proving these lower bounds, a number of generic approaches and general techniques have been developed. In this ESSLLI course we will highlight game-theoretic lower bound methods. Game-theoretic techniques have widespread applications in theoretical computer science and logic. In proof complexity, different types of 2-player games have been developed to show lower bounds on the lengths of proofs. These Prover-Delayer games have the conceptual advantage of transforming the negative task of showing the non-existence of a short proof into the positive task of exhibiting a good strategy for the Delayer. We will introduce both the game of Pudlak and Impagliazzo, which is applicable to tree-like proof systems, and the Pudlak game, which works for more powerful systems. Lower bounds previously obtained by more involved arguments, such as Haken's bound for Resolution or Ajtai's proof for bounded-depth Frege, admit very elegant reformulations in terms of these games.

    Parameterized proof complexity is a recent paradigm developed by Dantchev, Martin, and Szeider which transfers the highly successful approach of parameterized complexity to the study of proof lengths. In the course we will explain how the Prover-Delayer games for Resolution can be extended to study the complexity of proofs in a Resolution system in a setting arising from parameterized complexity. Using these games we will demonstrate lower bounds for the general dag-like Parameterized Resolution system on the pigeonhole principle and analyse a variant of the DPLL algorithm in the parameterized setting.

  • The course deals with digital encoding of language data, an increasingly important area due to the growing production and interchange of annotated language resources. Computational linguistics has experienced a shift towards machine learning and statistics-based approaches on the one hand, and towards empirical evaluation of experimental results on the other; for both, language resources are needed, and in order to use and produce them, an awareness of standards for their encoding is necessary.

  • There is now a growing body of research on formal algorithmic models of social procedures and interactions between rational agents. These models attempt to identify logical elements in our day-to-day social activities. When interactions are modelled as games, reasoning involves analysis of agents' long-term powers for influencing outcomes. Agents devise their respective strategies on how to interact so as to ensure maximal gain. In recent years, researchers have tried to devise logics and models in which strategies are "first class citizens", rather than unspecified means to ensure outcomes. Yet, these cover only basic models, leaving open a range of interesting issues, especially in games of imperfect information. Game models are also relevant in the context of system design and verification.

    In this course we will discuss research on logic and automata-theoretic models of games and strategic reasoning in multi-agent systems. We will get acquainted with the basic tools and techniques for this emerging area, and provide pointers to the exciting questions it offers.

  • Temporal, epistemic, and other multi-agent logics have been the focus of very active and fruitful development over the past 30 years, with many important applications to computer science, artificial intelligence and multi-agent systems.


    Deciding satisfiability of formulae of a given logical system is a central problem in Logic. While there have been many important theoretical developments and technical results on deciding validity and satisfiability in temporal, epistemic, and multi-agent logics, only some of them are really constructive and practically applicable, and that topic is currently in a stage of active development and expansion. The best-known such methods are automata-based and tableau-based methods for satisfiability testing and for model checking. These are closely related. The automata-based methods are more mechanized and more popular, but also more rigid and narrower in scope of application -- so far mainly used for linear time temporal logics -- while the tableau-based methods are, arguably, more flexible, more amenable to algorithmic optimization, and wider in scope of applications, but less popular and applied in practice so far.

    In this course I will develop systematically and illustrate with examples the incremental tableau-building methodology for constructive testing of satisfiability, applicable to a wide variety of logical systems. I will focus first on linear and branching time temporal logics, and then on multi-agent logics of (individual, common, and distributed) knowledge, time, and strategic abilities (of the type of the alternating-time temporal logic ATL). Also, I will briefly discuss how the tableau-based methodology can be applied to model checking in these logics.


    I will sketch proofs of termination, soundness, and completeness of the tableau methods, will discuss their complexity, possible optimizations, and will demonstrate how satisfying models can be extracted from open tableaux. Time and technology permitting, I will demonstrate some implemented and online available tools.


    The main objectives of the course will be to give the students practical skills of applying the methods presented in the course, as well as the theoretical understanding of the tableau-building methodology that would enable them to design tableau-based decision methods for other logics of their interest.

  • The course provides an introduction to the field of termination and complexity of term rewrite systems. Term rewriting is a conceptually simple but powerful abstract model of computation, with applications in automated theorem proving, compiler optimization, and declarative programming, to name a few.

    A fundamental property of rewrite systems is termination, the absence of infinite computations. Termination is an undecidable property, but many powerful techniques have been designed to ensure termination of rewrite systems. In recent years the emphasis has shifted towards techniques that can be automated, and several powerful tools that automatically establish termination are being developed.

    Once termination of a given rewrite system has been verified, perhaps even automatically, we can ready ourselves for the next question: How many rewrite steps are possible starting from a given term? This leads to investigations into the complexity of rewrite systems, which is measured through the maximal number of computation steps possible.
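
    A tiny illustration of our own of both questions: the rewrite system for addition on unary numerals, add(0, y) -> y and add(s(x), y) -> s(add(x, y)), is terminating, and counting the rewrite steps needed to normalize a term gives a first impression of its (linear) complexity.

        # Normalize add-terms over unary numerals and count the rewrite steps used.
        # Terms: ('0',), ('s', t), ('add', t1, t2); the two add-rules are given above.
        def normalize(term):
            if term[0] == '0':
                return term, 0
            if term[0] == 's':
                nf, n = normalize(term[1])
                return ('s', nf), n
            x, nx = normalize(term[1])                     # term = ('add', t1, t2)
            y, ny = normalize(term[2])
            if x == ('0',):                                # rule: add(0, y) -> y
                return y, nx + ny + 1
            nf, n = normalize(('add', x[1], y))            # rule: add(s(x), y) -> s(add(x, y))
            return ('s', nf), nx + ny + n + 1

        two, three = ('s', ('s', ('0',))), ('s', ('s', ('s', ('0',))))
        print(normalize(('add', two, three)))              # the numeral 5, reached in 3 rewrite steps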

  • Despite 40 years of research, quantification is once again a central topic of research at the interface between logic and language. Generalized Quantifier theory and Discourse Representation Theory are still the standard bearers when it comes to the representation of quantifier meaning. But a new frontier in quantification research is to account for the use of quantifiers, in addition to the meaning quantifiers contribute to sentence and discourse. The PUQOL workshop aims to bring together current research on quantification within semantics, logic, pragmatics, and cognitive science that addresses quantifier use within the perspective of model-theoretic semantics and pragmatics.

  • The aim of the course is to provide a survey of the most significant advances in philosophy of language, starting from the early days of Montague, Lewis or Kaplan, when philosophy of language and natural language semantics still formed a unified discipline, itself grounded in logic, up to the present day.

    We will focus on the topic of context-dependence, covering the standard accounts of indexicality (Kaplan and followers) and comparing them to some less standard accounts, such as two-dimensional update semantics (Stalnaker and followers) or situation theory (Barwise & Perry and followers). We will also look at certain forms of context-dependence that do not seem to be reducible to indexicality, and examine to what extent they motivate a departure from traditional truth-conditional approaches (as has been defended, on different grounds, within contextualist as well as relativist approaches to semantics).

  • We will take a look at syntax and semantics, maintaining a characterization of both by logical techniques. We will aim to present the main features of (intuitionistic) type logical categorial grammar. The approach is purely lexical and high level: a grammar is a compact attribution of syntactic and semantic properties to basic expressions. It is also computational: we will illustrate with the Prolog parser/theorem-prover CatLog.
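
    As a toy illustration of this lexical style of analysis (a Python sketch of our own, not CatLog itself): each word is assigned a category in the lexicon, and a single application rule combines adjacent categories.

        # Backward application in a toy categorial grammar:  B, B\A  =>  A.
        lexicon = {'Mary': 'NP', 'sleeps': ('\\', 'NP', 'S')}    # NP\S: seeks an NP to its left

        def backward_apply(left, right):
            if isinstance(right, tuple) and right[0] == '\\' and right[1] == left:
                return right[2]
            return None

        print(backward_apply(lexicon['Mary'], lexicon['sleeps']))   # 'S' -- "Mary sleeps" is a sentence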


  • This course introduces the notion of unawareness, its formal representation in various logics, and its application to rational choice theory. Unawareness is a propositional attitude that an agent can have towards a proposition or formula when she does not entertain any beliefs about it, perhaps because she lacks the resources or concepts to properly represent it. The course first reviews and compares a number of logics for representing interactive unawareness of formulas (in particular syntactic and semantic approaches). In the middle part of the course, we briefly survey extensions to awareness of unawareness, dynamic unawareness and first-order logical languages. The course concludes with a concise exposition of reasoning about unawareness in games.

  • Discourse particles and modal adverbs form a borderline case between semantics and pragmatics and so can be a source of new insights in these areas. The use of particles has been connected with discourse relations and other coherence relations; with the relation between semantic content and discourse context, especially the mutual knowledge and common discourse goals of the discourse participants; with the expression of speaker beliefs, desires and intentions; and with the control of interpretations that go beyond semantic content, i.e. explicatures and implicatures. And, last but not least, particles and modal adverbs can contribute to expressive and other non-truth-conditional aspects of meaning.
    Speakers at the workshop (amongst others) address the question of what kinds of discourse representations and discourse entities we need to assume, given the semantic contribution of discourse particles and modal adverbs.

Course materials for the European Summer School in Logic, Language and Information 2011, which takes place in Ljubljana, Slovenia, from August 1 to 12, 2011.
