
Dagstuhl Seminar 02041

The Logic of Rational Agency

( Jan 20 – Jan 25, 2002 )

The notion of a rational agent is one that has found currency in many disciplines, most notably economics, philosophy, cognitive science, biology, the social sciences and, most recently, computer science and artificial intelligence. Crudely, a rational agent is an entity that is capable of acting on its environment, and which chooses to act in such a way as to further its own interests. There is much research activity in the formal foundations of such agents and multi-agent systems. Many mathematical approaches to developing theories of rational agency have been developed, including decision theory, game theory, and mathematical logic. In this seminar, we focussed on logical approaches to rational agency.

There are three aspects to the study of logical approaches to rational agency:

  1. Philosophy
  2. Logical foundations
  3. Application

The first aspect is concerned with the primarily philosophical questions of what rational agency is and how we might go about characterising it. Within the artificial intelligence community, one approach in particular has come to dominate -- the view of rational agents as practical reasoners, continually making decisions about what actions to perform in the furtherance of their intentions and desires. This view of rational agents is largely seen as going hand-in-hand with the view of agents as intentional systems -- systems that may best be characterised in terms of mentalistic notions such as beliefs and desires.

The logical foundations aspect of the study is concerned with the extent to which these aspects of agents (practical reasoning and mentalistic notions such as beliefs and intentions) can be captured within a logical framework of some kind. There are many well-documented difficulties with using classical (first-order) logic to express these aspects of agency, and so a key component of the logical aspect is finding an appropriate logical framework within which to express an agent's (different kinds of) beliefs, goals, plans, and intentions, and how its actions can affect them over time. Although much has been done on modelling such attitudes in isolation, it is still not clear how easy it is to combine several of them into one framework, let alone if one changes the perspective to multi-agent systems. From a technical point of view, the logics of choice for expressing these aspects are extremely complex, combining temporal, modal, and dynamic aspects in a single framework. The theoretical and meta-logical properties of such logics (computational complexity, expressive power, completeness results, theorem proving techniques) are not well understood.
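As a standard textbook illustration of the kind of framework involved (not specific to any particular contribution at the seminar), belief is commonly modelled as a normal modal operator with possible-worlds semantics:

```latex
% Possible-worlds (Kripke) semantics for agent i's belief operator B_i:
% B_i \varphi holds at world w iff \varphi holds at every world that
% agent i considers possible from w (via the accessibility relation R_i).
M, w \models B_i \varphi
  \quad\text{iff}\quad
  \forall w' \bigl( w \, R_i \, w' \;\Rightarrow\; M, w' \models \varphi \bigr)
```

Requiring each $R_i$ to be serial, transitive, and Euclidean yields the familiar KD45 logic of consistent, introspective belief; the combination difficulties mentioned above arise when such operators are joined with temporal and dynamic modalities in a single system.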

Finally, the application aspect is concerned with how we might apply logical theories of agency in the construction of automated agents. Logical theories of agency can be used as (1) a specification language, (2) a programming language, and (3) a verification language. Viewed as a specification language, a logic of rational agency can be used to specify the desirable properties of a system that is to be built. The development of formal methods for specifying the desirable properties of computer systems is a major ongoing area of research activity in computer science, and the view of computer systems as rational agents brings a new dimension to this study. Executable logics have also been a major research topic in computer science, with the programming language PROLOG being perhaps the best-known example of an executable logic framework. While the kinds of logics used in the development of agent theory are typically much more complex than those which underpin languages such as PROLOG, there is nevertheless some potential for developing executable fragments of agent logics. Finally, an interesting issue is the extent to which a computer system can formally be shown to embody some theory of agency. It is an as yet open question how we might go about attributing attitudes such as beliefs, desires, and the like to computer programs. Verifying that a system implements some theory of agency is thus a major research issue.
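The "executable logic" idea mentioned above can be illustrated with a minimal sketch: forward chaining over propositional Horn clauses, the fragment underpinning PROLOG-style execution. The agent-flavoured atom and rule names below are invented for illustration only.

```python
# Forward chaining over propositional Horn clauses: a toy illustration
# of directly executing a logical specification. Atoms are strings;
# each rule is a pair (set_of_premises, conclusion).

def forward_chain(facts, rules):
    """Return the closure of `facts` under `rules`."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule when all its premises are derived
            # and its conclusion is new.
            if conclusion not in derived and premises <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Invented example: a crude practical-reasoning step.
rules = [
    ({"believes_raining", "desires_dry"}, "intends_umbrella"),
    ({"intends_umbrella"}, "does_take_umbrella"),
]
facts = {"believes_raining", "desires_dry"}
print(sorted(forward_chain(facts, rules)))
# ['believes_raining', 'desires_dry', 'does_take_umbrella', 'intends_umbrella']
```

Real agent logics are, as the text notes, far richer than this Horn fragment; the sketch only shows what "execution" of a logical theory means in the simplest case.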

The structure of the seminar reflected the discussion above:

  1. Philosophical foundations
    What is rational agency? What are the right primitives (beliefs, desires, etc.) for modelling rational agents? How do these primitives relate to one another?
  2. Logical foundations
    What are the alternatives (e.g., classical logics, modal logics, first-order meta-logics, dynamic/action logics, deontic logics, temporal logics, ...) for modelling the primitive components of rational agency? What are appropriate semantic frameworks for these logics (Tarskian model-theoretic semantics? Kripke semantics? computationally grounded Kripke semantics? other approaches?) What are the relative advantages of these different frameworks? How do we combine these primitives into a single logic? What are the theoretical properties (expressive power, completeness, decidability/undecidability, computational complexity, proof procedures) of these combined logics? How do we use these logics to capture macro (non-atomic) aspects of rational agency, such as decision making (games, distributed utilities, ...), communication, perception, collective action?
  3. Application
    How can we use agent logics in the specification of agent systems? How can we manipulate or otherwise refine these specifications to generate implementations? Can we directly execute these logics, and if so how? How do we verify that implemented systems satisfy some theory of agency (deductive approaches, model checking, ...)?
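The model-checking approach to verification listed above can be sketched in a few lines: given an explicit Kripke structure, checking whether an agent believes a proposition at a world reduces to inspecting the accessible worlds. The worlds, accessibility relation, and valuation below are invented for illustration.

```python
# Checking B_i p on an explicit Kripke structure: a toy instance of the
# model-checking approach to verifying belief properties. All model
# components here are invented for illustration.

def holds_belief(access, valuation, agent, world, prop):
    """M, world |= B_agent prop  iff  prop holds in every world
    accessible to `agent` from `world`."""
    successors = [w2 for (a, w1, w2) in access if a == agent and w1 == world]
    return all(prop in valuation[w2] for w2 in successors)

# Agent i considers w1 and w2 possible from w0; p holds in both.
access = [("i", "w0", "w1"), ("i", "w0", "w2")]
valuation = {"w0": set(), "w1": {"p"}, "w2": {"p"}}

print(holds_belief(access, valuation, "i", "w0", "p"))  # True
print(holds_belief(access, valuation, "i", "w0", "q"))  # False
```

Practical model checkers for agent logics must of course handle temporal operators and state spaces given implicitly by a program, which is where the complexity issues discussed earlier bite; the sketch only shows the underlying semantic check.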

  • John Bell (Queen Mary University of London, GB)
  • Massimo Benerecetti (University of Naples, IT)
  • Julian Bradfield (University of Edinburgh, GB) [dblp]
  • Mehdi Dastani (Utrecht University, NL) [dblp]
  • Boudewijn de Bruin (University of Amsterdam, NL)
  • Frank Dignum (Utrecht University, NL) [dblp]
  • Jürgen Dix (TU Clausthal, DE) [dblp]
  • Clare Dixon (University of Liverpool, GB) [dblp]
  • Maria Fasli (University of Essex, GB) [dblp]
  • Klaus Fischer (DFKI - Saarbrücken, DE) [dblp]
  • Dov M. Gabbay (King's College London, GB) [dblp]
  • Barbara Grosz (Harvard University, US)
  • Paul Harrenstein (Utrecht University, NL) [dblp]
  • Benjamin J. Hirsch (University of Liverpool, GB)
  • Luke Hunsberger (Harvard University, US)
  • Ullrich Hustadt (University of Liverpool, GB) [dblp]
  • David Israel (SRI - Menlo Park, US)
  • Gerhard Lakemeyer (RWTH Aachen, DE) [dblp]
  • Yves Lesperance (York University - Toronto, CA) [dblp]
  • Alessio R. Lomuscio (University College London, GB) [dblp]
  • John-Jules Ch. Meyer (Utrecht University, NL) [dblp]
  • Claudia Nalon (University of Liverpool, GB) [dblp]
  • Marc Pauly (Paul Sabatier University - Toulouse, FR)
  • V.S. Subrahmanian (University of Maryland - College Park, US)
  • Balder Ten Cate (University of Amsterdam, NL) [dblp]
  • Johan van Benthem (University of Amsterdam, NL) [dblp]
  • Wiebe van der Hoek (University of Liverpool, GB) [dblp]
  • Ron van der Meyden (UNSW - Sydney, AU) [dblp]
  • Leon van der Torre (CWI - Amsterdam, NL) [dblp]
  • Emil Weydert (University of Luxembourg, LU) [dblp]
  • John H. Woods (University of Lethbridge, CA)
  • Michael J. Wooldridge (University of Liverpool, GB) [dblp]