In natural language processing (NLP) there is an increasing interest in formal models for processing graphs rather than more restricted structures such as strings or trees. Classic feature structures can be seen as graphs. Recent work in dependency parsing produces graphs rather than trees. New work in deep semantic annotation organizes logical meanings into directed graph structures, and several efforts are now being made that in the near future will yield large amounts of linguistic data annotated with these representations. Graph transformation (graph-to-graph, graph-to-tree, graph-to-string, and vice versa) is thus becoming central to our thinking about natural language understanding, natural language generation, and machine translation.
Formal models of graph transformation have previously been studied and applied in various other areas of computer science, including formal language theory, term rewriting, theory and implementation of programming languages, concurrent processes, and software engineering. Generally speaking, the theory of graph transformation systems studies rule-based mechanisms for the manipulation of graphs. These mechanisms can be used to generate graphs (by means of graph grammars), to analyze them (by means of graph reduction systems), and to compute input–output relations (by means of general graph transformation systems). A particularly well-studied subject within the area of graph transformation, and one that has received quite some attention recently within the NLP community, is that of context-free graph grammars.
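The rule-based mechanisms mentioned above can be illustrated with a minimal sketch, entirely ad hoc in its names and conventions: graphs are edge lists `(src, label, tgt)`, and a single rewrite rule replaces every edge labelled "old" by a two-edge path through a fresh node.

```python
# Minimal, illustrative sketch of rule-based graph manipulation.
# Graphs are lists of labelled edges (src, label, tgt); the rule
# rewrites src -old-> tgt into src -a-> fresh -b-> tgt.

def apply_rule(edges):
    """Apply the rewrite rule to every matching edge of the graph."""
    result, fresh = [], 0
    for src, label, tgt in edges:
        if label == "old":
            mid = f"m{fresh}"          # fresh node introduced by the rule
            fresh += 1
            result.append((src, "a", mid))
            result.append((mid, "b", tgt))
        else:
            result.append((src, label, tgt))
    return result

g = [("u", "old", "v"), ("v", "keep", "w")]
print(apply_rule(g))
# [('u', 'a', 'm0'), ('m0', 'b', 'v'), ('v', 'keep', 'w')]
```

Real graph transformation systems are of course far richer (subgraph matching, gluing conditions, application strategies); this only shows the basic idea of matching and replacing local structure.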
Few researchers from NLP are familiar with the theory of graph transformation, and at the same time, few researchers from the theory of graph transformation are aware of the specific desiderata, possibilities and challenges that one faces when applying this theory to NLP problems. This Dagstuhl Seminar will therefore bring researchers from the two areas together. The goal is to assess the state of the art, identify areas of common interest, and pave the way for future collaborations. We expect the seminar to create a new interdisciplinary research community, and its proceedings to become a standard reference for future work on graph transformation in NLP.
The specific research questions to be addressed include:
- What linguistic data is modeled using graphs, and what computations on this data are required for practical NLP applications?
- What formal models of graph transformation are available, and what are their properties with respect to expressiveness and algorithmic complexity?
- How well do the models meet other desiderata such as determinization, compositionality, incrementality, and learnability?
- How can formal models of graph transformation be combined with the standard machine learning techniques that are being used in NLP?
- What implementations of graph transformation frameworks exist, and how can they be used for applications within natural language processing?
Strings are fundamental data structures in natural language processing (NLP). Weighted finite-state string acceptors and transducers, first introduced as theoretical constructs, have proven their worth in speech recognition, part-of-speech tagging, transliteration, and many other applications. The string automaton framework provides efficient generic algorithms for composition, bidirectional application, k-best extraction, determinization, minimization, parameter tuning, etc. These algorithms have been packaged in software toolkits that form the core of many state-of-the-art systems.
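As a toy illustration of the weighted acceptors mentioned above, the following sketch scores a string under a deterministic weighted finite-state acceptor in the probability semiring, multiplying transition weights along the unique path. The transition table and weights are made up; production toolkits (e.g. OpenFst) handle nondeterminism, general semirings, and all the algorithms listed above.

```python
# Minimal sketch of a deterministic weighted finite-state string acceptor.
# Transitions map (state, symbol) -> (next_state, weight); weights are
# multiplied along the path (probability semiring). Illustrative only.

def accept(transitions, start, finals, symbols):
    """Return the weight of the path for `symbols`, or 0.0 if rejected."""
    state, weight = start, 1.0
    for sym in symbols:
        if (state, sym) not in transitions:
            return 0.0                  # no transition: string rejected
        state, w = transitions[(state, sym)]
        weight *= w
    return weight if state in finals else 0.0

# Toy acceptor for the language a*b, with made-up weights.
T = {
    (0, "a"): (0, 0.5),
    (0, "b"): (1, 1.0),
}

print(accept(T, 0, {1}, ["a", "a", "b"]))  # 0.25
print(accept(T, 0, {1}, ["b", "a"]))       # 0.0
```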
Tree automata go further in permitting large-scale, syntactically-motivated re-ordering of subtrees. They were originally devised to help formalize Chomsky's linguistic theories, but their subsequent development was largely disconnected from NLP practice. In 2005, tree automata theorists and machine translation (MT) practitioners began working together to come up with a new kind of statistical MT system based on tree automata. This led to some of the best practical results in common evaluations of MT quality, and syntactic methods are now used in industrial MT systems. This work at the intersection of tree automata and NLP created vibrant new research directions for both areas.
Nowadays, graphs are becoming a fundamental data structure in practical NLP, even more general than strings and trees. Classic feature structures can be seen as rooted, directed, edge- and leaf-labeled graphs. Recent work in dependency parsing produces graphs rather than trees. New work in deep semantic annotation organizes logical meanings into directed graph structures, and several efforts are now being made that in the near future will yield large amounts of linguistic data annotated with these representations. Formal models of graph transformation are therefore of fundamental importance for the development of practical systems for these tasks. The situation is familiar: there exists a formal theory of graph transformation, but this theory is largely disconnected from research and practice in NLP.
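To make the feature-structure view concrete, here is a small sketch (with ad hoc node and feature names) of a feature structure as a rooted, directed graph: edges carry feature names, leaves carry atomic values, and reentrancy (structure sharing) is simply two edges pointing to the same node.

```python
# Sketch: a feature structure as a rooted, directed, edge- and leaf-
# labeled graph. Node ids and feature names are invented for illustration.

graph = {
    "n0": {"SUBJ": "n1", "AGR": "n2"},    # root node
    "n1": {"AGR": "n2"},                  # subject shares its AGR node (reentrancy)
    "n2": {"NUM": "sg", "PERS": "3"},     # atomic values label the leaves
}

def resolve(graph, node, path):
    """Follow a feature path from `node`; return the node or value reached."""
    for feat in path:
        node = graph[node][feat]
    return node

# The root's AGR and the subject's AGR are literally the same node:
print(resolve(graph, "n0", ["AGR"]) == resolve(graph, "n0", ["SUBJ", "AGR"]))  # True
```

The shared node is exactly what distinguishes the graph view from a tree: unifying information at one occurrence of `AGR` is automatically visible at the other.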
The theory of graph transformation studies rule-based mechanisms for the manipulation of graphs. A particularly well-studied subject within the area of graph transformation, and one that has received quite some attention recently within the NLP community, is that of context-free graph grammars. These grammars have many nice properties in common with context-free phrase structure grammars, but are considerably more powerful and versatile; in particular, they can be used to generate context-sensitive string languages (when strings are represented as chain graphs). The price of this expressiveness is a higher computational complexity; in particular, there are context-free graph languages for which parsing is NP-complete. This has triggered research on specialized, more efficient algorithms for restricted classes of graphs. A well-known result in this area is that many problems on graphs that are intractable in general become solvable in polynomial time when restricted to graphs of bounded tree-width.
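The chain-graph encoding mentioned above is straightforward; a minimal sketch (with an ad hoc edge representation) encodes a string as a path graph whose edges carry the symbols, which is the representation under which context-free graph grammars can capture context-sensitive string languages such as {aⁿbⁿcⁿ}.

```python
# Illustrative sketch of the chain-graph encoding of strings.
# Edges are triples (src, symbol, tgt) over nodes 0..len(s).

def string_to_chain_graph(s):
    """Encode a string as a path graph with symbol-labeled edges."""
    return [(i, sym, i + 1) for i, sym in enumerate(s)]

def chain_graph_to_string(edges):
    """Recover the string by following the edges from node 0."""
    out = {src: (sym, tgt) for src, sym, tgt in edges}
    node, symbols = 0, []
    while node in out:
        sym, node = out[node]
        symbols.append(sym)
    return "".join(symbols)

g = string_to_chain_graph("aabbcc")
print(g[0])                      # (0, 'a', 1)
print(chain_graph_to_string(g))  # aabbcc
```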
With the number of interesting applications and the amount of available data quickly increasing, there is a clear need for the NLP community to acquire knowledge about formal models of graph processing, as such models can greatly simplify practical systems by providing a uniform knowledge representation and efficient, generic algorithms for inference. Unfortunately, most NLP researchers are unaware of the rich literature on graph transformation, and even those who are aware of it find it hard to connect it to their own work. Conversely, few researchers in graph transformation are aware of the new applications of their research within natural language processing, the characteristic properties of the available data, the specific desiderata of these applications, and the research problems that they pose.
The overall goal of the seminar was to bring the various research communities together to assess the state of the art, identify areas of common interest, and pave the way for future collaborations. We believe that this goal was achieved to a very high degree, which will be a major factor in the creation of a new interdisciplinary research community.
Organization of the Seminar
The seminar was attended by 29 participants from 9 countries in North America, Europe, and Africa. It was held from March 15 to March 20, 2015. Since the intention was to foster the creation of a new research community, it was decided to organize the seminar in the form of a self-organized workshop with many informal discussion meetings on topics suggested by the participants themselves. To this end, the seminar roughly followed the format of Open Space Technology. This worked very well and gave rise to many insightful discussions. (See Section 5 for the list of topics discussed.)
- Daniel Bauer (Columbia University, US) [dblp]
- Suna Bensch (University of Umeå, SE) [dblp]
- Henrik Björklund (University of Umeå, SE) [dblp]
- Johanna Björklund (University of Umeå, SE) [dblp]
- David Chiang (University of Notre Dame, US) [dblp]
- Frank Drewes (University of Umeå, SE) [dblp]
- Petter Ericson (University of Umeå, SE) [dblp]
- Daniel Gildea (University of Rochester, US) [dblp]
- Karl Moritz Hermann (Google DeepMind - London, GB) [dblp]
- Berthold Hoffmann (Universität Bremen, DE) [dblp]
- Peter Jonsson (Linköping University, SE) [dblp]
- Laura Kallmeyer (Heinrich-Heine-Universität Düsseldorf, DE) [dblp]
- Kevin Knight (USC - Marina del Rey, US) [dblp]
- Alexander Koller (Universität Potsdam, DE) [dblp]
- Marco Kuhlmann (Linköping University, SE) [dblp]
- Adam Lopez (University of Edinburgh, GB) [dblp]
- Andreas Maletti (Universität Stuttgart, DE) [dblp]
- Jonathan May (USC - Marina del Rey, US) [dblp]
- Mark Minas (Universität der Bundeswehr - München, DE) [dblp]
- Joakim Nivre (Uppsala University, SE) [dblp]
- Stephan Oepen (University of Oslo, NO) [dblp]
- Detlef Plump (University of York, GB) [dblp]
- Giorgio Satta (University of Padova, IT) [dblp]
- Natalie Schluter (University of Copenhagen, DK) [dblp]
- Mark Steedman (University of Edinburgh, GB) [dblp]
- Christoph Teichmann (Universität Potsdam, DE)
- Brink van der Merwe (University of Stellenbosch, ZA) [dblp]
- Heiko Vogler (TU Dresden, DE) [dblp]
- Daniel Zeman (Charles University - Prague, CZ) [dblp]
- artificial intelligence / robotics
- data structures / algorithms / complexity
- automata theory
- graph transformation
- natural language processing